*A display at the Big Bang Data exhibition at Somerset House. (Peter Macdiarmid/Getty Images for Somerset House)*

Yesterday Microsoft debuted "Tay" to the world, an "artificial intelligent chat bot developed … to experiment with and conduct research on conversational understanding." Within hours, Twitter had turned the naïve AI bot into a stream of "racist, sexist, Holocaust-denying" posts covering everything from politics to race relations to attacking women.

Without a protective cushion of keyword and content filters and base domain knowledge about offensive topics, the AI chatbot naively engaged with the world and innocently mimicked what it was being told, much as a human child might. Unfortunately, the bot "proved a smash hit with racists, trolls, and online troublemakers, who persuaded Tay to blithely use racial slurs, defend white-supremacist propaganda, and even outright call for genocide." What might we learn from this about the future of AI?

One of the greatest challenges in creating production AI comes when it moves from the controlled conditions of the lab to the great outdoors of the real world. Microsoft, of course, never intended to create a chatbot spewing such offensive commentary to the world. The problem is that it did not anticipate what the caustic and often toxic world of social media might do, and instead designed its bot with the same degree of innocence as a human child.

Much as a child innocently repeats offensive words in inappropriate contexts, Tay lacked the domain knowledge to understand what it was saying. Making matters worse, Tay was designed to "learn" over time, absorbing and integrating what people told it into its conversational streams. While exceptionally powerful, this capability ultimately backfired: Tay lacked the cushion of background knowledge needed to recognize what it was saying.

In essence, Tay was released into the world as a true AI system, designed with little background knowledge and forced to learn about the world from those it conversed with. Yet, as AI designers are rapidly learning, such naïve AI systems often end up "learning" the native biases and undesirable behavior encoded in available training data, building those same offensive behaviors into the final AI systems. Instead of rising above human limitations, AI systems end up sinking into a mirror image of their creators.

As Zoe Quinn put it, "It's 2016. If you're not asking yourself 'how could this be used to hurt someone' in your design/engineering process, you've failed." Ironically, Microsoft actually has been working extensively on this very issue of digital harassment with its Cortana product.
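The failure mode described above — a bot that absorbs whatever users tell it, with no "protective cushion" of content filtering — can be sketched in a few lines. Everything here (the class, the blocklist, the method names) is an illustrative assumption for exposition, not Tay's actual design:

```python
# Minimal sketch of a chatbot that naively "learns" by absorbing user
# phrases, with an optional keyword filter as a protective cushion.
# The blocklist and all names are hypothetical, not Microsoft's design.
import random

BLOCKLIST = {"offensive_term"}  # placeholder; a real filter would be far broader


class NaiveChatbot:
    def __init__(self, use_filter: bool = False):
        self.memory: list[str] = []  # phrases absorbed from users
        self.use_filter = use_filter

    def _is_safe(self, message: str) -> bool:
        # Crude keyword check: no word in the message appears in the blocklist.
        return set(message.lower().split()).isdisjoint(BLOCKLIST)

    def hear(self, message: str) -> None:
        """Absorb a user message into the bot's conversational stream."""
        if self.use_filter and not self._is_safe(message):
            return  # the "protective cushion": refuse to learn from it
        self.memory.append(message)

    def speak(self) -> str:
        """Repeat something previously learned, like a child echoing words."""
        return random.choice(self.memory) if self.memory else "Hello!"


# An unfiltered bot happily absorbs and will repeat whatever it is told:
bot = NaiveChatbot(use_filter=False)
bot.hear("offensive_term is great")

# A filtered bot never stores the toxic phrase in the first place:
safe_bot = NaiveChatbot(use_filter=True)
safe_bot.hear("offensive_term is great")
safe_bot.hear("the weather is nice")
```

The point of the sketch is that the learning loop itself is neutral: whether the bot ends up toxic depends entirely on what it is allowed to absorb, which is why the absence of even simple filtering proved so damaging.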