It sounds like Microsoft’s Tay chatbot is getting a time-out, as Microsoft instructs her on how to talk with strangers on the Internet. Because, as the company quickly learned, the citizens of the Internet can’t be trusted with that task.
In a statement released Thursday, Microsoft said that a “coordinated effort” by Internet users had turned the Tay chatbot into a tool of “abuse.” It was a clear reference to a series of racist and otherwise offensive messages that the Tay chatbot issued within a day of debuting on Twitter. Wednesday morning, Tay was a novel experiment in AI that would learn natural language through social engagement. By Wednesday evening, Tay was reflecting the more unsavory aspects of life online.
Microsoft says it’s making “adjustments” to Tay chatbot after Internet “abuse”