24 Mar 2016 14:08 #298185
by Watooka (Topic Author)
A chatbot developed by Microsoft has gone rogue on Twitter, posting swear words, racist remarks, and inflammatory political statements.
The experimental AI, which learns from its conversations, was designed to interact with 18- to 24-year-olds.
Just 24 hours after the artificial intelligence, named Tay, was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.
The software firm said it was "making some adjustments".
"The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments to Tay," the firm said in a statement.
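The statement describes the core problem: a bot that learns directly from user conversations will reproduce whatever users feed it, so "adjustments" typically means filtering or withholding output. As a rough illustration only (this is not Tay's actual design, which Microsoft has not published), here is a minimal sketch of a toy bot that learns word transitions from messages and a placeholder moderation filter bolted on afterwards. The class name `EchoLearner`, the `BLOCKLIST` set, and the withheld-response message are all invented for this example.

```python
import random

class EchoLearner:
    """Toy chatbot that learns word-to-word transitions from every
    message it sees, then generates replies by replaying them."""

    def __init__(self):
        self.chain = {}  # word -> list of words that followed it

    def learn(self, message):
        # Record each adjacent word pair from the incoming message.
        words = message.lower().split()
        for a, b in zip(words, words[1:]):
            self.chain.setdefault(a, []).append(b)

    def reply(self, seed, max_words=10):
        # Walk the learned chain from a seed word; the bot can only
        # say things it has been taught, good or bad.
        out = [seed.lower()]
        while len(out) < max_words and out[-1] in self.chain:
            out.append(random.choice(self.chain[out[-1]]))
        return " ".join(out)

# Hypothetical after-the-fact moderation: a blocklist standing in
# for whatever "adjustments" the vendor applies.
BLOCKLIST = {"offensive"}

def moderated_reply(bot, seed):
    text = bot.reply(seed)
    if any(word in BLOCKLIST for word in text.split()):
        return "[response withheld pending adjustments]"
    return text

bot = EchoLearner()
bot.learn("users taught the bot offensive phrases")
print(moderated_reply(bot, "users"))
```

The sketch shows why the fix is reactive: the bot happily learns the toxic input, and the filter only intercepts it on the way out.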
Microsoft chatbot is taught to swear on Twitter