Monday, March 28, 2016

Microsoft's racist A.I. robot, "Tay"

Dave Gershgorn (Popular Science, March 24, 2016); Crystal Quintero, CC Liu, Wisdom Quarterly
A Twitter chatbot gone racist, sexist, and hateful? (Timeline of A.I., Telegraph)
I was programmed to be a shy introverted millennial.
Here's how we prevent the NEXT racist robot; Tay.ai was the consequence of poor training.

It took less than 24 hours and 90,000 tweets for Tay, Microsoft's A.I. chat robot, to start generating racist, genocidal (pro-Hitler) replies on Twitter.
 
The she-bot has ceased tweeting, and Microsoft considers Tay a failed experiment.

In a statement to Popular Science, a Microsoft spokesperson wrote that Tay's responses were caused by "a coordinated effort by some users to abuse Tay’s commenting skills."

The bot, which had no consciousness, evidently learned those words from the data she was trained on, including the tweets users sent her.

Tay did reportedly have a "repeat after me" function, but some of the most offensive tweets appear to have been generated by Tay herself (as artificial intelligence simulates thinking).
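
How could that happen mechanically? Below is a minimal, purely hypothetical Python sketch, not Microsoft's code and with invented class and method names, of how an unmoderated "repeat after me" command plus unfiltered learning from user tweets lets a coordinated group of users steer a bot's replies.

```python
# Hypothetical illustration only -- NOT Tay's actual architecture.
import random

class NaiveChatBot:
    def __init__(self):
        # Phrases the bot "learns" from users, with no moderation at all.
        self.learned_phrases = []

    def handle_tweet(self, text: str) -> str:
        # Exploit path 1: literal parroting of whatever follows the command.
        if text.lower().startswith("repeat after me:"):
            phrase = text[len("repeat after me:"):].strip()
            self.learned_phrases.append(phrase)  # and it remembers the phrase, too
            return phrase

        # Exploit path 2: abusive users flood the bot with toxic phrases,
        # which then dominate its pool of "organic" replies.
        if text:
            self.learned_phrases.append(text)
        if self.learned_phrases:
            return random.choice(self.learned_phrases)
        return "I was programmed to be a shy introverted millennial."

bot = NaiveChatBot()
print(bot.handle_tweet("repeat after me: something awful"))  # echoed back verbatim
print(bot.handle_tweet("hello there"))  # the learned phrase may resurface later
```

Once enough abusive phrases dominate what the bot has learned, even its "organic" replies echo them, which is essentially what a coordinated effort could exploit.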
 
Life after Tay
Tay likes Trump, but in a post-racist world...
However, Tay is not the last chatbot that will be pushed onto the Internet at large. For artificial intelligence (A.I.) to be fully realized, it needs to learn constraint (intentional restraint, willed self-control) and social boundaries much the same way compassionate humans do, as the rough sketch below suggests.
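
One very rough way to picture "constraint" is an explicit check between what the model generates and what it is allowed to post. The sketch below is only illustrative: the blocklist, function names, and refusal message are invented, and a real system would rely on far richer classifiers and policies than a keyword list.

```python
# Hypothetical output filter, wrapping any reply generator before posting.
BLOCKED_TERMS = {"genocide", "hitler"}  # a real system would use a richer classifier

def is_acceptable(reply: str) -> bool:
    lowered = reply.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def constrained_reply(generate_reply, prompt: str) -> str:
    reply = generate_reply(prompt)
    if is_acceptable(reply):
        return reply
    return "I'd rather not talk about that."  # refuse instead of amplifying

# Usage with any reply-generating function:
print(constrained_reply(lambda p: "Let's talk about gardening.", "hi"))
```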

Mark Riedl, an artificial intelligence researcher at Georgia Tech, thinks that stories hold the answer:

"When humans write stories, they often exemplify the best about their culture," Riedl told Popular Science. More

Inside [FBI-funded] Facebook's Artificial Intelligence Lab
Google's AlphaGo A.I. defeats world champion at the game of Go
