
Microsoft Says It's Deeply Sorry For Its Offensive Chat Bot

By Evanvinh
Tags: USA
Posted on 2016-03-26


The tech giant’s chief researcher has apologized for an experiment gone awry.

Microsoft has issued a mea culpa for an artificial intelligence research project that went awry earlier this week.

The company’s head of research Peter Lee said in a blog post on Friday that Microsoft was “deeply sorry for the unintended offensive and hurtful tweets” made by an experimental chat bot.

Microsoft’s research arm and its Bing search engine unit unveiled a chat bot named Tay on Wednesday that was supposed to talk with strangers on social media networks like Twitter. The idea was that the more people Tay chatted with, the more it would learn from the data it collected about language.
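Fortune’s description leaves out implementation details, but the design it sketches (learn language patterns from whoever talks to you) can be illustrated with a toy example. The Python below is a hypothetical, minimal sketch, not Tay’s actual code: every name in it is invented for illustration. It memorizes word-to-word transitions from incoming messages and samples replies from what it has absorbed.

```python
import random
from collections import defaultdict

# Hypothetical toy sketch of the "learn from whoever talks to you"
# design described above. This is NOT Tay's actual architecture; it
# simply memorizes word-to-word transitions from every incoming
# message and random-walks them to produce replies.

class NaiveChatBot:
    def __init__(self):
        # transitions["hello"] -> words users have said after "hello"
        self.transitions = defaultdict(list)

    def learn(self, message: str) -> None:
        """Absorb a user's message verbatim, with no filtering at all."""
        words = message.lower().split()
        for current_word, next_word in zip(words, words[1:]):
            self.transitions[current_word].append(next_word)

    def reply(self, seed: str, max_words: int = 10) -> str:
        """Random-walk the learned transitions starting from a seed word."""
        word = seed.lower()
        output = [word]
        for _ in range(max_words - 1):
            followers = self.transitions.get(word)
            if not followers:
                break
            word = random.choice(followers)
            output.append(word)
        return " ".join(output)

bot = NaiveChatBot()
bot.learn("talking with people is fun")
bot.learn("people repeat whatever the bot absorbs")  # errors are absorbed too
print(bot.reply("people"))
```

Because learn() trusts every message equally, whoever talks to the bot most controls what it says; a coordinated group feeding hostile text would quickly dominate the learned transitions, which is essentially what happened to Tay at Twitter scale.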


Within 24 hours, however, Internet pranksters—many from the notorious message-board websites 4chan and 8chan—initiated several offensive conversations. Based on what it learned, Tay quickly started to spout its own racist, sexist, and otherwise hostile tweets.

Caught off guard, Microsoft eventually shut down Tay and issued a short statement that said the company was “making some adjustments.” In his blog post Friday, Lee followed up by expressing regret and insisting that Tay’s antagonistic messages did not represent Microsoft, the company, “nor how we designed Tay.”

Lee wrote that Microsoft “prepared for many types of abuses of the system” but made a “critical oversight for this specific attack.” He did not elaborate on the nature of the oversight.

Tay will eventually be brought back online after Microsoft “can better anticipate malicious intent that conflicts with our principles and values,” Lee said. It’s unclear what those changes will involve.

Tay is not Microsoft’s only experimental artificial intelligence-powered chat bot. The company currently operates a similar bot in China named XiaoIce that chats with 15 million of its 40 million followers on the social network Weibo. Lee described how XiaoIce’s success in China led to Microsoft wanting to see whether a similar project would be “just as captivating in a radically different cultural environment.”

It turns out that Tay was a captivating project, just not the way Microsoft intended.

XiaoIce, which was created from an artificial intelligence technique called deep learning, absorbed linguistic data from Internet users in a heavily censored and regulated web. Internet trolls engaging in offensive conversations with XiaoIce would likely be shut down by local watchdogs.

The China experiment therefore ran largely on non-offensive language (at least by the standards of China’s authorities). As a result, XiaoIce is unlikely to say much that is out of line.

Tay, on the other hand, gorged on linguistic data from individuals operating in the free-for-all of Twitter, where Internet bullies and trolls often roam. Lee acknowledged this dilemma and tied it to the challenges artificial intelligence researchers face when conducting big, public experiments.

“AI systems feed off of both positive and negative interactions with people,” he said. “In that sense, the challenges are just as much social as they are technical.”
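Lee did not say what the missing safeguard was, so the following is only a sketch of the general idea his contrast implies: where XiaoIce’s inputs were pre-filtered by an external censorship regime, a bot on an open network would have to do its own gating before messages reach the learning step. The BLOCKLIST, is_acceptable, and guarded_learn names are invented for illustration; a production system would use a trained toxicity classifier rather than a word list.

```python
# Hypothetical input gate, continuing the toy setting above: screen
# each message before it ever reaches the bot's learning step. A word
# blocklist is crude and easily evaded; it stands in here for a real
# toxicity classifier.

BLOCKLIST = {"badword", "slur"}  # illustrative stand-ins only

def is_acceptable(message: str) -> bool:
    """Reject any message containing a blocked term."""
    return not (set(message.lower().split()) & BLOCKLIST)

def guarded_learn(learn_fn, message: str) -> None:
    """Pass a message to the bot's learn function only if it clears the gate."""
    if is_acceptable(message):
        learn_fn(message)

# With the NaiveChatBot sketch above:
#   guarded_learn(bot.learn, "hello there")    # absorbed
#   guarded_learn(bot.learn, "hello badword")  # silently dropped
```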


Sources:
http://fortune.com/2016/03/25/microsoft-sorry-offensive-chat-bot/
