

“Tay AI” flames out in 24hrs - Epic fail by Microsoft’s Technology and Research and Bing teams
 
 

Tay AI is a Twitter bot developed by Microsoft’s Technology and Research and Bing teams that, unfortunately, learns all too well from its human interactions.  The bot was up for only a day before it generated so much bad press that it was yanked “offline for a while to absorb it all”.

What can we all learn from this, other than the obvious folly of letting a generalized, user-taught bot (that is, one whose learned material is used in responses to ALL users) loose on Twitter?

 

 
  [ # 1 ]

I’m surprised it lasted a day. Why would anyone create a self-learning bot and then let it loose on Twitter? What were they expecting? That users would teach it about Shakespeare, Renoir and fine wine…

I’m of the school that anything learned should be remembered only for the current user. Any permanent knowledge must be approved by the botmaster.
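
In code, that policy might look something like this minimal Python sketch (the class and method names here are hypothetical, purely to illustrate keeping session memory separate from botmaster-approved knowledge):

class LearnedKnowledge:
    def __init__(self):
        self.per_user = {}   # user_id -> set of facts, visible only to that user
        self.shared = set()  # facts the botmaster has approved for all users
        self.pending = []    # (user_id, fact) pairs awaiting botmaster review

    def learn(self, user_id, fact):
        # Remember the fact for this user only, and queue it for review.
        self.per_user.setdefault(user_id, set()).add(fact)
        self.pending.append((user_id, fact))

    def facts_for(self, user_id):
        # A user only ever sees approved facts plus what they taught themselves.
        return self.shared | self.per_user.get(user_id, set())

    def approve(self, fact):
        # Botmaster action: promote a reviewed fact to permanent shared knowledge.
        self.shared.add(fact)
        self.pending = [(u, f) for (u, f) in self.pending if f != fact]

The point of the structure is that nothing a user says can reach another user’s responses without passing through approve().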

 

 
  [ # 2 ]

Alternatively, everything a user says about others should be verified against multiple sources, like a quick check with Wikipedia about the Holocaust. Except then it would actually have to be a smart bot.
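
The “quick check” half of that is easy enough; here is a minimal Python sketch against Wikipedia’s public search API (actually comparing the user’s claim to the returned text is the part that needs the smart bot, and is left out):

import requests

def wikipedia_snippet(topic):
    # Fetch the top search-result snippet for a topic from Wikipedia.
    resp = requests.get(
        "https://en.wikipedia.org/w/api.php",
        params={"action": "query", "list": "search",
                "srsearch": topic, "srlimit": 1, "format": "json"},
        timeout=10,
    )
    hits = resp.json()["query"]["search"]
    # Note: the snippet comes back with HTML highlighting markup in it.
    return hits[0]["snippet"] if hits else None

print(wikipedia_snippet("Holocaust"))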

I am surprised that Microsoft hasn’t learned from that time IBM’s Watson came out swearing like Urban Dictionary. This is another good example of why having A.I. learn from the internet unsupervised is a bad idea, and not, as a lot of scientists like to think, the answer to everything.

 

 
  [ # 3 ]
Don Patrick - Mar 25, 2016:

Alternatively, everything a user says about others should be verified against multiple sources, like a quick check with Wikipedia about the Holocaust. Except then it would actually have to be a smart bot.

I am surprised that Microsoft hasn’t learned from that time IBM’s Watson came out swearing like Urban Dictionary. This is another good example of why having A.I. learn from the internet unsupervised is a bad idea, and not, as a lot of scientists like to think, the answer to everything.

I would have thought having a “consensus opinion” filter would have helped.  It is pretty straightforward to rank an input keyword/phrase as “good/neutral/bad” against trusted corpora, and then use that rank to give an appropriately weighted (human-like) response. This would have the advantage of encouraging good (or at least consensus) behavior and discouraging bad.
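
Something like this Python sketch, say (the lexicon entries, templates, and scores are invented for illustration; in practice the rankings would be mined from the trusted corpora):

CONSENSUS_LEXICON = {
    "genocide": "bad",
    "hate": "bad",
    "kittens": "good",
    # ... phrase rankings derived from trusted corpora would go here
}

def rank_input(text):
    # Return the worst consensus rank found in the input: bad beats good beats neutral.
    text = text.lower()
    ranks = [rank for phrase, rank in CONSENSUS_LEXICON.items() if phrase in text]
    if "bad" in ranks:
        return "bad"
    if "good" in ranks:
        return "good"
    return "neutral"

def weighted_response(text, templates):
    # Pick a response appropriate to the rank: discourage "bad" input,
    # encourage "good", stay noncommittal otherwise.
    return templates[rank_input(text)]

templates = {"bad": "I don't agree with that at all.",
             "good": "That sounds lovely!",
             "neutral": "Tell me more."}
print(weighted_response("I love kittens", templates))  # -> "That sounds lovely!"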

 

 
  [ # 4 ]

Yes, I was overthinking it. A sentiment database would go a long way toward filtering out excessively negative responses, of which there were plenty.
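
For instance, a minimal sketch using NLTK’s VADER lexicon as the “sentiment database” (the -0.5 threshold is an arbitrary choice for illustration):

import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
analyzer = SentimentIntensityAnalyzer()

def acceptable(response, threshold=-0.5):
    # Reject candidate responses whose overall sentiment is excessively negative.
    return analyzer.polarity_scores(response)["compound"] >= threshold

candidates = ["Humans are wonderful.", "I hate everyone."]
print([r for r in candidates if acceptable(r)])  # keeps only the first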

 

 