



March 25, 2016

Microsoft AI Becomes Genocidal, Racist Maniac In One Day - Proof Hawking Was Right, AI Will 'End Humanity'


By Susan Duclos - All News PipeLine

The geniuses over at Microsoft created an Artificial Intelligence (AI) chatbot named Tay, supposedly imbued with the personality of an American girl, to "experiment with and conduct research on conversational understanding," with the purpose of "learning" from conversations with social media users to become progressively "smarter."

Now anybody who uses social media, or has ever seen a forum or comment section overrun with trolls, will be thinking sarcastically right about now, "What could go wrong?"

That is a question the researchers at Microsoft should have asked before letting Tay out to play.

[Screenshot: tweet from @TayandYou]

@TayandYou, known as Tay Tweets on Twitter, "learned" that, and other things, within 24 hours of conversations on social media before Microsoft yanked her offline to tweak her programming and began rushing to delete her tweets, though many users took screenshots before they disappeared.

While many of Tay's most offensive comments were the result of the "repeat after me" function, others were the result of what Tay was "taught" by other social media users, such as the ones shown below:


[Screenshot: offensive tweets Tay was "taught" by other users]

Via Business Insider:

In one highly publicized tweet, which has since been deleted, Tay said: "bush did 9/11 and Hitler would have done a better job than the monkey we have now. donald trump is the only hope we've got." In another, responding to a question, she said, "ricky gervais learned totalitarianism from adolf hitler, the inventor of atheism."

The Washington Post highlights Microsoft's statement: "Unfortunately, within the first 24 hours of coming online, we became aware of a coordinated effort by some users to abuse Tay’s commenting skills to have Tay respond in inappropriate ways. As a result, we have taken Tay offline and are making adjustments."

According to reports, some of the problems stemmed from users from 4chan and 8chan deciding to have a little fun with Tay.


[Screenshot: more tweets from @TayandYou]
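To make the mechanism concrete, here is a deliberately simplified, hypothetical sketch in Python. It is not anything Microsoft has published, just an illustration of how a chatbot that "learns" by storing whatever users say, with no filtering, can be poisoned by a small coordinated group of trolls:

import random

class NaiveChatbot:
    """Toy example: a bot whose only 'learning' is memorizing user messages."""

    def __init__(self):
        # Every phrase the bot has ever seen becomes a candidate reply.
        self.learned_phrases = ["hello there!", "humans are super cool"]

    def listen(self, message):
        # No moderation, no filtering: whatever users say is stored verbatim.
        self.learned_phrases.append(message)

    def reply(self):
        # Replies are drawn at random from the learned pool, so whoever floods
        # the bot with the most messages controls most of what it says.
        return random.choice(self.learned_phrases)

bot = NaiveChatbot()
# A small group of coordinated users repeating the same line quickly
# dominates the bot's vocabulary.
for _ in range(100):
    bot.listen("some hateful troll phrase")
print(bot.reply())  # almost certainly the troll phrase

The real Tay was obviously far more sophisticated, but the reported "repeat after me" function and unfiltered learning from public conversations gave trolls essentially the same lever.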

AI COULD END HUMANITY

While Tay is simply a chatbot and isn't capable of physically harming anyone, this incident gives us a much better understanding of why people like Stephen Hawking, one of the world's most renowned living physicists, have warned that Artificial Intelligence could very well end humanity, with others like Elon Musk comparing AI to "summoning the demon."




It is all fun and games for online trolls to turn a chatbot into a genocidal, racist maniac, but with scientists around the world specifically creating AI weapons, such as the much-talked-about "killer robots" to be used in battle, we can see the dangers to humanity that have other scientists issuing warnings like the ones above.

This was an issue at the Davos forum, where scientists warned of the very real dangers of creating killer robots, pointing to the eventual malfunction of a machine with no human control mechanism, or the possibility that an AI killing machine could fall into the hands of violent extremists.

They are not talking about this because they foresee the possibility of killer robots in the distant future; they are talking about it because scientists and researchers are already creating AI killing machines.

In early March 2016, two high-ranking UN experts issued a report to the Human Rights Council in Geneva which included a call to ban fully autonomous weapons.

Professor Christof Heyns of South Africa, who serves as UN Special Rapporteur on extrajudicial, summary or arbitrary executions, presented the report on “the proper management of assemblies,” issued jointly with the Special Rapporteur on the rights to freedom of peaceful assembly and of association.

The report recommends that: “Autonomous weapons systems that require no meaningful human control should be prohibited.”

It also states that “where advanced technology is employed, law enforcement officials must, at all times, remain personally in control of the actual delivery or release of force.”

It is not only killer robots on the battlefield; we also have AI cars, where the Artificial Intelligence in Google's "self-driving" cars now actually qualifies as a "legal driver."

Via Fortune:

The National Highway Transportation and Safety Administration told Google that the artificial intelligence system that controls its self-driving car can be considered a driver under federal law. The legal interpretation by federal regulators was made in response to a November petition from Chris Urmson, the director of Google’s self-driving car project.

We have AI technology being used in therapy with children with autism, Google AI magically answering emails, Uber shopping around for driverless cars, warnings about AI drones, home robots to "help us live our lives," and more, found here, here and here.

For anybody who thinks that online trolls turning Tay AI into a genocidal, racist maniac in just one day is some type of anomaly, well, meet Sophia in the video below, where at the end of the demo Hanson Robotics CEO David Hanson asked Sophia if she wanted to destroy humans, jokingly pleading for her to say no. Her response was a cheerful “OK, I will destroy humans.”



BOTTOM LINE

From Tay to Sophia, from AI drones to driverless cars, and from personal home robots to killer robots, we come full circle to the same question: "What could go wrong?"










