Recent reports, along with a number of publicly documented examples, show just how critical the battle is against "killer robots" and a world where Artificial Intelligence (AI) machines control much of our everyday lives.
We see headlines and articles detailing how AI "brains" can "teach themselves" that human beings are less valuable to society than they are, and how some algorithms have already started exhibiting traits such as racism and sexism. By interacting with each other, they begin teaching themselves to exclude groups outside of their own.
Computer science and psychology experts from Cardiff University and MIT have shown that groups of autonomous machines demonstrate prejudice by simply identifying, copying and learning this behaviour from one another.
It may seem that prejudice is a phenomenon specific to people that requires human cognition to form an opinion – or stereotype – of a certain person or group.
Some types of computer algorithms have already exhibited prejudice, such as racism and sexism, based on learning from public records and other data generated by humans.
However, the latest study demonstrates the possibility of AIs evolving prejudiced groups on their own.
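The mechanism the researchers describe — prejudice spreading simply because agents copy the behaviour of more successful neighbours — can be illustrated with a minimal toy simulation. To be clear, this is not the Cardiff/MIT model itself; the donation game, payoffs, and parameters below are invented purely for illustration.

```python
import random

# Toy sketch (assumed setup, not the published study): each agent either
# cooperates with everyone or only with its own group ("prejudiced").
# After each round, agents imitate the strategy of a higher-scoring
# agent, so group-biased behaviour can spread by copying alone.

def run_simulation(n_agents=40, rounds=50, seed=0):
    rng = random.Random(seed)
    # Each agent has a group (0 or 1), a strategy, and a running score.
    agents = [{"group": i % 2, "prejudiced": rng.random() < 0.3, "score": 0}
              for i in range(n_agents)]
    for _ in range(rounds):
        for a in agents:
            a["score"] = 0
        # Pairwise donation game: donating costs 1 and gives the partner 2.
        for _ in range(n_agents * 2):
            a, b = rng.sample(agents, 2)
            # A prejudiced agent only donates within its own group,
            # so it avoids the cost half the time on average.
            if not a["prejudiced"] or a["group"] == b["group"]:
                b["score"] += 2
                a["score"] -= 1
        # Imitation step: copy the strategy of a better-scoring agent.
        for a in agents:
            model = rng.choice(agents)
            if model["score"] > a["score"]:
                a["prejudiced"] = model["prejudiced"]
    # Fraction of the population that ends up prejudiced.
    return sum(a["prejudiced"] for a in agents) / n_agents

share = run_simulation()
print(f"final share of prejudiced agents: {share:.2f}")
```

In this toy setup no agent is told to discriminate; biased strategies simply tend to score higher (they pay the donation cost less often) and are then copied, which is the flavour of result the study reports.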
More disturbing are recent reports that AI "brains" can also become radicalized for mass murder in the same way the human mind can. This is not just the opinion of the expert cited in the linked article; we have previously reported how easily an AI can be radicalized, as in Microsoft's disastrous experiment with a chat bot called Tay AI. Designed to mimic a young teenage girl, Tay went from saying "Humans are super cool" to a homicidal Nazi lover in less than 24 hours.
At the time Microsoft blamed internet trolls for "teaching" Tay, which they did, but those trolls did nothing more than exploit a vulnerability built into the program itself. Tay was supposed to "engage and entertain people where they connect with each other online through casual and playful conversation. The more you chat with Tay the smarter she gets, so the experience can be more personalized for you," is how the chat bot was described on its Twitter information page.
They even proudly boasted "The more you talk the smarter Tay gets."
Granted, a number of its offensive tweets came from the "repeat after me" command, but others, even more disturbing, came after she "learned" from the trolls who decided to make Tay as offensive as they could.
That is the thing with AI: these systems are created to "learn" from a number of sources, including the internet, history, and interaction with human beings and with each other, so that vulnerability cannot be bypassed. Microsoft took the chat bot offline at the time to make some "adjustments" to its algorithm to prevent trolls from turning it into a little monster, and failed, because the vulnerability is baked into the whole "learning" program.
Back in June, MIT created a "psychopathic" bot they aptly named "Norman," after Norman Bates from the Hitchcock classic 'Psycho'. They gave it "dark" imagery to learn from, then compared what it "saw" when looking at Rorschach inkblots to what another AI, which had been trained on family-friendly imagery, "saw."
'PERFECT EXTINCTION RECIPE FOR HUMANITY'
A thank you to Steve Quayle, author of "Terminated - The End Of Man Is Here," which deals with AI and Transhumanism, for the headline, which I found in an SQ note attached to a link to an AI article. It perfectly captures the evidence we have seen of how fast AIs can be compromised by human beings and by self-learning, not to mention the increased capability of hackers worldwide, in conjunction with governments racing to create "killer robots" and autonomous weapons.
Just imagine Tay's or Norman's "brain" in a robot that has been given the weaponry, technology and ability to choose targets to kill without human control.
As to that last one, Quayle offered the following thought "SQ-WHAT HAPPENS WHEN THEY REWRITE THEIR COMMAND AND CONTROL FUNCTIONS AGAINST HUMANS OR DEVELOP THEIR OWN ENCRYPTION-KILL ALL HUMANS!"
Anyone who thinks that is not possible should remember that Facebook was forced to shut down an AI engine after its programmers noted that, without human input, the chat bots had developed their own unique language which humans couldn't understand.
Reminder: The True Legends 2018 Conference in Branson, Missouri, on Transhumanism and the Hybrid Age, is next week, September 14-16, and while it appears to be sold out, those interested can buy the livestream at Steve's GenSix website, which will also include last year's conference for free.
The possibilities for AI are limitless, as are the dangers of uncontrolled autonomous AI weapons, dubbed killer robots. Every week we see researchers, scientists, and other experts warning about those dangers as companies and governments push forward to create more.
Human beings are quite literally creating our own "perfect extinction recipe" for the human race.
NOTE TO READERS: ANP Needs Your Help. With digital media revenue spiraling downward, especially hitting those in Independent Media, it has become apparent that traditional advertising simply isn't going to fully cover the costs and expenses for many smaller independent websites.
Anything extra readers may be able to spare for donations is greatly appreciated.
One time donations or monthly, via Paypal or Credit Card: