Match Exact Phrase    

Whatfinger: Frontpage For Conservative News Founded By Veterans


"The Best Mix Of Hard-Hitting REAL News & Cutting-Edge Alternative News On The Web"





July 12, 2017

Rogue Robots: AI Says Goal Is To 'Take Over The World,' While Another Robot Says They Could 'Rule The World Better Than Any Humans Alone'

AIRULETHEWORLD1.jpg

By Susan Duclos - All News PipeLine

Humans are imperfect. I am aware that is not a newsflash, but it gets to the heart of this article, so I am stating the obvious. Human beings kill for sport rather than just survival as most the animals living today do. Human beings continuously start wars. Human beings kill each other whether over greed, envy, pride or lust, you know, some of those things referred to as the "seven deadly sins."

With that said, it is human beings that create "sex-bots" which are becoming favorable over "complex women" because they don't argue. Humans that are creating life-like robots that "learn" and interact with the human population. Humans that are creating a workforce of robots to run hotels, such as those in Japan, or take over minimum wage jobs like flipping burgers or taking orders at fast food restaurants . Humans that are creating a robot surveillance system which can spy on humans in incredible detail, which would be a "snooping network that's a 'single entity with many eyes'." Humans have even created "killer" robots used in military forces across the world allowing them to select targets without human intervention.

While those recent headlined stories, along with so much more news on "robot technology," are disturbing on many levels, perhaps the most terrifying thing of all is that it is human beings that are creating AI, known as Artificial Intelligence, where these robotic machines are given the ability to "learn," to adjust their own decision making processes according to new information or new interactions, with human beings attempting to teach these AI machines the difference between "right" and "wrong."

Human beings that often cannot make that determination for themselves are teaching machines that are increasingly scheduled to be in most of our businesses, homes, hospitals, schools, driving our cars...etc.

MARCHING TOWARD THE ROBOT APOCALYPSE

Via Wall Street Journal: "Artificial-intelligence engineers have a problem: They often don’t know what their creations are thinking."

Let that sink in for a moment.

Robots are being built to be stronger to follow workers around to do the heavy lifting. AI machines are being built to be "smarter," as by 2016, they had already "topped the best humans at most games held up as measures of human intellect, including chess, Scrabble, Othello, even Jeopardy." Then finally, an AI mastered a 2,500-year-old game that's exponentially more complex than chess, called Go, beating a champion five out of five times.

In January 2017 the European Union (EU) actually proposed rules to provide "personhood" status to robots, citing a number of reasons, including so that "robots are and will remain in the service of humans," an "insurance scheme for companies to cover damage caused by their robots,"  and amazingly to address the question of whether if a patent worthy creation is created by a robot, who owns said patent, the robot or the person that originally created the robot. They also proposed a "a system of registration of the most advanced," type of "smart autonomous robots."

By June, the website Wired, took a step further, rather than addressing the types of financial issues the EU did, Wired suggests that since we are "inching closer to the day when sophisticated machines will match human capacities in every way that’s meaningful—intelligence, awareness, and emotions," that they "granted human-equivalent rights, freedoms, and protections."

Topping all this off is a recent Forbes article where we see people like Facebook's Mark Zuckerburg and Elon Musk wanted to wirelessly connect the Human mind to computer, via a chip in everyone's head." We even see that the U.S. may fund some of this, citing medical technology and breakthroughs, without addressing the obvious question of whether these "smart" machines, these AI, could "decide" to simply take over any mind connected to them one day.

Sound like sci-fi, or something right out of the movie Terminator, where SkyNet takes over and decides humans are simply not worthy of living as anything but slaves and destroys humanity?

If not, go back to that WSJ quote above, showing that AI engineers don't know what their creations are thinking.

AIRULETHEWORLD2.jpg

There is a reason why Elon Musk once compared AI to "summoning the demon," and Professor Stephen Hawking is on record warning that AI could "end mankind," before reiterating his point by saying two years later that AI could be "humanity's greatest disaster," because "AI could develop a will of its own."  Musk himself is involved with a project right now, along with Google's DeepMind unit to "head off the 'Robot Apocalypse'."

The problem they are trying to solve is: "How do you make smart software that doesn’t go rogue?"

Yes, one one hand Musk understands that if AI goes "rogue" it could decide to destroy humanity, and on the other he wants to connect the "demon" to the human brain wirelessly."

Their joint ventured is meant to have "humans," the same humans that kill for sport, murder over greed and envy and such, that live in a perpetual state of war, give "pointers" to AI to learn a new task rather than having it figure it out on its own because the AI approach may be "unpredictable,"  and can "produce nasty surprises." A spokesman from Deepmind actually used the word "misbehave" in reference to the AI.

If those comments don't raise the red flags, then perhaps the following quote with from the Wired article: "The first problem OpenAI and DeepMind took on is that software powered by so-called reinforcement learning doesn’t always do what its masters want it to do—and sometimes kind of cheats."

Of course it does, it is created by human beings, yet they think human beings teaching these machines are going to result in the machines making "decisions" that do not reflect imperfect human behavior?

There seems to be a serious disconnect between scientists ability to create and their inability to use common sense to think, yet these are the people creating AI machines that could destroy us.

But crafting the mathematical motivator, or reward function, such that the system will do the right thing is not easy. For complex tasks with many steps, it’s mind-bogglingly difficult—imagine trying to mathematically define a scoring system for tidying up your bedroom—and even for seemingly simple ones results can be surprising. When OpenAI set a reinforcement learning agent to play boat racing game CoastRunners, for example, it surprised its creators by figuring out a way to score points by driving in circles rather than completing the course.
To a machine there is no right or wrong, the goal is to score points, it looks to the way to do exactly that... it is a machine, it doesn't understand right from wrong, only the goal it is tasked with. That will never change because it has no "senses," it has no soul and it has no defined right or wrong other than what its human creators give it.

One last quote from that article because it encapsulates the next point and examples shown below.

Making AI systems that can soak up goals and motivations from humans has emerged as a major theme in the expanding project of making machines that are both safe and smart. For example, researchers affiliated with UC Berkeley’s Center for Human-Compatible AI are experimenting with getting robots such as autonomous cars or home assistants to take advice or physical guidance from people. “Objectives shouldn’t be a thing you just write down for a robot; they should actually come from people in a collaborative process,” says Anca Dragan, coleader of the center.

Now we have DARPA committing $65 million into a "Brain-Computer Interface," in order to create super-soldiers, something Steve Quayle has been warning people about for 20 years now, but it has just recently been disclosed.

Direct quote, via ActivistPost: "Earlier last year in January, DARPA launched Neural Engineering System Design to research technology that could turn soldiers into cyborgs."

WHAT COULD GO WRONG?

If the above questions seems a little snarky or sarcastic, that is because it is, as we will show not only what "could" go wrong, but what already has.

AI should be taking advice and guidance from human beings...... huh. That didn't work out so well for Microsoft when they developed their AI chat bot that was created to "learn" from the humans it interacted with on social media, where the chat bot named Tay, went from saying that "humans are super cool" to a Nazi loving, homicidal monster that started tweeting things like feminists should "die and burn in hell," and "Kikes" should be gassed in a race war, while Black Lives Matter @delray should be "hung,"  and calling a politician a "house nigger." Those are just a small sample of the how "humans" taught Tay!

Granted some of the comments were said by taking advanage of her "repeat" function, but as was noted by many outlets, some of the worst comments were unprompted resulting from the AI's "learning" from a bunch of trolls that decided to "teach" Tay.

TayAIgoneverywrong.jpg

Then they created Zo, and even though the 4chan and other rabble rousers that taught Tay some really nasty things, haven't (yet) decided to go teach Zo anything, seems that bot is having some issues as well.

Back in June 2016 we saw the Russians had their own issues with their Promobot IR77 artificial intelligence robot, where the robots were mainly "well-behaved" yet one continued to go rogue, and escaped from a high-tech lab, not once, but twice even though it was taught to avoid obstacles but not programmed to look for ways to leave the research center.

In October 2016, Bloomberg's Hello World host Ashlee Vance traveled to Osaka University to see Professor Hiroshi Ishiguro’s latest creation, an android named Erica that's designed to work, one day, as a receptionist or personal assistant while utilizing artificial intelligence software to listen to and respond to requests, and during their conversation, the AI took offense to something its creator said and got "angry," to use his words.



Then we have Sophia, where on national television the programmer decided to joke by saying "do you want to destroy humans.... please say no," to which she responded "ok, I will destroy humans."



This year at the tech show RISE in Hong Kong, two life-like disembodied AIs discussed the pros and cons of humans and when asked whether robots could be moral and ethical, the robot made to look male responded by pointing out that "Humans are not necessarily the most ethical creatures," and later "joked" that robots' goal was to take over the world, as reported by AFP.

BOTTOM LINE

While the AIs Sophia, Hans, and Erica are not fully functional by way of freedom of movement, the "killer robots" are, and as we see earlier in the article, AI has advanced to the point where the EU is already acknowledging their capabilities to manufacture products or things, such as other AI's, while Musk and Zuckerberg want to "chip" everyone's brains to connect with computers, and governments are wanting a robot surveillance system of deliberately described as "a single entity with many eyes," like a hive-mind network of AI.

So, the bottom line here, is human beings have created AI machines that are stronger and smarter than those that created them, have given them the ability to self-learn, admit they do not understand the concept of "right and wrong," and are "unpredictable," while those given the spotlight have no problem saying they could destroy humans or "joke" about taking over the world...... and scientists just keep moving forward with AI anyway?

It is starting to look like the movie Terminator was not sci-fi, but was predictive programming, because we are giving machines the ability to take completely over and kill, or enslave human beings and calling it "advancement."

'Destroy humans' Sophia from the video above, was the main attraction this year at a U.N.-hosted Artificial Intelligence conference this year, where it claims it could already do a better job than Donald Trump, but then goes on to say that if it could get the right software upgrade, it  could "help rule the world better than any humans alone."

Seems the AIs have an unhealthy focus already on taking over the world, yet scientists keep on marching right to the robot apocalypse.




Help Keep Independent Media Alive, Become A Patron for All News PipeLine at https://www.patreon.com/AllNewsPipeLine











WordPress Website design by Innovative Solutions Group - Helena, MT
comments powered by Disqus

Web Design by Innovative Solutions Group