"The Best Mix Of Hard-Hitting REAL News & Cutting-Edge Alternative News On The Web"
December 22, 2023
AI Image Generator Creators Are Disturbed Perverts That Deliberately Added 3,200 Suspected Child Sexual Abuse Images Into The AI Database Which Pedophiles Are Using For Their Sick Fantasies
In the Terminator movies, the term "Skynet" represents Artificial Intelligence (AI) that took over all computers, robots, etc.... and then slaughtered human beings, hunting survivors. Since then we have watched multiple nations give more and more power to AI systems.
Ten hours ago, a story came out from the AP about mini-robots with no actual face or mouth, but with a swiveling "head," to be put into the homes of "lonely" people. They speak, joke, remember the interests of users, play them music and more. They are being created as "companions."
There are so many more examples of how society is being made dependent on Artificial Intelligence that one has to wonder if those geniuses creating AI have ever even seen Terminator, I, Robot, Wall-E, WarGames, or any of the other "killer robot, killer AI" movies.
Yes, we know those are fictional movies, but life is definitely imitating art these days, and researchers ignore the warnings about the dangers of AI.
Eliezer Yudkowsky, a researcher and author who has been working on Artificial General Intelligence since 2001, wrote an article in response to an open letter from many big names in the tech world, which called for a six-month moratorium on AI development.
The letter, signed by 1,125 people including Elon Musk and Apple's co-founder Steve Wozniak, requested a pause on training AI tech more powerful than OpenAI's recently launched GPT-4.
The key quote from that article is chilling, as he states: "Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die."
While we have long known, and reported repeatedly, that AI is dangerous, we have now learned something even more disturbing.
This October, boys at Westfield High School in New Jersey started acting "weird," the Wall Street Journal reported. It took four days before the school found out that the boys had been using AI image generators to create and share fake nude photos of female classmates. Now, police are investigating the incident, but they're apparently working in the dark, because they currently have no access to the images to help them trace the source.
Another use of AI image generators has been to create thousands of fake child sex images. These images were shared on the dark web, but have moved from there to social media, according to multiple reports.
Child safety experts are growing increasingly powerless to stop thousands of "AI-generated child sex images" from being easily and rapidly created, then shared across dark web pedophile forums, The Washington Post reported.
This "explosion" of "disturbingly" realistic images could help normalize child sexual exploitation, lure more children into harm's way, and make it harder for law enforcement to find actual children being harmed, experts told the Post.
Now, one would think that those creating these AI programs, especially the image generators, would have some sort of block in their programming to prevent the AI from producing these types of images. Not only do they not have a block on it, they are in fact responsible for it.
Those same images have made it easier for AI systems to produce realistic and explicit imagery of fake children as well as transform social media photos of fully clothed real teens into nudes, much to the alarm of schools and law enforcement around the world.
Until recently, anti-abuse researchers thought the only way that some unchecked AI tools produced abusive imagery of children was by essentially combining what they’ve learned from two separate buckets of online images — adult pornography and benign photos of kids.
But the Stanford Internet Observatory found more than 3,200 images of suspected child sexual abuse in the giant AI database LAION, an index of online images and captions that’s been used to train leading AI image-makers such as Stable Diffusion.
To load those sick, perverted images, these AI image generator creators had to actually search for said images to load.
Why on Earth would anyone that isn't a pedophile search for those images to deliberately load into a database for AI images?
THIS WAS NOT A 'FLAW,' IT WAS DELIBERATE
According to the AP story quoted above, the study authors are urging companies to take action to address a "harmful flaw" in the technology they built.
Flaw? A flaw indicates an accident, something they didn't expect, an unintended consequence.
Of course the companies told the AP they have filters and safeguards, along with a bunch of other assertions and claims about fixing the issue, now that it has been exposed, but the bottom line is this: This wasn't a flaw, this was deliberately done.
There is no excuse or justification for this, and now, since many of these programs are already on personal computers or devices, there is also no way to completely eradicate the images, nor the ability of very sick people to create child sexual abuse imagery.
Another issue here is accountability. There is no legal recourse against those who added the 3,200 suspected child sexual abuse images to the database. If a citizen of the United States is caught with child porn, they are charged and tried, as the headlines below show.
Those reports were from just the last month. Yet the people who searched for and added those images to the AI image generator databases will not be held legally responsible, and it is highly doubtful they will even be fired from whatever companies allowed this.