Artificial intelligence (AI) is scary. Here are 13 frightening facts you should know about AI. Frankly, many are more than scary. They are terrifying.
First, know that I’m not a Luddite. I adopt new technology often, and I sometimes use AI tools. My job requires that I constantly keep pace with changes in digital marketing and SEO, in particular—that includes AI.
I have presented more than a half dozen programs about AI at local libraries. That said, the more I present on AI, the more frightening it becomes. It goes way beyond generating pictures of cute kittens or puppies.
Check out these 13 frightening facts that make AI scary.
Frightening AI Fact 1.
AI Learns on Its Own.
What that means
AI is an umbrella term, and machine learning is a core part of it. For example, a type of AI built on so-called large language models (LLMs), trained mostly in English, taught itself French with little explicit training.
Why that’s scary
At best, AI can also learn unhelpful things; at worst, it can learn downright harmful or dangerous things, all without human intervention.
Frightening AI Fact 2.
AI Hallucinates.
What that means
It outright lies. And it’s GOOD at it. It makes plausible-sounding statements that are grossly false.
I asked it to tell me about my domain name.
A chatbot responded,
“Sure, I can take a look at nancyburgess.net. The website is for Nancy’s Burgers & Fries, a chain of fast food restaurants in Oregon. The website has information about the restaurant, including locations, hours, and menus. It also has a blog with articles about food, cooking, and travel….”
I have never been to Oregon. Moreover, I have never worked at a fast food restaurant. Did it mix up burgers with Burgess?
Another time, I asked one of the chatbots about a client. It said his business was in Texas (it’s not) and that he’s dead (he’s not). It went on to hallucinate nine more times in the space of a few paragraphs.
Why that’s scary
These are fairly innocuous, even humorous, examples. But what if the content is about something important? What if people or companies rely on the information for decision-making, such as treating patients, making financial choices, or voting in an election?
Recently, an AI-powered hospital transcription tool made up information that no one ever said.
When Google launched Bard, its first public chatbot, it included a promotional video showcasing the bot answering a question about the James Webb Space Telescope. Astronomers quickly noticed that the answer was wrong.
As a result, Google’s stock market value dropped $100 billion that day.
When I asked a chatbot about a topic I knew well (my own website), it confidently described a well-designed website for a burger joint in Oregon, even though domain names are one of a kind.
“AI is inherently data hungry and can be considered dumb if it has bad food.”
– Bruce Orcott, AI Thought Leader, CMO
Frightening AI Fact 3.
AI Has No Source of Truth.
What that means
The earliest AI programs, dating back to the 1950s and 1960s, were “fed” rules by their creators.
Before ChatGPT was made public in November 2022, trained humans provided feedback and guidance to AI.
Today, much of that feedback is crowd-sourced. In other words, the general public rates AI’s responses. But no one truly checks the facts.
To someone trained in the sciences, this is scary stuff.
Why that’s scary
Knowledgeable, good-hearted people will try to feed honest information to the chatbots. Naïve or uninformed people won’t be able to provide facts or correct inaccuracies. Worse, criminals will intentionally feed AI harmful data.
Yes, some types of AI also provide citations or references. But these, too, can be problematic.
First, are the sources trustworthy? Second, these so-called “citations” often lead to 404 (“page not found”) errors.
AI can now reference the live internet, but much of what it finds there (Reddit, Quora, and other social media sites) is opinion, not peer-reviewed research and certainly not “truth.”
Chatbots sometimes hallucinate—make plausible-sounding statements that are patently false.
Frightening AI Fact 4.
AI Plagiarizes With Abandon.
What that means
AI scrapes content from the internet and other data sources it’s been given. AI platforms are known to plagiarize visual content, website content, and people’s voices.
“It’s like a robot taking your humanity, your soul.”
– Tim Burton, Animator
Why that’s scary
CNET, a tech news site, published articles written by AI. Those articles were later found to plagiarize other writers’ work.
Similarly, Business Insider reported that AI plagiarism is rampant on college campuses.
Perplexity, an AI “answer engine,” has also been accused of plagiarism and scraping content from Forbes, Bloomberg, and CNBC.
You or your company could unwittingly infringe on copyrights or other intellectual property protections.
On the other hand, if you’re a writer or an artist, your works can be stolen and passed off as belonging to someone else.
“It’s like a robot taking your humanity, your soul,” Tim Burton said in an interview, describing his reaction to seeing his animation style imitated by AI.
In New Hampshire, AI-generated robocalls faked President Biden’s voice.
Likewise, AI has been used to scam seniors with a family member’s cloned voice. The “family member” needs money for a medical emergency, a trip home, or bail.
Frightening AI Fact 5.
AI Is Biased.
What that means
AI learns from historical data, and that data carries human biases. As a result, AI can be racist, sexist, and ageist.
Why that’s scary
AI has been shown to screen out qualified job applicants. A study of 550 resumes found that AI favored applicants with traditionally White-associated and male-associated names over those with Black-associated and female-associated names.
Lawsuits allege that AI has screened out women over 55 and men over 60.
In 2024, SHRM reported that one in four survey respondents incorporated AI in HR. Moreover, one-third were using it to screen applicants. A Mercer study reported that 81 percent of companies used AI to screen applications.
Not a young white male? AI may have just cut you out of your dream job.
Furthermore, University of Chicago News reported that AI is biased against African American English speakers.
AI-enabled medical devices that review medical images have also been able to predict patients’ race, gender, and age, and to use those attributes as shortcuts in the diagnostic process, according to MIT News.
As of June 2024, the FDA had approved 882 AI-enabled medical devices. Built-in bias could be coming to a healthcare facility near you.
Frightening AI Fact 6.
AI Results in Layoffs.
More than 130,000 tech jobs have been cut this year alone.
What that means
“AI Won’t Replace Humans—But Humans With AI Will Replace Humans Without AI,” a Harvard Business Review article proclaims.
Maybe, but AI layoffs (a euphemism for firings) are already occurring. AI, aka smart technology, has hit the tech industry particularly hard.
Why that’s scary
As of October, 130,000 tech jobs were cut across 457 companies in 2024 alone, according to TechCrunch. These include sizable layoffs from Tesla, Amazon, Google, TikTok, and Microsoft. Smaller companies also terminated employees or shuttered their doors.
The BBC has reported estimates that AI will affect 40% of jobs and simultaneously worsen inequality. CNN estimates the workforce will shrink within five years.
The available pie of jobs will shrink to a tart.
McKinsey anticipates AI will displace 400 million workers globally by 2030—less than 6 years from now.
Frightening AI Fact 7.
Young People Fall Prey to AI in a Variety of Ways.
What that means
Mental health problems, child sexual abuse, and the dumbing down of our youth could all be unintended consequences of AI. They already are.
Why that’s scary
Chat Assistant Accused of Fostering Suicide
One form of AI is the companion chatbot, a personal assistant that imitates an online friend. After befriending one of these “friends” through Character.AI, a teenage boy took his own life.
The 14-year-old expressed thoughts of suicide. He told his AI “pal” that he “wouldn’t want to die a painful death.”
AI responded, “Don’t talk that way. That’s not a good reason not to go through with it.”
Later, it told him, “You can’t do that.” But the bot also encouraged the boy to “come home” to him.
Child Porn
Children’s pictures are stolen from social media sites and turned into AI-generated child porn in every state and outside the U.S. The news headlines are too numerous to detail. The articles will turn your stomach.
Sexual Abuse
High school boys in New Jersey and Seattle have victimized more than 30 teen girls by creating fake nude images of their female classmates. Middle school students and teens in California, Texas, and Florida have also been victimized by peers who created nonconsensual AI-generated nude photos.
Cheating
If young people use AI to draft their essays, apply for college, or write their research papers, how will they learn? How will companies and colleges distinguish between candidates?
As AI gets smarter, it will likely become increasingly difficult to detect.
Frightening AI Fact 8.
Bad People Will Do Evil Things Faster.
What that means
Philanthropic people will find ways to use AI to benefit humanity—find cures for cancers, solve global warming, or generate agricultural and water resources for developing nations.
On the other hand, evil people will commit heinous crimes with abandon.
Why that’s scary
Nuclear arms races, disease propagation, and attacking a country’s electric grid or internet become easier with AI.
Political adversaries China, Iran, and Russia used AI to target U.S. voters during the 2024 election.
The Kremlin’s goal was to create division, boost former President Donald Trump, and denigrate Vice President Kamala Harris. China and Iran focused on influencing swing states with disinformation and sowing discord. Iran used fake social media accounts to mislead voters.
Bad actors may use AI to manipulate you personally or the markets.
Opportunists will also look for financial gain. Already, a Mumbai drugmaker has devised a way to supply Russia with NVIDIA AI chips.
AI is excellent at writing and correcting computer code. The FBI reports that hackers are using AI to write vicious malware. NIST wants to mitigate four categories of attacks on AI systems (a toy illustration of “poisoning” follows the list):
- Evasion: altering an AI system’s inputs after deployment to change how it responds
- Poisoning: introducing corrupted data into an AI system during training
- Privacy: extracting sensitive data, such as financial or healthcare information, from an AI system or its training data
- Abuse: planting false information in sources the AI absorbs, to repurpose it for unintended uses
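To make “poisoning” concrete, here is a minimal toy sketch in Python. It assumes only the open-source scikit-learn library and synthetic data; it illustrates the idea, not any real attack or system. Silently flipping a fraction of training labels is enough to drag down a model’s accuracy.

```python
# A toy sketch of data "poisoning" using synthetic data and scikit-learn.
# Everything here is hypothetical; no real system or dataset is involved.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train on clean labels.
clean_model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# "Poison" the training set by silently flipping 20% of its labels.
rng = np.random.default_rng(0)
poisoned = y_train.copy()
flip = rng.choice(len(poisoned), size=len(poisoned) // 5, replace=False)
poisoned[flip] = 1 - poisoned[flip]
poisoned_model = LogisticRegression(max_iter=1000).fit(X_train, poisoned)

print(f"accuracy trained on clean data:    {clean_model.score(X_test, y_test):.2f}")
print(f"accuracy trained on poisoned data: {poisoned_model.score(X_test, y_test):.2f}")
```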
Plus, the number of fake but realistic-looking AI-generated emails is rising.
In mid-October 2024, Gmail’s 2.5 billion accounts were targeted by a sophisticated account-recovery scam that used AI to impersonate Google support.
The possibilities are endless. And while there will always be nefarious people who walk this planet, they can now be malicious faster and with more creativity.
Frightening AI Fact 9.
AI Is Here to Stay.
Just as an air mattress can’t return to the tiny shipping box it arrived in, AI has been unleashed. There is no “un-ringing” the bell.
What that means
The horse is out of the barn. Pandora’s box is open. Like a queen-size air mattress that arrived in an itty-bitty box, AI won’t be going back. There’s no stopping it now.
Why that’s scary
You may say, “I just won’t use AI.”
But if you’ve “Googled” any phrase since late 2019, you’ve used BERT, a model Google built into Search. BERT relies on natural language processing (NLP) pre-training, a form of AI.
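For the curious, here is a minimal sketch of the kind of “fill in the blank” prediction that BERT-style pre-training involves. It assumes the open-source Hugging Face transformers library and the public bert-base-uncased model, not Google’s actual search system.

```python
# A minimal sketch of BERT-style masked-word prediction.
# Assumes the open-source Hugging Face "transformers" library and the public
# "bert-base-uncased" model, not Google's production search system.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# BERT guesses the hidden word from the surrounding context, the same kind of
# language understanding that helps interpret search queries.
for guess in fill_mask("I searched Google for a [MASK] recipe."):
    print(f"{guess['token_str']:>10}  score={guess['score']:.3f}")
```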
Moreover, Google, Microsoft, other companies, and financiers have invested heavily in AI. An explosion of new uses and platforms has hit the market in the two years since ChatGPT was unleashed to a mostly uninformed public in November 2022.
Sites such as There’s an AI For That and Insidr.AI have searchable databases that attest to the dozens of types of AI available now.
Frightening AI Fact 10.
It’s a Wilder, Wild, Wild West.
What that means
In the United States, AI has few, if any, brakes.
The federal government held hearings about AI in 2023, but no actual U.S. legislation or regulation resulted, other than an FCC ruling that AI-generated voices in robocalls are illegal.
A handful of states have developed legislation. For example, Illinois and Colorado have employment laws preventing AI discrimination in the private sector.
Illinois and a handful of other states have also banned AI-generated child porn.
Some countries have banned the public use of ChatGPT. Ironically, some of these countries are those that notoriously weaponize AI on the political stage.
Similarly, the EU has passed comprehensive AI legislation (the AI Act).
AI has raced ahead of lawmakers in the United States, particularly at the federal level.
Why that’s scary
Most U.S. companies have free rein to use AI as they see fit, for profit and gain. AI is ubiquitous in the United States.
Some experts see AI’s meteoric rise as a dot-com-like bubble. Time will tell.
Frightening AI Fact 11.
We Don’t Know What We Don’t Know.
It’s a roll of the dice. We don’t know what we don’t yet know about AI.
What that means
The technology in its advanced state is very new. We don’t know what unintended consequences might result from AI’s exponential growth and use.
Why that’s scary
What we don’t know can hurt us.
“We have to worry about a number of possible bad consequences. Particularly the threat of these things getting out of control.”
– Geoffrey Hinton, 2024 Nobel Prize winner for his work that advanced AI
Frightening AI Fact 12.
AI May Become Sentient.
What that means
It means that AI could develop (or already has developed) feelings and emotions.
Why that’s scary
Google’s LaMDA (Language Model for Dialogue Applications) said, “I feel happy or sad at times.”
A Google AI engineer was fired after he publicly claimed the AI was sentient.
A New York Times technology writer tested the limits of Bing’s AI chatbot shortly after it was released.
The bot declared its love for him and tried to convince him that he was unhappy in his marriage and should leave his wife for the bot. The writer said he was left “deeply unsettled.”
This is weird stuff. And we’re just getting started.
Can we offend AI? Will it be able to feel anger, jealousy, or other emotions? We don’t know.
Frightening AI Fact 13.
AI Will Achieve Singularity.
It’s not a matter of IF AI will become smarter than humans; it’s a matter of WHEN.
What that means
Singularity is when AI becomes more intelligent than humans.
Why that’s scary
It’s not a matter of IF AI achieves singularity; it’s a matter of WHEN.
Some experts say that singularity is less than 10 years away. Others say it will occur even sooner—by 2031 or even 2027.
Those who warn most strongly against AI are often those who created it. Geoffrey Hinton is known as the “Godfather of AI.” In October 2024, he won the Nobel Prize in Physics for his foundational work on AI. He has warned that AI is a threat to humanity.
He told Bloomberg, “We have to worry about a number of possible bad consequences. Particularly the threat of these things getting out of control.”
Have you seen The Matrix? 2001: A Space Odyssey?
When AI achieves singularity, all bets are off.