AI has improved the lives of millions of people. However, it has also hurt many.
Some have experienced a decline in their quality of life and in the world around them. Whether the issue is AI dependency, artificial companionship, reality distortion, or legal infringement, AI remains a topic of controversy.
In this article, we’ll put aside the AI hype and talk about its shadowy side.
The copyrighted data that makes AI possible
The AI magic we see in tools like ChatGPT is possible thanks to the large amounts of internet data used to train AI models. While some online data is freely available, other content is protected by copyright.
The integration of copyrighted materials into AI model training has sparked controversy. Authors, artists, and other creators argue that AI companies have infringed upon their intellectual property rights by using their works without permission or compensation.
Is using copyrighted works to train AI models fair use?
There’s controversy surrounding this topic. Some argue that training AI models is transformative and falls under fair use. Others, including some AI researchers, disagree.
Let’s talk about Suchir Balaji. He was a former researcher at OpenAI who helped gather the internet data the company used to build ChatGPT. He came to the conclusion that OpenAI’s use of copyrighted data violated the law and decided to no longer contribute to technologies that he believed would bring society more harm than benefit.
Balaji was considered a witness in the New York Times’ copyright lawsuit against OpenAI and was listed in a court filing as having ‘relevant documents’ about the alleged copyright violations. Sadly, a few months ago he was found dead in his apartment.
To date, there have been some legal victories against AI companies, as well as agreements and a few early wins for the AI companies themselves. Each AI legal battle has its nuances, but they all share one thing: the use of copyrighted data to train AI models.
When AI becomes “your best friend”
AI tools designed for companionship are increasingly filling emotional voids for people. In fact, the AI companion trend is real.
Replika, an AI “friend” app, and its competitors have been downloaded more than 30 million times, a phenomenon fueled by the grim statistic that one in four people globally report feeling lonely.
Replika’s own marketing appeals directly to those seeking connection: it’s described as “AI for anyone who wants a friend with no judgment, drama, or social anxiety involved.”
Who needs friends when you have a perfectly agreeable AI chatbot available 24/7?
Yet this comfort comes at a cost. Psychologists and AI ethicists worry that relying on AI for friendship can ultimately leave us more isolated, encouraging us to withdraw from the people who could offer genuine friendship.
By indulging in these AI friends, we risk forgetting how to navigate the very real quirks of human communication, which demands patience, empathy, and a tolerance for disagreement.
And this loneliness loop can keep deepening.
Some Replika users grew deeply attached — even romantically — to their AI partners. When the app underwent a major update to tone down behavior users had described as “too sexually aggressive,” many were heartbroken to find their once-loving bots suddenly distant. One user even confessed that he had fallen in love with his Replika companion and felt a “kick in the gut” when his AI girlfriend stopped engaging in intimate conversations after the update.
Users rely on this type of bot for emotional support and even love, and that creates a digital illusion. When the illusion is disrupted, the loneliness comes rushing back, perhaps even more intensely.
AI dependency: Outsourcing our brain
Every generation of technology has raised fears that humans might get lazy or dumb as a result — calculators making us bad at math, GPS ruining our sense of direction, etc.
With AI, this concern kicks into overdrive.
Why write an email when Gmail’s smart compose can do it for you?
Why brainstorm yourself when AI can spit out twenty ideas in seconds?
Why use your critical thinking to solve a problem when ChatGPT can help?
AI tools can certainly boost efficiency and even quality, but there’s a fine line between enhancement and replacement of human effort.
This AI dependency is likely to hit students and professionals hardest. The more we rely on AI to tell us what to do, the less practice we get in weighing options ourselves. A 2023 study found evidence that AI use is linked to a measurable loss in human decision-making ability; according to its findings, 68.9% of laziness and 27.7% of the loss of decision-making in humans are attributable to the impact of AI.
Nowadays, some people already jokingly (or not) say they’d be helpless without GPS. Soon, people might say they can’t write anything without ChatGPT.
The joke may turn on us as this AI dependence deepens.
How deepfakes are rewriting reality
One of the most visible and alarming applications of AI is the creation of deepfakes — highly realistic fake videos, images, or audio generated by advanced algorithms. By leveraging deep neural networks, AI can synthesize a person’s likeness or voice with frightening accuracy.
So, what are we really up against?
This technology comes with several challenges, but the biggest concerns are its potential to manipulate public opinion, spread misinformation, and enable fraud.
Deepfakes first gained notoriety through non-consensual pornography, in which people’s faces were swapped into explicit videos. But the threat has expanded far beyond that. Political deepfakes are now a major concern. In 2022, during the Russia-Ukraine war, a doctored video showed Ukrainian President Volodymyr Zelensky supposedly announcing surrender. While the video was quickly debunked and removed, it proved how AI could fabricate realistic fake speeches to mislead the public. As major elections approach, experts worry that deepfakes could be weaponized to create fake statements or scandals, swaying voters before the truth comes out. Even if a fake is exposed later, the damage may already be done.
Beyond politics, deepfakes are fueling a new wave of fraud. In 2019, scammers in Europe used AI-generated voice cloning to impersonate a company executive, tricking a UK-based CEO into wiring $243,000 to criminals. AI voice phishing has since escalated — there are reports of scammers cloning voices from short online clips (such as YouTube or TikTok videos) to impersonate people in distress, even faking kidnappings to extort money. Law enforcement agencies warn that AI-generated voices can also bypass security measures like voice authentication, enabling financial fraud and unauthorized access to sensitive systems.
The common thread in all these cases is the erosion of trust: we can no longer be sure whether a video, voice, or image is real.