You’re Not The Only One Feeling AI Fatigue (Or: Why That New AI Tool Isn't for You)
More AI tools, more AI fatigue.

There’s a pre-ChatGPT era and a post-ChatGPT era.
Before OpenAI unveiled ChatGPT, only a handful of programmers dabbled in building chatbots and simple models for local execution, and only machine learning engineers and a few AI enthusiasts used the words “artificial intelligence” on a daily basis.
Things changed on November 30, 2022.
A few weeks after ChatGPT was released, platforms like X, YouTube, Reddit, and even TikTok were flooded with influencers offering tips on leveraging AI to improve our everyday lives. Some offered genuinely valuable insights, while others were clearly just fishing for clicks. Little by little, it became common to find posts about “the next AI tool that beats ChatGPT” or “(insert profession) is dead. Learn how to use ChatGPT to replace them.”
What in the world was going on?
Then it seemed that every week a new AI tool was unveiled or upgraded. Many companies were rolling out their own chatbots to the masses. We're talking about big names like Perplexity AI, Google Bard, Bing Chat, Meta's LLaMA, Stability AI, Hugging Face, Scale AI, and Anthropic's Claude, to name just a few.
It was like being stuck in a loop.
Suddenly, staying updated with the latest models and their upgrades became a part of my daily routine. I would dedicate as much time as I could to this to keep learning and never fall behind.
Months into this new world, I asked myself,
Are these new models and features truly a step forward, or are they the same old tech repackaged with a new name and perhaps with less intelligence?
Do I really need to learn most of the AI tools out there?
Here’s the answer to these questions.
It's the same old routine
An avalanche of papers, videos, and models hit us almost in real-time. Just when we’re getting the hang of one AI tool, along comes another with more features or an extra million parameters.
The novelty has worn off; I'm drained, swamped, and, frankly, beat. Yet for a long time I felt an internal push to dive into every "latest model" to see if it actually lived up to the hype.
That was a self-imposed mandate that I eventually managed to shake off.
Most AI tools aren’t for us. Take this tweet as an example: "I've spent countless hours using 1500+ AI tools. There's only a handful I actually use … Here's the top 24 AI tools that I use daily(ish)." That really puts things into perspective: only 1.6% of those tools ended up being of any real use to him.
This realization led me to think about the redundancy and limitations inherent in the countless tools at our disposal. At this point, I might as well roll out my own version of ChatGPT with one of the APIs out there. Groundbreaking? Hardly.
It's not that I don't value new knowledge; quite the contrary, I find it stimulating. But such a project would serve my personal curiosity more than the public domain, where it would merely blend into the sea of existing tools without making any significant impact.
The relentless march of AI development is inevitable. However, the current slow pace is tied directly to chip availability: until a tech giant invests in enough GPUs to train more robust models (and set a new benchmark), we're pretty much stuck running in circles.
Companies are launching low-quality AI products so they don’t fall behind
Do you remember how bad Google’s Bard was when it initially launched? I do. It didn’t support roles, couldn’t connect to external plugins, and was bad at reasoning, coding, and more.
But why did Google launch such a product? To avoid falling behind OpenAI's ChatGPT. Although Bard wasn’t as good as ChatGPT, it was free, and it made people mention "Google” and “AI“ in the same sentence. That’s probably also why Google released that infamous Gemini Ultra demo back in December (now that Gemini Ultra is available to the public, it’s clear the demo was overhyped).
I don't think launching a low-quality product like Bard was ever Google's preferred strategy. That’s why Bard no longer exists: Google rebranded it as Gemini, whose paid tier, Gemini Advanced, runs Gemini Ultra 1.0 (I know, all this rebranding causes fatigue too).
Google should’ve given us a decent product like Gemini from the start, rather than something like Bard.
But this urge to not fall behind in the AI race isn’t exclusive to Google. Other tech companies that are doing better have also disappointed us. To this day, OpenAI hasn’t given us a reliable tool that can detect whether a text was generated with AI; in fact, they shut down their flawed AI text-detection tool. And Midjourney doesn’t offer a watermark (or any other mechanism) that helps people detect that its images were AI-generated.
The AI race is making companies launch low-quality AI products and features. This is why it’s better to stick with just a couple of them that fit our needs. Don’t feel anxious to try a new AI tool everyone is talking about if the one you’re using now does what you need.
How to manage AI fatigue?
Recently, Google phased out Bard and introduced a premium version known as Gemini Advanced. This might seem like yet another blip on the radar of endless announcements we've grown accustomed to. I'm convinced that without the current AI hype, this news would have been received differently. In a few weeks, it will probably be old news, much like the now-forgotten Google Bard.
What to do with all this information?
Instead of speeding up, I decided to hit pause a few months back, to really take everything in and apply the 80/20 principle.
Here’s what I did.
I turned a deaf ear to the influencers persuading me to jump on every new AI tool.
I let go of information that, while enticing on paper, wouldn't make a difference to where I am now in terms of knowledge, the tools I use, and the needs I have.
I chose to focus on refining my existing skills and exploring how AI could assist me with that.
When a new AI tool or model emerges, I ask myself, "Is it better than my current tool?" and "Does it fulfill a need I have?" If the answer to both questions is a definitive no, I don't pay attention to it.
Ever since I focused on what's truly important to me, I've managed to lift the self-imposed weight off my shoulders. This also helps me give the appropriate weight to important news that for others might just be one of the many posts on their feed.
Here are my final thoughts.
AI fatigue is a real thing, driven by both what's happening around us and our own responses to it. It's hard to control the information coming our way, but we have to choose what truly matters to us, always mindful of the context we're in.
We should resist the urge to be immediately impressed by new technologies, avoiding the temptation to invest too much time in AI tools that we might ditch tomorrow. If a tool meets my needs perfectly today, chances are it will continue to do so tomorrow.