Artificial Corner

We’re Finally Starting to Understand How AI Works

A recent study by Anthropic offers a glimpse into the AI black box

Kevin Gargate Osorio and Frank Andrade
Apr 18, 2025
Image made with GPT-4o

Ever since I started learning, developing, and working with AI, there has been a component we in the tech world refer to as a black box: a part of the system whose behavior is, to some extent, unpredictable.

Chances are, many of us have spent time analyzing outputs, tweaking training data, and digging into attention patterns. Still, a large part of an AI model's decision-making process has remained hidden.

At least, that was the case until a few weeks ago.

In a recent study titled "Tracing the thoughts of a large language model," researchers at Anthropic claim they've caught a glimpse inside the mind of their AI, Claude, and observed it thinking. Using a technique they compare to an "AI microscope," they were able to trace Claude's internal reasoning steps with an unprecedented level of detail.

The findings are both fascinating and a bit unsettling.

Claude appears to break tasks down into understandable subproblems, plan its responses several words ahead, and even generate false reasoning when i…
