New Interactive Explainers: ChatGPT vs. Claude
How do the two recently launched interactive visual features compare?
Last week, Anthropic and OpenAI both released interactive visual explainers for their chatbots.
These explainers supplement your text chat with diagrams, charts, and other visuals you can manipulate to better understand concepts.
This brought me right back to August 2025, when three companies near-simultaneously released their versions of “study” modes:
Just as I did then, I wanted to see how the ChatGPT version compared to Claude's.
So I went ahead and tested them, because who’s gonna stop me?!
How do the interactive explainers work?
Although the idea is similar, the two companies approach it differently.
ChatGPT
OpenAI chose to curate a shortlist of 70+ STEM concepts in advance, so when you ask about them, ChatGPT should automatically pull up the relevant visual explainer.
These explainers are pre-built and will always look and work the same way.
Claude
Claude, instead, designs a new interactive explainer from scratch every time:
Because of this, Claude’s explainers aren’t limited to math and science topics and can also be requested on demand with relevant commands:
Claude will decide when to build a visual for something, or you can ask it to do so directly (with a query like “draw this as a diagram” or “visualize how this might change over time”).
So what does this look like in practice?
Let’s find out!
The five tests
Note: I wanted to test “compound interest” and “exponential decay” in addition to the Pythagorean theorem, but ChatGPT couldn’t actually trigger any interactive explainers for me. Does it work for you?
As such, I’m comparing the three examples OpenAI showcased on the page.
Even though I have a paid Anthropic plan, I am testing with the free versions of each tool to reflect the average user’s experience and keep the comparison fair.
Since ChatGPT only has its preset library to pull from, this can’t be a true apples-to-apples comparison.
Instead, I’ll kick off with three explainers from ChatGPT’s list and supplement with two freeform ones that Claude might be able to build on the fly.
Test #1: Pythagorean theorem
Let’s take a trip down memory lane back to our school years and basic geometry.
Prompt: “Explain the Pythagorean theorem.”
ChatGPT
ChatGPT’s version is pretty barebones, but it does the trick:
I can adjust the lengths of the a and b sides (legs) to see how this affects the c side (hypotenuse). The formula below the sliders explains the relationship.
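Under the hood, the slider relationship is just the classic formula. Here's a quick sketch of what the explainer is effectively computing each time you drag a slider (the function and variable names are my own, not ChatGPT's):

```python
import math

def hypotenuse(a: float, b: float) -> float:
    """Pythagorean theorem: a^2 + b^2 = c^2, solved for c."""
    return math.sqrt(a**2 + b**2)

# Dragging the "a" and "b" sliders re-runs this calculation:
print(hypotenuse(3, 4))  # classic 3-4-5 triangle -> 5.0
```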
Claude
Claude didn’t trigger any visuals automatically and provided a text explainer first:
But when prodded to “Show me interactively,” Claude built what’s arguably a better version of ChatGPT’s pre-made diagram:
I like that Claude color-coded the sides and mapped the calculations directly onto the corresponding squares. Much easier to connect the dots!
But I do wish Claude had included the square root calculation for the c side to really bring this home.
Claude also lets me save the resulting diagrams as Claude Artifacts, so I’ll be sharing them here:
My take
Both do largely the same thing, but I find Claude's version easier to parse at a glance, even without the final step of visually mapping the square root calculation back to the hypotenuse's length.
Test #2: Mirror equation
Yet another equation I’d long forgotten everything about. Fun!
Prompt: “Show me the mirror equation.”
ChatGPT
ChatGPT uses the same clean blue-line diagram style for this:
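For reference, the mirror equation both tools are visualizing relates focal length f, object distance d_o, and image distance d_i as 1/f = 1/d_o + 1/d_i. A quick way to sanity-check any slider combination (the function name and sign-convention comments are mine, not from either tool):

```python
def image_distance(f: float, d_o: float) -> float:
    """Mirror equation: 1/f = 1/d_o + 1/d_i, solved for d_i.

    Common sign convention: positive d_i means a real image forms
    in front of the mirror; negative means a virtual image.
    """
    if d_o == f:
        raise ValueError("Object at the focal point: image forms at infinity.")
    return 1 / (1 / f - 1 / d_o)

# Concave mirror with f = 10 cm, object at 30 cm:
print(image_distance(10, 30))  # -> 15.0 cm (a real image)
```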
Claude
Claude made a visual by default this time, but a static one rather than an interactive one:
Again, this was an easy “Show me interactively” fix:
While Claude’s visual is again prettier to look at and shows helpful color-coded effects, the lines of the diagram don’t always connect to the mirror in a way that’s easy to grasp:
My take
I’m still impressed by Claude’s ability to create on-the-fly visuals that are very close to what ChatGPT has pre-coded for it. Bonus points for letting users pick and visualize concave vs. convex mirrors.
The downside is that diagram elements sometimes connect in ways that aren’t fully clear, so the value you get from them depends on your existing understanding of the concepts.
Test #3: Ideal gas law
If you’d asked me to explain this one before working on the article, I’d have stared at you with a blank expression before slowly backing away. But here we are.
Prompt: “Demonstrate the ideal gas law for me.”
ChatGPT
I like that the animation makes it instantly clear how gas molecules speed up at higher temperatures and why pressure might increase as the volume decreases. It’s perhaps the most visually intuitive of ChatGPT’s examples.
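The animation is really just PV = nRT made tangible. A minimal sketch of the relationship it's demonstrating (variable names are my own):

```python
R = 8.314  # ideal gas constant, J/(mol*K)

def pressure(n_mol: float, temp_k: float, volume_m3: float) -> float:
    """Ideal gas law PV = nRT, rearranged for pressure (in pascals)."""
    return n_mol * R * temp_k / volume_m3

# Halving the volume at fixed n and T doubles the pressure,
# which is exactly what the animated container shows:
p_full = pressure(1.0, 300.0, 1.0)
p_half = pressure(1.0, 300.0, 0.5)
print(p_half / p_full)  # -> 2.0
```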
Claude
Claude made a graph with slider options instead of a visualized container:
But I wanted to see if Claude could mimic ChatGPT’s version, and it 100% could:
You get the same intuitive feel for molecule speed and pressure impacts, with more layman-friendly labels, too.
My take
The pattern is clear: Claude can usually match ChatGPT’s visuals, but it often needs a bit of prompting to get there, which requires the person to know what they’re after. ChatGPT’s visuals (when they eventually work) should pop up by default to supplement its text-based answers.
Test #4: Combustion engines
We’re now moving out of ChatGPT’s pre-programmed “comfort zone.”
Prompt: “Show me how an internal combustion engine works.”
ChatGPT
Oh man, this was a quadruple fail right out of the gate.
ChatGPT instantly defaulted to text, ignoring the “show” qualifier completely:
When I nudged it with the "show me interactively" prompt (which worked for Claude), ChatGPT created an image:
I then tried to be even more explicit by saying, “I want something I can actually manipulate and interact with.”
ChatGPT gave me a text buffet of options. Finally, I had to ask it for HTML outright:
The result was super basic and didn’t do much to explain the individual steps:
No helpful labels of any kind. The piston escapes outside the cylinder. Manipulating speed doesn’t affect the way the engine works, so the decision to include it is questionable.
Finally, ChatGPT couldn’t create anything shareable, so I had to paste its HTML into a third-party site to be able to share it here:
Claude
For once, Claude nailed the task on the very first try:
It spit out a clean, visually pleasing four-step diagram that I could click through to understand the concept clearly.
Take a look for yourself:
My take
Man, this wasn’t even close.
Not only did Claude intuitively know what I needed, but the result was significantly better with far less effort.
Test #5: Tectonic plates
Let’s give ChatGPT a chance to redeem itself!
Prompt: "Show me the major tectonic plates and their movements."
ChatGPT
Oh no, here we go again:
ChatGPT has apparently never heard of “show, don’t tell.”
Let’s be more explicit again:
Behold, the “Tectonic Plate Map”:
There you have it, kids: Tectonic plates are unidentified colorful rectangles that are permanently stuck to each other!
Any questions, class?
Claude
Again, Claude did better right out of the gate:
But I wanted this to be more interactive, so I nudged:
It worked!
The visual isn’t winning any awards for aesthetics or geography, but it does actively aid my conceptual understanding and provides useful info about each plate in a clean modal box.
The list of major plates is complete and accurate, according to Wikipedia.
My take
Claude is consistently more visually engaging and genuinely helpful at showcasing concepts on the fly.
ChatGPT struggles to visualize anything outside of its library of approved templates. Even with repeated nudging, the outcome isn’t nearly as polished as Claude’s versions.
General observations
Here are my concluding thoughts.
Note that they’re based exclusively on my limited tests with free versions of each tool.
I’d expect both paid model options (GPT 5.4 Thinking and Opus 4.6) to handle these tasks better.
However, since interactive explainers are marketed as being available to everyone, I think it’s only fair to mimic the experience of the average user.
ChatGPT (“The Curator”)
While it didn’t work for me, the idea behind human-approved visualizations popping up automatically for given topics is solid.
The good: For the pre-approved topics, ChatGPT should load the visuals almost instantly. They’ll be precise, predictable, and guaranteed to show accurate equations. Also, since they trigger automatically, the user doesn’t even have to know this option exists or learn how to invoke it.
The not-so-good: The visuals are rather basic and often feel dry and technical. When you move beyond the shortlist, ChatGPT truly struggles to create helpful, interactive elements on its own.
Who this is for: Grade school and high school students who have specific STEM concepts fresh in their minds and need to anchor them with visuals.
Claude (“The Designer”)
Despite not having a curated library of concepts, Claude is strictly better at coding polished interactive elements from scratch, even on a free account.
The good: Claude’s explainers almost always look pretty, have more helpful labels, and—at least in my tests—are easier to grasp at a glance, especially for concepts that don’t come from ChatGPT’s STEM list.
The not-so-good: Since Claude’s explainers are always designed on the fly, they’re less predictable. You’re essentially rolling the dice whenever you ask for a new visual. Without a robust review, Claude’s outputs are subject to the usual AI hallucinations, which might defeat the purpose of helping a layman understand brand-new concepts. Finally, Claude often needs prodding to create an interactive element, so people who aren’t aware of this feature might not even get to experience it at all.
Who this is for: People who want to grasp almost any complex concept in a flexible, visual, and interactive way.
But take these for a spin yourself and let me know what you think.
Pick a concept you always struggled with and see if interactive explainers can help!
Thanks for reading!
If you enjoyed this, here’s how you can help:
❤️Like this post if it resonates with you.
🔄Share it to help others discover this newsletter.
🗣️Comment below—I love hearing your opinions.
Why Try AI is a passion project, and I’m grateful to those who help keep it going. If you’d like to support me and unlock cool perks, consider a paid subscription: