Why Try AI

Chain-of-Thought Prompting: It’s off the Chain

A simple "trick" that forces LLMs into thinking through their responses.

Daniel Nest
Feb 15, 2024

Happy Thursday, techno tinkerers,

The people (you) have spoken:

[Poll: "What topic shall I cover next?" Chain-of-thought prompting won with 13 of 15 votes.]
All 15 of you!

So today we’re looking at chain-of-thought prompting.

Take a deep breath, and let’s read this article step by step.


What is chain-of-thought prompting?

In a nutshell, chain-of-thought (CoT) prompting is any method that nudges a large language model into reasoning through a problem step by step before giving its final answer.

It was first introduced in a 2022 paper called Chain-of-Thought Prompting Elicits Reasoning in Large Language Models.

Here’s the first example from it:
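That example pits standard few-shot prompting against a chain-of-thought version whose worked example spells out its reasoning before answering. Here's a minimal Python sketch of that setup. The OpenAI client usage and the gpt-4o-mini model name are my own assumptions for illustration, and the prompts paraphrase the paper's arithmetic word problems rather than reproducing its figure verbatim.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Standard few-shot prompt: the worked example states only the final answer.
standard_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

# Chain-of-thought few-shot prompt: the worked example reasons step by step
# before giving the answer, nudging the model to do the same on the new question.
cot_prompt = """Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A:"""

for label, prompt in [("standard", standard_prompt), ("chain-of-thought", cot_prompt)]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model name; any chat-capable model works
        messages=[{"role": "user", "content": prompt}],
    )
    print(f"--- {label} prompt ---")
    print(response.choices[0].message.content)
```

The only difference between the two prompts is the reasoning written into the worked example. With the standard exemplar, models are more likely to blurt out a wrong number; with the chain-of-thought exemplar, they tend to walk through 23 - 20 + 6 = 9 before answering.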
