Why Try AI

Zero-Shot, One-Shot, Few-Shot, More?

The "monkey see, monkey do" way to prompt LLMs and chatbots.

Daniel Nest
Feb 08, 2024 ∙ Paid

Happy Thursday, netizens!

Today, we’ll take our prompting journey a step further.

If you’ve followed my recent posts, you should feel pretty comfortable working with chatbots. You dive in using a Minimum Viable Prompt instead of overengineering things from the get-go.

You also know that you can get much better responses by getting chatbots to ask you questions before answering.

But what if you already know exactly what you’re after and want an LLM to follow a specific template?

That, friends, is where [insert-number-here]-shot prompting comes into play.
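To make the idea concrete before we dig in, here's a minimal sketch in Python of the difference between a zero-shot and a few-shot prompt. The sentiment-labeling task, the reviews, and the labels are my own placeholder examples, not taken from the post or the benchmarks below; the point is simply that a few-shot prompt prepends a handful of solved examples so the model can copy the template.

```python
# Zero-shot: the task description alone, no examples to imitate.
zero_shot = (
    "Classify the sentiment of this review as Positive or Negative.\n"
    "Review: The battery died after two days.\n"
    "Sentiment:"
)

# Few-shot: the same task, but with a couple of solved examples prepended
# so the model can mimic the exact answer format. (Examples are hypothetical.)
examples = [
    ("The camera is stunning and setup took two minutes.", "Positive"),
    ("Customer support never replied to my emails.", "Negative"),
]

few_shot = "Classify the sentiment of each review as Positive or Negative.\n\n"
for review, label in examples:
    few_shot += f"Review: {review}\nSentiment: {label}\n\n"
few_shot += "Review: The battery died after two days.\nSentiment:"

print(few_shot)
```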


What’s with all these shots?

LLM benchmark scores often come with information about how a given model was prompted.

Take the following table for the Google Gemini family:

[Table: Google Gemini family vs. other models as measured by different LLM benchmarks. Source: Google Gemini Report]

Some of the above refer to “CoT”—short for “Chain of Thought” prompting—which I might cover separately.

But in mo…

This post is for paid subscribers
