Zero-Shot, One-Shot, Few-Shot, More?
The "monkey see, monkey do" way to prompt LLMs and chatbots.
Happy Thursday, netizens!
Today, we’ll take our prompting journey a step further.
If you’ve followed my recent posts, you should feel pretty comfortable working with chatbots. You dive in using a Minimum Viable Prompt instead of overengineering things from the get-go.
You also know that you can get much better responses by getting chatbots to ask you questions before answering.
But what if you already know exactly what you’re after and want an LLM to follow a specific template?
That, friends, is where [insert-number-here]-shot prompting comes into play.
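To make the idea concrete: "shots" are just worked examples you paste into the prompt before your actual question, so the model copies their format. Here's a minimal sketch in Python; the sentiment-classification task, the `build_prompt` helper, and the example reviews are all hypothetical illustrations, not part of any particular chatbot's API.

```python
def build_prompt(examples, query):
    """Assemble an n-shot prompt: each example demonstrates the exact
    input -> output template we want the model to imitate."""
    parts = [f"Review: {text}\nSentiment: {label}" for text, label in examples]
    # The real query comes last, with the answer slot left blank
    # for the model to fill in.
    parts.append(f"Review: {query}\nSentiment:")
    return "\n\n".join(parts)

# Zero-shot: no examples; the model must infer the format on its own.
zero_shot = build_prompt([], "The battery died after an hour.")

# Few-shot (here, 2-shot): worked examples pin down the template.
few_shot = build_prompt(
    [
        ("Loved every minute of it.", "Positive"),
        ("Total waste of money.", "Negative"),
    ],
    "The battery died after an hour.",
)
print(few_shot)
```

The same "monkey see, monkey do" principle applies whether you're typing into a chatbot or calling a model programmatically: the more precisely your shots match the output you want, the less the model has to guess.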
What’s with all these shots?
Reported scores for LLM benchmarks often note how each model was prompted.
Take the following table for the Google Gemini family:

Some of the above refer to “CoT”—short for “Chain of Thought” prompting—which I might cover separately.
But in mo…

