9 Comments

Daniel, do you know if CoT prompting is being included in many of the newer or upcoming LLMs? I would guess firms have woken up to this by now.

Author · Feb 15 (edited)

The way I understand CoT prompting, it'd be tricky to "build into" a model.

Few-shot CoT prompting, especially, is very task-specific (you need to feed the model similar worked examples to improve its output on a particular task), so it would only make sense in an LLM built for very niche purposes.
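
To make that concrete, here's a quick sketch in Python. The worked examples and the Q/A prompt format are just one common illustration, not any model's actual setup; the point is that the examples only help with this one kind of word problem:

```python
# Sketch: a few-shot CoT prompt. The worked examples are task-specific:
# they help with arithmetic word problems and nothing else, which is why
# this can't be baked into a general-purpose model.

FEW_SHOT_EXAMPLES = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. How many balls does he have now?
A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. 5 + 6 = 11. The answer is 11.

Q: The cafeteria had 23 apples. It used 20 for lunch and bought 6 more. How many apples does it have?
A: It starts with 23 apples. 23 - 20 = 3. 3 + 6 = 9. The answer is 9.
"""

def build_few_shot_cot_prompt(question: str) -> str:
    """Prepend worked examples so the model imitates the step-by-step format."""
    return f"{FEW_SHOT_EXAMPLES}\nQ: {question}\nA:"

print(build_few_shot_cot_prompt(
    "I buy 4 boxes of 12 pencils and give away 7. How many pencils are left?"
))
```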

Now, I don't know whether e.g. the system prompt for some models includes something like a zero-shot instruction along the lines of "When a user requests solutions to a complex math or reasoning problem, take a deep breath and work on it step-by-step." That's conceivable.
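
Purely hypothetically, in a chat-style API payload that could look something like this (an illustration, not any vendor's real system prompt):

```python
# Hypothetical only: a zero-shot CoT nudge placed in a system prompt.
# This is an illustration, not any model's actual (hidden) system prompt.

messages = [
    {
        "role": "system",
        "content": (
            "When a user requests solutions to a complex math or reasoning "
            "problem, take a deep breath and work on it step-by-step."
        ),
    },
    {
        "role": "user",
        "content": "A train leaves at 3:40 pm and the trip takes 95 minutes. "
                   "When does it arrive?",
    },
]

# The messages list could then be passed to any chat-completion-style API.
```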

But as I write in the article, it's more about users knowing when CoT is required and applying it for that specific purpose, rather than something you "incorporate" into an LLM as a blanket instruction.

At least that's my take on it.

So, as a hack, wouldn't preset instructions in your own GPT accomplish much the same thing? I wonder: if you put in something like, "You are a prompt engineer, seeking to answer each question thoroughly. Please double-check your work with every answer, making sure each step is correct before completing the next step", something really general to that effect, would it have a positive effect at all?

Author · Feb 15 (edited)

Sure, if it's a specialized GPT that is meant to assist with a specific subset of tasks. You could even just use the simple "When returning an answer, take a deep breath, and think of the problem step-by-step" from this article.

That GPT will then treat every request more carefully and break down its thinking. If that's the intent behind the GPT, perfect!

I was just differentiating between a custom GPT or one-off task requiring CoT and "baking CoT into a general LLM," which makes less sense.

Isn't that good for any task, though? Like, I guess what I'm asking is: why not just do this for every LLM up front? Just include those custom instructions.

Author · Feb 15 (edited)

I tried to address that in this section: https://www.whytryai.com/i/141521303/when-to-use-cot-prompting

For few-shot CoT prompting, it's outright impossible to "do this for every LLM," because what "this" is depends on the exact task and on the few-shot examples you provide to help the model reach a solution.

But even for zero-shot CoT prompting, it'd probably be overkill to have a model that constantly launches into a long-winded breakdown of its thinking when a regular user just asks "What's another word for 'blue'?"

Roger that, I get the cumbersome nature of doing that every time. Not sustainable!

I guess as speed improves, I could see something like an internal dialogue: people could still get the answer they want within a couple of seconds, just like now, but with that kind of sanity-checking going on behind the scenes to see whether something makes sense.

If the processing power and speed become less of an issue, I bet we see exactly this. Then, if you wanted to dispute something or fact check it, you could ask to see the work. Am I approaching something tangible here?
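
Sketching that idea in code: the model reasons in a hidden scratchpad, the user sees only the final answer, and the full reasoning is available on request. The complete() function below is a hypothetical stand-in for a real LLM call; it returns a canned response so the sketch runs as-is:

```python
# Toy sketch of "hidden reasoning, shown on demand." Nothing here reflects
# how any production model actually works; complete() is a stub.

SCRATCHPAD_PROMPT = (
    "Think through the problem step-by-step inside <scratchpad> tags, "
    "then give the final answer after 'ANSWER:'."
)

def complete(prompt: str) -> str:
    # Hypothetical stand-in for an LLM call; returns a canned response
    # so this example runs without an API key.
    return (
        "<scratchpad>17 x 24 = 17 x 20 + 17 x 4 = 340 + 68 = 408.</scratchpad>\n"
        "ANSWER: 408"
    )

def answer(question: str, show_work: bool = False) -> str:
    raw = complete(f"{SCRATCHPAD_PROMPT}\n\nQuestion: {question}")
    _, _, final = raw.partition("ANSWER:")
    # By default return only the final answer; reveal the scratchpad on request.
    return raw if show_work else final.strip()

print(answer("What is 17 x 24?"))                  # prints just "408"
print(answer("What is 17 x 24?", show_work=True))  # prints scratchpad + answer
```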
