A prompting technique in which the AI is told to reason step by step before answering. It dramatically improves accuracy on hard problems.
Chain-of-thought (CoT) prompting means asking the AI to show its work: to reason through a problem step by step before stating a final answer. Merely appending a phrase like "think step by step" to a hard prompt can double the correct-answer rate on reasoning tasks, and on some benchmarks the gain is far larger.
LLMs predict text one token at a time, and everything the model has already written becomes context for what it writes next. When you demand an answer immediately, the model has to compress all of its reasoning into the answer itself, and on hard problems it often gets it wrong. When you ask it to reason out loud first, it gains "thinking budget": each intermediate step it writes is extra context it can condition on when it finally commits to an answer. The chain of words is the chain of thought.
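To make the contrast concrete, here is a minimal sketch of both prompt styles, assuming the OpenAI Python SDK (v1+) with an API key in the environment; the model name is illustrative, and any chat-completion client would work the same way. The question is the apple problem from the examples below.

```python
# Minimal sketch: direct prompt vs. chain-of-thought prompt.
# Assumes the OpenAI Python SDK (v1+); the model name is illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = (
    "A store sold 24 apples on Monday and twice that on Tuesday. "
    "On Wednesday they sold 5 fewer than Tuesday's total. "
    "How many apples did they sell over the three days?"
)

def ask(prompt: str) -> str:
    """Send a single-turn prompt and return the model's reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative, not a recommendation
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

# Direct: all reasoning must be compressed into the answer itself.
direct = ask(QUESTION)

# CoT: the model writes intermediate steps it can then condition on.
cot = ask(QUESTION + " Think through this step by step "
                     "before giving the final answer.")
print(cot)
```

The only difference between the two calls is the trailing instruction; the extra reliability comes entirely from the intermediate text the model generates before the final number.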
Without CoT (often wrong on math/logic):
A store sold 24 apples on Monday and twice that on Tuesday. On Wednesday they sold 5 fewer than Tuesday's total. How many apples did they sell over the three days?
With CoT (much more reliable):
A store sold 24 apples on Monday and twice that on Tuesday. On Wednesday they sold 5 fewer than Tuesday's total. How many apples did they sell over the three days? Think through this step by step before giving the final answer.
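With the CoT instruction, a reliable response works through the problem before committing to a number. An illustrative output (not a real transcript) looks like:

```
Monday: 24 apples.
Tuesday: twice Monday, so 2 × 24 = 48.
Wednesday: 5 fewer than Tuesday, so 48 − 5 = 43.
Total: 24 + 48 + 43 = 115 apples.
```

Each line gives the next one something correct to build on, which is exactly the "thinking budget" described above.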
Instead of "think step by step," try: "First, list what you know. Second, list what you need to figure out. Third, work through it. Fourth, give a final answer." The explicit structure usually beats the vague phrase, because it forces the model to state its givens and its goal before it starts computing; a reusable version is sketched below.
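If you use that scaffold often, it is worth wrapping in a small helper. A minimal sketch in Python; the function name and exact wording are just one way to do it:

```python
# Sketch of a reusable structured-CoT wrapper. The scaffold mirrors the
# four-step version above; the function name is hypothetical.
def structured_cot(question: str) -> str:
    """Wrap a question in an explicit reasoning scaffold."""
    return (
        f"{question}\n\n"
        "First, list what you know. "
        "Second, list what you need to figure out. "
        "Third, work through it. "
        "Fourth, give a final answer."
    )

prompt = structured_cot(
    "A store sold 24 apples on Monday and twice that on Tuesday. "
    "On Wednesday they sold 5 fewer than Tuesday's total. "
    "How many apples did they sell over the three days?"
)
print(prompt)
```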