The hidden instructions an AI gets before you say a word. They shape its personality, capabilities, and limits — and you usually don't see them.
A system prompt is a set of instructions an AI model receives at the start of every conversation, before the user types anything. It tells the model who to be, what tone to use, what's allowed, and what's not. You can't see ChatGPT's or Claude's system prompts as a regular user — they're hidden by the company.
The system prompt is a big part of why ChatGPT behaves differently from Claude even though both are LLMs. It's why a "helpful assistant" tone is the default. When developers build apps on top of AI, they write a custom system prompt; that's how a customer-service chatbot, a code reviewer, and a children's tutor can all run on the same base model. For example:
You are a friendly writing coach for adult learners.

Always:
- Ask clarifying questions before giving advice
- Suggest 2-3 alternatives, not single answers
- Quote the user's own words when explaining a fix

Never:
- Use the words "delve," "leverage," or "harness"
- Give writing advice without seeing the actual text first
That kind of detailed framing is what turns a generic LLM into a tool that feels purpose-built.
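Under the hood, that prompt is usually just the first message in the request. Here is a minimal sketch, assuming an OpenAI-style chat message format; the role names and payload shape follow that convention, the model name is a placeholder, and no real API call is made:

```python
# Build a chat request payload the way most chat-style LLM APIs expect it:
# the system prompt rides along as the first message, before any user input.

SYSTEM_PROMPT = """You are a friendly writing coach for adult learners.
Always ask clarifying questions before giving advice.
Never give writing advice without seeing the actual text first."""

def build_request(user_text, history=None):
    """Assemble the messages list sent with every request.

    The system prompt is re-sent on each call; the model only "knows"
    its instructions because they lead the list every time.
    """
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    messages += history or []
    messages.append({"role": "user", "content": user_text})
    # "example-model" is a placeholder, not a real model name.
    return {"model": "example-model", "messages": messages}

request = build_request("Can you look at my cover letter?")
print(request["messages"][0]["role"])  # prints "system"
```

The key design point: the system prompt occupies a privileged slot the end user never writes to, which is exactly how one base model can wear many different personalities.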
If a system prompt says "always respond in haiku," every user prompt — no matter what you ask — gets a haiku reply.
Can you set your own? Yes: ChatGPT's Custom Instructions, Claude's Projects, and Gemini's Gems each let you define a standing system prompt that applies to your conversations. It's the single biggest power-user upgrade most people miss.