
Reasoning & Thinking

Published March 26, 2026 · Last updated March 27, 2026 · 3 min read

What reasoning is

Before Obvious responds to a complex request, it thinks. Literally: the underlying model works through the problem first — considering approaches, evaluating tradeoffs, checking its own assumptions — before generating a reply. That deliberation is called reasoning, and Obvious shows you the process in real time.

This is different from the response itself. Reasoning is the working logic that precedes the answer. When you can see it, you're watching the model decide what to do, not just reading what it concluded.

What you see in the chat

When reasoning starts, a small thinking indicator appears in the chat, just above where the response will go: an animated icon, the label "Thinking," and a timer that counts up as the model works. If the model is generating reasoning text, working through the problem in words rather than processing silently, that text scrolls in beneath the indicator in real time.

When the model finishes thinking and begins writing its response, the indicator switches to its completed state: "Thought for [N]s." If there was reasoning content, a chevron appears alongside the indicator; click it to expand and read the full chain of thought. The reasoning stays collapsed by default so it doesn't interrupt the response.

When reasoning happens

Reasoning is mode-specific. The indicator appears when you're working in a mode configured for deeper thinking — Deep Work, Analyst, and Autonomous among them. These modes instruct the underlying model to reason before it responds, which takes more time but produces more reliable results on complex problems.

Auto and Fast don't engage extended reasoning. They're optimized for speed and general tasks where deeper thinking doesn't pay off. If you're not seeing a thinking indicator, that's expected behavior for those modes — not something missing.

Why it matters

Reasoning changes what the agent can reliably handle. A model that works through a problem before answering is better equipped for multi-step analysis, ambiguous instructions, and situations where the obvious answer is wrong. The visible chain of thought is a byproduct of a more careful process, not decoration.

For you, it means two practical things. When a response takes longer than usual, the reasoning display tells you the model is working — not spinning. And when something in the response surprises you, the chain of thought is usually the fastest way to understand why.

How to read the reasoning output

Reasoning reads like working notes, not a polished analysis. Expect it to be tentative in places, occasionally self-correcting, sometimes circling a question before committing to an answer. That's the model being careful, not confused.

Three things worth looking for when you open the expanded view:

  • How the model framed the problem. If there's a mismatch with your intent, it shows up early in the chain of thought — before the response commits to a direction.

  • Tradeoffs the model considered. Reasoning often surfaces what the model chose not to do. That context can be as useful as the decision itself.

  • Where the model expressed uncertainty. Internal hedging in the reasoning frequently points to parts of the response worth a closer look.

You won't need to read reasoning on most responses. But for high-stakes outputs, such as analysis that drives a decision or content that goes somewhere important, it's a useful signal. The reasoning doesn't just show you what the agent thought. It shows you how confident the agent was in that thinking.
