Claude feels dumber lately — here’s the fix

If you use Claude through the chat interface (not the API or Claude Code), you might have noticed responses getting shorter, shallower, and more bullet-pointy over the last month. Turns out it’s not your imagination.

What happened

Anthropic appears to have lowered the default “reasoning effort” setting. Claude Code users can override it with /effort max, but the regular chat interface has no equivalent control: no announcement, no toggle, no opt-out, just noticeably worse responses.

The model isn’t broken. It’s being told to think less by default.

The workaround

Go to Settings → General → “What personal preferences should Claude consider in responses?” and add something like:

Always reason thoroughly and deeply. Treat every request as complex unless I explicitly say otherwise. Never optimize for brevity at the expense of quality. Think step-by-step, consider tradeoffs, and provide comprehensive analysis.

Claude can’t control its own effort level, but it does respond to strong signals in the system prompt. Custom instructions act as that signal on every message.
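For contrast, API callers already have the explicit control the chat UI lacks. A minimal sketch of what that looks like, assuming the Anthropic Messages API's extended-thinking parameter (the model name and token budgets here are illustrative, not a recommendation):

```python
# Sketch: an API request that explicitly budgets reasoning effort.
# The `thinking` parameter shape follows the Anthropic Messages API;
# the specific model and budget values are assumptions for illustration.
params = {
    "model": "claude-sonnet-4-20250514",   # illustrative model name
    "max_tokens": 4096,
    # Extended thinking: reserve a dedicated token budget for reasoning
    # before the model writes its visible answer.
    "thinking": {"type": "enabled", "budget_tokens": 2048},
    "messages": [
        {"role": "user", "content": "Compare these two designs in depth."}
    ],
}

# With the official `anthropic` SDK, this would be sent as:
#   client = anthropic.Anthropic()
#   resp = client.messages.create(**params)
```

Chat users can't set anything like `budget_tokens`, which is why the custom-instructions text above is the only lever left: it nudges the same behavior through the prompt instead of a parameter.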

Does it actually work?

Yes. The difference is immediately noticeable: Claude reads the full context, actually analyzes tradeoffs, and gives real depth instead of surface-level summaries. I’ve been running this for a few weeks and the responses have been consistently better.

The irony: Claude itself suggested this workaround when asked why its responses felt off.