Summary

Anthropic published a detailed postmortem attributing recent Claude Code quality complaints to three separate changes: a lower default reasoning effort, a session-idle bug that repeatedly cleared older thinking, and a verbosity prompt change that hurt coding quality. The company said all three issues were resolved by April 20 in v2.1.116 and that the API was unaffected.

What changed

Anthropic publicly traced the Claude Code quality regressions to three product changes and said it had reverted or fixed all three by April 20 in v2.1.116.

Why it matters

The signal here is not just the fixes. Anthropic is treating coding-agent quality drift as an operational issue worth explaining with dates, root causes, and reversions. That level of transparency is unusual for a frontier AI product, and it is useful context for teams deciding how much to trust hosted agent tooling.

Evidence excerpt

Anthropic said it traced reports to three separate changes and that all three issues had been resolved as of April 20 in v2.1.116.

Sources