In a recent announcement, Anthropic confirmed that users of its AI model Claude were right to complain about a decline in response quality. An internal investigation identified three changes, made in March and April, that degraded the user experience.
Adjustments and Their Consequences
First, on March 4, Anthropic lowered Claude Code's default reasoning effort level from high to medium, aiming to reduce the latency associated with longer reasoning times. The company later admitted, “This was the wrong tradeoff,” and reverted the change on April 7 after user feedback made clear that a higher default intelligence level was preferred.
A Cache Bug and Its Fix
The second issue arose on March 26, when a cache optimization change introduced a bug that inadvertently cleared cached session data on every interaction, making Claude forgetful and repetitive. The fix shipped on April 10 for both Sonnet 4.6 and Opus 4.6.
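Anthropic has not published the code involved, but the described failure mode is a familiar one. The following hypothetical Python sketch (all class and method names are illustrative, not Anthropic's) shows how a cache "optimization" can accidentally wipe session state on every request, producing exactly this kind of forgetful behavior:

```python
class SessionCache:
    """Illustrative per-session cache; not Anthropic's implementation."""

    def __init__(self):
        self._store = {}

    def handle_request(self, session_id, message, optimize=False):
        if optimize:
            # Buggy "optimization": meant to evict stale entries, but it
            # drops the entire session history on every single call.
            self._store.pop(session_id, None)
        history = self._store.setdefault(session_id, [])
        history.append(message)
        return list(history)


cache = SessionCache()
cache.handle_request("s1", "hello")
# Without the buggy path, the session accumulates context normally.
print(cache.handle_request("s1", "how are you?"))
```

With `optimize=True`, each call would see only the latest message, so the model would appear to forget everything said before. The point of the sketch is only that a change aimed at cache efficiency can silently change cache lifetime.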
Revisions to System Prompts
On April 16, Anthropic revised its system prompt to reduce verbosity, including new length limits on responses. Although initial internal testing suggested the change was safe, subsequent evaluations showed a three percent drop in performance for both Opus 4.6 and 4.7. The change was reverted on April 20.
Future Commitments
In light of these challenges, Anthropic has committed to conducting more thorough internal tests for future public builds of Claude Code. The company also plans to enhance its Code Review tool and improve the evaluation of system prompt changes. Additionally, a new social media account, @ClaudeDevs, will be created to facilitate clearer communication about product decisions and their rationale.
Anthropic concluded by stating, “This isn’t the experience users should expect from Claude Code,” and reset account usage levels to restore a more reliable user experience.
This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.








