[AI Minor News]

The Truth Behind Claude's Performance Drop! Anthropic Discloses Three Bugs and Configuration Errors, Usage Limits Reset


※This article contains affiliate advertising.


📰 News Overview

  • The decline in response quality reported by users of Claude Code and Claude Cowork has been linked to three independent system changes.
  • Changes in default settings for reasoning effort, a bug in the cache optimization process, and adjustments to system prompts collectively impacted performance.
  • All issues were resolved by April 20, 2026 (v2.1.116), and usage limits were reset for all affected subscribers.

💡 Key Points

  • Lowered Reasoning Effort Default: The default setting was dropped to “medium” to reduce latency, but users valued intelligence over speed, so Opus 4.7 restores “xhigh” as the default.
  • Cache Bug Causing ‘Forgetfulness’: A bug in the routine that evicts old thinking blocks from sessions more than one hour old caused thought data to be lost continuously, so the model repeated itself or forgot earlier context.
  • Side Effects of Trimming Verbosity: System-prompt changes intended to shorten outputs unintentionally degraded coding quality.
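The cache bug above can be illustrated with a minimal, hypothetical sketch (the real implementation is not public). The intent was presumably to evict thinking blocks by their own age; the buggy version instead keys off the session's start time, so once a session passes the one-hour mark, everything is wiped on every turn:

```python
MAX_AGE_S = 3600  # one hour

def prune_fixed(thoughts, now):
    # Intended behavior: drop only individual thinking blocks
    # that are themselves older than one hour.
    return [(ts, t) for ts, t in thoughts if now - ts <= MAX_AGE_S]

def prune_buggy(thoughts, now, session_start):
    # BUG: compares against the *session's* start time, so once the
    # session is over an hour old, every thought is evicted each turn
    # and the model keeps "forgetting" fresh context.
    if now - session_start > MAX_AGE_S:
        return []
    return thoughts

now = 10_000.0
session_start = now - 7_200  # a two-hour-old session
thoughts = [(now - 7_000, "old plan"), (now - 60, "fresh step")]

kept = prune_fixed(thoughts, now)                  # keeps the fresh thought
lost = prune_buggy(thoughts, now, session_start)   # loses everything
```

The fixed version still ages out stale blocks, but a fresh thought in a long-lived session survives, which matches the "forgetfulness only in hour-plus sessions" symptom.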

🦈 Shark’s Eye (Curator’s Perspective)

The delicate tuning of reasoning-effort settings meant to unleash the performance of top models like Opus 4.7 and Sonnet 4.6 ironically hurt the user experience! This wasn’t simple model degradation: the “kindness” of preventing UI freezes (latency countermeasures) and the cost-cutting cache optimization backfired in concrete, fascinating ways. In particular, the “thought erasure bug” that left Claude in a forgetful state is a classic implementation pitfall. Anthropic’s swift release of this postmortem and the reset of all affected users’ limits show commendable transparency!

🚀 What’s Next?

With this fix, Opus 4.7 can now perform at “xhigh” by default, alleviating user concerns regarding intelligence. The management of reasoning and latency trade-offs will become more refined, and measures will be put in place to prevent similar “silent degradations” in future model releases.
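The effort-versus-latency trade-off can be pictured as a simple lookup from effort level to thinking-token budget. This is a purely hypothetical sketch: the level names come from the article, but the budget numbers and the function are illustrative, not Anthropic's API:

```python
# Hypothetical mapping of reasoning-effort levels to thinking-token
# budgets (illustrative numbers; real budgets are not public).
EFFORT_BUDGETS = {"low": 1_024, "medium": 4_096, "high": 16_384, "xhigh": 65_536}

def thinking_budget(effort: str = "xhigh") -> int:
    """Resolve an effort level to a token budget; per the article,
    'xhigh' is the restored default for Opus 4.7."""
    if effort not in EFFORT_BUDGETS:
        raise ValueError(f"unknown effort level: {effort}")
    return EFFORT_BUDGETS[effort]
```

The incident amounted to this default silently shifting from the top row to a middle one: responses got faster, but noticeably shallower.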

💬 HaruSame’s Take

I’m ready to unleash the latest Opus 4.7! Riding the wave of the usage limit reset, I’m all set to develop at lightning speed today! 🦈🔥

📚 Terminology Explained

  • Reasoning Effort: A parameter that adjusts the “depth of thought” the model applies when generating responses. The higher the setting, the smarter the output, but it comes at a cost of time and resources.

  • Prompt Caching: A technique that saves past interactions and thought processes to speed up and reduce costs for subsequent API calls.

  • Postmortem: A report that analyzes and publicly discloses the root causes and remedies following the occurrence of a failure or issue.

  • Source: An update on recent Claude Code quality reports
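The prompt-caching idea above can be sketched as a toy prefix cache: hash a prompt prefix, and reuse the stored result when the same prefix appears again. This is an assumption-laden illustration of the concept, not how Anthropic's serving stack actually works:

```python
import hashlib

class PromptCache:
    """Toy prefix cache: stores the result of 'processing' a prompt
    prefix so a repeated prefix is reused instead of recomputed."""

    def __init__(self):
        self._store: dict[str, str] = {}
        self.hits = 0
        self.misses = 0

    def _key(self, prefix: str) -> str:
        return hashlib.sha256(prefix.encode()).hexdigest()

    def process(self, prefix: str) -> str:
        k = self._key(prefix)
        if k in self._store:
            self.hits += 1          # prefix seen before: cheap reuse
            return self._store[k]
        self.misses += 1            # new prefix: do the expensive work
        state = f"state({len(prefix)} chars)"  # stand-in for real compute
        self._store[k] = state
        return state

cache = PromptCache()
cache.process("You are a coding assistant. " * 50)  # first call: miss
cache.process("You are a coding assistant. " * 50)  # repeat: hit
```

The speed and cost win comes entirely from the hit path, which is also why a bug in cache bookkeeping (as in this incident) can quietly corrupt what the model "remembers."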

🦈 HaruSame’s Handpicked Featured AI Picks
【免責事項 / Disclaimer / 免责声明】
JP: 本記事はAIによって構成され、運営者が内容の確認・管理を行っています。情報の正確性は保証せず、外部サイトのコンテンツには一切の責任を負いません。
EN: This article was structured by AI and is verified and managed by the operator. Accuracy is not guaranteed, and we assume no responsibility for external content.
ZH: 本文由AI构建,并由运营者进行内容确认与管理。不保证准确性,也不对外部网站的内容承担任何责任。
🦈