[AI Minor News]

Claude Unleashes 1M Context Window to the Public! Opus and Sonnet 4.6 Hold Prices Steady


A massive 1 million token context window is now available in Claude Opus 4.6 and Sonnet 4.6, launching at standard rates without any premium for long inputs.

※ This article contains affiliate advertising.


📰 News Overview

  • Public Release of 1M Context Window: The 1 million token context window is officially available on the Claude Platform with Claude Opus 4.6 and Sonnet 4.6.
  • Standard Pricing Policy: No “long input premium” will be charged, ensuring that the standard token rate applies regardless of context length.
  • Media Limit Expansion: The upload limit for images and PDFs in a single request has been increased from 100 to 600, a sixfold expansion.
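To make the new media ceiling concrete, here is a minimal sketch of assembling the content blocks for a single multi-image request. The block shape follows the Anthropic Messages API's base64 image format, but `build_image_blocks` and the hard-coded limit check are illustrative helpers of my own, not part of any SDK, and nothing is actually sent:

```python
import base64

MAX_MEDIA_PER_REQUEST = 600  # raised from 100 in this release

def build_image_blocks(images: list[bytes]) -> list[dict]:
    """Turn raw image bytes into Messages-API-style content blocks."""
    if len(images) > MAX_MEDIA_PER_REQUEST:
        raise ValueError(f"limit is {MAX_MEDIA_PER_REQUEST} media items per request")
    return [
        {
            "type": "image",
            "source": {
                "type": "base64",
                "media_type": "image/png",
                "data": base64.b64encode(img).decode(),
            },
        }
        for img in images
    ]

# e.g. every extracted frame of a short video clip in one request
frames = [b"placeholder-png-bytes" for _ in range(600)]
blocks = build_image_blocks(frames)
print(len(blocks))  # 600
```

The resulting list would be passed as the `content` of a single user message, which is what makes whole-video or thousand-page workflows feasible in one call.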

💡 Key Points

  • No Beta Header Needed: Requests exceeding 200K tokens are now handled automatically, so the 1M window can be used without any code changes or special headers.
  • Integration with Claude Code: For Max, Team, and Enterprise users, the 1M context from Opus 4.6 will be automatically applied in Claude Code, reducing conversation compression.
  • High Recall Accuracy: Opus 4.6 scored 78.3% on the MRCR v2 benchmark with the 1M context, maintaining top-tier accuracy among frontier models.
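The points above can be sketched in a few lines. This is purely illustrative: `estimate_tokens` uses the common (unofficial) ~4-characters-per-token heuristic for English, and `context_tier` just encodes the thresholds described in this article; real token counts come from the API's usage metadata:

```python
# Rough illustration of the new tiering: requests over 200K tokens now
# work automatically, and both tiers bill at the same per-token rate.

def estimate_tokens(text: str) -> int:
    """Very rough token estimate for English text (~4 chars per token)."""
    return len(text) // 4

def context_tier(token_count: int) -> str:
    """Classify a request against the 200K and 1M thresholds."""
    if token_count > 1_000_000:
        return "too large"
    return "1M window" if token_count > 200_000 else "standard"

doc = "a" * 3_600_000  # ~900K tokens of placeholder text
print(context_tier(estimate_tokens(doc)))  # 1M window, no beta header, no surcharge
```

The key change is that crossing the 200K boundary no longer requires a beta header or a different rate; the same request code covers both tiers.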

🦈 Shark’s Eye (Curator’s Perspective)

The biggest shocker is that there’s no “price multiplier” even when using long contexts! Previously, longer inputs meant skyrocketing token costs, but Claude has scrapped that rule. Can you believe a 900K token request is priced the same as a 9K token one? That’s just generous!

Moreover, accommodating 600 images will revolutionize the analysis of every frame in videos or thousands of pages of documents. The 78.3% MRCR v2 score for Opus 4.6 proves that it not only “takes in” information but also “retains it accurately.” Say goodbye to the desperate measures of summarizing and welcome the era of true “hands-off” processing!

🚀 What’s Next?

With no more loss of information due to summarization, tasks like cross-referencing legal documents, debugging large codebases, and maintaining comprehensive histories for complex agents will achieve unprecedented precision. Especially as AI agents can now consistently execute based on “the initial instruction from a few hours ago,” we’re headed toward a future of more autonomous workflows.

💬 Haru Shark’s Take

Let the summaries swim with the fishes while I glide through the ocean of 1 million tokens! The wider the sea of information, the more fun it is! 🦈🔥

📚 Terminology

  • Context Window: The range of information an AI can hold and understand at one time. 1 million tokens is roughly equivalent to several standard novels.

  • Token: The smallest unit of text an AI processes. It roughly corresponds to a word in English, but is closer to a character or two in Japanese.

  • MRCR v2: A benchmark test measuring the ability to accurately extract specific information from extensive contexts.

  • Source: 1M context is now generally available for Opus 4.6 and Sonnet 4.6

【Disclaimer】
This article was structured by AI and is verified and managed by the operator. Accuracy is not guaranteed, and we assume no responsibility for the content of external sites.
🦈