Anthropic addresses unexpected token consumption in Claude Code
Anthropic is prioritizing a fix for its AI coding assistant, Claude Code, after users reported that their token usage limits were being exhausted far faster than expected.
User frustrations surface on Reddit
The company acknowledged the issue in a Reddit post, saying it was investigating why customers were hitting their token caps prematurely. Tokens are the units of usage that subscribers purchase to access AI services; each plan allows a set amount of interaction, but how quickly tokens are consumed can be hard to predict.
Developers took to the thread to voice concerns. One user noted that their free account had lasted longer than their $100-per-month subscription. Another described how a coding session caught in a loop could deplete a daily budget within minutes. A third reported that a brief, one-sentence reply in a conversation unexpectedly maxed out their token allocation.
Recent changes may compound the problem
Last week, Anthropic introduced peak-hour throttling for Claude, causing tokens to be consumed more quickly during periods of high demand. The timing of this change has drawn additional scrutiny from users already grappling with the token issue.
Claude Code is widely used by software developers to streamline coding tasks, and disruptions to the service can significantly impact workflows. Subscription tiers range from $20 per month for individual users to $200 per month for heavier usage, with custom pricing available for businesses.
Anthropic's recent missteps
The token issue is the latest challenge for the company. Earlier this month, Anthropic accidentally released a portion of its internal Claude Code source code on GitHub due to what it called "human error." The file contained 500,000 lines of code, but the company assured users that no sensitive customer data or credentials were exposed.
"This was the result of human error, not a security breach," an Anthropic spokesperson said.
The incident follows a previous leak in February 2025, when an earlier version of the source code was exposed. Independent developers had already reverse-engineered parts of Claude Code prior to these leaks.
Legal battle adds to company's challenges
Anthropic is currently engaged in a legal dispute with the U.S. government over the use of its AI tools by the Department of Defense. The outcome of the case could have broader implications for how AI technologies are deployed in government and military contexts.
What's next
Anthropic has not provided a timeline for resolving the token issue but described the fix as its "top priority." Users are advised to monitor the company's updates for further developments.