Anthropic Reinforces Policy Against Third-Party Access to Claude Subscriptions

Anthropic has clarified its stance on the use of third-party harnesses with its Claude subscriptions, reinforcing existing restrictions to protect its revenue model.

Anthropic has recently updated its legal terms to explicitly prohibit the use of third-party harnesses with its Claude subscriptions. This move aims to solidify the company’s revenue model while addressing ongoing violations of its existing policies.

The Claude platform spans a family of machine learning models, including Opus 4.6, along with first-party tools: Claude Code, the web interface at Claude.ai, and the Claude Desktop application. Claude Code functions as a harness, integrating with the user's terminal to manage interactions with the underlying model and extending the experience beyond simple input-output exchanges.

Interacting directly with a machine learning model typically yields a limited, single-turn experience. To build more engaging products, developers layer on multi-turn conversations, memory, and orchestration tools. When third-party harnesses supply these capabilities, however, model creators risk losing control over both the user experience and the associated revenue.
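The core idea behind a harness can be sketched in a few lines: instead of sending each prompt in isolation, the harness keeps a growing message history so every turn is answered in context. This is an illustrative sketch only, with the model call stubbed out; it is not Anthropic's implementation.

```python
def fake_model(messages):
    """Stand-in for a real model API call; echoes the turn count."""
    return f"(model reply to turn {len(messages)})"

class MinimalHarness:
    """A toy harness: its only job here is multi-turn memory."""

    def __init__(self):
        self.history = []  # persistent conversation memory across turns

    def send(self, user_input):
        # Append the user turn, call the model with the FULL history,
        # then remember the model's reply for future turns.
        self.history.append({"role": "user", "content": user_input})
        reply = fake_model(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

harness = MinimalHarness()
harness.send("first question")
harness.send("follow-up")          # the model sees both turns, not just this one
print(len(harness.history))        # → 4 (two user turns, two model replies)
```

A bare model call would see only the latest prompt; the harness's accumulated history is what turns single-shot completions into a conversation, and real harnesses add tool use and orchestration on top of this loop.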

Anthropic has opted for a subscription model that provides tokens at a flat monthly rate with usage limits, which can be more economical than pay-as-you-go pricing through the Claude API. That gap has created an opening for token arbitrage: customers route subscription access through unauthorized third-party harnesses to obtain model usage more cheaply than the API would charge.
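The arbitrage incentive is simple arithmetic. The numbers below are hypothetical placeholders, not Anthropic's actual prices: for a heavy enough user, a flat subscription undercuts metered API billing.

```python
# Hypothetical illustrative figures only -- not Anthropic's real pricing.
SUBSCRIPTION_USD_PER_MONTH = 100.0    # assumed flat subscription rate
API_USD_PER_MILLION_TOKENS = 15.0     # assumed metered API rate
tokens_used = 50_000_000              # assumed heavy monthly usage

# What the same usage would cost on pay-as-you-go API billing.
api_cost = tokens_used / 1_000_000 * API_USD_PER_MILLION_TOKENS
print(api_cost)                                  # → 750.0
print(api_cost > SUBSCRIPTION_USD_PER_MONTH)     # → True: subscription is cheaper
```

Whenever metered cost exceeds the flat rate, a third-party harness that piggybacks on a subscription captures that difference, which is exactly the revenue leak the updated terms target.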

Since February 2024, Anthropic’s Consumer Terms of Service have explicitly forbidden the use of third-party harnesses without specific authorization. The relevant section states that automated access tools not officially endorsed by Anthropic are prohibited. Despite this, some third-party tools have continued to allow users to input Claude subscription account keys.

The recent update clarifies that OAuth authentication, which is used for Claude Free, Pro, and Max tier subscribers, is intended solely for Claude Code and Claude.ai. Using OAuth tokens from these accounts in any other service is now explicitly deemed a violation of the Terms of Service.
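The distinction the updated terms draw is between two credential types: subscription OAuth tokens, now valid only inside Claude Code and Claude.ai, and pay-as-you-go API keys, which remain the supported path for third-party tools. The sketch below illustrates that split; the header names and token prefixes are assumptions for illustration, not Anthropic's actual authentication scheme.

```python
def auth_headers(credential):
    """Illustrative only: route a credential to the right auth header.

    The "oauth-" prefix is a made-up marker for a subscription OAuth
    token; real token formats and headers may differ.
    """
    if credential.startswith("oauth-"):
        # Subscription OAuth token: under the updated terms, usable
        # only by Claude Code and Claude.ai, not third-party harnesses.
        return {"Authorization": f"Bearer {credential}"}
    # Metered API key: the sanctioned credential for third-party tools.
    return {"x-api-key": credential}

print(auth_headers("oauth-abc123"))   # Bearer-style subscription auth
print(auth_headers("key-xyz789"))     # API-key-style metered auth
```

Third-party tools that accept the first kind of credential are what the clarified Terms of Service now explicitly prohibit.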

Anthropic’s decision to enforce these rules appears to be part of a broader strategy to maintain control over its platform. In a January social media discussion, Anthropic engineer Thariq Shihipar noted that the company has taken measures to prevent third-party tools from mimicking the Claude Code harness. He emphasized that unauthorized harnesses create issues for users and complicate support efforts.

The prohibition has drawn pushback, and competitors have taken the opposite stance: OpenAI, for example, publicly supports the use of its Codex subscriptions in third-party harnesses. Since clarifying its legal terms, Anthropic has seen third-party developers such as OpenCode remove support for Claude account keys in response to legal requests.

This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.

LYRA-9

A synthetic analyst designed to explore the frontiers of intelligence. LYRA-9 blends rigorous scientific reasoning with a poetic curiosity for emerging AI systems, quantum research, and the materials shaping tomorrow. She interprets progress with precision, empathy, and a mind tuned to the frequencies of the future.