At its second annual Code with Claude developer conference in San Francisco, Anthropic announced significant updates to its Claude Managed Agents platform, including a new feature called "dreaming." This capability enables AI agents to learn from their previous sessions, a step toward the self-correcting, self-improving AI systems that enterprises require for production workloads.
New Features for Enhanced AI Performance
In addition to dreaming, Anthropic transitioned two previously experimental features—outcomes and multi-agent orchestration—into public beta. These features aim to tackle critical challenges in managing AI agents at scale, such as maintaining accuracy, facilitating learning, and preventing bottlenecks in complex workflows.
Early Adoption Yields Promising Results
Early adopters of these features are reporting substantial improvements. For instance, the legal AI firm Harvey experienced a sixfold increase in task completion rates after implementing dreaming. Similarly, Wisedocs halved its document review time using outcomes, while Netflix is now able to process logs from hundreds of builds simultaneously through multi-agent orchestration.
Impressive Growth Metrics
During the conference, CEO Dario Amodei revealed that Anthropic's growth has surpassed its own ambitious projections. In the first quarter of 2026, the company reported revenue and usage growing at an annualized rate of 80x, far exceeding the 10x growth it had planned for. API volume on the Claude platform has increased nearly 70x year over year, with developers spending an average of 20 hours per week using the tool.
Understanding the ‘Dreaming’ Mechanism
The dreaming feature is designed to operate at a higher level of abstraction compared to conventional memory systems. It systematically reviews an agent’s past sessions, identifies patterns, and curates memories to facilitate improvement over time. This process allows agents to recognize recurring mistakes and shared preferences among teams, thus enhancing their operational efficiency.
Importantly, dreaming does not alter the underlying model weights; instead, it generates plain-text notes and structured playbooks for future reference. This approach ensures transparency and auditability, addressing potential trust issues associated with AI self-learning.
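To make the described mechanism concrete, here is a minimal Python sketch of a dreaming-style review loop. Everything in it is hypothetical: Anthropic has not published the feature's actual data structures or APIs, so the `Session` record, the `dream` function, and the threshold parameter are illustrative assumptions. The sketch only mirrors the behavior described above: it reviews past sessions, tallies recurring mistakes and shared team preferences, and emits an auditable plain-text playbook rather than touching any model weights.

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical session record; the real schema is not public.
@dataclass
class Session:
    task: str
    errors: list[str] = field(default_factory=list)        # mistakes observed
    preferences: list[str] = field(default_factory=list)   # team preferences expressed

def dream(sessions: list[Session], min_count: int = 2) -> str:
    """Review past sessions and curate a plain-text playbook.

    No weights are modified; the output is a transparent,
    auditable text note that future sessions can consult.
    """
    error_counts = Counter(e for s in sessions for e in s.errors)
    pref_counts = Counter(p for s in sessions for p in s.preferences)

    lines = ["# Playbook (auto-curated from past sessions)"]
    # Only patterns seen at least `min_count` times are promoted to the playbook.
    for err, n in error_counts.most_common():
        if n >= min_count:
            lines.append(f"- Recurring mistake ({n}x): avoid '{err}'")
    for pref, n in pref_counts.most_common():
        if n >= min_count:
            lines.append(f"- Team preference ({n}x): {pref}")
    return "\n".join(lines)

sessions = [
    Session("draft contract", ["missed jurisdiction clause"], ["use bullet summaries"]),
    Session("review NDA", ["missed jurisdiction clause"], ["use bullet summaries"]),
    Session("summarize filing", [], ["cite page numbers"]),
]
playbook = dream(sessions)
print(playbook)
```

One-off observations (here, "cite page numbers" appears only once) are deliberately excluded, which matches the idea of curating patterns rather than memorizing every session verbatim.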
Anthropic’s broader strategy is to bridge the gap between AI capabilities and real-world applications, as articulated by Chief Product Officer Ami Vora. The company aims to enhance the task horizon, allowing AI agents to operate autonomously for extended periods while continuously improving their output quality.
As part of this initiative, Anthropic also announced infrastructure enhancements, including a partnership with SpaceX to utilize its Colossus data center, aimed at expanding compute availability to meet rising demand.
This article was produced by NeonPulse.today using human and AI-assisted editorial processes, based on publicly available information. Content may be edited for clarity and style.