Details of the Anthropic Mythos Leak Emerge
The Anthropic Mythos leak centers on a powerful AI model built for cybersecurity applications that was deemed too dangerous for public release and provided only to select partners. However, a Bloomberg report claims a private Discord group gained access to the model on its release day.
The group allegedly guessed the model's deployment URL and naming conventions using information from a recent data breach at a third-party vendor, Mercor. One member of the group reportedly held vendor credentials through contract work, which facilitated access. The group says it has not used Mythos for malicious purposes. Anthropic, for its part, said it found no evidence that its own systems were compromised, suggesting the breach occurred through a third-party platform.
This incident raises serious questions about the security protocols surrounding powerful, restricted AI models. The fact that the first alleged breach came from a Discord group, not a state actor, highlights the challenge of containing these systems as partner access grows.
SpaceX Makes a $60B Bet on AI Coder Cursor
In a move to accelerate its AI ambitions, SpaceX has announced a major partnership with the AI coding startup Cursor. The deal gives Elon Musk's company the option to acquire Cursor for $60 billion later this year. If the acquisition does not proceed, SpaceX will still pay $10 billion for the partnership.
This collaboration gives Cursor access to SpaceX's immense computing power, which CEO Michael Truell said was necessary to overcome the compute ceilings the company had hit with previous model releases. For SpaceX and its sister company xAI, the deal is a strategic shortcut to a leading AI coding tool, an area where xAI's in-house model Grok has struggled to compete with offerings from OpenAI and Anthropic.
Industry Debates and Controversies
Beyond the major headlines, the AI community is also active with debates about workplace trends and company practices.
The Rise of 'Tokenmaxxing'
A new productivity trend dubbed 'tokenmaxxing' has taken hold in some tech circles: burning through large volumes of AI tokens to signal high productivity. The trend was ignited by comments from Nvidia CEO Jensen Huang, who said he expected his engineers to use tokens worth a significant portion of their salary.
While some companies are seeing AI token spending surge, executives like LinkedIn co-founder Reid Hoffman are urging a more nuanced approach. They argue that tracking _how_ AI is used to achieve business outcomes is more important than simply rewarding high consumption, which can be easily gamed.
Anthropic's Claude Code Pricing Backlash
Anthropic faced a separate controversy this week when users noticed that Claude Code had been quietly removed from the $20/month Pro plan for new subscribers, pushing them toward the more expensive Max tier. The move sparked significant backlash on social media.
The company responded that the change was a 'small test' affecting only 2% of new signups and quickly reverted the public-facing pricing page. Critics, however, noted that the change had propagated across multiple company documents, suggesting it was more than a minor experiment. The incident has raised questions about the company's transparency, especially for a brand built on 'safety and integrity.'
Other Notable Industry Developments
In other news, Sony AI has developed a robot named Ace that can defeat elite human ping-pong players, a first for AI in a physical, real-time adversarial sport. The robot was trained entirely in simulation.

Meanwhile, a report from CNBC revealed that Meta is tracking the keystrokes and screen content of its US employees on various platforms to train its own AI agents, a move that has drawn internal criticism for being 'dystopian.'