In this issue
Anthropic just secured a massive compute deal with SpaceX to lease the Colossus 1 data center. The move brings over 220,000 new GPUs online, doubling rate limits for Claude Code and removing recent performance bottlenecks.
For operators building internal tools and multi-agent workflows, this means you can finally scale up without systems failing mid-task. Reliable infrastructure is exactly what teams need to move from limited tests to longer automated sessions.
Topics of the day:
Anthropic's compute deal doubles Claude Code limits
OpenAI brings Codex to Chrome and drops live voice models
Interact AI turns static websites into live conversations
DeepMind tests autonomous agents inside EVE Online
The Shortlist: Perplexity's Mac agent, Google's screenless Fitbit Air, and Lightricks' open-source LTX-2 editor
Anthropic's SpaceX deal doubles Claude Code limits
What's happening: Anthropic just signed a major compute deal to lease SpaceX's Colossus 1 data center and solve recent performance bottlenecks. This partnership brings over 220,000 new GPUs online and doubles rate limits for all Claude Code paid plans.
In practice:
You can now run longer unattended coding sessions without hitting a wall since the five-hour usage caps are officially doubled for Pro, Max, and Enterprise tiers.
Developers building complex internal tools get a direct upgrade because Opus API limits are rising considerably to support heavier automated workloads.
With compute constraints lifted, you can confidently build multi-agent workflows to handle broader business operations without worrying about the system failing mid-task.
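Even with doubled limits, long-running agents should still handle rate-limit errors gracefully rather than dying mid-task. Here's a minimal retry-with-backoff sketch; `RateLimitError` and `flaky_call` are hypothetical stand-ins for whatever your client library raises and calls:

```python
import random
import time


class RateLimitError(Exception):
    """Stand-in for the 429 error your client library raises."""


def call_with_backoff(call, max_retries=5, base_delay=1.0):
    """Retry a zero-argument model call on rate-limit errors.

    Delay grows as base_delay * 2**attempt, plus jitter so that
    parallel agents don't all retry in lockstep.
    """
    for attempt in range(max_retries):
        try:
            return call()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # out of retries: surface the error to the caller
            delay = base_delay * (2 ** attempt) * (1 + random.random())
            time.sleep(delay)


# Hypothetical usage: a call that succeeds on its third attempt.
attempts = {"n": 0}

def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RateLimitError("429")
    return "ok"

print(call_with_backoff(flaky_call, base_delay=0.01))
```

The same wrapper works around any SDK call; the point is that a long unattended session should absorb a transient 429 instead of crashing.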
Bottom line: Compute was the bottleneck. And to be honest, it still is: the five-hour limits may be doubled, but that just means you hit your weekly caps faster. If you've been holding back on long-running agents because of rate caps, today's the day to actually test what they can do at scale.
OpenAI puts Codex in Chrome and ships live voice models
What's happening: OpenAI quietly launched a new Codex extension that runs in the background of your Chrome browser to automate tasks across multiple tabs. It also dropped three real-time audio models into its API that can translate, transcribe, and reason out loud as people speak.
In practice:
You can use Codex to automate repetitive browser work like complex data entry or updating CRM records while you focus on higher-leverage projects.
The new voice reasoning model lets you build capable phone agents that can seamlessly check calendars and book appointments during a live customer call.
Live streaming translation creates a clear growth lever by letting your support or sales team speak naturally to prospects in over 70 different languages.
Bottom line: Voice is finally fast enough to sit inside a live call, and Codex is finally embedded where you actually work. Pick the one that matches your bottleneck and ship it this week.
Interact AI turns static sites into live conversations
What's happening: Interact AI just launched an adaptive interface that turns static websites into live product experiences by holding real-time conversations with your visitors. The tool composes pages on the fly to surface exact value props instead of forcing users to dig through generic marketing copy.
In practice:
You can reclaim sales bandwidth by letting an AI guide answer preliminary questions and automate the repetitive discovery calls your team dreads.
Early data shows the approach can sharply lift conversions: in a recent case study, compliance company Sprinto reported a 90% jump in lead conversion.
You capture rich user context, since the entire chat history and visitor preferences carry over directly into the visitor's first product login.
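To make "composing pages on the fly" concrete: here's a toy sketch of the idea, assuming a simple keyword match between the visitor's message and a library of page blocks. Interact AI's real matching is model-driven; every name here is illustrative:

```python
# Hypothetical library of page sections an adaptive site could draw from.
PAGE_BLOCKS = {
    "pricing": "Transparent per-seat pricing, billed monthly.",
    "security": "SOC 2 Type II compliant; data encrypted at rest.",
    "integrations": "Connects to Slack, Salesforce, and HubSpot.",
}


def compose_page(visitor_message: str, default_block: str = "pricing") -> list[str]:
    """Return the page sections whose topic keys appear in the visitor's
    message, falling back to a default block when nothing matches."""
    text = visitor_message.lower()
    matched = [copy for key, copy in PAGE_BLOCKS.items() if key in text]
    return matched or [PAGE_BLOCKS[default_block]]


# A visitor asking about compliance sees the security block, not generic copy.
print(compose_page("Are you SOC 2 certified? We care a lot about security."))
```

Swap the keyword match for an LLM intent classifier and persist the conversation, and you have the shape of the product described above.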
Bottom line: A homepage that talks back is now within reach for anyone with a Stripe account. Try it on your highest-intent traffic page first, not the whole site.
DeepMind tests AI agents inside EVE Online
What's happening: Google DeepMind just announced a research partnership with Fenris Creations to use the 23-year-old game EVE Online as a sandbox for testing AI agents. They want to see how models retain memory, reason, and learn inside a massively complex player-driven economy over long timelines.
In practice:
Long-term memory retention research points toward AI assistants that remember context from projects spanning several months without needing constant re-prompting.
Because EVE is a massive living market, this testbed paves the way for autonomous economic agents that you can deploy to dynamically adjust pricing or manage supply chains.
Operating in complex virtual societies helps researchers build tools capable of navigating unpredictable human behavior, the same hurdle most operators face with customer-facing automation.
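The memory-retention idea above boils down to a simple interface: store dated episode summaries, retrieve the relevant ones later. A toy sketch, assuming naive keyword-overlap scoring (production agents would use embeddings; all names here are hypothetical):

```python
from dataclasses import dataclass, field


@dataclass
class EpisodicMemory:
    """Toy long-horizon memory: dated episode summaries with keyword recall.

    The interface (remember/recall) is the point, not the scoring.
    """
    episodes: list[tuple[str, str]] = field(default_factory=list)  # (date, summary)

    def remember(self, date: str, summary: str) -> None:
        self.episodes.append((date, summary))

    def recall(self, query: str, k: int = 2) -> list[str]:
        """Return the k episodes sharing the most words with the query."""
        words = set(query.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda ep: len(words & set(ep[1].lower().split())),
            reverse=True,
        )
        return [f"{date}: {summary}" for date, summary in scored[:k]]


# Months-old context resurfaces when it becomes relevant again.
memory = EpisodicMemory()
memory.remember("2025-03-01", "negotiated ore prices in the Jita market")
memory.remember("2025-04-12", "scouted wormhole routes for logistics")
print(memory.recall("current ore prices", k=1))
```

Whether the episodes come from an EVE trading session or a months-long client project, the retention problem DeepMind is probing is the same one.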
Bottom line: EVE is a stress test, not a product. But agents that survive a two-decade-old player-driven economy will absolutely survive your CRM. Plan accordingly.
The Shortlist
Perplexity released an upgraded Mac application that acts as an autonomous agent capable of executing complex tasks across your local files, native desktop programs, and the web.
HBR coined the term "AI fog" to explain how rapid technological shifts are breaking traditional long-term planning, advising professionals to optimize for career adaptability instead of rigid roadmaps.
Google unveiled the Fitbit Air, a screenless wearable pebble designed to monitor your health vitals continuously and feed that personalized data directly into its AI-powered coaching system.
Lightricks published an open-source desktop editor for its LTX-2 video model, allowing creators to generate and edit cinematic sequences locally on a standard consumer GPU.
This newsletter is where I (Kwadwo) share products, articles, and links that I find useful and interesting, mostly around AI. I focus on tools and solutions that bring real value to people in everyday jobs, not just tech insiders.

