Former OpenAI researcher Andrej Karpathy is telling everyone to stop fighting AI in the classroom. He argues the cat-and-mouse game of AI detection is a waste of time and that we should be adapting how we measure performance instead.
This advice isn't really about school; it's a playbook for modern management. The core idea is to assume AI is already part of the workflow and build assessments around the quality of the final outcome, not the process.
Topics of the day:
Karpathy's call to assess results, not AI use
Ilya Sutskever breaks his silence on AI's future
NATO and Google's blueprint for private AI
Harvard's AI model for cracking the genetic code
The Shortlist: MIT maps the real labor exposure of AI with its new “Iceberg Index,” Speechify bets on voice-first browsing with a new assistant, and Microsoft releases Fara-7B, a small on-device agentic model for automating web tasks.

Karpathy: Stop trying to catch AI cheaters
What’s happening: Former OpenAI researcher Andrej Karpathy is urging educators to stop using AI detection tools, arguing they're fighting a losing battle and should instead adapt how they grade.
In practice:
Instead of policing AI use in training modules, assess new hires on applied skills in live, observable scenarios like sales calls or code reviews.
Shift performance reviews from evaluating how a report was written to the quality and impact of the final outcome.
Encourage your team to use AI as a first-draft generator, but focus collaborative time on the strategy and refinement that requires human judgment.
Bottom line: Karpathy’s advice isn’t just for the classroom; it's a template for modern management. The smart move is to assume AI is part of the workflow and redesign assessments around what really matters: results.
Ilya Sutskever breaks his silence
What’s happening: After a year away from the spotlight, OpenAI co-founder Ilya Sutskever has returned with a new vision for AI. He argues the era of simply adding more data and compute is ending, and that the next breakthroughs will come from new ideas that help AI learn more efficiently.
In practice:
Today’s AI aces complex benchmarks but often fails at simple tasks, which explains why your AI tools still need a human babysitter for quality control.
The next big thing won’t be a bigger GPT, but a “superintelligent learner” you can train on the job, like an AI intern that learns your specific workflows, not just the entire internet.
This shift to “continual learning” means future tools will optimize themselves in real-time, reducing the need to constantly retrain models and freeing you up for higher-level work.
Bottom line: The AI arms race is moving beyond pure scale, with the next wins coming from smarter, more efficient systems. This opens the door for tools that are more reliable and can be trained on your specific business problems, moving beyond one-size-fits-all models.
NATO taps Google Cloud for Defense AI
What’s happening: NATO is partnering with Google Cloud to run AI workloads on air-gapped servers inside its own facilities, keeping its most sensitive data fully disconnected from the public internet. It's a big move to bring powerful cloud tech into a totally disconnected environment.
In practice:
This gives a blueprint for handling highly sensitive data without sending it to a public cloud, a model industries like finance or healthcare can follow.
It unlocks modern AI tools for analyzing proprietary information that was previously off-limits, automating insights from private datasets.
This signals a growing demand for secure, on-premise AI, creating opportunities for businesses that need advanced analytics while meeting strict compliance.
Bottom line: The playbook for secure, private AI is no longer just for massive government agencies. Powerful analytics and automation are becoming accessible for any organization with strict data controls.
Harvard AI cracks genetic disease code
What’s happening: Harvard Medical School researchers introduced popEVE, an AI model that identifies disease-causing DNA mutations with stunning accuracy, solving about one-third of previously undiagnosed cases in a recent study.
In practice:
This shows how AI can sift through massive, noisy datasets to find the one critical signal that humans miss, linking more than 120 new genes to rare diseases.
It drastically cuts down on false positives compared to previous models, freeing up researchers to focus on the most promising leads instead of chasing dead ends.
Researchers can already access the model via an online portal, showing how complex AI breakthroughs are being democratized for wider use and faster validation.
Bottom line: This isn't just a win for medicine; it's a blueprint for using AI to solve intractable problems in any field. The real power is in augmenting human expertise to find answers hidden in plain sight.
The Shortlist
MIT published its “Iceberg Index,” a labor simulation showing that AI can handle tasks worth 11.7% of U.S. wages, with the biggest hidden exposure in admin and finance roles, not just tech.
Speechify added voice typing and a conversational AI assistant to its Chrome extension, betting that voice commands will become a primary way users interact with browsers.
Microsoft released Fara-7B, a small, open-source agentic model designed to run locally and automate web tasks, offering strong performance while keeping user data on-device.
This newsletter is where I (Kwadwo) share products, articles, and links that I find useful and interesting, mostly around AI. I focus on tools and solutions that bring real value to people in everyday jobs, not just tech insiders.
Please share any feedback, either in a reply or through the poll below 🙏🏽
