Hi - it’s time for another edition of the Human in the Loop newsletter. OpenAI is launching a jobs platform and AI certification program, taking direct aim at Microsoft-owned LinkedIn in the professional talent market.
For operators, this creates a centralized hub for finding credible AI talent without the usual noise. The architects of the models are now building the systems to define the new AI-driven workforce.
Topics of the day:
OpenAI’s new jobs and certification platform
Why AI models hallucinate (and how to fix it)
China escalates the global open-weight AI race
The future of licensed data for AI training

OpenAI Challenges LinkedIn’s Turf
What’s happening: OpenAI is launching a jobs platform and AI certification program to connect skilled workers with employers, directly challenging Microsoft-owned LinkedIn in the professional talent market.
In practice:
Hiring managers can now source pre-vetted talent with verified AI skills, cutting through the noise of self-proclaimed “AI experts.”
You will be able to get certified directly through OpenAI's Academy to formally signal your AI fluency on your resume and professional profiles.
OpenAI also plans to provide small businesses with a dedicated track to find AI-savvy workers, potentially lowering the barrier to adopting more advanced automation.
Bottom line: This creates a centralized marketplace for AI talent, making it easier for companies to find credible candidates. It also shows the architects of AI are now building the infrastructure to define and certify the new AI-driven workforce.
Teaching AI to Say “I Don’t Know”
What’s happening: OpenAI published a new paper explaining that models hallucinate because they're trained to guess confidently rather than admit they don't know the answer. The core issue is that standard training rewards lucky guesses, not honesty.
In practice:
This could drastically reduce the time you spend double-checking AI outputs for research, analysis, and drafting content.
More honest models could unlock safer, mission-critical automations in areas like customer support or data analysis where errors are costly.
For anyone building AI features, a model that says "I don't know" builds trust and could be a key differentiator for user adoption.
Bottom line: The biggest barrier to fully trusting AI in business has been its tendency to confidently make things up. This research offers a practical path to building models you can actually depend on for important work.
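To see why grading rewards guessing, here is a rough back-of-the-envelope sketch (my own simplified illustration, not OpenAI's actual training setup): under accuracy-only scoring, a low-confidence guess always beats saying "I don't know" in expectation, so models learn to always answer; add a penalty for confident wrong answers and abstaining becomes the rational choice below a confidence threshold.

```python
def expected_score(p_correct: float, wrong_penalty: float) -> float:
    """Expected score for answering: +1 if right, -wrong_penalty if wrong.

    Abstaining ("I don't know") always scores 0 in this toy model.
    """
    return p_correct * 1.0 + (1 - p_correct) * (-wrong_penalty)

# Accuracy-only grading (no penalty): even a 20%-confident guess has a
# positive expected score, so guessing always beats abstaining.
print(expected_score(0.2, 0.0))  # positive -> model learns to guess

# Penalize confident wrong answers: the same 20%-confident guess now has
# a negative expected score, so abstaining (score 0) is the better move.
print(expected_score(0.2, 1.0))  # negative -> model learns to abstain
```

The numbers and penalty values here are illustrative; the point is that the incentive to bluff comes from the scoring rule, not from the model itself.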
China Raises the Stakes in AI
What’s happening: Chinese AI labs Alibaba and Moonshot AI just dropped two massive trillion-parameter models in a single weekend, dramatically escalating the global race in the open-weight model space.
In practice:
This release gives your business access to powerful, free, and open-weight alternatives for building custom applications outside the major US platforms.
You can fine-tune these massive models on your proprietary data to create highly specialized assistants for tasks like complex financial analysis or industry-specific research.
The rise of competitive Chinese labs creates new commercial options and reduces vendor lock-in, forcing US providers to compete harder on price and performance.
Bottom line: Increased competition puts downward pressure on API costs and gives you more leverage when choosing an AI partner. The best foundation models for your business might soon come from outside Silicon Valley.
The $1.5B Price of Pirated Training Data
What’s happening: Anthropic will pay authors a $1.5 billion settlement over claims that it used pirated books to train its Claude AI models.
In practice:
This signals the end of the “scrape everything” approach to training data, forcing any company building custom AI to budget for data licensing.
Expect a boom in legally sourced training datasets, leading to more specialized and reliable AI tools for industries like law and finance.
If you use AI for content creation, you now have to consider the model’s data source, as using tools trained on pirated material could introduce legal risks to your business.
Bottom line: The Wild West days of AI training data are ending. This forces labs to use licensed data, which means the AI tools you use will become more transparent and legally sound.
This newsletter is where I (Kwadwo) share products, articles, and links that I find useful and interesting, mostly around AI. I focus on tools and solutions that bring real value to people in everyday jobs, not just tech insiders.
Please share any feedback you have, either in a reply or through the poll below 🙏🏽

