OpenAI and FDA start talks on AI use

OpenAI and the FDA are exploring how generative AI could drastically cut the time it takes to approve new drugs.


Dear Readers,

We are closing out the week with great hope for the medical sector. AI is being adopted faster than any technology before it, so it is no surprise that the FDA is now working with OpenAI to make drug approval even faster.

Plus: the latest AI news from around the world. Enjoy!

All the best,

OpenAI and FDA start talks on AI use

The TLDR
The FDA is partnering with OpenAI to explore “cderGPT,” an AI system designed to streamline the drug approval process by automating repetitive tasks. Early tests show scientific reviews can be completed in minutes instead of days. If successful, this could accelerate access to life-saving therapies and modernize regulatory workflows.

Why does it take over a decade for a new drug to reach the market? The US Food and Drug Administration (FDA) is asking itself that question and is looking for answers in artificial intelligence: in talks with OpenAI, it is discussing “cderGPT”, a project that aims to speed up the approval process with the help of AI.

“cderGPT”, named after the Center for Drug Evaluation and Research, could automate repetitive tasks such as checking applications for completeness and thus save valuable time. An initial pilot was very promising: scientific reviews that previously took three days were completed in minutes.
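To make the idea concrete, here is a minimal sketch of what such an LLM-based completeness check could look like. Everything in it, the checklist, the prompt, and the model choice, is hypothetical and illustrative; it is not the FDA's actual cderGPT system.

```python
# Hypothetical sketch: pre-screening a drug application for completeness
# with an LLM. The checklist and prompt are illustrative stand-ins,
# not the FDA's real review criteria.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Simplified stand-in for a real submission checklist
REQUIRED_SECTIONS = [
    "clinical study reports",
    "nonclinical toxicology data",
    "chemistry, manufacturing, and controls",
    "proposed labeling",
]

def check_completeness(application_text: str) -> str:
    """Ask the model which required sections appear to be missing."""
    prompt = (
        "You are screening a drug application for completeness.\n"
        f"Required sections: {', '.join(REQUIRED_SECTIONS)}.\n"
        "List any required sections missing from the text below, "
        "or reply 'complete' if all are present.\n\n"
        f"{application_text}"
    )
    response = client.chat.completions.create(
        model="gpt-4.1",  # placeholder; the pilot's actual model is not public
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(check_completeness("Clinical study reports: ... Proposed labeling: ..."))
```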

This is a significant step for the AI community: using AI in drug evaluation could not only make processes more efficient but also speed up access to vital therapies. Challenges remain, of course, above all the reliability of AI models and the quality of their training data. Even so, the FDA plans to equip all of its departments with a secure generative AI platform by the end of June 2025.

Why it's important: Integrating AI into drug approval promises faster decisions and more efficient processes. That could accelerate access to innovative therapies and improve healthcare worldwide.

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.

In The News

Qwen3 Unveiled: Next-Gen Multilingual AI with Thinking Budget

Qwen3, the latest generation of the Qwen model family, introduces dynamic switching between a reasoning "thinking" mode and a faster non-thinking mode, along with a "thinking budget" to balance reasoning depth against latency. Supporting 119 languages, the family spans both dense and mixture-of-experts (MoE) models and is fully open-source under Apache 2.0.
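For readers who want to try the mode switching themselves, below is a minimal sketch using the Hugging Face transformers library, following the usage pattern published on the Qwen3 model cards: the chat template accepts an enable_thinking flag (and /think and /no_think tags inside prompts) to toggle reasoning. Enforcing a per-request thinking budget is handled by the serving stack and is not shown here.

```python
# Minimal sketch of Qwen3's thinking-mode switch via Hugging Face
# transformers, following the pattern on the Qwen3 model cards.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Qwen/Qwen3-8B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "How many primes are below 30?"}]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
    enable_thinking=True,  # False switches to the fast, non-reasoning mode
)
inputs = tokenizer([text], return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=1024)

# Decode only the newly generated tokens
print(tokenizer.decode(
    output[0][inputs.input_ids.shape[-1]:], skip_special_tokens=True
))
```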

Microsoft Lays Off 7,000 Employees Amid Tech Job Slump

Microsoft has cut about 3% of its workforce — roughly 7,000 employees — saving an estimated $1.4 billion annually. The move highlights growing concerns in the tech job market, as even top computer science graduates struggle to find employment.

AI Surpasses Doctors in HealthBench Accuracy

With GPT-4.1 and o3, AI models now outperform both standalone physicians and physician-AI teams on OpenAI's HealthBench benchmark. Error rates are also declining steadily, signaling rapid advances in medical AI reliability.

Graph of the Day

Currently, the best model in terms of price-performance ratio is Grok 3 Mini (high).

USA: Republicans want to exclude states from AI regulation for ten years

A new bill in the US House of Representatives proposes to prevent states from enacting their own AI laws for a decade. This measure, embedded in a comprehensive legislative package, would centralize regulatory authority and protect Big Tech from local control. Critics warn of a step backwards for consumer protection, transparency and democratic control.

Singapore presents global consensus for AI safety

At the ICLR conference, Singapore presented an international consensus on AI safety research to promote cooperation across geopolitical divides. The aim is to establish common standards for the development of safe AI systems. This initiative positions Singapore as a neutral mediator between the rival AI powers of the US and China.

UN officially discusses autonomous weapon systems for the first time

On May 12, 2025, UN member states met to discuss the regulation of AI-controlled weapons systems. Despite growing concern about the use of such systems in conflicts such as those in Ukraine and Gaza, there are still no binding international standards. Human rights groups warn of an unchecked arms race and are calling for urgent action.

