Join Sam Altman and Lex Fridman as they discuss OpenAI's roadmap, GPT-5, Sora, AI safety, and the evolving path toward Artificial General Intelligence (AGI).

Speed defeats the censor. When you only have sixty seconds, you don't have time to listen to the voice in your head that tells you an idea is stupid; you just have to move from passive consumption to active creation.
https://youtu.be/LercGDxFaWg?si=rlWr5Gfzkoh-pKgK


In this episode, Sam Altman discusses the ongoing development and expectations for GPT-5. While specific release dates are often kept under wraps, the conversation explores the technical leaps OpenAI aims to achieve with the next generation of large language models. Altman emphasizes the importance of iterative deployment and how each version, including GPT-5, brings the world closer to more capable and reliable artificial intelligence systems.
Sam Altman explains OpenAI's commitment to building safe Artificial General Intelligence (AGI) that benefits all of humanity. The discussion covers the internal culture of safety, the role of researchers like Ilya Sutskever, and the rigorous testing required before deploying powerful models. Altman highlights that AI safety is not just a technical hurdle but a continuous process of alignment and societal integration to prevent potential risks associated with superintelligence.
The conversation touches upon Sora, OpenAI's groundbreaking text-to-video model, as a major milestone in generative AI. Altman views Sora as a step toward models that understand the physical world and temporal consistency. By expanding from text to video, OpenAI is diversifying its portfolio of tools, showcasing how multi-modal capabilities are essential for the broader goal of creating versatile and intelligent systems that can simulate reality.
During the interview with Lex Fridman, Sam Altman addresses the dynamics within OpenAI, specifically mentioning Ilya Sutskever's influence and brilliance as a researcher. Despite past organizational shifts, Altman expresses deep respect for Ilya's contributions to deep learning and AI safety. The dialogue suggests that while leadership structures may evolve, the core mission of OpenAI remains focused on collaborative breakthroughs in the field of artificial intelligence.
Created by Columbia University alumni in San Francisco
"Instead of endless scrolling, I just hit play on BeFreed. It saves me so much time."
"I never knew where to start with nonfiction—BeFreed’s book lists turned into podcasts gave me a clear path."
"Perfect balance between learning and entertainment. Finished ‘Thinking, Fast and Slow’ on my commute this week."
"Crazy how much I learned while walking the dog. BeFreed = small habits → big gains."
"Reading used to feel like a chore. Now it’s just part of my lifestyle."
"Feels effortless compared to reading. I’ve finished 6 books this month already."
"BeFreed turned my guilty doomscrolling into something that feels productive and inspiring."
"BeFreed turned my commute into learning time. 20-min podcasts are perfect for finishing books I never had time for."
"BeFreed replaced my podcast queue. Imagine Spotify for books — that’s it. 🙌"
"It's great to learn what's in a book without having to read it."
"The themed book list podcasts help me connect ideas across authors—like a guided audio journey."
"Makes me feel smarter every time before going to work"
