OpenAI CEO Sam Altman and President Greg Brockman made a rare joint appearance, arguing that most people are still misusing AI: they remain at the stage of reading articles to understand it, without truly using AI to turn ideas into reality. The two clarified OpenAI's core business logic, that 'computing power is not a cost but a profit center,' scaling without limit by earning a spread on compute. Strategically, the company has decisively halted Sora and gone all in on 'agents' and 'personal AGI.'
A rare podcast conversation revealed OpenAI's comprehensive strategic shift toward 'agents' and 'personal AGI,' and showed how computing power, run as a 'sure-profit' arbitrage business, has become the decisive bargaining chip in the future commercial landscape and human hierarchy amid this irreversible technological surge.

As the commercialization of generative AI enters deep water, the market's focus on large-model companies has shifted rapidly from purely technical benchmarks to whether business models can close the loop, whether high capital expenditures are sustainable, and how much room future applications leave for growth.
On April 22, OpenAI's two core figures, CEO Sam Altman and President Greg Brockman, made a rare joint appearance on a podcast. In this nearly hour-and-a-half conversation, the two executives not only laid out for the first time OpenAI's profit logic and product convergence roadmap, but also addressed head-on the key controversies around compute bottlenecks, the Musk lawsuit, and competitors' 'fear-mongering.'
The dialogue made clear that OpenAI is accelerating its transformation from a 'large-model research institution' into an infrastructure giant built on 'computing power arbitrage,' with a universal agent as its super entry point.
Most people still think this is a search engine.
At the beginning of the interview, host Ashlee Vance recounted a detail: he had taken his son to visit Oberlin College in Ohio, where, in an auditorium of more than 600 people, the questions this supposedly elite audience asked about AI were limited to 'how to use it in class' and 'how to prevent students from cheating.'
Vance said: 'My strongest feeling at the time was, wow, you have absolutely no idea about the massive change brewing and about to sweep everyone away.'
Altman believes this cognitive lag is traceable. He recalled OpenAI’s situation before ChatGPT launched: “We beat human champions in esports tournaments, created a robotic arm capable of solving Rubik’s cubes with one hand, published stunning papers, and even made it to the front page of The New York Times. At the time, we thought we were doing great. But the reality was, no one cared, and it didn’t make any waves in the real world.”
Until ChatGPT came along.
Altman stated frankly, "From a purely technical perspective, ChatGPT is absolutely not the most impressive breakthrough we have achieved, not by a long shot." But this time the world changed, because people could finally experience it directly and derive real value from it.
Brockman shared a more vivid story:
One of his friends, while listening to his sister describe an app she wanted built, typed her requirements into Codex as she spoke. A few hours later, the application was up and running. His sister was astonished: who made this? Brockman's friend replied, "You did it yourself."
This moment of realization is still missing for the vast majority of people today. They remain at the stage of reading articles to understand AI, without truly using AI to turn ideas into reality.
Brockman summarized the fundamental shift behind this: "We are redefining machines so that they adapt to humans." For decades, humans had to adapt to machines, learning programming languages and understanding underlying logic; now, you only need to articulate your intent.
The essence of business: "Computing power is not a cost center, but a profit center."
Regarding Wall Street's most pressing question – "When will the massive capital expenditure (CapEx) on computing power pay off?" – Brockman provided an extremely straightforward yet highly persuasive underlying logic.
"One thing to recognize is that, for us, computing power is not a cost center but a profit center," Brockman pointed out incisively when discussing the business model. "In many ways, our business is extremely simple. We purchase computing power, add a profit margin, and resell it. As long as we maintain a positive profit margin, it is scalable because the demand is absolutely infinite."
This closed-loop business model of 'earning the spread on computing power' gives OpenAI the confidence to expand its infrastructure infinitely. Altman recalled the situation after the release of ChatGPT:
"We had to buy up all the available computing power – we simply had to do it because the demand was so obvious."
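Brockman's claim that a positive margin makes the business scalable reduces to simple unit economics. As a purely illustrative sketch (all figures below are hypothetical, not OpenAI's actual costs or margins):

```python
# Illustrative sketch of "compute as a profit center": buy compute,
# resell it with a markup, and profit scales linearly with volume.
# All numbers are hypothetical, not OpenAI's real costs or margins.

def gross_profit(compute_units: float, cost_per_unit: float, margin: float) -> float:
    """Profit from reselling compute bought at cost_per_unit with a fractional markup."""
    cost = compute_units * cost_per_unit
    revenue = cost * (1 + margin)
    return revenue - cost

# With a positive margin, selling 10x the compute yields 10x the profit:
print(gross_profit(1_000, 2.0, 0.25))   # 1,000 units at $2 each, 25% markup -> 500.0
print(gross_profit(10_000, 2.0, 0.25))  # 10x the volume -> 5000.0
```

The point is only directional: if the margin stays positive and demand is effectively unbounded, every additional unit of compute adds profit rather than cost.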
Altman strongly refuted rumors in the market that OpenAI was scaling back its infrastructure. He emphasized that the company would continue to build as many data centers as possible and pointed out that the real risk lies in the lagging physical manufacturing and power infrastructure in the United States.
To address the physical bottlenecks of building data centers, Altman proposed a breakthrough solution—general-purpose robots.
“If the U.S. wants to remain competitive in manufacturing, we need a large number of robots capable of building more robots. If we can have general-purpose robots… you use them to configure factories, extract and refine resources, then the entire landscape will fundamentally change.”
The Computational Divide and Social Stratification: A Choice Between Prosperity and Equality
As AI capabilities approach AGI, Altman believes that the sole core determinant of future social power and wealth distribution will no longer be 'skills,' but 'computational resources.'
During the interview, Altman outlined two potential future societal models:
The first future: “The floor is significantly raised, with material abundance making people feel ten times wealthier than a decade ago. However, since AI acts as an extremely powerful lever, those who already possess resources and computational power will be amplified, potentially leading to trillionaires and drastically worsening inequality.”
The second future: “Overall prosperity is compromised, with ordinary people only feeling twice as wealthy, but in exchange for a more equitable distribution.”
Rationally, Altman leans towards the first model of absolute prosperity, but he understands that it may be emotionally unacceptable to the public. Therefore, his only proposed solution is to 'expand the total pool of computational power':
“Everyone should aspire to gain more computational power… because if computational power becomes constrained and prices soar, it will ultimately become an exclusive privilege of the wealthy.”
Route Restructuring: Halting Sora, Fully Betting on 'Agents' and Personal AGI
After a period of rapid expansion, OpenAI is undergoing an extremely rigorous internal reallocation of resources. Brockman, who has now fully taken over coordination of the company's product and research efforts, delivered a significant new message to the market: OpenAI is in the middle of a major strategic convergence, directing all core resources toward 'agents.'
To secure this strategy, some star projects that once amazed the world have been decisively spun off or deprioritized, including the video generation model Sora.
When asked why Sora was cut, Brockman did not mince words: 'Because it is an entirely different branch on the technology tree. Its use cases do not align with our core objectives.'
Regarding the upcoming product form, Brockman gave a clear direction: 'We are obviously in a transition phase toward agents... The model has shifted from being the product itself to becoming a component of the product.'
He revealed that OpenAI's current absolute mainline mission is divided into three steps:
First, the Intelligent Agent Platform (Agentic Platform): Brockman described this as the 'brain plus body' — the model is the brain, while the software layer being built by OpenAI serves as the body. 'This layer is very thick, encompassing skills, connectors, computer usage, context memory management, and both must be co-designed.'
Second, Codex for Everyone (Computer Work): Brockman stated explicitly that Codex will expand from targeting software engineers to 'everyone,' emphasizing his choice of words: 'I don't use the term knowledge workers; nobody thinks of themselves as a knowledge worker. I say computer work: how much time do you spend hunched over a screen, developing carpal tunnel? Those are the tasks we aim to take over.'
Third, Personal AGI: defined as 'an AI that truly understands you, has your background, and that you can trust.' Brockman gave an example: 'It knows your favorite musician is performing in your city and that tickets have just gone on sale, and it secures the tickets for you directly, because it has built enough trust to know this doesn't require approval.'
He concluded: 'You don’t need language models, nor various details. You simply need something that operates on your behalf and understands your goals — that is what we are building.'
Responding to Market Controversy: Musk's Lawsuit and Competitors' 'Fear-Based Marketing'
At the end of the conversation, the two executives showed a rare tough stance, directly addressing the external disputes surrounding the company.
Regarding Elon Musk’s lawsuit against OpenAI, Brockman described it as an 'opportunity to tell the truth.' He pointed out that the core reason for the breakdown of negotiations years ago was not about the nonprofit structure but rather about power ambition:
“He (Musk) wanted majority shares, wanted to be CEO, and wanted absolute control. ... We couldn’t agree because we truly believe no single individual should dominate the entire future, regardless of who that person is. That was the breaking point that led us to say ‘no.’”
On the upcoming trial with Musk, Altman expressed an unusual eagerness: "I'm a bit worried he might withdraw the lawsuit before trial, depriving us of the chance to clarify everything."
When asked whether OpenAI or Anthropic performed better, Brockman admitted: "We were slower to realize the importance of applying models to chaotic real-world codebases compared to Anthropic. They deserve credit for that, which has also improved our execution capabilities." However, he quickly added, "Head-to-head, Codex versus Claude, the results we’ve achieved are very favorable."
Moreover, Altman strongly criticized the market strategies of competitors (specifically mentioning Anthropic during the interview). He described some peers’ emphasis on AI’s extreme dangers and refusal to cooperate with governments as 'fear-based marketing.'
“This is obviously an incredible marketing tactic—telling the public, ‘We’ve built a bomb and plan to drop it on your heads, but if you’re chosen by us, we’ll sell you a bunker for $100 million.’” Altman sharply condemned the approach.
Below is the translated transcript of the interview (with some portions edited), assisted by AI translation:
Greg: 1, 2, 3. Alright. (Call testing)
Host: Absolutely correct.
Altman: We need to finish all the personal dramas within 90 minutes.
Host: Oh, yes. Oh, don't worry. Let's talk about building superintelligence.
Altman: Will this part be included in the podcast?
Host: No.
Altman: Well, maybe not.
Host: Bear with my slightly silly opening, but I think you'll be fine.
Host: Welcome to the financial fortress, the capital of capital. This is not that podcast. This is Core Memory. I'm Ashlee Vance, and this is Kylie Robison. I think we have prepared an excellent episode for you. Usually we introduce the guests during this segment, but perhaps it's unnecessary this time.
Today, we are joined by Sam Altman and Greg Brockman, co-founders of OpenAI. You may have heard of it. Thank you for being here.
Altman: Thank you for having us.
Greg: Thank you very much.
Host: I think this is the first time you've done a podcast together, right?
Altman: It's great. I think that's true. At least it hasn't happened in a long time. It's been a long time.
Host: Maybe it's the first time. And I won't begrudge you doing it on our show, but you did buy a podcast, right?
Greg: We just locked it down.
Host: You locked it down. Yes, I accept. I accept.
Host: I'm curious as to why you bought a podcast. We don’t need to go too deep into it, but do you have any simple thoughts on that?
Greg: I think the people who created TBPN are remarkable.
I believe they are highly creative thinkers. In the world we are entering, where we are building these AI systems that are so useful to people and helping them understand why they are valuable for their personal and professional lives, I think they are exactly the kind of people who can help convey that message.
Host: I see you there. Have you appeared on TBPN before?
Greg: I've been on it.
Host: Alright. Alright. I don't watch every episode, but it's an interesting podcast.
Well, I thought, you know, since it's the two of you, and we haven't done something together for a while, I wanted to take a moment at the beginning to reflect on the past.
Kylie and I have known both of you over the years. You know, I was just reflecting as we prepared for this interview. We just passed our 10-year anniversary mark. The two of you are among the remaining co-founders. I think Wojciech is the third. So, you know, you are the constant thread running through the company.
You started as underdogs. You ended up as leaders. There has been a lot of drama, ups and downs along the way. I mean, I'm really curious about how your relationship has evolved through all of this, how you complement each other, and whether it has changed over time.
Greg: It's been beautiful. Look, we all wish there was less drama and more focus purely on the technology.
But in a world full of so much chaos and drama, you know, tension, struggles, and power plays, it's been really beautiful to build a relationship with someone who has the full context. We have all this history where we truly leaned on each other during good times and very tough times. It's one of the most beautiful things about OpenAI.
You know, in many ways, the first moment for OpenAI came after that dinner in July 2015 when Sam and I drove back into town together, looked at each other, and said, we have to do this, right? There was always this discussion: Is it too late to start a lab now that can pursue AGI and make a positive impact?
Altman: It seems absurd now, but we were so worried back then that it might already be too late.
Greg: Yes.
Host: You missed it. You missed it.
Host: I remember having that feeling when you first started.
Altman: I thought, 'No, it will run away, you know.'
Host: Yes.
Greg: Yes. You know, the conclusion of the dinner was that it wasn’t obviously impossible. I think we both just felt, 'Well, this is too important. We just have to do it.'
Yes. I think that spirit has continued to this day and is reflected in many of our early operational approaches. I remember I was unemployed at the time, so I went all-in the next day. Sam actually had a full-time job. But we were constantly on the phone, about five times a day.
Altman: Yes, exactly.
Host: Were you already very close friends at that time, or not?
Altman: We had known each other for a long time, or it felt like a long time—though it wasn’t actually that long.
Greg: Around 2010, 2011.
Altman: That's when you started working at Stripe.
Greg: Exactly, yes. So we got to know each other through the Collison brothers.
Greg: Yes. So we were casual social friends before that.
Altman: It wasn't as long ago as I thought. It might have been 2010, and that was 2015, so it had been five years.
Greg: Time compresses.
Altman: Yes.
Host: Obviously, I mean, in all this high-pressure environment, doing this work — I can imagine it just brings you closer together.
Altman: Yes.
Altman: People use the term 'trauma bonding.' I hate that phrase. I prefer something like 'comrades in the trenches.' One of the benefits of hard work, regardless of circumstances but especially during stressful periods, is that you really forge those kinds of relationships. At least I haven't seen such bonds formed any other way.
Greg: Yes. I do think the way Sam and I work and interact might differ from what you'd expect in a typical co-founder relationship. I feel like we maintain constant contact. Five calls a day, two minutes here, five minutes there, and that spirit has always been there.
Greg: I think we are constantly in sync. We don't always agree on everything, right? Our perspectives on the world are not exactly the same. But that's precisely why we are so powerful together, right? I think we have very complementary approaches. Sam might suggest an idea, and I would think about whether we could do it another way, or what if we approach it from this angle, or how it relates to other things we are considering.
Greg: One thing I deeply appreciate about Sam is that I think he can always see the connections between different ideas, or just focus on the bigger picture of what we need to achieve, and then we figure out together how to actually execute it. I think linking grand ambitions with execution has always been a hallmark of OpenAI.
Host: Yes. Over the past decade, were there moments when you felt your disagreements were particularly important? Do you remember any pivotal moments?
Altman: I think one of the best things Greg does, and this isn't my instinct, is really pushing us to focus on what matters most, whether in his own work or in what the company needs to accomplish.
So there were times when I wanted to do more, and Greg would say, you know, is this the most important thing? Let’s really just focus on this one thing. Let’s keep the company aligned. We’ve had disagreements on this, and it has always been a very helpful spirit that Greg brings to the entire company.
Greg: Yes, yes. I’d also like to add that when it comes to thinking about computing power, it’s about continually raising ambition. Sometimes I’d feel like, okay, logically I know that yes, we are moving toward this compute-driven economy, and yes, demand will always exceed supply, but we still have all this hard work to do. We already have all these large computers, and we’re getting them operational. You know, you still have all this physical infrastructure to build, and you feel overwhelmed by it, and Sam would say no, we need more.
I think this has actually been a very important aspect – sometimes it’s easy to lose sight of the higher-order bit, which is that this matter will be so important, not just for the next six months, but for the next two years, five years, ten years. I think maintaining that balance – you need that balance, sometimes diving into details but not getting lost in them – I think that balance is exactly what has made OpenAI successful and will be key to our future direction.
Host: There must have been a particular product or strategy where your disagreement was the most intense. Which one was it?
Altman: As Greg was speaking just now, I was thinking: it's not a product, but something did come to mind. I was going to mention this even before you asked. We used to frequently discuss how to talk about safety issues.
We never disagreed on the extreme importance of safety, nor on what doing it right or wrong would mean. But in this field, there has always been a strange relationship in how we talk about safety, how we use the word 'safety,' and to what extent this is about power rather than truly staying safe.
I think in our early history, I was more prone to getting caught up in the mindset that we must talk about this issue within a specific framework. Greg, however, insisted as a matter of principle that we should not fall into conventional framings, that we cannot talk about it that way. Even so, because the issue was so important, I believe we still fell into the trap of talking about it in the wrong framework more often than we should have.
But I think one of OpenAI's biggest contributions to date has been finding a different way to talk about safety: not just how we build products and discuss what society needs to do, but how we deploy them. That is the entire concept of iterative deployment, reaching a world where we figure out how to deploy products ever more safely as the risks escalate.
Greg held a line there, which I think was quite important for the company, withstanding tremendous pressure. I think it was crucial for our overall strategy—not just how we talk about things, but also how we deliver and build products.
Greg: Yes. If you look at the OpenAI Foundation, the nonprofit that governs OpenAI and holds a significant amount of its equity, one of its pillars is AI resilience. This really means thinking about how to make AI a beneficial force for the world. The answer is not any single intervention, right? It's not as if you achieve your mission just by having chain-of-thought monitoring. It's actually a whole set of deep, different ways that society should position itself around this technology.
I think this perspective—that you can’t solve the problem of AGI being beneficial to the world in a single paper—is something that must be a global effort, requiring contributions from society, many different people, and many different approaches to truly understand what this technology is, how it will impact people, and how it will affect the world. This was completely misunderstood or unappreciated 10 years ago when we first started, because it was easy to fall into the mindset of 'you know, we are technologists, we are building the technology, and that’s the only problem we need to solve.' I’m not saying anyone explicitly said that, but I think sometimes you fall into that psychological trap of thinking this way. So we spent a lot of time becoming first-principles thinkers, truly considering how to effectively deliver transformative technology to the world in a way that actually helps people’s daily lives.
One thing you realize is that if you have a very powerful technology that is going to change the world, it might be better if you first have a less powerful technology that has already helped change the world in a positive way. So, if you think this way, you start to be genuinely led down a path—thinking about resilience, thinking about iterative deployment.
Again, I think this is a lot of the dynamic within OpenAI. Between the two of us, we are always thinking about these issues: how do we really fulfill this mission and make it progress better?
Host: Yes, it feels like just yesterday I saw you at a music festival in 2022 or 2023, and the discussions about AI back then were completely different from what we hear today. It feels like the shift over the past decade has been in how you discuss safety and alignment. I wonder how you reflect on that now. For example, in those panel discussions and media interviews, what would you change? What have you learned about discussing safety?
Greg: Before discussing safety, I think as tech geeks, we fell into this framework: we said, 'We’re going to build superintelligence, and then... it’s going to be good for you,' but we didn’t fill in those ellipses sufficiently. We talked about how we’re building this amazing technology that will do all these wonderful things. Now there’s a sense in the world that, okay, it looks like you were right—you are going to build this thing. But why? Why do we want it? What will it do for us? A lot of what the field has said, like 'It will cure cancer, and you’ll be happy,' clearly hasn’t resonated.
Sure, many would say curing cancer would be great. But I think what people really want is prosperity, autonomy, and the ability to continue doing meaningful work. The other day, I came across an incredible post that really struck me, about the 'right to struggle against adversity.' People actually want some challenges in life. You don’t want everything to be perfect every day, with everything done for you. There’s a fear of AI, like, 'You’re right, suppose you build it, suppose it makes all this money and does all the work, etc.' Then what am I going to do? What are my children going to do? What will life look like? Where will growth come from? What will people strive for?
Altman: I think as a field, as OpenAI, as Greg and I, we talk a lot about this amazing technology and what it can do, discussing this technological marvel, but we haven't connected enough on 'this is what the future will look like.' When I talk to parents with school-age children, the most common question is: 'What should my child study? What will the future be like? What will still have economic value?' I realize that this isn’t exactly what they are trying to ask. Instead, they are asking: 'How can my child live a fulfilling life in this new world?' You can answer this from an economic perspective, but it doesn’t truly reassure people. I think it's because there’s something deeper at play.
I was just about to say, I think we actually have a lot of insight into this. For example, many people have said that their lives, or the lives of loved ones, were saved by information obtained through ChatGPT. One family's child suffered from blinding headaches and was denied an MRI; they used ChatGPT to research the symptoms and to advocate for insurance approval of the scan. It turned out to be a brain tumor, and they were able to intervene in time and save his life. That family said that without ChatGPT, they would have had no idea what to do through the whole experience.
And that’s just one story. There are many, many more stories like this. I think people are truly realizing that this technology can help not only society in an abstract sense but also themselves, right? It can help them make money. Now, we are starting to see this wave of entrepreneurship. I think this will be a significant theme throughout the year.
And I think we have a lot of understanding of this. We are building AI that doesn’t require you to adapt to computers, right? Think about how we work—it’s not natural, right? This isn’t what we were designed to do. Instead, computers will work for you.
So the question is, what is work? What is good? What do you truly want to accomplish? It will be profoundly human, right? This sense of agency, this empowerment. So I think this technology is now bringing about very optimistic, not blindly optimistic but very positive, changes, although what people notice more easily is what will disappear.
For instance, things you thought were stable are going to change, but it’s much harder to see what’s coming next. What new things will you get? What benefits will arise as part of this transformation? I think we are increasingly aware that, beyond clarifying the other side, we must also clearly articulate this side.
Host: Based on what both of you just said, I have a few questions. I just went to Oberlin College with my son, and we were in a room with about six or seven hundred people, and I found it quite interesting—I mean, this is in Ohio, and people came from all over the country. The president was speaking, followed by an open Q&A session, and many of the questions were about AI. But these questions made me realize how much we live in a bubble here, because the questions—without meaning to belittle anyone—were quite basic. My overall impression of the room was, 'Wow, you really don’t understand much about what’s unfolding and what’s about to sweep over all of you.'
I don’t know—it’s not... I don’t know, it’s concerning. To me, it was eye-opening. You know, they obviously asked some questions, such as how will you use AI in classrooms? How will you prevent students from just relying on it? But then the questions extended beyond those boundaries, and I felt that the fairly intelligent people in this room weren’t really grasping what’s happening. So what I’m saying is, if people don’t know examples of what these tools can do, and we don’t even know what form it will take—what you just mentioned—I don’t see how we can address this issue and somehow prepare people.
Greg: I actually have a more optimistic view on this. I think if you just read about AI, it gives you an impression, right? Again, it’s that feeling, like you’re trying to understand this new technology—what it is—but when you use it, it feels so intuitive, right? In many ways, that’s the purpose of AI. If you think back to how we’ve designed computers over the past 70 years, machines didn’t really understand us, right? You’d have these goals in your mind, and you had to translate them into the machine’s language, whether it was writing assembly code or, okay, now we have higher-level languages. Now, you can kind of talk to your computer, but even if you think about how ChatGPT works, you still need to really understand the concept of language models, right? Like why you have to create new conversations, right? New tabs—why can’t it just be something you talk to, remembering everything? Right? Why? These are technical limitations, but we are improving them. So I think what we are building is the most intuitive technology.
We are building something that allows machines to adapt to you. When people use it, they’ll realize, wow, this is what I can do with it. For example, some of my favorite stories are from someone in the Midwest—you know, a friend of mine—his sister told him about an application she wants to see, one she hopes someone will create.
She described it in great detail. As he listened, he was like, "Mm-hmm. Mm-hmm," while inputting exactly what she said into Codex. He hit enter. A few hours later, he showed her the result. It was the application she had described. She said, 'What is this? Who made it?' It was exactly what she wanted.
Then he said, 'You did.' Right. I think everyone has this kind of epiphany moment. Once you experience what AI can do for you and how empowered you are now, it's like having an image in your mind that you want to see realized in the world. If you look at some of the new image models coming out, we have absolutely incredible capabilities—like you can create in ways never before possible. And if you think about another story, my grandmother suffered from dementia and Alzheimer’s disease.
It was extremely difficult for everyone. But one thing that really impressed me was that she was actually able to use her Alexa to play music, and she could still remember the lyrics and sing along. It was a way for her to stay connected with herself through technology. That really left a deep impression on me.
If you can build interfaces that anyone can connect with intuitively, rather than technologies that force you to develop skills specific to the tool itself instead of skills relevant to what you truly want, it changes the whole dynamic.
Altman: While Greg was speaking, three things came to mind, and I agree with all of them. One was that before we launched, we tried to talk to people and say, 'This AI thing is coming; you need to pay attention. It’s going to change everything. It’s incredibly important.' But basically, no one paid any attention. We wrote these beautifully crafted blog posts, performed all these incredible feats—we won gaming competitions, we had a robot that could solve a Rubik’s Cube with its hands, we did all these amazing things—and we thought, 'We’re so cool. The New York Times wrote about us.'
We must be doing great.' But in reality, no one really cared; it made no real impact. Then we launched ChatGPT, which was far from the most impressive thing we had done technically—far from it.
But once people could actually feel it, they said, 'Okay, I get it.' I think that was the moment the world collectively said, 'Maybe this AI thing is real,' because people could use it, derive value from it, and form their own understanding.
It’s very different from just hearing about it. As Greg said, the same thing happened with the coding model. But so far, those two moments have been truly significant. I think the world has updated its perception and said, 'Okay, this is happening.' There will be more such moments in the future, but so far, being able to ask a computer any question and get an answer or have the computer perform any task for you using code—these are incredibly powerful developments. This is how the world is updating its understanding.
We say superintelligence is coming, and it’s going to change everything. Maybe people listening to this podcast will say, 'Okay, that sounds reasonable—maybe I should pay attention.' But just saying it isn’t going to have that big of an impact on the world right now.
So I think the most important thing we can do to help audiences who are thinking not just about what this means for preventing cheating in classrooms but what it means for the world, is to launch excellent, delightful products that create tremendous value and are easy to use. And we’ll continue to do just that.
The second thing is, we've seen time and again in history that when we release something pretty good, people say, 'I really can't imagine how it could get any better. It can't get any better.' The image model Greg mentioned, which we will soon launch, is a real example of this for me.
I had roughly thought image generation was solved—that it was really good and I didn't need it to get any better. Then this new thing reminded me: wow, this can go even further; there's still so much more I can do. What can it do? It creates incredibly good images.
Host: All text? Because I was trying to create a Core Memory product image.
Altman: Try again soon—the team did a great job on this project. But even with ChatGPT, when we put GPT-4 into it, I remember many people, like my well-informed friends, saying: 'This is it. This is AGI. I don't care if the model gets smarter—just make it cheaper. It's amazing.' If you go back and use that version of GPT-4—from around March 2023—you'd say: 'This is terrible.' But at the time, people were saying: 'It's solved. It's solved. It passed the Turing Test. It's done. It can't get any better.' And then it kept getting better and better; you could keep raising your expectations of what you could do—not to mention what happened once you got to the reasoning models and coding.
I'm just talking about how much ChatGPT improved during that period. So I think we see this over and over again, the world says okay, you've made this amazing thing, it's as good as it gets. This is already the best I could ask for. Then month by month, or at least quarter by quarter, expectations and capabilities significantly increase.
Altman: Then the third thing Greg said is that, relative to what they will become, these models are still quite dumb—but more importantly, their understanding of your life is quite limited. You still need to coax them to get what you want. We're not far from having a model that knows all your background information.
It understands you. It understands your life. It knows what you're doing. It knows what you care about. It understands the people in your life. Of course, if you wish, in the way you want, it can access your computer and browser. Over time, it may increasingly access what’s happening in the real world around you.
That will completely change the experience of using computers and AI. I’m very excited about that. But I think even we don’t have a good intuition yet for what that will really feel like.
Greg: On that note—yes, think about how much time you currently spend just explaining to ChatGPT, or whatever tool you’re using, what’s going on.
Think about how frustrating that is. It’s like having a colleague you constantly have to re-explain things to: 'No, this is kind of what I want. This is what’s going on. Let me package up the context like this.' That’s really not how you want these systems to behave. And adding to what Sam just said, one of the biggest challenges we face at OpenAI is the abundance of opportunities.
AI is such a field of infinite opportunity. No matter which dimension you scale in, there will be something new, unprecedented, and amazing. So it's important to have a vision, knowing where we can truly focus, where we can achieve the greatest return and benefit, you know, making multiple different efforts add up to a whole.
We now have these coding systems, but this will obviously extend to all computer work. Interestingly, when you think about the work you do, it is so seamlessly intertwined—you are having face-to-face conversations, typing on your computer, trying to figure out how to supply context (which will be such an important problem)—all of this matters. But in your personal life, you really need this as well.
We started referring to the goals we want to achieve as personal AGI, right? This AI that truly understands you, holds your context, and can be trusted—right? You ask it questions or inquire about financial or health-related matters, and it provides reliable information. All of this is important, and it also requires that context.
So you start to see a blurry line, at the technical level, between AI used for deep computer work and AI used in your personal life. Even if you want different systems—one that only knows your work context and one that knows your personal context.
So you can advance these in parallel and build upon the same technical foundation. What amazes me most about the technology we are building is that, at its core, it is still a neural network, right? It remains deep learning. You are expanding it. You are building a system and then applying it to all these incredible applications.
Host: Can I ask a somewhat similar question? I promised myself I wouldn’t do this to you guys. I’m not a skeptic—I can see AI doing amazing things—but I was somewhat skeptical until recently, and I can tell you what changed my mind. A few things. One is that I started using Agents more and shaping them to do what I wanted, experiencing some of what both of you mentioned. It’s like, wow, this actually saves me a lot of time; it executes my instructions fairly well. The other is that I often cover biotechnology, and seeing some of the emerging results there has made me feel that coding and biotech are the most obvious areas where this is making a real impact.
But, you know, I had dinner with you the other day and read some of what you wrote, and I see a clear path, based on LLMs, leading to the goal you described. Though part of me—maybe because I rely so heavily on language—feels that the writing still falls short. It's anecdotal and intuitive, but the feeling I get is still, 'No, this isn't superintelligence.'
Greg Brockman: We're not quite there yet in terms of personalization.
Host: But, but can LLMs take us there?
Greg Brockman: LLMs will take us there; it just feels very uneven right now. But that’s okay. For example, just in the past few days, o1 solved a long-standing problem—a mathematical puzzle that had been closely watched for a long time.
A mathematician had spent a great deal of time studying this problem, contemplating it for many years, before weighing in on this contribution. Terence Tao has also mentioned that AI seems to have discovered connections between different fields of mathematics, and we are beginning to see these machines generate real beauty.
That's a specific domain, of course—mathematics and creative writing are very different—but I believe these AIs are capable of offering intriguing insights and providing assistance in that way. These mathematicians are now asking, 'What else can we do with this?' So I think we have an uneven frontier in terms of what these AIs excel at.
We know how to keep pushing this frontier forward. But I think what we are truly looking for is something akin to AlphaGo’s 37th move, right? It wasn’t just a profound insight; it genuinely transformed people's understanding of Go, and now more people are playing the game, right? So, it really added significance to what people are doing.
So I think what we need to do is create something that will draw fewer complaints from you. But I also believe that in terms of writing, you’ll be able to achieve far more than you can imagine today.
Host: Sorry to keep harping on this, but I was told that GPT-5 would be really good at writing.
What made me less cynical was the reasoning model. When Claude provided truly excellent research, and then you had a research function, to me it was like, wow, this is amazing because it saved me a lot of time. But before that, I was promised that we should already have good writing capabilities by now.
Host: The writing feels soulless, you know, like something is missing.
Greg Brockman: Well, let me tell you from a technical perspective, the technology we possess allows us to train models in this unsupervised manner, right? So we really look at all publicly available data, and it learns to predict what comes next. It's really about figuring out what makes sense to do in new situations.
Then we implement a reinforcement learning step where it actually tries out ideas and receives rewards or penalties based on its performance. You know, just positive signals, negative signals, not...
Host: You’re not hitting it.
Greg Brockman: Exactly right. Not at all. Yes. You simply understand the signal. The challenging part is, how do you assess it? How do you determine whether something is
Host: Yes.
Greg Brockman: a thumbs-up or a thumbs-down. So it is much easier in math and science than in some more open-ended fields, but we are also developing increasingly intelligent AI that can provide such reward signals. Thus, part of the challenge has always been how to expand the set of tasks that can be scored, which has been a major focus.
We haven't yet tried to make it something Vance would consider great writing. Instead, we've aimed to solve open mathematical problems that even the world's smartest mathematicians have failed to crack. I won't comment on your intellectual standing relative to these mathematicians, but I will say this—writers are intelligent too.
Solving mathematical problems is extremely difficult, though. So if we can achieve that, I am highly confident this approach can also learn what you consider excellent writing and deliver it for you.
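The two-stage recipe Brockman outlines—unsupervised next-token prediction, then reinforcement from a simple reward signal—can be caricatured in a few lines. This is a toy illustration of the idea, not OpenAI's training code; the binary grader shows why checkable domains like math are easy to score while open-ended writing is hard:

```python
def grade(answer, correct):
    # Binary reward signal: +1 (thumbs-up) or -1 (thumbs-down).
    # Trivial when the answer is checkable (math), hard when
    # "good" is subjective (writing).
    return 1 if answer == correct else -1

def rl_update(weights, attempts, correct):
    # Reinforce each attempted answer according to its reward --
    # a toy stand-in for the reinforcement-learning step.
    for a in attempts:
        weights[a] = weights.get(a, 0) + grade(a, correct)
    return weights

# Three attempts at "2 + 2": the right answer gets reinforced.
weights = rl_update({}, ["4", "5", "4"], correct="4")
print(weights)  # {'4': 2, '5': -1}
```

Expanding the set of tasks that can be graded this way—Brockman's "major focus"—amounts to building ever-smarter versions of `grade` for fuzzier domains.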
By the way, we also have some new models coming out soon along this dimension.
Host: So after this podcast comes out, I can’t wait to evaluate it.
Greg Brockman: Let us know. If it looks better, let us know.
Altman: Writing, personalization—I think we have been improving, as all capabilities move upward along this uneven frontier. So what matters to judge here is not the current position but the slope—fit it to an exponential curve.
Think about how writing compares to where it was a year ago. And sorry GPT-5 disappointed you on that front—don’t worry, we are highly motivated to truly deliver. But I think we really have a clear path to improve it for every application people desire.
Host: Since day one, the Core Memory podcast has been supported by the excellent team at E1 Ventures. They are a young and ambitious venture capital firm in Silicon Valley, investing in young and ambitious companies and talent. Many thanks to E1 Ventures for all their support.
Host: In the world you have depicted—assuming everything works well—it’s quite an optimistic scenario: diseases are cured, resources become more abundant, problems get solved, and humanity as a whole is uplifted. But I still don’t see it that way. Some of the smartest people in the world are developing this technology, and to me, in almost all cases, it will disproportionately benefit certain people. Everyone may get a little boost, but things will become more extreme—every time you talk about how to control these tools, how to use these tools, I just feel things will become more extreme. There’s that joke about the permanent underclass, and the feeling you get when you see these really powerful tools: my God, what’s the point of my existence? That sense of disenfranchisement.
Greg: Yeah. Like, you could have all this time to do interesting creative things.
Host: I just think other people are going to wield this thing in such extreme, such amazing ways.
Greg: This is at the core of OpenAI's mission. It’s really why we created this place: you see this powerful technology coming—it will be the most important technology ever—so how do you ensure it benefits everyone? Truly benefits all of humanity? That's our mission: to ensure AGI benefits all of humanity. We are serious about it. If you look at our corporate structure, we’ve tried multiple iterations, trying to encode some of our values into the structure—how do we ensure this really benefits people?
You can see this in the product choices we make, right? We decided to launch ChatGPT because we truly believe this is a technology that needs to be put into people’s hands. By the way, this was highly controversial. There was another school of thought saying the way to do this is you have to build it in secret; you can’t give people access; you need to do it another way. I think when we think about how to truly make society resilient, how to truly benefit people, all of this points to the direction we’ve been heading. There’s a lot of nuance here, but I think the path we’re on aligns very well with this. I think the bottom line for everyone isn’t just going to improve a little bit. I think we're heading toward a world where — I mean, think about it, even having a doctor in your pocket better than the best medical team anyone in the world can get today, accessible to anyone with a smartphone — it’s coming, and it’s going to be free, right? That’s a crazy, crazy fact. This isn’t just a little improvement; it’s fundamentally raising the bottom line dramatically.
Now, I think about the question of raising the ceiling, and I think that’s fair, right? I think we’ve thought a lot about how distribution will work. You look at the OpenAI Foundation, which owns somewhere between 25% to 30%, roughly in that range, of OpenAI equity, right? That’s over $150 billion if OpenAI becomes extremely successful, and all that value locked in the nonprofit will genuinely be used to benefit the world. We think that’s...
Altman: Um, we were having dinner the other day, and I just wanted to interject while listening to you talk about what you do with your agent. I mean, look, you’re freaking brilliant, and I was just thinking, my God, Greg, you and this machine.
Although I feel like I’m doing some interesting things to help myself, when I hear you describe your life and what you’re doing, I just think, 'My God, man.' I mean, there are going to be superhumans running around. I don’t know. It’s kind of intimidating. I mean, it’s cool, but it’s also pretty intimidating. I think, 'I have no idea how to use this the way you do.'
Host: Can I try to give a less polished version?
Greg: Sure, go ahead.
Host: Um, I hope I don’t get into trouble for this. Uh, I can see three possible futures for the world. Um, I can see one where, like Greg said, the bottom line improves significantly.
You know, everyone subjectively feels about 10 times wealthier. The level of material abundance and prosperity would be incredibly immense. People would say, 'My goodness, compared to life a decade ago, I'm living so much better now.' But at the same time, in that world, those who truly master how to use Agents, acquire massive computing power, etc., we will see some trillionaires—perhaps around 10 of them.
So the baseline will rise significantly—truly significantly—but because this is a lever people can genuinely pull, and the most capable and ambitious individuals, those who start out wealthy and gain access to large amounts of computing power, will pull it hardest, inequality will become more severe. So that's one possible world I can envision. Through various potential scenarios, I can also imagine another world.
In that world, the baseline wouldn't rise as much—we wouldn't create as much overall prosperity—but inequality would be smaller. Perhaps in ten years, people might feel twice as wealthy, but less unequal. I’ll stop with these two scenarios, because I believe this is the crux of the issue. The third scenario isn't scarier, but it would distract from the key point I want to convey.
I think, Greg, it’s clear to many people that we should lean toward the first scenario—it’s obvious, and I think you’d agree with that. But emotionally, many people around the world do not.
Greg: What I would say is, I don’t want to claim what is obvious or not.
But personally, I just see the tremendous potential embedded in this technology. I think it’s crucial from a societal perspective and even from the standpoint of U.S. competitiveness. Look at robotics—I feel we are not leading in robotics at all. We truly aren’t. But in software, we do have an opportunity. I think all these factors combined make it worth considering simultaneously.
Altman: Yes—obviously there are many areas where people disagree, but I think most would agree that the U.S. needs to stay competitive in chips, robotics, AI, and everything else. There will, however, be a significant question about how we organize our society and economy: do we pursue maximum prosperity and accept the resulting inequality, or do we impose restrictions because we care more about relative dynamics and about the fear that, if we don’t act, the world may become more prosperous but those who are really good at using this tool will hold all the power? Emotionally, I truly understand why this isn’t so obvious. I really understand the fear of letting inequality spread here—the compounding nature of this tool is something we still don’t fully grasp.
Finally, I would say, regardless, I think everyone should hope for more computing power, more infrastructure, and as affordable access to AI as possible, because otherwise, I think you will exacerbate inequality—if the supply is limited and prices rise due to supply and demand, only the wealthy will have access to it.
Greg: Yes, I want to add something to that because I think, to some extent, this relates to what was mentioned earlier about how Sam and I work together and think, right? Because listing those two options, I think there’s actually a good point: Are these the only two options? Is there something in this space that we’re not seeing? I think the last point Sam made was a mental unlock for me—AI is an opportunity.
AI is an opportunity for everyone, right? If you can access computing power. If you have computing power, you can succeed; if you don’t, you can’t, right? No matter how skilled you are with Agents, if you lack the computing power to run them, you can’t do much. So I think a world where everyone has access to computing power—for my generation, growing up, we were better at using computers than our parents, right? We grew up with it. We are native to it.
I think the current generation growing up will be as proficient in using Agent as I am, and I believe they will be ten times better than me. Watching all of this unfold is truly mind-blowing.
Host: It really is mind-blowing. I mean, I wrote about it—this was like the first big story on Core Memory, the guy who built a nuclear fusion device using Claude.
According to reports, you’ve recently gone through a round of consolidation and refocusing.
I’m genuinely curious. I believe the audience would also like to know: What’s currently on the table? What has been cut? What do you care about? Why were these cuts made? And why is this important?
Altman: Before Greg answers, let me just say this. Greg has taken on the task of figuring out our cohesive product portfolio and the research that supports it. This has brought an amazing sense of satisfaction within the company. The rollout of all these initiatives might take a bit longer. He’s only been in this role for a few weeks, roughly that long. But the energy, excitement, and enthusiasm that Greg has brought to what he’s doing here is incredible.
Host: So, could you elaborate on what that will look like? For instance, when you stepped into this role a few weeks ago, some of these cuts were already underway, right? But after you took over, you evaluated everything.
Greg: I’ve been deeply involved behind the scenes with many parts of OpenAI, so I think taking on this front-facing role in this particular area is relatively recent. There’s an interesting fact: I actually built the first version of the API. So, ever since OpenAI had a product, I’ve been working on it and have always cared deeply about it. There’s a lot I could say about it, such as how central it is to our mission in ways we didn’t fully realize before building products, and only later did we come to understand its importance.
That’s why I’ve stayed so close to it. Where we are now is clearly a moment of transition toward Agents—no doubt about it. In software engineering, for instance, you’ve been feeling this shift over the past six months. Over the course of 2025, there’s been a progression: from 'yeah, it’s kind of like autocomplete,' to 'okay, you’ve got a sidebar in your editor and you mainly interact there, but you’re still doing the same kind of software development you used to,' to now 'actually, you want a tool like Codex, which is truly an Agent management platform.' The Agent handles all the details and does all the tedious work; maybe 20% is about how you put things together, how you structure code, and the higher-level aspects that humans still genuinely care about and want to manage—but the exact code details? No, that’s the Agent’s job.
So, the question we face first is, how do we seize this moment? Because it’s not just about software, right? We see the potential across every vertical, right? Law, finance, certain mechanical skills like writing, creating spreadsheets, and presentations.
So how do we make our model excel in these aspects, right? It’s like you collaborate with domain experts, create evaluations, develop training data, and truly leverage the powerful domain knowledge of AI, applying it within these vertical fields, gaining experience, and receiving feedback such as 'Yes, you’ve done well' to figure out what is good.
Thus, we have a concrete vision and know what to do, but we need to ensure that we are building the right product interfaces to unleash all of this. One thing we've discovered is that models have transitioned from being the product itself to becoming part of the product, right? In the past, there was a very thin software layer on top of them, and you didn’t need to think too hard about how they were architected.
But now, it's a very thick layer, right? You have things like skill connectors, ways of hooking into computer use, context and memory management, and the exact mechanics of handling all these elements. So there is a deep software layer, which you might almost think of as part of the AI itself.
It's somewhat like we have a brain-like model, and now we're building the body. Both are challenging. They must be co-designed together. So much of what we’re focusing on right now involves first building an amazing agent platform. This is our top priority for delivery. We have teams executing this exceptionally well.
I’m very excited about what we’re going to release in the coming weeks. The second question is where you actually want to apply these agents. Our priority is genuinely oriented toward computer-related work, right? By the way, I am deliberately using this term. People love to talk about knowledge work, but no one considers themselves a knowledge worker, right? That’s not something people identify with.
The term feels somewhat detached from tangible realities. But I like the phrase 'computer work' because I really don’t want to do computer work. It doesn’t sound like something I’d want to pursue. However, you realize how much time you spend doing just that—sitting at a desk, typing constantly, hunching over, developing carpal tunnel syndrome—all those sorts of things.
So we’re focused on bringing Codex, which already exists today, not only to software engineers but truly making Codex a tool for everyone. This is coming very soon. We’re even releasing some updates during this podcast recording. There’s still a lot of exciting progress in the pipeline in that direction.
Then the third focus is truly contemplating personal AGI. Think about ChatGPT, which has a billion users now. Everyone on Earth will want an AI that represents them, holds their context, and has earned their trust—it’s not just something you build or interact with one-on-one; rather, it can act on your behalf externally.
For example, maybe it knows you like a particular musician who is in town, notices proactively that tickets have just been released, and finds some great, inexpensive tickets available for purchase. So it goes ahead and buys them, right? Perhaps it knows that it has established trust with you, so it understands that, yes, I am permitted to do this without needing approval.
Or maybe it realizes that I should double-check. I’m not entirely sure—I should verify. So we’re also building this capability. If you think about these elements, they all integrate into a coherent whole. Ultimately, fast-forward to where we’re headed—you really just want an AGI, right? You don’t want just a language model.
You don’t want threads. You don’t want any of these details. You just want something that can assist you, act on your behalf, solve problems for you, understand your goals, and achieve them in both professional and personal contexts. This is what we are prioritizing and building.
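The trust-gated behavior described above—buying the concert tickets on its own when it has earned that much trust, double-checking when it hasn't—can be sketched as a simple policy. Everything here (the `decide` function, the numeric risk and trust scores, the thresholds) is a hypothetical illustration, not an actual OpenAI interface:

```python
def decide(action_risk: float, earned_trust: float) -> str:
    # Hypothetical policy: the agent acts autonomously only when the
    # trust it has earned covers the risk of the proposed action; in
    # the grey zone it asks the user first; otherwise it refuses.
    if earned_trust >= action_risk:
        return "act"        # e.g. buy the cheap concert tickets
    if earned_trust >= action_risk / 2:
        return "confirm"    # "I'm not entirely sure -- I should verify"
    return "refuse"

print(decide(action_risk=0.4, earned_trust=0.6))  # act
print(decide(action_risk=0.8, earned_trust=0.5))  # confirm
print(decide(action_risk=0.9, earned_trust=0.2))  # refuse
```

The point of the sketch is only that permission is a function of accumulated trust, not a fixed setting—trust rises as the agent gets things right, widening what it may do unprompted.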
Yes. So, you know, for me, an important question is, you know, people are asking us how we view consumers? How do we view enterprises? The answer is, if you go by today’s definitions of these terms, we care deeply about consumers. We care deeply about businesses.
But I think the meanings of these terms will change and blur because what we are doing is unleashing another wave of entrepreneurship. I believe we are seeing the forefront of it. Small companies are able to generate substantial revenue that was previously unattainable. This has been a trend for some time now.
And this will only truly accelerate. Is it enterprise-level? Is it consumer-level? Actually, it doesn’t quite fit either category, does it? So I think our real focus is solving objectives across all scenarios. That’s how we view things. It means we need to deprioritize other things that are also excellent.
Therefore—
Host: Sora was cut.
Greg Brockman: Yes.
Host: Sora was the most obvious one.
Greg Brockman: Why? Because it represents a different branch on the technology tree, right? If you look at the models that really power Sora, they weren’t unified with the core GPT series. Secondly, the use cases weren’t aligned either, right? It didn’t align well with this goal-oriented approach—though it had elements of creative expression, which is very important. I think it was an incredible model, and the team did an amazing job. I believe that technology will continue in other applications, but our focus is on the product suite I described. What do we want to deliver in the next 3 months, 6 months, 12 months? By the way, what I described is just the first step, as we also see a path toward even more powerful models. For example, what we’re doing now in mathematics is somewhat astonishing—the result I mentioned earlier about solving the new Erdős problem seems particularly significant.
Host: Just now, someone was still using GPT-5.4 Pro, just like two years ago.
Greg Brockman: Right. We used to train our models with a team of 20 people trying to solve Olympiad programming problems, and we won a bronze medal—a team of 20, spending about two weeks, plus a significant amount of computing power. Now someone can take a model we trained quite casually, point it at a problem, and achieve results like that. What if you pointed it at drug discovery? What if you really let that team of 20 and all that computing power push it toward scientific discovery? This is something no one has priced in yet.
So I think this is about preparing for the moment of Agent, truly ensuring that the product investments we are making are well-structured. We need to consider how all these components fit together, have well-functioning connectors, and ensure every part can be integrated. Moreover, we need to build an ecosystem because it’s not just about what we build, right? We want to create some example Agents.
You can almost think of Codex as an example Agent, but it should be like this—if you are a developer, if you are someone with creative ideas, you can build your own Agent, right? You can build it for your application, for your purpose. If you care about a specific mathematical problem, you should be able to apply the Agent to that mathematical problem.
So, we are realizing all of this.
Host: But you—I mean, okay, let me quickly go over this question. I mean, so part of the reason might be that you had to cut Sora to get the computing power, right? So that computing power—
Host: It reportedly consumed a massive amount of computing power.
Greg Brockman: I mean, every successful thing in this field consumes a massive amount of computing power.
Host: I mean, both of you, especially I think including Ilya and some others, are known for being fully committed early on to deciding where to direct your computing power, and now you have to make these very tough decisions.
You have to serve all your customers. You need to make some money because you are spending a lot of money, so both of you are naturally inclined to take the biggest possible risks, but now you are somewhat constrained by the business, which seems very difficult. I feel like it goes against your nature; you look skeptical.
Greg Brockman: That's a strange way to phrase it because I don’t feel constrained by the business. On the contrary, I feel the business enables me to do more, because it is precisely the business that allows us to say we can scale up computing, right? I remember when we launched ChatGPT, right after that we were discussing how much computing resources we needed to purchase. At that time, I thought we needed to buy everything available. We had to do it because the demand was clearly overwhelming, and I believe this greatly unleashed our ability to acquire substantial computing resources.
Sam Altman: I think without this incredible revenue engine, we wouldn't be able to convince anyone that we should get all of this.
Host: But when you see reports about things like Stargate, it obviously gives the impression that you've pulled back on the infrastructure front.
Sam Altman: I don’t know where that came from. Like, maybe there’s a site here or there where we say, okay, this particular site might only have air cooling, so it’s less valuable to us than another site — these are very specific cases. But people really want to write 'pulling back' stories. And then soon it will be that OpenAI is too reckless, how can they spend such crazy money. So, the media will go crazy either way, probably just because you need to write something, I guess. But we’ll continue to build out as much compute as we can.
Greg Brockman: I think one point worth considering is that compute is not a cost center for us; it's a profit center, right? When you deploy it into products. So in many ways, our business is very straightforward, right? We rent or buy compute and resell it at a markup. As long as we maintain some positive margin, it's scalable, right? Because demand is infinite.
Host: So, okay. Then, is the data center hardware still moving forward full steam, or?
Sam Altman: Are you referring to our own chips?
Host: Yes — your own chips, networking.
Sam Altman: Yes. We’re very excited about our own chips. There’s an incredible team working on it.
Host: So, what about Titan?
Sam Altman: No, we don’t discuss timelines, of course, but I’ll just say I’ve spent a lot of time with that team, and I think they’re doing great work.
Host: So everything is moving forward at full speed. The robots also sound like they are moving forward at full speed.
Sam Altman: It will take some time before we have something that makes you say, "Aha, this is the ChatGPT moment for robots," but there is a team on that project. Social robots are not the current focus, but obviously they will be very important in the future.
Host: And social networks.
Sam Altman: Not doing that. Not much. And obviously, super apps, browsers, all these things are still ongoing.
Host: Yes. Regarding super apps — I think it's one of those terms that easily draws attention, but I feel like you guys came up with it.
Sam Altman: I mean, as an internal shorthand, yes. What needs to be realized is that sometimes when we communicate with the team, and of course eventually with the world, those shouldn't be the same thing. Super apps are like icebergs in many ways, right? Yes, we will have an app, and you'll see today’s update to the Codex app, making Codex a tool for everyone. I think ultimately, to reach the state we want, there are many steps to go. But what this really means is that we are building this unified agent infrastructure that I described, which I believe will unleash tremendous potential for everything people want to do.
Host: Okay.
Well, you and I have talked about some of the dramatic events you've been through, which sometimes did slow you down. Unfortunately, you still have dramatic events now — there's a lawsuit to deal with.
Sam Altman: That will be very interesting.
Host: Yes, yes. Okay, we'll talk about that later. For example, who do you think has executed better over the past two years, OpenAI or Anthropic?
Greg Brockman: I think it's difficult to evaluate this solely from the current moment, because in my view we need to step back and consider what all of us are here to achieve. From OpenAI's perspective, some of the things we've been discussing are obvious — selling to enterprises, figuring out how to deliver these programming tools effectively — and that has always been part of our focus. In fact, I believe competition genuinely sharpens your own thinking; it makes you realize, hey, we need to concentrate on this.

Take programming as an example. I feel we entered the game relatively late — not in building models that excel at programming in the abstract, where we've consistently had the best numbers in programming competitions, but in applying them to messy codebases, real-world data, and similar challenges. I think we realized this later than Anthropic did, so I see it as a credit to them, but it also improved our own execution. Now, comparing Codex with Claude, I believe we have achieved highly favorable results. Our team across the company has performed remarkably, creating a product that is not only competitive but leading in many, many respects. However, the core focus has never been the ups and downs of the news cycle; it is advancing toward AGI and ensuring its benefits reach everyone. That is something we have been extremely dedicated to, and the team has executed exceptionally well.
So I just want to say, you can't always judge from the outside, but if you think about the things we discussed, there is this focus, and it compounds across multiple timeframes, leading us to where we need to go.

Host: I have a million questions I'd like to ask in the remaining time, but one we touched upon was putting powerful models into the hands of many people. We've now reached a stage where some models are deemed too powerful — we're told they are only open to certain companies. Claude and Mythos have certainly made a lot of headlines, and I think they've also sparked more fear. I'm curious how you view this new moment. Have we reached a point where we need to start keeping these powerful models behind the scenes instead of putting them in everyone's hands?
Sam Altman: There have long been people in the world who want to keep AI controlled within a small group.
There are many different ways to justify this, some of which are valid, such as legitimate safety concerns. But I suspect that if what you want is to control AI — to say "only we can be trusted" — then fear-based marketing might be the most effective way to justify it.
That doesn't mean it isn't legitimate in some cases. But it is obviously incredibly powerful marketing to say, "We've built a bomb, and we're about to drop it on your head. We'll sell you a bunker for $100 million. You'll need it to run all your operations — but only if we choose you as a customer." Balancing these new capabilities while still believing that the world needs access to, use of, and understanding of this technology — and proposing new ideas for it — isn't always easy.
Our preparedness framework incorporated cybersecurity quite early on, and we've been continuously developing mitigation measures: considering how to release these models, how to deliver them to users through trusted access programs, and how subsequently to make more powerful models available to everyone. However, there will be increasing talk of certain models being "too dangerous to release."
Indeed, there will be some very dangerous models that must be released differently. But returning to the point Greg mentioned earlier—our goal is to benefit everyone, and at the same time—we don’t want to call it a marketing strategy, but it’s about bringing the entire world along on this journey with us, where we’ll provide you with increasingly advanced technology.
This comes with corresponding responsibility. We will do our best to help create an environment where the world can succeed. But we will try to avoid fear-based marketing as much as possible.
Host: Let me ask you directly. Do you think Mythos is just a lot of marketing?
Sam Altman: I’m confident that it excels in cybersecurity. We’ve been discussing this topic for a long time.
This has always been part of our model of the future, and there is a way to express it: our version is that these models will become much stronger at cybersecurity. Our preparedness framework includes it as a category. Here is our plan for deploying these models into the world; here is what our trusted access program looks like; here are the mitigation measures we have placed on the models.
I don't think Anthropic's preparedness framework treats it as a category. So I'm sure Mythos is an excellent cybersecurity model, but I believe we have a plan we are satisfied with for how to bring such capabilities to the world.
Host: When the incident involving Anthropic and the Department of Defense occurred, from my perspective it seemed like David Sacks and some individuals with long-standing ties to Elon were involved.
I could see them pushing to make certain things happen and drawing government attention to these issues. You intervened fairly quickly afterward and released an announcement that was more favorable to you. I mean, Elon was also very aggressive toward you. Some aspects of this are interesting to me, because there is a world where you and Anthropic are actually on one side, and Elon — and even Zuck to some extent, as I understand it — are on the other side applying pressure. Do you think Anthropic was treated fairly in that incident?
Sam Altman: No. I think there was a lot of bad behavior from all parties involved, but I don’t believe Anthropic was treated well.
Host: Could you elaborate on what specifically went wrong?
Sam Altman: I didn’t like — look, as things unfolded later, these models reached a cybersecurity threshold that clearly aligned with national security interests.
I think everything had a slightly different tone, but, you know, the threat of using a DPA and the actual use of supply chain risk designation — that’s not the kind of relationship I think our government and our AI efforts should have. We really care about supporting the U.S. government. And I think as these models become more powerful, this will become increasingly important.
I certainly don’t think it’s a good stance for labs to say, 'We have this superweapon, by the way, and we won’t collaborate with you to help defend the nation.' But I also don’t think it’s good for the government to engage in public disputes in the media and use heavy-handed tools owned by the government, which need to be used with great caution.
Greg: One thing about OpenAI is that we generally try to stay moderate, centrist, and rational, which is also what we attempt when dealing with the U.S. government. But I see no future, no good future, where leading AI efforts do not assist the U.S. government. If everything we say is true, if we believe what we say about the future trajectory of models—and I certainly do—then the government needs our help, and we are honored to provide it.
Host: You know, as a tech journalist, I'm older now. I mean, when I first started writing about tech, the big battle was basically Microsoft versus everyone else. They were the big villain, the proprietary software company. You had open-source companies as the opposition. You had Marc Andreessen and Netscape, you know, things played out in the media with some sharp words, but then people had their own philosophies and belief camps.
Clearly, these dynamics in the AI field are intense. So much seems tied to the personalities of you, Elon, Zuck, Dario, and Demis, along with various hostilities and worldviews, like something out of the Shakespearean drama you wrote about. How can we possibly move past all this? I mean, all of you seem so entrenched—I can't see a way out. And obviously, as we mentioned, the upcoming lawsuits will only escalate tensions rather than lead to reconciliation.
Altman: I'm sorry to say this, but some of the people involved believe that only they can get it right, because they think the risks are infinite and, for different reasons, they don't believe anyone else can do it right or they don't want anyone else to do it. I think that leads to some very toxic behavior. We can't control others' actions, but we will continue to advocate for treating this as a collective project humanity must achieve together, rather than one person or ideology winning, losing, or achieving sole victory, as you might put it.
Host: I mean, we would be remiss not to acknowledge this — it's super sensitive, but let's say it. Look at what happened to you personally in the past week. There have been countless sci-fi books written about factions that truly don't want technological progress to occur... you know, what happens when things become more extreme. I don't know — it feels like that switch is now flipping, along those lines, among those trying to stop progress.
Yes. I mean, you know, I read Richard Clarke's book years ago, where a character similar to Bill Joy decides AI cannot happen and goes around destroying data centers. These narratives have always been there. It just feels like, my God, I can't see how things could get more intense — and it doesn't seem to be getting better. I assume it will fluctuate, but directionally it will become more intense. It must feel like something out of what you've written about. Obviously, it must be terrifying.
Sam Altman: I mean, I don't think I have anything profound to say here. It was a crazy way to wake up. The first day I was in a kind of adrenaline rush, just trying to figure out logistics; the second day I realized, you know, there will be more of this, and it was deeply frustrating. I went through a real depressive cycle. It was very scary, yes, and I don't think I have anything particularly profound to say. I think apocalyptic rhetoric isn't helpful. I think the way some other labs talk about us isn't helpful. Honestly, I don't want to take sides, but I think the way Anthropic talks about OpenAI isn't helpful.
I hope calmer moments prevail.
Greg Brockman: One thing I'd like to say is that while we can't control others, we can control ourselves. What really impressed and surprised me was Sam's performance throughout this process. On that very day, he was still doing exactly what needed to be done, constantly pushing the mission forward. I don't take that for granted at all. I think the level of resilience Sam demonstrated is extremely rare and highly underrated.
Host: I mean, I've seen the same thing. I don't know — people will say, "Oh, you must feel relieved now," or something like that. It's hard to think of anyone in the past three years who has gone through more dramatic business and personal challenges...
The narrative is indeed quite dramatic.
People won't sympathize; they'll say you...
Sam Altman: If they don't sympathize, it doesn't matter. I just... The night before the incident at my house, I had invited some people over for dinner — some colleagues from work — and we were discussing the next phase. They said, "Ah, you've had a tough time in the media recently, but in a way it's also good." And I said, "Well, you know, at least no one wants to kill me."
Hmm.
Greg Brockman: You said that the night before.
Sam Altman: But it really made me realize, you know, people can say all sorts of harsh things; as long as they are just words, it's not a big deal. You just move on.
Host: You always tell me your dream is to go to Napa after retirement. I mean, why not?
We still have a lot of work to do.
Host: But now it is clear that someone will achieve AGI. There are about five companies that could reasonably do it. Which company do you most hope gets there first? Is that the key issue?
No, because you see, I think the key point is this—the missing piece is how we help people truly understand what this technology can do for them. I think that’s something every company can contribute to, right? I think it’s something we have our own perspective on. We talk about it a lot in this podcast. Fundamentally, I believe it’s somewhat the responsibility of everyone trying to create this technology to demonstrate its benefits and why people should want it.
Why should people protect and defend the ability to create it? Why does the U.S. need to maintain leadership here? Why is all this good not only for the country but also for you personally, for your future, and for the future of your children? It’s something we think about every day when we wake up.
I don’t think it’s an exaggeration; it’s something we’ve been talking about. We think about it, whether it’s quirky ideas, you know, the legal structure of our company—we’ve done it many times because we’re trying to find solutions for this mission, for what we consider so important. If others want to contribute to it, the more the better—it’s something we all should be doing, but it’s uniquely what drives us.
We firmly believe that if we can provide technology that allows everyone to thrive, and if we can give people more control over their future, it will lead to a better world, albeit through some twists and turns. I don't think everyone working in this field shares this belief. But anyone who does, we are delighted to collaborate with, because this is the mission by which we want to drive the world forward.
Host: Returning to an earlier question — I think we still have time to discuss a few more topics. In fact, I'd like to conclude with the question I'm about to ask.
Greg: Okay, we're ready.
Host: What I mean is, how do you perceive the seriousness of this trial?
Altman: I actually think this is a real opportunity for us to tell our story, because if you look at the history of OpenAI, whenever there was a disagreement, we often let the other side tell the story while we tried not to say, 'Well, that's not what really happened; let's talk about the truth.' But in this case, we finally had no choice, right? Because we had to defend ourselves, we had to state the truth, and we had to recount what happened. I am very proud because I spent a lot of time reviewing history and examining vast amounts of information. Of course, there are always things that can be taken out of context, but...
Host: Aha, you said something like that. Your diary is quite famous now.
Greg Brockman: I know, right? But the issue is — first of all, these are extremely personal documents, and having something so private taken from you and then weaponized is incredibly painful. And those specific sentences are the worst things the opposing side could find. You'd think, really? The problem is that they are all taken out of context, aren't they? The thing is, hey, we are in the middle of this...
negotiation, right? We all agreed that the only way forward for OpenAI was to become a for-profit entity. Sam, Ilya, Greg, Elon — we all agreed on this. We all said this is what we must do; this is for the mission. Now you're in the middle of this crazy negotiation, right? Elon says he needs majority ownership. He needs to be CEO. He requires full control. And we almost made it work. Like, okay, fine: we won't be equal partners; you need such a large stake; if that's what you say you need to be involved, we want you to participate, we can make it happen. Alright — Sam wouldn't be CEO; Elon would be CEO. He needs this so that everyone knows he's in charge. Fine. But absolute control?
Absolute control over OpenAI. Even if you say, okay, I’ll dilute my shares. I’ll give up control later. Then you start thinking, what is our mission? Do we truly believe in our mission? Do we genuinely care about this vision — that we want this technology to benefit everyone? There shouldn’t be one person controlling the entire future, no matter who that person is. That was the breaking point. That’s what led us to say no. So for years, we never told this story, but now we will. Therefore, I think this is indeed an opportunity for people to understand what truly drives us and what we stand for.
Altman: I think it’s insane for him to do this, but my concern at this moment is that he might decide to withdraw the lawsuit before the trial, and then we wouldn’t be able to do all of this. But I’m more than happy to explain everything to the world and turn the page on this chapter.
Host: Alright. So my question goes back to the beginning of this podcast, like the narrative I was talking about.
I feel like as an AI reporter, I grew up hearing that we would all die, we would all lose our jobs. I feel like I've heard that over and over again, and now part of your personal notebook has been made public. I'm curious, with the benefit of hindsight, how would you talk about this issue differently?
Greg: Well, I think the way we think about it today is how I wish I had talked about it back then, in a sense, but I don't know if we could have done that with the knowledge we had at the time.
For example, some of it even relates to the technology itself. Our idea in 2017, 2018 was that the way to build AGI would be through competitive multi-agent simulations. Imagine an island with a thousand agents, all fighting for survival and replication. You can see that if you put in a lot of computing power, maybe it would create something very smart.
But that smart thing wouldn’t be connected to human values at all. You’d have to have a separate step to figure out how you’d even communicate with this thing, right? It’s not growing up in the real world. It doesn’t have any concept of language. It has no connection to our reality. It’s just smart and powerful. That’s a very scary system, right? From the start, you’d have to think about how you might align it.
But instead we have this language-model route, which is rooted in our values and in understanding humans. We have chain-of-thought, which means you can actually monitor it, and if you handle things correctly in a technical sense, you have a pathway to make it trustworthy — so that it truly represents what motivates the AI. You realize we have a completely different technical path to the outcomes we're discussing, and this path is much more optimistic.
Altman: So I think we had to do some technical learning—what would this technology actually look like? How would it be created? What’s the way to make it useful? I think that’s something we couldn’t understand back then, but we understand it today.
Host: This has been a fascinating discussion. It’s great to have both of you here together.
Host: We’re just a small podcast. We’re honored that you took the time. I guess what I mean is, obviously we’ve covered a lot of ground, but the key point is that there will be new models in a relatively short period of time.
Altman: Yes. Really good new models.
Host: Yes. Well, thank you both.
Greg Brockman: Thank you very much for inviting us.
Greg: A new model that everyone can use.
Altman: Thank you.
Host: Thank you.
