Weeks before OpenAI launched ChatGPT in November 2022, the buzzy artificial intelligence company’s executive team devoted an entire meeting to debating one question: should they even release the tool?
“If you know Sam [Altman], he likes to cycle through topics at a high rate, so the fact that we spent this much time on one topic meant it was important,” Brad Lightcap, COO of OpenAI, told CNBC, adding, “It was a debate – people were not 100% sure that this was going to be the right thing to do or something worth our time.”
At the time, Lightcap said, OpenAI had a limited number of GPUs and capacity, and largely thought of itself as a company that builds tools for developers and businesses. He recalled that Altman, the CEO, was a big proponent of “just trying it,” his thesis being that there was something important and personal about text-based interaction with the models.
The move paid off. ChatGPT broke records as the fastest-growing consumer app in history, and now has about 100 million weekly active users, along with more than 92% of Fortune 500 companies using the platform, according to OpenAI. Earlier this year, Microsoft invested an additional $10 billion in the company, making it the biggest AI investment of the year, according to PitchBook, and OpenAI is reportedly in talks to close a deal that would lead to an $86 billion valuation.
But recently, those milestones have been eclipsed by a roller coaster couple of weeks at the company. Last month, OpenAI’s board ousted Altman, prompting resignations – or threats of resignations – including an open letter signed by virtually all of OpenAI’s employees, and uproar from investors, including Microsoft. Within a week, Altman was back at the company. Last Wednesday, OpenAI announced a new board, including former Salesforce co-CEO Bret Taylor, former Treasury Secretary Larry Summers and Quora CEO Adam D’Angelo. Microsoft obtained a nonvoting board observer position.
CNBC caught up with Lightcap last month after the company’s first in-person event, Dev Day, and then briefly chatted with him again last week after the leadership changes.
This interview has been edited for length and clarity.
We’re coming up on the one-year anniversary of ChatGPT. This time last year, weeks before its debut, DALL-E was in research preview, Stable Diffusion was getting a lot of attention, and your chatbot didn’t exist yet. What was it like on the team then?
I think at that point we very much were thinking of ourselves as a company that built tools for developers, so it was a little bit of a new flavor of thing for us to have to think about, ‘OK, this is something that the average person could pick up and use.’
We had a flavor of that with DALL-E – we had launched it in the spring, and we’d let people basically play with it and we saw a lot of fanfare and excitement there. But we always thought – because DALL-E was such a visual medium – that it was going to be the high watermark for what the consumer level of interest would be in these tools. So I think when we were looking at ChatGPT, we were using DALL-E as a little bit of a benchmark for what success might look like, in terms of just how many people would use this, who would be interested in it, and whether this would be something that people played with for a little bit and then decided it isn’t really a tool and is more of a toy.
I remember us taking bets on how large ChatGPT would ever get. I think I had one of the more aggressive bets, which was a million concurrent users at any given point at the apex of our use, and we were trying to plan against that, and of course I was trying to run all the models against that, as the finance person. So that was kind of where we were, and we were very wrong.
What did you predict as far as the business opportunity, and how did the rollout and adoption differ from your expectations?
At the time, there was no way to know all the things that it could be useful for. And I think that’s the paradox, somewhat, of this technology – it’s so broadly useful, and it kind of seeps into all the cracks of the world and all the cracks of your life as a tool in places that you didn’t know you needed a tool.
So you do the business analysis ahead of time, and you try and think, “OK, well, what would people use this for? What would drive sustained consumption of it?” And you try and assign it utility. You try and think about it as, “People might use it for creative writing, they might use it for this or that.” And in a way, there were so many things that now, in retrospect, we know people use it for, but at the time, we could never conceive of – to justify why this was ever going to be such a big thing.
There’s maybe an interesting lesson there, which is that the business analysis doesn’t always tell the story, but being able to take a bet and really clue in on where something is going to have broad-based utility, broad-based value, and where it’s going to resonate with people as a new thing – sometimes that has to trump the business analysis.
In August, 80% of Fortune 500 companies had adopted ChatGPT. Now, as of November, you’re at 92%. As far as that remaining 8% of companies that haven’t adopted the tool yet, have you noticed any trends?
My guess is it’s probably heavy industry in some senses. … Big, capital-intensive industries like oil and gas, or industries with a lot of heavy machinery, where the work is more about production of a good and a little bit less about being an information business or a services business.
In your eyes, what’s the most overhyped and underhyped aspect – specifically – of AI today?
I think the overhyped aspect is that it, in one fell swoop, can deliver substantive business change. We talk to a lot of companies that come in and they want to kind of hang on us the thing that they’ve wanted to do for a long time – “We want to get revenue growth back to 15% year over year,” or “We want to cut X million dollars of cost out of this cost line.” And there’s almost never a silver bullet answer there – there’s never one thing you can do with AI that solves that problem in full. And I think that’s just a testament to the world being really big and messy, and that these systems are still evolving, they’re still really in their infancy.
The thing that we do see, and I think where they are underhyped, is the level of individual empowerment and enablement that these systems create for their end users. That story is not told, and the things that we hear from our users or customers are about people who now have superpowers because of what the tools allow them to do, that those people couldn’t previously do.
Let’s talk about the business of generative AI. Critics say there are consumer apps galore, but is there a risk of saturation? What does the technology really mean for business?
We’re in this really early period, and I think it’s really important that we maintain the ability for the world to sustain a very high rate of experimentation and a very high rate of trial and error. If you look at historical trends of past phase shifts in technology, there’s always this really important experimentation phase. It’s very hard to get the technology right from day zero. We get there eventually – the end state of the technology, we eventually converge to that point – but it’s only after really trying a lot of things and seeing what works and then seeing what doesn’t, and for people to build on top of the things that work, to create the next best things.
My spicy take on this is I think the most important things that get built on top of this technology are actually things that haven’t been created yet. Because it takes some cycles of building with the tools to really understand what they’re capable of, and then how to combine the tools with other aspects of technology to create something that’s really greater than the sum of its parts. And so that’s to be expected, I think it’s very healthy.
Years ago, people were surprised by AI’s level of use in trucking – it was seen by some as too traditional an industry, and now we’re at the point where AI is part of virtually every sector. As far as adoption trends you’re seeing in recent years, is there a through line like that – an industry using AI in a new or different way that you’re especially surprised by?
There’s definitely high pull with technical industries. I think one thing that we’ve seen is it’s a great technical assistant – whether you’re a software engineer, mechanical engineer, chemist or biologist, there’s a vast pool of knowledge that sits on the other side of your discipline, and your mastery of it kind of dictates your effectiveness.
I think people spend their careers just trying to master that discipline, by trying to absorb as much knowledge as they can about the domain. And especially in some domains, whether it’s, you know, biology or chemistry or AI, the literature on the field is constantly evolving and constantly expanding – there’s constantly new things being discovered, new studies being done. So I don’t know if it’s the most surprising thing per se, but one of the coolest things we see is ChatGPT acting almost like a sidekick in that regard, almost like a research assistant. … We feel the pull from those industries in a way that, sitting back where I did in November of 2022, I would not have expected.
We’re now a couple of months into ChatGPT Enterprise. I remember you launched after less than a year of development, with more than 20 beta tester companies like Block and Canva. How, specifically, has usage grown? Who are some of your biggest clients since launch, and how much of a revenue driver is it for OpenAI?
The enthusiasm has been overwhelming. We’re still a smallish team, so we don’t offer the product self-serve as of today – we will imminently – but we’ve tried to get through as many interested parties as we can get through. …
A lot of the focus of the last two months was really making sure that those first few customers that we implemented and onboarded saw value in the product. … We’re still working through waitlists of many, many, many thousands, and our hope is to get to everyone, and that’s going to be a goal for 2024.
Now that we have ChatGPT Enterprise, what’s the current biggest revenue driver for OpenAI? How do you think that will evolve?
We almost never take a revenue-centric approach to what we build and how we launch stuff. We almost always take a usage-centric approach, which is that we very much look at the things we build as needing to qualify in one of two areas – they need to be really useful tools for developers to go off and build things or they need to be really useful abstractions for users to find more value in the product. So that was basically how we looked at [the] launch.
It actually kind of maps quite perfectly if you look at GPTs, for example – it’s something that checks the box, hopefully, on that second part: “Is this a way to abstract the power of the intelligence in ChatGPT and to point it at something that’s very specific, and to give it the right context, the right tools, the right connections, to be able to get really good at solving for a specific thing?” That may be a thing that’s useful in your work, or maybe a thing that’s useful in your life, or it may just be a fun thing – it may just be that you create a funny GPT and it’s a cool thing to have.
ChatGPT going multimodal – offering image generation and other tools within the same service – is a big priority for the company that you outlined at Dev Day. Tell me about why it’s so important.
The world is multimodal. If you think about the way we as humans process the world and engage with the world, we see things, we hear things, we say things – the world is much bigger than text. So to us, it always felt incomplete for text and code to be the single modalities, the single interfaces that we could have to how powerful these models are and what they can do.
So you start to layer in vision capabilities. The fact that a computer can see something that’s happening in the world, and describe it and engage with it and reason about it, is probably the most astounding thing that I personally have seen at OpenAI in my five years here. I still can’t really wrap my head around that and the implications of it. But you can start to see, if you squint, how things that weren’t possible previously now start to become possible.
You think about things as simple as being able to help visually impaired people better understand the world around them with low latency and high quality. You think about ways that companies now can better understand their equipment, for example, and can create experiences for consumers that can kind of demystify how the thing in front of them works just by pointing a camera at it. You think about being able to help people better understand and analyze things in an educational capacity – a lot of people are really visual learners – and the ability to see something and be able to engage with a graphic in a way that is more suitable for their learning style, that’s an entirely different capability that we’ve unlocked.
So that’s what’s exciting to me: it now gives us a way to use the technology that aligns more closely with the way humans engage with the world – and ultimately makes the technology more human.
We know that OpenAI’s GPT-4 large language model is likely more trustworthy than GPT-3.5 but also more vulnerable to potential jailbreaking, or bias. Can you take me through how the new Turbo model announced at Dev Day differs, if at all, and your plans for addressing those issues?
I think we’re probably going to release a Turbo model card [a transparency tool for AI models]. So that’s probably the better place to reference some of the technical benchmarking.
What’s your biggest hope for the year ahead? What do you think future versions of GPT will be able to accomplish that current versions can’t?
I tend to think of the progress curve here as moving along the quality of reasoning ability. If you think about what humans fundamentally do well, it’s that we can take a lot of different concepts, and combine those things together, specific to the thing we want to do or something we’re being asked to do, and create an outcome that is specific to that request in a creative way. We do it at work every day, we do it in artistic capacities every day, and it’s the thing that kind of underpins how we made the world the world.
That’s the direction I think we’d like to see the technology go – that its reasoning ability is dramatically enhanced; it can take increasingly complex tasks and figure out how to decompose those tasks into the pieces it needs to be able to complete them at a high level of proficiency; and then, adjacent to all of that, to do it really safely – hence the emphasis we put from a research perspective on getting the safety aspects of the technology right. And as the systems become more capable, we need to keep the safety bar moving in parallel, because these systems will become more and more autonomous over time. And this doesn’t work if you can’t get the safety aspect right too.
In the past year, what’s one day that really stands out to you at the company?
The day we launched GPT-4 was really special. People, I think, don’t quite realize how long we’d been sitting with GPT-4 before we released it. So there was an internal level of excitement about it, and an internal feeling of just knowing that this was going to be a real shift in what these models are capable of and what people consider to be a really high-quality language model. It’s the type of thing you want to share with the world as soon as you have it. And I think we as a team get a lot of energy from the world’s reaction to these things, and the excitement that we see in our customers, our developers, our users, when they get to engage with it. There was that pent-up excitement that had built over the preceding seven or eight months of knowing that that moment was coming. …
We didn’t do a big launch event the way we did with Dev Day. It was one of those launches where you just kind of hit the button one morning and all of a sudden it didn’t exist and now it does exist. I almost like those more – the bigger launch moments are fun, but I got to spend the day with the team here in San Francisco, and … there was a moment right after we launched it, I think we were in our all-hands space in our cafe, and everyone just looked around at each other, and there was kind of this mix of excitement and relief and exhaustion, but everyone was smiling. And that’s a very special thing … you don’t get a lot of moments like that.
What did you personally do when you got home to celebrate?
I think I worked until late in the night.
In its less than 10 years, OpenAI has gone from a nonprofit to a “research and deployment” company. People have asked about what that means and what your structure is like, as well as how much Microsoft owns. Can you provide some clarity on that journey?
High-level, we always knew that we wanted to have a structure that, at its core, retained the original OpenAI – the OpenAI nonprofit. When we structured the company, the question was how to do that. And that was basically the work I did when I first joined OpenAI: figuring out, ‘Is there a way to actually place OpenAI’s mission – and its nonprofit as the embodiment of that mission – at the center of what our new structure would be?’
So that’s the first thing to understand, I think, about OpenAI: It’s not a normal company in that sense. It really is a company that was designed to wrap around the original nonprofit quite literally, structurally, but also spiritually to be an extension of the nonprofit’s mission. Its duty, primarily, is to carry out the nonprofit’s mission, which is to build artificial general intelligence that’s safe and broadly beneficial for humanity. So maybe it sounds crazy, and certainly there would have been easier structural and technical ways to build companies that would have come with smaller legal bills, but it was really important to us to get that right. So I don’t know if we did – time will tell. One nice thing is the structure is really adaptable. And so as we learn more over time and have to adapt to the world, we can make sure that the structure is set up for success, but I think the core piece of it is we want to retain OpenAI’s core mission as the raison d’être for the company.
And Microsoft’s ownership?
I won’t comment on the specifics of any of the structural aspects, but it’s a structure that’s designed to partner with the world, and Microsoft happens to be a great partner. But we very much think about how we make this structure something that is extensible into the world, and has an engagement with the world that can fit with the nonprofit’s mission. So I think that was kind of partly also what underpinned the profit cap model.
You’ve worked with Sam Altman since OpenAI’s founding. What are the main differences between you at work? What strengths and weaknesses do you fill in for each other?
Sam is fun to work with – moves incredibly fast. I think he and I have that in common, that we like to maintain high velocity on all things.
I think where we balance each other out is that Sam is definitely future-oriented – I like to think that he’s trying to live years in the future, and I think he should live years in the future, and he’s quite good at that. My job is to make sure that the way that we built the company, the way we build our operations, the way that we build our engagement model with our customers and our partners, reflects not only where we think the world is going on that five-plus-year basis, but also accomplishes the things that we want to accomplish today.
The challenge that we have is that the technology is changing quickly. So there’s a big premium that we put on being able to try and educate the world on how to use the technology, the type of work we do, from safety all the way through to capabilities, how we think about products and the shifting face of our products. And there’s an orchestration that has to get done really well, to do that right at high speed, when the ground underneath you is changing quickly. So that’s where I think, probably, hopefully, my value-add is: focusing on getting that right – building a great team that can help us get that right. If you can get that right and put one foot in front of the other, I think you eventually end up on the right five-year path.
We saw a lot change at OpenAI within the span of about a week. Now that Sam is back at the company and the new board structure has been released, what are your thoughts on how that will impact the day to day? And do you anticipate additional changes to structure happening in the coming months?
I don’t expect any day-to-day change – our mission is the same, and our focus remains doing great research and building for and serving customers, users, and partners. We have shared that we have an initial board now, and expect to add more board members.
What’s the general mood like at the company right now?
The last couple weeks brought the company together in a way that is hard to describe. I feel a tremendous amount of gratitude to our team and a deep appreciation for our customers and partners, who were incredibly supportive throughout. That support really energizes us to continue to work that much harder toward our mission. Personally, I feel very focused.
[Lightcap and OpenAI declined to comment further on specifics of the circumstances around Altman’s ouster and reappointment.]