November 21, 2023

What does the OpenAI implosion mean for you?


On Friday morning, I was pitching a board member on my vision to build 100 Section GPTs for the upcoming GPT store.

My thought: The GPT store could become one of the most valuable stores ever built in tech (similar to Apple’s app store), and we should move quickly to occupy shelf space. Why not have at least 100 GPTs to help managers complete high-quality performance reviews or set their teams’ OKRs? Now that’s the business school of the future.

A few hours later, Sam Altman was ousted as CEO, and I was obsessively refreshing OpenAI news and thinking, “Well, shit. Maybe the GPT store won’t even launch.” 

I wasn’t alone – tens of thousands of GPT developers (and other OpenAI true believers) watched in horror over the weekend, asking themselves: “Is my commitment to OpenAI and their technology going to turn out to be a massive mistake?”

I don’t think so. Not yet, at least. Here’s my take on what’s happening at OpenAI and what it means for you.

P.S. Want to learn more from me on AI? Join the AI Mini-MBA, the four-week program building the new AI class of professionals. Use code AIMBA for 25% off.

What’s happening at OpenAI 

(Note: I’ll probably be proven wrong on this in a matter of hours. That’s fine – making predictions is an exercise in futility.) 

If you buy into the current speculation, OpenAI co-founder / researcher Ilya Sutskever (and the rest of the board) believed his co-founders were moving too quickly to commercialize OpenAI and paying too little attention to its safety and security as a technology. 

That’s fair – but why now? I think it could be one of a few reasons: 

  1. OpenAI made a technological breakthrough (as Sam Altman hinted at during a conversation last week) and Sutskever thought, “We’re not ready for the implications of this technology on humanity.”
  2. The OpenAI team was taken by surprise by the insane consumer demand for ChatGPT. When OpenAI was founded as a non-profit, they didn’t realize they’d have hundreds of millions of users and a projected $1.3 billion in revenue, which makes a big breakthrough even scarier. (Sutskever himself said, “I will admit, to my slight embarrassment…when we made ChatGPT, I didn’t think it was very good.”)
  3. Sutskever was uncomfortable with Altman taking money from Saudi investors to fund a new chip venture, or from SoftBank to build an AI phone – both of which potentially jeopardize the mission to build responsible AI.
  4. Altman was unwilling to give Sutskever the computing resources he needed to do his research on safety and security, preferring to allocate compute to user growth. (Note: OpenAI paused GPT-4 signups last week due to lack of capacity.)

Most likely, it’s some combination of the above and/or a few things we’ve yet to find out. Bottom line, Altman/Brockman had a taste (or big gulp) of their futures – they could build the most powerful and iconic tech company ever. They could be the next Gates/Ballmer, Page/Brin or Jobs/Jobs. Untold wealth and influence were around the corner. Not so fast, said the non-profit (but controlling) board.

I have to give huge kudos to the board. They did what they were set up to do (note: not what most boards are set up to do) – safeguard humanity from the dangers of reckless AI development – under immense pressure and scrutiny. Based on the structure of the company, their job is not to make shareholder returns, and in that context, they probably made the right call.

Right call, really bad execution, reflecting their lack of board and business experience. Lots of downstream impacts they did not expect – including that almost everyone at OpenAI would threaten to quit, meaning that servers could go down, they’d lose control of the technology, Microsoft could sue them, and they’d go out of business. Oops. So much for AI safety and going slow and steady. Train wreck.

The biggest lesson might be: never get between Silicon Valley workers and their financial exit. Sure, some OpenAI employees make millions every year and are already financially set. But most of them are not. The secondary offering from Thrive Capital was going to provide a lot of liquidity to a lot of employees who have been working 6-7 days a week for 5+ years. Once you start to plan to spend money you don’t have, you get REALLY pissed when that money goes away.

Right now we’re in a holding pattern. Microsoft has hired Altman and former OpenAI president Greg Brockman, but we don’t know what their next move will really be. Microsoft holds the cards – they could start hiring away the hundreds of OpenAI developers who have threatened to quit, or they (and other investors) could eject the interim CEO and re-install Altman at OpenAI by the end of the week. 

I don’t really see Sam Altman working at MSFT (200,000 employees) for very long – so this feels like a play to exert more pressure on the board to reverse course.

What this means for you (a member of the AI class)

If you’re as bullish on AI as I am, you were probably using ChatGPT at work, either to optimize personal processes or build something new. And now you’re wondering what you should do. 

Here’s what I’d tell you:

  1. Nothing has changed (yet). ChatGPT still works. It’s possible OpenAI will turn it off, but I think the risk is very low right now. Ignore the soap opera and spend the time improving your prompting.
  2. If you’re using GPT, keep using it. The area of least risk is optimization and efficiency. So keep using GPT (and other AIs) to find low-cost productivity wins, embed AI in your workflows, and save time in your workday.
  3. If you’re building something new, keep building – but build multi-model. AI isn’t going away. If it’s not OpenAI’s model, it’ll be Google’s or Microsoft’s or someone else’s. Keep building your AI products, but build model-independent so you’re not dependent on GPT (see the sketch after this list). Our Prof AI bot is being built on the OpenAI Assistants API – but we will also test Anthropic’s API, since we use Claude every day and prefer it over GPT for certain tasks.
  4. Pitching the enterprise might get tough. If you’re pitching your bosses on new AI projects, they might get spooked for the next few months. Some are already skeptical (is this the next crypto?) and the chaos from the past week won’t help. So build on tools that won’t spook them as much (e.g., Microsoft, AWS), and focus your talk track on productivity and efficiency gains – versus big transformational investments that look even riskier now.
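
To make “build model-independent” concrete, here’s a minimal sketch of what that abstraction can look like. It assumes the openai (v1+) and anthropic Python SDKs, and the model names are placeholders – this is an illustration of the pattern, not how our Prof AI bot is actually wired up.

```python
# Minimal sketch of a model-independent chat wrapper.
# Assumes: openai>=1.0 and anthropic Python SDKs installed, API keys in the
# environment, and placeholder model names - adjust for whatever you deploy.
from abc import ABC, abstractmethod


class ChatProvider(ABC):
    """The one interface the rest of your product codes against."""

    @abstractmethod
    def complete(self, system: str, user: str) -> str:
        ...


class OpenAIProvider(ChatProvider):
    def __init__(self, model: str = "gpt-4"):
        from openai import OpenAI  # reads OPENAI_API_KEY from the environment
        self.client = OpenAI()
        self.model = model

    def complete(self, system: str, user: str) -> str:
        resp = self.client.chat.completions.create(
            model=self.model,
            messages=[
                {"role": "system", "content": system},
                {"role": "user", "content": user},
            ],
        )
        return resp.choices[0].message.content


class AnthropicProvider(ChatProvider):
    def __init__(self, model: str = "claude-2.1"):
        from anthropic import Anthropic  # reads ANTHROPIC_API_KEY from the environment
        self.client = Anthropic()
        self.model = model

    def complete(self, system: str, user: str) -> str:
        resp = self.client.messages.create(
            model=self.model,
            max_tokens=1024,
            system=system,
            messages=[{"role": "user", "content": user}],
        )
        return resp.content[0].text


def get_provider(name: str) -> ChatProvider:
    """Swap vendors with a config value instead of a rewrite."""
    providers = {"openai": OpenAIProvider, "anthropic": AnthropicProvider}
    return providers[name]()


if __name__ == "__main__":
    llm = get_provider("openai")  # or "anthropic" - the calling code doesn't change
    print(llm.complete(
        system="You are a coach helping managers write performance reviews.",
        user="Draft three strengths-focused review bullets for a junior analyst.",
    ))
```

The point of the sketch is that the rest of your product only ever calls complete(), so moving from OpenAI to Anthropic (or Google, or an open model) becomes a config change rather than a rebuild.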

Want to learn more from me on AI? Join the AI Mini-MBA, the four-week program building the next AI class of professionals. Use code AIMBA for 25% off.

Greg Shove, CEO