January 8, 2024

Where's the money going to be made in AI?


Money and talent are flowing toward AI at a rapid rate. Funding to AI-related startups surpassed $68.7 billion (the size of the DOJ’s budget) in 2023. 25% of U.S. venture capital in 2023 went to AI companies, up from 11% in 2022.

There are currently 57,000 active job postings on LinkedIn related to AI, and the number of active AI job posts increased 22% in July-October compared to the three months prior.

When I was in my 30s, I benefited from coming of (professional) age during the rise of the Internet and e-commerce. If you’re in your 30s, generative AI is your Internet moment – a major technological wave that you can ride for your career. You’re probably looking at this AI wave and thinking, “Hmm - maybe I should get in the door before it’s too late.” And if you’re not, you should be.

But let’s do the math first. I was around for the dot-com boom – and the crash. Some of these companies will end up being the next Google. Others (many more of them) are going to be the next Pets.com.

Here’s our take on the companies that will work out and the ones that won’t. Disagree? Shoot us a note. We’ll share the best predictions in a future post.

– Greg

The evidence so far: There’s money to be made in AI

AI isn’t the next NFTs – we’re already seeing evidence of revenue.

  • Microsoft attributed 3 points of its 29% growth in Azure cloud revenue in Q3 to AI.
  • Tom Tunguz estimated that Microsoft Copilot is doing $360 million in ARR.
  • OpenAI is currently generating revenue at a pace of $1.6 billion a year – nearly $2 million per employee.
  • Datadog CEO Olivier Pomel said “AI-native customers” contributed 2.5% of the company’s annualized revenue in Q3, around $52 million.
  • Canva says its projected $1.7 billion in annualized revenue (a 21% surge since the last forecast in March) can be attributed to its investment in AI.

Most of this can be attributed to paid user growth as a result of AI. OpenAI charges $20/month for ChatGPT Plus and has 2 million subscribers. Canva says 65 million users have joined the platform in the last year, coinciding with its AI investment.

And these numbers are small compared to expectations for the next 5-10 years. Amazon CEO Andy Jassy has said that generative AI could generate “tens of billions in revenues.”

But there are also warning signs

Revenue models aren’t business models. AI is expensive to serve: an average 250-word query and answer costs somewhere between $0.03 and $3.60 to generate.

So far, few companies have figured out how to translate AI revenue into AI profits. Users who pay $20/month for ChatGPT Plus are limited in the number of messages they can send because of those costs. Microsoft is losing an average of $20 per month per user running its GitHub Copilot service, and as much as $80 per month on heavy users. Hard to make up for that in volume.
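To see why those message caps exist, here’s a rough back-of-the-envelope sketch. The per-query cost range is the estimate above and the subscription price is ChatGPT Plus’s; the break-even framing is ours:

```python
# Back-of-the-envelope unit economics for a $20/month AI subscription.
# The cost-per-query range is the estimate cited above; everything else is illustrative.

SUBSCRIPTION_PRICE = 20.00   # $ per month (e.g., ChatGPT Plus)
COST_PER_QUERY_LOW = 0.03    # $ per ~250-word query and answer, cheap model
COST_PER_QUERY_HIGH = 3.60   # $ per ~250-word query and answer, expensive model

for label, cost_per_query in [("cheap model", COST_PER_QUERY_LOW),
                              ("expensive model", COST_PER_QUERY_HIGH)]:
    breakeven_queries = SUBSCRIPTION_PRICE / cost_per_query
    print(f"{label}: subscriber goes underwater after ~{breakeven_queries:.0f} queries/month")

# cheap model: subscriber goes underwater after ~667 queries/month
# expensive model: subscriber goes underwater after ~6 queries/month
```

At the expensive end of the range, a subscriber only has to send a handful of prompts a month before the $20 is gone – the same math that has Copilot losing money on heavy users.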

Most AI companies right now are in the first stage of startup growth: scale. OpenAI is trying to lure developers by dropping its prices significantly – the Amazon / Uber play of years past. The playbook: use investor capital to fund massive growth, and figure out the sustainable business model later.

This will likely work for OpenAI, Anthropic, and other big tech players that have billions in funding and an early lead (as it eventually did for Uber and Amazon). But for everyone else (there are 15,000 AI startups in the US alone), investor patience will be shorter. These companies will need to figure out how to make money.

Where will the money be made?

If you’re going to invest your time or capital into one of these companies, you’ll be better off with a company that has a shot at becoming a real business, not just cool tech. So let’s unpack where money will be made in AI.

Most analysts approach this question through the lens of a product map – the winners will be the picks and shovels (the hardware and infrastructure that support AI), and less money will be made by the applications and features built on top. It’s how Amazon, eBay, Shopify, and Stripe won the e-commerce game: by providing the platform for e-commerce, not the actual products sold on the Internet.

And sure, picks and shovels is always a good place to be. If you can get a job at Nvidia (hardware) or Azure’s AI arm (infrastructure), that’s likely a great place to invest the next 3-5 years of your career. Same with companies that have built foundational AI models like OpenAI, Anthropic, or Midjourney. If you’re looking for a job in AI, you should be applying to these organizations every six months. Don’t take no for an answer – these companies are the future “big tech” of AI.

But the reality is that most of us will be evaluating opportunities in the application layer – companies that offer products built on top of large language models like GPT or Llama.

Today, many analysts have written off the application layer as “wrappers” whose only real value is a user interface on top of an AI model (the real technology). And clearly, many applications will struggle to move from users to revenue to profits (see above), and/or will face disruption by AI models themselves as platforms like ChatGPT and Claude increase their capabilities.

But just as the rise of mobile, cloud, and SaaS spawned platform winners AND application layer winners, AI will do the same. There will be hundreds of AI applications that cross the chasm from user growth to revenue growth to profitability.

So here’s how we’d evaluate AI application layer companies for their ability to make money.

The two types of AI companies that will make money

Once you remove the veneer of AI, these companies are tech businesses just like any other tech business. They’ll face the same problems that every other application developer faces, namely:

  • Unit economics: Can they make more money than they spend on a per-unit basis – i.e., can they charge more for a generated output than the tokens cost to generate it? (See the sketch after this list.)
  • Distribution: Can they figure out the right channels to get their AI product to customers (e.g., D2C, enterprise contracts, etc.)?
  • Differentiation: Can they build enough of a competitive advantage via proprietary data, UI, etc., given that the core AI technology itself will not be differentiated?
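Here’s a minimal sketch of that first question – the price, token count, and inference cost below are entirely made up for illustration:

```python
# Per-unit economics: does one generated output earn more than it costs to serve?
# All numbers here are hypothetical, for illustration only.

def unit_gross_margin(price_per_output: float,
                      tokens_per_output: int,
                      cost_per_1k_tokens: float) -> float:
    """Gross margin on a single output, as a fraction of the price charged."""
    serving_cost = (tokens_per_output / 1000) * cost_per_1k_tokens
    return (price_per_output - serving_cost) / price_per_output

# Hypothetical app: charges $0.50 per generated draft, burns ~4,000 tokens
# per draft at an assumed $0.06 per 1K tokens of inference.
margin = unit_gross_margin(price_per_output=0.50,
                           tokens_per_output=4_000,
                           cost_per_1k_tokens=0.06)
print(f"Gross margin per unit: {margin:.0%}")  # Gross margin per unit: 52%
```

If that margin is negative – or only positive at a price the market won’t bear – the company has a revenue model, not a business model.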

There are typically two strategies to building a sustainable business model as a tech startup. Companies either focus on rapid growth (funded by deep-pocketed venture capitalists) or have to develop a product/use case that justifies sustainable margins.

1. Growth companies

Companies where the problem is so ubiquitous that growth happens extremely quickly and VCs will fund it. This is the strategy of most AI models today (e.g. Anthropic, Inflection, OpenAI). These companies will need to raise hundreds of millions, and won’t worry about profitability for the foreseeable future. There may be a few AI apps that can adopt this strategy – but they’ll need to serve horizontal use cases in massive markets. Think Writer, which recently raised a $100 million Series B to be the “full stack generative AI platform for the enterprise.” This is a riskier play because these companies need a massive user base to win, but a few winners will emerge.

2. Margins companies

Companies that solve a problem so acute that customers (most likely enterprise buyers) are willing to spend a lot of money to fix it. The problem has to be well-suited to AI’s strengths – in other words, there’s a lot of proprietary data available and the model can be trained on it. This will be the case in areas like healthcare, where nurses spend 25% of their time on costly administrative tasks, there’s a wealth of patient data already available to train AI, and most importantly, there’s a high willingness to spend to fix this problem.

You might have the opportunity to invest your time or dollars at a growth company like Writer. But most application companies will likely need to find some path to margins, especially in a post-ZIRP world. They’ll need to find a problem that’s so acute that customers are willing to pay more than it costs to use AI to solve the problem.

Assessing AI application companies

If you’re evaluating an AI application startup, here are three questions we’d ask to understand if the company is solving an acute problem that could eventually translate to profits.

1. Does the company augment or take over repetitive, costly knowledge work?

One way to identify an acute problem is to find one that companies are already paying high-cost labor to solve. Think paralegals, video editors, underwriters, or software developers. These are knowledge workers making $75-$150 per hour to do repetitive – but important – work.

AI companies that solve these problems are likely to find buyers who will pay enough to cover token costs for this work, since they’re already paying expensive humans.

Take Harvey, an AI startup that builds custom LLMs for law firms. The average law associate in New York City makes $210,000 per year – if we assume the lawyer works 60 hours a week, that’s about $70 per hour. So if AI can complete the same amount of work at the same quality for less than $70 per hour, law firms will likely shift some of that work to LLMs.
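The arithmetic behind that hourly figure, as a quick sketch – the salary and hours come from the paragraph above; the 50-week working year is our assumption:

```python
# Effective hourly cost of a NYC law associate, using the figures above.
annual_salary = 210_000   # $ per year
hours_per_week = 60
weeks_per_year = 50       # assumed working weeks per year

hourly_cost = annual_salary / (hours_per_week * weeks_per_year)
print(f"Effective cost: ~${hourly_cost:.0f}/hour")  # Effective cost: ~$70/hour
```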

Startups like Descript (video editors), Planck (underwriters), and Synthesia (video talking heads) will try to make a similar case to replace or augment high cost, repetitive knowledge work.

2. Does the company have proprietary training data?

Quality is a key component of how much a buyer is willing to pay for AI. Namely, can the AI product do the job as well as, or better than, the knowledge worker? That won’t be possible without robust training data.

Many of today’s hot AI healthcare startups stand out here. Tempus (worth $8.1 billion) has a massive, proprietary database of clinical and molecular data. Butterfly Network has a robust dataset of ultrasound images used to train its AI algorithms.

In addition to increasing the quality and value of the product, differentiated training data also offers a moat against competitors – which brings us to the final (and hardest) question to ask when evaluating AI applications.

3. Can the company avoid disruption by foundational AI models like OpenAI?

This is the toughest question to answer, because it’s impossible to predict the moves of OpenAI and others. But just as Amazon can put a startup out of business with a single press release, OpenAI, Microsoft, and the other big tech AI companies will be able to do the same to AI startups.

A good example here is Dify, the AI chatbot builder. At Section, we used Dify this summer to prototype our AI tutor bot. But if we’d done that work four months later, there would have been no need for Dify, since OpenAI added custom GPTs to ChatGPT Plus – which have nearly the same capabilities. Overnight, Dify’s business got a lot harder.

Nearly every investor will ask this question: Why won’t OpenAI just release this feature? So you should be asking the same when evaluating an AI application startup. Look for a cogent, confident, but realistic answer from the company, and do your own analysis.

Is the problem the company is trying to solve specific enough that it will need unique training data? Is it in a big enough market to build a viable business, but not so broad that Microsoft or OpenAI will quickly expand to that use case?

Our down and dirty advice

It’s still early, and the risks associated with AI-powered businesses are real and multiplying. Those include LLM hallucinations, pending government regulation, and copyright lawsuits – the New York Times filed its suit against OpenAI and Microsoft just last week.

But this is also an opportunity. These risks will scare many people away – which rewards those willing to handle the risk.

First: If you can get into a company like OpenAI, do it.

It’s a once-in-a-lifetime opportunity, and it’s fun to work at a company with massive growth that’s part of everyday life and culture. It’s why it’s so hard to get a job at Google, Amazon, Nike, or Genentech – these companies define their categories, grow quickly, and want the best and the brightest.

If you’re (more likely) evaluating an AI startup that’s in the application layer, look for one trying to solve a hard, high-value problem.

Look for “unsexy” AI companies in a specific vertical with clear value for the buyer (consumer or enterprise). More boring, but also more necessary (and less likely to go out of business). Look for companies with cash runway, a go-to-market plan, and a clear path to actual users.

If you don’t want to change jobs right now, get into AI at your current company.

Having AI on your resume in any way will benefit you – even if it’s at a company or project that ends up going the way of many e-commerce startups in the late 90s (and can’t find a business model). At least you will be closer to the action and able to evaluate other AI opportunities.

Right now, very few people are AI experts. You don’t need a lot of experience to have more than almost everyone else. Get some experience managing or selling AI products, and you’ll be in the top 10% of candidates within 12 months. And if given the opportunity, run your company’s AI app or project through the three criteria above.

Bottom line: However you do it, join the AI class in 2024.

Greg Shove
Greg Shove, CEO