Economics of Generative AI
What’s the business model for generative AI, given what we know today about the technology and the market?
OpenAI has built one of the fastest-growing businesses in history. It may also be one of the costliest to run.
The ChatGPT maker could lose as much as $5 billion this year, according to an analysis by The Information, based on previously undisclosed internal financial data and people involved in the business. If we’re right, OpenAI, most recently valued at $80 billion, will need to raise more cash in the next 12 months or so.
I’ve spent some time in my writing here talking about the technical and resource limitations of generative AI, and it has been very interesting to watch these challenges become clearer and more urgent for the industry that has sprung up around this technology.
The question that I think this brings up, however, is what the business model really is for generative AI. What should we be expecting, and what’s just hype? What’s the difference between the promise of this technology and the practical reality?
Is generative AI a feature or a product?
I’ve had this conversation with a few people, and heard it discussed quite a bit in the media. The difference between a technology being a feature and a product is essentially whether it holds enough value in isolation that people would purchase access to it alone, or whether it demonstrates most or all of its value only when combined with other technologies. We’re seeing “AI” tacked on to lots of existing products right now, from text/code editors to search to browsers, and these applications are examples of “generative AI as a feature.” (I’m writing this very text in Notion, and it’s continually trying to get me to do something with AI.) On the other hand, we have Anthropic, OpenAI, and assorted other businesses trying to sell products where generative AI is the central component, such as ChatGPT or Claude.
This can start to get a little blurry, but the key factor I think about is this: for the “generative AI as product” crowd, if generative AI doesn’t live up to customers’ expectations, whatever those might be, those customers will discontinue use of the product and stop paying the provider. On the other hand, if someone finds (understandably) that Google’s AI search summaries are junk, they can complain and turn them off, and continue using Google’s search as before. The core business value proposition is not built on the foundation of AI; it’s just an additional potential selling point. This results in much less risk for the overall business.
The way that Apple has approached much of the generative AI space is a good example of conceptualizing generative AI as a feature, not a product, and to me their apparent strategy has more promise. At the last WWDC, Apple revealed that they’re engaging with OpenAI to let Apple users access ChatGPT through Siri. There are a few key components to this that are important. First, Apple is not paying anything to OpenAI to create this relationship: Apple brings access to its highly economically attractive users to the table, and OpenAI has the chance to turn those users into paying ChatGPT subscribers, if it can. Apple takes on no risk in the relationship. Second, this doesn’t preclude Apple from making other generative AI offerings, such as Anthropic’s or Google’s, available to its user base in the same way. They aren’t explicitly betting on a particular horse in the larger generative AI arms race, even though OpenAI happens to be the first partnership to be announced. Apple is of course working on Apple Intelligence, its own generative AI offering, but it’s clearly targeting those capabilities to augment its existing and future product lines (making your iPhone more useful) rather than selling a model as a standalone product.
All this is to say that there are multiple ways of thinking about how generative AI can and should be worked into a business strategy, and building the technology itself is not guaranteed to be the most successful path. When we look back in a decade, I doubt that the companies we’ll think of as the “big winners” in the generative AI business space will be the ones that actually developed the underlying tech.
What business strategy makes sense for development?
Ok, you might think, but someone’s got to build it if the features are valuable enough to be worth having, right? If the money isn’t in the actual creation of generative AI capability, are we going to get this capability at all? Will it reach its full potential?
I should acknowledge that lots of investors in the tech space do believe there is plenty of money to be made in generative AI, which is why they’ve already sunk many billions of dollars into OpenAI and its peers. However, I’ve also written in several previous pieces about why, even with these billions at hand, I suspect pretty strongly that we are going to see only mild, incremental improvements to the performance of generative AI from here, rather than a continuation of the seemingly exponential technological advancement of 2022–2023. (In particular, the limited supply of human-generated data available for training can’t be expanded just by throwing money at the problem.) This means I’m not convinced that generative AI is going to get a whole lot more useful or “smart” than it is right now.
With all that said, and whether you agree with me or not, we should remember that having a highly advanced technology is very different from being able to create a product from that technology that people will purchase, and from building a sustainable, renewable business model around it. You can invent a cool new thing, but as any product team at any startup or tech company will tell you, that is not the end of the process. Figuring out how real people can and will use your cool new thing, communicating that, and making people believe that your cool new thing is worth a sustainable price is extremely difficult.
We are definitely seeing lots of proposed ideas coming out of many channels, but some of these ideas are falling pretty flat. OpenAI’s new beta of a search engine, announced last week, already had major errors in its outputs. Anyone who’s read my prior pieces about how LLMs work will not be surprised. (I was personally just surprised that they didn’t think about this obvious problem when developing the product in the first place.) Even the ideas that are somehow appealing can’t just be “nice to haves” or luxuries; they need to be essential, because the price required to make this business sustainable has to be very high. When your burn rate is $5 billion a year, in order to become profitable and self-sustaining, your paying user base must be astronomical, and/or the price those users pay must be eye-watering. A rough back-of-envelope calculation, sketched below, makes the point.
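To illustrate, here is a minimal Python sketch of that arithmetic. The figures are assumptions for illustration only: the $5 billion annual burn rate from The Information’s reporting quoted above, and a $20/month subscription, roughly what ChatGPT Plus charges consumers today; real margins, per-user serving costs, and enterprise pricing would change the picture.

```python
# Back-of-envelope: what does a $5B/year burn rate imply about the
# paying user base? All figures below are illustrative assumptions.

annual_burn = 5_000_000_000        # ~$5B/year burn, per The Information
monthly_price = 20                 # assumed $20/month consumer subscription
annual_revenue_per_user = monthly_price * 12   # $240/year per subscriber

# Subscribers needed just to break even (ignoring per-user serving
# costs, which would push this number even higher):
breakeven_users = annual_burn / annual_revenue_per_user
print(f"Break-even subscribers: {breakeven_users:,.0f}")
# -> Break-even subscribers: 20,833,333

# Or fix the user base and solve for the required price. With an
# assumed 10 million paying users:
users = 10_000_000
required_monthly_price = annual_burn / (users * 12)
print(f"Required price at 10M users: ${required_monthly_price:.2f}/month")
# -> Required price at 10M users: $41.67/month
```

However you slice it, the business needs tens of millions of paying subscribers, prices well beyond what most consumer subscriptions command, or both.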
Isn’t research still inherently valuable?
This leaves people who are most interested in pushing the technological boundaries in a difficult spot. Research for research’s sake has always existed in some form, even when the results aren’t immediately practically useful. But capitalism doesn’t really have a good channel for sustaining this kind of work, especially not when participating in this research costs mind-boggling amounts of money. The United States has been draining academic institutions of resources for decades, so scholars and researchers in academia have little or no chance to even participate in this kind of research without private investment.
I think this is a real shame, because academia is the place where this kind of research could be done with appropriate oversight. Ethical, security, and safety concerns can be taken seriously and explored in an academic setting in ways that simply aren’t prioritized in the private sector. Academic research culture and norms allow money to be valued below knowledge, but when private-sector businesses run all the research, those priorities change. The people our society trusts to do “purer” research don’t have access to the resources required to participate meaningfully in the generative AI boom.
Now what?
Of course, there’s a significant chance that even these private companies don’t have the resources to sustain the mad dash to train more and bigger models, which brings us back around to the quote that opened this article. Because of the economic model governing our technological progress, we may miss out on real opportunities: applications of generative AI that make sense but don’t stand to make the kind of billions necessary to cover the GPU bills may never get deeply explored, while socially harmful, silly, or useless applications get investment because they pose greater opportunities for cash grabs.
For more of my work, visit www.stephaniekirmer.com.
Further Reading
https://www.theatlantic.com/technology/archive/2024/07/searchgpt-openai-error/679248/
https://www.washingtonpost.com/technology/2024/03/10/big-tech-companies-ai-research/