In the research, Andreessen Horowitz spoke with dozens of Fortune 500 and enterprise leaders, and surveyed 70 more, to understand how they're using, buying, and budgeting for genAI.
"We were shocked by how significantly the resourcing and attitudes towards genAI had changed over the last 6 months," the firm wrote.
Nearly every single enterprise they spoke with saw promising early results from generative AI experiments. And these firms are planning to increase their generative AI spend anywhere from 2X to 5X in 2024.
Which highlights from the research matter most?
I got the scoop from Marketing AI Institute founder/CEO Paul Roetzer on Episode 89 of The Artificial Intelligence Show.
Says Andreessen Horowitz:
"In 2023, the average spend across foundation model APIs, self-hosting, and fine-tuning models was $7M across the dozens of companies we spoke to. Moreover, nearly every single enterprise we spoke with saw promising early results of genAI experiments and planned to increase their spend anywhere from 2x to 5x in 2024 to support deploying more workloads to production."
"The budget numbers really jumped out to me," says Roetzer. "Those are not insignificant numbers."
That money is shifting from one-off innovation budgets to permanent software line items.
AI delivers plenty of benefits. But the one enterprises measure most right now is productivity gains.
Says the firm:
"Enterprise leaders are currently mostly measuring ROI by increased productivity generated by AI. While they are relying on NPS and customer satisfaction as good proxy metrics, they’re also looking for more tangible ways to measure returns, such as revenue generation, savings, efficiency, and accuracy gains, depending on their use case."
This definitely syncs with what Roetzer is seeing in his work with enterprises.
The obvious ROI play is making the organization immediately more efficient and productive with AI. But, says Roetzer, you have to benchmark performance before and after AI. Without these benchmarks, you're just guessing.
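As a rough illustration of what that benchmarking can look like, here's a minimal sketch in Python. The task names, volumes, times, and hourly rate are hypothetical placeholders, not figures from the a16z research; the point is simply that you need a "before" baseline so the "after" gain is measurable.

```python
# Minimal sketch of a before/after productivity benchmark.
# All task names, times, and rates below are hypothetical placeholders.

HOURLY_COST = 75  # assumed fully loaded cost per employee hour

# (tasks per month, hours per task before AI, hours per task after AI)
tasks = {
    "draft_campaign_brief": (40, 3.0, 1.2),
    "summarize_call_notes": (120, 0.5, 0.1),
}

total_hours_saved = 0.0
for name, (volume, before, after) in tasks.items():
    saved = volume * (before - after)
    total_hours_saved += saved
    print(f"{name}: {saved:.1f} hours saved per month "
          f"({(before - after) / before:.0%} faster)")

annual_savings = total_hours_saved * 12 * HOURLY_COST
print(f"Estimated annual savings: ${annual_savings:,.0f}")
```

Swap in your own baselines and the same arithmetic turns productivity gains into the kind of tangible savings figures the report says enterprises are looking for.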
Every enterprise the firm interviewed is now testing multiple models.
"Just over 6 months ago, the vast majority of enterprises were experimenting with 1 model (usually OpenAI’s) or 2 at most. When we talked to enterprise leaders today, they’re are all testing—and in some cases, even using in production—multiple models, which allows them to 1) tailor to use cases based on performance, size, and cost, 2) avoid lock-in, and 3) quickly tap into advancements in a rapidly moving field."
This, too, lines up with what Roetzer is seeing.
Some models are just better at certain use cases.
"People want to avoid getting locked in and they want to quickly tap into advancements," he says.
Right now, enterprises are more concerned with control than with cost. Says Andreessen Horowitz:
"Control (security of proprietary data and understanding why models produce certain outputs) and customization (ability to effectively fine-tune for a given use case) far outweighed cost as the primary reasons to adopt open source."
This is leading to plenty of decisions driven by cloud service providers, says Roetzer. The AI models your existing cloud provider offers are a natural choice.
"In a big company where you already have a trusted provider who already has access to all of your data, you are way more likely to trust them to infuse this stuff in and you don't have to get them through procurement again."