MIT professor and economist Erik Brynjolfsson said recently that he’d be disappointed if artificial intelligence didn’t lift the current anemic 1.2% productivity growth rate to 3% or even 4%. This would be a good thing for business and government because it could potentially help with the labor shortage, drive earnings growth and increase tax revenues, which would ostensibly help address current debt levels.
This is one of the promised impacts of AI. Although the hype surrounding generative AI has narrowly propped up certain sectors of the market, such as AI startups and the “magnificent seven,” the macro effects have not been felt thus far as adoption remains largely experimental.
In this Breaking Analysis and ahead of Supercloud 4, ETR’s Erik Bradley and Daren Brabham join the program to share the latest trends on AI adoption, how gen AI is being used, some of the deployment models and the AI leaderboard, based on spending momentum and presence in the market.
AI initiatives steal from other budget buckets
First let’s look at the enterprise information technology market broadly and the impact AI spending is having on other sectors.
The graphic above shows the nearly 30 sectors tracked in the Enterprise Technology Research Technology Spending Intentions Survey (TSIS). It depicts Net Score, or spending momentum, on the vertical axis and presence, or Sector Pervasion, on the horizontal axis. The red dotted line at 40% indicates a highly elevated spending velocity in a sector.
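For readers new to the methodology, ETR's Net Score is commonly described as the percentage of respondents adopting or increasing spend on a platform minus the percentage decreasing or replacing it. Here is a minimal sketch based on that public description; the function name and the sample respondent counts are our own illustrative assumptions, not ETR data:

```python
def net_score(adopt, increase, flat, decrease, replace):
    """Sketch of a Net Score-style metric: (% adding or increasing spend)
    minus (% decreasing or replacing), expressed as a percentage."""
    total = adopt + increase + flat + decrease + replace
    return 100 * ((adopt + increase) - (decrease + replace)) / total

# Hypothetical sector: 20 adopting, 40 increasing, 30 flat, 7 decreasing, 3 replacing
print(net_score(20, 40, 30, 7, 3))  # -> 50.0, i.e., above the elevated 40% line
```

A sector crossing the 40% red dotted line in this framing simply means net spending intentions are strongly positive, not that 40% of customers are buying.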
Note the squiggly line on ML/AI. As we exited the isolation economy, AI momentum decelerated along with overall IT spending. But look what happened the month after ChatGPT was announced. ML/AI has now taken over as the No. 1 sector in terms of spending momentum.
The rub is that, as we said in our research note Lower for Longer, overall enterprise tech spending expectations have decelerated over the past two years from a high of 7.5% growth to the current 2.9% level. The issue is that chief financial officers and chief executive officers are generally not allocating new discretionary spending for chief information officers to pursue AI. Rather, the trend appears to be that AI initiatives are stealing budget from other sectors, many of which are being deemphasized as AI experiments are funded.
ETR’s Erik Bradley adds the following context to this data:
There are a bunch of data points here I want to add. One, yes, ETR is growing the number of sectors it follows. We actually just added FinTech, so we’re continuing our coverage growth and we’ll continue to do so as needed. When it comes to spending, yes, 3% doesn’t sound that exciting, but we were coming from 0.8% at one point last year, and 3% is huge growth from there. The other thing I really want to point out is that when we were capturing those really bad numbers at under 1%, it was the world’s largest organizations, the Fortune 100, Fortune 500 and Global 2000, that had the worst spend. Right now we’re seeing them back up at the mean. So when the largest organizations in the world are actually increasing their spend, I do think we’re coming out of this, and I do think that’s a good thing. So that’s point one.
Point two, when we’re talking about the sector Net Score for ML/AI, it’s at historic levels. We’ve been tracking spend for 12 years, and right now ML/AI has a sector Net Score of 52%. That’s up from 38% just 12 months ago. By comparison, the overall survey average is only 20%, so ML/AI is more than two and a half times the survey average. Only container orchestration comes close by breaking 40%. AI is really in rarefied air; it’s all alone in that realm.
And then quickly, your comment regarding AI stealing budget: We did a recent drill-down study, and it showed that 67% of those Global 2000 respondents stated that they’re not using net new dollars for their generative AI evaluations. They’re actually taking the money from other areas, and only 33% are introducing new money to fund gen AI.
AI adoption is tracking the hype…
Let’s take a look at the latest spending data on AI adoption.
The chart above shows results from the most recent October ETR spending survey. In this view, IT decision-makers were asked whether their organizations are evaluating gen AI and large language models for business use cases and, if so, which ones. You can see the steep decline from the April 2023 survey in organizations not evaluating gen AI, with the gray-to-yellow bars falling from 52% to 26% today. And you can see the steep uptick in the use cases from April to October.
One important side note worth mentioning: The percentage of customers in the Global 2000 evaluating gen AI is much higher than the industry average, with only 14% of the Global 2000 saying they’re not evaluating gen AI.
… but most activity remains experimental
Looking at the data below in terms of what’s actually happening in production, we see two significant takeaways: 1) Most of the action remains in the blue bars at the evaluation phase; and 2) The use cases are really those most typically associated with ChatGPT-like work — productivity-focused and pretty straightforward.
That’s not to imply these use cases are a negative, but generally they don’t point to radical process reengineering going on within organizations.
Daren Brabham makes the following additional points about this data:
I think especially in this macro environment, organizations are going to be as cautious as they can. They’re getting whipped up in the excitement and the buzz around generative AI at rates we’ve never seen before for a technology. But tight budgets and uncertain business environments mean organizations are still dipping their toes in the waters that feel most familiar.
So first off, on the previous chart, it’s remarkable how fast organizations are evaluating generative AI for business use cases. Six months ago, half of organizations were not evaluating, and now only a quarter are not. That’s a pretty remarkable pace. But second, relevant to the chart you just showed, we see pretty safe, tried-and-true use cases that people are starting with. Businesses have been testing and refining AI chatbots for customer service for several years now. So it’s a natural place to start when a company tries to see if generative AI can do something better, quicker or more nuanced than building the logic flows of a typical chatbot.
And that’s why it’s the biggest bar among the use cases in full production. Text and data summarization, which is mostly where people are familiar with ChatGPT’s capabilities, is a close second in production, so no surprise there either. Then comes code generation and documentation, which builds on years of baby steps by programming platforms that have been trying to build in automation, shortcuts and various flags to help there. So all that’s coming through. Notably, as you said, the pattern we’re seeing is that production levels run a little less than half of evaluation levels for each of these business use cases, but it’s still a remarkably fast pace and kind of exciting to see where it goes.
The power law of generative AI
Let’s now introduce the Power Law of Gen AI that the CUBE Research team put together a while back.
The graphic above adapts the power law framework to gen AI. The vertical axis represents the size of the LLMs and the horizontal axis represents model specificity. The orange line takes an example from the historical music industry, where four labels (Universal, Warner, Sony and EMI) held nearly 90% of the market. Hence the hard right angle.
Adapting this to gen AI, we see that the large cloud companies, along with Nvidia Corp. and OpenAI LP, are capturing much of the narrative today, where consumer adoption is driving volume and industry economics. But other third-party and open-source models are, we believe, pulling the torso (shown by the red arrows) up to the right to smooth the curve. And then, as in many industries, we expect a long tail of domain-specific AI within various industry sectors. We take this out to the telco industry and the edge, which is where we think much of the AI inference will occur, on platforms with very attractive performance per watt such as Arm Ltd.’s systems-on-chip.
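To make the shape of the curve concrete, here is an illustrative sketch of how a Zipf-like power law concentrates share in a few head players while leaving a long tail of niche, domain-specific models. The exponent, the market size of 100 "players" and the top-four cutoff are our own hypothetical assumptions for illustration, not ETR data:

```python
def shares(n, alpha=1.5):
    """Hypothetical power-law market: player at rank k gets weight k^(-alpha),
    normalized so all shares sum to 1."""
    weights = [rank ** -alpha for rank in range(1, n + 1)]
    total = sum(weights)
    return [w / total for w in weights]

market = shares(100)
head = sum(market[:4])   # share captured by the four head players
tail = sum(market[4:])   # everything else: the long tail of niche models
print(f"head share: {head:.2f}, tail share: {tail:.2f}")
```

The steeper the exponent, the harder the "right angle" in the curve; the music-label example above corresponds to a very steep head, while the red arrows in the graphic suggest gen AI's torso and tail may carry more weight than that.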
We asked the ETR guests to give us their thoughts on the above graphic. Their comments are summarized as follows:
- Daren speaks about the rhythm or curve of innovation as new technologies are introduced.
- He points out that while dominant players capture a significant share, there are also niche use cases that emerge in the long tail.
- There’s a question about what it would take for organizations to adopt new technologies like generative AI.
- A recent ETR study shows that the main hurdle for organizations is to improve data quality. He highlights that data quality is fundamental since large language models only perform optimally with good data.
- The data should be organized, findable and stored properly in data warehouses or lakes.
- He predicts that the focus will shift toward addressing the state of data readiness in organizations to leverage AI effectively.
- He then sets up Erik, who has a focus on security in relation to AI.
- Erik agrees with the previous points and emphasizes the ongoing significance of security, given the current cyberthreats.
- Erik mentions that AI, especially generative AI, plays a crucial role in enhancing security and protecting vital sectors like utilities, healthcare and banking.
- He believes that even without a direct economic ROI driver, the adoption of generative AI in security will occur thanks to its inherent importance.
Where will gen AI work be performed – public or private infrastructure?
Let’s dig into the ETR data to see if there’s any evidence that this power law model is taking shape. Pursuant to the previous conversation, there’s a lot of discussion in the community about where AI work will take place. Interestingly, when you ask those not yet in production what’s holding them back, the No. 1 answer is “We’re still in the eval phase,” but the two big reasons beyond that are: 1) data privacy or security concerns; and 2) compliance and legal concerns. The point is that many feel that, because of these issues and concerns about IP leakage, much of this work will occur on private infrastructure.
The graph above is from an earlier ETR drill-down study (note the smaller N), but it’s still instructive. ETR asked customers whether they use public or private infrastructure for gen AI workloads. The data is clear: It’s literally a 50/50 mix. And when you look at the G2000, the data is much more heavily weighted toward private infrastructure, with the first two bars jumping to 42% and the third dropping to 18%. Again, small N’s (only 17 G2000 respondents in this drill-down), but it’s still an interesting data point.
Which AI players are catching the wave?
When you look at the leaderboard from the ETR data, it’s dominated by the big three cloud vendors and OpenAI. Below we show a similar XY graph with Net Score, or spending momentum, on the vertical axis and pervasiveness in the data on the horizontal axis. Note the position of OpenAI, which shot out of nowhere and is a dominant force today. But the big three clouds are prominent. What’s notable here is their respective positions and changes in position relative to the announcement of ChatGPT.
- Microsoft Corp. shot to the momentum lead with its OpenAI partnership.
- Amazon Web Services Inc., which has always been a player in AI with tools such as SageMaker, lost ground on the X axis; and
- Google LLC, with services such as Vertex AI, Bard and the like, gained ground on both the X and Y axes.
As well, you can see Databricks Inc. in the mix above the 40% line, as is Anthropic. Very interestingly, toward the bottom of the momentum scale, you see Oracle Corp. and IBM Corp., both in the game and both with on-premises or hybrid estates.
Erik Bradley and Daren Brabham provide the following additional details:
There is a lot to unpack here. I’m going to go off-script for a second because I think it’s really interesting when you showed those vendor trends: A couple of names, again, Anthropic, just jumped out of the gates, right? We’re seeing that at 50% [Net Score]. OpenAI, when we first started tracking it in the ETS, broke records. The evaluation rates were absolutely through the roof. We transferred it over here to the larger survey, and again, it is by far the highest we’ve ever captured. Not surprised to see Microsoft, AWS and Google all in there. Databricks is also very well-positioned, and Dataiku is almost at 50%. So we’re talking a little bit about these data science tools that are helpful. And then, to your point about IBM, I do want to point out that 16% seems low, it’s at the bottom of the chart, but that’s up from negative 8% just 12 months ago. So kudos to them. They had the technology early, and now IBM is getting a resurgence and a second life.
But go back to what we were originally saying. I think one of the most interesting things that we’ve found so far in our data is that there’s an even 50/50 split between people that are using vendors or embedded AI into things they already have or just going out and doing it on their own. We have not yet seen a clear leadership between “Hey, I’m going to let my tools and my services go ahead and provide it for me,” or “I’m going to build this myself.” So I think we’re going to see that play out. It’s going to be pretty interesting as we do.
I think it’s a horse race, right? It’s an exciting space for tech vendors to pay attention to because there’s no declared winner yet, so there’s lots of movement to be had and lots of data to gather on where people are headed. No surprise, the three big cloud players, Microsoft, AWS and Google, are up in the mix. If I were to conjecture why people are investing so heavily, I think generative AI brought this excitement and validated, for many top executives who weren’t as close to the ground-level ML and AI work, that there really is some power and excitement here. So maybe there was a go-ahead to say, “Let’s invest in what we already have, let’s add more oomph to the platforms we’re already on.” They’re also just good platforms in general with really good MLOps capabilities, so maybe no surprise there. The blend Erik is referencing, between using the embedded capabilities within a vendor versus going to a standalone generative AI company, I think is pretty remarkable.
Analyst angle: What to watch going forward
We’ll close with a summary and some critical factors to watch in the days and months ahead.
The business case. IT budgets aren’t growing dramatically, and enterprises need to show value or AI project spending will be squeezed. At the same time, organizations fear missing out, which could heighten the urgency to find value.
Productivity is the magic metric. Macro productivity numbers will be key to fulfilling the promises of AI. Growth has a way of solving problems, and meaningful productivity improvements and revenue acceleration from AI will lift all boats. That hasn’t happened broadly yet.
Cloud versus on-prem. There’s lots of optionality in the cloud today, but large installations are keeping model training off public infrastructure to protect IP. We’re watching for meaningful on-prem adoption across industries.
Competition is fierce. The competitive landscape is evolving and sands are shifting. Trends still favor Microsoft and OpenAI, but AWS and Google are not standing still. Nor are the other third-party providers. Anthropic, for example, is playing the field and other firms are investing heavily. AI embedded in software (such as Snowflake Inc., Databricks, Salesforce Inc., SAP SE, ServiceNow Inc., Workday Inc. and the like) should drive demand in 2024.
The edge. AI inference at the edge is a wild card and could produce highly disruptive economics that find their way into enterprise IT.
Erik Bradley and Daren Brabham provided additional thoughts, summarized as follows:
Erik Bradley’s perspective:
- Follow the money. The focus will be on the financial implications of adopting AI.
- AI is perceived as being self-sustainable as it can generate savings by reducing repetitive tasks or even full-time employee hours.
- It is unlikely that CFOs will hinder the adoption of generative AI, given its potential to help companies keep pace with the market.
- Follow the use cases. Real-time use cases for gen AI are evident in areas like customer support, chat and call center technology.
- Pace of technology advances. Technologies are evolving and being adopted rapidly, as shown by how quickly ChatGPT made its presence known. This unprecedented speed from hype to production suggests the technology is here to stay.
Daren Brabham’s perspective:
- Generative AI’s impact on existing technologies:
- There’s a notion that generative AI might surpass certain technologies designed to improve processes or provide shortcuts, such as robotic process automation.
- Vendors in the RPA, business intelligence, analytics and other sectors need to consider embedding AI capabilities to remain relevant.
- Generative AI’s potential to revolutionize various industries also poses challenges for vendors to adapt or risk becoming obsolete.
- Concerns and considerations surrounding generative AI adoption:
- Implementing generative AI interfaces requires careful governance to ensure efficient use and prevent redundancy.
- Data literacy becomes vital, ensuring users can differentiate between accurate information and potential “hallucinations” from the data.
- Training and education will play a significant role, equipping individuals to interact efficiently with transformative technologies like generative AI.
The pace of innovation catalyzed by gen AI feels unprecedented in tech. Is that belief recency bias, or is gen AI a legitimately new force? We’ll continue to provide insights, data-based opinions and deep research to track its progress and help customers take advantage of the latest technologies, with a focus on business value.
Many thanks to Erik Bradley and Daren Brabham for their partnership and collaboration for Breaking Analysis.
Supercloud 4 reminder
Don’t forget, Supercloud 4 is happening on Tuesday, Oct. 24. Today’s Breaking Analysis is a preview to Supercloud 4. It’s a live virtual event from our Palo Alto studio. The topic is gen AI and specifically the transformative effects on industries. We’ve talked a lot about the impacts on the technology industry — Daren Brabham closed by discussing several technology sectors that are ripe for disruption. But the other piece of Supercloud 4 is looking at industry transformations. For example, we have experts discussing the impacts in healthcare, financial services, manufacturing and other sectors.
Because that’s where the multitrillion-dollar economic impact is going to be felt. So go to supercloud.world and sign up.
Keep in touch
Thanks to Alex Myerson and Ken Shifman on production, podcasts and media workflows for Breaking Analysis. Special thanks to Kristen Martin and Cheryl Knight, who help us keep our community informed and get the word out, and to Rob Hof, our editor in chief at SiliconANGLE.
Also, check out this ETR Tutorial we created, which explains the spending methodology in more detail. Note: ETR is a separate company from Wikibon and SiliconANGLE. If you would like to cite or republish any of the company’s data, or inquire about its services, please contact ETR at firstname.lastname@example.org.
Here’s the full video analysis:
All statements made regarding companies or securities are strictly beliefs, points of view and opinions held by SiliconANGLE Media, Enterprise Technology Research, other guests on theCUBE and guest writers. Such statements are not recommendations by these individuals to buy, sell or hold any security. The content presented does not constitute investment advice and should not be used as the basis for any investment decision. You and only you are responsible for your investment decisions.
Disclosure: Many of the companies cited in Breaking Analysis are sponsors of theCUBE and/or clients of Wikibon. None of these firms or other companies have any editorial control over or advance viewing of what’s published in Breaking Analysis.
Image: AF DigitalArtStudio/Adobe Stock