Data delays are a fact of life for enterprise AI developers.
Is there a major “lag time” between artificial intelligence investments and artificial intelligence data performance?
Yes, there is, concluded a new study by San Francisco-based Monte Carlo, a data observability company.
In its new State of Reliable AI survey, Monte Carlo analysts say that while data leaders are establishing generative AI foundations inside their companies, their data platforms aren’t strong enough to support those new GenAI initiatives.
The study tracked 200 company data leaders at a time when the C-suite is grappling with GenAI adoption. While it’s usually the chief information officer or the chief technology officer tasked with helming AI rollouts, corporate finance officers are watching closely, many of them concerned about the initial price tag of implementing comprehensive AI programs, some of which call for eight-figure investments.
Data leaders, the report stated, are tasked not only with shepherding their companies’ GenAI initiatives from experimentation to production, but also with ensuring that the data itself is AI-ready: secure, compliant, and, most of all, trusted.
The data from the study certainly backs that sentiment up, as companies struggle to make their GenAI investments work, let alone pay off.
This from the study:
• 100% of data professionals feel pressure from their leadership to implement a GenAI strategy and/or build GenAI products.
• 91% of company leaders (VP or above) have built or are currently building a GenAI product.
• 82% of respondents rated the potential usefulness of GenAI at least an 8 on a scale of 1-10, but 90% believe their leaders do not have realistic expectations for its technical feasibility or ability to drive business value.
• 84% of respondents indicate that it is the data team’s responsibility to implement a GenAI strategy, versus 12% whose organizations have built dedicated GenAI teams.
In publishing these results, Monte Carlo says there’s a “disconnect” between company data departments and the business management realm. That’s leading to big problems.
“Data leaders feel the pressure and responsibility to participate in the GenAI revolution, but some may be forging ahead in spite of more primordial priorities—and in some cases, against their better judgment,” the report noted.
Big Data Problems Already Existed
One vexing issue is that company technology executives were already dealing with an “exponentially greater volume” of data than in decades past. Since bringing GenAI architectures aboard, 91% of data managers said that “both applications and the number of critical data sources has increased even further,” which “deepens the complexity and scale of their data estates” in the process.
Even worse, there’s no clear path to a long-term resolution, short of throwing more cash at the problem.
“Data is the lifeblood of all AI – without secure, compliant, and reliable data, enterprise AI initiatives will fail before they get off the ground. Data quality is a critical but often overlooked component of ensuring ethical and accurate models, and the fact that 68% of data leaders surveyed did not feel completely confident in their data reflects the unsung importance of this puzzle piece,” said Lior Solomon, VP of Data at Drata, a Haworth, N.J.-based data services company. “The most advanced AI projects will prioritize data reliability at each stage of the model development life cycle, from ingestion in the database to fine-tuning or RAG.”
Yet another problem is that many respondents still rely on tedious and unscalable data quality methods, such as manual testing and monitoring, Monte Carlo noted in the study. Some 54% of data professionals surveyed are “depending exclusively” on manual testing. That’s fueled some financial headaches: about 66% of survey respondents said they’ve experienced a data incident in 2024 that cost their organization $100,000 or more.
“This is a shocking figure when you consider that 70% of data leaders surveyed reported that it takes longer than 4 hours to find a data incident,” the study reported. “What’s worse, previous surveys commissioned by Monte Carlo reveal that data teams face, on average, 67 data incidents per month.”
That’s a heavy load for the technology and business executives facing GenAI rollouts. Still, the reality is that data teams will have to double down and look for data issues before they become problematic, as the sketch below illustrates.
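To make that concrete, here is a minimal sketch of the kind of automated check the study contrasts with manual testing. Everything in it is illustrative rather than drawn from Monte Carlo’s product: the “events” table, the “loaded_at” column, and the thresholds are hypothetical, and a production deployment would run checks like these on a schedule against a data warehouse rather than a local SQLite file.

```python
# Minimal sketch of an automated freshness/volume monitor. Assumes a
# hypothetical "events" table whose "loaded_at" column stores UTC
# ISO-8601 timestamps (with offset); all names and thresholds are
# illustrative, not taken from any vendor's API.
import sqlite3
from datetime import datetime, timedelta, timezone

FRESHNESS_LIMIT = timedelta(hours=4)  # assumed threshold, tune per table
MIN_DAILY_ROWS = 1_000                # assumed volume floor, tune per table

def check_table(conn: sqlite3.Connection, table: str) -> list[str]:
    """Return alert messages for stale or under-filled tables."""
    alerts = []
    now = datetime.now(timezone.utc)

    # Freshness check: how old is the newest row?
    (last_loaded,) = conn.execute(f"SELECT MAX(loaded_at) FROM {table}").fetchone()
    if last_loaded is None or now - datetime.fromisoformat(last_loaded) > FRESHNESS_LIMIT:
        alerts.append(f"{table}: no new rows within {FRESHNESS_LIMIT}")

    # Volume check: did the last 24 hours load enough rows?
    cutoff = (now - timedelta(days=1)).isoformat()
    (rows_today,) = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE loaded_at >= ?", (cutoff,)
    ).fetchone()
    if rows_today < MIN_DAILY_ROWS:
        alerts.append(f"{table}: only {rows_today} rows loaded in the last 24h")

    return alerts

if __name__ == "__main__":
    conn = sqlite3.connect("warehouse.db")  # hypothetical local stand-in
    for message in check_table(conn, "events"):
        print("ALERT:", message)
```

Run every few minutes by a scheduler, even simple freshness and volume checks like these can surface a stalled pipeline long before the four-plus-hour detection times most respondents reported.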
“In 2024, data leaders are tasked with not only shepherding their companies’ GenAI initiatives from experimentation to production, but also ensuring that the data itself is AI-ready, in other words, secure, compliant, and most of all, trusted,” said Barr Moses, co-founder and CEO of Monte Carlo. “As validated by our survey, organizations will fail without treating data trust with the diligence it deserves. Prioritizing automatic, resolution-focused data quality approaches like data observability will empower data teams to achieve enterprise-grade AI at scale.”
Brian O’Connell is a former Wall Street bond trader and the best-selling author of two books, ‘The 401k Millionaire’ and ‘CNBC’s Creating Wealth’.
Brian is also a finance and business writer for esteemed national platforms and publications, including CNN, TheStreet.com, CBS News, The Wall Street Journal, U.S. News & World Report, Forbes, and Fox News.