Employees attribute AI project failure to poor data quality
A clear majority of employees — 87% — peg data quality issues as the reason their organizations failed to successfully implement AI and machine learning. That’s according to Alation’s latest quarterly State of Data Culture Report, produced in partnership with Wakefield Research, which also found that only 8% of data professionals believe AI is being used across their organizations.
For the report, Wakefield conducted a quantitative study of 300 data and analytics leaders at enterprises with more than 2,500 employees in the U.S., U.K., Germany, Denmark, Sweden, and Norway. The enterprises were polled on their progress in establishing a culture of data-driven decision-making and the challenges they continue to face in embracing it.
According to Alation, 87% of professionals say that inherent biases in the data feeding their AI systems produce discriminatory results, creating compliance risks for their organizations. Survey-takers pointed to the need for curation and governance, data literacy and understanding, and collecting data from more varied sources.
A lack of executive buy-in was also cited as a top reason AI isn't being used effectively, with 55% of respondents ranking it above a shortage of employees with the skills to create AI models. As for data quality itself, data professionals named inconsistent standards across data collection, compliance and privacy issues, and a lack of democratized access to data as the three most common blockers.
As Broadridge VP of innovation and growth Neha Singh noted in a recent piece, many firms try to develop AI solutions without having clean, centralized data pools or a strategy for actively managing them. Without this critical building block for training AI solutions, the reliability, validity, and business value of any AI solution is likely to be limited. McKinsey estimates that companies may be squandering as much as 70% of their data-cleansing efforts.
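The kind of data-quality profiling described above often starts with very basic checks before any model training begins. The sketch below is a minimal, hypothetical illustration (the record structure and field names are invented, not from any survey or vendor tool): it counts incomplete rows and exact duplicates in a batch of records.

```python
# Minimal sketch of pre-training data-quality checks: flag records with
# missing required fields and count exact duplicate rows. The record
# structure here is hypothetical, for illustration only.
from collections import Counter

def profile_records(records, required_fields):
    """Return counts of two common data-quality issues in dict records."""
    issues = {"missing": 0, "duplicates": 0}
    seen = Counter()
    for rec in records:
        # A record is incomplete if any required field is absent or empty.
        if any(rec.get(f) in (None, "") for f in required_fields):
            issues["missing"] += 1
        # Hashable fingerprint of the record to detect exact duplicates.
        key = tuple(sorted(rec.items()))
        seen[key] += 1
    issues["duplicates"] = sum(n - 1 for n in seen.values())
    return issues

# Example batch: two incomplete rows and one exact duplicate.
rows = [
    {"id": "1", "amount": "9.99"},
    {"id": "2", "amount": ""},      # empty amount
    {"id": "3"},                    # amount field missing entirely
    {"id": "1", "amount": "9.99"},  # duplicate of the first row
]
print(profile_records(rows, ["id", "amount"]))  # {'missing': 2, 'duplicates': 1}
```

In practice, checks like these are only the first layer; the survey's respondents also point to deeper problems (inconsistent collection standards, bias in sources) that no automated pass can fully catch.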
Of the enterprises that have deployed AI, respondents cited better modeling skills among analysts, cataloging data for visibility and access to data, and the ability to crowdsource info as ways to combat bias in AI. Roughly a third — 31% — say that incomplete data is a top data issue that leads to AI failing.
The findings agree with other surveys showing that, despite enthusiasm around AI, enterprises struggle to move AI-powered products into production. Business use of AI grew a whopping 270% over the past four years, according to Gartner, while Deloitte says 62% of respondents to its October 2018 corporate report had adopted some form of AI, compared with 53% in 2019. But adoption doesn't always translate into success, as the roughly 25% of companies that have seen half their AI projects fail will attest.
“To assess readiness for AI, one must first look at the larger role of data within organizations — a language some companies struggle to learn and command,” the report reads. “There remains a large gap between the haves and the have-nots; successful deployment of AI among the haves and failure or a stop-and-start implementation among the rest will only widen that gap. Companies should be asking themselves if they have the right plans in place to become a more data-driven organization, and what that actually looks like in practice.”