Why Your AI and Data Science Project Has Stalled

Executive Summary

  • Companies globally are finding that the AI projects they started have, in many cases, stalled.
  • We explain why this wide-scale phenomenon has occurred.

Introduction

Amid the constant barrage of hype around AI, it is curious how AI is now being proposed for problems it cannot solve, as if it were the answer to fill in any “blank.”

Our References for This Article

If you want to see our references for this article and other related Brightwork articles, see this link.

The Contrast With SAP Projects

The difficulty of scoping AI projects, compared to traditional enterprise software projects, is expressed in the following quotation.

When I worked with SAP everything was more clear for SI and a Client. There was a clear understanding of what should be implemented. It was possible to enumerate everything: purchase orders, invoices, debtors etc. With AI nobody can estimate the functional gap. Just try to answer what number of developers you need for “The analysis and forecasting model provided as a web service is integrated into some system” and how fast everything will be done. Basically everything we can do in the AI area is to set together with a customer, and say – “Let’s dream together”.

Something to remember is that AI became a significant hype train before most of the claims made for AI were proven out.

Issue #1: The Unwarranted Boundless Optimism Being Promoted Around AI

The following is an excellent example of this.

Promoters have figured out that people are susceptible to exaggerated stories around AI. Notice that this video promises the type of growth in AI capability that was previously promised back in the 1960s but never materialized. This is a common feature of current AI forecasts: they entirely ignore and erase the inaccuracies of previous forecasts. If individuals cannot (or, more likely, have lacked the interest to) measure forecast error, they are not in a good position to opine on the future of AI.

There is little difference between this video and a person hallucinating on a hard drug. The marketing departments at consulting firms (as just one example) have gone “buck wild” in their exaggerations around AI.

Most of the companies that have sold the benefits of AI to their clients have not demonstrated AI competence. A perfect example of this is IBM. As we cover in the article How IBM is Distracting from the Watson Failure to Sell More AI, IBM failed massively with its Watson initiative. Even after spending a billion dollars developing a health-care-focused AI system that was widely considered useless, IBM did not stop selling AI’s potential to customers.

The consequence is that the vast majority of those companies that work in AI have very few accomplishments in AI.

Issue #2: Companies Don’t Know Enough About AI Themselves

Companies that hire consulting companies typically don’t know that much about AI or data science, and therefore they are easily tricked by consulting firms. There is now a growing amount of evidence that companies don’t even know the right people to hire. This is explained in the following quotation from the article Stop Hiring Data Scientists.

Congrats! Conveniently, you also came in with your shiny graduate degree, your prestigious research background, all of a sudden your $95k salary at your previous company is $130k at your new company. Wow! What an upgrade. And then your boss meets with you and assigns you your work plan. You toil over the work and very quickly realize your day-to-day has gone from a highly skilled statistician workload to that of a SQL warrior. All of a sudden you’re spending 90% of your time building reports, delivering PowerPoints, and building Power BI dashboards to share daily user metrics. You half laugh, half cry as you realize you’re now doing the work of an entry-level data analyst. 10 years of graduate school and another 5 as a postdoc to spend your day writing a few SQL queries and maintain old dashboards.

Not only has this statistician found herself in a win-lose situation (win because she’s making lots of money, lose because she hates her job), the business has found itself in a lose-lose situation. The business is paying $130k for a role they could have filled with a highly skilled analyst for $75–100k. They’re also getting a lower quality of work because the statistician just isn’t interested in doing the work.

Companies are stuck in a Catch-22. Most do not understand enough about data science and AI themselves to make progress on the topic, and they cannot rely on most consulting firms, which are themselves posers on the subject. Compounding this, both AI vendors and consulting firms continually overstate the progress other companies are making in AI.

Issue #3: Finding Out The Data Does Not Exist for AI

Consulting firms and vendors are interested in selling AI projects, and they do so by minimizing the issues related to shortcomings on the data side. AI is based upon multivariate data sets. It is easy to say that AI/ML can be run against data sets, but companies typically do not have anywhere near enough data to support their AI aspirations. Most companies have a difficult time maintaining even univariate data, such as sales history. As we cover in the article How to Access Forecast Promotion Management, companies usually do not remove promotions from their sales history, because it is considered too much work to remove the effect of promotions (which naturally distort the authentic demand experienced by a product or a service).
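As a rough illustration of what removing promotional effects can involve, the following sketch divides a sales history by an estimated uplift factor. The column names, values, and uplift multipliers are hypothetical assumptions for illustration only, not taken from any referenced article.

```python
import pandas as pd

# Hypothetical weekly sales history with a promotion flag and an estimated
# uplift multiplier for promotion weeks (all names and values are illustrative).
history = pd.DataFrame({
    "week": pd.date_range("2024-01-01", periods=6, freq="W"),
    "units_sold": [100, 140, 95, 210, 105, 98],
    "on_promo": [False, True, False, True, False, False],
    "uplift": [1.0, 1.4, 1.0, 2.0, 1.0, 1.0],  # estimated promotional lift
})

# Divide out the estimated promotional lift so the series approximates
# baseline (unpromoted) demand, which is what a forecasting model should see.
history["baseline_units"] = history["units_sold"] / history["uplift"]

print(history[["week", "units_sold", "baseline_units"]])
```

Even this simple adjustment requires knowing which periods were promoted and estimating the lift, which is exactly the maintenance work most companies skip.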

Notice this quotation.

These problems are being recognized at a senior level. Among C-suite respondents 38 percent say poor data quality has caused analytics and AI/ML projects to take longer, while 36 percent say they cost more or fail to achieve the anticipated results (33 percent). With 71 percent of organizations relying on data analysis to drive future business decisions, these inefficiencies are draining resources and inhibiting the ability to glean insights that are crucial to overall business growth. – Betanews

So the company itself or the consulting firm had no idea that they lacked the data to begin their AI project? How is that possible? Preliminary or exploratory work can be performed on the data to determine its likely usefulness. But instead, a vast number of projects are getting “blindsided” by the fact that they don’t have the data they need to meet whatever AI project goal they set for themselves.
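As a minimal sketch of what such preliminary work might look like, the following function runs a few basic readiness checks on a data set. The thresholds, field names, and sample data are assumptions chosen purely for illustration; a real assessment would be driven by the specific model and business question.

```python
import pandas as pd

def data_readiness_report(df: pd.DataFrame,
                          min_rows: int = 1000,
                          max_missing_pct: float = 0.2) -> dict:
    """Run a few rough checks on whether a data set is plausibly usable.

    The thresholds are arbitrary placeholders, not a standard.
    """
    missing_pct = df.isna().mean()  # share of missing values per column
    return {
        "rows": len(df),
        "enough_rows": len(df) >= min_rows,
        "columns_over_missing_threshold":
            missing_pct[missing_pct > max_missing_pct].index.tolist(),
        "duplicate_rows": int(df.duplicated().sum()),
    }

# Example usage with a tiny hypothetical extract of transactional data.
sample = pd.DataFrame({
    "order_id": [1, 2, 2],
    "amount": [10.0, None, None],
})
print(data_readiness_report(sample))
```

A report like this takes hours, not months, which is what makes being “blindsided” by missing data months into a project so hard to excuse.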

Once projects run into issues with a lack of data, as we cover in the article How Many AI Projects Will Fail Due to a Lack of Data?, firms like IBM gaslight their customers by telling them they should have thought about their lack of data. Gaslighting customers is a topic which IBM rarely, if ever, brings up during the sales process.

Issue #4: A Lack of Transparency on AI and Data Science Problems and Limitations

Companies are hiding their problems with AI and data science and only reporting on the positive outcomes. This creates a situation where only the promised benefits of AI are communicated, while the reality of these projects is difficult to find, as we cover in the article The Next Big Thing in AI is to Excuse AI Failures.

For example, one can find public failures with AI, such as the following.

It took less than 24 hours for Twitter to corrupt an innocent AI chatbot. Yesterday, Microsoft unveiled Tay — a Twitter bot that the company described as an experiment in “conversational understanding.” The more you chat with Tay, said Microsoft, the smarter it gets, learning to engage people through “casual and playful conversation.”

Unfortunately, the conversations didn’t stay playful for long. Pretty soon after Tay launched, people started tweeting the bot with all sorts of misogynistic, racist, and Donald Trumpist remarks. And Tay — being essentially a robot parrot with an internet connection — started repeating these sentiments back to users, proving correct that old programming adage: flaming garbage pile in, flaming garbage pile out. – Verge

And fake AI…

Then there’s the artificial intelligence system that’s not very “artificial.” That was the accusation leveled at Engineer.ai in an article that appeared in The Wall Street Journal in August. The Indian startup claimed to have built an AI-assisted app development platform, but the WSJ, citing former and current employees, suggested it relies mostly on human engineers and “exaggerates its AI capabilities to attract customers and investors.”

Engineer.ai has attracted nearly US$30 million in funding from a SoftBank-owned firm and others. Founder Sachin Dev Duggal says the company’s AI tools are only human-assisted, and that it provides a service to help customers make more than 80 percent of a mobile app from scratch in about an hour. The WSJ story argued that Engineer.ai did not use AI to assemble code as it claimed, instead it used human engineers in India and elsewhere to put together the app. – Medium

But these are the public failures. The vast majority of AI and data science failures are quiet and hushed up. As we cover in the article Did Hillary Lose the Election Due to Failed Big Data AI?, the AI that was supposed to help Hillary Clinton win the presidential election failed badly. However, it is rare to find anyone who writes about this failure. The standard pattern of AI reporting seems to be to accept the exaggerated claims around AI, then either leave out these projects’ success rates or never go back and check how previous AI projects fared.

For the companies that are implementing an AI or data science initiative, honest progress reports would fall shockingly short of what was promised, with the majority reporting no progress at all.

Conclusion

A significant reason for the lack of progress in AI is the lack of reliable information about AI. AI and data science projects are being funded and moved forward without any idea of the average success rate of such projects.