The AI, Big Data and Data Science Bubble and the Madness of Crowds
Executive Summary
- AI, data science, and Big Data are presently an enormous bubble.
- We cover the implications of this bubble.
Introduction
As this article is written and published in 2020, we are in the midst of the most significant AI bubble of all time. As Brightwork Research & Analysis is one of the few fact-checking entities in the enterprise software space, we have for years tracked false statements by companies trying either to raise money or to sell AI, data science, or Big Data projects.
Our References for This Article
If you want to see our references for this article and other related Brightwork articles, see this link.
Getting Real on the Claims in the Space
AI, data science, and Big Data are making important contributions to our lives. Still, all three are much oversold, both in their current state and in their projected future. And, much as in the 2008 bubble, a great deal of lying is taking place because, as this book is written, slapping on the AI label is the easiest way to raise money.
And the fact that parts of AI, Big Data, and data science are real is itself part of the problem. When something is half true, or true in certain areas, it is far more challenging to determine whether the rest is true. This is particularly the case when distinguishing truth from falsehood requires significant domain expertise or technical knowledge. As a researcher who fact-checks the software industry, I have compiled enough evidence to say that most companies in the IT space do not care much about what is true. AI claims are a perfect place for unscrupulous entities to operate because the present projections about AI make it difficult to outline what AI's limits are.
The Inevitable Outcome
This will inevitably lead to disappointment as it becomes apparent that these technologies cannot meet the promises made for them. Enormous resources have been poured into all three and will continue to be. The big three promise generalized benefits, but the case studies they provide tend to be very narrow and not intelligent. Instead, they are robotic automation of a highly restricted process. Watson playing Jeopardy!, for example, only appears intelligent when viewed from a distance. The more exposure one gains to each case study, much like learning a magician's tricks, the more explainable the artifact's behavior becomes.
At Brightwork, we use AI to create summaries of articles because it is much faster than doing so manually. We recently added text-to-speech, courtesy of Google Text-to-Speech, which is based on a neural network. Voice recognition, like writing software, is undoubtedly valuable, as is grammar checking. I used all of these things to write part of this book. I am a researcher who receives articles in various languages, which I could never read without a handy translator like Google Translate.
Ironically, this article about the exaggerated claims made for AI, Big Data, and data science is itself written by an author who is extensively leveraging AI/ML-based language manipulation tools. But again, all of these things are narrow applications of AI, and they are all forms of weak AI. While I leverage these tools, I never convince myself that the software I am using is intelligent, or that it will become conscious, or that, like HAL, it will eventually lock me out of the spacecraft once it learns of my plans to unplug it.
Instead, all of these technologies are running through an automated procedure. Unlike a sentient entity, the software has no opinion on what it is doing because it is not alive; it is merely aping its human instructions. A many-layered neural network may develop those instructions, but that only means the procedure has more layers.
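The point above can be made concrete. Mechanically, a feedforward neural network is nothing more than repeated matrix multiplication with a fixed nonlinearity applied at each layer. The sketch below (an illustrative toy, not any production system, with randomly initialized weights) shows that "many-layered" just means the loop runs more times:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    # A fixed, mechanical nonlinearity: negative values become zero.
    return np.maximum(0.0, x)

# Three layers of randomly initialized weights (illustrative only).
layers = [rng.standard_normal((4, 8)),
          rng.standard_normal((8, 8)),
          rng.standard_normal((8, 2))]

def forward(x, layers):
    """Run the input through every layer in sequence.

    Adding "more intelligence" in this architecture just means
    appending more weight matrices to the list -- more layers,
    same mechanical procedure.
    """
    for w in layers:
        x = relu(x @ w)
    return x

out = forward(rng.standard_normal(4), layers)
print(out.shape)  # (2,)
```

There is no step in this loop where understanding could reside; training only adjusts the numbers inside the weight matrices, never the nature of the procedure itself.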
This is expressed in the following quotation.
“Mimicking the human mind is a daunting task with no guarantee of success. There have been some legendary exceptions, like AT&T’s Bell Labs, Lockheed Martin’s Skunk Works, and Xerox’s Parc, but few companies are willing to support intellectually interesting research that does not have a short term payoff. It is more appealing to make something that is useful and immediately profitable.
I don’t know how long it will take to develop computers that have a general intelligence that rivals humans. I suspect that it will take decades.
Statistical evidence is not sufficient to distinguish between real knowledge and bogus knowledge. Only logic, wisdom, and common sense can do that. Computers cannot assess whether things are truly related or just coincidentally correlated because computers do not understand data in any meaningful way. Computers do not have the human intelligence needed to distinguish between statistical patterns that make sense and those that are spurious. Computers today can pass the Turing Test, but not the Smith Test. The situation is exacerbated if the discovered patterns are concealed inside black boxes that make the models inscrutable. Then no one knows why a computer algorithm concluded that this stock should be purchased, this job applicant should be rejected, this patient given this medication, this prisoner should be denied parole, this building should be bombed.” – The AI Delusion
Exaggerated Artificial Intelligence
Many people are convinced of the intelligence of various products because they are listening to consulting firms, to luminaries like Elon Musk, or to companies trying to sell their products or increase their stock price. Alternatively, they may rely on media entities that sell advertising space to the very companies seeking to promote AI, data science, and Big Data. None of these are reliable sources.
It’s worth noting that many AI, data science, and Big Data projects have no ROI.
They were sold on the basis of false claims, and companies don't usually go around admitting that consulting firms or software vendors tricked them. This is expressed in the following quotation.
Despite negative images and talk, Luis is sure that artificial intelligence is here to stay, at least for a while. So many companies have made large investments into AI that it would be difficult for them to just stop using them or to stop the development. – Forbes
That is, as soon as something becomes sufficiently promoted, its investment continues regardless of the actual benefits because the decision-makers who bought into the claims become captured by the claims. This is because they are unwilling to admit to others that they received wrong information and did not do the work necessary to check the claims on which they based their decision.
The Reality of the Current AI Bubble
There has been a significant gap in holding accountable those who promote and continue to spread false information about AI.
There is an excellent section in the book Listen, Liberal, a political book that addresses many technological topics. One of those topics is what the author, Thomas Frank, calls innovation worship. Frank points to Amazon, Uber, and TaskRabbit, which are all cloaked in innovation talk; while the software is new, the model of labor exploitation is old.
We already have many examples of abuses on the part of Big Data, for example, privacy violations, surveillance, China's social rating system, etc. However, we seem to assume that all technology improvements will be used for good. Is AI going to solve world pollution? Really? How? Is AI going to regulate polluting industries? Will it make people stop driving monster trucks, taking private planes, and traveling long distances?
The black-box nature of what is commonly referred to as AI is increasingly evident, as even its designers are unsure of what it is doing.
They know the basic operating procedures. If you review much of the popular writing on AI, the commenters almost revel in the fact that they don’t understand what the AI is doing.
And this black-box lack of thinking is prevalent in the coverage of AI. There is seemingly no attempt to measure AI proponents on their forecast accuracy. Much of the coverage of various AI proponents focuses on their academic credentials and the prestige of the schools they attended, with little to no focus on the content of what they are saying or whether their statements are predictive. My research for this book revealed that no entity tracks the accuracy of predictions by various AI experts, and journalists covering AI predictions show little interest in verifying whether an individual has any record of forecast accuracy. This issue extends to another luminary who often comments on AI: Elon Musk. Musk's inaccurate predictions, both in general and about his company in particular, are legion. There is a website dedicated to cataloging Elon Musk's false statements called elonmusk.today. Some of Musk's AI predictions concerning Tesla include the following.
“By next year, a Tesla should be able to drive around a parking lot, find an empty spot, read signs to confirm it’s valid & park.”
And that:
“You will be able to do pretty much anything via voice command. Software team is focused on core Model 3 functionality right now, but that will be done soon, then we will add a lot more features.“
And that Tesla would have 500,000 robotaxis on the road by 2020, which Musk explains in the following video, Elon Musk's Big Announcement: Tesla Will Make Uber Obsolete.
In this video, he states the following.
It’s very difficult to wrap one’s mind around it. Because we are used to extrapolating on a linear basis. But when you have got massive amounts of — as the hardware — as you have massive amounts of hardware on the road — the cumulative data is increasing exponentially. The software is getting better at an exponential rate. I feel very confident in predicting robotaxis for Tesla next year (stated in April of 2019)
As I write this in 2020, there are no Tesla robotaxis on the road. Notice that this statement is very similar to many of the statements around AI: that things are progressing so rapidly that we will soon see enormous breakthroughs.
One cannot simply chalk up all of this inaccuracy on AI to honest mistakes. Elon Musk has made numerous predictions that have not come true, with only some of these being related to AI. There is a clear profit incentive at work in these inaccurate predictions. The timing of Elon Musk’s robotaxi prediction is dissected in the following quotation.
So, why did Musk not mention the robotaxi plans is the official prospectus? The answer is simple: He knew the promise was a lie, and he will have to walk back on it eventually. There was no way Tesla was going to deliver on it. Even the markets did not buy into Musk’s “optimism” as Tesla shares continued falling after the capital raise.
It doesn’t matter if the markets buy into Tesla’s claims or not. The bottom line is Elon Musk committed securities fraud when he raised money on a claim that he knew was a lie.
Will there be any repercussions for Musk or Tesla shareholders? It’s highly unlikely. – CCN
Several months after Elon Musk had made the prediction of robotaxis and raised $2.3 billion partially based on the claim, he walked back the prediction, as is explained in the following quotation.
On April 22, just days before Tesla turned to public markets to raise $2.3 billion in debt and equity, Musk had announced at an event called Tesla Autonomy Day: “Next year for sure — we’ll have over a million robotaxis on the road.”
Existing Tesla owners could download software and turn their electric cars into moneymaking driverless cars, he said. Their cars would appreciate in value, Musk said, so owners would make more money selling their used car than they’d paid for it new.
On Tuesday, speaking to shareholders at the company’s annual meeting, Musk hedged his earlier statement, saying Teslas would be “capable” of such driverless operation. “We will have a million cars capable of self-driving” next year, he said. “We’ll still need regulatory approval.” – Los Angeles Times
However, as with earlier AI proponents like Marvin Minsky and Ray Kurzweil, past inaccuracies do not impact their credibility, and there is little desire or interest in going back and verifying the accuracy of the predictions. For the past several years, companies that have made grandiose AI claims have been able to raise enormous amounts of capital. The appeal to investors is, first, that the capital-raising company owns some unique IP, and second, that AI makes the approach scalable.
Conclusion
The degree of exaggeration of AI, data science, and Big Data is extraordinary. These tools are being proposed to address a range of issues, from environmental degradation to healthcare reform. The actual accomplishments of these technologies significantly fall short of the claims made about them.
Article Update in August 2025
Five and a half years after I wrote this article, I found this video, which covered the decline of Big Data.
This video effectively highlights how many Big Data claims failed to materialize and were instead fast-forwarded into claims about AI. This is remarkable because I did not even realize I had stopped hearing the term Big Data until I recently came across it again; the term had faded from view without my noticing. The video smashes Big Data as essentially just an investment ruse. It could just as easily have been called “Big Lies.”
This video presents big data as yet another pump-and-dump stock scheme.
Down Goes Data Science?
Not only that, but the term “data science” is a misnomer, as data science is not a science. The term, which was joined at the hip with Big Data and used to be ubiquitous, is now far less prominent than it was several years ago when I wrote this article.
The Continual Financial Incentive to Make False Claims About New Technologies
If you were one of the people who made a lot of false claims about Big Data, you were rewarded with an improved career and higher compensation. There was no corresponding benefit for people who were correct about Big Data. AI has likewise failed to live up to its potential, producing only very narrow solutions, such as language processing, grammar checking, and image creation. All that needs to happen now is for a new term to be invented. The industry can then transition to the new term, and all the promises made regarding AI can be quietly forgotten.
AI Waste
In this video, Dr. Maggiori does an excellent job of explaining the enormous inefficiency in the AI space.
The Upcoming AI Bust
This video explains how difficult it is to get people to pay for AI versus the amount of money invested in AI.
I upgraded my Google account to Google Workspace, and within Google Workspace, one of the items that they market to you is their ML tooling, including Gemini. After testing Gemini, I was so unimpressed that I would not be willing to pay even an extra $8 a month to access any of Google's AI tools. However, big tech firms, such as FANG, are investing in AI as if there were broad interest in paying to use the tools they are developing.