The Problematic Black Box Nature of Neural Networks and Deep Learning

Last Updated on March 5, 2022 by Shaun Snapp

Executive Summary

  • Neural networks and deep learning are normally black box systems.
  • This black box nature of neural networks leads to problems that tend to be underemphasized in a rush to promote these systems.


A black box is a system whose internal workings are unknown to its users or observers. People generally respond positively to black boxes, but with neural networks and deep learning, increasingly even the designers do not know what the system is doing; they know only the basic operating procedures. Much of the popular writing on AI almost revels in the fact that the commentators do not understand what the AI is doing. There is a newer notion that the only important thing is that a system finds relationships, not that anyone understands why those relationships exist. This is a central concept of Big Data and data science.

However, when neither the users nor the designers understand how something works, it is impossible to validate the output or to adjust the solution when the model does not apply to a specific environment.

Our References for This Article

If you want to see our references for this article and other related Brightwork articles, see this link.

The Commonly Found “Black Box-itis”

The general issue with black boxes is described in the following quotation.

“Indeed, some of the human minds behind the computers boast that they don’t really understand why their computers decide to trade. After all, their computers are smarter than them, right? Instead of bragging, they should be praying. The riskiness of high frequency black box investing is exacerbated by a copycat problem. If software engineers give hundreds of computers very similar instructions, then hundreds of computers may try to buy or sell the same thing at the same time, wildly destabilizing financial markets. To Wired’s credit, they recognized the danger of unsupervised computers moving in unison: At its worst, it is an inscrutable feedback loop…that can overwhelm the system it was built to navigate.

Nobody knows exactly what unleashed the computers. Remember, even the people behind the computers don’t understand why their computers trade. In one 15-second interval, the computers traded 27,000 contracts among themselves, half the total trading volume, and ended up with a net purchase of only 200 contracts at the end of this 15-second madness. Market prices went haywire, yet the computers kept trading. Procter & Gamble, a rock-solid blue-chip company, dropped 37 percent in less than four minutes. Some computers paid more than $100,000 a share for Apple, Hewlett-Packard, and Sotheby’s. Others sold Accenture and other major stocks for less than a penny a share. The computers had no common sense.

They had no idea what Apple or Accenture are worth. They blindly bought and sold because that’s what their algorithms told them to do. The madness ended when a built-in safeguard in the futures market suspended all trading for five seconds. Incredibly, this five-second time-out was enough to persuade the computers to stop their frenzied trading. Fifteen minutes later, markets were back to normal, and the temporary 600-point drop in the Dow was just a nightmarish memory.” – The AI Delusion

This is further explained in the following quotation.

“William Goldman, a best-selling novelist who won two Academy Awards for his screenwriting, wrote in his memoir, Adventures in the Screen Trade: The single most important fact, perhaps, of the entire movie industry: NOBODY KNOWS ANYTHING

Goldman gives two examples:

Raiders [of the Lost Ark] is the number four film in history as this is being written…But did you know Raiders of the Lost Ark was offered to every single studio in town–and they all turned it down?

All except Paramount.

Why did Paramount say yes? Because nobody knows anything. And why did all of the other studios say no? Because nobody knows anything. And why did Universal, the mightiest studio of all, pass on Star Wars?…Because nobody, nobody — not now, not ever — knows the least goddamn thing about what is or isn’t going to work at the box office.

We are impulsive, emotional, and susceptible to fads and follies. Predict that!

In addition, we’ve seen over and over again that data-mining algorithms will inevitably discover models that predict the past incredibly well, but are useless or worse, for predicting the future. Big Data does not always know best, and it is perilous for businesses, governments, and ordinary citizens to make important decisions based on data-mined models — especially models hidden inside black boxes. – The AI Delusion
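The point about models that fit the past perfectly yet fail on the future can be made concrete with a minimal sketch. The example below is hypothetical and not from The AI Delusion: a "model" that simply memorizes historical outcomes that are, by construction, pure noise. It predicts the past with zero error, but it is useless on fresh data drawn from the same process.

```python
import random

random.seed(0)

# "Historical" data: inputs paired with pure-noise outcomes.
# There is no real relationship here to discover.
history = [(i, random.gauss(0, 1)) for i in range(50)]

# A black-box "model" that memorizes the past perfectly.
model = dict(history)

# In-sample: the model predicts the past with zero error.
in_sample_error = sum((model[x] - y) ** 2 for x, y in history) / len(history)

# Out-of-sample: fresh outcomes from the same noise process.
future = [(x, random.gauss(0, 1)) for x, _ in history]
out_of_sample_error = sum((model[x] - y) ** 2 for x, y in future) / len(future)

print(in_sample_error)       # 0.0 — a perfect fit to the past
print(out_of_sample_error)   # far from zero — useless for the future
```

Memorization is an extreme case, but any sufficiently flexible data-mining procedure behaves the same way in degree: the better it fits historical noise, the worse it generalizes, and a black box gives no way to tell the two apart.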

The following quotation borrows from the movie Dr. Strangelove.

What is concerning is that the automated systems the movie satirizes were actually in place during the Cold War.

“This irreversible computerized response is the basis for Stanley Kubrick’s satirical film Dr. Strangelove, with the US and the Soviet Union each building Doomsday Machines that are activated by early warning systems and cannot be stopped by human intervention. Fortunately, when it happened in real life in 1983, there was a human involved, a Soviet lieutenant named Stanislav Petrov, who was on duty in Moscow monitoring signals from the Soviet early warning system. When the system indicated that US missiles were headed for the Soviet Union, Petrov violated protocol and did not launch a Soviet counter-attack, which surely would have been the start of a mutually assured destruction. As Petrov hoped, it turned out to be a false alarm that was caused by an unusual reflection of sunlight off clouds that fooled the Soviet early warning system. Imagine the consequences if the Soviets had been using a computer algorithm that could not be overridden.” – The AI Delusion


Neural networks and deep learning are frequently black boxes, and the more complex the problem they are solving, the more impenetrable they become. With the massive investments in neural networks, we are reaching a point where designers do not understand a higher and higher percentage of the programs in operation.