The First AI Winter and What The Lighthill Report Said About AI Progress

Executive Summary

  • Most people who work with or read about AI know of the first and second AI bubbles and winters.
  • What did the Lighthill report say about AI’s progress towards its goals in 1973?

Introduction

The first AI bubble was based on highly inaccurate projections, made by proponents like Marvin Minsky, of what could be accomplished in generalized intelligence or Strong AI.

Graphic from Sebastian Schuchmann.

Our References for This Article

If you want to see our references for this article and other related Brightwork articles, see this link.

Why Did The First AI Winter Occur?

The first AI winter occurred because AI could not meet the projections made for it. The Lighthill report, published in 1973, was a special investigation into this exact topic.

The Lighthill report is the name commonly used for the paper “Artificial Intelligence: A General Survey” by James Lighthill, published in Artificial Intelligence: a paper symposium in 1973.[1]

Published in 1973, it was compiled by Lighthill for the British Science Research Council as an evaluation of the academic research in the field of artificial intelligence. The report gave a very pessimistic prognosis for many core aspects of research in this field, stating that “In no part of the field have the discoveries made so far produced the major impact that was then promised”.

It “formed the basis for the decision by the British government to end support for AI research in all but two universities”[2] — the two usually named being the University of Edinburgh and the University of Sussex, with some accounts also listing the University of Essex. While the report was supportive of research into the simulation of neurophysiological and psychological processes, it was “highly critical of basic research in foundational areas such as robotics and language processing”.[1] The report stated that AI researchers had failed to address the issue of combinatorial explosion when solving problems within real world domains. – Wikipedia
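The combinatorial explosion the report cites can be made concrete with a small sketch (illustrative only; the function name and the choice of orderings as the search space are my own, not from the report). A naive solver that must consider every ordering of n items faces n! candidates, so each added item multiplies the work:

```python
import math

def search_space(n: int) -> int:
    """Number of candidate orderings a naive exhaustive solver must consider."""
    return math.factorial(n)

# Each increment of n multiplies the search space, which is why approaches
# that worked on toy problems collapsed on real-world domains.
for n in (5, 10, 15, 20):
    print(f"n={n}: {search_space(n):,} candidates")
```

At n=20 the space already exceeds 2 quintillion candidates, which is the sense in which Lighthill argued that techniques demonstrated on small vocabularies could not scale.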

When I read the Lighthill report, I found the following quotes of interest. 

Most workers in AI research and in related fields confess to a pronounced feeling of disappointment in what has been achieved in the past twenty-five years. Workers entered the field around 1950, and even around 1960, with high hopes that are very far from having been realised in 1972. In no part of the field have the discoveries made so far produced the major impact that was then promised. 

While this conclusion, which is rapidly gaining acceptance, has been undermining one of the clearest overall justifications for work in category A, performance of Advanced Automation systems developed at great expense in problem domains of particular economic importance has generated a still stronger sense of disappointment. Work in the pattern-recognition field has not yet proved competitive with conventional methods: even the recognition of printed and typewritten characters posed a quite surprising degree of difficulty, while the recognition of handwritten characters appears completely out of reach. Speech recognition has been successful only within the confines of a very limited vocabulary, and large expenditure on schemes to produce machine recognition of ordinary speech has been wholly wasted. Learning techniques, by which a machine’s performance at recognising words might improve on receiving identified words from more and more individual speakers, appear feasible only for an exceedingly small vocabulary (such is the power of the combinatorial explosion) like the decimal digits!

The most notorious disappointments, however, have appeared in the area of machine translation, where enormous sums have been spent with very little useful result, as a careful review by the US National Academy of Sciences concluded in 1966; a conclusion not shaken by any subsequent developments. Attempts based on classical grammar and syntax and on the transformational grammar of contemporary general linguistics have been equally unsuccessful in producing acceptable programs. Suggestions from recent research (see below), that analysis and use of natural language by computer succeed only when a very detailed knowledge of the universe of discourse is stored within the machine, augur badly for the future availability of machine-translation programs versatile enough to be commercially valuable.

When able and respected scientists write in letters to the present author that AI, the major goal of computing science, represents another step in the general process of evolution; that possibilities in the nineteen-eighties include an all-purpose intelligence on a human-scale knowledge base; that awe-inspiring possibilities suggest themselves based on machine intelligence exceeding human intelligence by the year 2000; when such predictions are made in 1972 one may be wise to compare the predictions of the past against performance as well as considering prospects for the realisation of today’s predictions in the future.

Research on AI in some other countries may be funded by military agencies (ARPA in USA) or by other mission-orientated public bodies. With this type of funding it is common for scientists to close their ranks and avoid public disagreement among themselves, in the hope that the total funds available for science may thus be enhanced to an extent that may outweigh any harmful results of a distribution of those funds determined on the basis of insufficient scientific discussion.

How Lighthill Called Out Poor Progress

The Lighthill report documented that AI was not delivering on its promises.

In my view, given the funding sources, the US and European governments were misled by AI researchers at elite institutions like MIT and Princeton. Marvin Minsky was the most prominent pied piper of AI, yet his role in causing wasteful AI funding from Western governments has gone unexamined. In fact, after evaluating the material in this area, I believe this is the first article to point the finger at Marvin Minsky, even though the Lighthill report was published 46 years before the time of this writing.

Why Weren’t The AI Promoters Called Out for Exaggerated Claims?

This has allowed a large amount of time to pass without Minsky being held accountable. The lack of critical analysis of AI’s originators is extensive. Multiple AI promoters from the most prestigious universities misled the US and European governments until 1973, when the first AI winter began. You hear about the first AI winter, but accounts rarely explain why it happened.

The claims were false, and there were few reasons at the time to think they would prove valid. This is part of a long-term pattern of prestigious universities scamming primarily the government, and also the private sector. For example, the Pentagon funded research into using gravity as a weapon, as the following quote explains.

If you think the idea of gravitational waves propelling interplanetary spacecraft sounds like science fiction, you’re in good company – any astrophysicist will rubbish the idea out of hand. However, that didn’t stop the US Defense Intelligence Agency (DIA) from commissioning a report to investigate whether the elusive waves could pose a threat to US security. The JASON Defense Advisory Group were also asked to judge whether high-frequency gravitational waves could image the centre of the Earth, or be used for telecommunications.

The report (pdf format) concludes: “These proposals belong to the realm of pseudo-science, not science.” “The proposal is utter nonsense,” says Karsten Danzmann from the Max Planck Institute for Gravitational Physics in Hanover, Germany, and member of the GEO600 project to detect gravitational waves. “I’m a bit surprised the agency bothered to commission an investigation – it would probably have been enough to just ask an in-house science advisor,” he says. David Shoemaker, from MIT in Cambridge, Massachusetts, a member of the LIGO project to detect gravitational waves, agrees that a quick phone call to a physicist may have been sufficient. But he quips that given the US defence establishment’s history of funding bad science, over-long reports that rubbish such ideas at an early stage may not be a bad thing. “The Department of Defense always have a few projects on the go that disobey the rules of thermodynamics, so I wish they would commission this kind of in-depth study in more cases.” – New Scientist

A few years back, many research grants were handed out to analyze hydrogen as an energy source, even though hydrogen is not a source but an energy carrier (effectively a battery). It is widely known in the medical research community that many cancer research dollars are redirected to other things, as cancer research is one of the easiest areas in which to obtain funding.

Conclusion

Exaggerated claims caused the first AI bubble. AI researchers received funding to progress toward generalized intelligence. They could not meet that goal by 1973, and they had still not met it by 2020. This is how exaggerated the claims of AI were during the run-up to the first AI winter.