Did Hillary Lose the Election Due to Failed Big Data and AI?

Executive Summary

  • Everything from the Russians to Bernie Sanders has been blamed for Hillary Clinton’s loss of the election.
  • One thing that escaped attention was the Clinton campaign’s Big Data and AI program.

Introduction

We are bombarded with marketing information about the opportunities of AI and Big Data. Nearly every IT consulting company and many major vendors have some AI story they are currently pitching to help “crush” their quota.

All of this marketing promotion and complicit IT media coverage on AI has made the information generally published on AI extremely one-sided. And when AI projects fail, they are very quickly swept under the rug.

This article will focus on one such story: one of the most important developments in the US in at least the past 10 years, in which AI played a decisive, and decisively negative, role.

The Hillary Clinton Presidential Campaign

The book The AI Delusion is focused on the overall overestimation of AI and not on the Hillary Clinton campaign per se; however, it offers some very interesting insights into how the campaign was run and how it over-relied on a software program.

On September 16, 2016, seven weeks before the election, Eric Siegel wrote an article in Scientific American titled “How Hillary’s Campaign is (Almost Certainly) Using Big Data.” He argued that “The evidence suggests her campaign is using a highly targeted technique that worked for Obama.” Ada ran 400,000 simulations a day predicting election outcomes for scenarios that it considered plausible. For example, 70% of the campaign budget went for television ads, and Ada determined virtually every dollar spent on these ads. The advice of experienced media advisors was neither sought nor heeded. Ada’s database contained detailed socioeconomic information on which people watched which television shows in which cities, and Ada estimated how likely they were to vote for Clinton. No one really knew exactly how Ada made her decisions, but they did know that she was a powerful computer program analyzing an unimaginable amount of data. So they trusted her. She was like an omniscient goddess. Don’t ask questions, just listen. – The AI Delusion

Right off the bat, the number of simulations run per day should have raised suspicion. There is a general misunderstanding that AI simply runs itself. AI systems do not work without human intervention. There is a category of AI or machine learning (which is the less marketing-driven and more accurate term) called unsupervised learning, but even there, the results must eventually be analyzed by a human. When I have run “machine learning” algorithms, I have commented to the people I was working with at the time on how little the machine was learning and how much work I had to do.

The claim of 400,000 simulations per day should have been reviewed to determine what it actually meant, or whether it was simply a way to impress the recipients of the message.
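For context on why a large simulation count is not impressive by itself: a single electoral “simulation” can be as cheap as a handful of random draws. The sketch below is a toy Monte Carlo model with invented state win probabilities (we know nothing of Ada’s actual internals); it runs 400,000 such simulations in roughly a second of pure Python, which shows the headline number says nothing about the quality of the underlying model.

```python
import random

# Hypothetical win probabilities and electoral votes for a few states.
# These numbers are illustrative, not the campaign's actual model.
STATES = {
    "Michigan":     (0.75, 16),
    "Wisconsin":    (0.80, 10),
    "Pennsylvania": (0.70, 20),
    "Arizona":      (0.45, 11),
}
VOTES_NEEDED = 30  # winning threshold for this toy map

def simulate_once(rng):
    """One 'simulation': independently flip each state, sum electoral votes won."""
    return sum(votes for p, votes in STATES.values() if rng.random() < p)

def win_probability(n_simulations, seed=0):
    """Fraction of simulations in which the vote threshold is reached."""
    rng = random.Random(seed)
    wins = sum(simulate_once(rng) >= VOTES_NEEDED for _ in range(n_simulations))
    return wins / n_simulations

print(win_probability(400_000))
```

The point of the sketch is that “400,000 simulations per day” is a statement about compute, not about whether the probabilities being fed in (the part that actually decides the answer) bear any relation to reality.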

Assuming Hillary Clinton Could Not Lose Blue-Collar Voters

We do not know how Ada determined what she considered to be an optimal strategy, but it is clear that, based on historical data, Ada took blue collar voters for granted, figuring that they reliably voted Democratic, most recently for Obama, and they would do so again. With blue collar voters as her unshakeable base, Clinton would coast to victory by ensuring that minorities and liberal elites turned out to vote for her. This presumption was exacerbated by Ada’s decision that the campaign did not need to spend money polling in safe states – so, the campaign did not realize that some safe states were no longer safe until it was too late. – The AI Delusion

It is odd that Ada assumed that Hillary Clinton could not lose blue-collar voters. She would almost never talk about labor or workers’ issues in general, for the following rather obvious reasons:

  • She is very obviously elitist, and this is picked up not only by blue-collar workers but by white-collar workers as well.
  • Hillary Clinton has a natural demeanor that is quite off-putting to blue-collar men.
  • She is viewed as a consummate disingenuous politician.
  • Hillary has many features that are extremely off-putting to non-elites, and this is expressed well in the following video from Jimmy Dore.

The Jimmy Dore show provides many analytical videos around politics and political media. Jimmy Dore describes himself as a “C student who spent his life in comedy clubs.” However, he predicted the problem with Hillary as a candidate before the election. Why was Ada not fed the readily available statistics that Jimmy Dore describes in this video?

Ada did not see these stats, which opens up a very wide question as to what data was being used overall. The Democratic National Committee (the real subject matter experts) also did not see this.

No One Could Have Predicted Hillary’s Electability Problems?

When Clinton suffered a shock defeat to Sanders in the Michigan primary, it was obvious to people with campaign experience that his populist message had tremendous appeal, and that the blue collar vote could not be taken for granted. Ada didn’t notice. Clinton blamed her shock loss on everything but Ada. Ada was, after all, a powerful computer — free of human biases, churning through gigabytes of data, and producing an unimaginable 400,000 simulations a day. So, the campaign kept to its data-driven playbook, largely ignoring the pleas of seasoned political experts and campaign workers who were on the ground talking to real voters.

Most glaringly, the Clinton campaign’s data wonks shut out Bill Clinton, perhaps the best campaigner any of us have ever seen. Bill was outraged that Hillary did not listen to him during the campaign, literally refusing to take his phone calls. He complained to Hillary’s campaign chairman, John Podesta, that “Those snotty nosed kids over there are blowing this thing because no one is listening to me.”

Ada concluded that voters were more worried about Trump’s unpresidential behavior than they were about jobs; so Hillary focused her campaign on anti-Trump messages: “Hey, I may not be perfect, but Trump is worse.”

Following Ada’s advice, the Clinton campaign almost completely ignored Michigan and Wisconsin, even though her primary-campaign losses to Bernie Sanders in both states should have been a fire alarm of a wake-up call. Instead, Clinton wasted time and resources campaigning in Arizona — states she probably could not win (and did not win) because Ada decided that Clinton could secure a landslide victory with wins in marginally important states. – The AI Delusion

What Information Was Fed to Ada?

It is a bit surreal to hear that certain states were barely visited by Hillary Clinton when one considers how easy these states are to visit.

  • Milwaukee is a 2.5-hour flight from New York (the Clinton campaign headquarters).
  • Detroit is a 1-hour flight from Milwaukee.
  • New York is a 1.5-hour flight from Detroit.

Did Ada know how easy it would have been to visit these states? Each action takes effort and resources. Was Ada fed the relative effort of each action when it performed its simulations? This again raises the question of what was going on with Ada.

The Problem with the Term AI

The problem with stating that something is “AI” is that the term does not describe exactly what is being done. AI is not simply a self-guiding, all-knowing robot.

  • One team of people can build one predictive robot while another team with a different approach could build a different predictive robot.
  • Both could claim to be AI (itself a misleading and mostly useless term), but the recommendations would be entirely different.

This brings up the question: if one AI system contradicts another AI system, then what happens? And if two AI systems can contradict each other, then what is AI? These are the questions that should have been asked. AI is not a place where questioning should stop; it is a place where questioning should begin. The danger of AI is that it can easily fall into the same fallacy as the omniscient deity: that the deity has all the answers, and that the deity should not be questioned. This is exactly the type of thinking that science has spent the last several hundred years trying to dismantle.

Consulting firms, most likely much like the group used by the Hillary Clinton campaign, use the term AI to bill hours and gain power by tricking non-technical people into believing they have the ability to build systems that allow humans not to think.

Losing to the B Team

In the various post-mortems of the Hillary Clinton campaign, one of the questions that has been asked is how such a sophisticated team as the one fielded by Hillary Clinton could lose to what is generally acknowledged to have been an inexperienced team in the Trump camp. It seems that Ada in part neutralized the team that Hillary Clinton had assembled: Ada blocked out Bill Clinton’s advice and the advice of other experienced staff.

Conclusion

Consultants that lie about AI are enabled in their lies by the fact that the IT media is compensated to promote AI. Major firms like IBM and Accenture, which have no demonstrated ability to implement AI projects, spend big money to get AI promoted in the IT media; therefore, the IT media is very happy to promote AI without ever looking at the evidence. As we covered in the article How Awful Was the Coverage of the McDonald’s AI Acquisition?, much of the AI coverage appears to be written by authors who have never used AI/ML themselves and who do not actually know what it is. They do not question the statements made about AI by those trying to sell AI software and AI consulting. Therefore, the promotional AI quotes, which are most often false, are repeated without a filter.

If Ada had been a success, it is likely that its success would have been publicized. But as it was not only a failure but a miserable one, it took years for this story to come out.

It is curious that while AI is being lauded, the most fundamental part of the scientific approach (which is that you are honest in observing the data points) is being discarded in order to promote AI.

Research Contact

  • Interested in Accessing Our Forecasting Research?

    The software space is controlled by vendors, consulting firms and IT analysts who often provide self-serving and incorrect advice at top rates.

    • We have a better track record of being correct than any of the well-known brands.
    • If this type of accuracy interests you, contact us and we will be in touch.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products’ forecasts to improve? What if a product’s forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach in forecast error management. The Brightwork Explorer does not calculate MAPE; instead, it calculates a monetized forecast error improvement from one forecast to another. We calculate that value for every product-location combination, and the inputs can be any two forecasts you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • The first forecast can be the statistical forecast and the second the statistical + judgment forecast.

It’s up to you.
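As a sketch of the general idea (this is not the Brightwork Explorer’s actual calculation; every product, location, price, and quantity below is invented for illustration), monetizing forecast error means weighting each product-location’s absolute error by a dollar value and then comparing two forecasts:

```python
# Toy illustration of monetized forecast error: instead of a percentage metric
# like MAPE, weight each product-location's absolute error by its unit price,
# then compare two candidate forecasts. All data below is invented.

def monetized_error(actuals, forecast, prices):
    """Sum of |actual - forecast| * unit price across product-locations."""
    return sum(abs(actuals[k] - forecast[k]) * prices[k] for k in actuals)

actuals = {("widget", "chicago"): 120, ("widget", "dallas"): 80, ("gizmo", "chicago"): 40}
prices  = {("widget", "chicago"): 2.5, ("widget", "dallas"): 2.5, ("gizmo", "chicago"): 90.0}

# Forecast 1: a naive forecast (e.g. last period's sales carried forward).
naive = {("widget", "chicago"): 100, ("widget", "dallas"): 100, ("gizmo", "chicago"): 55}
# Forecast 2: a statistical forecast.
stat  = {("widget", "chicago"): 115, ("widget", "dallas"): 85,  ("gizmo", "chicago"): 50}

improvement = monetized_error(actuals, naive, prices) - monetized_error(actuals, stat, prices)
print(f"Monetized improvement: ${improvement:.2f}")  # → Monetized improvement: $525.00
```

Note how the expensive gizmo dominates the result even though its unit error is small: that is the point of monetizing error rather than averaging percentages, since it directs attention to the products where forecast improvement is worth money.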

The Brightwork Forecast Explorer is free to use in the beginning.

References

Sales Forecasting Book

Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecasts are poorly integrated, and in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented, which comes down to those that best improve forecast accuracy.

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always on the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion

https://www.amazon.com/dp/B07DPPM9C5/

How Awful Was the Coverage of the McDonald’s AI Acquisition?

Executive Summary

  • McDonald’s has jumped on the AI bandwagon by announcing an acquisition that is not actually AI.
  • We check the media coverage of this inaccurate announcement.

Introduction

For years now, companies have been introducing AI programs to impress investors and customers. In March 2019, McDonald’s acquired Dynamic Yield Ltd, an AI firm, for $300 million. This AI initiative is aimed at adjusting the drive-through menus. The project essentially moves items around on the menu depending upon the time of day or day of the week, based upon previous sales order history. It should go without saying that this is not actually AI, and overall it is one of the more ridiculous AI projects we have come across.
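The described mechanism is simple enough to sketch. The minimal version below (all item names and order data are invented; this is not McDonald’s or Dynamic Yield’s actual system) ranks items by past sales per hour of day, and a plain lookup table reproduces the described behavior with no “AI” involved:

```python
from collections import Counter, defaultdict

# Hypothetical order log as (hour_of_day, item) pairs. Invented data.
orders = [
    (8, "Egg McMuffin"), (8, "Coffee"), (8, "Egg McMuffin"),
    (12, "Big Mac"), (12, "Fries"), (12, "Big Mac"), (12, "McNuggets"),
    (17, "Happy Meal"), (17, "Happy Meal"), (17, "Coffee"),
]

def top_items_by_hour(order_log, n=2):
    """Rank items per hour by historical sales counts: a lookup table, not 'AI'."""
    by_hour = defaultdict(Counter)
    for hour, item in order_log:
        by_hour[hour][item] += 1
    return {hour: [item for item, _ in counts.most_common(n)]
            for hour, counts in by_hour.items()}

menu = top_items_by_hour(orders)
print(menu[12])  # → ['Big Mac', 'Fries']
```

Anything beyond this (weather, queue length, day of week) is just more columns in the same counting exercise, which is why calling the project “AI” is a stretch.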

We checked how major media entities reported on this story.

Coverage per Media Entity

CNBC’s Coverage

The title of this article is the following:

McDonald’s $300 million deal officially makes it the king of A.I. and fast food—here’s why the move was pure genius

And then this is from the article:

The race for massive digital disruption in restaurants, from fast food and casual dining to high-end eateries, is only going to get more intense, and McDonald’s acquisition makes a strong statement about how far it’s willing to go to win over customers. Consumers today expect to get food from their favorite restaurants in a number of ways, including ordering ahead and delivery to home and office. With the power of A.I. on its mobile app, McDonald’s could see bigger orders, increased demand for delivery, and greater margin.

The coverage by CNBC simply accepts the proposal by McDonald’s without questioning the story or the logic of McDonald’s program in the least. As we will see further on in the article, both of these are problems.

Forbes’s Coverage

As the largest fast-food establishment, operating in 188 countries and serving more than 69 million people each day, it’s clear McDonald’s creates volumes of data, but it’s what they do with it that will yield powerful results. Here are just a few ways McDonald’s is getting ready for the 4th industrial revolution and using AI, big data and robotics. By the end of 2018, you can expect an ordering kiosk to be available at a McDonald’s near you. McDonald’s France is also testing out interactive terminals. As McDonald’s continues to embrace its data-driven culture, expect to see the company improve performance based on the insights and efficiencies realized from artificial intelligence, big data and robots.

Forbes also accepts the statements by McDonald’s without question. The article is written not by Forbes but by a contributor with a consulting practice around AI and analytics: an author with a bias to promote AI. In fact, that is why he offered his content to Forbes for free. As we covered in the article Can You Trust IDC and Their Now China Based Owners?, Forbes was purchased by a corrupt China-based real estate conglomerate, and it has drastically cut back on journalists and editors, preferring to obtain content from biased contributors with something to sell, or from direct paid placements from large entities like SAP.

Forbes’s Second Coverage

The underlying story and intent here for McDonald’s is far deeper. It serves as a near perfect story of the inter-connected world we live in and why that world is in the most complex and possibly most exciting state of change it has experienced since the first industrial revolution and certainly since Ray Kroc first bought the brand to life. There is nothing digital about eating a burger. But that does not mean that the experience of ordering, receiving and going back for a burger should not be dominated by digital knowledge markers.

Converging the virtual and physical versions of ourselves is a big bridge that can now be crossed by McDonald’s. That is far more likely to delight me than a game token for their Monopoly game.

Being able to use AI to deliver almost infinite combinations of highly personalized elements could give McDonald’s access to the secret sauce that sits at the heart of Starbucks experience which is the power of the Starbucks retail partner to remember you, know what you want or may want. It is why Starbucks can charge $6+ for a coffee. Imagine McDonald’s as a premium experience because of that level of personal knowledge? Not just to the 14,000 stores in the USA but also the other 22,000 McDonalds restaurants around the world for every single customer moment. Size can have enormous power in the digital age.

This coverage is entirely promotional and has to qualify as ridiculous. This article is also not written by a Forbes journalist but again by a Forbes contributor who is a promoter of AI. Neither of the Forbes articles was written by journalists; both were written by individuals who wrote for free in order to get their names out there and attract business for their AI-based consulting firms.

Crunchbase’s Coverage

Unfortunately, customers don’t regularly come to McDonald’s for healthy offerings, he said. So the data will tell McDonalds to push items like fries or burgers vs salads. From a business standpoint, he said it gives McDonald’s the green light to suggest more so-called junk food to customers but rationalized in a “subtle, scary way.”

McDonald’s will likely draw some interesting information about customer behaviors and taste buds from this acquisition. But, as the AI bandwagon continues to super-size, whether or not data-focused innovation is critical for the life of the fast food giant will be revealed by time.

This is the first article so far to question the official story. However, it does not go into much detail on the topic of AI, and the overall article is only 491 words, a 2-minute read.

Wired’s Coverage

Look at the Dynamic Yield acquisition, then, not as the start of a digital transformation, but as the catalyst that evolves it.

“What we hadn’t done is begun to connect the technology together, and get the various pieces talking to each other,” says Easterbrook, in an exclusive interview with WIRED. “How do you transition from mass marketing to mass personalization? To do that, you’ve really got to unlock the data within that ecosystem in a way that’s useful to a customer.”

In the new McDonald’s machine-learning paradigm, significant display real estate goes toward showing customers what other items have been popular at that location and prompting them with potential upsells. Thanks for your Happy Meal order; maybe you’d like a Sprite to go with it.

McDonald’s was reticent to share any specific insights gleaned so far, or numbers around the personalization engine’s effect on sales. But it’s not hard to imagine some of the possible scenarios. If someone orders two Happy Meals at 5 o’clock, for instance, that’s probably a parent ordering for their kids; highlight a coffee or snack for them, and they might decide to treat themselves to a pick-me-up. And as with any machine-learning system, the real benefits will likely come from the unexpected.

Think also beyond the store itself. A company that amasses as much data as McDonald’s will find no shortage of algorithmic avenues. “Ultimately you can see we’ll be able to use predictive analytics—we’re going to have real-time information, as we start to connect the kitchen together—further back through our supply chain. I’m sure that will happen,” says Easterbrook. “That isn’t part of this particular technology, but as you start to link the predictive nature of customer demand all the way through your stock levels in the restaurant and the kitchen, you can almost flex it back down through the supply chain.”

An important part of that focus is figuring out how to leverage the “personalization” part of a personalization engine. Fine-tuned insights at the store level are one thing, but Easterbrook envisions something even more granular. “If customers are willing to identify themselves—there’s all sorts of ways you can do that—we can be even more useful to them, because now we call up their favorites,” according to Easterbrook, who stresses that privacy is paramount.

We have in the past found Wired articles we liked, but this coverage is disappointing. It copies the other publications by primarily reporting whatever McDonald’s PR says without any analysis.

Hospitality Tech’s Coverage

“Ultimately we really want to be able to offer customers two ways of ordering a meal through McDelivery:” through a third-party delivery operator and through McDonald’s global mobile app. “We believe we’re making good progress,” he added. “Obviously there’s a fair bit of technology work that has to go on to integrate it.”

“Over time using data from the millions of customers that we serve daily, the technology will get smarter and smarter through machine learning,” said President and Chief Executive Officer, Steve Easterbrook, in an April 30 earnings call with analysts. “And using the data collected based on current restaurant traffic at the drive-thru, the technology will begin to suggest items that can make peak times easier on our restaurant operations and crew.”

This was an article that simply and passively repeated what McDonald’s stated, without any real analysis at all.

PC Magazine’s Coverage

McDonald’s rolled out the decision technology to several of its US restaurants last year to see how well it performed. The fact it is now acquiring the company behind the tech suggests it went really well and is set to become a standard feature of restaurants. McDonald’s isn’t stopping there, though, with touchpoints inside the restaurants and the Global Mobile App also being upgraded with the AI technology.

The same goes for this article as goes for the previous article.

Diginomica’s Coverage

McDonald’s bet on digital transformation and a technology-led growth strategy has been a use case exemplar to which we’ve returned on many occasions. The fast food giant has spent millions of dollars over the past few years introducing digital tech into its customer experience front end and operational delivery back end, spend that has to date produced demonstrable ROI for the firm. This isn’t the first time McDonald’s has experimented with this sort of AI-based approach. Back in 2015 the firm piloted digital menu boards in some locations that were able to make recommendations for food and drink choices based on the weather conditions at the time. What’s being talked about now is clearly going to be a lot more sophisticated over time and will be making use of the data gathered through the investment in self-service kiosks and other digital channels.

On the front end, I confess I really don’t care for ‘personalised recommendations’ when doing any form of online shopping, let alone grabbing a burger. But I recognise the rationale and the appeal for some.

That said, McDonald’s bet on AI will inevitably deliver another stick with which its critics can beat it. Consider this from The Telegraph, which urges us all to “be worried” that “the world’s most maligned producer of fast food” has bought itself an AI firm.

This article, like the others, makes no attempt to understand the underlying reasonableness of the venture, or to do any thinking of any kind. We have routinely critiqued Diginomica’s coverage of SAP for being inaccurate and highly subordinate to their “media partners” (translation: firms that pay them for softball coverage), as in Diginomica’s Compliant Coverage of S/4HANA, but they do not seem capable of covering non-SAP topics either.

Geek.com’s Coverage

Once the ink dries on the deal, McDonald’s plans to roll out the foody function this year to Drive Thrus across the country, before expanding into international markets.

The chain will also integrate the technology into its digital customer experience touchpoints—i.e. self-order kiosks and mobile apps.

“We’re thrilled to be joining an iconic global brand such as McDonald’s and are excited to innovate in ways that have a real impact on people’s daily lives,” according to Liad Agmon, co-founder and CEO of Dynamic Yield, which will remain a stand-alone company with employees operating around the world.

In this absurdly short article of 299 words, which itself could have been written by AI, there is nothing more than a statement of the acquisition and a repetition of the McDonald’s quotations as fact.

Phys.Org’s Coverage

Professor Sodhi raised further concerns about the way McDonald’s customers might react to having their number plate recorded when they enter a drive through.

“Would people really be comfortable with that?” he asked.

Fellow Cass academic Dr. Oguz Acar, Senior Lecturer in Marketing, said recording consumers’ data so overtly could risk alienating them.

“Suggestions based on number-plate recognition may make consumers feel that their privacy is violated,” Dr. Acar said. “This could, in turn, bring about various negative downstream consequences in the form of reduced sales, negative word of mouth and an impact on brand image.”

Professor Feng Li, Chair of Information Management at Cass, said McDonald’s plans to implement AI can lead to short-term gains but unintended long-term consequences.

“We can no longer beat AI when playing chess, and AI-based systems can easily outsmart humans when we make decisions in a well-defined context,” Professor Li said.

“This has already happened in some industries from airlines to e-commerce.

“They [McDonald’s] seem to be going for personalisation initially, but if dynamic pricing is also used, it can be, and often will be, abused by companies—and can lead to consumer pushbacks.”

Professor Sodhi further questioned McDonald’s decision to buy not just the technology it needed to implement AI, but to buy the entire company.

“The purchase makes no sense,” Professor Sodhi said.

“It is possible McDonald’s CEO has something else up his sleeve, and if so improving and personalising the drive-through experience is then only a stated reason, not the real one.”

This is one of the only articles to question the actual technology, and Professor Sodhi brings up the same point we raise later: why McDonald’s had to acquire Dynamic Yield rather than simply use them as a vendor. The acquisition indeed makes no sense from the perspective of McDonald’s business, but it does make sense from the position of developing a story for Wall Street.

Digital Trends’ Coverage

The technology powering the menu boards will also take into account factors such as current weather conditions — so it might offer up cold drinks on a hot day — and also how busy the restaurant is, meaning if there’s a long line and the kitchen is under pressure, it might push items that are quicker to prepare.

If you hadn’t already noticed, McDonald’s high-tech menu board is similar in many ways to how Amazon’s online shopping site constantly offers similar or complementary items as it tracks your search activity click by click.

Similar to the other articles, this simply repeats what McDonald’s says.

Silicon Angle’s Coverage

“Technology is a critical element of our Velocity Growth Plan, enhancing the experience for our customers by providing greater convenience on their terms,” said Steve Easterbrook, chief executive of McDonald’s. “With this acquisition, we’re expanding both our ability to increase the role technology and data will play in our future and the speed with which we’ll be able to implement our vision of creating more personalized experiences for our customers.”

Silicon Angle is another example of no value being provided and no reason for their article to exist.

TechSpot’s Coverage

McDonald’s will put its newfound technology to work in the drive thru. Working in conjunction with the company’s digital menus, Dynamic Yield technology will account for factors like weather, time of day, current restaurant traffic and trending menu items to display items that customers are more likely to purchase.

This is another tiny article, at 203 words. It provides no analysis whatsoever, and at this point, I am beginning to question the point of what amounts to large numbers of copycat articles that could in fact be the same article, just published at different websites.

In fact, the only analysis is provided in the comments section.

“People only come to us if they want something to eat, or something to drink. We’re not in the business of using technology to try to change people’s lives.”

McDonald’s doesn’t give itself enough credit. It is largely responsible, either directly or through its many copycats, for the epidemics of obesity and diabetes in America. They’re also in the process of slowly eliminating dine-in service and switching to a drive-through only business model where you’re encouraged to order well ahead of time and then park in line to pick up your cold food. This will help to boost unemployment, particularly among teens and young adults. Talk about transformative. – Comment

Fast Company’s Coverage

That means that if everyone around you is eating their Mueller report-fueled feelings in the form of chicken nuggets, Egg McMuffins, and shamrock shakes, the AI can pick up on the trend and suggest it to whoever is next in the drive thru. It also means that your fast food experience will start to look a lot more like a trip to Amazon or other digital retailers, which have long used personal data and algorithms to make shopping recommendations. McDonald’s CEO Steve Easterbrook told Wired that the company has big plans for big data and smart tech, and could potentially use wireless beacons to detect your smartphone–or even cameras to scan your car’s license plate–in order to make more personalized menu suggestions.

This is the same pattern followed by most of the other articles. Zero value add.

BBC’s Coverage

“It can know time of day, it can know weather. We can also have it understand what our service times are so it only suggests items that are easier to make in our peak hours,” said McDonald’s chief executive Steve Easterbrook.

The BBC offered the same non-analysis of the news as the other media entities showcased thus far.
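Incidentally, the logic Easterbrook describes (time of day, weather, what is easy to make at peak hours) can be expressed as a handful of if/then rules, with no machine learning involved. A minimal sketch; every item name, threshold, and rule below is invented for illustration:

```python
# A rule-based menu suggester of the kind Easterbrook describes.
# All items, thresholds, and rules are hypothetical illustrations;
# nothing about the actual Dynamic Yield implementation is public.

def suggest_items(hour, temp_f, queue_length):
    """Return promoted menu items from simple if/then rules."""
    suggestions = []
    if temp_f > 80:
        suggestions.append("McFlurry")   # cold item on a hot day
    elif temp_f < 40:
        suggestions.append("Hot Coffee")
    if 11 <= hour <= 13 and queue_length > 5:
        # Peak lunch rush: drop slower items, push fast-to-make ones.
        suggestions = [s for s in suggestions if s != "McFlurry"]
        suggestions.append("Big Mac")
    return suggestions

print(suggest_items(hour=12, temp_f=85, queue_length=8))  # ['Big Mac']
```

Note that nothing here requires a model to be trained; it is ordinary conditional logic wired to a few data feeds.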

Gizmodo’s Coverage

Let’s be honest, it’s a bit ridiculous that such an iconic menu needs an expensive algorithm to tell you what you probably already know you want. This acquisition will likely be remembered as a prime example of corporations’ algorithm-fever run amok. But maybe we’ll see some good come out of it too—like burying its godforsaken parasite salads deep, deep under a pile of burgers in your menu options, right where they belong.

Most of the article repeated the same things the other articles did, but the paragraph above does question the overall exercise. However, the critique is so short that it is difficult to say what led the author at Gizmodo to make it. Once again, the real analysis is provided by the comments.

I f****** hate when I’m looking at some menu item and suddenly that whole side of the menu gets replaced with an ad for something I don’t want. Then it has to rotate back through a bunch of promos until the actual menu reappears. – Comment

You most certainly are not. My diet doesn’t allow me to eat a lot of fast food, so I don’t really keep up with changes in McDonald’s menu. I just plain walked out of McDonalds one day at lunch because of their silly menu. And I’ve been a stockholder since 1973. – Comment

I think in practice it works a lot better here than in social media news feeds, but you absolutely do not need to spend $300million just to find out that people like eating Big Macs at 12pm. – Comment

The McD’s menus are awful. One should be able to look up and quickly determine the price of small, medium, and large fries/drinks,etc without having to wait on the stupid menu to cycle through the sizes. It’s crazy that they paid for this crap. – Comment

I already hate the rotating menus enough. Why add something that is universally hated? – Comment

Data Iku’s Coverage

This was one of the few articles with any real analysis — but it is extremely brief.

After a solid 2018, McDonald’s was able in 2019 to purchase Dynamic Yield, a startup based in Tel Aviv. Its main use case is in promoting upsell items based on store history, time of day, weather, and other factors. 700 AI-enabled menu boards are already in stores, with more to come. AI integration isn’t limited to McDonald’s either, as other burger chains adapt.

We learned what Dynamic Yield actually specializes in, and this led us to the Dynamic Yield website.

We found something interesting on the site: Dynamic Yield does not appear to use the term AI, or even ML, anywhere on its website.

In fact, Gartner places them in the Personalization Engine category. 

We read through several of Dynamic Yield’s white papers and also found no mention of AI or ML. Instead, we found these terms.

  • Customer Data Management
  • Recommendations
  • Behavioral Messaging
  • Personalization APIs
  • A/B Testing
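These are well-established web-optimization techniques, not AI. A/B testing, for instance, comes down to comparing conversion rates between two variants and checking statistical significance. A minimal sketch using a two-proportion z-test; the traffic figures are made up:

```python
import math

def ab_test_z(conv_a, visitors_a, conv_b, visitors_b):
    """Two-proportion z-score: does variant B convert better than A?"""
    p_a = conv_a / visitors_a
    p_b = conv_b / visitors_b
    # Pooled conversion rate under the null hypothesis of no difference.
    p = (conv_a + conv_b) / (visitors_a + visitors_b)
    se = math.sqrt(p * (1 - p) * (1 / visitors_a + 1 / visitors_b))
    return (p_b - p_a) / se

# Hypothetical menu-variant experiment: 2,400 visitors per variant.
z = ab_test_z(120, 2400, 150, 2400)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```

This is decades-old statistics, which underlines the question of why any of it would be described as AI.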

Here are Dynamic Yield’s Tools that can be accessed online.

This brings up a question: why does McDonald’s refer to what Dynamic Yield does as AI? And secondly, of all the articles written on this topic, why were we the only ones to pick this up?

Here is a listing of Dynamic Yield’s customers (the list is quite long, but this is what would fit in one screenshot). Most of what Dynamic Yield does is for the web, not food menus. This brings up the question of why McDonald’s needed to acquire Dynamic Yield instead of simply using it as a vendor. This leads to the conclusion that McDonald’s acquired a company that has really nothing to do with its business in order to create a splash. And it paid an enormous premium to do so: Dynamic Yield has only $17.8 million in revenue, which translates to roughly 17x revenue.
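The multiple is simple arithmetic on the reported figures, a roughly $300 million price against $17.8 million in revenue:

```python
# Reported figures; the exact purchase price was never officially disclosed.
price_paid = 300_000_000
annual_revenue = 17_800_000
multiple = price_paid / annual_revenue
print(round(multiple, 1))  # about 17x revenue
```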

This entire acquisition may simply be about goosing the stock price. McDonald’s executives may hold a number of shares that they would like to unload. This means the actual impact of Dynamic Yield on McDonald’s may be a washout, but the executives may still benefit. And if they do, the compliant media will have been a big part of it, serving as a repeating mechanism for McDonald’s talking points.

In fact, the next article addresses this exact topic.

Barrons’ Coverage

McDonald’s was a relatively strong performer last year. Investors flocked to its defensive characteristics as markets sold off late in the year, but there was more than just a flight to safety at play. McDonald’s increased its dividend, delivered upbeat earnings, and analysts promoted the benefits of the company’s restaurant-remodeling campaign, the Experience of the Future. Like so many of 2018’s winners, McDonald’s has lagged behind this year amid a market rally that has made riskier names attractive again. Still, analysts remain upbeat about the stock and its ability to keep climbing, and McDonald’s is up 6.2% since the start of 2019. In the trailing 12-month period, the stock has gained 20.6%.

What’s New. Cowen & Co.’s Andrew Charles reiterated an Outperform rating and $205 price target on McDonald’s on Wednesday, two days after the company announced that it would buy Dynamic Yield. He writes that McDonald’s is harnessing the power of technology and personalization better than most restaurant peers, a move that should pay off in the years to come.

That is, an analyst who likely knows nothing about AI, or whether Dynamic Yield actually does AI, has determined that this acquisition will be successful. That this analyst knows nothing about AI is quickly reinforced by reading this quote from him.

“Dynamic Yield integrates across McDonald’s velocity drivers including drive-thru menu boards at Experience of the Future (EOTF) remodels and digital ordering via smartphone and kiosk,” he writes, and can offer real-time tailored options, catered to individuals.

The Barrons’ coverage performed little analysis and mostly repeated statements from a Wall Street analyst who is not a source of information on the domain of AI. It was educational nonetheless to learn that McDonald’s has been frantically searching for some type of story to pitch to Wall Street.

Engadget’s Coverage

The system will look at factors such as the weather, time, local events, traffic levels at the restaurant and on nearby roads, historical sales data, currently popular items and even what you’re ordering to optimize menu displays at drive-thru windows. It might, for instance, promote the McFlurry or iced coffees on hot days, or suggest simpler items that are faster for employees to prepare if there’s a long line.

Engadget, a website we have enjoyed on several occasions, adds little value with its tiny 267-word article.

Engadget’s Article Comments

What is curious is how distinct the comments on the Engadget article are from the article itself — or in fact any of the articles.

This is great because what I really don’t like is when I pull up to a menu and things are in the same place they were last time so I can quickly find what I want. Move things around, make it a guessing game, hide Waldo in there somewhere. – Comment on Engadget

Isn’t this an algorithm not AI? People keep slapping AI onto everything making it seem smarter than it is.. to me AI would be making up its own factors to use for menu prioritization. – Comment on Engadget

More surveillance creep. The purpose, like most such AI apps, is to induce me to buy more. I already buy too much. As a senior citizen, I no longer do McD, but the idea is becoming pervasive in commerce. Is this the last gasp of an excessive consumption economy trying to keep itself going? – Comment on Engadget

drive thru menus are not the problem. how about do something about the quality of their food!!! I like quarter pounders but the last few years they have been nothing but cardboard. see if ai can do something about that!!! – Comment on Engadget

Just sell a cut-down menu of your core foods that are always available immediately and quickly. You can’t physically make more money from drive-thrus if traffic is constantly bottlenecking at the delivery window (at least in my country / locale). And you’re not helping the overall queue if you’ve got a multitude of screens, posters, banners and offers slowing decision making down. I go there every now and then, but all nearby suffer from very poor drive-thru logistics, so tend not to bother. It’s such a mess that if you haven’t made up your mind before arriving, we’re all in for a wait. The AI’ll have to be pretty clever to deal with that. – Comment on Engadget

It’s amazing what is passing as “AI” these days. As a software engineer, I feel like this is all just basic programming that’s linked to various sensors. This has all been around for decades and has always just been called “system integration”. – Comment on Engadget

Anywhere that has moving menus and graphics only slows down the process. Go into a movie theatre with TV’s for menu boards. Every 30 seconds the screen is wiped so they can show some close up of a cup of coke or some shit. Which means that when you want to look at the menu, you have to wait for it to come back. McDonalds does the same thing. Just list the damn items so we can quickly identify what we want to order. It’s not that hard. You want to have a dedicated space on the board for advertising and suggesting items, go for it. But get rid of the dynamic menu board bullshit. – Comment on Engadget

I think they would get a bigger return on their investment if they invested in their employees first. Get them better training and properly staffed. Otherwise they are always the bottleneck that slows everything down. – Comment on Engadget

G2Crowd’s Coverage

Other than being a crafty means to boost sales, McDonald’s is spearheading technological advancements by using artificial intelligence and machine learning to make waves in the difficult-to-penetrate fast food universe. Though their motives are less than altruistic, the means by which McDonald’s is approaching sales tactics are creative if nothing more.

Integrating tech into their business model has been a goal for McDonald’s and other similar fast or fast-casual restaurants. To retain customers, it’s crucial to “get with the times” and have options that appeal to users of all ages and technological skill levels.

The article provided a good background on Dynamic Yield. However, G2Crowd stated that Dynamic Yield uses ML, which is not apparent from its website. In fact, it would make little sense to use ML for this task.

Secondly, the author endorses the strategy and calls it “getting with the times.” However, are AI-based menus storming the restaurant scene? I went to In-N-Out Burger a few months ago, and their menu does not appear to have any AI.

It is unlikely the menu at In-N-Out has changed much since the 1950s. Nevertheless, I no longer go to In-N-Out Burger because it is simply too crowded, so the chain does not seem to have any problem retaining customers even though its menu is trapped in the 1950s. And another nice thing: this menu appeals to customers of all “ages and technology skill levels.”

Are AI menus actually “the times?”

A Synopsis of the Coverage of Twenty-Two Media Entities

We created the following synopsis of the coverage by the media entities discussed so far, along with others whose coverage we left out of the body to keep this article from becoming even longer than it already is.

Media Entity Coverage of McDonald’s AI

  1. CNBC (Grade: F): Does nothing but report what McDonald’s said, without any fact checking or other analysis.
  2. Forbes (Grade: F): Both “contributors” wrote articles in Forbes in order to get business for their AI consulting practices, and clearly had little interest in accuracy or in performing fact checking. The articles’ only editing process was to get past the Chinese censors: as they were sufficiently positive and did not question authority in any way, they passed.
  3. Crunchbase (Grade: F): The Crunchbase article could itself have been written by AI.
  4. Wired (Grade: F): Repeats McDonald’s talking points.
  5. Hospitality Tech (Grade: F): Repeats McDonald’s talking points.
  6. PC Magazine (Grade: F): Repeats McDonald’s talking points. For a technology magazine, there was surprisingly little detail.
  7. Diginomica (Grade: F): Another copy-and-paste job from Diginomica, my go-to source for inaccurate information on technology.
  8. Geek.com (Grade: F): Repeats McDonald’s talking points. A “mini” article of 299 words. By all means, don’t work too hard, Geek.com.
  9. Phys.org (Grade: B): Brought up some very good points from two professors.
  10. Silicon Angle (Grade: F): Why does this article even exist? One could go to the McDonald’s press release on the topic and get the same content.
  11. TechSpot (Grade: F): Is a 203-word article an article, or is it something else? Isn’t this length a tweet? It’s an abuse of bandwidth to load an article for 203 words.
  12. FastCompany (Grade: F): Zero value add.
  13. BBC (Grade: F): One might think that the mighty BBC would write a substantive article on this topic. One would be wrong.
  14. Gizmodo (Grade: D): Punts on the topic of McDonald’s AI with an entirely generic “mini” article, but at least questions the sanity of the entire McDonald’s proposal. The main value is not the article but its comments, which appear to have much more on the ball than the Gizmodo author.
  15. Data Iku (Grade: C): A fairly generic article that adds value by bringing up what Dynamic Yield actually focuses on, which is upselling.
  16. Barrons (Grade: D): Mostly similar to the other coverage, but does explain that McDonald’s has been desperate for a story to feed investors.
  17. Engadget (Grade: F): A 267-word “mini” article that offers no extra insight, but which carries very insightful comments.
  18. AdWeek (Grade: F): Without any ability to understand AI, AdWeek reached out to Forrester for input, and Forrester provided content-free comments courtesy of an analyst without any domain knowledge on the topic.
  19. NewsExaminer (Grade: F): Lists quotes from McDonald’s and Dynamic Yield with no analysis of any kind.
  20. Nation’s Restaurant News (Grade: F): A zero-content article, a duplicate of the McDonald’s press release.
  21. G2 Crowd (Grade: D): A well-written article, but it seemed to be guessing as to the reality of the technical aspects of the acquisition.
  22. Vox (Grade: B): Did a good job of providing context and told us a few things we could not find in other articles, but could have been far more thorough; these tiny articles are a concern.

McDonald’s #1 Issue is a Lack of AI?

McDonald’s has a number of problems that immediately come to mind.

  1. McDonald’s food is quite unhealthy.
  2. McDonald’s pays its workers extremely poorly. A person can work full time at McDonald’s and be homeless without government assistance, and this shows in the behavior of the workers when you go into a McDonald’s. McDonald’s has routinely encouraged its workers to apply for food stamps. This is an industry-wide problem: in the US, 52% of fast-food workers are on public assistance, part of an estimated $153 billion subsidy to low-wage workers across all sectors, which is very odd because it means the government is funding the worst-quality food providers in the country. And McDonald’s is disproportionately consumed by children and the poor, who are the least able to make discerning food choices. The amount paid for Dynamic Yield Ltd is only a small percentage of the overall subsidy provided to McDonald’s by the US government.

The only publications that even addressed these issues are Nation of Change and TruthDig, both of which published the same syndicated article.

For example, in the category of “wheel-spinning” innovation – ie, trying to change a corporation’s course without actually changing anything – it’s hard to top McDonald’s. For several years, the fast-food chain has been losing customers to younger chains with healthier, more-stylish offerings. So, CEO Steve Easterbrook has tried to recoup the losses with PR tricks, such as calling the menu “healthy” and “fresh.” But a McNugget and fries are still what they are, so people have not bitten the PR bait.

Now, though, – Eureka! – he’s hit on an innovation that’ll surely cause hungry eaters to flock to the Golden Arches: Artificial intelligence. Yes, exclaimed Steve-The-Innovator, consumers need a robotic order-taker to advise them on what to order – based on AI’s ability to digest unlimited data about the weather, traffic, time of day, and what other people are ordering. “Decision technology” it’s called, and the CEO spent 300 million McDollars to buy these so-called thinking machines, which the maker claims will provide “the rapid and scalable creation of highly-targeted digital interactions.” Now, what could be more inviting than that?

Unsurprisingly, Nation of Change and TruthDig are not standard media entities but progressive publications, and the author, Jim Hightower, is a well-known activist.

Conclusion

The average score of the articles we surveyed was between a D and an F (1.43 out of 5).
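One way to reproduce an average like this is to map letter grades to numbers. The mapping below (A=5 through F=1) is our assumption, since the scale used above is not stated, so the result comes out slightly differently:

```python
# Hypothetical scale; the exact letter-to-number mapping is an assumption.
GRADE_POINTS = {"A": 5, "B": 4, "C": 3, "D": 2, "F": 1}

# Grades from the synopsis: 16 F's, 3 D's, 1 C, 2 B's across 22 entities.
grades = ["F"] * 16 + ["D"] * 3 + ["C"] + ["B"] * 2
average = sum(GRADE_POINTS[g] for g in grades) / len(grades)
print(round(average, 2))  # 1.5 on this scale: between a D and an F
```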

Issues with the coverage:

  • Only a small minority of the media entities we surveyed provided any coverage of the feasibility of McDonald’s statements about AI. From reading the articles, it seems that only a minority of the authors even understood what AI is, could explain ML, knew whether it has any applicability to menus, could determine what type of company Dynamic Yield actually is, or could judge whether an acquisition would make any sense.
  • The articles show a disturbing trend where there is a repetition of the information provided by the corporation, but the analysis is provided not by the author, but by the comments on the article. In fact, some of the most perceptive observations as to the feasibility of AI to menus were in the comments of articles rather than in the articles themselves.
  • Only one author saw the inconsistency between the compensation McDonald’s pays its staff and the rich valuation it paid for Dynamic Yield.
  • One article (quoting a university professor) brought up the rather obvious question of why McDonald’s acquired Dynamic Yield rather than simply using them as a supplier.

The lack of understanding of AI at media entities means that companies can release false information about AI and be confident that the vast majority of the coverage will be favorable.

Search Our Other Forecasting Content

Research Contact

  • Interested in Accessing Our Forecasting Research?

The software space is controlled by vendors, consulting firms, and IT analysts who often provide self-serving and incorrect advice at top rates.

    • We have a better track record of being correct than any of the well-known brands.
    • If this type of accuracy interests you, contact us and we will be in touch.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products’ forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach to forecast error management. The Brightwork Explorer calculates not MAPE but a monetized forecast error improvement from one forecast to another. We calculate that value for every product-location combination, and the inputs can be any two forecasts you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • The first forecast can be the statistical forecast and the second the statistical-plus-judgment forecast.

It’s up to you.
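The comparison described above can be sketched as follows: for each product-location, the absolute error of each forecast is converted to dollars via the unit price, and the difference is the monetized improvement. This is an illustration of the idea, not the Brightwork Explorer’s actual code, and all figures are invented:

```python
# Sketch of a monetized forecast-error comparison (illustrative only;
# not the Brightwork Explorer's implementation, all numbers invented).

def monetized_improvement(f1, f2, actual, price):
    """Dollars saved by forecast 2 over forecast 1 for one product-location."""
    return (abs(f1 - actual) - abs(f2 - actual)) * price

rows = [
    # (product, location, forecast_1, forecast_2, actual, unit_price)
    ("widget-a", "dc-east", 100, 110, 120, 50.0),
    ("widget-b", "dc-east", 500, 480, 470, 2.0),
]

for product, location, f1, f2, actual, price in rows:
    saved = monetized_improvement(f1, f2, actual, price)
    print(f"{product}/{location}: ${saved:,.0f} saved by forecast 2")
```

Notice that widget-b improves by more units than widget-a, but because it is a cheap item the dollar impact is far smaller, which is exactly the distinction a pure error-percentage ranking hides.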

The Brightwork Forecast Explorer is free to use in the beginning.

Foresight Forecast Conference

References

https://www.engadget.com/2019/03/26/mcdonalds-ai-drive-thru-machine-learning-dynamic-yield/

https://www.bbc.com/news/business-47722259

https://learn.g2.com/mcdonalds-ai-integration

*https://www.truthdig.com/articles/the-creepy-new-addition-to-mcdonalds-menu/

https://www.nrn.com/quick-service/mcdonald-s-automate-upselling-purchase-ai-company

https://www.barrons.com/articles/mcdonalds-stock-is-up-on-artificial-intelligence-buy-of-dynamic-yield-51553698431

https://www.forbes.com/sites/bernardmarr/2018/04/04/how-mcdonalds-is-getting-ready-for-the-4th-industrial-revolution-using-ai-big-data-and-robotics/#2ee585a73d33

https://www.eater.com/2015/4/13/8403905/52-percent-fast-food-workers-public-assistance-food-stamps-study

https://www.theatlantic.com/business/archive/2013/10/instead-living-wage-mcdonalds-tells-workers-sign-food-stamps/309625/

https://www.cnbc.com/2019/03/26/mcdonalds-300-million-deal-with-dynamic-yield-is-a-brilliant-move-for-artificial-intelligence-and-fast-food.html

https://gizmodo.com/mcdonalds-spent-300-million-on-its-own-version-of-the-1833634853

https://www.forbes.com/sites/forbesinsights/2019/03/31/mcdonalds-purchase-of-an-ai-company-goes-ways-beyond-just-do-you-want-fries-with-that/#2fe5a59d4d8d

https://www.wired.com/story/mcdonalds-big-data-dynamic-yield-acquisition/

*https://www.pymnts.com/news/partnerships-acquisitions/2019/mcdonalds-ai-personalization-company-dynamic-yield/

https://futurism.com/the-byte/mcdonalds-ai-dynamic-yield-predict

https://hospitalitytech.com/mcdonalds-tests-ai-powered-digital-menu-boards

https://www.pcmag.com/news/367447/mcdonalds-to-personalize-drive-thru-menus-using-ai

*https://www.smartcompany.com.au/startupsmart/news/mcdonalds-acquires-dynamic-yield/

https://diginomica.com/you-want-ai-with-that-mcdonalds-latest-tech-gambit-gets-highly-personal

*https://www.geek.com/tech/mcdonalds-drive-thru-gets-ai-upgrade-1780239/

https://phys.org/news/2019-04-mcdonald-significant-ai-fries.html

https://www.techspot.com/news/79374-mcdonald-latest-acquisition-bring-ai-drive-thru.html

*https://www.nationofchange.org/2019/07/30/big-macs-new-big-data-innovation/

https://www.ft.com/content/a1818006-4f4e-11e9-b401-8d9ef1626294

https://www.designnews.com/electronics-test/mcdonalds-putting-ai-its-drive-thrus/35639361260521

*https://news.crunchbase.com/news/mcdonalds-will-serve-artificial-intelligence-with-latest-300m-acquisition/

http://laborcenter.berkeley.edu/pdf/2015/the-high-public-cost-of-low-wages.pdf

Stagnating wages and decreased benefits are a problem not only for low-wage workers who increasingly cannot make ends meet, but also for the federal government as well as the 50 state governments that finance the public assistance programs many of these workers and their families turn to. Nearly three-quarters (73 percent) of enrollees in America’s major public support programs are members of working families; the taxpayers bear a significant portion of the hidden costs of low-wage work in America.

Higher wages and increases in employer-provided health insurance would result in significant Medicaid savings that states and the federal government could apply to other programs and priorities. In the case of TANF—a block grant that includes maintenance of effort (MOE) provisions that require specified state spending—higher wages would allow states to reduce the portion of the program going to cash assistance while increasing the funding for other services such as child care, job training, and transportation assistance.

Sales Forecasting Book

Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecasts are poorly integrated, and in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented, which comes down to those that best improve forecast accuracy.

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always on the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.
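The competitions described in these bullets are typically scored with Forecast Value Add: each input’s error is compared against a naive baseline, and an input earns its place in the final forecast only if it reduces error. A minimal sketch with MAPE as the error measure; all numbers are invented:

```python
def mape(forecasts, actuals):
    """Mean absolute percentage error."""
    return sum(abs(f - a) / a for f, a in zip(forecasts, actuals)) / len(actuals)

actuals     = [100, 120, 90, 110]
naive       = [110, 100, 100, 100]   # e.g., last period's actuals
statistical = [105, 115, 92, 108]
consensus   = [108, 112, 95, 104]

baseline = mape(naive, actuals)
for name, forecast in [("statistical", statistical), ("consensus", consensus)]:
    fva = baseline - mape(forecast, actuals)  # positive => the input adds value
    print(f"{name}: FVA = {fva:+.1%}")
```

The same comparison can be run per item against the best-fit forecast, a homegrown model, or the sales forecast to find the winning approach for each forecasted item.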

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion

How Real is the Data Science Gap?

Executive Summary

  • Many IT media entities propose a “data science gap.”
  • We analyze how accurate the proposal of a data science gap actually is.

Introduction

At Brightwork, we have normally popped the balloon of hype around AI/ML and its projected opportunities to improve forecast accuracy. However, a new hypothesis is becoming popular in AI/ML circles: the data science gap.

Comments About the Data Science Gap

“It should come as no surprise that demand for folks with data science expertise exceeds supply. In fact, according to some McKinsey, there are only half as many qualified data scientists as needed. The good news is the market will likely resolve the shortage in the long run. But in the short run, the talent gap creates some challenges for an organization looking to get ahead with data.

Thanks to their ability to use math and computer science to turn big data into business gold, data scientists are the rock stars of the advanced analytics world (at least as data scientists have traditionally been defined). As more companies start investing in AI, they’ve looked to data scientists to lead the way.

LinkedIn report from August found more than 151,000 job postings for data scientists, with acute shortages being felt in big tech hubs like San Francisco, New York City, and Los Angeles.

The big data boom has been a boon for management consulting firms like Deloitte, McKinsey, Accenture, PwC, KPMG, and Booz Allen Hamilton, all of which have devoted large sums to attracting and retaining top data science talent over the past decade.” – Datanami

This article is very similar to most of the articles we found on this topic in that it simply takes industry’s word for it that there is a data scientist shortage. This is based upon the demand for data scientists, but it leaves out how much of this demand is simply due to the hype cycle. For example, as we covered in the article How Many AI Projects Will Fail Due to a Lack of Data?, IBM has been exaggerating the capabilities of AI projects. In fact, nearly all of the major vendors have. As we covered in the article The Next Big Thing in AI is to Fake AI’s Benefits, this is the next phase of AI.

Various publications like to mention how data is growing. Data has been growing since computers were invented.

However, an oversight by publications that run this kind of coverage is the presumption, built into Big Data and by extension AI/ML, that this data is useful for making predictions. As we will cover, a great workload is placed on data scientists, one that in many cases they are unable to meet.

The Job of a Data Scientist

A realistic view of the job of a data scientist is provided by the following quotation.

“When big data got going, all of a sudden everyone was trying to hire data scientists,” Snell said. “And you watch people’s LinkedIn profiles say ‘data scientist’ with 25 years of experience. That’s a title we invented last week! But suddenly we have decades of experience.” – Datanami

This quotation describes the effort that must be invested by data scientists.

“The best data scientists are the crazy ones, crazy passionate, willing to do whatever it takes to succeed. Someone exceeding the new 30,000 hour rule has a ridiculous amount of self-taught experience. Their toolkit is massive, and the problems they have encountered are very diverse. For the common data scientist branches (coding, hacking, stats/math) they should be very strong on all of them compared to their data science peers. A high degree of crazy/hyper-confidence is often found in the type-E individual.” – Expertfy

The “Double Consumption” of Data Scientists

Feedback from many projects indicates that the data is nowhere near ready for algorithms when a project begins. The data scientists turn into data mungers.

What We Know About Data Science (AI/ML) Projects

We are covering the very disappointing outcomes of Big Data/AI projects.

The concept was that everyone would worry about making sense of the data later, after they had already invested in a major way in data lakes and Big Data infrastructure and tools. This was a great way to get Big Data projects funded. IBM, Accenture, and everyone else that could make money on this were very much in favor of their clients hiring them to do it.

But now the bill is coming due. And the argument is that we have a “data science gap.”

Why the gap?

Well, there are not a sufficient number of data scientists who are sufficiently talented to get the value out of the data.

Perhaps. This is certainly the common explanation. But it is a bit of a “Get Out of Jail Free” card for those who proposed all of the easy benefits from data science and AI/ML.

Those that sell, or publish for those that sell, various hype trains eventually need to change the narrative when their overly rosy predictions don’t come true. This is where a Get Out of Jail Free card, an excuse, comes into play. When we critique obviously exaggerated statements by consulting firms and vendors, we are ordinarily said to be “negative.” However, if the method is simply to make exaggerated statements and then come up with an excuse for why the prediction did not occur, then accuracy has no meaning. Regardless of the excuse given (and there will always be an excuse), this does not remove the responsibility of the predictor to be evaluated on their prediction accuracy. If a predictor is correct, they don’t say “well, it was because of XYZ.” If the predictor is correct, they say “I was correct.” The excuse only comes into play when the predictor is wrong.

Is Big Data Dead… Already?

Consider this curious quotation:

“But despite the progress in AI, big data remains a major challenge for many enterprises.

There are lots of reasons why people may feel that big data is a thing of the past. The biggest piece of evidence that big data’s time has passed may be the downfall of Hadoop, which Cloudera once called the “operating system for big data.”

After acquiring Hortonworks, Cloudera and MapR Technologies became the two primary backers of Hadoop distributions. The companies had actually been working to distance themselves from Hadoop’s baggage for some time, but they apparently didn’t move fast enough for customers and investors, who have hurt two companies by holding out on (Hadoop) upgrades and investments.” – Datanami

And the following, also from Datanami:

“Hadoop has seen better days. The recent struggles of Cloudera and MapR – the two remaining independent distributors of Hadoop software — are proof of that. After years of failing to meet expectations, some customers are calling it quits on Hadoop and moving on.

However, cracks began to appear around 2015, when customers started complaining about software that wasn’t integrated and projects that never entered production. Distributions were shipping with more than 30 different sub-projects, and keeping all of this software integrated and in synch became a major challenge.

“Hadoop is absolutely going away with cloud capabilities,” says Oliver Ratzesberger, CEO of Teradata, which was stung by the early Hadoop hype. “You don’t need HDFS. It was an inferior file system from the get-go. There’s now things like S3, which is absolutely superior to that.”

Ratzesberger was an early adopter of Hadoop, having used the technology while building software at eBay in the 2007-2008 timeframe. “We knew what it was good for and we knew what it was absolutely never built for,” he continues. “We now have customers – big customers just recently – in Europe who told me recently, the $250 million in Hadoop investments, they’re writing off, completely writing off, tearing it out of their data centers, because they’re going all cloud.””

And the following:

“I can’t find a happy Hadoop customer. It’s sort of as simple as that,” says Bob Muglia, CEO of Snowflake Computing, which develops and runs a cloud-based relational data warehouse offering. “It’s very clear to me, technologically, that it’s not the technology base the world will be built on going forward.”

“The number of customers who have actually successfully tamed Hadoop is probably less than 20 and it might be less than 10,” Muglia says. “That’s just nuts given how long that product, that technology has been in the market and how much general industry energy has gone into it.

The Hadoop community has so far failed to account for the poor performance and high complexity of Hadoop, Johnson says. “The Hadoop ecosystem is still basically in the hands of a small number of experts,” he says. “If you have that power and you’ve learned know how to use these tools and you’re programmer, then this thing is super powerful.  But there aren’t a lot of those people.  I’ve read all these things how we need another million data scientists in the world, which I think means our tools aren’t very good.”

Unless you have a large amount of unstructured data like photos, videos, or sound files that you want to analyze, a relational data warehouse will always outperform a Hadoop-based warehouse. And for storing unstructured data, Muglia sees Hadoop being replaced by S3 or other binary large object (BLOB) stores.” – Datanami

OK, so wait: there are a number of non-Big Data sources against which AI can be applied; however, the availability of Big Data was one of the primary use cases for why AI would be effective. It was proposed that only AI would be able to tease out the insights from Big Data. It is extremely inconsistent to say on one hand that the Big Data bubble is receding, taking with it many of its promised benefits, and at the same time propose that AI has a great future.

Yes, I am aware that most of the quotes above relate to Hadoop; however, we are asked to accept the assumption that the situation will greatly improve once the data is pulled from the now-downtrodden Hadoop instances and placed into S3 or some other “container.” But as is explained throughout this article, the original proposals about Big Data are very unlikely to be true.

Statements About the Opportunity of AI Go Unchallenged in IT Media

Companies that have AI services and software to sell have been releasing pro-AI information to the marketplace, and IT media entities have done very little to fact-check any of their statements, even though the statements come entirely from a position of financial bias. The following quotation is a perfect example of this.

“If your competitors are applying AI, and they’re finding insight that allow them to accelerate, they’re going to peel away really, really quickly,” Deborah Leff, CTO for data science and AI at IBM, said on stage at Transform 2019.

This is called “Fear of Missing Out.” But Deborah Leff does not bother to bring any evidence to support the claim that applying AI allows companies to accelerate. That claim deals with the topic of benefit. A second topic is the probability of having success with AI at all. A little further on, we will see a statistic on the rate at which AI projects make it into production that calls into question how big the word “if” in this statement really is.

“Chris Chapo, SVP of data and analytics at Gap, dug deep into the reason so many companies are still either kicking their heels or simply failing to get AI strategies off the ground, despite the fact that the inherent advantage large companies had over small companies is gone now, and the paradigm has changed completely. With AI, the fast companies are outperforming the slow companies, regardless of their size. And tiny, no-name companies are actually stealing market share from the giants.”

Really? The statistics on the US economy, at least, show the opposite of this. There has been a great consolidation of many industries over the past 20 years, mostly due to mergers and acquisitions, and new small business formation is currently at a multi-decade low. Chapo goes on.

“But if this is a universal understanding, that AI empirically provides a competitive edge, why do only 13% of data science projects, or just one out of every 10, actually make it into production?

“One of the biggest [reasons] is sometimes people think, all I need to do is throw money at a problem or put a technology in, and success comes out the other end, and that just doesn’t happen,” Chapo said. “And we’re not doing it because we don’t have the right leadership support, to make sure we create the conditions for success.”

13% = 1 out of 10?

It is hard to see how 13% equals 1 out of 10, as it is 1 out of 7.7, but that is the least of the inaccuracies in this comment. The overall comment should be branded by Hoover, because it is an entirely vacuous statement. Notice the assumption that there is always a benefit to be had from data science projects. Another observation: unless these AI projects are extremely inexpensive, or unless the insights coming from the 1 out of 7.7 are real profit generators, it is most likely that AI projects have a strongly negative ROI. Also, going into production is not the same as having an ROI. SAP modules are in production all over the world and barely used. So the projects with an ROI are a subset of the 13%!

If someone told me there was a 13% chance of simply doing something, and the chance of being successful was some subset of that probability, I would question why that thing was being done at all. Furthermore, we are discussing advanced AI projects, while many of the people discussing and making decisions on these topics do not seem to have mastered basic probability.
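The arithmetic here can be checked in a few lines. The 13% production rate comes from the quote above; the project cost, the benefit, and the share of production projects that achieve an ROI are hypothetical assumptions for illustration only:

```python
# Check the "13% = 1 out of 10" claim from the quote above.
production_rate = 0.13
print(f"13% is 1 out of {1 / production_rate:.1f} projects")  # 1 out of 7.7

# A hypothetical expected-value sketch. Reaching production is not the same
# as having an ROI, so the success rate is only a subset of the 13%.
cost_per_project = 1_000_000       # assumed project cost (hypothetical)
benefit_if_successful = 3_000_000  # assumed benefit (hypothetical)
roi_share_in_production = 0.5      # assumed share of production projects with an ROI

p_success = production_rate * roi_share_in_production
expected_value = p_success * benefit_if_successful - cost_per_project
print(f"P(success) = {p_success:.1%}, expected value = ${expected_value:,.0f}")
```

Under these made-up but not implausible numbers, the expected value per project comes out negative, which is the "strongly negative ROI" point made above.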

Chapo goes on to explain the data issues with data science projects.

“The other key player in the whodunit is data, Leff adds, which is a double edged sword — it’s what makes all of these analytics and capabilities possible, but most organizations are highly siloed, with owners who are simply not collaborating and leaders who are not facilitating communication.

“I’ve had data scientists look me in the face and say we could do that project, but we can’t get access to the data,” Leff says. “And I say, your management allows that to go on?””

Who Are the 13%?

Because IBM and others have projects to sell, there is a great reticence to describe “who” is actually getting AI projects into production. This is explained in the following quote.

“Unless your name is Facebook, Amazon, Netflix, or Google – the notorious FANG gang (plus Microsoft) – your chances of pulling off a successful AI or big data analytics project are slim, according to Ghodsi. “AI has a 1% problem,” he says. “There are only about five companies who are truly conducting AI today.”

“Those [FANG] companies have hordes of data scientists, like 10,000 or 20,000 of them. They have PhDs and experts from universities they hired that used to be professors,” he says. “But [the rest of the Fortune 2000] say ‘We don’t have access to Silicon Valley engineers. We just don’t have those. There’s not enough of them. The people who make huge Silicon Valley engineer salaries over here – that’s not the rest of the world.’ So how can other companies who don’t have the resources to just pour money into hiring 10,000 data scientists – how are they going to do it?”” – Datanami

The major consulting firms point to outcomes from FANG, but the consulting companies do not have the ability to do this themselves. For example, IBM failed at turning its major AI project, Watson, into a successful product, running into problems ranging from reconciling multiple data sources to entirely overestimating the ability of AI to solve problems without subject matter expertise, as we covered in the article How IBM is Distracting from the Watson Failure to Sell More AI. That is, no matter how badly IBM failed at its own AI project, it would still like to sell its inability to pull off AI projects to other companies.

Fire and Forget on Funding AI Projects?

This brings up another question. Why wasn’t the data analyzed to determine its availability before the AI project was begun? Isn’t that the responsibility of the company funding the project — to determine if the project has the necessary preconditions for success? Chapo goes on.

“But the problem with data is always that it lives in different formats, structured and unstructured, video files, text, and images, kept in different places with different security and privacy requirements, meaning that projects slow to a crawl right at the start, because the data needs to be collected and cleaned.”

It is strange that this would be news to anyone. Did anyone think that corralling the data would not entail these issues? Chapo goes on.

“Oftentimes people imagine a world where we’re doing this amazing, fancy, unicorn, sprinkling-pixie-dust sort of AI projects,” he said. “The reality is, start simple. And you can actually prove your way into the complexity. That’s where we’ve actually begun to not only show value quicker, but also help our businesses who aren’t really versed in data to feel comfortable with it.”

This seems to leave out who made these proposals. Where did these false pretenses come from? Were they perchance promoted by the sales and marketing arms of vendors and consulting companies?

We publish the reality of data science, but we would be fired immediately if we worked for IBM or Accenture, because being realistic about AI and data science is considered extremely bad for sales, and people who discuss reality are not team players. At these companies, whoever can tell the biggest lie tends to set the agenda.

The Assumed Incredible Relationships in Big Data

What is left out is that the relationships and insights contained in the data, that is, in the Big Data, were most often overstated in the first place. This is reminiscent of the Abu Ghraib scandal during the Iraq invasion. The US military began torturing Abu Ghraib prisoners because they weren’t getting the “intel” they thought they should be.

One problem.

The vast majority of prisoners got to Abu Ghraib because they were reported by people who knew them and wanted to collect the reward money. That is, they were not turned in because they were members of the resistance (which the US called insurgents); those were desperate times in Iraq for Iraqis, and they were turned in for a reward. Furthermore, there was never any trial to determine guilt or innocence; the act of being turned in was enough for any length of detention. Because of this, most of the prisoners at Abu Ghraib were everyday people. They had no intelligence to provide. And recall further that what the US was looking for, WMDs, was a false pretext for the war. This means that the US military was asking people who did not know about something that did not exist.

The Rough Sequence of Information Analysis

The first question to ask is whether the source has the information at all, before presuming that more aggressive methods will bring out the insights one desires. Tens of thousands of AI/ML projects have been approved without any evidence being brought forward that the insights are there, or that they are worth the effort to extract in the first place. The only real evidence for many of these claims is of the most hypothetical nature; for instance, the fact that an ML algorithm performed well in the M4 Forecasting Competition, which is based upon lab-type data. These are data sets of far higher quality than is generally found within companies, and the better the data, the more insight advanced methods can tease out of it.

Upon reviewing the data sets from the M4 competition, it was immediately apparent to me that the M4 competition is not applicable to any of the clients I have ever had. For years we have been hearing about how the real “action” in AI is in a branch of mathematics called deep learning. So what is the evidence that deep learning is effective at creating predictive models?

“While deep learning approaches have given us better results in some applications, the technology is not being used outside of two primary use cases: image recognition (i.e. computer vision) and text recognition (such as natural language processing). For most data science use cases that don’t involve manipulating images or text, traditional machine learning models are hard to beat.

“I would say right now there’s actually no proof that [neural networks] can perform better,” Xiao says. “If you have data sets that are tens of thousands or hundreds of thousands in number and you have a traditional regression problem and you’re trying to predict the probability that someone is going to click on an ad, for example, neural networks — at least the existing structures — just don’t perform as well as other methods.”

That probably comes as a surprise to many in the industry who figured deep learning was going to drive data science into the AI future. “I don’t think that’s the answer that most people want to hear,” Xiao says “People like to throw around the term AI, but we’re not quite to that level yet.”” – Datanami

[Figure: an example of a deep neural network with two hidden layers. Source: ResearchGate]

Neural networks or “Deep Learning” are a type of sophisticated decision tree. It is difficult to see how Deep Learning techniques are going to outperform simpler time series forecasting techniques for the vast majority of items that must be predicted. Yet many AI projects are being justified on the basis of Deep Learning, without the funders of these projects realizing the limited applicability of Deep Learning. 

If you torture people, pretty soon you start getting intelligence. Fake intelligence, that is. The same is true of data. If you analyze data from enough dimensions, with enough financially biased consulting firms and vendors riding the bandwagon, relationships will soon begin to “appear.” With the hundreds of variables that many data science projects use, overfitting and illusory relationships will be reported as real. Remember, the previous investments in Big Data, as well as the AI/ML project itself, must be justified. The relationships and benefits must be there; after all, all that money was spent on the data lakes!
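The overfitting point is easy to demonstrate. The sketch below fits ordinary least squares to pure noise with many variables: the in-sample fit looks impressive, while the out-of-sample fit collapses. The sample sizes and feature count are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_features = 50, 50, 40

# Pure noise: the features have no real relationship to the target.
X_train = rng.standard_normal((n_train, n_features))
y_train = rng.standard_normal(n_train)
X_test = rng.standard_normal((n_test, n_features))
y_test = rng.standard_normal(n_test)

# Ordinary least squares fit on the noise.
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(X, y, coef):
    ss_res = np.sum((y - X @ coef) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_in = r_squared(X_train, y_train, coef)
r2_out = r_squared(X_test, y_test, coef)
print(f"in-sample R^2:     {r2_in:.2f}")   # high: an "incredible relationship"
print(f"out-of-sample R^2: {r2_out:.2f}")  # collapses: the relationship was illusory
```

With 40 noise variables and only 50 observations, the in-sample fit typically looks strong even though there is nothing to find, which is exactly how illusory relationships get reported as real.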

There is a gap all right, and it’s the difference between the promise of Big Data and the reality.

Conclusion

Major consulting companies and vendors that have bet big on Big Data and AI/ML are not going to back down from their claims, even though the projects are, by and large, not living up to the sales presentations made around them.

Search Our Other Forecasting Content

Research Contact

  • Interested in Accessing Our Forecasting Research?

    The software space is controlled by vendors, consulting firms, and IT analysts who often provide self-serving and incorrect advice at top rates.

    • We have a better track record of being correct than any of the well-known brands.
    • If this type of accuracy interests you, contact us and we will be in touch.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products’ forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach to forecast error management. The Brightwork Explorer calculates no MAPE, but instead a monetized forecast error improvement from one forecast to another. We calculate that value for every product location combination, and the inputs can be any two forecasts you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • The first forecast can be the statistical forecast and the second the statistical + judgment forecast.

It’s up to you.
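As a rough sketch of the monetized-error idea (the products, unit costs, and forecast numbers below are hypothetical, and this illustrates the concept rather than the Brightwork Explorer’s actual implementation):

```python
# Compare two forecasts by dollar-weighted (monetized) error per
# product location combination, rather than by MAPE.
records = [
    # (product, location, actual_units, forecast_a, forecast_b, unit_cost)
    ("A100", "DC1", 120, 100, 115, 4.00),
    ("A100", "DC2",  80,  95,  85, 4.00),
    ("B200", "DC1",  10,  30,  12, 55.00),  # same unit error as A100/DC1, far larger in dollars
]

def monetized_error(actual, forecast, unit_cost):
    """Absolute forecast error expressed in dollars."""
    return abs(actual - forecast) * unit_cost

for product, location, actual, fa, fb, unit_cost in records:
    err_a = monetized_error(actual, fa, unit_cost)
    err_b = monetized_error(actual, fb, unit_cost)
    improvement = err_a - err_b  # dollars saved by forecast B over forecast A
    print(f"{product}/{location}: A=${err_a:,.2f} B=${err_b:,.2f} improvement=${improvement:,.2f}")
```

Note how the same 20-unit miss costs $80 on the inexpensive item but $1,100 on the expensive one, which is why monetizing the error can prioritize improvement work better than a unit-based percentage error.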

The Brightwork Forecast Explorer is free to use in the beginning. See it by clicking the image below:

Foresight Forecast Conference

References

https://towardsdatascience.com/the-data-science-gap-5cc4e0d19ee3

https://www.datanami.com/2017/12/12/deep-learning-may-not-deep/

https://www.datanami.com/2019/07/15/big-data-is-still-hard-heres-why/

https://www.researchgate.net/figure/An-example-of-a-deep-neural-network-with-two-hidden-layers-The-first-layer-is-the-input_fig6_299474560

https://www.datanami.com/2019/07/09/why-you-dont-need-ai/

https://www.datanami.com/2019/06/24/hitting-the-reset-button-on-hadoop/

https://www.datanami.com/2017/07/25/exposing-ais-1-problem/

https://www.datanami.com/2017/03/13/hadoop-failed-us-tech-experts-say/

https://venturebeat.com/2019/07/19/why-do-87-of-data-science-projects-never-make-it-into-production/

https://towardsdatascience.com/why-data-science-sucks-d4e0171aba46

https://www.datanami.com/2019/04/17/data-scientist-title-evolving-into-new-thing/

https://www.datanami.com/2019/05/01/three-ways-to-close-your-companys-data-science-skills-gap-now/

https://www.newyorker.com/magazine/2004/05/10/torture-at-abu-ghraib

https://www.experfy.com/blog/this-is-why-your-data-scientist-sucks

https://www.datanami.com/2019/06/10/hadoop-struggles-and-bi-deals-whats-going-on/

https://www.wsj.com/articles/data-challenges-are-halting-ai-projects-ibm-executive-says-11559035800

https://www.visualcapitalist.com/big-data-keeps-getting-bigger/

Sales Forecasting Book


Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecast are poorly integrated, and in fact, most companies do not know how to combine them. Strange questions are often asked such as “does the final forecast match the sales forecast?” without appropriate consideration to the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented, which comes down to those that best improve forecast accuracy.

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always on the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion

The Next Big Thing in AI is to Excuse AI Failures

Executive Summary

  • We are deep into the AI bubble, and the benefits of AI are far less than promised.
  • Rather than acknowledge the overselling of AI, the next rational step is AI benefit falsification!

Introduction

At Brightwork, we have normally popped the balloon of hype around AI/ML and its projected opportunities to improve forecast accuracy. AI projects are being “popped” globally, failing to meet expectations. We cover the next phase that the AI industry will have to move to.

Selling Data Science and AI as the Next Big Thing

For years now, companies have been told that AI is the next big thing. And not only companies (the buy side) but the sell side as well. Degrees in data science have popped up all over the world, as data science has been predicted to be the next big career move for the mathematically inclined. This is expressed by the following quotation.

“For a while, I toyed with the idea of changing careers to Data Science. A marvelous graph posted in Quora convinced me of otherwise: Most of the time of a Data Scientist was allocated to data cleansing. So it was not all adventures and thrilling stuff! I looked at my sysadmin job with better eyes after that and decided to maintain a more critical stance on the so much hyped “sexiest job”. A year later or so, there was a spat of posts on DataScientists quite disappointed with their trade: It turned out that a big part of their work hours was devoted to bend and twist the facts in order to justify failed managerial decisions as the most rational ones in (a supposed) absence of either enough data or quality data. I still aspire to grab a job related to AI/DS (from my Sysadmin perspective). But as you’ll understand I no longer see it as the panacea. Journalists are to blame. They are but an extension of sales departments.” – Gorka Porteiro

The Problem with Oversold AI

Data science is “hot,” there is no doubt about it, but the question too infrequently asked is whether data science techniques actually tend to work, or to provide value above the cost of the AI/ML projects and tools.

Salespeople have been lying about what data scientists can accomplish. So naturally, when the data scientists get on the project, they cannot meet expectations.

Habitual Overestimation of Data Quality

Customers vastly overestimate the quality of their data, so they think they are “ready to go,” or that the data will require only some “minor tweaks.” Some evangelists on LinkedIn have commented that, of course, you need good data to get good output, and they then invoke the GIGO acronym (Garbage In, Garbage Out).

This, of course, makes them sound very wise.

However, it is these same evangelists who oversimplified the opportunities in data science in the first place. Then, when the project stalls, they move to the GIGO acronym. These AI projects presume great unfound relationships, which the sellers cannot know exist, as there are scant studies that show AI project benefits. Moreover, they sold the AI/data science project first and worried about data cleansing after.

Selling AI/Data Science Projects with Zero Idea of the Undiscovered Opportunities

This means that in the majority of situations, no AI/ML test was performed before the project was sold.

IBM or others will come in and rob your company, with a data scientist billing hours while collecting Pez dispensers on eBay. Then, when the data cannot be acquired, the project can be halted and the resume updated, showing great benefits from the previous AI project.

Justifying Past Investments in AI

There is a real danger of AI being put into place not because it is proven to work, but because all of these investments must be justified. Consulting firms will rig the results of AI projects, and when these models are rolled out, they will be rolled out without sufficient evidence that they work long term. The gigantic AI failure of Watson is something IBM is putting major damage control into sweeping under the rug, as we covered in the article How IBM is Distracting from the Watson Failure to Sell More AI.

Hiding the outcome from AI projects will become the next thing in AI.

Conclusion

IBM, as an example, has 20,000 AI projects ongoing. The vast majority of these projects have failed or will fail, as they were sold on little more than consulting sales hype. IBM has already intimated that it has run into broadscale data problems on many projects, as we covered in the article How Many AI Projects Will Fail Due to a Lack of Data?

As these AI/data science projects fail, a narrative must be developed that obscures this fact, so that the negative experience from these projects does not impact the sale of future AI/data science projects. The first phase of AI was to lie about and oversell the benefits of AI and data science. The second stage will mean covering for AI’s failures.


References

https://www.wsj.com/articles/data-challenges-are-halting-ai-projects-ibm-executive-says-11559035800


Knowing the Improvement from AI Without Knowing the Forecast Error?

Executive Summary

  • It is often stated that AI will greatly improve forecast accuracy.
  • These AI proponents seem to assume that it will be a simple matter to measure the net improvement of AI on forecasting.

Introduction

At Brightwork, we have normally popped the balloon of hype around AI/ML and its projected opportunities to improve forecast accuracy. In this article, we ask a different question: how will companies know whether their expensive AI/ML project is actually improving forecast accuracy?

How Most Companies Fail to Effectively Measure Forecast Error

The degree to which forecast accuracy measurement is performed in practice is generally greatly overestimated. While one can find an enormous number of articles on the best way to measure forecast accuracy, most tend to focus on the measurement math that is used. The reality of how forecast error is actually measured receives far less coverage.

There are several issues that hold back companies from effectively measuring forecast error.

Issues That Hold Back Forecast Error Measurement

 
  1. Limitations in forecasting applications: Most forecasting applications only measure forecast error at the SKU level, and do not allow for measurement across the total product-location database or for weighted forecast errors.
  2. Error metrics: One of the most intuitive forecast error measurements, MAPE, is undermined when there are zeros in the demand history. And zeros are increasingly prevalent in sales histories.
  3. Zero-tolerant error metrics are complex: Error metrics that can tolerate zeros in the demand history (such as sMAPE and MASE) are not intuitive, are complex to calculate, and are often not available within forecasting applications.
  4. Groups exempt from forecast error: Some groups in organizations submit inputs to the final forecast, but are not held accountable for forecast error.
  5. Poor education on forecast error: A basic understanding of forecast error is often lacking within companies. For example, the idea that forecast error changes completely depending upon the forecast bucket and the level in the hierarchy must often be repeatedly explained.
  6. Constant error reporting discrepancies: Sales, marketing, and other groups report forecast error at higher levels of aggregation than supply chain management does. Higher levels of aggregation produce lower forecast errors, giving a false impression of the actual forecast error. For supply chain management, forecast error must be measured at the product-location combination (or SKU). Some supply chain departments also report aggregated forecast error, again to make the error appear better than it is.
  7. A lack of error measurement automation: Because forecast error cannot be calculated with much nuance or customizability within forecasting applications, some automated method of measuring forecast error outside of those applications is necessary. The lack of this capability is often used as an excuse to report forecast error at higher levels of aggregation (see points 5 and 6 for the problems with this).
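To make issues 2 and 3 concrete, here is a minimal sketch, in plain Python with invented demand numbers, of how MAPE breaks down on a history containing zeros while zero-tolerant metrics such as sMAPE and MASE still return a value:

```python
def mape(actual, forecast):
    # Classic Mean Absolute Percentage Error: intuitive, but it divides
    # by actual demand, so any zero-demand period makes it blow up.
    return 100.0 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

def smape(actual, forecast):
    # Symmetric MAPE: the denominator averages |actual| and |forecast|,
    # so it tolerates zero actuals (as long as the forecast is non-zero).
    return 100.0 * sum(
        abs(a - f) / ((abs(a) + abs(f)) / 2.0) for a, f in zip(actual, forecast)
    ) / len(actual)

def mase(actual, forecast, history):
    # Mean Absolute Scaled Error: scales the forecast MAE by the in-sample
    # MAE of a one-step naive forecast; zeros in the history are harmless.
    naive_mae = sum(
        abs(history[t] - history[t - 1]) for t in range(1, len(history))
    ) / (len(history) - 1)
    mae = sum(abs(a - f) for a, f in zip(actual, forecast)) / len(actual)
    return mae / naive_mae

history = [12, 9, 0, 14, 7, 0, 11]   # intermittent demand with zeros
actual = [0, 10, 8]                  # holdout period includes a zero
forecast = [4, 9, 6]

try:
    print(mape(actual, forecast))
except ZeroDivisionError:
    print("MAPE is undefined: zero in the demand history")

print(round(smape(actual, forecast), 1))          # 79.7
print(round(mase(actual, forecast, history), 2))  # 0.27
```

Note that the sMAPE figure is dominated by the single zero-demand period (which contributes its maximum possible value of 200 percent), which illustrates why the zero-tolerant metrics, while calculable, are not intuitive.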

The Problem with Starting AI/ML Projects Without the Forecast Error Worked Out Beforehand

AI, or any other method used to improve the forecast, requires a consistent and agreed-upon way of measuring forecast error. Yet most AI projects are begun before this is in place. When executives hear about AI, they often get excited and become more willing to open their wallets. IBM’s AI consulting division recently reported 20,000 ongoing AI projects (some of those are outside of forecasting, but many are in forecasting). And how many forecast error improvement projects does IBM have underway?

We would estimate very few.

Like IBM, Deloitte clearly wants to sell you an AI project. How about a forecast error measurement project? Not so much. AI is hot. Forecast error measurement is decidedly “not.” 

Delving into forecast error improvement does not excite anyone. It can lead to eye-rolling, to a concern by executives that they will be held responsible for problematic forecasting inputs if the measurement is too effective, and to a general disdain for the mathematics of forecast error measurement. It certainly is not very cinematic.

How can any AI forecast improvement project be approved without a solid and proven forecast error measurement already in place?

Forecast Error Questions for Which the Company Should Already Have Answers

A high percentage of companies that have kicked off AI forecast improvement projects most likely do not have answers to these and other questions around forecast error measurement:

  • Is the company going to report on the basis of a weighted forecast error?
  • How will the forecast error be used to drive forecast improvement?
  • Will low volume SKUs with a high forecast error be targeted for more improvement effort than high volume SKUs?
  • What is the mechanism for defining some of the product location database as “unforecastable?”
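As one illustration of the first question, here is a sketch of the difference between an unweighted and a weighted forecast error. All the numbers are invented, and weighting by revenue is our own assumption for the example; volume or margin weighting would work the same way:

```python
# Each tuple: (product-location, absolute forecast error, actual demand, unit price).
skus = [
    ("A-DC1", 50, 1000, 2.0),   # high-volume, inexpensive item: 5% error
    ("B-DC1", 40,   80, 90.0),  # low-volume, expensive item: 50% error
]

# The unweighted average percentage error treats every
# product-location combination equally.
unweighted = 100.0 * sum(err / dem for _, err, dem, _ in skus) / len(skus)

# A revenue-weighted error gives each product-location influence in
# proportion to the money flowing through it.
total_revenue = sum(dem * price for _, _, dem, price in skus)
weighted = 100.0 * sum(
    (err / dem) * (dem * price) / total_revenue for _, err, dem, price in skus
)

print(round(unweighted, 1))  # 27.5
print(round(weighted, 1))    # 40.2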

Without answers to these and other questions, what is the point of kicking off an AI project?

Conclusion

Without defining the measurement schema, AI forecast improvement projects are pointless. In fact, most of them are fruitless in any case and filled with exaggerated promises by both software vendors and consulting firms eager to ride the AI bubble to revenue enhancement.

Without a firm forecast error measurement schema, these projects are doubly undermined out of the gate: guaranteed to waste money and to distract from real forecast improvement.

Let us recall, forecast error measurement should be the easier task. If one can’t develop a forecast error measurement schema, what hope can there be to master the far more complex and speculative AI forecast improvement project?

Search Our Other Forecasting Content

Research Contact

  • Interested in Accessing Our Forecasting Research?

    The software space is controlled by vendors, consulting firms, and IT analysts who often provide self-serving and incorrect advice at top rates.

    • We have a better track record of being correct than any of the well-known brands.
    • If this type of accuracy interests you, contact us and we will be in touch.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products’ forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach to forecast error management. The Brightwork Explorer does not calculate MAPE; instead, it calculates a monetized forecast error improvement from one forecast to another. We calculate that value for every product-location combination, and the comparison can be between any two forecasts you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • The first forecast can be the statistical forecast and the second the statistical-plus-judgment forecast.

It’s up to you.
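To illustrate the general idea of comparing two forecasts per product-location in money terms, here is a minimal sketch. The formula used (absolute error reduction multiplied by unit cost) and all of the numbers are our own illustrative assumptions, not the actual Brightwork Explorer calculation:

```python
rows = [
    # (product-location, actual, forecast_1, forecast_2, unit cost)
    ("A-DC1", 100, 130, 110, 5.0),
    ("B-DC2",  40,  35,  42, 20.0),
]

# Monetized improvement: how many fewer units of error forecast 2
# produces than forecast 1, valued at the unit cost.
results = {}
for name, actual, f1, f2, cost in rows:
    improvement_units = abs(actual - f1) - abs(actual - f2)
    results[name] = improvement_units * cost

print(results)  # {'A-DC1': 100.0, 'B-DC2': 60.0}
```

A positive number says the second forecast is worth money at that product-location; summing over all product-locations gives a single monetized figure for the forecast change.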

The Brightwork Forecast Explorer is free to use in the beginning. See it by clicking the image below:

Foresight Forecast Conference

References

https://www.wsj.com/articles/data-challenges-are-halting-ai-projects-ibm-executive-says-11559035800

Sales Forecasting Book


Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecasts are poorly integrated; in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented, which comes down to those that best improve forecast accuracy.

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always on the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion

Why AI and ML Are So Overrated for Forecasting

Executive Summary

  • In 2019 we are deep into a large AI bubble, and little that is published around AI is true.
  • We cover the reasons that AI is so overrated.

Introduction

In this article, we will cover the major reasons why the AI bubble is out of control and is going to pop.

Problem #1: Lack of Understanding of AI

It is apparent from reading material on many websites and reviewing conference presentations that a sizable population writing and speaking on AI does not know how it works and has never done it themselves. The following is a quote from Forrester on AI.

“Those barriers describe how AI will play out in 2019, when companies will claw their way out of data debt, to some extent because of GDPR and escalating security concerns. Combined with intelligent tools that move data governance to a more ambient and contextual state, most firms will turn the corner on data governance thanks to AI. Firms will also expand RPA and proofs of concept to broaden the process, product, or experience scope and better understand the impact of AI. RPA and AI technology innovations will combine to create business value while serving as a testbed for broader implementations of AI. In addition, a fledgling supply-side market will surface for explainable AI to broker the distance between enthusiasm and complex machine learning.”

This assumes that AI works. How many people are simply making this assumption?

And this leads to a second issue.

Problem #2: The Assumption That AI Always Fixes Problems

A large segment of the population in the IT space has dropped their guard with regards to AI. AI is simply assumed to be effective, without evidence being requested that it is effective. And how does everyone seem to know that AI is effective? Well, a large number of people are writing about AI. This is not evidence that it works. There is a fundamental misunderstanding as to what AI/ML can do. There are just too many people not testing the math before making the claims.

Problem #3: Allocating Any Improvement to AI Without Asking the Question of Whether AI Was the Best Approach to Use

If a minor observation is made, it is assumed that no method other than AI could have arrived at the conclusion. However, AI (which is mostly just ML algorithms that have been around for decades) is one of the highest-overhead ways to arrive at an insight. Many of these companies are going to quit AI: it’s too much work, and the results are spotty.

This is illustrated by the following quotation regarding IBM.

On IBM AI

“Many ambitious artificial intelligence-backed projects never come to fruition due in large part to issues with data collection and cleaning, according to Arvind Krishna, PhD, IBM’s senior vice president of cloud and cognitive software.

During an interview with The Wall Street Journal earlier this month, Dr. Krishna noted that a common reason projects using IBM Watson AI often unravel is that companies are unprepared for the amount of time and money they must spend just collecting and preparing data. Those unglamorous yet crucial tasks, he said, make up approximately 80 percent of an entire project.”

Dr. Krishna goes on…

“You run out of patience along the way, because you spend your first year just collecting and cleansing the data,” he said. “And you say, ‘Hey, wait a moment, where’s the AI? I’m not getting the benefit.’ And you kind of bail on it.”

Problem #4: Used As a Justification for Big Data Investments

For years, vendors and consulting companies told customers to accumulate large amounts of data. However, data itself has no value; one has to be able to derive insights from it, and there has been a great overselling of the benefits of Big Data. The overselling is even starker given that companies have major challenges mastering “Small Data,” such as being able to forecast from a sales history. Why did these vendors and consulting firms think the following:

  1. That all companies would have opportunities to improve predictability from Big Data?
  2. That all companies would be able to master the accumulation of large amounts of data?
  3. That ML algorithms would be worth the effort and would outperform “Small Data” forecasting?
  4. That all companies would be able to master ML algorithms?

Now, if these AI/data science projects don’t work, what does that say about the expensive Big Data projects that companies invested all that money in? That’s right: much of it was wasted.

Conclusion

The industry is not asking the right questions or analyzing the lack of positive outcomes from AI projects. One showcase example of this is IBM Watson.

  • IBM has been lying about Watson AI for over ten years.
  • IBM has spent billions on Watson and had not only problems understanding how to train Watson to solve medical research problems, but also failed to harmonize different data sets.
  • The curious thing is that IBM continues to sell AI projects. IBM claims to have 20,000 AI projects ongoing. However, these projects have been sold on false promises.
  • IBM does not possess any AI capabilities that other entities in the space do not possess, and the field of AI is filled with false claims.

Even if a company does employ many people who are familiar with how to run the major AI/ML algorithms, there is little evidence that these algorithms work. There are further problems with formatting data as it turns out that data lakes are even more difficult to convert into a usable form than previously thought.

All of this occurs in an environment where far more proven methods of forecasting often languish due to a lack of funding, unable to match the promises and “sexiness” level of AI.

A Hypothesis No AI Company Wants Tested

Even significantly into the AI bubble, there is as yet little evidence that AI meets the hype. Every time an example of AI failing is found, industry sources that make money on AI tell observers that the failure is not relevant. For example, with IBM Watson, IBM had over 10 years and enormous resources and was not able to make a useful product in the AI space. Yet the right questions have not been asked as to why IBM failed so badly at Watson, and what that failure means for AI generally.


References

https://www.wsj.com/articles/data-challenges-are-halting-ai-projects-ibm-executive-says-11559035800

How IBM is Distracting from the Watson Failure to Sell More AI

Executive Summary

  • IBM has become a major provider of false information around AI.
  • Central to IBM’s AI sales strategy is to distract attention from its colossal Watson failure.

Introduction

IBM has sold many AI projects after making enormous promises about Watson AI that never panned out.

Watson’s Failure at M.D. Anderson

“We often call out overly optimistic news coverage of drugs and devices. But information technology is another healthcare arena where uncritical media narratives can cause harm by raising false hopes and allowing costly and unproven investments to proceed without scrutiny.

A case in point is the recent collapse of M.D. Anderson Cancer Center’s ambitious venture to use IBM’s Watson cognitive computing system to expedite clinical decision-making around the globe and match patients to clinical trials.

Launched in 2013, the project initially received glowing mainstream media coverage that suggested Watson was already being deployed to revolutionize cancer care–or soon would be.

But that was premature. By all accounts, the electronic brain was never used to treat patients at M.D. Anderson. A University of Texas audit reported the product doesn’t work with Anderson’s new electronic medical records system, and the cancer center is now seeking bids to find a new contractor.

IBM spun a story about how Watson could improve cancer treatment that was superficially plausible – there are thousands of research papers published every year and no doctor can read them all,” said David Howard, a faculty member in the Department of Health Policy and Management at Emory University, via email. “However, the problem is not that there is too much information, but rather there is too little. Only a handful of published articles are high-quality, randomized trials. In many cases, oncologists have to choose between drugs that have never been directly compared in a randomized trial.

Forbes ran a blog headlined “IBM’s Watson Now Tackles Clinical Trials At MD Anderson Cancer Center.” Forbes stated use in patient care “might come in early 2014.” It quoted an M.D. Anderson doctor saying: “It’s still in testing and not quite ready for the mainstream yet, but it has the infrastructure to potentially revolutionize oncology research.”

Likewise Scientific American asserted: “The University of Texas M.D. Anderson Cancer Center is using Watson to help doctors match patients with clinical trials, observe and fine-tune treatment plans, and assess risks as part of M. D. Anderson’s ‘Moon Shots’ mission to eliminate cancer.”

While IBM has entered into numerous deals to use its artificial intelligence system in healthcare, a company spokeswoman said there’s no published study linking the technology to improved outcomes for patients because “the implementation of the technology is not there yet.”

“Artificial intelligence has been suffering from overhype since the 1970s and 80s,” said Steven Salzberg, a professor of biomedical engineering at the Johns Hopkins School of Medicine.

Sixty-two million dollars was spent on Watson by the University of Texas before the contract was canceled. All of the information we have obtained from other sources about Watson indicates that Watson does not add value and that IBM lies about what Watson can do.

This has had real impacts on the usage of Watson as the following quote from the Wall Street Journal explains.

“More than a dozen IBM partners and clients have halted or shrunk Watson’s oncology-related projects. Watson cancer applications have had limited impact on patients, according to dozens of interviews with medical centers, companies, and doctors who have used it, as well as documents reviewed by the Wall Street Journal.”

What Does Watson Tell Us About IBM’s Honesty on AI?

This introduction to Watson describes things that have not occurred and that Watson has not accomplished as if they have already been accomplished. 

The following quotes are from Gizmodo, in an article titled Why Everyone Is Hating on IBM Watson—Including the People Who Helped Make It.

Does Watson Offer a Real Benefit, or Just a Brand?

“Ed Harbour, vice president of Implementation at IBM Watson believes Watson is still unique in its field. “Are there other companies out there that offered AI-based systems and machine learning? Yes, there are,” he said. “However…I believe very strongly Watson is ahead of the competition and we’ve got to continue to push [to make Watson better]. No, I don’t think it’s something that anybody can just do.”

But according to Perlich, data scientists who want to create similar platforms as Watson could possibly pull from various offerings from the likes of Microsoft Azure, Amazon Web Services, or Data Ninja. But what those products don’t offer is the Watson branding. “And everybody’s very happy to claim to work with Watson,” Perlich said. “So I think right now Watson is monetizing primarily on the brand perception.””

That does not seem real, and it is not an argument for differentiation. If IBM has invested at least $1 billion into Watson, why isn’t there differentiation? According to Reuters in 2014, this was the level of investment.

“Jamie Popkin, managing vice president at research firm Gartner, said IBM’s technology significantly improved how information can be used and managed. “I think they’ve developed something that takes us to the next step where information management needs to go,” said Popkin.

IBM said it decided to establish the unit because of strong demand for cognitive computing.

“We have reached the inflection point where the interest is overwhelming and we recognized we need to move faster,” said Stephen Gold, vice president of Watson Business.

Watson will be deployed on Softlayer, the cloud computing infrastructure business IBM bought last year.

According to Gartner, by next year there will likely be a large and growing market for Watson-derived smart advisors and it said that Crédit Agricole predicted that these systems will account for more than 12 percent of IBM’s total revenue in 2018.”

Curiously, none of this actually came to pass, and Gartner once again failed on a prediction. One wonders whether being paid by IBM may have influenced this accuracy level.

Watson as the Donald Trump of AI?

“IBM Watson is the Donald Trump of the AI industry—outlandish claims that aren’t backed by credible data,” said Oren Etzioni, CEO of the Allen Institute for AI and former computer science professor. “Everyone—journalists included—know[s] that the emperor has no clothes, but most are reluctant to say so.”

Etzioni, who helps research and develop new AI that is similar to some Watson APIs, said he respects the technology and people who work at Watson, “But their marketing and PR has run amok—to everyone’s detriment.”

This is a delicate way of saying that IBM has been habitually lying about Watson.

IBM’s Moonshot and Curing Cancer?

“The designer thinks that false hope came from the Watson ads. For instance, one commercial depicts two doctors in a rural hospital that can do genomic analysis thanks to an intelligent black box that advises the doctors. In another commercial a soon-to-be seven-year-old talks to a fictional square about how she’s not sick anymore. After Watson reads her health data she asks if Watson is a doctor. “No, I help doctors identify cancer treatments.” Watson responds, as the copy on the screen reads: “IBM Watson is helping doctors outthink cancer, one patient at a time.”

“Outthink cancer” is deceptively vague. Rometty was even more vague in a 2015 Wall Street Journal interview. “We will change the face of health care,” Rometty told writer Monica Langley. “If you think solving cancer is cool, then we’re cool.””

This sounds very deceptive. Has IBM Watson cured cancer? If it has, IBM should come out and say this. It is unclear what outthinking or solving means. Has IBM solved cancer?

Ethical Problems with IBM’s Claims Around Watson?

The experience meeting the hopeful patient made the designer view the company in an entirely different light. “I would not put money on Watson helping patients on a grand scale,” the designer said. “IBM needs to be held accountable for the image that it’s producing of its successes compared to what they’re actually able to deliver, because at a certain point it becomes an ethical issue…You’re telling cancer patients that they should have a higher feeling of hope about their outcome and then under-delivering on that—to me, that’s just dirty.”

It seems these ethics would apply outside of medicine as well. It is odd that lying is often considered harmless when a person’s health is not put at risk, but a serious problem when it is. This seems to translate into ethics applying only to software sold to the medical industry.

“Another former employee who worked as a design researcher lead at Watson for Oncology also said they were uncomfortable with how commercials portrayed the platform. “You watch those commercials and you think it’s finding new ways to cure cancer,” the designer said. “Why confuse people and make them think it’s going to find something that a physician couldn’t possibly find?… Then you’ve moved into what strikes me as unethical territory when you’re potentially giving hope to people who should never have placed hope in that kind of a system because it’s not a magical box that does that stuff. It’s not a god.””

Did IBM Make AI Mainstream?

“Now, thanks largely to IBM, it is no longer a risk for tech companies to focus on AI. Rather, it is a risk to ignore it. But because IBM wanted consumers to take it seriously in the early days, the company came up with its own flashy, imprecise branding for the fantastic new technology. As other companies have started investing heavily in AI in a time when it’s safer to do so, IBM has stayed on the same course, and Watson is trapped in the same black box.”

Yes, but if IBM hyped the AI market based upon false claims, wouldn’t this be a negative? Other companies may have become more accepting of AI because they did not sufficiently analyze the claims made by IBM.

Conclusion

IBM has been lying about Watson AI for over ten years. It has spent billions on Watson and had not only problems understanding how to train Watson to solve medical research problems, but also failed to harmonize different data sets. The curious thing is that IBM continues to sell AI projects. IBM claims to have 20,000 AI projects ongoing. However, these projects have been sold on false promises. IBM does not possess any AI capabilities that other entities in the space do not possess, and the field of AI is filled with false claims. Even if a company does employ many people who are familiar with how to run the major AI/ML algorithms, there is little evidence that these algorithms work. There are further problems with formatting data as it turns out that data lakes are even more difficult to convert into a usable form than previously thought.

All of this occurs in an environment where far more proven methods of forecasting often languish due to a lack of funding, unable to match the promises and “sexiness” level of AI.

A Hypothesis No AI Company Wants Tested

Even significantly into the AI bubble, there is as yet little evidence that AI meets the hype. Every time an example of AI failing is found, industry sources that make money on AI tell observers that the failure is not relevant. IBM had over 10 years and enormous resources and was not able to make a useful product in the AI space, yet it has promised its clients, who have far less to spend on such projects, that they will benefit immensely from hiring IBM to implement AI projects.

Why would these clients be able to accomplish this, if IBM itself went down in flames on its own internal AI project?

IBM previously won our Golden Pinocchio Award for lying about Watson.


References

https://www.forbes.com/sites/adrianbridgwater/2019/06/04/ibm-injects-data-science-ai-into-its-db2-database/#239fa22d1d0a

https://gizmodo.com/why-everyone-is-hating-on-watson-including-the-people-w-1797510888

https://www.wsj.com/articles/ibm-bet-billions-that-watson-could-improve-cancer-treatment-it-hasnt-worked-1533961147

https://www.reuters.com/article/us-ibm-watson/ibm-to-invest-1-billion-to-create-new-business-unit-for-watson-idUSBREA0808U20140109

https://newrepublic.com/article/83337/ibm-watson-computer-jeopardy

https://www.healthnewsreview.org/2017/02/md-anderson-cancer-centers-ibm-watson-project-fails-journalism-related/

Comment from this article:

“Watson’s win on Jeopardy wasn’t as straightforward as everyone thinks. Contrary to public perception, Watson has never had a speech interface. So for Jeopardy the questions were submitted in written form to Watson. However, the way the game was played, Watson received the question as soon as Alex Trebek began reading the question to the other contestants. With the speed that computers process information this meant that Watson had something like an hour to contemplate the question before the other contestants had finished hearing the last words. With this type of advantage it’s no surprise that Watson won. And IBM’s marketing department has taken that golden ring and run with it ever since.”

https://seekingalpha.com/article/4080310-artificial-intelligence-retrospective-analysis-ibm-2017-q1-earnings-call

The Similarity Between Consulting Firms and Phone Sex Operators

Executive Summary

  • Vendors and consulting firms have been aggressively selling and making exaggerated claims for AI.
  • Consulting firms seem to be able to switch to whatever is “hot” at the time with amazing speed.

Introduction

We are now some way into the AI/ML bubble. What is curious is how many companies claim to have deep AI expertise.

Let us review some of them.

Getting Your AI From Wipro

Wipro, a firm not known for forecasting, is now your one-stop shop for AI.

Getting Your AI From Infosys

Infosys is another AI expert. So many AI experts to choose from among the giant IT consulting firms.

Getting Your AI From Capgemini

This video from Capgemini is filled with inaccuracies, but if it does not “jack you up on AI,” it is unclear what will.

As with Wipro and Infosys, Capgemini is a non-entity in the forecasting space, but that does not stop them from producing a killer video. The proliferation of AI “expertise” within the large consulting firms happened very quickly. Can readers of this article please reach out to us if they find a major consulting firm that does not claim deep expertise in AI?

We are trying to find just one that is not a world leader in AI.

The speed with which consulting firms have added AI is pointed out by the following quotation.

“Did you notice lately that every software vendor is suddenly an AI expert? Go back 3 years and you would never hear them even mention AI. None of their existing products use AI but suddenly they are AI experts!” – Ahmed Azmi

The Level of Concern with the Outcome of AI Projects

One thing is clear: consulting firms do not care about the outcome of AI projects. How can we say this? Because their websites are filled with false claims around AI, and their intent is simply to obtain AI projects. The consulting firm’s version of success is very simple, and it is expressed by IBM.

When questioned about IBM’s success in AI, IBM’s Arvind Krishna responded defensively with the following quotation.

““I think 20,000 is not slow,” he said. “I think 20,000 projects is, what I would call, successful.””

This brings up the following questions:

  • How does IBM have 20,000 ongoing AI projects?
  • Successful for whom, the customer or IBM?

IBM certainly sees this as a success, but IBM only cares about billing hours on projects. By this definition, even AI projects where hours are billed but no work is done are considered successful by the consulting company. IBM’s clients, however, do not measure success by IBM’s metric: customers that invest in AI measure the benefit by how AI improves the accuracy of their various predictions.

The idea that IBM would have so many AI projects ongoing, and that there would be so little published about the benefits of AI received by companies, is odd. Does IBM or Accenture explain the speculative nature of the AI projects they are selling to customers?

This is unlikely. This means that consulting companies promote an enormous amount of waste by selling whatever service happens to be hot at the time without any concern for whether the service actually accomplishes anything.

Conclusion

The major consulting companies can't bring their clients information about what works, because they are fundamentally trying to scam them. If your consulting company does not care about your outcomes as a client, why would they care about applying anything that actually worked? Observe how Dr. Krishna measures success: by how many consultants IBM is billing for AI projects.

This brings up the question of who is really trying to accomplish things with AI. 

"The evidence is that the few companies with viable ML offerings are online consumer companies: Amazon.com, Google, and Microsoft. They wrapped an API around their own IP and monetized their DATA. What makes Google's image, voice, and video recognition work is the massive size and quality of their data used to train their models. Data collected from apps like YouTube and platforms like Android." – Ahmed Azmi

This is a very good way of differentiating who is serious about AI. AWS/GCP/Microsoft are actually trying to do something (although I think AWS/GCP are much better examples), while the consulting firms are just trying to rip companies off. This entails getting prospects excited about AI, or whatever the next hot topic is, by making exaggerated claims about the item and about their own capabilities with it.

Some consulting companies have things they need to sell. The quota is based upon selling those things. Armies of consulting salespeople line up to sell whatever is the hot item of the day.

It is difficult not to see the parallels between consulting firms and phone sex operators. They will say sexy things to you for $4.99/minute. And like phone sex operators, they can be "whoever you want them to be." There are parallels here to a recent movie, Sorry to Bother You, about telemarketers putting on fake voices.

Right now these consulting firms are saying "AI," and tomorrow they will say something else. As long as they can keep charging $4.99/minute.

Search Our Other Forecasting Content

Research Contact

  • Interested in Accessing Our Forecasting Research?

    The software space is controlled by vendors, consulting firms, and IT analysts who often provide self-serving and incorrect advice at top rates.

    • We have a better track record of being correct than any of the well-known brands.
    • If this type of accuracy interests you, contact us and we will be in touch.

Brightwork Forecast Explorer for Monetized Error Calculation

Improving Your Forecast Error Management

How functional is the forecast error measurement in your company? Does it help you focus on which products' forecasts to improve? What if the forecast accuracy can be improved, but the product is an inexpensive item? We take a new approach in forecast error management. The Brightwork Explorer does not calculate MAPE; instead, it calculates a monetized forecast error improvement from one forecast to another. We calculate that value for every product-location combination, and the inputs can be any two forecasts you feed the system:

  • The first forecast may be the constant or the naive forecast.
  • Or the first can be the statistical forecast and the second the statistical + judgment forecast.

It’s up to you.
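The monetized-error comparison described above can be sketched in a few lines. This is an illustrative sketch only, not the Brightwork Explorer's actual implementation; the unit-cost weighting, the absolute-error measure, and all the sample figures are assumptions made for the example.

```python
def monetized_error_improvement(actuals, forecast_a, forecast_b, unit_cost):
    """Weight each product-location's absolute forecast error by a monetary
    value, then compare two forecasts. All inputs are dicts keyed by
    (product, location). Returns the monetized improvement of forecast_b
    over forecast_a per key (positive = forecast_b is worth money)."""
    improvement = {}
    for key, actual in actuals.items():
        err_a = abs(actual - forecast_a[key]) * unit_cost[key]
        err_b = abs(actual - forecast_b[key]) * unit_cost[key]
        improvement[key] = err_a - err_b
    return improvement

# A cheap item forecast badly can matter less than an expensive item
# forecast slightly worse -- which a plain MAPE comparison would hide.
demo = monetized_error_improvement(
    actuals={("cheap", "dc1"): 100, ("dear", "dc1"): 10},
    forecast_a={("cheap", "dc1"): 60, ("dear", "dc1"): 10},
    forecast_b={("cheap", "dc1"): 90, ("dear", "dc1"): 8},
    unit_cost={("cheap", "dc1"): 0.10, ("dear", "dc1"): 500.0},
)
```

In this invented example the second forecast is far closer on the cheap item, yet it destroys value overall, because its small miss on the expensive item dominates once errors are monetized.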

The Brightwork Forecast Explorer is free to use in the beginning.

References

https://www.accenture.com/_acnmedia/PDF-85/Accenture-Understanding-Machines-Explainable-AI.pdf#zoom=50

How Many IBM and Other AI Projects Will Fail Due to a Lack of Data?

Executive Summary

  • Vendors and consulting firms have been aggressively selling AI in forecasting software and AI projects.
  • Customers are finding something curious about these ongoing projects.

Introduction

We are now some way into the AI/ML bubble. What are AI projects finding, to their dismay? A lack of data for running AI/ML.

Quotes from IBM on AI Projects

“Many ambitious artificial intelligence-backed projects never come to fruition due in large part to issues with data collection and cleaning, according to Arvind Krishna, PhD, IBM’s senior vice president of cloud and cognitive software.

During an interview with The Wall Street Journal earlier this month, Dr. Krishna noted that a common reason projects using IBM Watson AI often unravel is that companies are unprepared for the amount of time and money they must spend just collecting and preparing data. Those unglamorous yet crucial tasks, he said, make up approximately 80 percent of an entire project."

This quote is problematic on multiple dimensions.

Breaking Out the Watson Quote from IBM’s Overall AI Projects

Watson has been a failed product for IBM. It is AI directed at health care, which is still essentially non-functional after billions of dollars and over a decade of investment. However, this article is not about Watson (we have quotes about the problems with IBM Watson in the references); this quotation is about AI writ large. But it is curious that with Watson, IBM apparently ran into its own data problems, as the following quote describes.

“The employees said there was never clear agreement, for example, on how to merge data gathered by the three companies into a unified format that could be used by Watson. That made it more difficult to deliver insights to help hospitals target medical services to specific patients, cut costs, and improve the quality of care.

"With this acquisition, IBM will be one of the world’s leading health data, analytics and insights companies, and the only one that can deliver the unique cognitive capabilities of the Watson platform," Deborah DiSanzo, general manager for IBM Watson Health, said in a statement following the Truven acquisition.

But the deals presented the difficult task of harmonizing all that data – housed in different formats, and focused on different aspects of patient care – into a model that could be digested by Watson, a challenge that is not unique to IBM.” – STAT

Perhaps IBM is not the company to rely upon for "spiffing up" your data for your AI project, as it is now quite clear they were not able to figure out how to do this for their own internal project, for which they had more resources than any individual customer AI project will likely ever match. IBM Watson is a specific health-care-focused AI solution. However, IBM also appears to apply the Watson name to AI unrelated to that specific offering, which is of course confusing.

Having said that, let us review this portion of the quote from Dr. Krishna.

On IBM AI

“often unravel is that companies are unprepared for the amount of time and money they must spend just collecting and preparing data.”

When IBM sold the project, did they explain the level of effort this would take? This quote makes it sound like someone else, whom IBM does not communicate with, is selling AI projects that IBM consulting then has to work. Is Dr. Krishna aware that his own IBM sales team is communicating with these same customers before the IBM AI project begins?

Dr. Krishna goes on…

“You run out of patience along the way, because you spend your first year just collecting and cleansing the data,” he said. “And you say, ‘Hey, wait a moment, where’s the AI? I’m not getting the benefit.’ And you kind of bail on it.”

Questions Related to this Quotation

 
  1. Setting Customer Expectations: Was the data effort explained by IBM to customers? Has IBM ever oversold the benefits of AI and undersold the work effort required to get the data into a state where it can be used by AI algorithms?
  2. How Long Until Data Begins to Be Usable? Does usable data appear after the first year, or is that just the starting point?
  3. What is the Efficacy of the ML Algorithms? What about IBM AI projects that are sold on a promise of AI providing great improvements in forecasting accuracy which then, after the algorithms are run, don't, and it turns out the entire premise of the project was flawed?
  4. Forecasting AI Project Benefits: If the data is not close to being ready to run AI/ML algorithms, on what basis is IBM forecasting AI benefits to specific customers?

The question of underselling the data effort and overselling the benefits of AI is all-important because IBM has routinely oversold its Watson solution, as the following quotation attests.

"But it also earned ill will and skepticism by boasting of Watson's abilities. 'They came in with marketing first, product second, and got everybody excited.'" – Robert Wachter, chair of the department of medicine at the University of California, San Francisco

and

“Robert Burns, a professor of health care management at the University of Pennsylvania’s Wharton School, said the complexity of integrating mis-matched data sets has vexed hospitals and other health care entities for decades. It is folly, he said, for IBM, or any company outside the industry, to suggest the problem can quickly be solved to cure terminal diseases or dramatically improve health care delivery.” – STAT

And of course, this is in no way limited to IBM. It is difficult to find a consulting company in IT that is not making outrageous claims around AI. In fact, let us review several.

Getting Your AI From Wipro

Wipro, a firm not known for forecasting, is now your one-stop shop for AI.

Getting Your AI From Infosys

Infosys is another AI expert. So many AI experts to choose from among the giant IT consulting firms.

Getting Your AI From Capgemini

This video from Capgemini is filled with inaccuracies, but if it does not "jack you up on AI," it is unclear if anything will.

As with Wipro and Infosys, Capgemini is a non-entity in the forecasting space, but that does not stop them from producing a killer video.

IBM’s AI Projects Tend to Fizzle Out?

"Still, Dr. Krishna maintained that the fairly common occurrence of halted AI projects is 'the nature of any early technology.' Even as so many fizzle out, IBM still has about 20,000 more ongoing AI projects, a number that he deemed indicative of overall success."

There is a serious problem with Dr. Krishna’s statement here. This is because AI is not new. Is Dr. Krishna unaware of this fact?

AI has failed to produce results in at least two separate historical AI bubbles (one in the 1960s and early 1970s, another in the 1980s), each followed by an "AI winter." Many of the people working in data science/AI are not even aware of these previous bubbles, and how far back AI goes surprises most people we discuss this topic with.

“Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true.

Eventually, it became obvious that they had grossly underestimated the difficulty of the project. In 1973, in response to the criticism from James Lighthill and ongoing pressure from congress, the U.S. and British Governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an “AI winter“.” – Wikipedia

For those of you who have not tried SodaStream, you really should. It can not only add fizz to new drinks but give that "sparkling quality" back to drinks that have gone flat. The problem? As of yet, there is no SodaStream for AI projects.

To review a portion of the quote from Dr. Krishna.

“Even as so many fizzle out, IBM still has about 20,000 more ongoing AI projects, a number that he deemed indicative of overall success.”

And when questioned about IBM’s success in AI, he responded defensively with the following quotation.

"I think 20,000 is not slow," he said. "I think 20,000 projects is, what I would call, successful."

This brings up the following questions:

  • How does IBM have 20,000 ongoing AI projects?
  • Successful for whom, the customer or for IBM?

IBM certainly sees this as a success, but IBM only cares about billing hours on projects. By this definition, even AI projects where hours are billed but no work is done are considered successful by the consulting company. However, IBM's clients do not measure success by IBM's metric: customers that invest in AI measure the benefit by how much AI improves the accuracy of their various predictions.

The idea that IBM would have so many AI projects ongoing, and that there would be so little published about the benefits of AI received by companies, is odd.

Another question is why IBM is placing data science resources on site and billing for them if the data is largely unavailable and may take a year or more to develop. Would IBM sell an automobile service plan to a customer that has yet to purchase an automobile? It seems an elementary question to ask what data the client has that can be used. Without this, IBM has no idea whether its client can benefit from an AI project.

The AI Project Preparedness Matrix

This topic of data availability brings up the question of how common it is for companies that engage in AI projects to have the necessary items to actually pull such projects off successfully.

To evaluate this, below are the individual estimates of the author and three other experienced resources in forecasting and ML/AI.

The Implications of the Poll

If this poll is roughly representative, it means that AI projects are begun with a very small likelihood of success. AI projects have been ongoing for a number of years now, and given these estimates, it is easy to project very high rates of failure. When these failures do happen, they will be hidden by consulting firms and vendors. And it will take far longer to find out the real story about the outcomes of these projects.

The question arises: how can an entire bubble be based upon AI if such a small percentage of companies have the ability to be successful with these projects?

What Happened to Data Lakes?

For the better part of a decade, companies were told to throw large amounts of unstructured data into data lakes. The idea was that data was now accumulating so quickly that there was no time to organize it. NoSQL is hot, it's happening, it's now. The point was to accumulate almost as much as you could; the data scientists would come by later and sort everything out after it was collected. Unstructured or semi-structured data was seen almost as a virtue.

However, it has taken years to assemble this data, and now that the time has come to use it, lengthy projects are required to make it usable. Was the projection about the benefits of just collecting data and worrying about organizing it later actually justified, or was this waste?

Companies like IBM love charging for data lake projects. It allows them to talk up the future potential that will be released by AI. However, if Dr. Krishna is correct, these data lakes may not be as valuable as first proposed.

This is attested to by the following quotation.

“Data lakes promised to be the next generation of data warehouses, a central place to dump all of a company’s data. Unlike the warehouse, however, data lakes allow companies to dump data into the lake without ordering it beforehand. The problem with this approach, however, is that it simply delays the inevitable need to make sense of that data.”

Dataversity stated that 2019 is the year when companies begin “draining the data lake.” Data lakes did not appear that long ago, and we are draining them already?

Conclusion

The quotation from Dr. Krishna is misleading. Let us review some of the many issues in just a few lines of quotations from Dr. Krishna.

Issues with Dr. Krishna/IBM's Quotes

  1. Misrepresentation of IBM Watson: IBM Watson is not a successful product. In fact, Watson has failed quite heavily and left a litany of dissatisfied customers that IBM does not acknowledge. IBM failed at its own internal data integration project, leading in part to Watson's downfall.
  2. Confusion or Commingling of Watson with IBM AI: Watson is not the same as IBM AI, or an IBM AI project.
  3. AI's Development: AI is not new. This leads to the natural question of why Dr. Krishna would state that it is. Do Dr. Krishna and IBM sales mislead prospects by repeating that AI is new in order to minimize and deflect from AI's true history?
  4. Responsibility for Setting Sales Expectations: Dr. Krishna describes a scenario where IBM has no responsibility for explaining the effort of investing in data development to IBM's AI customers. It is difficult to believe that IBM properly apprises customers of these difficulties. Therefore, it fits Dr. Krishna's incentives to state that "customers don't seem aware," when IBM puts informing them second to selling AI projects.
  5. Measuring AI Success: Dr. Krishna seems to measure AI success by how many IBM AI projects are ongoing, rather than by how successful those projects are at delivering benefits.

The Otherworldly Claims of AI

Consulting firms are making large and unsubstantiated claims around AI. Consulting firms with no background in either AI or forecasting are making world-changing claims about their AI capabilities, and the claims appear to be uniform.

  • AI is being proposed to defeat other methods in an almost universal manner, all without evidence this is true.
  • AI is becoming homogenized to improve just about everything. AI’s benefits are claimed to be so universal, that in short order it will be challenging to declare what is not an improved outcome of applying AI.
  • Many companies that eventually do assemble their multivariate data will find that in a high percentage of cases the AI/ML is not able to show benefit versus far simpler and less expensive forecasting techniques. Dr. Krishna states the following.

“In the world of IT in general, about 50% of projects run either late, over budget or get halted. I’m going to guess that AI is not dramatically different.”

Not all IT projects have the same success rate, which is something else Dr. Krishna should know. AI projects, because they are so strongly based upon false claims, will have a much higher failure rate than 50%. In fact, the AI Project Preparedness Matrix above indicates that most AI projects are sold into companies that don't have the ability to successfully complete them.
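Whether an AI forecasting project "shows benefit versus far simpler techniques" can be tested directly by benchmarking the model against the naive forecast (each period predicted by the prior period's actual). The sketch below is a generic version of that check, not any vendor's method; the function names and the sample numbers are invented for illustration.

```python
def mae(forecast, actuals):
    """Mean absolute error over paired forecast/actual values."""
    return sum(abs(f - a) for f, a in zip(forecast, actuals)) / len(actuals)

def beats_naive(model_forecast, actuals, last_history_value):
    """Compare a model's error against the naive benchmark.

    Returns (relative_error, verdict). A relative error below 1.0 means the
    model beat the naive forecast; above 1.0 means the expensive method
    lost to simply repeating the last observed actual.
    """
    naive_forecast = [last_history_value] + actuals[:-1]
    rel = mae(model_forecast, actuals) / mae(naive_forecast, actuals)
    return rel, rel < 1.0

# Invented example: a flat 100-unit forecast judged over four actual periods.
rel, ok = beats_naive([100, 100, 100, 100], [102, 98, 105, 101], 100)
```

This is the basic sanity check a project should pass before any benefit is claimed; a consulting firm that never reports this comparison is leaving out the one number that would settle the question.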

Who Are the AI Poll Contributors?

  1. Shaun Snapp: Shaun is the article's author, an experienced forecasting consultant, and the author of four books on forecasting.
  2. Ahmed Azmi: Ahmed has many years of experience in the AI/ML space.
  3. Steve Morlidge: Steve is a long-term forecasting consultant, an author of forecasting journal publications, and the author of several books on forecasting.
  4. Anonymous: The anonymous entry is someone from a software vendor with many years of industry forecasting experience and publications in the forecasting literature.


The Foresight Forecast Search Engine

Foresight is a top forecasting journal and our favorite for publishing and reading. Foresight combines academic with practical articles, and it provides an excellent search engine that allows anyone to see which articles apply to their interest or research area.

 

References

https://www.beckershospitalreview.com/artificial-intelligence/ibm-exec-says-data-related-challenges-are-biggest-reason-ai-projects-fall-through.html

https://www.statnews.com/2018/06/11/ibm-watson-health-problems-layoffs/

https://www.wraltechwire.com/2018/05/25/ugly-day-ibm-laying-off-workers-in-watson-health-group-including-triangle/

https://www.techrepublic.com/article/data-lakes-are-an-epic-fail-but-this-open-source-project-might-change-that/

https://www.dataversity.net/is-it-time-to-drain-the-data-lake/#

https://www.theguardian.com/technology/2018/jul/06/artificial-intelligence-ai-humans-bots-tech-companies

We have reached an AI bubble to the point where we have AI "fraud."

“It’s hard to build a service powered by artificial intelligence. So hard, in fact, that some startups have worked out it’s cheaper and easier to get humans to behave like robots than it is to get machines to behave like humans.

“Using a human to do the job lets you skip over a load of technical and business development challenges. It doesn’t scale, obviously, but it allows you to build something and skip the hard part early on,” said Gregory Koberger, CEO of ReadMe, who says he has come across a lot of “pseudo-AIs”.

“It’s essentially prototyping the AI with human beings,” he said.

In the case of the San Jose-based company Edison Software, artificial intelligence engineers went through the personal email messages of hundreds of users – with their identities redacted – to improve a “smart replies” feature. The company did not mention that humans would view users’ emails in its privacy policy.”

https://spectrum.ieee.org/biomedical/diagnostics/how-ibm-watson-overpromised-and-underdelivered-on-ai-health-care

“Outside of corporate headquarters, however, IBM has discovered that its powerful technology is no match for the messy reality of today’s health care system. And in trying to apply Watson to cancer treatment, one of medicine’s biggest challenges, IBM encountered a fundamental mismatch between the way machines learn and the way doctors work.

IBM’s bold attempt to revolutionize health care began in 2011. The day after Watson thoroughly defeated two human champions in the game of Jeopardy!, IBM announced a new career path for its AI quiz-show winner: It would become an AI doctor. IBM would take the breakthrough technology it showed off on television—mainly, the ability to understand natural language—and apply it to medicine. Watson’s first commercial offerings for health care would be available in 18 to 24 months, the company promised.

In fact, the projects that IBM announced that first day did not yield commercial products. In the eight years since, IBM has trumpeted many more high-profile efforts to develop AI-powered medical technology—many of which have fizzled, and a few of which have failed spectacularly. The company spent billions on acquisitions to bolster its internal efforts, but insiders say the acquired companies haven’t yet contributed much. And the products that have emerged from IBM’s Watson Health division are nothing like the brilliant AI doctor that was once envisioned: They’re more like AI assistants that can perform certain routine tasks.

But it also earned ill will and skepticism by boasting of Watson’s abilities. “They came in with marketing first, product second, and got everybody excited,” he says. “Then the rubber hit the road. This is an incredibly hard set of problems, and IBM, by being first out, has demonstrated that for everyone else.””

https://www.forbes.com/sites/jasonbloomberg/2017/07/02/is-ibm-watson-a-joke/#58e1cf23da20

“On the May 8th edition of Closing Bell on CNBC, venture capitalist Chamath Palihapitiya, founder and CEO of Social Capital, created quite a stir in enterprise artificial intelligence (AI) circles when he took on IBM Watson, Big Blue’s AI platform.

“Watson is a joke, just to be completely honest,” Palihapitiya said. “I think what IBM is excellent at is using their sales and marketing infrastructure to convince people who have asymmetrically less knowledge to pay for something.””

This independent critic was contradicted by an IBM partner.

“Not all bloggers sided with Palihapitiya, however. André M. König, Co-Founder at Opentopic (an IBM partner), added his two cents. “Well I agree that IBM is a formidable marketing machine, only to be outmatched by their corporate boldness and technological innovation,” König wrote. “If you call IBM Watson a joke you call the hundreds of companies and startups that have built on it a joke.””

The following addresses canceled Watson projects, a common feature of Watson.

“In February 2017, M.D. Anderson Cancer Center canceled a promising, but troubled contract with IBM for its Watson platform. “The breakup with M.D. Anderson seemed to show IBM choking on its own hype about Watson,” Freedman added. “The University of Texas, which runs M.D. Anderson, announced it had shuttered the project, leaving the medical center out $39 million in payments to IBM—for a project originally contracted at $2.4 million.

“After four years it had not produced a tool for use with patients that was ready to go beyond pilot tests.”

Moreover, despite significant progress, even state-of-the-art machine-learning algorithms often cannot deliver sufficient sensitivity, specificity, and precision (that is, positive predictive value) required for clinical decision making.”

Instead, IBM is ceding whatever AI leadership it purported to have to a new crop of far more innovative startups and other AI firms willing to reinvent themselves as the inexorable pace of innovation continues unabated – and that’s no joke.””

This is the standard response: any partner of a vendor defends that vendor.

https://www.forbes.com/sites/tiriasresearch/2019/02/12/ibm-drives-watson-ai-everywhere/#529d9acb7ecc

https://thenextweb.com/artificial-intelligence/2018/06/13/what-happens-when-the-ai-bubble-bursts/

https://en.wikipedia.org/wiki/AI_winter

https://www.wsj.com/articles/data-challenges-are-halting-ai-projects-ibm-executive-says-11559035800

Software Ratings: Demand Planning

Software Ratings

Brightwork Research & Analysis offers the following free demand planning software analysis and ratings.

The Problem with POS Data for Supply Chain Forecasting

Executive Summary

  • POS information is often proposed to be used to improve forecast accuracy.
  • The evidence does not support that using POS does what vendors and consultants say it does.

Introduction to Using POS Data for Forecasting

Using POS data is a frequently proposed way to improve forecast accuracy. Software vendors that do demand sensing use it as a primary strategy to, as they say, improve forecast accuracy.

In this article, we will evaluate some logical issues with utilizing POS data for forecasting. This issue deals profoundly with time, which is difficult to visualize. To account for this, the article has many graphics to explain which data are incorporated into a forecast and when they are incorporated.

The Sample Scenario

To begin, we lay out a straightforward scenario that is quite typical of forecasting and replenishment to a store.

In reality, companies have goods issue and goods receipt times. But that complication does not add anything to the example and just confuses things, so we will simply absorb goods issue and goods receipt into the lead times. The forecast lead time runs from when the order must be placed to when the item eventually arrives at the store.

A very important factor to address is where the stocking location is. Multiple stores place demand on a single warehouse; therefore, the warehouse is the stocking location.

Due to the law of large numbers, the warehouse “demand” is far easier to forecast than the store’s demand.
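This pooling effect is easy to demonstrate with a small simulation. The store count and the demand distribution below are arbitrary assumptions; the point is only that the relative variability (coefficient of variation) of pooled warehouse demand is lower than that of a single store's demand.

```python
import random
import statistics

random.seed(7)
N_STORES, N_DAYS = 20, 365

# Independent daily demand per store (arbitrary mean and spread).
store_demand = [[random.gauss(50, 20) for _ in range(N_DAYS)]
                for _ in range(N_STORES)]

# Warehouse demand is the sum across all stores for each day.
warehouse_demand = [sum(day) for day in zip(*store_demand)]

def cv(series):
    """Coefficient of variation: standard deviation relative to the mean."""
    return statistics.stdev(series) / statistics.mean(series)

store_cv = cv(store_demand[0])
warehouse_cv = cv(warehouse_demand)
# With independent stores, the pooled CV shrinks by roughly 1/sqrt(N_STORES).
```

With 20 independent stores, the warehouse CV comes out roughly four to five times smaller than a single store's, which is why the warehouse is the easier level at which to forecast.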

Many people assume that getting the most up-to-date information helps forecast accuracy. However, there is a lag between when the sales data is accumulated and when the order must be placed. If there were no lag, then forecasting would be unnecessary; one would merely send the quantity that had just sold.

Under the standard forecasting scenario, the POS data would eventually find its way into the sales history. But under demand sensing, the POS data from the last few days is used to influence a forecast for a time 28 days out into the future.

The critical point is not that POS data can be sent to a forecasting system.

It can.

The question is what does the forecasting system do with it? Is it used to adjust the forecast on Jan 2nd?

If it is used to "adjust" any forecast that has already been made, it is irrelevant for supply planning. Therefore it is not forecasting, or at least not forecasting that is intended to improve the supply planning process.

If this is done, it will alter the forecast history, making it perhaps more “accurate” but improving accuracy after it no longer matters.

An excellent way to obtain 100% "forecast" accuracy is to wait until all sales orders are complete and then overwrite all forecasts with actuals. This will produce perfect, and wholly falsified, forecast accuracy.
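This falsification is trivial to demonstrate. The figures below are invented, and MAPE is used only as a convenient example of an accuracy measure:

```python
def mape(forecast, actuals):
    """Mean absolute percentage error, in percent (lower reads as 'better')."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actuals)) / len(actuals) * 100

actuals = [120, 80, 100]
frozen_forecast = [100, 100, 100]   # the forecast locked in before the lead time
restated_forecast = list(actuals)   # "adjusted" after the actuals were known

honest_mape = mape(frozen_forecast, actuals)       # the error that drove supply decisions
falsified_mape = mape(restated_forecast, actuals)  # perfect, and meaningless
```

The restated forecast scores a perfect 0% error, yet it measured nothing: the supply decisions were made against the frozen forecast, whose roughly 14% error is the only number that matters.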

There is variation within a month.

If three days of sales are captured, how is it known whether that is a low, medium, or high period?

This means getting into day-by-day variability analysis. However, day-by-day analysis is not normally considered relevant for forecasting, as the planning bucket is either weekly or monthly.

But by instituting the use of POS data, the company now needs to answer this question. Furthermore, it is a question which is in many cases at least product-specific, if not location-specific.
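One way a company could answer the low/medium/high question is with a per-product day-of-week index. This is a generic sketch under invented data, not a description of any vendor's demand-sensing logic:

```python
from collections import defaultdict

def weekday_index(daily_sales):
    """daily_sales: (weekday, units) pairs, weekday 0=Mon .. 6=Sun.

    Returns each weekday's average demand as a ratio to the overall daily
    average, so a fresh POS reading can be judged low, medium, or high for
    the weekday it came from, rather than against a flat average.
    """
    by_day = defaultdict(list)
    for weekday, units in daily_sales:
        by_day[weekday].append(units)
    overall = sum(units for _, units in daily_sales) / len(daily_sales)
    return {d: (sum(v) / len(v)) / overall for d, v in by_day.items()}

# Invented history: weekend days sell roughly double the weekday rate.
history = [(0, 50), (1, 50), (2, 50), (3, 50), (4, 60), (5, 100), (6, 110)] * 4
index = weekday_index(history)
```

Here the Saturday index comes out well above 1, so a large Saturday POS reading is normal seasonality for this product rather than a surge worth re-forecasting. The catch, as noted above, is that such an index would have to be maintained per product, and possibly per location.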

After presenting this scenario to a colleague, they had the following counterpoint.

“The manufacturer uses demand sensing to get a better demand signal before the retailer or wholesaler places an order. Typical latency between POS and sales order to the manufacturer is between 2 and 4 weeks. So sensing buys the manufacturer this amount of time (quite a bit more than 3 days in your example). Much of the internal lead times at the manufacturer are less than this latency so this allows them to replace uncertain forecasts with certain demand signals. Many supply lead times for the manufacturer exceed this latency, so for those forecast accuracy comes into play. For those, the removal of bullwhip effect makes forecasting a lot easier. Rather than forecast large orders intermittently, they can forecast a large quantity of minute POS scans. Sure, if you go to daily/store/SKU granularity it is harder than weekly/warehouse/SKU. But sensing based on traditional deterministic algorithms can simply group to the level they work best.”

The answers to these questions would determine the effectiveness of using POS data for forecasting. But having worked with many customers, we know the answers to these questions, at least for most companies, even very large companies.

Supply chain organizations operate at a lower level of forecasting maturity than software vendors and consultants claim. Forecasting implementations often fail to meet expectations, but software vendors and consulting companies do not publish this information on their websites.

This raises questions about how POS data is actually being incorporated into forecasts once you pull back the cover on the happy, self-reported case studies.

One should be suspicious if the error measurement time horizon changes after a POS demand sensing project. 
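Why is a changed horizon suspicious? Because a shorter measurement lag mechanically reduces error even when nothing about the forecasting process has improved. The hypothetical sketch below scores the same naive forecast ("repeat the value from `lag` periods ago") on the same demand series at a four-week and a one-week horizon; the short lag wins with zero genuine modeling improvement.

```python
# Hypothetical sketch: a shortened error-measurement horizon flatters
# accuracy. The same naive forecast scores better at lag 1 than at lag 4
# on an autocorrelated series, with no real improvement in the process.

def naive_forecast_errors(series, lag):
    """Mean absolute % error of 'forecast = value observed `lag` periods ago'."""
    errs = [abs(series[t - lag] - series[t]) / series[t]
            for t in range(lag, len(series))]
    return sum(errs) / len(errs) * 100

# Slowly trending weekly demand (illustrative numbers).
demand = [100, 104, 103, 108, 112, 110, 115, 119, 118, 123, 127, 125]

print(round(naive_forecast_errors(demand, lag=4), 1))  # "before": 4-week horizon
print(round(naive_forecast_errors(demand, lag=1), 1))  # "after": 1-week horizon
```

If a demand sensing project reports its accuracy at a shorter lag than the baseline was measured at, the comparison is apples to oranges, and the improvement may be entirely an artifact of the changed horizon.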

Conclusion

Incorporating POS data into forecasts is fraught with issues, and can be used by consulting companies and software vendors to help forecasting departments make forecast accuracy look better than it actually is.


References

Sales Forecasting Book

Sales and Statistical Forecasting Combined: Mixing Approaches for Improved Forecast Accuracy

The Problems with Combining Forecasts

In most companies, the statistical and sales forecasts are poorly integrated; in fact, most companies do not know how to combine them. Strange questions are often asked, such as “does the final forecast match the sales forecast?”, without appropriate consideration of the accuracy of each input.

Effectively combining statistical and sales forecasting requires determining which inputs to the forecast have the most “right” to be represented, which comes down to those that most improve forecast accuracy.

Is Everyone Focused on Forecast Accuracy?

Statistical forecasts and sales forecasts come from different parts of the company, parts that have very different incentives. Forecast accuracy is not always at the top of the agenda for all parties involved in forecasting.

By reading this book you will:

  • See the common misunderstandings that undermine being able to combine these different forecast types.
  • Learn how to effectively measure the accuracy of the various inputs to the forecast.
  • Learn how the concept of Forecast Value Add plays into the method of combining the two forecast types.
  • Learn how to effectively run competitions between the best-fit statistical forecast, homegrown statistical models, the sales forecast, the consensus forecast, and how to find the winning approach per forecasted item.
  • Learn how CRM supports (or does not support) the sales forecasting process.
  • Learn the importance of the quality of the statistical forecast in improving the creation and use of the sales forecast.
  • Gain an understanding of both the business and the software perspective on how to combine statistical and sales forecasting.
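The competition idea in the list above can be sketched in a few lines. This is not the book's method, just a minimal illustration under assumed data: each candidate forecast (best-fit statistical, sales, consensus) is scored against actuals with MAPE, and the winner is kept per forecasted item.

```python
# Hypothetical sketch of a per-item forecast competition: score each
# candidate forecast against actuals and keep the winner for that item.
# All numbers are illustrative, not taken from the book.

def mape(forecast, actual):
    """Mean absolute percentage error as a fraction."""
    return sum(abs(f - a) / a for f, a in zip(forecast, actual)) / len(actual)

def run_competition(candidates, actual):
    """Return (winner_name, winner_mape) for one forecasted item."""
    scores = {name: mape(fc, actual) for name, fc in candidates.items()}
    winner = min(scores, key=scores.get)
    return winner, scores[winner]

actual = [100, 110, 95, 120]
candidates = {
    "best_fit_statistical": [105, 108, 90, 118],
    "sales_forecast":       [130, 140, 120, 150],
    "consensus":            [110, 115, 100, 125],
}

winner, score = run_competition(candidates, actual)
print(winner, round(score * 100, 1))
```

In practice the winner varies by item, which is the point of running the competition per forecasted item rather than declaring one input the winner across the whole portfolio.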

Chapters

  • Chapter 1: Introduction
  • Chapter 2: Where Demand Planning Fits within the Supply Chain Planning Footprint
  • Chapter 3: The Common Problems with Statistical Forecasting
  • Chapter 4: Introduction to Best Fit Forecasting
  • Chapter 5: Comparing Best Fit to Home Grown Statistical Forecasting Methods
  • Chapter 6: Sales Forecasting
  • Chapter 7: Sales Forecasting and CRM
  • Chapter 8: Conclusion