Artificial Intelligence: How good is good enough?
This blog was first published on the Utility Analytics Institute website.
Artificial intelligence is a great tool for specific purposes but it’s not a panacea for grid automation
Speak with any data scientist about artificial intelligence (AI) and you’ll soon hear one common gripe: that hype lifts expectations to unreasonable levels. The power industry is not immune to this. Too many futurists prognosticate about a future all-seeing, all-knowing, AI-driven, self-healing grid. The big question is, ‘how far into the future do we have to go before this becomes a reality?’ I don’t have a crystal ball, so it’s a tough question to answer with any degree of accuracy. However, I believe there’s a lot we can learn from the stalled progress of self-driving vehicles.
Pareto Principle in software development
Software engineers are all too familiar with the Pareto Principle, which states that roughly 80% of results come from 20% of inputs. In an over-cited, unscientific, and poorly understood study, the Standish Group found that only 20% of software features are always or often used, while 64% are rarely or never used. While the study may be flawed, its results have crept into received wisdom; this may be because, despite its failings, its conclusions ring true.
This 80-20 rule of thumb applies nicely to an AI project. AI can be really good at performing the 20% of functions that are most common, which automates 80% of a process. In essence, rather than the stuff of sci-fi (the classic, all-knowing computer), AI is good at performing a smaller set of well-defined tasks. It is far less proficient at the other 80% of functions: the rare cases that account for the remaining 20% of the process.
AI—like all other software—is good at the common stuff, and is only ‘good enough’
If we plot the frequency of every sub-process in a complex task where the Pareto Principle applies, we get an inverse S-curve. Tackling the most common tasks automates 80% of the process. But once the lowest-hanging fruit has been picked, we run into a long tail. Issues become progressively harder to identify, define, and resolve; they may need more esoteric or hard-to-get data inputs, new maths, and a great deal of work. The further into the long tail we go, the harder it is to justify the return on investment. When the results do not justify the investment, long-tail processes are ignored.
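The shape of that curve is easy to see with a toy model. The sketch below (all numbers are illustrative assumptions, not data from any real process) gives 100 sub-tasks a Zipf-like power-law frequency distribution and shows how much of the total workload is covered as we automate the most common tasks first:

```python
# Toy model: 100 distinct sub-tasks whose frequencies follow a
# Zipf-like power law, a stand-in for the long tail described above.
# The exponent 1.2 is an illustrative assumption.
n_tasks = 100
weights = [1 / (rank ** 1.2) for rank in range(1, n_tasks + 1)]
total = sum(weights)

# Cumulative share of all occurrences covered when we automate the
# most common tasks first.
covered = 0.0
for handled, w in enumerate(weights, start=1):
    covered += w / total
    if handled in (10, 20, 50, 100):
        print(f"top {handled:3d} task types automated -> "
              f"{covered:.0%} of occurrences")
```

Under these assumptions, automating the top 20 of 100 task types covers on the order of 80% of all occurrences, while each further step into the tail buys less and less coverage, which is exactly the diminishing-returns argument made above.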
Whether this is a problem or not depends on the task you’re trying to automate. For example, it’s certainly not an issue for online chatbots, used by many companies to alleviate pressure on the contact centre. Here, the bar for ‘good enough’ is set to ‘not very good’. Most chatbots are designed to answer the most common customer requests; when asked a trickier question, they fall flat and pass the request on to a human. Yes, I am speaking from my own frustration at this so-called online ‘help’ feature!
The Pareto Principle explains why chatbots are so dumb. Each step into the long tail gets increasingly expensive to resolve. While most companies are happy to invest $10,000 in a bot that removes a chunk of basic call centre traffic, most are loath to spend $1M on state-of-the-art AI that can deal with many more requests. The chatbot is good enough and does not have to get better.
Vehicle automation: how good does ‘good enough’ have to be?
How good does vehicle automation have to be to be ‘good enough’? Let’s rephrase that question: how far into the long tail do processes have to be automated for AI to be ‘good enough’? Expressed that way, the answer becomes clear: a lot further than for an online chatbot.
When it comes to any critical infrastructure automation, we must be a lot closer to perfection in order to be ‘good enough’. When a chatbot can’t deal with a complex request, the worst that happens is the customer is referred to a human agent. When an autonomous vehicle can’t cope with a situation, people may die.
The transition to fully autonomous vehicles is often described as a shift across six levels, from L0 (no automation) through a series of increasingly sophisticated stages to L5 (full autonomy). We can also think of this as a sequence of progressive steps that AI must take into the long tail of issues. Current vehicles possess L2 autonomy, or maybe L3 at a push.
The problem the industry faces is that progress from one level to the next is not linear. If the long tail of vehicle autonomy follows that inverse S-curve, each step up requires addressing an exponentially larger number of rare issues. This may well explain why some companies are exiting the space.
Lessons to learn for grid automation
There are many use cases for advanced analytics in the energy industry, but tentative steps are required. Machine learning (ML) has already proven itself in slashing operation & maintenance (O&M) costs, but it’s highly unlikely that ML will ever perfect the art of detecting faults and providing the best course of action. Over time it will improve and eat away at the long tail of problems (the learning bit of machine learning will automate this). But when we talk about a fully automated grid, balancing in real time the profound complexities of the biggest machine ever built, we really are in the realm of science fiction.
The costs of automating the grid’s long tail will likely exceed the benefits of doing so, and I suspect that will remain the case for decades to come. While new techniques such as deep learning will accelerate the time taken to address the long tail, the industry needs to remain circumspect about AI’s capabilities. Yes, it’s a great tool for specific purposes but it’s not a panacea.