Facts take a back seat when well-known people make misinformed pronouncements about “existential threats”
Henry Kissinger published an article in the June 2018 issue of The Atlantic detailing his belief that artificial intelligence (AI) threatens to become a problem for humanity, probably an existential one.
He joins Elon Musk, Bill Gates, Stephen Hawking and others who have publicly warned about the dangers of AI. The difference is that, unlike those scientists and technologists, the former secretary of state speaks with great authority to a wider audience that includes policy makers and political leaders, and so could have a much greater influence.
And that’s not a good thing. There’s a widespread lack of precision in how we describe AI, and it is giving rise to significant apprehension about its use in self-driving cars, automated farms, drone aircraft and many other areas where it could be extremely useful. In particular, Kissinger commits the same error many people do when talking about AI: the conflation error. In this case the error arises when the success of AI programs in defeating humans at games such as chess and go is conflated with similar successes that might be achieved with AI programs used in supply chain management, claims adjustment or other, more futuristic areas.
But the two situations are very different. The rules of games like chess and go are prescriptive, somewhat complicated and never change. They are, in the context of AI, “well bounded.” A book teaching chess or go written 100 years ago is still relevant today. Training an AI to play one of these games takes advantage of this boundedness in a variety of interesting ways, including letting the AI work out for itself how it will play, for example by playing many games against itself.
Now, however, imagine the rules of chess could change randomly at any time in any location: chess played on Tuesdays in Chicago has one set of rules, but in Moscow there is a different set of rules on Thursdays. Chess players in Mexico use a completely different board, one for each month of the year. In Sweden the role of each piece can be decided by a player even after the game starts. In a situation like this it’s obviously impossible to write down a single set of rules that everyone can follow at all times in all locations. This is an example of an unbounded problem.
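To make the contrast concrete, here is a minimal, purely illustrative Python sketch; the rule names, cities and schedules are invented for this article, not drawn from any real system. A bounded rule can be written once as a complete function that is always answerable; an unbounded rule set is at best a partial lookup that can fail whenever a combination no one has written down yet comes along.

```python
# Bounded: the legal moves of a chess piece can be written down once and never change.
# This toy rook check is complete today and will still be complete in 100 years.
def rook_move_is_legal(start, end):
    """A rook moves any distance along a single rank or file."""
    same_rank = start[0] == end[0]
    same_file = start[1] == end[1]
    return (same_rank or same_file) and start != end

# Unbounded: imagine the rules depend on where and when you play.
# No single table can ever be complete, because new (city, weekday) variants
# may appear at any time -- this lookup is necessarily partial.
LOCAL_RULES = {
    ("Chicago", "Tuesday"): rook_move_is_legal,
    # ("Moscow", "Thursday"): some_other_rule,  # rules we have not seen yet
}

def move_is_legal(city, day, start, end):
    rule = LOCAL_RULES.get((city, day))
    if rule is None:
        raise LookupError(f"No known rules for chess in {city} on {day}s")
    return rule(start, end)

print(rook_move_is_legal((0, 0), (0, 5)))                    # True: bounded, always answerable
print(move_is_legal("Chicago", "Tuesday", (0, 0), (0, 5)))   # True: we happen to know this rule
# move_is_legal("Moscow", "Thursday", ...) raises: the rule set is open-ended
```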
AI is today being applied to business systems such as claims processing and supply chains that, by their very nature, are unbounded. It is impossible to write down all the rules an AI has to follow when adjudicating an insurance claim or managing a supply chain, even for something as simple as bubblegum. The only way to train an AI to manage one of these is to feed it massive amounts of data on all the myriad processes and companies that make up an insurance claim or a simple supply chain. We then hope the AI can do the job, not just efficiently but also ethically.
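Here is a hedged sketch of what “feed it data and hope” looks like in practice, using made-up claims data and invented feature names rather than any real insurer’s pipeline. The model can be scored on how well it reproduces past decisions, but nothing in that score speaks to whether those decisions were ethical, or to what happens when the rules change after training.

```python
# A minimal sketch of training on historical data, assuming a purely hypothetical
# claims dataset; feature meanings are stand-ins, not a real insurer's schema.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Each row is a toy stand-in for a past claim (e.g. amount, policy age, prior claims).
X = rng.normal(size=(5000, 3))
# Pretend label: was the claim approved?
y = (X @ np.array([1.5, -0.7, 2.0]) + rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# We can measure how well it reproduces past decisions...
print("accuracy on held-out claims:", model.score(X_test, y_test))
# ...but nothing in this number tells us whether those decisions were ethical,
# or how the model will behave once regulations or markets change the rules.
```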
But it’s impossible to know in advance that it will be able to do so; in fact, it can take a year or more to ascertain this. In addition, new regulations, variations in market demand and new technologies ensure that the rules associated with business systems are continually changing, keeping these systems unbounded.
Clearly, then, it’s not necessarily true that the successes of AI on bounded systems can be generalized to unbounded ones. We are just learning how to incorporate AI models into parts of unbounded systems. We have managed to create AI loan systems that are biased against certain segments of our population; at the same time we have devised self-driving cars that are far better than most human drivers. We are only beginning to understand how AIs applied to unbounded systems make their decisions, because it is very difficult to interrogate an AI and ask why a particular decision was made. The recent fatal self-driving car accident in Tempe, Ariz., demonstrates this vividly. The authorities had to rely on many sources of information, including the car itself, to determine what actually happened. But whereas the car provided telemetry about its operations, it was incapable of explaining why it struck the pedestrian.
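A small, self-contained toy (with invented data and labels) illustrates that opacity: a trained model readily returns a decision and its internal numbers, the machine-learning equivalent of telemetry, but offers no built-in way to ask why.

```python
# Illustration only: a trained model will happily return a decision,
# but there is no built-in way to ask it "why". Data and labels are invented.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 4))        # toy sensor readings (speed, distance, light, braking)
y = (X.sum(axis=1) > 0).astype(int)   # pretend label: "safe to proceed?"

model = RandomForestClassifier(random_state=1).fit(X, y)
situation = rng.normal(size=(1, 4))

print("decision:", model.predict(situation))          # a yes/no answer
print("confidence:", model.predict_proba(situation))  # a probability, still not a reason
# The model reports what it decided and its internal numbers (telemetry),
# but not a human-readable account of why -- exactly the gap investigators hit.
```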
This imprecision in how we describe AI is making people apprehensive about its use in self-driving cars, automated farms, drone aircraft and many other areas that would benefit substantially from it. Recent surveys show most people are not willing to get into a self-driving car; they are afraid the AI will put them in greater danger than a human driver would. This concern can in part be traced directly back to The Terminator and other science fiction movies that position AIs as evil and endow them with unrealistic capabilities. It can also be traced back to the way companies are trying to field self-driving cars without first demonstrating the benefits of these vehicles.
And perhaps most significantly, it can be traced back to well-known and well-meaning people making pronouncements about existential threats when, in fact, none exist and there is no proof yet that they can exist.