The Control Problem

From the sixth century to February 9th, 1996, we were the undisputed masters of chess. If you wanted to see the best chess matches, they were played solely between humans. The next day, on February 10th, 1996, the reigning world champion, Garry Kasparov, lost a game to a computer named Deep Blue. Kasparov went on to win the match 4–2, but lost the following year to its next iteration (humorously nicknamed Deeper Blue). On February 10th, humanity’s top billing as chess players was called into question. Just 15 months later, computers rose to the top of the totem pole, and they show no signs of relinquishing that throne. Humans still play each other in chess tournaments, but their playstyles are heavily influenced by computer analysis, so much so that deviations from computed predictions are closely scrutinized, and often found to be blunders. The student has become the teacher, or perhaps more accurately, the calculator has become the mathematician.

Despite what you might think, Deep Blue and its chess-playing kin are actually a form of artificial intelligence. While AIs in popular culture often take the form of murderous robots or godlike beings, they are currently understood to come in three different flavors: weak, strong, and super. Chess programs are a type of weak AI: programs that may outperform humans at a single task, but fail at accomplishing the wide range of tasks we are capable of. They are specialized, not general purpose. Think spam filters, virtual assistants, and that smart thermostat you might already have in your home. If you look at the rate at which consumer-ready weak AI is becoming available, it becomes clear that we’re at the start of an AI boom.

And only a technophobe would be scared of an AI boom, right? Nobody likes spam, Siri is almost laughably bad,[1] and intelligently adjusting temperature sounds utterly benign. With these examples as a backdrop, claiming that Deep Blue and its ilk will spell our demise probably sounds like fear mongering. But as others have pointed out, the economic problems associated with automation will emerge long before all human labor is made irrelevant. Computers don’t have to replace humans in all industries to cause serious damage; displacing enough of us in a few is plenty. This can be seen most clearly with self-driving cars. No longer confined to the realm of thought experiments and science fiction, autonomous vehicles are poised to swallow a significant number of transportation jobs. This does not bode well for the nearly 4.4 million people currently employed by the transportation industry.[2]

That’s certainly ominous, but predictable; technology almost always eliminates at least some jobs. And if you’re like me, then predictable = boring. Let’s get theoretical and dive into strong AI. Simply put, strong AI is defined as a program that can operate indistinguishably from a human at a wide range of tasks.[3] The key distinction between weak and strong AI is versatility. Your self-driving car may be as good as a human driver, but you can’t just stick a driving AI into your phone and use it as a voice assistant. While weak AI would gradually pick off small subsets of jobs, strong AI would rapidly eliminate entire industries.

No strong AI currently exists, and most advances in the field have been incremental upgrades to weak AI. However, two different approaches show promise. The most straightforward would be whole brain emulation (WBE). This involves comprehensively scanning a biological brain and then closely modeling it in software. Despite being barefaced plagiarism, WBE has one key advantage: it requires no understanding of human cognition or artificial intelligence. Because no fundamental breakthrough or proof of concept has to be achieved, the obstacles WBE faces are more practical in nature. Three different technical feats must be conquered: high-throughput scanning, automated image analysis, and implementation-ready hardware. In simpler terms, we have to be able to scan the brain in enough detail to recreate it, process all the data generated by that scan, and use that data to run a simulation on a computer that won’t immediately burst into flames.
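To make that last feat a little less abstract, here is a deliberately toy sketch of what “running” a scanned brain might look like: treat the recovered connectome as a weighted graph and step it forward as a network of threshold neurons. Every name and number here is an assumption for illustration only; the random weights stand in for real imaging data, and the neuron model is far cruder than anything an actual emulation would use.

```python
import random

# Toy stand-in for the final WBE step. Feats 1 and 2 (scanning and image
# analysis) are faked: we pretend the pipeline handed us connection weights
# between every pair of neurons. All numbers are illustrative assumptions.

NUM_NEURONS = 16
THRESHOLD = 0.25
STEPS = 10

random.seed(0)

# "Scanned" connectome: weights[j][i] is the influence of neuron j on neuron i.
weights = [[random.uniform(-0.5, 0.5) for _ in range(NUM_NEURONS)]
           for _ in range(NUM_NEURONS)]

# Feat 3: run the recovered network forward on ordinary hardware.
state = [random.choice([0.0, 1.0]) for _ in range(NUM_NEURONS)]
for step in range(STEPS):
    # Each neuron sums its weighted inputs and fires if it crosses threshold.
    inputs = [sum(weights[j][i] * state[j] for j in range(NUM_NEURONS))
              for i in range(NUM_NEURONS)]
    state = [1.0 if x > THRESHOLD else 0.0 for x in inputs]
    print(f"step {step}: {sum(state):.0f} neurons firing")
```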

Such a brute force approach has much to recommend it, but a more efficient approach also exists. We know that evolution, while slow and blind, can create intelligence at least as smart as humans because, well, here we are. If a programmer could harness and guide evolutionary processes, as opposed to relying on random chance, she should be able to create intelligent life far more quickly than evolution created us. After all, it took evolution a few billion years to stumble onto heavier-than-air flight, but it took humans only around a few hundred thousand years. Just as knowing the correct answer to a math problem can often help you figure out the correct derivation, our knowledge of both evolution and the human brain can dramatically increase the efficiency of the evolutionary approach, also known as seeding or bootstrapping; a toy sketch of the underlying loop follows below. These are not the only ways strong AI could come about, but they are the most grounded. Consumer-level AI has already been a major recipient of R&D dollars, and industry heavyweights such as Google have shifted their focus towards developing intelligent assistants powered by machine learning. In this context, major breakthroughs in the field of AI seem all but assured.
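The loop itself is decades old: keep a population of candidates, score them, and let the better ones reproduce with variation. Here is a minimal sketch using the classic OneMax problem as a stand-in for fitness; a real seed AI would evolve programs against far richer measures of capability, so treat every parameter and function here as an illustrative assumption, not a recipe.

```python
import random

# Minimal genetic algorithm. "Intelligence" is faked as the number of 1-bits
# in a genome (OneMax); the point is the variation-plus-selection loop.

GENOME_LENGTH = 64
POPULATION_SIZE = 100
MUTATION_RATE = 0.01
GENERATIONS = 200

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_LENGTH)]

def fitness(genome):
    return sum(genome)  # toy objective: count the 1-bits

def select(population):
    # Tournament selection: the "guiding hand" that replaces blind chance.
    a, b = random.sample(population, 2)
    return a if fitness(a) >= fitness(b) else b

def crossover(parent_a, parent_b):
    point = random.randrange(1, GENOME_LENGTH)
    return parent_a[:point] + parent_b[point:]

def mutate(genome):
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in genome]

def evolve():
    population = [random_genome() for _ in range(POPULATION_SIZE)]
    for generation in range(GENERATIONS):
        population = [mutate(crossover(select(population), select(population)))
                      for _ in range(POPULATION_SIZE)]
        best = max(population, key=fitness)
        if fitness(best) == GENOME_LENGTH:
            return generation, best
    return GENERATIONS, max(population, key=fitness)

if __name__ == "__main__":
    gens, best = evolve()
    print(f"reached fitness {fitness(best)}/{GENOME_LENGTH} in {gens} generations")
```

Seeding and bootstrapping amount to stacking the deck in that loop: better starting populations, better fitness measures, and better variation operators, all informed by what we already know about brains.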

Economic collapse is definitely something to worry about, but it ranks pretty low on the “sexiest ways to destroy a civilization” list.[citation needed] Let’s talk about something that, to borrow a cliché, will destroy life as we know it. Super AI as a category is ill-defined; it describes all AIs that “possess intelligence far surpassing that of the brightest and most gifted human minds.”[4] A calculus teacher once told me, in order to squash any difficult questions, that infinity wasn’t really a number, but more like a direction. While this may be a gross mathematical simplification, it applies wonderfully to super AI. The term super AI points in the direction of ever-increasing intelligence.

This open-ended property arises partially from the way its little brother, strong AI, will likely be developed. Both WBE and seeding have straightforward paths from strong AI to super AI. With WBE, you simply have to increase the computational resources available to it and increase the rate of simulation. A program that operates at 10x the speed of a human with vastly more memory would easily outperform our brightest minds. While a seed AI could also be “overclocked” in the same manner, it could also simply continue to evolve and improve in ways that are impossible to predict. This unpredictability strikes at the heart of what makes super AI so dangerous.

“Whenever I get stuck, I just ask myself, ‘What would someone smarter than me do?’” Just as we can’t imagine how people smarter than us would act, it is futile to predict the specific details of how super AI will affect our lives, our civilization, our species. We humans, with only a relatively small increase in intelligence over other animals, have managed to become the undisputed masters of an entire planet. The other contenders, whether they be chimpanzees, dolphins, or octopuses,[5] are no longer in control of their destinies. This is clearly a winner-take-all competition. And due to their open-ended nature, super AIs are practically guaranteed to be vastly smarter than even our best and brightest. It is perhaps for this reason that our best and brightest have already been sounding the alarm, albeit with mixed results. It is clear: we must find a way to ensure that we do not go the way of the dolphin or the chimpanzee. We need a solution to the Control Problem.


  1. At least at the time of writing.

  2. Bureau of Labor Statistics.

  3. Intelligence.org.

  4. Wikipedia.org.

  5. Merriam-Webster.