When arguably the greatest chess player of all time, Garry Kasparov, was beaten by Deep Blue in 1997, some took it to mean that human intelligence had become irrelevant. For instance, Newsweek ran a cover story about the match with the headline “The Brain’s Last Stand.” However, the chess-related conflict between human and computer cognition turned out to be somewhat more convoluted than that.
In the wake of the match, Kasparov came up with a concept he called “Advanced Chess,” wherein computer engines would serve as assistants to human players—or the other way around, depending on your perspective. Kasparov’s idea was that humans could add their creativity and long-term strategic vision to the raw power of a computer munching through plausible-seeming variations. He thought that, perhaps, in long games, such cyborg teams could beat computers, complicating the idea that human intelligence had simply become obsolete.
He was right. Highly skilled cyborg players turned out to be stronger than computers alone. Most famously, in 2005, a cyborg team won a so-called “freestyle” tournament, one in which entrants could consist of any number of humans and/or computers. Even more surprisingly, the tournament was won by a pair of relative amateurs, Steven Cramton and Zackary Stephen, both far, far below master strength. They came out on top of the powerful program Hydra, as well as esteemed grandmasters like Vladimir Dobrov. The secret to their success seemed to be that they were the best operators: they had figured out the ideal way to enhance the chess engines’ intelligence with their own.
In other words, for the human half of a cyborg team, being a supremely good chess player wasn’t as important as knowing how to steer computer intelligence. Steering the AI was itself a relevant skill, and the most important one. Cramton and Stephen ran five different computer programs at once, both chess engines and databases that could compare the position on the board against historical games. Using this method, they could mimic the past performances of exceptional human players, play any moves that all the engines agreed upon, and more skeptically examine positions where the engines disagreed about the right way to proceed. Occasionally, they would even throw in a slightly subpar but offbeat move that one of the programs suggested, in order to psychologically disturb their opponents.
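The core of their consultation method, play unanimous moves and scrutinize splits, can be sketched in a few lines. This is a minimal illustration of the idea, not their actual software; the engine names and moves below are hypothetical stand-ins.

```python
# A sketch of the consensus procedure described above: when every
# engine suggests the same move, play it; when they disagree, hand
# the position back to the human operators for skeptical review.
# (Engine names and moves are hypothetical, for illustration only.)

def consult(engine_suggestions):
    """Decide how to proceed given each engine's suggested move.

    Returns ("play", move) when all engines agree, or
    ("examine", suggestions) when they split and the humans
    should weigh the alternatives themselves.
    """
    moves = set(engine_suggestions.values())
    if len(moves) == 1:
        return ("play", moves.pop())
    return ("examine", engine_suggestions)

# Unanimous: the cyborg team plays the move without further thought.
print(consult({"engine_a": "Nf3", "engine_b": "Nf3", "engine_c": "Nf3"}))
# Split opinion: the humans step in and examine the position.
print(consult({"engine_a": "Nf3", "engine_b": "d4", "engine_c": "Nf3"}))
```

The point of the sketch is that the human contribution lives entirely in the "examine" branch: the machines filter the easy decisions, and the operators spend their judgment only where the machines diverge.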
This is kind of a beautiful picture of computer-human interaction, in which humans use computers to accomplish cognitive tasks in much the same way that they use cars to accomplish transportation. However, there’s a strong possibility that this rosy picture won’t last for long. It’s possible that, eventually, chess engines will get strong enough that humans can’t possibly add anything to their strength, such that even strong operators like Cramton and Stephen would, if they tried to provide guidance, only detract from the computer’s expertise. In fact, this may have happened already.
As recently as May of 2017, Kasparov was still maintaining that he believed cyborg players were stronger than engines alone. However, that was before Google’s AlphaZero chess engine, in December of 2017, decisively defeated a version of Stockfish, one of the world’s best chess programs. AlphaZero, which was grown out of a machine learning algorithm that played chess against itself 19.6 million times, won 28 of the match’s 100 games, drew 72, and lost not one.
What was more notable even than AlphaZero’s supremacy was its style. AlphaZero played what looked like an alien, intuitive brand of chess: it sacrificed pieces for hard-to-see advantages, and navigated into awkward, blocked-up positions that would’ve been shunned by other engines. Danish grandmaster Peter Heine Nielsen, upon reviewing the games, said, “I always wondered how it would be if a superior species landed on earth and showed us how they play chess. I feel now I know.” If there’s any computer that’s exceeded the capacity of cyborg players, it’s probably AlphaZero.
This progression—from the emergence of strong AI, to the supremacy of cyborgs, to the more complete supremacy of even stronger AI—could pop up in other fields as well. Imagine, for example, a program, Psychiatron, which could diagnose a patient’s mood disorder based on a momentary scan of their face and voice, searching for telltale signs drawn from muscular flexion and vocal intonation. That program would make psychiatrists irrelevant to the diagnostic process.
However, you might still need a psychiatrist to explain Psychiatron’s diagnosis to the patient, and provide that patient with a holistic treatment that would best address the many factors behind their disease. Psychiatron would simply enable psychiatrists to be better. Eventually, though, that cyborg team might be superseded by an even stronger Psychiatron, which could instantly dispense the right series of loving words upon making a diagnosis, as well as a carefully coordinated package of medications and an appropriate exercise plan, all through machine learning techniques that would be completely opaque to any human operator.
This is a version of the future that’s either utopian or nightmarish depending on your perspective—one where we are, as Richard Brautigan wrote, “all watched over by machines of loving grace,” who, like parents, guide us through a world that we’ll never fully understand.