On May 11, 1997, Deep Blue, a chess-playing computer built by IBM, beat Garry Kasparov, the reigning world champion and arguably one of the greatest chess players of all time. It was the first time a computer had defeated a reigning world champion in a match under standard tournament conditions. That moment was a watershed in the relationship between man and machine, one that opened up tremendous opportunities for collaboration between the two.
Twenty years after the match, Kasparov wrote the following in his book, Deep Thinking: Where Machine Intelligence Ends and Human Creativity Begins:
“The scrum of photographers around the table doesn’t annoy a computer. There is no looking into your opponent’s eyes to read his mood, or seeing if his hand hesitates a little above the clock, indicating a lack of confidence in his choice. As a believer in chess as a form of psychological, not just intellectual, warfare, playing against something with no psyche was troubling from the start.”
Machines, as it turns out, do not tire, nor is their performance affected by external factors. They simply hum along, doing their jobs uninterrupted and unperturbed.
Fortunately, Kasparov treated this moment, and the insights he gleaned from his defeat at the hands of Deep Blue, as an opportunity to collaborate with machines. Kasparov believed that humans and machines can complement each other: each brings unique capabilities that, when combined, can yield performance far exceeding what either can achieve individually. This insight gave rise to centaur chess, also known as advanced chess, in which teams of humans and machines compete against each other. These hybrid teams combine the best of both worlds: the computational power of machines with the strategic insight and creativity of humans.
Image source: Scott Eaton
Years later, Kasparov had the following to say about the main lessons of centaur chess. The excerpt below is from Kasparov's conversation with Tyler Cowen. Bold emphasis mine.
COWEN: You’ve been a pioneer in what’s sometimes called advanced chess, freestyle chess, or centaur chess, where you pair a human being with a computer or a set of programs. Today, 2017, do you still think it’s the case that a human paired with a set of programs is better than playing against just the single strongest computer program in chess?
KASPAROV: There’s no doubt about it.
COWEN: The human will make some mistakes, so the human will ask Stockfish, Komodo, Rybka, “What’s the best move?” Collate the different outputs, make some kind of judgment, explore some lines more deeply. Put that against Rybka Cluster. Is Rybka Cluster really going to lose many games?
KASPAROV: I think so. Again, it depends on the qualification of the operator.
COWEN: Sure, if it’s the best operator in the world, whoever that may be. Maybe yourself, maybe Anson Williams.
KASPAROV: By the way, I exclude myself from this category because I’m not a very good operator. I’m a very good chess player. A great operator does not have to be necessarily a very strong player.
COWEN: What makes for a great operator?
KASPAROV: Someone who can work out the most effective combination, bringing together human and machine skills. I reached the formulation that a weak human player plus machine plus a better process is superior, not only to a very powerful machine, but most remarkably, to a strong human player plus machine plus an inferior process.
At the end of the day, it’s about interface. Creating an interface that will help us to coach machines towards more useful intelligence will be the right step forward. I’m a great believer that, if we put together a good operator — still a decent chess player, not necessarily a very strong chess player — running two, three machines and finding the best way to translate this knowledge into quality moves against Rybka Cluster, I would probably bet on the human plus machine.
Notice how he excludes himself from the set of great operators: he's a great chess player, but not a great operator. This raises the question: could a centaur team composed of modest chess players beat a centaur team of chess grandmasters?
In 2005, a team called ZackS won a freestyle tournament - freestyle being another name for advanced chess - by beating an opposing team that included Vladimir Dobrov, a grandmaster, his highly rated teammate, and their computer programs. ZackS was composed of two twenty-something guys from New Hampshire named Zackary Stephen and Steven Cramton. Stephen has a master's degree in statistics and spent his days as a database administrator. Cramton was a soccer coach in the fall and ran a snowboarding program in the winter. They used four chess engines in all but relied primarily on two of them. Stephen and Cramton are not great chess players: Stephen's rating was 1,381 and Cramton's 1,685. If Cramton, the higher-rated player, were to go head-to-head with Dobrov, the grandmaster, Dobrov would have a 99% chance of beating him.
It also turns out that centaur teams composed of good players and great operators are better than machines playing alone. Tyler Cowen, a professor of economics at George Mason University, studied centaur chess and dedicated a chapter to it in his book Average is Over. His research concludes that centaur players hold an advantage of roughly 100-150 rating points over the best programs, which suggests they would be expected to win about 67% of the time.
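Those percentages follow from the standard Elo expected-score formula. Here is a minimal sketch of the arithmetic; note that the ~2500 grandmaster rating used for the Dobrov comparison above is my assumption for illustration, not a figure from the article.

```python
# Elo expected score: the probability-weighted score player A is
# expected to achieve against player B (draws count as half a point).
def expected_score(r_a: float, r_b: float) -> float:
    return 1.0 / (1.0 + 10.0 ** ((r_b - r_a) / 400.0))

# A 100-150 point advantage maps to roughly a 64-70% expected score,
# consistent with the ~67% figure above.
print(f"{expected_score(2850, 2750):.2f}")  # 0.64
print(f"{expected_score(2900, 2750):.2f}")  # 0.70

# Cramton (1685) vs. a grandmaster rated ~2500 (assumed for
# illustration): the grandmaster's expected score is ~0.99,
# matching the "99% chance" above.
print(f"{expected_score(2500, 1685):.2f}")  # 0.99
```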
He also offers the following two conclusions from centaur chess, which I believe will have a significant impact on the future of work, in particular on how humans and machines will work together in a centaur model. Tyler's insights were:
Human-computer teams are the best teams.
The person working the smart machine doesn’t have to be an expert in the task at hand.
The second one is pretty astounding: you do not have to be an expert in your domain to achieve expert-level performance, especially if you complement your skills with those of a machine.
Centaurs in action
We've seen how centaurs can deliver superior performance in games like chess, but can we expect similar outcomes in the "real" world? Put differently, can centaurs deliver exceptional performance outside the domain of games? I believe the answer is an emphatic yes.
We have several examples from everyday life showing how the combination of human and machine can deliver superior performance - one that exceeds the limits of each independently. If you've been on an airplane, you've experienced one of the oldest and most common human-machine interactions: autopilot. We're also seeing this model transition into our everyday lives through cars. Many of today's cars offer a spectrum of machine-enabled (A.I.) services, ranging from driver-assistance tools like self-parking and blind-spot and lane-change warnings all the way to autonomous vehicles that can drive themselves. One observation to note: in both the car and the airplane, a driver or pilot is still very much present. The machine works with the human and doesn't replace her.
One area that I am tremendously excited about for centaur-like collaboration is healthcare. Two areas within healthcare are particularly ripe for this sort of collaboration. These are monitoring, or what I call healthcare at the edge, and imaging.
Full disclosure, I lead product development efforts at an A.I. company in the healthcare imaging space.
Healthcare at the edge
I've suffered from hypertension since I was a teenager. For most of my adult life, the only tool at my disposal to monitor and manage it has been the doctor's visit. Over the past decade, consumer-grade blood pressure monitors have become affordable and accurate enough that I can take these measurements at home. Using my monitor, I can now track my blood pressure at a more regular interval than the 2-3 times per year cadence of the doctor's visit. Yet even with these devices, the experience is only marginally better: instead of taking readings 2-3x per year, I now take them about once per month, mostly because of the "hassle" of putting on the monitor and taking the reading. What if there were wearables that could continuously read and monitor my blood pressure? Enter Apple Watch and similar devices.
In case you haven't noticed, Apple Watch isn't just a watch. It's your own personal health monitor, able to track an incredibly wide variety of health conditions. Your Apple Watch can monitor your heart rate, detecting abnormalities in both rate and rhythm; the latter can be used to detect atrial fibrillation (AFib). Your "watch" can display continuous glucose readings (via a third-party app and sensor from Dexcom). It can detect if you fall. It can take ECG readings. It also has a blood oxygen sensor and can log your blood pressure measurements (via an app). And of course it can track your sleep patterns, activity, and calories burned. Your Apple Watch takes these readings unaided, with no effort on your part beyond wearing it. The data that the watch and the apps running on it collect can be mined by machines to detect and warn of abnormalities. Instantaneously. It is akin to having your own personal nurse measuring and monitoring your vitals 24/7.
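To make that concrete, here is a minimal sketch of the kind of anomaly detection a machine could run over a wearable's heart-rate stream. The rolling-baseline approach, window size, and threshold are illustrative assumptions on my part, not anything Apple has disclosed.

```python
from collections import deque
from statistics import mean, stdev

def detect_anomalies(readings, window=60, z_threshold=3.0):
    """Yield (index, bpm) for readings far outside the wearer's own
    recent baseline (a simple rolling z-score; parameters are
    illustrative, not clinically validated)."""
    history = deque(maxlen=window)
    for i, bpm in enumerate(readings):
        if len(history) >= 10:  # wait for a minimal baseline first
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(bpm - mu) / sigma > z_threshold:
                yield i, bpm
        history.append(bpm)

# Example: a resting heart-rate stream with one sudden spike.
stream = [62, 64, 63, 61, 65, 63, 62, 64, 63, 62, 61, 140, 63, 62]
print(list(detect_anomalies(stream)))  # [(11, 140)]
```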
That's a message Apple reinforces in its advertising for Apple Watch. The main message is health monitoring and machine learning; at no point do the ads mention anything about a regular watch. The watch is an afterthought. Side topic: make no mistake, Apple is gunning for the healthcare market. After all, $2T companies need enormous markets to grow into. Healthcare offers that.
The idea of monitoring at the edge and reacting to anomalies isn't science fiction. Witness Mercy Virtual, which, as the name suggests, is a virtual hospital. Mercy doesn't have beds. Its staff of doctors and nurses "sit at carrels in front of monitors that include camera-eye views of the patients and their rooms, graphs of their blood chemicals and images of their lungs and limbs, and lists of problems that computer programs tell them to look out for. The nurses wear scrubs, but the scrubs are very, very clean. The patients are elsewhere." Source: Politico
The benefits of combining wearables with machines' ability to analyze massive quantities of data in real time are profound. For one, machines can help detect abnormalities or the onset of disease much earlier, which can dramatically boost the chance of recovery. Second, it frees up humans - healthcare professionals - to focus on what they do best: providing care to their patients. Third, it can dramatically improve the productivity of healthcare professionals by outsourcing data collection and monitoring to the machine. Fourth, it can dramatically reduce the cost of healthcare: I no longer have to spend a night in the hospital, which in the US costs ~$4,700 on average, for monitoring and data collection if wearables can do the job.
Medical Imaging
If there's one application that is custom-tailored for machines, particularly A.I., it's image recognition. Recent advances in computational power, data availability, and A.I. techniques (deep neural networks) have dramatically improved machines' ability to recognize patterns in images. You've likely encountered this in everyday life: your social network recognizes pictures of you and your friends, and the photo app on your phone does the same. I can search for pictures on my phone using keywords like "table", "key", or "house", and my app displays photos that match the term. Magic!
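For the curious, here is a minimal sketch of what that kind of image recognition looks like in practice, using a pretrained deep neural network from the torchvision library. The photo path is a placeholder, and the example output is illustrative.

```python
import torch
from PIL import Image
from torchvision import models

# Load a pretrained ImageNet classifier and its matching preprocessing.
weights = models.ResNet50_Weights.DEFAULT
model = models.resnet50(weights=weights).eval()
preprocess = weights.transforms()

# "photo.jpg" is a placeholder for any image on disk.
img = Image.open("photo.jpg").convert("RGB")
batch = preprocess(img).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    probs = model(batch).softmax(dim=1)

# Print the model's top-3 guesses, e.g. "golden retriever: 92.40%".
labels = weights.meta["categories"]
top = probs.topk(3)
for p, idx in zip(top.values[0], top.indices[0]):
    print(f"{labels[int(idx)]}: {float(p):.2%}")
```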
It turns out that images, and recognizing patterns within them, are ever so prevalent in healthcare. Medical images come in many forms: X-rays, CT scans, and mammograms, to name a few. This is an area where neural network algorithms can make a dramatic impact on both the radiologist and the patient.
A typical radiologist in the US will read about 20,000 studies per year, or ~100 per work day. If we assume the radiologist's career spans 25 years, she can expect to analyze ~500K studies over her career. Contrast that with an A.I. (deep neural network) algorithm that can be trained on millions of images. Not only is the algorithm exposed to a training set an order of magnitude larger than what a radiologist will see in her career, but it can also be trained on distributions the human might never see. For example, a radiologist trained in a predominantly Caucasian part of the world might never be exposed to images from an African American or Asian population (there is variance between populations), whereas an algorithm trained on a sufficiently large and diverse data set will be. Furthermore, the algorithm is dynamic: it continuously learns from the new data it receives. I wrote an article on that topic and its challenges here. With an A.I. solution, you get the equivalent of a dozen or more expert radiologists out of the box.
Perhaps more importantly, the algorithm doesn't tire, and it isn't affected by the criticality of its decisions. The algorithm - the machine - is unperturbed, or, as Garry Kasparov put it, "There is no looking into your opponent's eyes to read his mood, or seeing if his hand hesitates a little above the clock, indicating a lack of confidence in his choice." That last point is vital, because the job of a radiologist is very hard - if you are a radiologist, you have my utmost respect. A radiologist can read up to 100 studies per day, ranging from relatively simple X-rays to complex 3-D images like CT and DBT, which can run to hundreds of images per study. The radiologist has mere minutes to read these images and make a recommendation.
Unsurprisingly, radiologists are affected by their workload and environment. They tire, and they can fall back on System 1 thinking (see Kahneman), scanning for pre-learned patterns. It turns out radiologists are human after all, susceptible to mental shortcuts and to variance in performance due to fatigue, among other factors. In a seminal study - the Gorilla Study - radiologists were shown studies with a picture of a gorilla superimposed in one corner of the image, as shown below (can you spot the gorilla?). About 83% of the radiologists missed the gorilla. For more on this study, refer to this article.
Source: Trafton Drew and Jeremy Wolfe
All of these challenges - fatigue, bias, inconsistent performance, System 1 thinking - are ones a machine can overcome. A radiologist partnering with A.I. can leverage it to review her work and detect discordances. Alternatively, the algorithm can triage cases for the radiologist, surfacing the difficult or urgent ones for the human to spend her time evaluating.
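As a sketch of what such triage could look like, consider sorting a radiologist's worklist by a model's suspicion score so the most urgent studies are read first. The Study fields and scores below are hypothetical stand-ins, not any vendor's actual API.

```python
from dataclasses import dataclass

@dataclass
class Study:
    study_id: str
    modality: str     # e.g. "x-ray", "CT", "DBT"
    suspicion: float  # model's estimated probability of a critical finding

def triage(worklist: list[Study]) -> list[Study]:
    """Order the worklist most-suspicious first."""
    return sorted(worklist, key=lambda s: s.suspicion, reverse=True)

worklist = [
    Study("A-101", "x-ray", suspicion=0.05),
    Study("A-102", "CT", suspicion=0.91),
    Study("A-103", "mammogram", suspicion=0.42),
]
for study in triage(worklist):
    print(study.study_id, f"{study.suspicion:.0%}")
# A-102 is read first (91%), A-101 last (5%).
```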
A peek into the future
Every so often, a technological innovation comes along offering the promise of a better future at the expense of disrupting the present. The industrial revolution had that effect: it dramatically improved productivity yet displaced many existing industries. History has also taught us that humans can have a short-lived adverse reaction to change. The very first operator-less elevators were met with skepticism and deemed too risky to ride - see here for more details. Today, autonomous vehicles (self-driving cars) are met with much skepticism and doubt. I have no doubt that they will become the de facto mode of transportation in the not-so-distant future.
It could very well be that A.I. will replace many jobs. Geoffrey Hinton, a computer scientist at the University of Toronto, seems to believe so, especially when it comes to radiologists.
“I think that if you work as a radiologist you are like Wile E. Coyote in the cartoon,” Hinton told me. “You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath.” Deep-learning systems for breast and heart imaging have already been developed commercially. “It’s just completely obvious that in five years deep learning is going to do better than radiologists,” he went on. “It might be ten years. I said this at a hospital. It did not go down too well.” Geoffrey Hinton now qualifies the provocation. “The role of radiologists will evolve from doing perceptual things that could probably be done by a highly trained pigeon to doing far more cognitive things.” Source: “A.I. vs M.D.” by Siddhartha Mukherjee, New Yorker
I do believe the overall net effect of A.I. will be dramatically positive, especially when A.I. is combined with humans. I refer back to Kasparov, who put it so eloquently:
“A.I. will help us to release human creativity. Humans won’t be redundant or replaced, they’ll be promoted.” Source: The Register
I've offered a few examples of how I see humans and A.I. working together in a centaur-like manner. All of them were within healthcare, a domain I'm familiar with by virtue of my current role. It is also a domain I hope will adopt A.I. to accelerate the delivery of care and allow humans to do what they excel at: empathy, caring, and, well, being human. Or as Dr. Eric Topol puts it in his most excellent book, Deep Medicine:
“The greatest opportunity offered by AI is not reducing errors or workloads, or even curing cancer: it is the opportunity to restore the precious and time-honored connection and trust—the human touch—between patients and doctors” Eric Topol