My intention for this week was to publish my second post on the sad state of software testing. The timing didn't work out, and I hope to publish that post next weekend.
In addition to my tardiness, I had started to notice an uptick in readership of an article I wrote in October 2020, titled "Centaurs: The future of work? How A.I. and humans can collaborate". I had always wanted to follow up on this article, given the recent changes in the AI landscape, so I will be reposting the 2020 article here, with an opening prologue and an ending epilogue. Much has changed since I first wrote it :)
Prologue
I published the original article whilst working at an AI company - Kheiron Medical Technologies. Kheiron develops AI products in the cancer detection space, with Mia being its flagship breast cancer detection product. There were a few very important dynamics that I observed whilst building these products and making them available to the market, which in Mia's case is the healthcare sector.
First, AI products were - and still are - viewed as black boxes. It's very hard to understand how an AI product like Mia makes an inference upon reading a medical image. One cannot simply debug the model to understand why or how it derived its output.
Second, products like Mia are in direct competition with human radiologists. An AI model doesn't tire, it doesn't have a bad day, and it can be trained on a corpus of images that a human radiologist might see in 10 lifetimes. And, finally, an AI model is actually better at image and pattern recognition than a human is. How, then, can you convince these very same humans to use and purchase products that are well positioned to make them redundant?
The answer is to position these products as an aid to the human. They aren't meant to replace humans, but to help them do a better job and focus on what they can do far better than an AI model: provide care to their patients. That was the crux of the article: human + AI, or what is commonly referred to as a centaur approach, a term borrowed from the chess world.
The original 2020 article
Epilogue: And then came LLMs
Fast forward to today. The AI mania led by LLMs like ChatGPT is igniting all sorts of debates on how the bots will take over. I don't believe that, and still argue that the centaur model will be the more pervasive one. There are a couple of reasons for my argument.
The first is to simply acknowledge what the recent LLMs are able to do. They are glorified auto-complete tools. I know this is an oversimplification, but ultimately that's what these models do: they build up words, sentences and paragraphs one token at a time. Now, it just so happens that these models are trained on what is arguably the entire corpus of digitized human knowledge. No wonder they are immensely powerful, even, dare I say, intelligent. Rote learning combined with a photographic memory and pattern matching at the scale of LLMs is intelligence.
And the remarkable thing is that when ChatGPT does something like write an essay what it's essentially doing is just asking over and over again "given the text so far, what should the next word be?"—and each time adding a word. (More precisely, as I'll explain, it's adding a "token", which could be just a part of a word, which is why it can sometimes "make up new words".) Source: What Is ChatGPT Doing … and Why Does It Work?
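To make that loop concrete, here is a minimal sketch of the "one token at a time" process in Python, using the open GPT-2 model via Hugging Face's transformers library. The model choice and greedy decoding are my own illustrative assumptions (production chatbots sample with more nuance), but the shape of the loop is the point:

```python
# A minimal sketch of the "what should the next token be?" loop.
# GPT-2 and greedy decoding are illustrative choices, not how any
# particular chatbot is actually configured.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "The future of work is"
input_ids = tokenizer(text, return_tensors="pt").input_ids

for _ in range(20):  # generate 20 tokens, one at a time
    with torch.no_grad():
        logits = model(input_ids).logits  # a score for every vocab token
    # Greedy choice: pick the single most likely next token.
    next_id = logits[0, -1].argmax()
    # Append it, then ask the same question again on the longer text.
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

Everything these models produce, from essays to code, comes out of essentially this loop - just with a vastly larger model and smarter sampling.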
The second, and more important, reason is regulation and red tape. Huge swaths of our economy - healthcare and education among them - are still in the technological dark ages. To assume that these sectors will jump on the AI bandwagon is naive. The best we can hope for in these sectors, at least in the near term, is a centaur-like model: a gradual adoption of AI.
Now think about what happens over time. The prices of regulated, non-technological products rise; the prices of less regulated, technologically-powered products fall. Which eats the economy? The regulated sectors continuously grow as a percentage of GDP; the less regulated sectors shrink. At the limit, 99% of the economy will be the regulated, non-technological sectors, which is precisely where we are headed.
Therefore AI cannot cause overall unemployment to rise, even if the Luddite arguments are right this time. AI is simply already illegal across most of the economy, soon to be virtually all of the economy. Source: Andreessen Horowitz
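Run the compounding arithmetic behind that quote and the "eats the economy" claim falls out quickly. Here is a toy simulation, with growth rates I made up purely to show the direction of the effect:

```python
# Toy model of the quoted argument: regulated-sector prices rise while
# tech-sector prices fall, so the regulated share of nominal GDP grows.
# The 4% / -2% rates are invented for illustration only.
regulated, tech = 50.0, 50.0  # equal nominal output to start

for year in range(51):
    if year % 10 == 0:
        share = 100 * regulated / (regulated + tech)
        print(f"year {year:2d}: regulated sectors = {share:.0f}% of GDP")
    regulated *= 1.04  # regulated prices rise ~4% a year
    tech *= 0.98       # tech prices fall ~2% a year
```

With these made-up rates the regulated share climbs from 50% to roughly 95% over fifty years. Whether the real rates look anything like this is an open question; the point is only that the divergence compounds.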
I am still tremendously excited about the power and ultimate benefits of AI. I've witnessed firsthand what AI products can do for the healthcare sector - they save lives. I have no doubt that some use cases and jobs might be entirely displaced by AI, including radiologists and elements of software engineering, but the net impact won't be mass unemployment. Instead, many other jobs and new use cases will leverage AI in centaur-like fashion, ultimately for the benefit of humanity. I do not believe that the Singularity is upon us. We'll see!
Coincidentally, a day before I wrote this post, the NY Times published an article on Kheiron. This quote sums it up quite nicely.
“An A.I.-plus-doctor should replace doctor alone, but an A.I. should not replace the doctor,” Mr. Kecskemethy said. Source: NY Times
I remember when code completion and syntax highlighting were laughed at as 'training wheels' for devs. I always loved the augmentation that ergonomic tech brought, and I don't understand the bravado / Luddite mentality that fights it. I just want to be solving new business problems, without the back pressure of jumping out of flow to go read API docs / Stack Overflow to re-learn already solved problems.
Hopefully, when Copilot gets license filtering in place, the excuses for blocking high-powered code completion will fall.