My interest in Christopher Nolan’s Oppenheimer eventually led me to learn about Vasily Arkhipov. The short version is that in 1962 Arkhipov, single-handedly and quite literally, saved the world.
Specifically, he saved the world from mutual nuclear annihilation, which at the height of the Cold War was a very real possibility. As an officer aboard a Soviet submarine during the Cuban Missile Crisis, he refused to consent to the launch of a nuclear torpedo against American forces. You could look it up.
This comes up now because I was talking with a colleague today about recent advances in Generative A.I. As the technology races ahead, the parallels with nuclear weaponry are hard to ignore.
We don’t need to indulge in fantasy scenarios like Skynet to be worried that A.I. might eventually do us all in. The danger, in fact, is quite the opposite.
The danger of a Large Language Model (LLM) is not that it cares, but that it doesn’t care. It is like the broom in the Sorcerer’s Apprentice, mindlessly doing what it is told, with no regard for the consequences.
Should we be worried about that? Yes.
People who argue against government regulation of A.I. sometimes point out that nothing really bad has happened as a consequence of LLMs. Which brings me back to Oppenheimer and the Cold War.
Vasily Arkhipov was not inevitable; he just happened to be the right person in the right place at the right time. Had someone else been on watch that day, we might all be dead now.
We may be in an analogous situation with Generative A.I. Nothing goes horribly wrong until it does, and then it might be too late to do anything about it.
If something catastrophic threatens to happen because we all rely on A.I. without anyone checking its work, there may not be a Vasily Arkhipov around this time to save the day. As I said to my colleague today: just because you’re lucky doesn’t mean you’re safe.