Grady Booch is a unique voice in computing, whose contributions encompass a wide range of interests. He introduced the Booch method, which led to his co-creation of the Unified Modeling Language. He also helped usher in the use of design patterns and agile methods and has written a large corpus of books and articles addressing software engineering and software architecture. Today, he is chief scientist for software engineering at IBM Research
and is creating a documentary, Computing: The Human Experience, exploring the intersection of computing and what it means to be human.
Our recent conversation touched on both practical and philosophical aspects of human-computer interaction and co-evolution, artificial intelligence, quantum machines, and Web3.
Tyson: Thanks for the chance to talk, Grady!
There’s so much to cover. Let me begin by asking something "of the moment." There has been something of a culture war between object-oriented programming and functional programming. What is your take on this?
Booch: I had the opportunity to conduct an oral history with John Backus—one of the pioneers of functional programming—in 2006 on behalf of the Computer History Museum. I asked John why functional programming didn’t enter the mainstream, and his answer was perfect: “Functional programming makes it easy to do hard things,” he said, “but functional programming makes it very difficult to do easy things.”
Functional programming has a role to play: many web-centric software-intensive systems at global elastic scale are well served by having some elements written in stateless form, and that’s precisely what functional programming is good for. But remember this: that’s still only a part of those systems, and furthermore, there is much, much more to the world of computing than web-centric systems at global elastic scale.
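To make the stateless point concrete, here is a minimal sketch in Python; the names and numbers are hypothetical, not drawn from any system Booch describes. A pure function can be replicated freely behind a load balancer, while a stateful object pins work to one machine:

    # A hypothetical sketch of the stateless style that scales elastically.
    from dataclasses import dataclass

    @dataclass(frozen=True)  # immutable input value
    class PriceRequest:
        unit_price: float
        quantity: int
        discount: float  # e.g., 0.10 for 10%

    def quote(req: PriceRequest) -> float:
        """Pure and stateless: the same input always yields the same
        output, so any number of replicas can run it in parallel."""
        return req.unit_price * req.quantity * (1.0 - req.discount)

    # Contrast with a stateful handler, which is harder to scale out:
    class StatefulQuoter:
        def __init__(self):
            self.running_total = 0.0  # hidden mutable state

        def quote(self, req: PriceRequest) -> float:
            self.running_total += req.unit_price * req.quantity
            return self.running_total * (1.0 - req.discount)

The pure version can also be cached, replayed, and tested trivially, which is much of what makes the stateless style attractive at scale.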
Tyson: Okay, let me leap across from the specific to the general: what is software? What is a computer? Why are these seemingly obvious things so significant?
Booch: If you were to have asked me that question at the turn of the century—the start of the 1900s, I mean—I would have said “a computer is a person who computes,” and as for software, I would have no idea what you meant. You see, the term computer at first referred to a person—usually a woman—literally someone who computed. It wasn’t until we began to devise machines in the mid-1900s that we replaced the activity of those squishy organic computers with relays, vacuum tubes, and eventually transistors.
Even if we consider the Turing test, Alan Turing had in mind the question of whether we could build a machine that duplicated the ability of a human to think. As for the term software, its etymology tells us a great deal about how astonishingly young the field of computing is. The term digital was first coined by George Stibitz in 1942, and the term software was introduced by John Tukey in 1958. Here’s an easy way to distinguish the terms: when something goes wrong, hardware is the thing you kick and software is the thing you yell at.
Tyson: You said in our earlier chat that “perhaps the most important outcome of our computing technology is that it compels us to examine what it means to be human.” Would you continue that thought?
Booch: The story of computing is the story of humanity. This is a story of ambition, invention, creativity, vision, avarice, and serendipity, all powered by a refusal to accept the limits of our bodies and our minds. As we co-evolve with computing, the best of us and the worst of us are amplified, and along the way, we are challenged as to what it means to be intelligent, to be creative, to be conscious. We are on a journey to build computers in our own image, and that means we have to not only understand the essence of who we are, but we must also consider what makes us different.
Tyson: Babbage said, “We may propose to execute, by means of machinery, the mechanical branch of these labours, reserving for pure intellect that which depends on the reasoning faculties.” Where are we in that journey?
Booch: Actually, I think his colleague, Ada Augusta, Countess of Lovelace, better understood the potential of computers than he ever did. “The Analytical Engine does not occupy common ground with mere ‘calculating machines,’” she said. Rather, “it holds a position wholly of its own.” Ada recognized that the symbols manipulated by machines could mean something more than numbers. The field of computing has made astonishing progress since the time of Babbage and Lovelace and Boole, but still, we are a very young discipline, and in many ways we have just begun.
Tyson: Speaking of Babbage does lead naturally to Ada Lovelace. I notice a strong thread in your work of pointing out the sometimes hidden role women play in moving us forward. How do you think we as a society are doing on that front?
Booch: Poorly. There was a time in the earliest days of computing when women played a far larger role. Annie Jump Cannon was the lead among the Harvard Computers in the 1800s; the ENIAC was programmed mainly by six women; Grace Hopper pioneered the idea of compilers and high-order programming languages. Sadly, a variety of economic, social, and political forces have reduced the number of women in the ranks of computing. A dear colleague, Mar Hicks, has written extensively on these factors. We must do better. Computing impacts individuals, communities, societies, and civilizations, and as such there must be equitable representation of all voices to shape its future.
Tyson: AI, especially conversational AI, has really taken off recently. What do you think is the next phase in that story?
Booch: Remember ELIZA from the mid-1960s? This was an early natural language system that absolutely astonished the world in its ability to carry out Rogerian therapy … or at least a fair illusion of it. We’ve come a long way, owing to a perfect storm: the rise of abundant computational resources, the accumulation of vast lakes of data, and the discovery of algorithms for neural networks, particularly a recent architecture called a transformer. In many ways, the recent advances we have seen with systems such as ChatGPT, Bard, and (in the visual world) DALL-E and Stable Diffusion have come about by applying these three elements at scale.
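The transformer’s central operation, scaled dot-product attention, is compact enough to sketch. This toy version assumes only NumPy, and the dimensions are made up for illustration; production models stack many such layers with learned weights:

    # A toy sketch of scaled dot-product attention, the heart of the
    # transformer architecture mentioned above. Dimensions are made up.
    import numpy as np

    def attention(Q, K, V):
        """Each row of Q attends over the rows of K, producing a
        weighted mix of the rows of V."""
        d_k = K.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # query-to-key similarity
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
        return weights @ V

    rng = np.random.default_rng(0)
    seq_len, d_model = 4, 8          # 4 tokens, 8-dimensional embeddings
    Q = rng.normal(size=(seq_len, d_model))
    K = rng.normal(size=(seq_len, d_model))
    V = rng.normal(size=(seq_len, d_model))
    print(attention(Q, K, V).shape)  # (4, 8): one blended vector per token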
The field of artificial intelligence has seen a number of vibrant springs and dismal winters over the years, but this time it seems different: there are a multitude of economically interesting use cases that are fueling the field, and so in the coming years we will see these advances weave themselves into our world. Indeed, AI already has: every time we take a photograph, search for a product to buy, or interact with some computerized appliance, we are likely using AI in one way or another.
Chat systems will incrementally get better. That being said, we are still generations away from creating synthetic minds. In that journey, it is important that we consider not just what our machines can do, but what they do to us. As Allen Newell—one of the early pioneers of artificial intelligence—noted, “computer technology offers the possibility of incorporating intelligent behavior in all the nooks and crannies of our world. With it, we could build an enchanted world.” To put it somewhat poetically, software is the invisible writing that whispers the stories of possibility to our hardware … and we are the storytellers. It’s up to us to decide if those stories amplify us or diminish us.
Tyson: Quantum computing stands alongside AI in its revolutionary potential. Do you think we’ll have a similar breakthrough in quantum computers anytime soon?
Booch: The underlying assumption of science is that the cosmos is understandable; the underlying assumption of computing is that the cosmos is computable. From the lens of computing, we can imagine new worlds, but to make those things manifest, we must write programs that run on physical machines. That means we must abide by the laws of physics, and quantum computing, at this current stage in its development, is mostly trying to find ways to work within those laws.
Two things I must mention. First, quantum computing is a bit of a misnomer: we don’t store information in its quantum state for very long; we just process it. As such, I prefer the term quantum processing, not quantum computing. Second, theoretically, non-quantum computers and quantum devices are Turing equivalent. They both have the same computational potential, yet each has particular advantages and efficiencies, with very different scalability, latency, resiliency, correctness, and risk. Quantum machines are particularly good at attacking what are called NP problems, problems whose cost grows steeply as their size increases. As for breakthroughs, I prefer to see this as a world of steady, continuous, incremental progress, advancing as we solve some very hard problems of physics and engineering.
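To illustrate what “quantum processing” manipulates, here is a from-scratch sketch of a two-qubit Bell state using nothing but plain linear algebra; no quantum hardware or vendor SDK is assumed, and real devices add noise, error correction, and scale that this toy omits:

    # A from-scratch sketch of two-qubit "quantum processing" in plain
    # linear algebra; purely illustrative, no quantum SDK assumed.
    import numpy as np

    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)  # Hadamard gate
    I = np.eye(2)
    CNOT = np.array([[1, 0, 0, 0],
                     [0, 1, 0, 0],
                     [0, 0, 0, 1],
                     [0, 0, 1, 0]])

    state = np.zeros(4)
    state[0] = 1.0                     # start in |00>
    state = np.kron(H, I) @ state      # superpose the first qubit
    state = CNOT @ state               # entangle: a Bell state

    probs = np.abs(state) ** 2
    print(dict(zip(["00", "01", "10", "11"], probs.round(3))))
    # {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5} -- measuring collapses
    # the state, which is why Booch prefers "processing" over "computing".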
Tyson: Quantum computing leads me to cryptography—where, almost as a side-effect, it is able to attack public-key algorithms. I get the sense you are wary of blockchain’s ethics. Would you talk a bit about cryptography and Web3?
Booch: Web3 is a flaming pile of feces orbiting a giant dripping hairball. Cryptocurrencies—ones not backed by the full faith and credit of stable nation states—have only a few meaningful use cases, particularly if you are a corrupt dictator of a nation with a broken economic system, or a fraudster who wants to grow their wealth at the expense of greater fools. I was one of the original signatories of a letter to Congress in 2022 for a very good reason: these technologies are inherently dangerous, they are architecturally flawed, and they introduce an attack surface that threatens economies.
Tyson: You said, “I hope we will also see some normalization with regards to the expectations of large language models.” Would you elaborate?
Booch: I stand with Gary Marcus, Timnit Gebru, and many others in this: large language models such as GPT and its peers are just stochastic parrots, very clever and useful mechanisms that offer the illusion of coherence while possessing absolutely no degree of understanding. There are indeed useful purposes for LLMs, but at the same time, we must be cognizant of their risks and limitations.
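The “stochastic parrot” idea can be demonstrated at toy scale. The bigram Markov chain below strings words together purely by statistics; real LLMs use transformers trained on vastly more data, but the gap between fluency and understanding is the same in kind. The corpus here is made up:

    # A toy "stochastic parrot": a bigram Markov chain that generates
    # fluent-looking text by statistics alone, with no understanding.
    import random
    from collections import defaultdict

    corpus = ("the machine reads the text and the machine writes "
              "the text and the reader sees meaning in the text").split()

    follows = defaultdict(list)
    for prev, word in zip(corpus, corpus[1:]):
        follows[prev].append(word)          # record what tends to come next

    random.seed(1)
    word, output = "the", ["the"]
    for _ in range(12):
        word = random.choice(follows[word]) # pick a statistically likely word
        output.append(word)
    print(" ".join(output))                 # reads smoothly; nothing is "meant"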
Tyson: What do you make of transhumanism?
Booch: It’s a nice word that has little utility for me other than as something people use to sell books and to write clickbait articles. That being said, let’s return to an earlier theme in our interview: what it means to be human. Consciousness, sentience, and sapience are all exquisite consequences of the laws of physics. It is likely that the cosmos is teeming with life; it is also likely that sentient life is a rare outcome; and it is unlikely that, in the fullness of time of the cosmos, we are the only sentient beings. That being said, we (you, me, everyone reading this) are sentient beings, born of star-stuff and able to contemplate ourselves. That, for me, is enough.
Tyson: Do you think we’ll ever see conscious machines? Or, perhaps, something that compels us to accept them as such?
Booch: My experience tells me that the mind is computable. Hence, yes, I have reason to believe that we will see synthetic minds. But not in my lifetime, or yours, or your children’s, or your children’s children’s. Remember, also, that this will likely happen incrementally, not with a bang, and as such, we will co-evolve with these new species.
Tyson: Everyone should look at the lists of books you’ve read. Knowing that you’ve read A Universe of Consciousness gives me permission to ask: do you hold a materialist viewpoint? (Or, falling completely into the realm of philosophy: what is consciousness?)
Booch: Let me put it this way: I have reason to believe I am conscious and sentient; I have reason to believe that you are, as well, because my theory of mind yields a consistency in our being. Reflecting Dennett’s point of view, consciousness is an illusion, but it is an exquisite illusion, one that enables me to see and be seen, know and be known, love and be loved. And for me, that is enough.