How AI Will Make Everybody—and Everything—Smarter

An interview with Byron Reese, author of “The Fourth Age”


What is technology? Byron Reese defines the concept as “knowledge that magnifies human ability.” A bullhorn makes us louder, a forklift makes us stronger, and binoculars help us see farther.

As CEO of tech research and analysis firm Gigaom, host of the Voices in AI podcast, and a tech entrepreneur of 25 years, Reese has dedicated his professional career to understanding how technology—from bullhorns to biometrics—augments the human experience. He engages with tomorrow’s greatest technological questions: Is brain function mechanical? Can a robot feel? Where is the line between man and machine, and where will that line be a century from now?

Byron Reese

Reese’s latest book, The Fourth Age: Smart Robots, Conscious Computers, and the Future of Humanity, argues that technology has reshaped the fundamentals of life on Earth three times in the past. First, fire led to language. Then, 90,000 years later, agriculture led to cities and war. Finally, the invention of the wheel and the development of written language led to the formation of the nation-state: the backbone of global society and economy. Reese posits that artificial intelligence (AI) and robotics represent an equally transformative evolution for humanity, one that will usher in a new age of prosperity and productivity. He spoke with trajectory contributing writer Andrew Foerch to share his analysis, evaluating philosophical notions of reality, personhood, and the mind.

What led you to write this book?

There were so many conflicting narratives about what AI was going to do to the world, coming from everybody from Elon Musk to Stephen Hawking to Bill Gates. Elon says there’s a small chance we’ll survive, that we’re just the bootloaders for AI. Then we have other people, mostly practitioners, who found that perspective so ridiculous it was hardly even worth a rebuttal. The fact that all of these smart people had such different views, that’s what got me interested in writing about AI.

What I discovered along the way was not that these people knew different things. They didn’t have special knowledge. They believed different things about the nature of reality, humanity, the mind, or the brain. That’s what I wanted to understand: What were the assumptions behind all these narratives?

What makes you believe machine intelligence is so revolutionary, even beyond a tool such as the internet?

Whatever AI is—and there’s a big debate about that—it effectively makes everybody smarter. There’s no way to spin that to be a bad thing. The internet is only a communications platform; it’s just a way for computers to talk to each other. If the internet created a million companies and $25 trillion in wealth and revolutionized the world, can you imagine what making everybody on the planet smarter would do?

AI is actually even more than just making everybody smarter—it’s a form of collective memory for our planet, for our species. Answers to any questions you have are in a library somewhere. The fact they’ve been written down doesn’t help if they aren’t preserved in a way that can be leveraged at scale. Machine learning is a way to study the past to make educated guesses about the future.

What themes or patterns did you notice in looking at the history of human evolution?

For me, the question begins with, ‘What is technology?’ I concluded technology is knowledge that magnifies human ability. More people can hear you with a bullhorn, and you can move more bricks with a forklift than you can on your back. Then I thought, ‘What are the effects of technology through history?’ And what you find are certain inflection points where two things happen: lots of technologies come together at once, and together they alter the course of humanity.

The first inflection point, and it’s a little arbitrary, is the pairing of fire and language. Those two technologies developed at the same time for a very good reason. Fire let us cook food, and cooking food gave us big brains, and with big brains we created language. But language meant all of a sudden we could coordinate our actions. No mammoth was a match for 10 people with language.

The second was agriculture. It gave us the city, and the city gave us the division of labor, and that gave us everything. All wealth really comes from that. You specialize, I specialize. Because of that, we all have a better standard of living. Agriculture also gave us warfare and slavery, which is the bad side of it, but all that happened because of technology.

And then, 5,000 years ago, two technologies happened to arrive on the scene at the same time, and those were the wheel and writing. With the wheel and writing came nation-states. With them, you can promulgate and enforce laws, move people, collect taxes, and more. That’s why, around 5,000 years ago, great civilizations emerged out of nowhere all around the world.

Right now, most AI applications are considered “narrow.” What is the thinking among experts about achieving “general” AI?

There is no consensus that narrow AI gets better and better and one day you get a general AI. It could be that they are entirely unrelated technologies that just happen to have ‘artificial intelligence’ in their names. On my podcast I ask practitioners, ‘Do you believe narrow AI evolves into general intelligence?’ I would say 60 percent say, ‘No, it doesn’t, it’s a different technology.’ We may not have even started working on general intelligence yet.

There is a minority opinion within the computer science world that general AI may not be possible to build. The argument that we can build one is simple to understand: the brain is a machine, and the mind and consciousness—the fact that we experience the world—are mechanistic processes. If that’s the case, then we can build general AI. If we could just figure out how a neuron works, we could tack a bunch of them together.
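
That mechanistic argument can be made concrete with a toy sketch. The Python below is purely illustrative and is not drawn from the interview: an artificial neuron is modeled as a weighted sum of inputs passed through a threshold, and “tacking a bunch of them together” simply means feeding the outputs of some neurons into another. The weights, biases, and inputs are invented for illustration.

    # Illustrative sketch only: one artificial neuron, then a few wired together.
    # The weights, biases, and inputs are made-up numbers, not a trained model.

    def neuron(inputs, weights, bias):
        """One artificial neuron: a weighted sum of inputs, then a hard threshold."""
        activation = sum(x * w for x, w in zip(inputs, weights)) + bias
        return 1.0 if activation > 0 else 0.0

    def tiny_network(inputs):
        """'Tack a bunch of them together': two neurons feeding a third."""
        hidden_a = neuron(inputs, weights=[0.6, -0.4], bias=0.1)
        hidden_b = neuron(inputs, weights=[-0.3, 0.8], bias=-0.2)
        return neuron([hidden_a, hidden_b], weights=[0.5, 0.5], bias=-0.4)

    print(tiny_network([1.0, 0.0]))  # prints 1.0 with these invented weights

Real networks use millions of such units, with weights learned from data rather than picked by hand; whether piling up that kind of computation could ever add up to a mind is exactly the point of disagreement Reese describes.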

When I ask people on my show when they think humans will succeed at building general intelligence, I get five to 500 years as a range. That’s pretty telling. Imagine dropping your shirt off at the dry cleaner and being told it will be ready in five to 500 days. The consensus is probably more like 20 to 25 years from now—but more interestingly, the answer has always been 20 to 25 years from now. Alan Turing thought we’d have a machine that could pass the Turing test 20 years ago, and we’re nowhere near having that. They’re all guesses.

As far as the future of employment goes, what effects do you think automation will have on jobs?

There are three schools of thought on that, and in the book I don’t come down for any one of them; I try to make the cases for and against each. The first is that automation will take a bunch of low-skill jobs and there will be a group of people who cannot compete in the job market anymore, creating an eternal Great Depression. The second says computers will do everything better than us. They’ll write better poetry. They’ll be better presidents. Don’t think they can only do low-skilled work. The third school of thought says we’ve all seen this before: when you build tools that empower people, it increases people’s productivity, and that drives wages.

The argument is that in the United States, we’ve had 250 years of technological progress, 250 years of economic growth, and 250 years of full employment. Other than the Great Depression, unemployment has always been between 5 and 10 percent. So why is it that electricity becomes widely available and unemployment doesn’t jump? Why did the assembly line come out—which was a frightening kind of artificial intelligence at the time—and unemployment never went up? The answer is that people simply used these new technologies to increase their own productivity and therefore to make more money. And that drove economic growth.

I personally feel strongly that the third school is the most accurate. If a machine can do a job, that job doesn’t require anything that makes you human. I think the goal is to use these technologies to empower everybody to do jobs beyond what a machine can do.

What types of skills should people learn to manage a world and workforce of smart machines?

Everybody in the workforce teaches themselves new stuff all the time. I would expect that’s going to stay the same. Universities are not trade schools. They help people learn to think for themselves, to learn new things, and to be trainable. That’s everybody’s future now. It’s not that we need to retool everything; it’s just that we learn as we go. The more we’re able to put AI into the simplest tools, the more the people using those tools instantly become experts. Our tools are smart enough to help us along.

Why do you think some people fear the conscious machine, and what would you say to them?

We as a species are programmed to be cautious. Someone said it’s better to see a rock and think it’s a bear and run away than to see a bear and think it’s a rock and stay there and get eaten. We have a long history of finding caution to be a really good strategy for survival.

We’ve had 5,000 years of pretty much nonstop progress. If you picked any time in the past, any place in the world, and any measure of progress (infant mortality, standard of living, life expectancy, access to education, the status of women, individual liberty, self-government), I can guarantee you that with few exceptions, it’s better now than it was at that place in the past. Things have never been better than they are right now, and they progressively get better over time. Somehow, we’re at this point where people say, ‘It’s all about to stop.’ And I don’t understand it. In fact, I think we’re within striking distance of being able to build the kind of society that dreamers used to dream about for so long. Utopians would say, ‘Someday, there will be enough food for everybody. Someday, there will be enough medical care for everybody.’ We now have the technology to make that happen.

With regard to “the singularity,” do you believe humans will ever create a machine that is as or more intelligent than a human?

I do not. There are people who believe it, and they are smart people. They would say we will reach a point where a machine can learn everything faster than a human. At that point, it’s game over for humans. Everything that comes out, a machine will learn it faster. To some degree, people overestimate what AI is capable of. It’s simple, really: let’s study data about the past, let’s look for patterns in it, and let’s project those into the future. That’s what it does, and it does this fantastically well.
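
That description can be illustrated with a toy example. The Python sketch below uses invented numbers, not data from the interview: it fits a straight line to a handful of “past” observations and extends the line forward, which is the whole shape of the process Reese describes, learning a pattern from the past and projecting it ahead as an educated guess.

    # Toy sketch of "study the past, find a pattern, project it forward."
    # The historical values below are invented purely for illustration.

    past_years = [2015, 2016, 2017, 2018, 2019]
    past_values = [10.0, 12.1, 13.9, 16.2, 18.0]  # hypothetical measurements

    # Fit a straight line (ordinary least squares) to the past observations.
    n = len(past_years)
    mean_x = sum(past_years) / n
    mean_y = sum(past_values) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(past_years, past_values))
             / sum((x - mean_x) ** 2 for x in past_years))
    intercept = mean_y - slope * mean_x

    # Project the learned pattern into the future: an educated guess, not a certainty.
    for year in (2020, 2021, 2022):
        print(year, round(slope * year + intercept, 1))

A real machine learning system swaps the straight line for a far more flexible model and the five data points for millions, but the underlying logic is the same.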

If you draw a line from the abacus to the calculator to the iPhone and then you keep drawing, I don’t believe you eventually get to a human being. So no, I’m not a singularitarian. That all begins with the assumption that nothing is going on in your brain other than computation.

What did you personally gain from the experience of writing this book?

I wrote a book that never tells the reader what I believe. It really tries to look at all these positions and understand the pros and cons of each. The book is my own journey. I wrote it because I was thinking: ‘Should I be worried about this? What is going to happen with jobs? What should my kids study to make sure they’re relevant?’ These are the great questions of our time. I came to my conclusions as I got near the end of writing the book, and I didn’t think it was important to include them because it doesn’t really matter whether I think I’m a machine. What matters is: Does the reader think they’re a machine?

Photo Credit: Max Pixel
