The State of AI

Experts at SXSW 2018 discuss upcoming hurdles for AI as well as the looming singularity, machine ethics, and the future of work


“Humans tend to overestimate technology in the short term but underestimate it in the long term,” said Tom Foster, editor at large for Inc. magazine, during a panel he moderated on innovations in machine learning at South by Southwest (SXSW) in March.

Artificial intelligence (AI) was a recurring theme across panels at SXSW 2018’s Interactive Conference held in Austin, Texas. The topic was particularly popular in tracks titled “Intelligent Future” and “Startup & Tech Sectors.” Many AI experts marveled at recent advances in the technology while pondering its future.

“I’ve been working in AI for now more than 30 years and in the past eight years there are things that have occurred that I never thought would happen in my lifetime,” said Adam Cheyer, co-founder of Viv Labs, during a discussion on innovations in AI. He cited as examples IBM’s Watson supercomputer answering Jeopardy! questions posed in natural language and the ability of AI to recognize objects in an image and then synthesize the meaning of that image.

According to Cheyer, who is also known for being one of the co-founders of Apple’s Siri, we’re on the brink of a new paradigm. He noted the way people interact with computers changes roughly every 10 years—beginning with the PC, then the web browser, followed by the smartphone—and predicted the next evolution would be the advancement of artificially intelligent assistants such as Alexa and Siri. He described today’s assistants as utilities that can’t do everything the web or an app can, but anticipated this will soon change as companies pour billions of dollars into creating a scalable ecosystem.

“I believe within the next two to three years you will use an assistant for everything you use the web and apps for and it will be better than either of them,” Cheyer said, giving traveling to a wedding as an example.

Imagine an AI assistant that knows your preferences and can engage in complex task coordination on your behalf—arranging your travel and hotel, what you will wear, what gift you will bring.

“It will be a paradigm more important than web and mobile and at a scale where every connected human and business will be transformed—and we’re this close,” he added.

Narrow vs. General AI

From left to right, Daphne Koller of Calico Labs, Adam Cheyer of Viv Labs, Nell Watson of Singularity University, and Loic Le Meur of Leade.rs participate in a discussion on “Exploring Innovations in AI.”

Though Cheyer posits that AI’s competence in specific application areas (narrow AI) will have the power to change lives, he said general AI has made only slight progress: “nothing beyond that of what any two-year-old can do.”

A panel exploring the power of narrow AI, also called vertical AI, outlined the differences between the two.

Amir Husain, founder and CEO of SparkCognition and author of “The Sentient Machine: The Coming Age of Artificial Intelligence,” said all of the investment going into AI today would be “paid back manifold” by vertical applications.

Vertical AI is the ability to perform a specific task at or above the human level of performance, such as classification, image processing, speech recognition, or finding patterns in data. Conversely, artificial general intelligence (AGI) is the ability of a single system to handle a broad range of complex tasks, such as playing chess, composing a song, or arguing a lawsuit.

“When an app can look at the cancer diagnosis and provide results better than a human doctor, when an autonomous car can … control the car in a safer way than a human, when a weapon system is able to guide itself without the aid of the pilot—when it can only do that one task with reliability and accuracy—that is narrow AI,” Husain said. “Today, that is where we are poised. That is not to say we aren’t going toward more generalized AI.”

According to Husain, current achievements center on the ability of AI to effectively perceive—whether that means recognizing an image and the objects within it, making sense of text, converting sound into text, and so forth. But, he continued, progress will occur when this automated perception is woven together with other AI applications to evolve from observation to decision, simulation, and action.

“Are you interested in just pulling in some data from a database and seeing a fancy chart that says, ‘I think this is an anomaly?’ Come on, that’s so 1979,” he said. “… What’s really interesting is how you start reasoning and how you start to build a case for action, and then enable that action in a way that’s explainable.”

John Price, CEO of vast.com, said success is about more than “just spinning up a great vertical AI app.” He outlined the importance of knowing where you’re going to get quality data, developing an effective and scalable business plan, and knowing how users will interact with the AI and how it will present information.

“Users don’t want information from the AI, they want to be told what to do,” he concluded.

How Close Is the Singularity, Really?

Though AI technology has barely moved beyond vertical applications, and consumers worldwide would surely admit to occasionally yelling at their Alexa devices in frustration, multiple panelists referenced the singularity, often defined as the point of no return at which artificial intelligence surpasses human intelligence, at which humans and machines merge, or both. SpaceX’s Elon Musk, who believes AI is a great threat to the world, and Google’s Larry Page, who believes it has the power to change the world for good, were referenced multiple times as examples of extreme perspectives on the technology’s future. While some believe Musk’s hesitation could stifle progress, others fear Page’s total embrace of AI goes too far.

Famed physicist Stephen Hawking shared Musk’s skepticism, writing in 2014: “If a superior alien civilisation sent us a message saying, ‘We’ll arrive in a few decades,’ would we just reply, ‘OK, call us when you get here. We’ll leave the lights on?’ Probably not—but this is more or less what is happening with AI.”

But, realistically, how pressing are concerns about the future of humanity in the age of AI? Some experts anticipate the singularity is closer than most realize; futurist Ray Kurzweil, for example, pinpoints its arrival as early as 2045. Other experts, such as Cheyer, think the singularity is still roughly 1,000 years out.

“The real fear is [if] sentience emerges. I’m [an advisor to] a company called Sentient, and I can tell you we do not know how to do sentience,” Cheyer said, describing sentience as the ability to think, reason, feel, and scheme. “We don’t even know what consciousness is. I haven’t seen any work that is leading in that path. … I think most in AI take that view, which is quite different than what the media blows up.”

Daphne Koller, chief computing officer at Calico Labs, said the singularity might not be quite as far off as 1,000 years, but agreed major advancements would need to be achieved before that point is reached. Currently, she elaborated, achievements in AI rely on very large data sets. For example, while a child can learn the word “dog” after being shown the animal three or four times, an AI system may require thousands of examples to recognize a specific animal.

“We need to have a machine learning paradigm that is able to learn from a small number of examples and also to transfer learning from domain to domain,” she said. “Both of these are very nascent.”
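Koller’s two “nascent” paradigms, few-shot learning and transfer learning, can be made concrete with a small sketch. What follows is a minimal illustration rather than anything the panelists presented, assuming Python with PyTorch and torchvision installed: a network pretrained on roughly 1.2 million ImageNet images is reused, and only a thin task-specific layer is fit to a handful of new examples.

```python
# Minimal transfer-learning sketch: reuse visual features learned on
# ImageNet and train only a small task-specific head on new examples.
import torch
import torch.nn as nn
from torchvision import models

# Load a network pretrained on ~1.2M ImageNet images.
model = models.resnet18(pretrained=True)

# Freeze the pretrained feature extractor so its weights stay fixed.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with one for a new, small task
# (e.g., "dog" vs. "not dog"). Only this layer will be trained.
num_classes = 2
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Optimize just the new head.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a small batch of new-task examples."""
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with a dummy batch of four 224x224 RGB images.
images = torch.randn(4, 3, 224, 224)
labels = torch.randint(0, num_classes, (4,))
print("loss:", train_step(images, labels))
```

Because the general visual features transfer rather than being relearned, a few dozen labeled examples can suffice where training from scratch would need many thousands, which is the gap between the child and the machine that Koller describes.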

Intelligence vs. Consciousness

Another panel directly addressed the notion of consciousness, particularly the difference between consciousness and intelligence.

David Chalmers, co-director of NYU’s Center for Mind, Brain, and Consciousness, described intelligence as “sophisticated, goal-directed behavior.” Intelligence, he added, can be measured externally, through abilities such as playing chess or navigating a new city.

Consciousness, on the other hand, is more difficult to define, but is commonly considered a subjective experience from the first-person point of view, or a stream of emotion and thought.

With regard to whether machines can be intelligent, Chalmers said AI developers have worked on the question extensively as they aim to leap from narrow AI to general AI that can achieve a range of goals. Whether machines can be conscious, he said, is a very different question, and one to which many experts answer “no.”

From left to right, Glenn Zorpette of IEEE Spectrum, David Chalmers of NYU, Christof Koch of the Allen Institute, and Susan Schneider of the University of Connecticut discuss the notion of machine consciousness.

Susan Schneider, a philosopher with the cognitive science program at the University of Connecticut, is writing a book called “Future Minds” in which she takes a practical, “wait-and-see approach” to the notion of machine consciousness.

The quest to recreate consciousness is a way of investigating whether it transcends the brain, Schneider said.

“Evolution did the first wave of mind creation,” she said. “Businesses like Google and Facebook may be doing the second. So, we should get involved. We don’t just want to leave it to economic interests.”

Many scientists are racing to “hack the human brain” and uncover the mysteries of consciousness, for reasons that include but reach far beyond AI. But, according to Koller, measuring machine intelligence against human intelligence may prove to be a false comparison.

“There is no reason to believe the fastest path to machine intelligence is replicating the path evolution led us on in terms of creating natural intelligence,” she said. “The obvious analogy is flight. Airplanes work nothing like birds. All of the attempts to replicate flight that is birdlike have failed.”

Ethical Considerations

Even in its current state, AI is raising ethical and policy concerns among myriad groups, from civil rights advocates and philosophers to lawmakers and small business owners.

Jeff Chow, vice president of product, consumer experience at TripAdvisor, said it’s important to recognize and work through bias to ensure AI tools represent the population at large. For example, facial recognition has proven to work best on lighter-skinned faces, while voice recognition is more attuned to deep voices, a reflection of development teams that are mostly white and male.

Finale Doshi-Velez, an assistant professor of computer science at Harvard University, pointed to the unintended consequences of skewed data sets. For instance, if a hospital has extensive data on a certain class of people who frequent the facility, its AI may only yield reliable results for that class of patients.
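The mechanism behind that skew is easy to demonstrate. Below is a minimal sketch in Python with NumPy and scikit-learn, using entirely synthetic “patients” invented for illustration: a classifier trained mostly on one group fits that group well but systematically misjudges the group it rarely saw.

```python
# Minimal sketch of skewed training data: two patient groups whose
# outcomes follow different patterns, with group B underrepresented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic patients: 3 features; outcome depends on a group-specific shift."""
    X = rng.normal(size=(n, 3))
    y = ((X @ np.array([1.0, -1.0, 0.5]) + shift) > 0).astype(int)
    return X, y

# Training set: 950 examples from group A, only 50 from group B.
Xa, ya = make_group(950, shift=0.0)
Xb, yb = make_group(50, shift=2.0)   # group B follows a different pattern
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.hstack([ya, yb]))

# Evaluate on balanced held-out sets: the model fits group A well
# but misjudges the group it rarely saw during training.
for name, shift in [("group A", 0.0), ("group B", 2.0)]:
    X_test, y_test = make_group(2000, shift)
    print(name, "accuracy:", round(model.score(X_test, y_test), 3))
```

Note that the model’s overall training accuracy looks acceptable because group A dominates the data; the failure only becomes visible when performance is broken out by group, which echoes Doshi-Velez’s call below to gather data on how our values are or are not being met.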

She said some of the most popular verticals for AI, including healthcare, banking, and insurance, present many ethical questions.

As a broad but important example, how should healthcare providers make decisions about people’s health with a machine? Which decisions should be tasked to machines in the first place?

“There are a set of human values that we as a society have to decide on,” Doshi-Velez said. “And those values could include an individual level of choice.”

An example of individual choice could be determining how one’s autonomous vehicle will or won’t react in certain situations.

In insurance, actuaries are already using AI to make decisions that impact lives. Financial institutions are increasingly applying algorithms to assess loan risk.

“We need to gather data about how our values are being met or not met by whatever technologies or tools we have,” Doshi-Velez said. “ … This is not something that’s happening in the future. It’s happening now. Data is affecting our lives very easily, and millions of people are impacted by the decisions of software.”

She emphasized that AI value systems should be decided upon by society “and not the engineer trying to make a decision by 5 p.m.”

Chris Jones, vice president of technology at iRobot, warned that it’s important to consider application specifics when making such determinations.

“[Regulating AI] is a very generic term,” Jones said. “If you want to talk about regulating AI and its use in automobiles … it’s going to be a very different conversation than regulating for healthcare. I’ve seen some of the comments being so generic in nature and so far-ranging that it’s not actionable.”

Nell Watson, an AI and robotics faculty member at Singularity University, which aims to prepare global leaders and organizations for the future, said there is an emerging discipline called machine ethics, which incorporates philosophy, logic, math, psychology, and social science.

“It’s about the art and the engineering of loading values into the machines or helping machines make decisions that are more in line with our hopes and expectations, and, ideally, human flourishing,” said Watson, who is also co-founder of a group called EthicsNet, which aims to create a dataset to teach machines to act in a socialized manner.

AI and the Future of Work

Another major area of AI-related ethical and policy concern is the future of work. One panel highlighted the “disconnect” between Washington, D.C., and AI technology.

Terah Lyons is executive director of the Partnership on AI, which studies and forms best practices on AI technologies and works to advance public understanding of the technology. She also worked on AI issues for the White House Office of Science and Technology Policy during the Obama administration.

“It’s become incredibly clear that AI will have outsized impacts potentially on communities that currently are not represented in the technology to the extent that they should be, and that the diversity and inclusion aspect is a real problem,” Lyons said.

From left to right, Terah Lyons of the Partnership on AI, Sarah Holland of Google, and Rep. Terri Sewell discuss the “disconnect” between AI and D.C.

Rep. Terri Sewell, a Democrat from Alabama, agreed and voiced concern for the technologically underserved. Sewell, a proponent of the Future of AI Act and the AI Jobs Act, said AI is not a high-ranking issue among her colleagues, and expressed concern that the digital divide is already vast in rural areas.

“I have parts of my district that still have dial-up [internet],” the Congresswoman said. “So rolling out broadband as part of an infrastructure strategy [is] huge. … It is imperative that we focus more on the future of work and all of these great innovations and technologies that are coming our way. … No one wants to slow down innovation. But huge swaths of my district are already behind, I can only imagine they will get further behind.”

Sarah Holland, of Google’s Washington, D.C.-based public policy team, said all stakeholders—policymakers, academia, everyday citizens, and advocates—need to weigh in to determine how AI is going to land in society, from the economic impacts to the social.

“The convening power of government should be highlighted,” Holland said. “And finding ways to help educate people that this is coming, and it’s coming fast, and finding ways we can help it work for them.”

Holland and Lyons both pointed to the power of open data to help level the AI playing field.

“AI is very asymmetric. Some companies are way ahead,” Lyons said. “Open data is a way the government can lean in hard and help equip smaller organizations with the data and tools necessary to train models the way they might need to compete. Facilitating multi-sector conversations is critical to that because without crosscutting discussion it is far more likely that large tech companies are less tuned into the needs of citizens more generally.”

Rewriting the Social Contract

According to Husain, there are two schools of thought on AI and the future of work. The first holds that AI will take away nearly all human jobs. The second holds that AI won’t be able to perform most human jobs and that the nature of work will instead evolve, just as new, previously unimagined jobs emerged when society transitioned from agrarian to industrial. But, Husain said, he sees many problems with the second camp’s thinking.

“When the Industrial Revolution hit what we had replicated was the human muscle. … After that the input of the human muscle in almost every economic endeavor diminished,” he said. “Today, what we are talking about is replicating, if not all of the generality of the power of the human mind, enough of the capabilities that would allow economic endeavors across a broad array of activities to be carried out by these replicated machine minds. And if that’s the case what else do we have? We are mind and machine.”

Husain emphasized the need for policymakers to avoid comforting platitudes.

“The point is, [will] machines just on the current trajectory without AGI … create a society with 50-70 percent unemployment unless we redo the social contract?” he said. “You’re losing time to have a proper discourse around how the social contract needs to change.”

Price said there’s room both to manage AI responsibly and to take advantage of its potential to help solve global problems such as disease and climate change.

“We have to be responsible with AI,” he concluded. “That said, I think it’s the greatest opportunity in the world.”
