For Artificial Intelligence to Succeed, It Needs Empathy

The Pew Research Center, a nonpartisan think tank, once asked 979 researchers, tech pros, and policy leaders about the future of artificial intelligence (A.I.). The results weren’t exactly comforting: while 63 percent of respondents believed that A.I. will improve individual lives by 2030, some 37 percent said that the technology won’t benefit humanity. What’s needed, some of those experts claim, is machine “empathy.”

“A number of the thought leaders who participated in this canvassing said humans’ expanding reliance on technological systems will only go well if close attention is paid to how these tools, platforms and networks are engineered, distributed and updated,” read Pew’s report.

The report also quoted a number of those experts. “I strongly believe the answer depends on whether we can shift our economic systems toward prioritizing radical human improvement and staunching the trend toward human irrelevance in the face of A.I.,” said Bryan Johnson, founder and CEO of Kernel, a developer of advanced neural interfaces. “I don’t mean just jobs; I mean true, existential irrelevance, which is the end result of not prioritizing human well-being and cognition.”

Other experts had similar suspicions. “A.I. will drive a vast range of efficiency optimizations but also enable hidden discrimination and arbitrary penalization of individuals in areas like insurance, job seeking and performance assessment,” said Andrew McLaughlin, executive director of the Center for Innovative Thinking at Yale University.

Before A.I. can conquer the world (and take over a percentage of the jobs currently done by humans), it will have to overcome some big technological hurdles. For example, A.I. has no common sense: you can use machine learning to train a software platform to react to a narrow band of circumstances, but it won’t respond predictably to situations that fall outside that band, however similar they might look to a human. Just because you train your vacuum-cleaning robot not to ram into one type of obstacle doesn’t mean it’ll have the judgment to avoid obstacles it has never seen.
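To make that concrete, here is a minimal sketch of that failure mode in Python, using NumPy and scikit-learn. The “obstacle detector,” its two features, and every number in it are invented for illustration; nothing here comes from a real robot.

    # Illustrative sketch only: a hypothetical obstacle detector for a
    # robot vacuum, trained on one narrow kind of obstacle. All feature
    # names and numbers are invented assumptions for this example.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 500

    # Training distribution: the only obstacles the robot ever sees are
    # tall and narrow (think chair legs). Features: [height_cm, width_cm].
    obstacles_train = rng.normal(loc=[60.0, 5.0], scale=[8.0, 1.5], size=(n, 2))
    clear_floor = rng.normal(loc=[0.5, 0.5], scale=[0.3, 0.3], size=(n, 2))
    X_train = np.vstack([obstacles_train, clear_floor])
    y_train = np.hstack([np.ones(n), np.zeros(n)])  # 1 = obstacle, 0 = clear

    model = LogisticRegression().fit(X_train, y_train)
    print("accuracy on familiar obstacles:", model.score(X_train, y_train))

    # Deployment: a kind of obstacle the robot never trained on, low and
    # wide (think a charging cable lying on the floor).
    obstacles_new = rng.normal(loc=[2.0, 40.0], scale=[0.5, 5.0], size=(n, 2))
    print("accuracy on novel obstacles:", model.score(obstacles_new, np.ones(n)))

On the training data the detector scores close to 100 percent; on the novel obstacles it typically scores near zero, waving them through as clear floor. That gap between a narrow training set and a messier deployment is exactly the missing “common sense” described above.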

Organizations such as Google and DARPA are already working on those nuanced problems. However, the experts queried by Pew point to another, subtler issue that may prove critical to A.I. development: empathy.

Based on the collective advice of these experts, Pew recommends that tech pros “build inclusive, decentralized intelligent digital networks ‘imbued with empathy’” that will “aggressively ensure that technology meets social and ethical responsibilities.” Some level of “regulatory and certification process,” it added, “will be necessary.”

Some organizations, most notably OpenAI, are actively trying to create an ethical framework around artificial intelligence and machine learning. Google is also pushing a new program, AI for Social Good, which is meant to tackle some of the planet’s biggest challenges. Will those kinds of initiatives persuade tech pros to move with caution when it comes to this new, potentially dangerous technology? That could become the biggest existential question of all, and whoever figures out “machine empathy” and “artificial soft skills” could end up a winner in the competition to make the perfect A.I.

One Response to “For Artificial Intelligence to Succeed, It Needs Empathy”

  1. Minter Dial

    I fully agree that empathy will be a big factor with AI, although not necessarily for all aspects of AI. There are three critical considerations. First, empathy is a critical attitude to have in advance of creating an ethical framework. Secondly, the team briefing the “AI+Empathy” work will need to take into account that, even though AI may be without common sense (as you write), coders/programmers, with their very logical minds, may need extra guidance to help encode empathy. Thirdly, getting enough good data will be a monumental challenge, if not the most monumental one, and not just for the learning sets, but for the programme once it is deployed at large.
    Minter Dial (author of Heartificial Empathy)