Artificial Intelligence Doesn’t Scare Developers: Survey

When it comes to artificial intelligence (A.I.), what excites developers are some of the very things that make ordinary folks terrified of the technology.

In the latest Stack Overflow developer survey, some 40.8 percent of developers said that “increasing automation of jobs” was an “exciting” aspect of A.I. Another 23.5 percent were enthused by the prospect of algorithms making important decisions, and 23.3 percent liked the idea of A.I. eventually surpassing human intelligence.

By contrast, only 19.8 percent of developers considered the prospect of A.I. automating jobs “dangerous,” while roughly 28 percent thought algorithms making important decisions was potentially a bad idea. Another 28 percent viewed A.I. surpassing human intelligence as a danger.

“Developers are most likely to think that the creators and technologists behind the machine learning and A.I. algorithms are the ones who are ultimately most responsible for the societal issues surrounding artificial intelligence,” Stack Overflow stated in the report accompanying the survey data. “About a quarter of respondents think that a regulatory body should be primarily responsible.”

A majority of developers (72 percent) are excited about the possibilities of artificial intelligence, while only a minority (19 percent) are more worried about the dangers. Just 8.2 percent haven’t devoted much thought to what A.I. and machine learning will do to the world in coming years.

“There was not much serious worry about Skynet,” Stack Overflow added, “but many developers discussed systemic bias being built into algorithmic decision making and the danger of A.I. being used without the ability to inspect and reason about decision pathways.”

Stack Overflow’s survey results underscore the fear that A.I. tools will hit the market without a sufficient ethical framework in place. Google subsidiary DeepMind recently launched a research unit, dubbed “DeepMind Ethics & Society,” that will study real-life applications of A.I. and advise how to put ethical considerations into practice.

“At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes,” read the company’s announcement. “Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face.”

DeepMind isn’t the only organization trying to determine a set of standards for A.I. OpenAI, a non-profit artificial-intelligence research company, is trying to guide the creation of A.I. that’s friendly to humanity (whatever that means). In order to accomplish that aim, OpenAI conducts research, releases papers, and offers useful tools for A.I. experts and other tech pros.

However, these organizations can only operate in an advisory capacity when it comes to artificial intelligence; in theory, there’s nothing preventing an ambitious team from creating a powerful artificial-intelligence platform that lacks any sort of moral or ethical framework. As with so many things in technology, preventing worst-case scenarios might hinge on tech pros’ ability to self-police their work.