Wanted: Ethics for Artificial Intelligence Platforms

Now that artificial intelligence (A.I.) has evolved from the theoretical to the practical, it’s perhaps time to introduce an ethical framework for the technology.

That’s the perspective of Google subsidiary DeepMind, which has just launched a research unit for that very purpose, dubbed “DeepMind Ethics & Society.” Researchers within the unit will study real-life applications of A.I. and figure out how to put ethical considerations into practice.

“At DeepMind, we start from the premise that all AI applications should remain under meaningful human control, and be used for socially beneficial purposes,” read the company’s announcement. “Understanding what this means in practice requires rigorous scientific inquiry into the most sensitive challenges we face.” The unit will also involve cross-disciplinary researchers from the humanities, social sciences, and other fields.

People have already been exploring the values and standards of A.I. for years. For example, OpenAI, a non-profit “artificial intelligence research company,” has tasked itself with shepherding an A.I. that’s friendly to humanity. As part of that effort, OpenAI conducts research, releases papers, and offers useful tools for A.I. experts and other tech pros.

The need for some sort of ethical framework grows more pressing as A.I. systems become more powerful. For example, Google is “all in” on A.I., baking its Assistant into its latest generation of hardware products; meanwhile, machine-learning algorithms increasingly power its software platforms. That potentially affects billions of people worldwide. “Today, we overwhelmingly get it right,” Google CEO Sundar Pichai told The Verge. “But I think every single time we stumble. I feel the pain, and I think we should be held accountable.”

That’s quite a bit more moderate than the view held by Tesla CEO Elon Musk, who thinks that artificial intelligence could spark a global conflict. “China, Russia, soon all countries w strong computer science,” he tweeted in September. “Competition for AI superiority at national level most likely cause of WW3 imo.”

Whether or not a body like DeepMind comes up with ethical standards for A.I. that are adopted by the technology industry as a whole, it’s clear that this evolving technology will present a host of thorny conundrums for tech pros in the coming years. A.I. offers the tantalizing possibility of solving a lot of technology and data problems; handled incorrectly, it could also cause a lot of pain.

Comments

One Response to “Wanted: Ethics for Artificial Intelligence Platforms”

October 10, 2017 at 6:25 am, Charles Boyles said:

When ethics are explored in regard to AI, you must consider this simple premise: artificial intelligence must always benefit, and never harm, whether emotionally or physically, the persons it is serving. There has to be a clear directive in the program’s source code that states this. The number one most important thing in any application is to do no harm. This is not limited to civilian applications, either; I believe it is crucial in military applications as well, because machines should never have the intent to kill. That said, it will be interesting to find a balance in those types of applications between serving a purpose and doing the job it’s given. An autonomous drone shouldn’t be given the ability to drop bombs or use live ammunition on its own decision; instead, it needs human interaction to determine the best course of action. A civilian companion should be able to protect its user, but not at the cost of an assailant’s life. The purpose of AI is to enrich life, not end it.
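The “human interaction” requirement described in this comment amounts to a human-in-the-loop gate: an autonomous system may propose actions, but anything flagged as potentially harmful cannot execute without explicit human approval. A minimal sketch of that idea follows; every name and type here is hypothetical, purely for illustration, not from any real system.

```python
# Hypothetical human-in-the-loop gate: a "do no harm" directive that
# prevents any potentially harmful action from running autonomously.
from dataclasses import dataclass


@dataclass
class ProposedAction:
    description: str
    potentially_harmful: bool


def request_human_approval(action: ProposedAction) -> bool:
    """Block until a human operator approves or rejects the action."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: ProposedAction) -> None:
    # Harmful actions are never taken on the machine's own decision;
    # a human must sign off first.
    if action.potentially_harmful and not request_human_approval(action):
        print(f"Rejected: {action.description}")
        return
    print(f"Executing: {action.description}")


if __name__ == "__main__":
    execute(ProposedAction("adjust course", potentially_harmful=False))
    execute(ProposedAction("release payload", potentially_harmful=True))
```

In this sketch the benign action runs immediately, while the harmful one stalls until an operator answers; a real system would need far more than a console prompt, but the gating principle is the same.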
