Breaking Down Problems in A.I. Safety


If you’re interested in working with artificial intelligence (A.I.) and machine-learning platforms, chances are good you’ve heard of OpenAI, the non-profit “artificial intelligence research company” funded by tech luminaries such as Tesla CEO Elon Musk and venture capitalist (and Gawker bestie) Peter Thiel.

OpenAI has a unique mission: develop open-source A.I. software that’s friendly to humanity (as opposed to SkyNet). Earlier this year, the organization launched OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms, which tech pros in the space may find useful as they build out their own A.I. projects.
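Gym environments expose a simple interaction loop: the agent calls `reset()` to get an initial observation, then repeatedly calls `step(action)` to receive the next observation, a reward, and a done flag. As a rough illustration (not using the Gym library itself), here is a tiny hand-rolled environment with the same interface and a deliberately trivial hard-coded policy:

```python
class TinyEnv:
    """A minimal stand-in mimicking Gym's reset/step interface (illustrative only)."""

    def __init__(self, goal=5):
        self.goal = goal  # position the agent must reach
        self.pos = 0

    def reset(self):
        self.pos = 0
        return self.pos  # initial observation

    def step(self, action):
        # action: 0 = move left, 1 = move right
        self.pos += 1 if action == 1 else -1
        done = self.pos >= self.goal
        reward = 1.0 if done else 0.0
        return self.pos, reward, done, {}  # observation, reward, done, info

env = TinyEnv()
obs, total_reward, done = env.reset(), 0.0, False
while not done:
    action = 1  # a real RL agent would *learn* a policy; we hard-code "move right"
    obs, reward, done, info = env.step(action)
    total_reward += reward
```

In real Gym code the loop is identical in shape; only the environment (`gym.make(...)`) and the policy change, which is what makes the toolkit convenient for comparing RL algorithms.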

Now comes the next OpenAI initiative: a breakdown of “concrete problems in A.I. safety.” OpenAI researchers, in conjunction with colleagues from Google Brain, have issued a paper that delineates how machine-learning systems can potentially go haywire.

“Many of the problems are not new, but the paper explores them in the context of cutting-edge systems,” explains a brief intro on OpenAI’s Website. “We hope they’ll inspire more people to work on A.I. safety research, whether at OpenAI or elsewhere.”

According to OpenAI and Google, the biggest challenges facing A.I. include:

  • Safe Exploration: Can an A.I. platform—whether software or a robot—explore its environment without destroying itself by accident?
  • Robustness to Distributional Shift: Can A.I. effectively adjust its behavior, or at least shut down in a non-destructive way, when exposed to new and unexpected types of data?
  • Avoiding Negative Side Effects: A.I. platforms will need to accomplish tasks without inadvertently harming their surroundings. For example, you can’t have a self-driving vehicle that successfully navigates from point A to point B while plowing down every pedestrian in its path.
  • Avoiding ‘Reward Hacking’: If an A.I. platform earns a “reward” for accomplishing a certain task, how can tech pros ensure that the A.I. won’t start “gaming” its environment in order to earn more rewards?
  • Scalable Oversight: How can an A.I. platform learn to complete tasks correctly when human feedback is too expensive or slow to provide at every step?

If you’re interested in artificial intelligence and machine learning, considering the above problems could help you build better platforms. Unless, of course, you’re trying to build SkyNet.

