The Ethics of A.I. Doesn't Come Down to 'Good vs. Evil'

The Artificial Intelligence (A.I.) Brain Chip will be the dawn of a new era in human civilization.

The Brain Chip will be the end of human civilization.

These two diametrically opposite statements summarize the binary core of how we look at artificial intelligence (A.I.) and its applications: Good or bad? Beginning or ending? Truth or deceit?

Ethics in A.I. is about trying to make space for a more granular discussion that avoids these binary polar opposites. It’s about trying to understand our role, responsibility, and agency in shaping the final outcome of this narrative in our evolutionary trajectory.

This article divides the issues into five parts:

  1. What do we mean by ethics and A.I.?
  2. Our inability to anticipate the intended and unintended consequences of innovation.
  3. Our inability to understand the connections and ramifications between separate events.
  4. Our inability to standardize fairness.
  5. Our inexperience in managing platforms with billions of people.

What Do We Mean by Ethics and A.I.?

So what is this thing we call “ethics”? And what is this other thing called “A.I.”? And why is it important to even consider these things?

Let’s start with the first question: what is ethics?

This might be a gross oversimplification, but when I explain ethics to my 6- and 9-year-olds, I tell them that ethical behavior is behavior that makes sense toward building a better place, a better society, and a better world—taking into account the overall good. For example, making clean water accessible to everyone is ethical, while polluting water is unethical.

Morals, on the other hand, are about an individual’s principles regardless of the consequences for the greater good; morality is more subjective and experiential. For example, if it just feels wrong to contaminate water, that’s a moral judgment; if you refrain because you know contaminated water will harm people, that’s an ethical one. There are nuances to this; like I said, this is way oversimplified, but for the purposes of this article, it will have to do.

In terms of A.I., we can go with narrow or broad understandings. Broadly speaking, A.I. is literally intelligence created by humans. Whether it’s an airplane autopilot or a computer playing chess, if it’s man-made and can emulate intelligent behavior, it qualifies as A.I.

Nowadays, when we talk about A.I., we are talking about something more sophisticated: algorithms that are able to learn and adjust as they process data. A good example is an algorithm that infers the rules of Go by studying thousands of Go games. Such a machine is able to establish correlations between datasets, infer rules, and improve. It doesn’t merely emulate intelligent behavior; it is actually capable of becoming smarter over time.
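
To make that distinction concrete, here is a minimal, hypothetical Python sketch (all names and numbers are invented for illustration, not drawn from any real system): a fixed autopilot-style rule behaves identically forever, while a learning component refines its estimate with every data point it processes.

```python
# Hypothetical sketch (invented example): emulating intelligence with a
# fixed rule vs. actually improving as data is processed.

def autopilot_correction(altitude_error_m: float) -> float:
    """A hand-written rule: it behaves the same forever."""
    return -0.1 * altitude_error_m

class LearnedPredictor:
    """Keeps a running estimate that improves with every observation."""
    def __init__(self) -> None:
        self.estimate = 0.0
        self.count = 0

    def observe(self, value: float) -> None:
        # Incremental mean: each new data point nudges the estimate.
        self.count += 1
        self.estimate += (value - self.estimate) / self.count

predictor = LearnedPredictor()
for outcome in [3.0, 5.0, 4.0, 4.4]:
    predictor.observe(outcome)
print(predictor.estimate)  # 4.1 -- refined by data, not rewritten by a programmer
```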

How are ethics and A.I. connected, though? A.I. is a powerful tool that can reshape the way society behaves. As such, if we are not careful about how we deploy A.I. and how we structure governance over it, it could prove damaging to society.

As humans, we are great at innovating; we are also great at opening Pandora’s box and leaving it to future generations to clean up whatever mess is created. We deploy new technologies and habits well before we understand how they impact our bodies, minds, and society. It took decades of widespread smoking before we began to understand the causal link between smoking and disease. We have been using cell phones and computers for a few decades now and still don’t fully understand how these tools impact our ecosystem. And now, with A.I., we add a new item to the list of innovations with incredible power, both constructive and destructive.

Objection 1: Our Inability to Properly Assess Innovation in Advance

My first objection is not with A.I. itself, but with the fact that, as a society, we spend years and billions on R&D but minimal time and funds on understanding the ramifications of the novelty we are about to introduce. Think about major innovations: industrial machinery, planes, trains, automobiles, cigarettes, e-cigarettes, microwaves (to name a few). We are amazing at creating and learning to manage things that completely change the way we live. We are also very good at getting excited about the innovation, deploying it at scale, and only decades later realizing the unintended harm and damage we introduced.

We are also excellent at introducing A.I. initially to handle an edge case, such as allowing people who cannot walk to walk again through the brain chip. Who can say no to that? Amen, right? But once we have clearance to apply the brain chip, we can quickly extend the range of applications with little or no scrutiny. Shortly after the brain chip allows people to walk again, it could become a cognitive differentiator that further separates those who can afford it from those who cannot. This is just an example, but you can see how quickly a noble use case can turn into a not-so-noble one.

Ethics is often considered in the aftermath, when it should be homework done before we put something new in place. While we are amazing at inventing and breaking through, we are not that good at consciously architecting the values and behaviors that we believe to be virtuous, and at introducing things that take us in that direction.

This is particularly important because of how widely technology is used. Any given algorithm can dramatically change the future for millions of people. Whereas other forms of technology took time to roll out, adopt, and use at scale, with tech platforms you have billions of people connected to and by a few platforms. Consequently, any small change is instantly magnified. We lose the possibility of adjusting and course-correcting as things are deployed.

Objection 2: Our Inability to Understand the Connections and Ramifications Between Events

A.I. can cross-correlate immense datasets of entirely different natures. Think about recruitment for a minute: A.I. could, for instance, establish a correlation between birth month and job performance and conclude that candidates born in May are more likely to succeed at a given job. A.I. could make other determinations based on how your university ranks, your previous job titles, and where you live; pretty soon, getting the job (or not) becomes a predetermined event.
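
As a purely illustrative sketch (the data and screening logic below are invented for this example, not taken from any real hiring system), here is how a naive screening model can turn a historical coincidence, such as birth month, into a gatekeeping rule:

```python
# Hypothetical sketch: a toy screening model that will happily treat
# birth month as predictive if the historical data correlates it with
# success. All data below is invented.
from collections import defaultdict

# (birth_month, succeeded_on_the_job) pairs from past hires.
history = [(5, True), (5, True), (5, False), (11, False), (11, True), (2, False)]

rates = defaultdict(lambda: [0, 0])  # month -> [successes, hires]
for month, succeeded in history:
    rates[month][0] += int(succeeded)
    rates[month][1] += 1

def screen(birth_month: int, threshold: float = 0.5) -> bool:
    """Admit candidates only from months with a good historical record."""
    successes, hires = rates.get(birth_month, (0, 0))
    if hires == 0:
        return True  # no data on this month: let the candidate through
    return successes / hires >= threshold

print(screen(5))  # True: May hires happened to do well
print(screen(2))  # False: one unlucky February hire dooms every February candidate
```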

While this may not seem particularly harmful, A.I. has the potential to box people into certain lanes and to remove their ability to break out of those lanes through individual exceptions or behavior. The same candidate who looks unfit from an algorithmic perspective could have a whole range of abilities, such as willingness to succeed, intuition, or candor, that won’t be captured by the data. In the A.I. reality, that candidate could be crushed before ever having an opportunity to show those abilities.

If you play this out, we could end up living in a world where your future is drawn out the second you are born, based on your location, parents, time of year, name, and countless other variables.

Objection 3: Our Inability to Standardize Fairness

We are learning as we go. If you compare the level of knowledge and awareness we have now with what we had 50 years ago, you will easily notice that key things that are unacceptable today were non-issues back then. The gender wage gap, for instance, is unacceptable today. Think about this from an algorithmic perspective for a second. If we had built a hiring algorithm 50 years ago, that algorithm would reflect that time period’s prejudice and would offer different pay according to gender. Now that we know better, we would fix the algorithm to reflect the values of a society concerned about fairness and equality.
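
A hypothetical sketch of that point, with invented numbers: a pay algorithm fit to 50-year-old offer data faithfully reproduces the wage gap it was trained on, and "fixing" it means changing what the algorithm is allowed to condition on.

```python
# Hypothetical sketch (invented numbers): an algorithm fit to biased
# historical offers reproduces the bias it was trained on.
historical_offers = [
    ("engineer", "M", 100_000),
    ("engineer", "F", 80_000),
    ("analyst",  "M", 70_000),
    ("analyst",  "F", 56_000),
]

def offer_then(role: str, gender: str) -> float:
    """Average past offers per (role, gender): the gap, faithfully learned."""
    pays = [p for r, g, p in historical_offers if r == role and g == gender]
    return sum(pays) / len(pays)

def offer_now(role: str) -> float:
    """Average past offers per role alone: gender removed as an input."""
    pays = [p for r, _, p in historical_offers if r == role]
    return sum(pays) / len(pays)

print(offer_then("engineer", "F"))  # 80000.0 -- the learned gap
print(offer_now("engineer"))        # 90000.0 -- one crude correction
```

Even this fix is crude: bias can leak back in through proxy variables such as address or university, which is part of why fairness resists one-time standardization.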

It took us literally billions of iterations of unfair hiring to evolve our gender-equality paradigm. There is friction in each of these iterations. Some people just felt that those differences were not right and behaved differently; these people began as statistical outliers but gradually grew in numbers and, propelled by the value structure adopted by modern society, eventually became the statistical norm. We reach fairness through trial and error and continuously evolve our paradigm. It’s likely that 50 years from now, when we look back at our current practices, we will find them just as outdated, if not more so.

The challenge is that an algorithmic framework does not typically account for behavior scattered all over the spectrum. It’s standardized from inception. That does not promote experimentation and evolution with every single iteration; it does not invite feelings and intuition into the equation. A decision will never just rub an algorithm the wrong way. If the very paradigm that produces the data is biased and rigged, the results will follow suit.

Objection 4: Our Inexperience in Managing Platforms with Billions of People

We are very new at centralizing the oversight and rules around the behavior of billions of people. We evolved as tribes, with different pockets of culture emerging and changing all the time. The confluence of all these pockets of culture produced our modern civilization. With A.I., we no longer have a confluence of pockets. We have one big sack. And what’s our track record at managing one big sack?

How credible can we be when tweaking an algorithm that could single-handedly impact the lives of billions of people? Think about a large organization with hundreds of managers. Some of these managers are on the generous side, some of them are on the ruthless side. Some of them are great at what they do, and others should have never become managers. When you add the results of all of these management styles together, it’s likely that goodness and badness will cancel each other out, and you will be left with an end result that is not great… but not terrible, either.

With A.I., you no longer have all of these differences between managers. You have a single manager for the entire organization, and the results can be more extreme: great or terrible. We just don’t have an extensive enough track record to instill the right level of confidence that we are well-equipped to manage the level of power that A.I. offers.
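
The statistics behind that intuition can be sketched in a few lines (a toy simulation with invented numbers, assuming management "styles" drawn uniformly from ruthless to generous): averaging many independent managers clusters the organization near the middle, while a single manager leaves the outcome anywhere on the spectrum.

```python
# Toy simulation (invented numbers): manager style runs from -1.0
# (ruthless) to +1.0 (generous); the org outcome is the average style
# its people experience.
import random
import statistics

random.seed(42)

def org_outcome(num_managers: int) -> float:
    """Average style across independently drawn managers."""
    styles = [random.uniform(-1, 1) for _ in range(num_managers)]
    return statistics.mean(styles)

# 1,000 hypothetical organizations of each kind.
many_managers = [org_outcome(500) for _ in range(1000)]
one_manager = [org_outcome(1) for _ in range(1000)]

print(round(statistics.pstdev(many_managers), 3))  # ~0.026: "not great, not terrible"
print(round(statistics.pstdev(one_manager), 3))    # ~0.577: great or terrible
```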

Conclusion

Ethics is about shaping the future in ways that we believe to be good for the greatest number of people, in accordance with our fundamental values and principles. And shaping is about iterative improvement. With A.I., we don’t have the same level of diversity in these iterations; everything is funneled through the same fundamental rules and principles, unlike human behavior, which is naturally diverse in how it processes the world.

With A.I., we also don’t have the same amount of time to witness the results of our trials, react to them, and reshape them. Once something is deployed, the impact is ubiquitous and instant. It’s a lot harder to react to this and constructively shape it to our will.

A.I. is a shiny new toy that looks amazing, and everyone wants to play with it. But we are not quite sure how things will play out, and I seriously question our ability to shape the outcome of something that can roll out change in such profoundly powerful ways.

Gabriel Fairman is the Founder and Chief Executive Officer of Bureau Works, a Silicon Valley technology company that uses artificial intelligence for translation services for large corporations.