The Coming Debate Over A.I. and Privacy
At its annual Worldwide Developers Conference (WWDC) in San Francisco this week, Apple suggested that it was more than willing to compete toe-to-toe against Google, Amazon, and Microsoft in the artificial intelligence (A.I.) arena. As those rivals build increasingly sophisticated bots that can respond to users’ natural-language commands, Apple has opened its own digital assistant, Siri, to third-party developers. If things pan out as the company expects, Siri will rapidly add functionality, essentially becoming a butler for a growing collection of tasks. And that’s just the first of what will surely be many A.I.-related initiatives on Apple’s part.

But how can Apple reconcile its need for personal data, the fuel of an effective A.I. platform, with its rigorous stance on user privacy? During WWDC, Apple executives talked about something called “differential privacy.” As explained by Craig Federighi, the company’s senior vice president of software engineering, this approach uses “hashing, sub-sampling, and noise injection” to give researchers “crowd-sourced learning while keeping the data of individual users completely private.” In other words, Apple analyzes users’ data in aggregate, while denying its own scientists the ability to discern the patterns and information of any one individual. (A brief code sketch of the core idea appears at the end of this section.)

It’s a noble approach, but one that clever tech pros have thwarted in the past. As Wired pointed out soon after Federighi’s speech, Netflix once attempted similar anonymization techniques in its own research, only to see a group of outsiders discover individual identities by correlating datasets.

Apple also claims that some of its early uses of A.I., such as facial recognition, will take place on users’ devices, with no need to upload images or information to Apple’s servers. That differs from Google and Facebook, which need users to upload content to their respective data centers in order to perform next-generation tasks such as photo tagging or auto-recommending email replies. In the view of Apple executives, the presence of user data on a company’s servers creates a potential privacy issue, especially if that information is hacked or subpoenaed.

That’s not to say that Apple’s rivals are cavalier about personal data. According to MIT Technology Review, Google and Microsoft use a variety of techniques to disguise the personal information folded into their A.I. platforms, including homomorphic encryption, which produces encrypted results from computations performed on encrypted data.
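To make that core idea concrete, here is a minimal sketch of randomized response, one of the oldest noise-injection techniques behind local differential privacy. Apple has not published its exact algorithms, so treat this as an illustration of the general principle rather than a description of Apple’s implementation: each device perturbs its answer with a calibrated probability before reporting, and the aggregator statistically removes the noise from the crowd-level counts.

```python
import math
import random

def randomized_response(truth, epsilon=1.0):
    """Report the true bit with probability p = e^eps / (1 + e^eps);
    otherwise report the flipped bit. Any single report is deniable,
    which is what makes the mechanism differentially private."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return truth if random.random() < p else not truth

def debiased_estimate(reports, epsilon=1.0):
    """Recover the population rate from the noisy reports by inverting
    the known bias: observed = (2p - 1) * true_rate + (1 - p)."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    observed = sum(reports) / len(reports)
    return (observed - (1.0 - p)) / (2.0 * p - 1.0)

# Simulate 100,000 users, 30% of whom truly have some sensitive attribute.
random.seed(0)
truths = [random.random() < 0.30 for _ in range(100_000)]
reports = [randomized_response(t) for t in truths]

print(f"raw reported rate: {sum(reports) / len(reports):.3f}")  # pulled toward 0.5
print(f"debiased estimate: {debiased_estimate(reports):.3f}")   # close to 0.30
```

No individual report can be trusted on its own, which is precisely the point; only the aggregate signal survives the noise.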
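The homomorphic encryption mentioned above can be illustrated just as briefly. The sketch below uses textbook Paillier, an additively homomorphic scheme, with deliberately tiny hardcoded primes; it is a toy chosen for readability, not a statement about how Google or Microsoft actually deploy the technique, and it is wildly insecure at this key size. The point is the property itself: multiplying two Paillier ciphertexts yields a ciphertext of the sum of their plaintexts, so a server can combine values it cannot read.

```python
import math
import random

# Textbook Paillier with tiny hardcoded primes: illustrative only, NOT secure.
# (Requires Python 3.8+ for the pow(x, -1, m) modular inverse.)
p, q = 61, 53                       # real keys use primes hundreds of digits long
n = p * q                           # public modulus
n2 = n * n
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1), the private key
g = n + 1                           # a standard, simple choice of generator
mu = pow(lam, -1, n)                # with g = n + 1, mu is simply lam^-1 mod n

def encrypt(m):
    """E(m) = g^m * r^n mod n^2, with a fresh random r coprime to n."""
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:
            return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """D(c) = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    return ((pow(c, lam, n2) - 1) // n) * mu % n

# The additive homomorphism: multiplying ciphertexts adds plaintexts.
a, b = 42, 99
c_sum = (encrypt(a) * encrypt(b)) % n2
print(decrypt(c_sum))               # 141, computed without ever decrypting a or b
```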

The Coming Thing in A.I.

If you’re a developer or a tech pro who works with A.I. in some capacity, privacy will likely become a recurring point of debate over the next several years. That discussion will center on a few key questions:
  • How much personal data does an A.I. platform actually need?
  • What data should an A.I. platform access?
  • Can A.I. functionality take place on a device, or does data need to go to the cloud?
  • What steps can be taken to ensure data used by A.I. is secure?
The answers to these (and other) questions will help determine whether a particular A.I. effort becomes a success or a scandal-riddled failure. While Apple put a stake in the ground with its differential privacy initiative, expect the debate over A.I. and privacy to continue for quite some time.