Apple just joined the Partnership on AI, a group trying to build a framework of ethics and best practices for the use of artificial intelligence. Existing partners include IBM, Facebook, Google, and Microsoft.
The tech industry’s embrace of artificial intelligence and machine learning puts Apple in a bit of an odd spot. A.I. platforms need massive amounts of data in order to become more sophisticated, and if the purpose of those platforms is customer-centric, such as predicting smartphone users’ wants and needs, that data must be specific and personal. Google’s digital assistant, for example, requires your location, search history, contacts, calendar, and other data in order to provide the most personalized service, and none of it can be encrypted.
While companies such as Google and Facebook have provided more encryption options for messaging and other kinds of data over the past few years, A.I.’s hunger for unencrypted data is the soft underbelly of the tech industry’s hardening armadillo shell of privacy and security.
Apple has tried to have it both ways by maintaining strong privacy controls for user data even as it works to make Siri, its digital assistant, much smarter. During a 2016 earnings call, Apple CEO Tim Cook took issue with the notion that a tech firm needs to strip-mine a user’s personal data in order to provide A.I.-based services. “People would like you to believe you have to give up privacy to have A.I. do something for you, but we don’t buy that,” he said. “It might take more work, it might take more thinking, but I don’t think we should throw our privacy away.”
Siri’s machine-learning functions, such as facial recognition, reportedly take place on the user’s device as opposed to the cloud. Apple also relies on tools such as hashing and noise injection to obscure individuals’ data in the datasets its researchers use to improve its A.I. platforms. With the upcoming iOS 10.3, Apple will give users the option of sending data from their iCloud account to the company in order to improve “intelligent features and services” such as Siri; that data will supposedly be handled in a “privacy preserving manner.”
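Apple has not published the details of the mechanisms described above, but the noise-injection idea can be illustrated with randomized response, a classic building block of local differential privacy: each user’s answer is perturbed before it leaves the device, so no individual report is trustworthy, yet the aggregate statistic remains recoverable. The function names and the flip probability below are illustrative assumptions, not Apple’s actual implementation.

```python
import random

def randomized_response(truth: bool, p: float = 0.75) -> bool:
    """Report the true bit with probability p; otherwise report a fair coin flip.

    Any single report is deniable (it may be noise), which is what
    protects the individual. Illustrative only: Apple's actual
    mechanism is not public.
    """
    if random.random() < p:
        return truth
    return random.random() < 0.5

def estimate_true_rate(reports: list[bool], p: float = 0.75) -> float:
    """Invert the noise: observed_rate = p * true_rate + (1 - p) * 0.5."""
    observed = sum(reports) / len(reports)
    return (observed - (1 - p) * 0.5) / p

# Simulate 100,000 users, 30% of whom have some sensitive attribute.
random.seed(42)
reports = [randomized_response(random.random() < 0.3) for _ in range(100_000)]
print(round(estimate_true_rate(reports), 2))  # close to 0.30
```

The trade-off Cook alludes to is visible here: the server never learns any one user’s true bit, but with enough users the population-level signal needed to train or tune a service survives the noise.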
Given Apple’s intense secrecy, it’s difficult to tell how effectively the company is balancing the need for privacy with A.I. development (though everyone seems to have an opinion on whether Siri is superior to rival assistants). It was only a few weeks ago that Apple’s researchers published their first paper on artificial intelligence, titled “Learning from Simulated and Unsupervised Images through Adversarial Training,” and they did so only after intense criticism from the rest of the A.I. research community for keeping their work under wraps.
In light of its penchant for opacity, Apple joining the Partnership on AI is a notable step. If the company has figured out how to keep data secure even as it’s used to boost an A.I. platform’s ability to provide services, that could greatly ease the fears of privacy and security advocates.