Facebook FTC Agreement Could Impact Its A.I., Product Plans

After years of privacy-related controversies (and months of not-so-secret negotiations with the federal government), Facebook has announced an agreement with the Federal Trade Commission (FTC) that will slam down new regulations on how the company handles data.

This new regulatory framework "introduces more stringent processes to identify privacy risks, more documentation of those risks, and more sweeping measures to ensure that we meet these new requirements," Facebook announced in a newsroom posting. "Going forward, our approach to privacy controls will parallel our approach to financial controls, with a rigorous design process and individual certifications intended to ensure that our controls are working—and that we find and fix them when they are not."

Facebook will also pay out $5 billion in penalties, which sounds like a lot of money until you realize the company earned roughly $15 billion in the first quarter of 2019 alone.

Facebook now promises that it will have "quarterly certifications" to evaluate its privacy controls; Facebook CEO Mark Zuckerberg will need to personally sign off on these certifications. Regulations affect everything from Facebook's use of phone numbers to password encryption to facial recognition (check out this Twitter thread for a great point-by-point breakdown of the agreement).

There will also be additional oversight from Facebook's Board, the FTC, and the U.S. Justice Department. If this oversight is truly rigorous, it will impact how Facebook stores and utilizes its data; but not everyone at the FTC agrees that this settlement will work.

Increased oversight, though, could end up impacting one of Facebook's key initiatives: artificial intelligence (A.I.), including its plans for a voice-activated assistant that leverages user data in order to provide granular service.

Facebook Has Big 'Alexa Killer' Plans

Yes, Facebook wants to get into the digital assistant game (again).

Last year, Facebook announced that it was shutting down “M,” its three-year-old platform that tried to combine artificial intelligence (A.I.) with flesh-and-blood customer service reps. Users could ask “M” for pretty much anything (“Where can I get a good slice of pizza in lower Manhattan?”) and receive an answer from either an A.I. bot or a human being.

In theory, “M” could have been more powerful than digital assistants such as Siri or Alexa, which are powered entirely by code; but Facebook discovered that relying on humans to handle user queries was difficult (and perhaps impossible, in the long run) to scale, so it canceled the project.

And then, earlier this year, news emerged that Facebook wants to try again, this time with an assistant that more closely mirrors what its rivals are doing. The voice-activated platform currently under development will reportedly work across not only Facebook, but also the company’s other products, including Oculus VR headsets and Portal, its video-chat device.  

Ironically, news of Facebook’s digital-assistant ambitions leaked on the same day that the company admitted it had collected 1.5 million users’ email contacts without consent. Although Facebook hastened to add that it was deleting the inappropriately sourced information, it was yet another reminder that the company has spent the past two years mired in data-related scandals.

Even before the FTC settlement, Mark Zuckerberg promised to take the social network in a more privacy-centric direction, but it remains to be seen whether he’s capable of actually doing so, considering that the company's business model is based entirely on selling granular user data to advertisers. “Frankly we don't currently have a strong reputation for building privacy protective services, and we've historically focused on tools for more open sharing,” he stated in a note posted on Facebook in March. “But we've repeatedly shown that we can evolve to build the services that people really want, including in private messaging and stories.”

Another level of complication will come if Facebook opens up its digital assistant to third-party developers. If it does, it will follow in the footsteps of Amazon, Google, and other players that have decided a third-party ecosystem of voice-activated services is a surefire pathway to growth. In fact, one could argue that no platform can survive without massive developer buy-in—despite the fact that platforms such as Alexa have yet to produce a genuine, must-have blockbuster “app.”

But Facebook also has something of a checkered history when it comes to encouraging third-party developers to build for its platform. A few years ago, it tried to make chatbots a thing, but the resulting products were clumsy, and developers largely abandoned their attempts to build something compelling with Facebook’s tools.

Just because a large company launches a new product, SDK, or API doesn’t mean that developers should rush into its ecosystem; that should be common knowledge by this point. Facebook entering the voice-activated digital assistant market seems particularly problematic, given its past issues with privacy and security; throw in its recent fumbles with chatbots and “M,” and you have a recipe for potential failure—if Facebook even lets developers play in this particular sandbox. The new regulations bind third-party apps to the same privacy rules governing Facebook itself, and either Facebook or outside developers might balk at that.

In other words, with all of these new privacy structures in place (such as increased board oversight, mandatory reviews of its systems and products, and so on), the build-out and adoption of a digital assistant may only become harder. Will the particulars of the FTC agreement choke off Facebook's ability to leverage third-party development and data usage? Will the new privacy controls too tightly regulate what an all-encompassing voice-activated assistant can do? We'll see if this initiative even launches.