Over the past few years, interest in using algorithms to build all kinds of neat website and app features has spiked. Facebook, for example, uses algorithms to serve up posts and ads to targeted users, and companies of every size rely on search, hashing, language detection, and other algorithms to power backend and user-facing processes.
But consumers are wary of algorithms’ reach, according to a new survey from Pew Research Center. Some 58 percent of Americans believe that “computer programs will always reflect the biases of their designers,” substantially more than the 40 percent who believe “it is possible for computer programs to make decisions that are free from human bias.”
Nor do Americans really like the idea of algorithms making real-world decisions. For example, some 57 percent find the concept of automated résumé screening unacceptable (wait until they hear that many companies rely on screening software to sort through job applicants); an eye-watering 67 percent similarly dislike the idea of automated video analysis of job interviews. And 68 percent hate the idea of an algorithm determining a personal finance score from many types of consumer data.
“When asked to elaborate about their worries, many feel that these programs violate people’s privacy, are unfair, or simply will not work as well as decisions made by humans,” Pew added.
However, people feel better about algorithms in the context of social media, with 75 percent reporting that it’s acceptable for social-media platforms such as Facebook to recommend events in their area; roughly 57 percent are similarly okay with social media using algorithms to recommend people they may want to know. The numbers slide into negative territory only when politics enter the picture: just 37 percent are comfortable with being targeted by political ads. (Maybe most people believe that social media doesn’t really have an effect on their “real lives,” and therefore are unconcerned about its impact.)
For app builders (and other tech pros) who are using algorithms to power key product features, the lesson here is pretty clear: As much as users appreciate those features, there’s a line beyond which they begin to find things creepy. If you’re building an app that makes real-world decisions (such as using a credit score to determine whether someone is eligible for a loan or service), assume that your users will be uncomfortable with that process, and design your UX and explanatory text accordingly.
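One way to design for that discomfort is to pair every automated decision with plain-language reasons the UX can surface to the user. The sketch below illustrates the idea; the threshold values, field names, and function are all hypothetical, not a real scoring model or any particular lender’s criteria.

```python
# Hypothetical sketch: an automated loan-eligibility check that returns
# human-readable reasons alongside the decision, so the interface can
# explain *why* an application was declined rather than just rejecting it.
# The cutoffs (640 credit score, 43% debt-to-income) are illustrative.

def score_application(credit_score: int, debt_to_income: float) -> dict:
    """Return a decision plus the plain-language reasons behind it."""
    reasons = []
    if credit_score < 640:
        reasons.append("Credit score below our minimum of 640")
    if debt_to_income > 0.43:
        reasons.append("Debt-to-income ratio above 43%")
    approved = not reasons
    return {
        "approved": approved,
        "reasons": reasons or ["All automated checks passed"],
    }

print(score_application(credit_score=610, debt_to_income=0.5))
```

Carrying the reasons through to the UI costs little and gives users the sense that the process, however automated, is at least legible.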
If your app or website uses algorithms to make recommendations, surface ads, or handle dozens of other code-driven tasks, chances are good that most users won’t notice unless the actions are either wildly off-base (and thus need to be tweaked) or too accurate for comfort (using a maps API and profile data to feed geo-targeted advertisements or notifications, for example, is the sort of thing that will freak people out). Still, use caution when building; put yourself in the shoes of a “typical user” and judge how comfortable you’d feel with this-or-that feature.
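For the “too accurate for comfort” case, a simple mitigation is to gate any feature that combines sensitive signals behind explicit opt-in. The sketch below assumes hypothetical consent flags on a user record; the class and field names are illustrative, not any real platform’s API.

```python
# Hypothetical sketch: only combine location and profile signals for
# targeting when the user has explicitly consented to both.
from dataclasses import dataclass


@dataclass
class User:
    opted_into_location_ads: bool  # explicit opt-in to geo-targeting
    shares_profile_data: bool      # allows profile data in ad targeting


def should_geo_target(user: User) -> bool:
    """Require consent on every signal before enabling the feature."""
    return user.opted_into_location_ads and user.shares_profile_data


print(should_geo_target(User(True, True)))   # both consents given
print(should_geo_target(User(True, False)))  # profile consent withheld
```

Requiring an affirmative flag per signal, rather than a single blanket setting, keeps the creepier combinations off by default.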