Ageism in Technology Hiring: Can A.I. Stop It for Good?

We’ve been hearing a lot lately about how artificial intelligence (A.I.) and machine learning might make it even harder for older technologists to land a new job. But what if these emerging technologies could be used to eliminate (or at least reduce) the impact of ageism?

If that sounds a little Pollyannaish, you’re probably right. Even so, it’s worth considering how automated tools for screening job applications could end up creating a more diverse workforce. Before we go into that, though, let’s examine the “dark side”: how automation could amplify hiring bias and wreak absolute havoc on many folks’ ability to find employment.

A few years back, Amazon experimented with automating its hiring process. Despite the best efforts of its developers, the machine-learning algorithms began favoring male applicants, according to a Reuters report. The software had been trained on 10 years’ worth of applications, the majority of which came from men; from that pattern, it inferred that male candidates were preferable, and downgraded résumés that mentioned the word “women.”

The company ended that experiment. “Amazon edited the programs to make them neutral to these particular terms,” Reuters added. “But that was no guarantee that the machines would not devise other ways of sorting candidates that could prove discriminatory.”

With machine learning and A.I., everything hinges on the dataset; it’s always a case of “garbage in, garbage out.” Even the most talented data scientists and software engineers can only do so much if the data isn’t up to snuff (e.g., too noisy or incomplete) or if a particular issue isn’t foreseen. As more and more companies automate the job-application process, that’s worth keeping in mind; otherwise, a fancy hiring platform meant to streamline candidate selection will quietly fail to consider every good candidate.
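To make that concrete, here’s a minimal sketch of how a screening model absorbs bias straight from its training data. Everything below (the features, the labels, the numbers) is synthetic and invented purely for illustration; this isn’t Amazon’s system, just the failure mode in miniature:

```python
# Toy illustration of "garbage in, garbage out" in a hiring model.
# The data is synthetic and exaggerated; real pipelines are far more
# complex, but the failure mode is the same.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5_000

# Features: years of experience, a skills score, and a binary flag for
# whether the résumé mentions "women" (e.g., "women's chess club").
experience = rng.uniform(0, 20, n)
skills = rng.uniform(0, 10, n)
mentions_women = rng.integers(0, 2, n)

# Historical labels reflect past human bias: equally qualified
# candidates whose résumés mention "women" were hired less often.
hire_score = 0.3 * experience + 0.5 * skills - 2.0 * mentions_women
hired = (hire_score + rng.normal(0, 1, n) > 4.5).astype(int)

X = np.column_stack([experience, skills, mentions_women])
model = LogisticRegression().fit(X, hired)

# The model faithfully learns the bias baked into its training data:
# the coefficient on the "mentions_women" feature comes out strongly
# negative, so the model penalizes those résumés going forward.
print(dict(zip(["experience", "skills", "mentions_women"],
               model.coef_[0].round(2))))
```

And scrubbing the obvious flag doesn’t fix this, either. As Reuters noted in Amazon’s case, the model can latch onto proxy signals elsewhere in the résumé that encode the very same bias.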

A.I. and Ageism

Ageism remains a huge problem in tech, despite the legal penalties for practicing it. In our 2018 Dice Diversity and Inclusion Survey, technologists told us that ageism is prevalent in the industry, with 29 percent saying they’d experienced or witnessed age-based discrimination in the workplace. That outpaced gender discrimination (reported by 21 percent of respondents), political-affiliation discrimination (11 percent), and bias based on sexual orientation (6 percent).

For older technologists, the standard advice is to keep your skills up-to-date, and to demonstrate some flexibility when it comes to the roles you’re willing to consider. But for many, that simply isn’t enough; they look at all the age-related lawsuits out there and believe the deck is forever stacked against them. After all, a 2019 study by Hired found that ageism within tech tends to hit once technologists are around 40 years old; for many workers, that’s an extremely scary number, but it probably doesn’t come as much of a surprise.

But what if hiring A.I. could “lean in” and target groups of job candidates otherwise overlooked? That’s an idea Frida Polli floated recently in the Harvard Business Review, writing:

“A beauty of AI is that we can design it to meet certain beneficial specifications. A movement among AI practitioners like OpenAI and the Future of Life Institute is already putting forth a set of design principles for making AI ethical and fair (i.e., beneficial to everyone). One key principle is that AI should be designed so it can be audited and the bias found in it can be removed. An AI audit should function just like the safety testing of a new car before someone drives it.”
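What might such an audit look like in practice? One long-established benchmark is the “four-fifths rule” from the EEOC’s Uniform Guidelines: if the selection rate for a protected group falls below 80 percent of the highest group’s rate, that’s a red flag for adverse impact. Here’s a minimal sketch of that check applied to a screening tool’s output; the counts and group labels are made up for illustration:

```python
# Minimal bias-audit sketch using the "four-fifths rule" (adverse
# impact ratio). All counts below are invented for illustration.
from collections import Counter

# Hypothetical screening outcomes: (age_bracket, passed_screening).
outcomes = (
    [("under_40", True)] * 380 + [("under_40", False)] * 620
    + [("40_and_over", True)] * 220 + [("40_and_over", False)] * 780
)

passed = Counter(group for group, ok in outcomes if ok)
total = Counter(group for group, _ in outcomes)
rates = {group: passed[group] / total[group] for group in total}

# Compare each group's selection rate to the highest group's rate.
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    verdict = "OK" if ratio >= 0.8 else "potential adverse impact"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {ratio:.2f} ({verdict})")
```

A production audit would go much further (intersectional groups, statistical-significance tests, hunting for proxy features), but even a ratio check this simple turns bias from an anecdote into a number that can be tracked and fixed.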

And yes, in theory, A.I. could take the whole “hiring pipeline” into account, surfacing candidates who might have gone unseen (or been quickly disregarded) in other circumstances. This is especially key as many tech companies focus more on candidates’ skills than their educational attainment, which boosts the prospects of those who are self-taught (and who may never have gone to college, for example). But in practice, the companies that build this software will need to make a deliberate effort to eliminate bias, just as their clients will need to verify that the software does, in fact, consider all candidates.

Is this achievable? Yes. The media attention over Amazon’s failed A.I. hiring experiment suggests that, if a company’s automated screening software exhibits bias against certain candidates, that company will face significant backlash in the press until the issue is fixed. The fear of blowback may push the hiring-automation industry to aggressively hunt down and eliminate bias. But nothing is perfect, and there’s every chance that “hiring A.I.” will make its share of mistakes in its infancy.

It’ll be up to humans to keep an eye on these A.I. tools and make sure they’re working correctly. Considering the older technologists who’ll rely on these systems to land the jobs they need, the stakes couldn’t be higher.