How AI Can Help You Stop Bias
It’s hardly a secret that Silicon Valley struggles to recruit diverse talent.
Big Tech remains an industry dominated largely by white and Asian men, and the symptoms of a homogenous workforce continue to make headlines. In 2017, a former Uber employee generated a public uproar when she pulled back the curtain on the sexist culture rampant throughout the tech industry. That same year, a Google employee caused controversy by publishing an internal memo arguing that men and women differ psychologically because of their biology.
Yet despite the bad press, tech companies have been slow to address the diversity issues hurting their public reputations. Quick fixes, like one-time inclusion or anti-bias training sessions, are no longer acceptable answers to Big Tech’s PR nightmares, and diversity is becoming a “must have” rather than a “nice to have” initiative.
If tech giants are serious about improving their employee demographics, they need to get to the root of their diversity problem: unconscious biases impeding the hiring process.
How Systemic Bias Hurts Big Tech’s Diversity
The majority of today’s hiring decisions are made by humans who rely on a handful of anecdotal data points to judge whether a candidate is qualified for the job. Qualities such as an applicant’s age, educational background, race and gender can sway even the most objective of recruiters, resulting in inconsistent decision-making across the board.
Regardless of how diverse the talent pool may be, bias can drive recruiters to only hire candidates who share a number of similar characteristics and to reject individuals who don’t fit the tech worker stereotype. Bias can even be found in the language used to write job descriptions. Certain phrases, like “whatever it takes,” could signal to applicants that a work-life balance is difficult to maintain, while words like “competitive” or “dominant” might deter women from applying. Research reveals that the language used in job ads contributes to the disproportionate number of male applicants within the tech industry and excludes applicants from minority groups.
In addition to hurting the demographic makeup of a workforce, a lack of diversity in hiring is bad for business because it is reflected in the products and services pushed out to consumers. Artificial-intelligence-based algorithms, for example, can reflect the homogenous groupthink of Big Tech’s predominantly white, male workforce. That is how Google Photos’ image-recognition software came to label black people as gorillas, and how an AI-powered beauty website judged white people more attractive than dark-skinned individuals.
How AI Can Be the Helping Hand
With over 50,000 positions to fill for its second headquarters (HQ2), Amazon could be an ideal candidate to introduce AI in its hiring process and diversify its workforce almost immediately. Amazon’s hiring managers have the opportunity to level the playing field with AI recruiting tools and improve the percentage of people of color within their workforce.
In theory, introducing AI to the hiring process sounds like an easy fix for the tech industry’s diversity concerns. But even AI software can develop biases when left unattended; it should be treated as a tool that helps recruiters reliably conduct fair evaluations, not as a fix in itself.
The majority of machine learning tools available to recruiters rely on being fed clean, representative data sets in order to make sound judgments. These systems are subject to “garbage in, garbage out”: if an AI tool is given incorrect or unrepresentative data, it will likely return results that are equally skewed or biased. Training AI recruiting tools to make consistent decisions is equally important. Leveraging sample and control data sets can help recruiters verify that their AI tools arrive at the intended conclusion about a candidate’s qualifications every time.
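To make “garbage in, garbage out” concrete, here is a minimal sketch using entirely fabricated data. The records and hire rates are hypothetical; the point is only that a model trained to agree with biased historical decisions will reproduce that bias.

```python
# Hypothetical illustration of "garbage in, garbage out":
# a screening model fit to biased historical decisions
# simply learns to reproduce those decisions.

# Each record: (years_experience, group, was_hired) -- fabricated data
# in which group "B" candidates were hired less often at equal experience.
history = [
    (5, "A", True), (5, "B", False),
    (3, "A", True), (3, "B", False),
    (8, "A", True), (8, "B", True),
    (2, "A", False), (2, "B", False),
]

def hire_rate(records, group):
    """Fraction of candidates in `group` who were hired."""
    outcomes = [hired for _, g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

# The historical data encodes the bias directly...
print(hire_rate(history, "A"))  # 0.75
print(hire_rate(history, "B"))  # 0.25

# ...so any model that optimizes for agreement with past decisions
# will learn to use `group` (or a proxy for it) as a predictor unless
# such features are removed and audited before training.
```

This is why cleaning and auditing training data comes before any question of model choice: the bias here lives in the labels themselves, not in the algorithm.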
Companies should also clean their data sets as much as possible and clearly define which hiring-related tasks should be left to humans and which can, and should, be automated by AI. Machine learning tools, for example, can help talent acquisition teams scan job descriptions for problematic language, or help recruiters with tedious administrative tasks like scheduling interviews and even initial screening. Tasks that require a more human touch, including evaluating a candidate’s social skills or effectiveness in specific work situations, should be assigned to an actual hiring manager. To avoid introducing bias during this step, an AI system can help detect patterns in candidate behavior; those insights can then be used to train decision-makers to uncover biases on their own.
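The job-description scan mentioned above can be sketched in a few lines. The word list and the reasons attached to each term are illustrative only, not a validated lexicon; a real tool would draw on research-backed word lists.

```python
# A minimal sketch of flagging job-description language that research
# links to gender-skewed applicant pools. FLAGGED_TERMS is an
# illustrative, hypothetical word list -- not a validated lexicon.
import re

FLAGGED_TERMS = {
    "competitive": "may deter women from applying",
    "dominant": "may deter women from applying",
    "whatever it takes": "signals poor work-life balance",
    "rockstar": "tech-stereotype jargon",
    "ninja": "tech-stereotype jargon",
}

def flag_language(job_description):
    """Return (term, reason) pairs for each flagged phrase found."""
    text = job_description.lower()
    return [(term, reason) for term, reason in FLAGGED_TERMS.items()
            if re.search(r"\b" + re.escape(term) + r"\b", text)]

ad = "We want a dominant, competitive engineer who does whatever it takes."
for term, reason in flag_language(ad):
    print(f"{term!r}: {reason}")
```

A recruiter-facing tool would surface these flags alongside suggested neutral alternatives, leaving the final wording to the human writing the ad.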
Perhaps the biggest advantage AI has over humans is that these systems can be trained empirically to ignore applicant traits that are irrelevant to the job, including traits a human evaluator might unconsciously count against a candidate. As diversity becomes a priority across every industry, Big Tech has an opportunity to demonstrate AI’s ability to positively influence tech-hiring practices and cultivate a more inclusive workforce. Ultimately, technology shouldn’t be used as a band-aid for the tech industry’s systemic diversity issues. Instead, AI-enabled systems should give tech hiring teams greater visibility into how to improve their efforts over time.
Ankit Somani is co-founder of recruitment-technology firm AllyO.