Is AI the solution to the tech diversity gap?

With only 4% of the UK tech workforce coming from a BAME background, the tech industry has a serious diversity problem, and gaps in age, disability, and gender representation compound it. Much of this stems from unconscious bias, and businesses are losing out on some of the best talent as a result. Humans receive around 11 million pieces of information every second, and while our brains process all of it, our conscious minds can only handle around 40 pieces at a time. Because of the cognitive shortcuts this forces us to take, hiring managers often make decisions without consciously thinking them through, and this is how unconscious biases form. These snap judgements are shaped by background, culture, environment, and personal experience, and they can cost candidates opportunities. Most organisations hire what they know: if a business consists predominantly of white men, for example, that is the demographic it will keep hiring.

According to BIMA’s Tech Inclusion & Diversity Report 2019, just over a third of women believe their gender has affected their career development, while 14% of people think their ethnicity has played a part. Figures like these show that a lack of workplace diversity remains a critical issue. But because diversity brings real benefits to a business, more organisations are recognising its importance and changing their policies accordingly. As leadership advisory firm Egon Zehnder notes: “Whether viewed as a business imperative, an ethical responsibility, or a fiduciary requirement, diversity and inclusion has moved to the top of the agenda.”

To hire diverse talent, though, recruiters need to adapt their hiring processes to avoid unconscious bias. But how best to do this? One fix that has been touted to businesses is artificial intelligence (AI), though many remain sceptical. Here we look at whether or not AI is a viable solution for bridging the tech diversity gap.

AI provides better hiring results for companies

When applied properly, AI can help remove unconscious bias by evaluating CVs without human preconceptions. Hiring managers dream of picking the right candidate as quickly as possible, but that becomes difficult once hundreds of people have applied for a role. AI-driven hiring helps identify what is necessary for success in a role and matches candidates against those characteristics far more quickly and efficiently, enabling a business to maximise the diversity of its talent pool.

Take LinkedIn’s machine-learning model, which provides a ranked list of candidates based on factors including similarity of work experience and skills, the job role, posting location, and the likelihood of a candidate responding. This platform has reached 660 million users and increased job advert engagement by 50% year over year.
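
To make that concrete, here is a minimal sketch of multi-factor candidate ranking in Python. It is emphatically not LinkedIn’s actual model: the weights, field names, and scores are invented for illustration. Note that no demographic attribute appears anywhere in the score.

```python
# Toy multi-factor candidate ranking (hypothetical weights and fields).
# Deliberately no name, gender, age, or ethnicity in the scoring inputs.
from dataclasses import dataclass

@dataclass
class Candidate:
    ref: str                      # anonymised reference, not a name
    skill_overlap: float          # 0..1: overlap with the role's skills
    experience_similarity: float  # 0..1: similarity of work experience
    response_likelihood: float    # 0..1: estimated chance of replying

WEIGHTS = {
    "skill_overlap": 0.5,
    "experience_similarity": 0.3,
    "response_likelihood": 0.2,
}

def score(c: Candidate) -> float:
    return (WEIGHTS["skill_overlap"] * c.skill_overlap
            + WEIGHTS["experience_similarity"] * c.experience_similarity
            + WEIGHTS["response_likelihood"] * c.response_likelihood)

pool = [Candidate("A-101", 0.9, 0.4, 0.7), Candidate("B-202", 0.6, 0.9, 0.8)]
for c in sorted(pool, key=score, reverse=True):
    print(c.ref, round(score(c), 2))
```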

Modern Hire’s AI-powered enterprise hiring platform is another example, used by Amazon, Walmart, and CVS Health to make smarter hiring decisions. The tool allows organisations to continuously improve hiring results through more data-driven, personalised experiences for candidates, recruiters, and managers. Mike Hudy, Chief Science Officer at Modern Hire, told Forbes that the company “simply designs scientific and fair hiring tools, and rigorously monitors candidate scoring (how technology-enabled interviews are evaluated and rated during the review process) to ensure bias is eliminated.”

The effectiveness of these tools is ultimately determined by the data they are fed, however: it must be job-relevant for the AI to make better decisions and reflect a diverse team. And like any other hiring tool, AI models need to be continuously audited to ensure they are accurate, fair, and free of bias; otherwise they risk running into the kinds of problems we highlight next.
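
What might such an audit look like? One common screen is the “four-fifths rule” used in US employment law: if the selection rate for any group falls below 80% of the highest group’s rate, the process is flagged for adverse impact. The sketch below uses made-up outcomes purely for illustration.

```python
# Four-fifths (80%) rule check on selection rates, with invented data.
def selection_rate(outcomes: list[int]) -> float:
    """Fraction of candidates who advanced (1 = advanced, 0 = rejected)."""
    return sum(outcomes) / len(outcomes)

# Outcomes grouped by a protected attribute (synthetic example data)
groups = {
    "group_a": [1, 0, 1, 1, 0, 1, 1, 0],
    "group_b": [0, 0, 1, 0, 0, 1, 0, 0],
}
rates = {g: selection_rate(o) for g, o in groups.items()}
impact_ratio = min(rates.values()) / max(rates.values())

print(rates)                   # {'group_a': 0.625, 'group_b': 0.25}
print(round(impact_ratio, 2))  # 0.4
if impact_ratio < 0.8:
    print("Below four-fifths: review the model and its training data")
```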

What happens when AI goes wrong?

Although AI technologies have been praised for eliminating unconscious bias, they are not foolproof. Many models are built on CV or job application data, typically collected from previous applicants and employees already hired, and that data can be inherently biased if it reflects the types of people the company currently employs. If very few women work for a business, for example, there will be little relevant information about them and the model will be skewed towards a male candidate pool. Take Amazon, whose machine-learning recruiting engine was found to penalise women. The software was not rating candidates in a gender-neutral way because it had been trained on CVs submitted to the company over the previous decade, which reflected a male-dominated field. Essentially, the system had taught itself to prefer men, and Amazon soon abandoned the venture.
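
A small synthetic experiment shows how this happens. In the sketch below, all data is invented: a standard classifier is trained on historical hiring outcomes that favoured men regardless of skill, and the fitted model ends up weighting the gender feature far more heavily than the job-relevant one, which is essentially what Amazon’s system taught itself.

```python
# Demonstration of a model inheriting bias from skewed historical data.
# Entirely synthetic; requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(size=n)            # genuinely job-relevant signal
is_male = rng.integers(0, 2, size=n)  # demographic attribute

# Historical labels: past hiring favoured men independently of skill
hired = ((0.5 * skill + 2.0 * is_male + rng.normal(size=n)) > 1.0).astype(int)

model = LogisticRegression().fit(np.column_stack([skill, is_male]), hired)

# The second coefficient (gender) dwarfs the first (skill): the model
# has "learned" to prefer men, mirroring the bias in its training data.
print(model.coef_)
```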

Meanwhile, facial recognition tools have been known to discriminate against minorities and people with disabilities. Google, for example, faced a backlash when its automated photo-tagging tool misidentified black people as gorillas; since the incident, the tech giant has removed gorillas and other primates from the tool’s lexicon. Another issue is that facial recognition often fails to recognise women: white women are misclassified as men 19% of the time, and the error rate rises to 35% for non-white women. For these applications to be bias-free, they need to be trained on a large and diverse set of facial data; without it, they may discriminate against different racial groups or against people with facial injuries. These models can be inconsistent and must be used with caution if a fair result is wanted.
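
Gaps like the 19% and 35% error rates above only become visible when accuracy is measured separately for each group rather than in aggregate. Here is a minimal sketch of that kind of disaggregated evaluation, using a handful of synthetic records.

```python
# Per-group misclassification rates from (true, predicted, group) records.
# The records are synthetic and only illustrate the evaluation pattern.
from collections import defaultdict

records = [
    ("woman", "woman", "white"), ("woman", "woman", "white"),
    ("woman", "woman", "white"), ("woman", "man", "white"),
    ("woman", "man", "non_white"), ("woman", "man", "non_white"),
    ("woman", "woman", "non_white"), ("woman", "woman", "non_white"),
]

errors, totals = defaultdict(int), defaultdict(int)
for true_label, predicted, group in records:
    totals[group] += 1
    errors[group] += int(true_label != predicted)

for group in totals:
    # An aggregate accuracy number would hide this per-group disparity
    print(group, errors[group] / totals[group])  # white: 0.25, non_white: 0.5
```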

Overall, AI can certainly help tech businesses diversify their teams. But one thing is clear: AI models are only as good as the data they are trained on and the people who create them.

Maren