Algorithms may be perceived as an objective way to instill diversity, equity and inclusion (DEI) in an organization, but AI is by no means exempt from the unconscious biases that human beings exhibit, and we must guard against thinking of it as a silver bullet. We already know that a more diverse workforce leads to greater innovation, but diversity in the teams designing the AI underpinning future workplace processes will also help to reduce its bias, and ensure organizations achieve their DEI goals.
23 November 2022 • 4 min read
As we navigate the turbulence of the past few years, many workplaces are grappling with how to make remote and hybrid working actually work – and in many cases, they are looking to artificial intelligence (AI) for the answers.
AI already has the ability to vastly improve the way we work from home, supporting team communication and collaboration, managing workflow and even playing a role in improving security, and its usefulness is only likely to grow over time.
One example is the important role AI is playing in streamlining HR tasks such as recruiting: shortlisting candidates from an onslaught of resumés, for instance, is extremely time-consuming for people, whereas machines can do it in a fraction of the time – and they can automate other manual recruitment tasks as well.
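To make the idea concrete, here is a deliberately simple sketch of what automated shortlisting can look like: scoring résumés against role keywords and keeping the top matches. The keywords, candidates and scoring rule are invented purely for illustration; real recruiting tools are far more sophisticated, but they inherit the same dependence on the data and rules we give them.

```python
# A minimal, hypothetical sketch of keyword-based shortlisting: score each
# plain-text resume by how many role keywords it mentions, keep the top matches.
# Real recruiting tools are far more sophisticated; this only shows the mechanic.

def score_resume(text: str, keywords: list[str]) -> int:
    """Count how many of the role keywords appear in a resume."""
    lowered = text.lower()
    return sum(1 for kw in keywords if kw.lower() in lowered)

def shortlist(resumes: dict[str, str], keywords: list[str], top_n: int = 5) -> list[str]:
    """Return the names of the top_n highest-scoring candidates."""
    ranked = sorted(resumes, key=lambda name: score_resume(resumes[name], keywords),
                    reverse=True)
    return ranked[:top_n]

# Invented example data, purely for illustration.
resumes = {
    "Candidate A": "Python developer with machine learning and SQL experience.",
    "Candidate B": "Project manager focused on agile delivery and stakeholders.",
}
print(shortlist(resumes, ["python", "machine learning", "sql"], top_n=1))
# -> ['Candidate A']
```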
Of course, the assumption in this example is that algorithms offer the advantage of being much more objective than people – and to some extent they are. In many instances, they can even help to support diversity, equity and inclusion (DEI) in organizations. But AI is by no means exempt from the unconscious biases that human beings exhibit – something we need to take into account.
What is unconscious bias? It’s a subconscious pattern of thinking that has been socialized into us – something we all have, and as the name suggests, are not aware of. Yet these biases have enormous impact – we don’t realize that all day, in multiple encounters, we are stereotyping people automatically without considering whether our assumptions about them are correct.
In fact, when those incorrect assumptions are pointed out, we may be shocked because most of us don’t believe that we are prejudiced against others. But unlike prejudices, unconscious biases aren’t intentional – we’ve just internalized them over many years, thanks to our upbringing and other forms of socialization. They might be gender or age specific, or relate to names, appearances or culture – and those are just a few examples. But whatever form they take, they can mean that we miss the opportunity to recognize people’s true potential.
So why would AI be biased too? Because it relies on the data that people feed it. And they bring their biases with them.
There are some simple examples of how this plays out. In 2015, for instance, a University of Washington study looked at the percentage of women appearing in the top 100 Google image search results for various occupations, compared with how many women actually worked in those fields.
The results were telling: in a Google image search for “CEO”, 11% of the results depicted women, whereas 27% of US CEOs are women. Similarly, 25% of the results for “author” depicted women, versus 56% in reality, and 64% of those for “telemarketer” were female, while the real split is 50:50. Google promised to fix the problem, but a 2022 update shows these discrepancies have been only partially rectified.
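Put in code, the study’s comparison is simply a gap between two percentages. The short sketch below reruns that comparison using the figures quoted above (the variable names are mine):

```python
# The study's comparison, expressed as a quick calculation using the 2015
# figures quoted above: share of women in the top image results for an
# occupation versus their real share of that workforce.

occupations = {
    # occupation: (women in top image results, women in the actual workforce)
    "CEO":          (0.11, 0.27),
    "author":       (0.25, 0.56),
    "telemarketer": (0.64, 0.50),
}

for job, (in_results, in_reality) in occupations.items():
    gap = in_results - in_reality
    print(f"{job}: {in_results:.0%} in results vs {in_reality:.0%} in reality "
          f"(gap {gap:+.0%})")
```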
Examples like this are common – from facial analysis programs that discriminate against patients with darker skin to exam-scoring algorithms that downgrade the scores of disadvantaged students, you don’t need to look far. So AI has a role to play, but it needs to be used with great care.
We need to be aware of who is controlling the narrative in our workspaces in general – but also more specifically when we are harnessing the power of AI to streamline workplace processes. This means that even when we’re trying to foster greater DEI using AI, we need to be aware of diversity – among the teams designing those processes – because it makes a difference. A study presented at the 2020 NeurIPS machine learning conference concluded that biased predictions are mostly caused by imbalanced data, but that the demographics of engineers also play a role.
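What acting on that finding can look like in practice is, at minimum, an audit of the data before anything is trained on it. The sketch below is a hypothetical illustration using invented hiring records:

```python
# A minimal sketch of the kind of data audit that finding points to: before
# training a screening model on historical hiring records, check how
# imbalanced those records are. All numbers here are invented.

from collections import Counter

# Hypothetical historical records: (group, was_hired)
records = ([("women", True)] * 20 + [("women", False)] * 180
           + [("men", True)] * 80 + [("men", False)] * 220)

totals = Counter(group for group, _ in records)
hires = Counter(group for group, hired in records if hired)

for group in totals:
    print(f"{group}: {totals[group] / len(records):.0%} of the data, "
          f"{hires[group] / totals[group]:.0%} historically hired")

# A model trained to imitate these labels will reproduce the 10% vs ~27%
# hire rates above rather than learn a fairer policy, unless the imbalance
# is addressed first.
```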
We already know that a more diverse workforce is important for greater innovation and creativity, for greater opportunities for professional growth, and for better decision-making. But diversity in the teams designing the AI underpinning future workplace processes will also help to reduce its bias, and ensure organizations achieve their DEI goals.
Unconscious bias also needs to remain a topic of conversation in organizations, so that employees are constantly made aware of it. But it needs to be discussed in a way that is open and easy – because people tend to get offended when you tell them they’re biased! This requires that we normalize it as a concept – and help employees to understand that it’s a universal human trait.
Doing so will require ongoing training and awareness-building. In our organization, for instance, which is spread across 30 countries, we have set this up globally and tie discussions to special days such as International Women’s Day, when we’ll have a keynote that employees can listen to and discuss afterwards.
We also need concerted diversity training for leaders, so that they can recognize these biases both in themselves and within their teams. When you’re dealing with a multinational company, intercultural training can be very useful, for instance. But even within a single country there is usually a variety of cultures – we all need to be sensitive to that, and to the other kinds of discrimination that may arise.
Most of us are unaware of our unconscious thinking patterns – as long as we are not affected by them. But those who bear the brunt of these biases can be severely disadvantaged when bias becomes the breeding ground for decisions that lead to discrimination or preferential treatment of employees. This will, in turn, have a detrimental effect on their performance.
AI may help to reduce levels of bias, but we must guard against thinking it’s a silver bullet, and find ways to define and measure fairness. And while AI is in the process of being refined and improved, we will still need to use our human judgment to ensure that AI-supported decision-making is as fair as possible.
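As one concrete, if simplified, example of defining and measuring fairness, the sketch below computes demographic parity: the rate at which a hypothetical AI-assisted process selects people from each group, and how those rates compare. The decisions are invented, and the 0.8 cut-off echoes the “four-fifths rule” often applied in employment settings.

```python
# One simple, hedged way to define and measure fairness: demographic parity,
# i.e. comparing the rate at which an AI-assisted process selects people
# from each group. The decisions below are invented for illustration.

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Share of positive decisions per group."""
    totals: dict[str, int] = {}
    selected: dict[str, int] = {}
    for group, picked in decisions:
        totals[group] = totals.get(group, 0) + 1
        selected[group] = selected.get(group, 0) + int(picked)
    return {g: selected[g] / totals[g] for g in totals}

# Hypothetical shortlisting outcomes: (group, was_shortlisted)
decisions = ([("group_a", True)] * 30 + [("group_a", False)] * 70
             + [("group_b", True)] * 15 + [("group_b", False)] * 85)

rates = selection_rates(decisions)
ratio = min(rates.values()) / max(rates.values())
print(rates)                          # {'group_a': 0.3, 'group_b': 0.15}
print(f"parity ratio = {ratio:.2f}")  # 0.50, well below 0.8: flag for human review
```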