Unveiling the Impact of Technology: Battling Bias and Embracing Diversity in the Age of AI


In a fast-paced, technology-driven world, the question arises: can technology be a force for good, helping us overcome bias and improve access? As we experience a technological revolution that empowers us, it is crucial to explore how technology can mitigate human weaknesses and promote diversity. Remote working platforms have widened our talent networks, while advances in computing have transformed employee engagement. However, biases persist, perpetuating gender, racial and age-related stereotypes. Rewiring algorithms and fostering diversity in AI development are essential to creating unbiased solutions. As recruiters, we play a vital role: we must address biases, embrace diversity, and help shape a future where technology serves as a catalyst for positive change.

A technological revolution that has empowered and enabled

With remote working platforms running smoothly and people becoming more comfortable using them, the reach of talent networks has undoubtedly widened. At Alumni, working within executive search, we see that location is now much less of a barrier to finding the right candidate. Especially for very specialist roles, where there is often a severe skills shortage, face-to-face contact is much less of a priority than securing the business-critical competence that will help future-proof the organisation. Technology has made it possible to attract a much more geographically diverse list of candidates.

In pursuit of a better work-life balance, many of the executives we speak to as potential candidates in our assignments are attracted by the prospect of remote working – something that technology has also made possible, and that is becoming a more common way to attract sought-after skills and experience. The ‘9 to 5’ cubicle model of work has become increasingly outmoded, and with this more inclusive thinking has come an expansion in the potential pool of viable candidates.

Advances in computing have also transformed the way in which we engage with our colleagues. Employee engagement across digital platforms is a necessity to ensure innovation and productivity within our organisations.

Perpetuating negative effects on cultural diversity

Part of our mission, and what drives our executive search business, is to always be on the look-out for potential new pools of diverse talent and untapped skills. In trying to overcome our human weakness of unconscious bias, one might assume that computer algorithms are above such human failings as racism, homophobia, ableism or ageism. However, when looking more closely at how to adopt and use AI within recruitment, many stumble on the fact that AI systems have been shown to exhibit these very traits.

Researchers at the University of Washington (in the paper ‘Unequal Representation and Gender Stereotypes in Image Search Results for Occupations’) found major discrepancies in the occupational images displayed in Google searches. For instance, a search for ‘CEO’ returned images of which only 11 per cent depicted women.

Another example comes from the United States, where mortgage approval algorithms have been found to be 40–80 per cent more likely to deny borrowers of colour than their white counterparts. The reason is that the algorithms were trained on data about who had received mortgages in the past and, in the US, there is a long history of discrimination in lending.
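The mechanism behind this is simple enough to demonstrate in a few lines. The sketch below uses entirely hypothetical, synthetic data (the postcodes and approval counts are invented for illustration): a toy model that simply learns the majority historical outcome per postcode will reproduce past discrimination, even though the applicant's group is never an explicit input – a correlated proxy such as postcode carries the bias forward.

```python
# Toy illustration with synthetic, hypothetical data: a model trained on
# historically biased lending decisions reproduces that bias. 'Group' is
# never an input -- the correlated proxy (postcode) does the work.
from collections import Counter

# Invented history: applicants in postcode_01 were mostly approved,
# applicants in postcode_02 were mostly denied.
history = (
    [("postcode_01", "approved")] * 80 + [("postcode_01", "denied")] * 20 +
    [("postcode_02", "approved")] * 30 + [("postcode_02", "denied")] * 70
)

def train(records):
    """'Learn' the majority historical outcome for each postcode."""
    by_code = {}
    for code, outcome in records:
        by_code.setdefault(code, Counter())[outcome] += 1
    return {code: counts.most_common(1)[0][0]
            for code, counts in by_code.items()}

model = train(history)
# Two equally qualified applicants receive different decisions
# purely because of where they live.
print(model["postcode_01"])  # approved
print(model["postcode_02"])  # denied
```

A real lending model is of course far more complex, but the failure mode is the same: optimising for agreement with past decisions means inheriting whatever discrimination those decisions contain.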

As far back as 2016, data scientist Cathy O’Neil wrote a book called ‘Weapons of Math Destruction’ about the extent of AI usage, with selection decisions being made in the fields of employment, loans, college admissions and prison sentences. Her conclusion was that, by their very structure, big data algorithms tend to amplify existing prejudice, increasing inequality in a way that is pervasive and hard to track.

A paper by researchers at Princeton University, published in the journal ‘Science’, also looked at the way in which machine learning systems might in fact be increasing instances of racial or gender bias, since they learn and interpret language from existing repositories of knowledge that may not reflect current, accepted social norms. The result is that these artificial-intelligence systems may be perpetuating historical patterns of bias that we find socially unacceptable and are actively trying to move away from.

Need to rewire the algorithms

There is a need to rewire the algorithms: we cannot fix them simply by feeding in better data, because often there is no better data. We need to reset the default settings. For instance, as highlighted by MIT graduate Joy Buolamwini, facial recognition software – present in everything from camera exposure systems to law enforcement monitoring – has been programmed with the Caucasian facial type as the default setting. This we could change. One example she brought forward was the automatic rejection of Richard Lee’s passport application ‘for having his eyes closed’ – a facial recognition error caused by software that failed to correctly process his Asian features. The increasing pervasiveness of machine learning might have a very real long-term impact on our lives.

There is an automatic (and largely unseen) level of discrimination at the software level that needs to be addressed, particularly when, as Forrester estimates[1], almost 100 per cent of organisations will be using AI by 2025 and the artificial intelligence software market will reach $37 billion by the same year. According to DataRobot’s State of AI Bias report[2], 81 per cent of business leaders want government regulation to define and prevent AI bias.

When it comes to diversity, AI has the promise of being truly agnostic of all bias, as long as its scoring mechanisms are correct. Developers go to great pains to ensure that the data they bring into their systems has no inherent bias in its distribution and that all sorts of people are adequately represented. In practice, however, this is quite hard to ensure.

One example is Chosen AI, founded in 2018 to address the reactive recruitment process that most large organisations are subject to; it attempts to replicate executive search methodology. First it maps the company’s competitive landscape, and then it looks at people. The risk of bias in this scenario is perpetuating the status quo of hires: looking at the type of people already at a company and then searching for more of the same. If the AI takes a ‘calibration client’ and uses the company that person has worked at, and the roles that person has held there, to find similar people, there is clear potential for bias.

As more and more societal functions turn to AI technology for their advancement, it is more important than ever to keep examining how AI’s underpinnings affect its functions. In a society that is already often racist and sexist, we cannot afford to have our police systems, transportation methods, translation services and more rely on technology with racism and sexism built into its foundations.

A force for good

From a business perspective, then, the technological revolution has indeed opened up more opportunities for a broader, more diverse workforce and empowered whole new sectors of society. It has, however, also created a range of tools which need to be treated with caution, if we are to ensure that we move forward as a society, rather than reinforcing old prejudices.

It is possible to reduce bias by using data from diverse workplaces with established best practices – companies that pursue diversity not just for diversity’s sake, but tie it to performance in different roles. Some bias will remain, given the nature of the existing workforce, but what we have come to understand is that algorithms can learn not just from what people are doing currently, but can also learn diversity itself.

The chances of improving diversity increase when you have a diverse team tasked with it. The challenge in the AI space is that, according to the World Economic Forum, about 78 per cent of global professionals with AI skills are male[3]. To build unbiased AI solutions, the tech sector needs a wider range of perspectives and diversity of thought.

So, we are back to recruitment. To help prevent perpetuating old ways, we as recruiters have an enormous responsibility to shoulder. We need to make sure that we do all we can to help secure a more diverse skillset in the AI space. We need to step up and help ensure that we find the best and the brightest to develop the future of AI. We need to keep up to speed with the vast array of challenges that our clients are facing, and the growing number of new roles and responsibilities that are evolving, to help them future-proof their organisations. We are up to the challenge!

References

[1] https://www.forrester.com/webinar/The%2BEvolution%2BOf%2BML%2BPlatforms%2BTo%2BAI%2BPlatforms%2BA%2BSpectrum%2BOf%2BCapabilities/WEB33085

[2] https://www.datarobot.com/newsroom/press/datarobots-state-of-ai-bias-report-reveals-81-of-technology-leaders-want-government-regulation-of-ai-bias/

[3] https://www.weforum.org/reports/reader-global-gender-gap-report-2018/in-full/assessing-gender-gaps-in-artificial-intelligence

 
 

 

Alumni

Alumni has more than 30 years’ experience in making leaders and their teams the best that they can be. Self-awareness, empathy and bias training are fundamental to avoiding groupthink, creating an inclusive culture and reaping the benefits of diversity in the workplace. If you are curious to learn more about how we advise our clients and work with organisational diversity we would love to hear from you!

 
