
AI in EDI

The Opportunities and Considerations for EDI Professionals

Artificial Intelligence (AI) is technology, such as machines, systems, and software, that replicates human intelligence. It is programmed to think and perform tasks as a human would and can imitate human cognitive functions such as problem-solving, learning, and decision-making. AI can even generate images, compose music, write articles and essays, imitate voices, and assist in medical diagnosis and procedures. Pretty impressive, right?

With numerous AI tools now readily available online for public use, many professionals are beginning to explore how AI could aid them in the workplace. The opportunities are endless, but there are also considerations to be aware of. We explore those opportunities and considerations, providing Equity, Diversity, and Inclusion (EDI) professionals with a basic understanding of what to expect when working with AI and how to balance the benefits against the risks.

Curious to learn from the source, we used ChatGPT, an AI tool designed to provide human-like responses to user requests, to ask how AI could be used to assist EDI professionals. Here is what it told us:

  1. Data analysis and insights

AI can analyse large datasets to identify patterns and trends related to diversity and inclusion within an organisation. This can help professionals understand the current state of diversity, track progress over time, and pinpoint areas that need improvement.

  2. Bias detection and mitigation

AI algorithms can be used to identify potential biases in hiring practices, performance evaluations, and other processes. By recognising and addressing biases, professionals can work towards creating a more equitable and inclusive workplace.

  3. Recruitment and talent acquisition

AI-powered tools can help reduce bias in candidate screening and recruitment processes by focusing on objective criteria and relevant skills, thereby increasing the chances of a diverse talent pool.

  4. Natural Language Processing (NLP)

NLP-powered chatbots or virtual assistants can provide employees with a safe space to ask questions and seek advice related to diversity and inclusion topics without fear of judgement or bias.

  5. Employee engagement and sentiment analysis

AI can analyse employee feedback, survey responses, and other communication data to gauge employee sentiment and identify potential issues related to diversity and inclusion (a simple sketch of what this could look like in practice follows this list).
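To make the data-analysis and sentiment-analysis opportunities above more concrete, here is a minimal sketch in Python of what AI-assisted analysis of an engagement survey might look like. The file name, column names, and keyword list are hypothetical, and the keyword flag is a deliberately simple stand-in for a full sentiment model; the aim is to surface patterns for a human to review, not to automate the judgement.

```python
# Minimal sketch: pattern-finding in hypothetical EDI survey data.
# Assumes a CSV export with columns: department, gender, comment.
import pandas as pd

df = pd.read_csv("engagement_survey.csv")  # hypothetical file name

# Representation trends: each gender's share of headcount within a department.
representation = (
    df.groupby(["department", "gender"])
      .size()
      .groupby(level="department")
      .transform(lambda counts: counts / counts.sum())
)
print(representation)

# Simple sentiment proxy: flag comments mentioning inclusion-related themes
# so a human reviewer reads them in full rather than relying on a score.
keywords = ["excluded", "overlooked", "bias", "belong"]
flagged = df[df["comment"].str.contains("|".join(keywords), case=False, na=False)]
print(f"{len(flagged)} of {len(df)} comments flagged for human review")
```

In practice, an off-the-shelf sentiment model could replace the keyword list, but the human review step remains the safeguard against the misinterpretation risks discussed later in this piece.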

“AI can be a solution to discover talent, particularly if you’re a large organisation and you don’t quite know what talent you might have sitting in front of you. It offers a chance to create much more personalised learning. It is also a very powerful solution to create more diversity in your organisation as well as inclusion.” – Barb Hyman, Founder and CEO of Sapia.ai

The opportunities generated by ChatGPT suggest that AI could be a significant time-saving tool for EDI professionals, particularly when analysing both qualitative and quantitative data, identifying EDI-related challenges in the workplace, and generating ideas and recommendations to address those challenges.

An example of an organisation using AI to its benefit is Unilever, one of the world’s largest consumer goods companies, which launched an AI-powered internal online talent marketplace called FLEX Experiences. The tool helps employees identify personalised open opportunities across the business that match their profile and aspirations. After being trialled with more than 20,000 employees across 90+ countries, it was shown to drive increased employee engagement and satisfaction.

However, when reading through the opportunities that AI presents, some possible considerations came to mind. Firstly, whilst ChatGPT suggests that AI can be used to identify and mitigate biases, AI systems themselves may be biased if the data used to train them contained biases. In a research paper exploring the risks of AI, Bender, Gebru, McMillan-Major and Shmitchell found that training data has been shown to have problematic characteristics, “resulting in models that encode stereotypical and derogatory associations along gender, race, ethnicity, and disability status”. This means that AI has the potential to reinforce or even amplify these biases, which could lead to unfair and discriminatory outcomes.

However, Barb Hyman, the founder and CEO of Sapia.ai, claims that AI combined with human perspective is an effective way to achieve fairer outcomes when making people decisions. This is because most biases are unconscious and therefore not measurable from a human perspective, whereas with AI you can actually measure the bias. Barb encourages organisations to request model cards when working with AI tools. Model cards are documents that provide information about an AI model, including transparency on the training dataset and the bias testing that has been conducted on the model. This allows organisations to understand and manage potential risks.
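The claim that bias becomes measurable with AI can be illustrated with a simple outcome audit. The sketch below is hypothetical: the candidate data and group labels are invented, while the 0.8 threshold follows the commonly used four-fifths rule of thumb. This is the kind of metric a model card’s bias-testing section might report.

```python
# Minimal sketch: auditing screening outcomes for adverse impact.
# The candidate data is invented; the 0.8 threshold follows the widely used
# "four-fifths rule", under which a group's selection rate below 80% of the
# highest group's rate warrants closer scrutiny.
import pandas as pd

candidates = pd.DataFrame({
    "group":       ["A", "A", "A", "A", "B", "B", "B", "B"],
    "screened_in": [1,   1,   1,   0,   1,   0,   0,   0],
})

selection_rates = candidates.groupby("group")["screened_in"].mean()
impact_ratios = selection_rates / selection_rates.max()

print(selection_rates)   # A: 0.75, B: 0.25
print(impact_ratios)     # A: 1.00, B: 0.33 -- below the 0.8 threshold
```

A measurement like this does not explain why the disparity exists or what to do about it; it simply makes the bias visible so that humans can investigate, which is the division of labour Barb describes.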

ChatGPT also proposes the use of AI to assist with data analysis and insights, despite the fact that AI systems can be vulnerable to hacking and other security threats. As EDI data often consists of sensitive employee demographic information, using AI to process it raises security and privacy concerns that would need to be addressed before moving forward.

The American Association for the Advancement of Science (AAAS) developed a Decision Tree for the Responsible Application of Artificial Intelligence, which assists users and organisations in deciding whether to implement AI solutions. The Decision Tree encourages consideration of the benefits and limitations of AI, as well as the legal, ethical and moral risks. One question the Decision Tree challenges users to answer is: ‘How would artificial intelligence be superior to another approach?’ It is important to remember that AI is only imitating human intelligence; it lacks empathy and an understanding of complex human thoughts and emotions. Although ChatGPT recommends using AI for analysing employee feedback because of the speed and ease with which it can be done, the risk is that comments could be misinterpreted, oversimplified or overlooked, resulting in ill-informed decision-making and unhelpful action. In this respect, AI remains inferior to human analysis of employee feedback.

“My advice would be that if you felt there was a significant benefit [of using AI], use it, but in moderation as an assistance tool rather than an end-to-end work tool. And don’t forget mass adoption of widely available AI carries significant risks. It can be useful and beneficial, but there are often pitfalls.” – Alastair Brown, Chief Technology Officer at BrightHR

Learning how to balance the opportunities and considerations of AI is imperative if you wish to utilise its many advantages in your EDI role. The AAAS Decision Tree is a useful tool that can help you set clear parameters around when you will use AI and when you will avoid it. AI tools and systems can save time and generate ideas, and, used in the right way, could support the career progression of underrepresented groups. However, AI cannot substitute for real human judgement and experience, particularly in the EDI sector, where human empathy and understanding are key to creating an inclusive workplace.