AI (artificial intelligence) has become a ubiquitous feature of modern life. From virtual assistants, such as Amazon’s Alexa and Apple’s Siri, to original text and image generators, like ChatGPT or Google’s Bard, AI tools perform a wide variety of tasks designed to save us time or effort in our daily work and home life. Less obviously, they are also present in powerful image and facial recognition software in security systems, diagnostic tools in healthcare networks, and human resources management tools. In minor and major ways, AI has a direct impact on us as individuals and on our broader society. And while it has many advantages, it is also problematic in many ways. One of these problems is a gender bias against women.
While it is tempting to think of AI tools as neutral and impersonal computer code, it is important to remember that they are developed by human programmers and will therefore mirror the biases of their human creators if no safeguards are in place against those biases. Moreover, AI tools will reflect any bias in the training data, which could be generated by humans or machines.
Gender bias in AI can arise during algorithm development, the compilation of training data sets, and AI-generated decision-making. It is present in word embeddings that associate certain professions with specific genders, which can, among other things, negatively impact job hiring processes and perpetuate existing stereotypes – for example, the assumption that the word ‘doctor’ is associated only with male pronouns (he/him). The use of AI in sectors like hospitality, tourism, retail and education can affect women’s employment opportunities and workplace equality. Gender bias also appears in facial recognition software, causing potential harm to vulnerable and marginalised communities. Equally problematic, gender biases in AI can perpetuate societal gender stereotypes and inequalities. Furthermore, virtual assistants with female voices like Alexa or Siri, or feminised and hypersexualised robots like Saya or Geminoid F, designed to perform affective or caregiving tasks, reinforce traditional gender roles and tend to dehumanise and objectify women.
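The word-embedding problem mentioned above can be made concrete with a small sketch. The vectors below are hypothetical toy values chosen to mimic the bias pattern described – real embedding models learn such vectors automatically from large text corpora, which is precisely how they absorb the gendered associations present in that text. A biased model would place ‘doctor’ closer to ‘he’ than to ‘she’ in its vector space:

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-dimensional embeddings (hypothetical values for illustration only;
# real models such as word2vec use hundreds of dimensions learned from text).
emb = {
    "doctor": np.array([0.9, 0.8, 0.1]),
    "nurse":  np.array([0.9, 0.1, 0.8]),
    "he":     np.array([0.1, 0.9, 0.1]),
    "she":    np.array([0.1, 0.1, 0.9]),
}

# In a biased embedding space, 'doctor' sits closer to 'he' than to 'she',
# and 'nurse' shows the reverse pattern.
print(cosine(emb["doctor"], emb["he"]), cosine(emb["doctor"], emb["she"]))
print(cosine(emb["nurse"], emb["she"]), cosine(emb["nurse"], emb["he"]))
```

Any downstream system that ranks job applicants by similarity to a role description would then silently inherit this skew.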
Machine learning is only as good and reliable as the data sets it learns from. For example, if an AI model is trained to diagnose medical conditions based on medical images, it relies on machine learning techniques with hundreds of thousands of images as training data. Additionally, it requires guidance from expert health practitioners to ensure accurate disease identification. However, despite the abundance of example images, there may still be issues with the AI model’s diagnostic reliability. For instance, if X-ray data used for training contains only images of male patients, the AI model’s predictions may only be reliable for men, and potentially result in inaccurate diagnoses for women. It is therefore imperative that AI models be trained using diverse and inclusive datasets.
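The X-ray scenario above can be simulated with synthetic data. In this sketch a hypothetical diagnostic score shifts differently for men and women (an assumption made purely for illustration), and a simple threshold “model” is fitted on male patients only – its accuracy then drops sharply for women:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_group(n, healthy_mean, sick_mean):
    # Synthetic diagnostic score (e.g. an image-derived measurement);
    # labels: 0 = healthy, 1 = disease.
    x = np.concatenate([rng.normal(healthy_mean, 1.0, n),
                        rng.normal(sick_mean, 1.0, n)])
    y = np.concatenate([np.zeros(n), np.ones(n)])
    return x, y

# Hypothetical physiology: the same disease shifts the score differently
# for each group (values invented for this illustration).
x_men, y_men = make_group(500, healthy_mean=0.0, sick_mean=3.0)
x_women, y_women = make_group(500, healthy_mean=2.0, sick_mean=5.0)

# "Train" on men only: place the decision threshold midway between the
# male healthy and male sick class means.
threshold = (x_men[y_men == 0].mean() + x_men[y_men == 1].mean()) / 2

def accuracy(x, y, t):
    return float(((x > t).astype(float) == y).mean())

print(f"accuracy on men:   {accuracy(x_men, y_men, threshold):.2f}")
print(f"accuracy on women: {accuracy(x_women, y_women, threshold):.2f}")
```

The threshold learned from male-only data misclassifies most healthy women as sick, which is the kind of silent failure a diverse training set is meant to prevent.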
A diverse workforce in AI development will also help to prevent biased outcomes. However, historically women have been underrepresented in AI-related fields like computing, digital information technology, engineering, mathematics, and physics; and at present only an estimated 12% of AI researchers are women. The World Economic Forum predicts that achieving gender equality on a global scale will take 132 years.
AI will continue to evolve and learn from original content created by humans. Shaping a bias-free AI world will therefore require human intervention and a commitment to diversity and equality in AI development. We can help reshape AI in this regard by asking critical questions and by staying aware of issues like how gender bias in AI could influence society.
– by Christo de Kock and Tania Botha
Sources:
International Women’s Day. 2023. Gender and AI: Addressing bias in artificial intelligence [Online]. Available: https://www.internationalwomensday.com/Missions/14458/Gender-and-AI-Addressing-bias-in-artificial-intelligence [2023, August 4].
Manasi, A., Panchanadeswaran, S. & Sours, E. 2023. Addressing Gender Bias to Achieve Ethical AI [Online]. Available: https://theglobalobservatory.org/2023/03/gender-bias-ethical-artificial-intelligence/#:~:text=The%20tendency%20to%20feminize%20AI,of%20female%20names%20or%20pronouns [2023, August 4].