Bias in AI creeps in faster than you might think.

Article from SDN 147

 

The future is AI.

Artificial intelligence (AI) is revolutionizing the way we live and work, from powering virtual assistants and self-driving cars to aiding medical diagnoses and detecting fraud. With the rising popularity of AI, there has been a surge of new tools such as ChatGPT and AI Builder within the Microsoft Power Platform. According to ChatGPT itself, the future of AI looks promising, but it is important that we develop and apply this technology responsibly and ethically.

What is bias in AI?
AI algorithms are trained using vast amounts of data, which means they can identify patterns and make predictions based on that data. However, if the data used to train the algorithm is biased, the algorithm may learn and replicate that bias in its decisions.

A study published in 2021 found that language models, including GPT-3 (the family of models ChatGPT was trained on), exhibit biases towards gender and race. The models produced language that reinforced stereotypes, such as associating certain occupations with a particular gender or race. 1
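To see how such associations arise, here is a deliberately minimal, hypothetical sketch (not the study's method): a naive model that simply memorizes which pronoun co-occurs most often with each occupation in a skewed training corpus will reproduce the skew in its predictions.

```python
from collections import Counter, defaultdict

# Hypothetical, deliberately skewed "training corpus": each record pairs
# an occupation with the pronoun the text used for it.
corpus = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"), ("nurse", "he"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"), ("engineer", "she"),
]

# Naive model: count co-occurrences during "training".
counts = defaultdict(Counter)
for occupation, pronoun in corpus:
    counts[occupation][pronoun] += 1

def predict_pronoun(occupation):
    # Predict the pronoun seen most often with this occupation.
    return counts[occupation].most_common(1)[0][0]

print(predict_pronoun("nurse"))     # reproduces the skew in the data
print(predict_pronoun("engineer"))  # likewise
```

Real language models are vastly more complex, but the core dynamic is the same: whatever regularities dominate the training data, biased or not, dominate the output.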

More recently, ChatGPT has also been found to favor left-wing political parties in the Netherlands. When asked questions based on the Dutch 'Stemwijzer' voting aid, ChatGPT favored left-wing parties such as SP, DENK and D66. This bias stems largely from the scientific and academic papers ChatGPT has been trained on, which are generally more likely to lean left. 2

Another example of bias in AI comes from a study published in 2019, which found that AI-powered healthcare tools can exhibit racial bias. A popular algorithm used to determine who needs extra healthcare support was less likely to flag black patients as high-risk, even when they had health conditions similar to those of white patients. 3

 

Does the problem lie within datasets?

Joy Buolamwini, featured in the Netflix documentary Coded Bias, discovered that the datasets used to train AI may not be inclusive. While developing a smart mirror at MIT, she attempted to use facial recognition but found that her face was not recognized until she put on a white mask: the AI had been trained to recognize white faces far better than the faces of people of color. Creating new, more varied datasets is difficult, as it requires people to give up their anonymity by contributing their faces to an AI database. However, biased data can be identified and corrected through techniques like debiasing and reweighting. Examining datasets for bias and correcting problematic data is crucial to ensuring AI is inclusive. 4
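Reweighting, mentioned above, has a simple core idea. The sketch below is a hypothetical illustration (the group names and counts are invented): each sample gets a weight inversely proportional to the size of its group, so that an under-represented group contributes as much to training as the dominant one.

```python
from collections import Counter

# Hypothetical, imbalanced dataset: each sample is tagged with a
# demographic group, and one group dominates, as in the face datasets
# Buolamwini examined.
samples = ["group_a"] * 80 + ["group_b"] * 20

# Reweighting: weight each sample inversely to its group's size, so
# every group carries equal total weight during training.
counts = Counter(samples)
n_groups = len(counts)
weights = {g: len(samples) / (n_groups * c) for g, c in counts.items()}

# Minority samples now count for more, majority samples for less.
print(weights)
```

Most training frameworks accept such per-sample weights directly, which makes this one of the cheaper mitigations: it changes how the data is used rather than requiring new data to be collected.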

 

Double check and check again
Auditing AI systems is crucial to ensuring they operate fairly and ethically when making important decisions that can impact individuals and society. The process involves evaluating data, algorithms, and outputs to determine whether they align with ethical and legal standards. Auditing helps identify potential biases, errors, or ethical concerns and enables stakeholders to take action to address them. It can also provide evidence in legal proceedings and improve the system. In summary, auditing AI systems is essential to ensuring that they are transparent, accountable, and responsible in their decision-making processes. 5
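One concrete audit check on a system's outputs can be sketched in a few lines. The example below is hypothetical (the decisions are made up, loosely echoing the healthcare study cited earlier): it compares how often a system flags members of each group and computes a disparate-impact ratio, where a common heuristic treats values below 0.8 (the "four-fifths rule") as a warning sign worth investigating.

```python
# Hypothetical audit log: (group, was the patient flagged as high-risk?)
decisions = [
    ("white", True), ("white", True), ("white", False), ("white", True),
    ("black", True), ("black", False), ("black", False), ("black", False),
]

def selection_rate(group):
    # Fraction of this group that the system flagged.
    flags = [flagged for g, flagged in decisions if g == group]
    return sum(flags) / len(flags)

# Disparate-impact ratio between the two groups' selection rates.
ratio = selection_rate("black") / selection_rate("white")
print(f"disparate-impact ratio = {ratio:.2f}")
```

A low ratio does not prove the system is unfair on its own, but it tells auditors exactly where to look next, which is the point of the exercise.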

Diversity in AI
Fostering diversity in AI development involves ensuring a diverse team, using inclusive data sets, conducting bias audits, providing education on diversity, partnering with diverse organizations, encouraging diversity in AI research, and increasing transparency. This is essential to prevent AI systems from perpetuating biases and stereotypes, and to ensure ethical and inclusive AI. Microsoft is one of the companies that advocates for diversity within AI with their responsible AI practices. 6

Ethical thinking is crucial thinking
To create ethical standards in AI, it is necessary to identify relevant ethical principles such as fairness, transparency, accountability, privacy, and autonomy. Guidelines should be developed based on these principles and a broad range of stakeholders should be involved in the process. A review process should be established to ensure the guidelines are updated as needed, and education and training should be provided to developers and users. To encourage compliance, incentives and sanctions may be necessary, such as recognition programs, certification schemes, or regulatory mechanisms. By following these steps, it is possible to create ethical standards that promote responsible and trustworthy AI systems that benefit society while minimizing risks and harms.

Conclusion
AI has the potential to revolutionize various aspects of life, but as with any technology, there are potential risks and challenges. One of the most significant challenges facing AI is bias, which can be introduced into algorithms through biased datasets. The consequences of such biases can be far-reaching, perpetuating stereotypes and limiting opportunities for certain groups. It is, therefore, essential to develop and apply AI technology responsibly and ethically, taking steps to foster diversity in AI development, audit AI systems for bias, and establish ethical guidelines that promote fairness, transparency, accountability, privacy, and autonomy. These guidelines should be updated regularly, and education and training should be provided to developers and users. Through such efforts, it is possible to harness the power of AI in a way that benefits society while minimizing risks and harms.

Sources

1 Li Lucy and David Bamman. 2021. Gender and Representation Bias in GPT-3 Generated Stories. In Proceedings of the Third Workshop on Narrative Understanding, pages 48–55, Virtual. Association for Computational Linguistics.
https://aclanthology.org/2021.nuse-1.5.pdf

2 ChatGPT has left-wing bias in Stemwijzer voting advice application. (2023, March 8). Leiden University. https://www.universiteitleiden.nl/en/news/2023/03/chatgpt-has-left-wing-bias-in-stemwijzer-voting-quiz

3 Ziad Obermeyer et al., Dissecting racial bias in an algorithm used to manage the health of populations. Science 366, 447–453 (2019). DOI: 10.1126/science.aax2342
https://www.science.org/doi/10.1126/science.aax2342

4 Joy Buolamwini: examining racial and gender bias in facial analysis software. (n.d.). Google Arts & Culture.
https://artsandculture.google.com/story/joy-buolamwini-examining-racial-and-gender-bias-in-facial-analysis-software-barbican-centre/BQWBaNKAVWQPJg?hl=en

5 Auditing Artificial Intelligence. (2018). ISACA. https://ec.europa.eu/futurium/en/system/files/ged/auditing-artificial-intelligence.pdf

6 The future of diversity and inclusion in tech. (2019, June 17). TechCrunch. https://techcrunch.com/2019/06/17/the-future-of-diversity-and-inclusion-in-tech/?guccounter=1&guce_referrer=aHR0cHM6Ly9oYnIub3JnLw&guce_referrer_sig=AQAAAHyfm27AAoTUnBdQGULUsHrIoI5A8DU66tf2uL_PBkOvQ7rzvKto36Xkvbwyuv7jfNj7wk756mklxPLDZUiSZKQRCmEMlb6NmcjlIReNQgscNFeAiIAXePZI5GlTq0j5V78NdmHZBh4uEj8oneGra11mngutO8irH_DxNZwvGD7L

Bio
My name is Meara Leentvaar (she/her), and I am a 27-year-old Power Platform Developer/Designer at Ordina. I have a passion for IT & inclusivity and completed a minor on how bias in AI presents itself. Here are some trivial facts about me: I love to play games (Switch, PS4, PC, you name it) and to read books and manga. I consider myself a plant and a bunny mom, which in all honesty isn't the best combination. I also have an unfounded fear of ostriches.