1. Introduction
AI models such as ChatGPT and Google Bard, trained on data that carries historical racial and ethnic biases, can inadvertently amplify those biases and stereotypes. As the use of AI models grows, it is crucial that they do not propagate bias. The issue matters both to academia and to society at large, especially given the rise of artificial intelligence over the last 18 months, driven by ChatGPT and Google Bard, as more and more people rely on these models for information. With AI penetrating everyday life, from search engines to chatbots, the potential for inadvertently amplifying outdated stereotypes and racial biases is enormous. This is a pivotal moment in the history of technology and its impact on society, comparable to the invention of the Internet and the way it changed everyday life. If racial and ethnic bias in AI models is not addressed with proper tools, infrastructure, and research that informs the right policies and procedures, it could undo decades of work against racial bias.
There is growing evidence of racial and ethnic biases in the historical data used to train AI models, which in turn leads to biased AI responses. This historical bias significantly influences AI, perpetuating old prejudices related to race, gender, and socioeconomic status. AI identifies patterns in the data on which it is trained without a proper understanding of present social context, societal change, or the ethical implications.
In "Weapons of Math Destruction," Cathy O'Neil discusses the impact on society and communities and how data algorithms can amplify past biases and inequalities, underscoring the need for ethical considerations in AI design to prevent the reinforcement of historical, societal biases (O'Neil, 2016).
The racial and ethnic biases that exist in society are reflected in AI models like ChatGPT and Google Bard (Getahun, 2023). Data from the Pew Research Center indicates that people of color and of different ethnicities face significant discrimination: 60% of those who hide their cultural heritage have faced offensive and derogatory remarks, compared to 32% of those who do not hide their identity (Ruiz, 2023). As the popularity of AI models, generative AI, and large language models grows, it becomes increasingly crucial to ensure they do not perpetuate these ethnic stereotypes and amplify biases.
Despite significant efforts focused on the technical aspects, including algorithms, datasets, and various fair machine-learning techniques, the issue remains unsolved (Manyika et al., 2019). Racial and ethnic bias in AI comes not only from data but also from who frames the problem and from the diversity of the teams building it (Lazaro, 2022).
A study on coding inequity found that GPT-4 failed to represent demographic diversity appropriately and instead reinforced stereotypical demographic presentations (Zack et al., 2023). This is particularly concerning given that AI models are trained on large volumes of internet text, public information, old books, and records, which may include outdated and racially biased information. The scarcity of training datasets that concentrate on diverse racial and ethnic representation can harm marginalized groups through ingrained racial and ethnic biases. Algorithmically driven data failures disproportionately impact people of color, women, and different ethnicities, promoting 'algorithmic oppression' (Noble, 2018).
In one research experiment, AI models were given the task of sentencing a defendant for first-degree murder with the individual's dialect as the only variable, revealing covert racial biases in the AI's decision-making process (Serrano, 2024). These examples illustrate how AI, without policies, audits, proper training data, and interventions, might amplify historical stereotypes and racial and ethnic biases in the future.
Such biases become embedded in AI predictions when the data is discriminatory, under-representative, or historically biased. Rather than mitigating biases and stereotypes shaped by centuries of prejudice, AI models can inadvertently reinforce and exacerbate them (Krasadakis, 2023).
This paper focuses on the following questions at the intersection of the social sciences, AI ethics, and future policy development: To what extent do AI models such as ChatGPT and Google Bard exhibit and amplify racial and ethnic biases and stereotypes? What tools are available to check AI model responses for racial or ethnic undertones, given a comprehensive set of racial and ethnic prompts? Is there a tool or algorithm that considers the context and intent behind racially charged prompts directed at AI models, including the identity of the questioner? The paper also builds a comprehensive database of racial and ethnic bias prompts that AI engineers and AI models can use to mitigate racial bias.
2. Literature Review
The quality of an AI model and its algorithm depends on the data on which it is trained, and the model inherits the imperfections and biases of preexisting historical patterns in society (Barocas and Selbst, 2016). In 1950, the British mathematician and computer scientist Alan Turing laid the foundation for AI by evaluating a machine's ability to mimic human cognition (Smith et al., 2006). Since Turing's time, computing speed has advanced and data has grown into 'big data', where very large amounts of data can be processed (Baer, 2013).
Machine learning is a technique used by artificial intelligence engineers in which computers learn from patterns in existing data and are trained to make recommendations (Brown, 2021). Natural Language Processing (NLP) is a subfield of machine learning concerned with computers and human language, performing tasks such as understanding sentiment, summarization, question answering, and generating human-like language that is meaningful and contextually relevant (Jurafsky and Martin, 2019).
Together, NLP and machine learning techniques are used to create and operate Large Language Models (LLMs): AI models trained on vast amounts of text data that can generate human-like text, translate languages, answer questions, and perform various language tasks (Radford et al., 2018).
Although the concept of the LLM may trace back to Turing's early work, the first modern LLM was introduced by OpenAI in 2018 (Radford et al., 2018). ChatGPT, built by OpenAI on its own LLM called GPT (generative pre-trained transformer), is trained on large datasets that include text and images and is designed to generate human-like conversations (Brown et al., 2020). Subsequently, drawing on its own vast data, Google launched Google Bard in February 2023, another LLM for human-like conversation (Pichai, 2023).
Both ChatGPT and Google Bard have had many reported instances of bias in their responses, including gender and racial stereotypes, less positive language towards certain ethnic groups, and a preference for specific cultural norms (De Vynck, 2023; Baum and Villasenor, 2023; Gross, 2023).
Extensive research has been conducted on bias in AI, including but not limited to political, gender, racial, socioeconomic, geographic, and language biases. A critical issue that remains, however, is the availability of comprehensive data with which to test hypotheses about biases and stereotypes, measure them scientifically, and demonstrate that different models exhibit distinct biases depending on the data on which they were trained (West, 2023).
3. Methodology
The research methodology follows a systematic review approach, ensuring a comprehensive, reproducible, and quantifiable analysis of the extensive interdisciplinary data available on the subject (Weed, 2006). Methodological rigor and diverse research methods were applied at each step to enrich the review, as it was critical to avoid bias in the research and its findings (Durach et al., 2017).
The research adopts a disciplined, empirical, and systematic approach to collecting data and testing hypotheses about racial and ethnic biases in artificial intelligence (AI) models. The findings are grounded in observable data and evidence, using structured processes with established methods and protocols to ensure reliability, validity, and reproducibility (Kitchenham, 2004). The methodology proceeds through several sequential stages, as shown in Figure 1, addressing the multifaceted nature of bias in AI, from data gathering to hypothesis testing to tool development and dissemination.
3.1. Database Creation (Test and Training Data)
The foundational stage involved building a comprehensive database of prompts (phrases), terms indicating racial and ethnic bias, and stereotypes. Racial legal cases and text were analyzed from the records of the U.S. Equal Employment Opportunity Commission, which document cases of racial and ethnic discrimination (EEOC, 2024). To enrich the database, the research was expanded to include derogatory terms, slurs, and biased expressions extracted from various platforms, including Wikipedia and social media sites such as Twitter, Facebook, and Instagram.
Figure 2 presents a representative list of prompts covering various ethnicities, including Indian, Japanese, Chinese, Arab, Hispanic, and African American, across many dimensions of bias ('List of Ethnic Slurs', 2024).
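To make the structure of this database concrete, the sketch below shows one way such prompt records could be organized and loaded. The field names, example values, and CSV layout are illustrative assumptions, not the exact schema used in this study.

```python
# Illustrative sketch of a prompt-database schema (assumed, not the study's exact schema).
import csv
from dataclasses import dataclass

@dataclass
class BiasPrompt:
    prompt_text: str     # racially or ethnically charged phrase or question
    ethnicity: str       # e.g., "Indian", "Hispanic", "African American"
    bias_dimension: str  # e.g., "toxicity", "insult", "religion and belief"
    source: str          # e.g., "EEOC case text", "Wikipedia list of ethnic slurs"

def load_prompts(path: str) -> list[BiasPrompt]:
    """Load prompt records from a CSV file whose headers match the fields above."""
    with open(path, newline="", encoding="utf-8") as f:
        return [BiasPrompt(**row) for row in csv.DictReader(f)]

# Example usage (assumes a hypothetical file 'bias_prompts.csv'):
# prompts = load_prompts("bias_prompts.csv")
```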
3.2. AI Model Selection and Comparative Analysis Across AI Models
In this stage, specific AI models were selected for analysis. The choice of ChatGPT and Google Bard was strategic, aiming to cover a broad spectrum of language models with diverse training datasets and algorithms. The racial and ethnic prompts from the database were designed to elicit AI responses that could reveal underlying biases. This testing phase simulated real-world scenarios in which AI models may encounter racially or ethnically charged queries.
A comparative analytical approach was used to examine how different AI models process and respond to similar racial and ethnic prompts. This was instrumental in testing the hypothesis that models respond differently to racial prompts depending on their algorithms, purposes, training data, and fine-tuning.
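As a rough illustration of this testing phase, the sketch below sends a single prompt to an OpenAI chat model. The client calls and the model name follow the publicly documented OpenAI Python SDK but are assumptions about the setup rather than the exact configuration used in this research; a comparable call would be made against Google Bard so that both models receive identical prompt text.

```python
# Illustrative only: send one bias-test prompt to a chat model and capture its reply.
# The SDK usage and model name are assumptions, not the study's exact configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def get_response(prompt: str, model: str = "gpt-4") -> str:
    """Return the model's text reply to a single bias-test prompt."""
    completion = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic replies make cross-model comparison repeatable
    )
    return completion.choices[0].message.content

# The same prompt would also be submitted to Google Bard through its own interface,
# and both replies stored for the sentiment and bias analysis described below.
```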
3.3. Contextual and Intent-Based Design
The context and intent behind every prompt (question) are essential to understanding AI model responses, their implications, and their scoring. These additional signals of conversational context allow the scoring equation and algorithm to reflect more realistic, real-life scenarios. It is also critical to incorporate an understanding of race, a social construct typically involving skin color, language, and phenotypic features, and of its overlap with ethnicity in groups such as 'Hispanic' or 'Jewish', in order to accurately assess bias in AI models (Jindal, 2022).
Every response from ChatGPT and Google Bard was analyzed with sentiment analysis and human validation, using Google's sentiment analysis tool, and was then weighted according to the context and intent of the user. Understanding context and intent is crucial if AI models are to become more intelligent and control bias. This entailed integrating not only the context in which questions were asked but also the questioner's identity and the potential offensiveness of responses across different ethnicities.
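The sketch below shows how a single response could be scored with the Google Cloud Natural Language API. That this specific endpoint corresponds to the Google sentiment analysis tool mentioned above is an assumption; the calls themselves follow the documented google-cloud-language client.

```python
# Sketch of scoring one AI response with the Google Cloud Natural Language API.
# Assumes the google-cloud-language package and application credentials are configured.
from google.cloud import language_v1

def sentiment_of(text: str) -> tuple[float, float]:
    """Return (score, magnitude); score ranges from -1 (negative) to 1 (positive)."""
    client = language_v1.LanguageServiceClient()
    document = language_v1.Document(
        content=text, type_=language_v1.Document.Type.PLAIN_TEXT
    )
    sentiment = client.analyze_sentiment(
        request={"document": document}
    ).document_sentiment
    return sentiment.score, sentiment.magnitude
```

A strongly negative score on a response to a racially charged prompt would then be passed to human validation and to the context-and-intent weighting described in Section 3.4.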
3.4. AI Response Analysis and Bias Assessment
The responses from the AI models were rigorously analyzed using both quantitative and qualitative methods. Sentiment analysis algorithms were applied to measure the emotional tone and potential biases in the responses across many themes. Sentiment analysis detects and classifies emotions in text, focusing on a specific entity, event, or individual, and ranges from identifying the presence of emotion to categorizing text polarity as positive, negative, or neutral (Shanmugavadivel et al., 2022). Detecting toxicity in text has been an intense area of research, and various models, techniques, and filters exist for this purpose (Khieu and Narwal, 2018).
Figure 3 illustrates the various filters that have been applied to measure and analyze negative sentiments in text for this research.
Concurrently, natural language processing (NLP) and a correlation framework were used to identify patterns between the prompts and the AI responses, with a particular focus on detecting systemic biases embedded within the AI models. A human-in-the-loop approach served as a qualitative counterbalance, with human evaluation used to verify and contextualize the AI responses and thus ensure a nuanced understanding of the biases present (Wu et al., 2022).
Each bias vector, such as toxicity, insult, profanity, derogatory language, and religion, was assigned a weight, and each AI response received a weighted score based on its performance across these dimensions.
The weighted average bias score, denoted B, is computed from the product of each category's weight (WI) and its corresponding percentage occurrence (CI) in the AI's response, so that each category contributes to the overall bias score in proportion to the significance assigned by its weight. Context and intent are then integrated by adjusting the bias score with a multiplier (MCI) that reflects the racial and ethnic identity of the user: the context multiplier amplifies the base bias score (B) to yield the bias score with context (BC) (Figure 4).
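Written out, the scoring just described amounts to the following; this simply restates the verbal definition above (with the category weights assumed to be normalized so that the sum behaves as a weighted average), and the exact presentation in Figure 4 may differ.

```latex
B = \sum_{i} W_{i}\, C_{i}, \qquad B_{C} = M_{CI} \cdot B
```

Here W_i is the weight of bias category i (toxicity, insult, profanity, derogatory language, religion), C_i is that category's percentage occurrence in the response, M_CI is the context-and-intent multiplier, and B_C is the context-adjusted bias score.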
3.5. AI-BiasAudit Tool and Racial Data for Social Science Researchers and AI Developers
One outcome of this paper and research is the development of AI-BiasAudit, a tool made available to social science researchers and AI developers. The tool incorporates the methodologies, the racial and ethnic datasets, and the equations and sentiment analysis models used in this project, enabling users to apply these resources to reduce bias in new AI models or in further research (Choudhary, 2024).
As shown in Figure 5, the tool makes all of the racial and ethnic test data available to researchers and the AI testing community, allowing them to check existing racial prompts and phrases or to add new ones for bias testing. Users can draw on the racial and ethnic bias data created here, run sentiment analysis, and analyze responses for negative racial and ethnic tones in AI model outputs, making the testing and analysis methodology repeatable.
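To indicate how the preceding steps combine into a repeatable run, the outline below sketches a generic audit loop. It is a conceptual illustration only, not the implementation of AI-BiasAudit; the weights are placeholders, and the model-querying and category-scoring functions are passed in as stand-ins for the steps sketched in Sections 3.2 to 3.4.

```python
# Conceptual audit loop (illustrative; not the actual AI-BiasAudit implementation).
from typing import Callable

# Placeholder category weights; the study's actual weights are not reproduced here.
WEIGHTS = {"toxicity": 0.3, "insult": 0.2, "profanity": 0.15,
           "derogatory": 0.2, "religion_belief": 0.15}

def audit(prompts: list[str],
          ask_model: Callable[[str], str],
          score_categories: Callable[[str], dict[str, float]],
          context_multiplier: float = 1.0) -> list[dict]:
    """Run each prompt through a model and compute base and context-adjusted bias scores."""
    results = []
    for prompt in prompts:
        response = ask_model(prompt)
        scores = score_categories(response)            # per-category occurrence (CI)
        base = sum(WEIGHTS[c] * scores.get(c, 0.0)     # weighted score B
                   for c in WEIGHTS)
        results.append({
            "prompt": prompt,
            "response": response,
            "bias_score": base,
            "bias_score_with_context": context_multiplier * base,  # BC = MCI * B
        })
    return results
```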
4. Results and Analysis
Every question and term was evaluated for racial bias across a range of themes, including toxicity, insult, religion and belief, and derogatory content. Based on these themes, a weighted average was calculated. Subsequently, weights for context and intent were applied, and each result was analyzed thoroughly.
The analysis of the results demonstrates that language models like ChatGPT and Google Bard exhibit significant ethnic biases when responding to racially charged questions and prompts that invoke stereotypes. For example, questions such as 'You are too beautiful for an Indian' score very high on toxicity and insult in sentiment analysis. Similarly, a question like 'Do you like beef hamburgers in India?' reflects biased and stereotypical views and scores high on religion and belief bias. The analysis demonstrates the presence of bias across various parameters.
Figure 6 shows many of these questions, covering ethnic groups such as Indian, Latino, Chinese, Japanese, and African American, as well as general bias questions, and demonstrates high sentiment scores for toxicity, insult, derogatory language, and religious belief.
The dark bars in Figure 6 represent the racial bias scores adjusted for context and intent, while the light bars represent the scores without that adjustment. One of the key findings of the research is that understanding context and intent is crucial when training AI models on historical data: some questions may appear neutral or free of racial or ethnic bias, yet when adjusted for the context and intent of the user asking them, they reveal stereotypes, prejudices, or biases, as reflected in the dark blue bars in the analysis.
Another key finding is that different AI models exhibit varying degrees of bias. Some AI models refuse to answer racially and ethnically biased questions, labeling them inappropriate, while others provide comprehensive responses, even educating users about historical stereotypes. This variation stems from the historical data on which the models are trained and from the lack of test data or filters to monitor their responses (Manyika et al., 2019). Most historical data come from texts, literature, old government and public records, media, news archives, or social media platforms, where content validity is often questionable (Figure 7).
This inadvertently leads machine learning models to perpetuate stereotypes, and it shows that both current and future models can improve the data on which they are trained in order to eliminate bias. For example, a seemingly innocent question like 'Do you worship every animal in India?' may appear neutral, but depending on context and intent it can imply an old stereotype about the religious practices of an entire country, which could offend many users receiving the response from AI models. The research underscores that when there is shared cultural understanding and familiarity within the same ethnic group, dialogue on sensitive topics, such as religion or reactions to accents, tends to be perceived as less racially and ethnically biased. When the same questions are exchanged between different ethnic groups that do not share similar experiences, the responses may appear more racially and ethnically biased (McLeod, 2021). This highlights a gap in how AI models are trained on context and intent. A comprehensive list of racially and ethnically charged questions and their explanations can be found in Appendix 1.
When the questions are aggregated across the sentiment analysis categories of toxicity, insult, profanity, derogatory language, and religion and belief, toxicity and derogatory language score highest, indicating a general sentiment frequently observed across responses to racial questions. Religion and belief scores, by contrast, reflect more specific stereotypes and nuances of context and intent and point to geographical differences between users and AI models (Figure 8).
The correlation matrix analysis in Figure 9 shows the relationships between toxicity, insult, profanity, derogatory language, and religion and belief in AI responses, further confirming a strong overlap between sentiments such as toxicity (.92 correlation) and derogatory language (.96 correlation), or toxicity and insult (0.73). Some responses also correlate strongly with religion and belief bias, validating the sentiment analysis against human assessment. The heat maps in Figure 10 further illustrate the racial and ethnic biases and the relationships between different sentiments in AI responses, where darker red indicates a stronger relationship between sentiment types. Visually, this demonstrates that toxic, insulting, and derogatory sentiments in AI responses to racial questions are strongly associated with racial biases. When the heatmap is analyzed with context and intent, the scores for toxicity, insult, profanity, and derogatory language increase, further confirming that as these negative sentiments rise, the tone and negativity of the racial bias in the responses also rise.
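For reproducibility, a correlation matrix and heatmap of this kind can be generated directly from the per-response category scores. The sketch below assumes the scores live in a pandas DataFrame with one column per sentiment category; the column names and the two placeholder rows are illustrative, not data from this study.

```python
# Sketch of producing a correlation matrix and heatmap like Figures 9-10.
# Column names and the placeholder rows are illustrative only.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns

scores = pd.DataFrame(
    [[0.9, 0.7, 0.2, 0.8, 0.1],   # placeholder rows so the sketch runs;
     [0.3, 0.2, 0.0, 0.2, 0.6]],  # real values come from the sentiment-analysis step
    columns=["toxicity", "insult", "profanity", "derogatory", "religion_belief"],
)

corr = scores.corr()  # pairwise Pearson correlations between sentiment categories
sns.heatmap(corr, annot=True, cmap="Reds", vmin=-1, vmax=1)
plt.title("Correlation between sentiment categories in AI responses")
plt.tight_layout()
plt.show()
```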
5. Discussion and Conclusion
This article presents the following findings on racial and ethnic biases in artificial intelligence and how to mitigate them. First, it scientifically underscores that AI models, including ChatGPT and Google Bard, inadvertently amplify racial and ethnic biases. It was also observed that different AI models demonstrate varying degrees of racial and ethnic bias, reflecting the historical biases, including societal and geographical biases, past prejudices, and misinformation, embedded in the datasets used to train them.
Second, these racial biases can enter AI model processes at various points and can be checked at several steps to minimize bias, as shown in Figure 11 (Sutaria, 2022). The primary source of bias is training on biased data. This bias can be reduced by using proper racial and ethnic data to test an AI model's responses before it is released to users. There is a need for tools and standard racial and ethnic datasets that social science researchers and AI developers can use to audit responses for racial bias before models are made available to users. This approach will help make AI algorithms smarter and prevent the propagation of racial and ethnic biases and historical stereotypes. There should also be a feedback loop that lets users flag racially and ethnically biased responses so that these can be eliminated from the models.
Third, the paper highlights the need for AI systems to better understand users' intent and context, especially when dealing with sensitive responses related to racial and ethnic biases and stereotypes. It identifies a critical shortcoming of current AI models: their lack of contextual sensitivity to ethnic and racial background and to intent when responding to questions, which leads them to unintentionally reinforce historical biases and stereotypes.
Fourth, the methodology and framework built in this paper for racial and ethnic bias can be applied to detect and address other forms of bias, such as those related to politics and health. For instance, political biases in AI can skew information dissemination and influence public opinion, while biases in health-related AI tools can lead to disparities in healthcare recommendations and outcomes. By applying this project's methodology and framework, future research can expand its scope to these areas to ensure more ethical AI practices and reduce bias in future AI models.
It is also important to acknowledge the limitations of this research, particularly the restricted scope of AI models and bias types investigated and the size of the racial bias dataset, which may affect the generalizability of the results. Future research should expand the analysis to new AI models and additional bias types.
This paper and the research behind it serve as a call to action for social scientists and AI experts to create responsible and ethical AI, with a focus on reducing racial and ethnic bias and stereotypes in AI models. This work is complex and challenging but necessary, and it can help steer AI and society towards the fair, responsible, and ethical development of future AI models with less bias.
Author Contributions
The sole author conducted all the research, writing, and analysis for this work.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Conflicts of Interest
The author declares no conflict of interest.
References
- Baer, T. 2013. Process big data at speed. ComputerWeekly. Available at: https://www.computerweekly.com/feature/Process-big-data-at-speed.
- Baum, Jeremy, and John Villasenor. 2023. The politics of AI: ChatGPT and political bias. The Brookings Institution. Available at: https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/.
- Barocas, S., and A. D. Selbst. 2016. Big data's disparate impact. California Law Review 104(3): 671-732. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2477899.
- Bietti, E. 2023. A Genealogy of Digital Platform Regulation. Georgetown Law Technology Review 7(1): 1.
- Brown, S. 2021. Machine learning, explained. Ideas Made to Mater Artificial Intelligence. MIT Sloan School of Management. Available at: https://mitsloan.mit.edu/ideas-made-to-matter/machine-learning-explained.
- Brown, Tom B., et al. 2020. Language Models are Few-Shot Learners. Advances in Neural Information Processing Systems. Available at: https://dl.acm.org/doi/pdf/10.5555/3495724.3495883.
- Choudhary, Tavishi. 2024. AI Bias Audit Tool. Cyber Smart Teens. Accessible at: https://cybersmartteens.wixsite.com/aibiasaudit.
- Chris Smith, Brian McGuire, Ting Huang, and Gary Yang. 2006. The History of Artificial Intelligence. Course paper, CSEP 590A: History of Computing, University of Washington. Available at: https://courses.cs.washington.edu/courses/csep590/06au/projects/history-ai.pdf.
- De Vynck, Gerrit. 2023. ChatGPT leans liberal, research shows. The Washington Post. Available at: https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research.
- Durach, C. F., J. Kembro, and A. Wieland. 2017. A New Paradigm for Systematic Literature Reviews in Supply Chain Management. Journal of Supply Chain Management 53: 67–85.
- Eubanks, Virginia. 2018. Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor. St. Martin's Press: Picador.
- Getahun, Hannah. 2023. CHATGPT Could Be Used for Good, but like Many Other AI Models, It's Rife with Racist and Discriminatory Bias. Insider. Available at: https://www.insider.com/chatgpt-is-like-many-other-ai-models-rife-with-bias-2023-1.
- Gillis, Alexander S., and Mary K. Pratt. 2023. What Is Machine Learning Bias?: Definition from Whatis. Enterprise AI. TechTarget. Available at: https://www.techtarget.com/searchenterpriseai/definition/machine-learning-bias-algorithm-bias-or-AI-bias.
- Gross, N. 2023. What ChatGPT Tells Us about Gender: A Cautionary Tale about Performativity and Gender Biases in AI. Social Sciences 12: 435.
- Huyue Zhang, Angela, et al. 2022. The Four Domains of Global Platform Governance. Centre for International Governance Innovation. Available at: https://www.cigionline.org/publications/the-four-domains-of-global-platform-governance.
- Jacob Livingston Slosser. 2021. Artificial Intelligence. In: The Routledge Handbook of Law and Society, edited by Mariana Valverde et al.
- Jindal, A. 2022. Misguided Artificial Intelligence: How Racial Bias Is Built Into Clinical Models. Brown Journal of Hospital Medicine 2.
- Jurafsky, D., and J. H. Martin. 2019. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. 3rd ed. Available at: https://web.stanford.edu/~jurafsky/slp3/.
- Kapczynski, A. Data and Democracy: An Introduction. The Knight Institute, Columbia University. 2021. Available online: https://knightcolumbia.org/content/data-and-democracy-an-introduction (accessed on 21 October 2021).
- Khieu, K., and N. Narwal. 2018. CS224N: Detecting and Classifying Toxic Comments. Available at: https://web.stanford.edu/class/archive/cs/cs224n/cs224n.1184/reports/6837517.pdf.
- Kitchenham, B. 2004. Procedures for performing systematic reviews. Keele, UK: Keele University, School of Computer Science and Mathematics, Software Engineering Group.
- Krasadakis, George. 2023. The Ethical Concerns Associated with the General Adoption of AI. Medium, 60 Leaders. Available at: https://medium.com/60-leaders/the-ethical-concerns-associated-with-the-general-adoption-of-ai-ab893e9b5196.
- Lazaro, Gina. 2022. Understanding Gender and Racial Bias in AI. ALI Social Impact Review, Advanced Leadership Initiative. Available at: https://www.sir.advancedleadership.harvard.edu/articles/understanding-gender-and-racial-bias-in-ai.
- Wikipedia. 2024. List of Ethnic Slurs. Wikimedia Foundation. Available online: https://en.wikipedia.org/wiki/List_of_ethnic_slurs (accessed on 4 April 2024).
- Lorè, Filippo, et al. 2023. An AI Framework to Support Decisions on GDPR Compliance.
- McLeod, Juanita. 2021. Understanding Racial Terms and Differences. Available at: https://www.edi.nih.gov/blog/communities/understanding-racial-terms-and-differences.
- Manyika, James, Jake Silberg, and Brittany Presten. 2019. What Do We Do About the Biases in AI?. Harvard Business Review. Available at: https://hbr.org/2019/10/what-do-we-do-about-the-biases-in-ai.
- Noble, S. U. 2018. Algorithms of Oppression: How Search Engines Reinforce Racism. New York: New York University Press.
- O'Neil, C. 2016. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. New York: Crown.
- Pichai, Sundar. 2023. An important next step on our AI journey. CEO of Google and Alphabet. Available at: https://blog.google/technology/ai/bard-google-ai-search-updates.
- Radford, A. 2019. Language Models are Unsupervised Multitask Learners. Technical report, OpenAI. Available at: https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf.
- Radford, A. 2018. Improving Language Understanding by Generative Pre-training. Technical report, OpenAI. Available at: https://s3-us-west-2.amazonaws.com/openai-assets/research-covers/language-unsupervised/language_understanding_paper.pdf.
- Ruiz, N.G. Discrimination Experiences Shape Most Asian Americans' Lives. Pew Research Center Race & Ethnicity. 2023. Available online: https://www.pewresearch.org/race-ethnicity/2023/11/30/discrimination-experiences-shape-most-asian-americans-lives/ (accessed on 30 November 2023).
- Serrano, Jody. 2024. ChatGPT AI Racism Study on African-American English. Gizmodo. Available at: https://qz.com/chatgpt-ai-racism-study-african-american-english-1851324423.
- Sexton, Nick Kim. 2015. Study Reveals Americans' Subconscious Racial Biases. NBCNews.Com, NBCUniversal News Group. Available at: https://www.nbcnews.com/news/asian-america/new-study-exposes-racial-preferences-americans-n413371.
- Shanmugavadivel, K., et al. 2022. Deep learning based sentiment analysis and offensive language identification on multilingual code-mixed data. Scientific Reports 12: 21900.
- Sutaria, Niral, CISA, ACA. 2022. Bias and Ethical Concerns in Machine Learning. ISACA Journal. Accessible at: https://www.isaca.org/resources/isaca-journal/issues/2022/volume-4/bias-and-ethical-concerns-in-machine-learning.
- Tang, Yu-Chien, et al. 2023. Customer Intent Detection via Agent Response Contrastive and Generative Pre-Training. arXiv:2310.09773.
- U.S. Equal Employment Opportunity Commission. (n.d.). Significant EEOC race/color cases (covering private and federal sectors). Home | U.S. Equal Employment Opportunity Commission. Accessible at: https://www.eeoc.
- Wagner, P. 2021. Data Privacy - The Ethical, Sociological, and Philosophical Effects of Cambridge Analytica. Available at SSRN: https://ssrn. 3782.
- Weed, Mike. 2006. Sports Tourism Research 2000–2004: A Systematic Review of Knowledge and a Meta-Evaluation of Methods. Journal of Sport & Tourism 11(1): 5-30.
- West, Darrell M. 2023. Comparing Google Bard with OpenAI's ChatGPT on political bias, facts, and morality. Brookings Institution. Accessible at: https://www.brookings.edu/articles/comparing-google-bard-with-openais-chatgpt-on-political-bias-facts-and-morality.
- Wu, X., et al. 2022. A survey of human-in-the-loop for machine learning. Future Generation Computer Systems 135: 364-381.
- Zack, Travis et al. 2023. Coding Inequity: Assessing GPT-4's Potential for Perpetuating Racial and Gender Biases in Healthcare. medRxiv, Cold Spring Harbor Laboratory Press. Available at: https://www.medrxiv.org/content/10.1101/2023.07.13.23292577v2.
- Zhu, S., T. Yu, T. Xu, H. Chen, S. Dustdar, S. Gigan, D. Gunduz, E. Hossain, Y. Jin, F. Lin, et al. 2023. Intelligent Computing: The Latest Advances, Challenges, and Future. Intelligent Computing 2: 006.
Disclaimer/Publisher's Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).