Title of Dissertation:
Exploring the roots of historical bias amplified by artificial intelligence: the programmer’s role
Supervisor: Prof. Dr. Laura Marie Edinger-Schons
University: University of Mannheim
Scholarship: KAS Scholarship (Konrad Adenauer Stiftung)
Cohort: 7th Cohort, since 2020
Email:
-
Short Abstract
Businesses and governments increasingly employ automated decision-making through artificial intelligence (AI) in areas that threaten fundamental human rights. Machine learning (ML), a branch of AI that learns from historical data, perpetuates unfair discrimination rooted in historical bias. In response to these ethical issues, most previous research has focused on technological and statistical fixes. I argue against technological solutionism as a remedy for cultural problems, given that historical bias reflects society's prejudices, values, and the world as it is or as it was. These issues are an opportunity to question structural inequalities and the values implicitly encoded in technology. Furthermore, understanding bias in AI helps avoid harm to vulnerable populations, business scandals, and discrimination lawsuits. In the most ambitious classification experiment of our times, classifiers such as race, gender, and disability must be understood to avoid further harm and the repetition of historical injustices through AI. There is little research on the mechanisms by which bias transfers from programmers to AI, a transfer that is especially likely at the initial problem formulation stage, which is framed by the stakeholders in control of the AI process. Programmers are political actors because they also encode bias when choosing a fairness metric that preserves biased decisions of the past; the choice depends on whether they assume the status quo is neutral.
Computer ethicists identify three categories of bias: pre-existing social bias; technical bias, related to data and technology limitations; and emergent bias, which results from the interaction of society and technology. This paper focuses on the pre-existing social bias reflected in the problem formulation stage, where a problem is identified and hypothesized as solvable by technology. The lived experiences of programmers inform their causal inferences and impact the entire AI pipeline. Bias runs so deep that it is entrenched in our language and in word embeddings, which serve as foundations for more complex algorithms. Hence, this paper explores historical bias and prejudice among programmers through an online survey experiment. It also explores programmers' perception of fairness in AI, the modern racism scale (MRS), psychological ownership, agency, legitimizing justifications for system inequality, and social dominance orientation theory to predict endorsement of actions to level the playing field or correct systemic inequalities. Moreover, Ethics Position Theory (relativism and idealism) and Machiavellianism will be explored; these predict unethical behaviour in IT systems (Winter et al. 2004) and bridge understanding in the global AI ethics debate between different cultures.
The experiment has several stages. One involves asking programmers for feedback on an AI chatbot represented as a white male and measuring whether any of them suggest diversity; the treatment group will watch a video on intersectional theory and bias. The treatment group will also report their impression of, or negative affect towards, each speaker: one speaker is a black trans woman, the other a white male. The next step evaluates their endorsement of affirmative action regarding diversity of representation in an algorithm designed to admit students to a Department of Computer Science and Engineering. Moreover, their conceptual understanding of, and preference for, fairness metrics to assess algorithmic bias will be evaluated. To address blind spots and include multiple perspectives, I propose an intersectionality framework to map unintended consequences in AI. There are implications for stakeholder consultation, for the endorsement of affirmative action towards the target group of black trans women, for fairness notions in AI, and for gender and racial bias among programmers at the problem formulation stage. The study also contributes theoretically to social dominance orientation and its new egalitarianism subdimension, which refers to group-based orientation, complemented by intersectionality theory, which considers subgroups and the overlapping of multiple systems of oppression. In doing so, it fills a gap regarding programmers' agency over AI bias and their role in reducing it.
-
Research Interests
- Gender Equality
- Poverty and Social Inequality
- Intersectional and Decolonizing Theories
- Artificial Intelligence and Fourth Industrial Revolution Technologies
-
Education
- 2018, Master of Science in Public Policy, University of Bristol, England
- 2016, Bachelor of Law, Universidad Iberoamericana, Dominican Republic
-
Professional and Academic Career
- 2020, Human Rights Advisor on Artificial Intelligence and Inclusion, GENIA Latina, Santo Domingo, Dominican Republic.
- 2019, Independent Gender Consultant, Woman Up, Mate Consultancy, Bristol, England.
- 2018, Director of Gender and Inclusion, Ministry of Women, Santo Domingo, Dominican Republic
- 2018, President, Volunteering in Global Shapers Santo Domingo, Santo Domingo, Dominican Republic.
- 2016, Research and Innovation Director, joint programme of the United Nations Development Programme (UNDP) and the Office of the Vice-President of the Dominican Republic
- 2016, Research Intern, Run for America Political Consultancy, New York, United States.
- 2013, Research Fellow, Vice-president of the Dominican Republic, Santo Domingo, Dominican Republic
-
Publications
-
Roman, Arlette. How Should an Understanding of Gender Inequalities Inform the Design and Delivery of Policies to Tackle Global Poverty. Rome, Italy: BC Publishing House, 2019. Review of Socio-Economic Perspectives (RSEP), University of Washington Rome Centre. ISBN: 978-605-80676-9-1
-
-
Conference Contributions: Talks
- 2021, Business and Society Conference, University of Namur, "Exploring the roots of historical bias amplified by artificial intelligence: an interdisciplinary approach", Belgium.
- 2020, Cumbre Internacional de Jóvenes Líderes, “Artificial Intelligence and Inclusion for Latin America”, Puerto Rico.
- 2020, UNIDAS Dialogue on COVID-19 from a Gender Perspective, “COVID-19 and Digitalization: Evaluating Impact through an Intersectional Lens”, German Foreign Ministry, Germany.
- 2019, 15th RSEP International Conference on Economics, Finance and Social Sciences, “How Does an Understanding of Gender Inequalities Inform Policies to Tackle Global Poverty?”, University of Washington Rome Center, Italy.
-
Conference Contributions: Posters
- N/A
-
Memberships
- Member, Executive Board in the Global Artificial Intelligence Ethics Institute (GAIEI) by Prof. Dr. Emmanuel Goffi.
- Member, Global Shapers Community, World Economic Forum (WEF).
- Member, “UNIDAS” Initiative by the German Ministry of Foreign Affairs for the Cooperation with Latin America.
- Founder, International Law Students Association Chapter Universidad Iberoamericana (UNIBE), Dominican Republic.
- Member, United Nations Association in Dominican Republic (UNA-DR).
- Director of International Relations, Faculty of Law Alumni Association, Universidad Iberoamericana (UNIBE).
- Member, ATLAS Women Lawyers in Human Rights Berlin