The Future of AI Can Be Kind, and I wrote my Dissertation about it

Yim Register (they/them)
9 min read · Jul 21, 2024


This past June, I graduated with my PhD from the University of Washington Information Science program.

Nearly everything has shifted for me since I moved to Seattle six years ago. I am not the same person I was when I started, nor would I want to be. Through falling apart, I found myself again.

But one thing remained constant: the joy I feel when working with new learners who are curious about AI. Over the years, I’ve taught middle schoolers, high schoolers, undergraduates, Data Science Master’s students, fellow engineers, formerly incarcerated people, industry professionals, software engineers, therapists, teachers, and anyone I find myself talking to at the bus stop or in the coffee line. I have had the privilege of talking about AI with people from all walks of life all over the world (I was even invited to talk about my dissertation work with a group of Thai monks last month!). I’ve designed activities for Misinfo Day through the Center for an Informed Public, and worked with Code.org to develop AI activities for middle and high schoolers. I’ve been the “AI expert” for a high school Model UN debate (that was fun!), and I’ve trained therapists to spot misinformation and work with their clients to build healthier relationships with social media scrolling. I led a Directed Research Group where we worked on NLP analysis of how Instagram users made sense of “the algorithm”, which evolved into this paper: Attached to the Algorithm: Making Sense of Algorithmic Precarity on Instagram. I’ve worked with the OSWC on overdose awareness and Naloxone/Narcan trainings, where I developed their data analysis to evaluate the efficacy of their training course.

All of these endeavors have something in common: my belief that knowledge can be healing. Knowledge is power, yes. But knowledge is also agency; knowledge is healing; knowledge is connecting; knowledge is communicating; knowledge is exploration. So over the past six years, I focused my efforts on how we teach about AI, and on how we work towards a future of AI that is less harmful.


My dissertation is formatted to be easy to skim — no one, not even I, wants to read a 200+ page document to get the main points. My dissertation is written as a resource for you, whether you are a scholar, teacher, policymaker, engineer, or curious reader. My dissertation is a resource to be shared and benefited from. I hope that you will find something useful in its pages.

You can check out the final section titled “If You Read Anything, Read This”, which I’ve included below.

If You Read Anything, Read This

This dissertation has explored strategies to embed cases of algorithmic harm into AI education, situating ethics shoulder-to-shoulder with the technical components of AI and machine learning. By including these case studies, we can: a) resonate with and motivate underrepresented students; b) better prepare Data Scientists to design and audit their own systems; and c) surface issues of harm from various stakeholders in order to think more preventatively and mitigate harms before they occur. In academia, we have the freedom to design our courses and deliverables, so we get the opportunity to put students at the center of the solution: instead of telling them all the ways that AI can go wrong, we ask them how we can create a better world. By critiquing and repairing algorithms throughout their coursework, students gain technical and ethical knowledge simultaneously, knowledge they can later apply practically on the job. Not only do we train good Data Scientists, we train the next generation of AI leaders to code with compassion.

In Chapter 1, I examine the current landscape of Data Science curricula and find that ethics is often “othered” or treated as a peripheral soft skill, despite numerous calls for integrated ethics and human-centered approaches to AI. I explore the possibility of grounding AI technical concepts in students’ own personal experiences, testing how situated learning affects both technical and advocacy outcomes. I find that students who used their own data to learn about linear regression showed a boost in technical comprehension and articulated better advocacy arguments in the case of an algorithm gone wrong. This is promising evidence that relatable, situated examples can improve learning outcomes, both for mathematical mechanisms and for grappling with the societal impact of AI systems.

In Chapter 2, I explore the space of algorithmic harms, providing an expansive view of the various ways that AI can make mistakes. Many AI ethics curricula include the COMPAS recidivism algorithm or the sexist Amazon hiring tool, and these are great examples of real-world algorithmic bias. However, many cases of algorithmic harm are far more subtle and insidious, and less “clear cut” in terms of how to mitigate them. I review this complex and nuanced landscape, including my published work on discriminatory content moderation against vulnerable groups and on the mental health impacts of recommender systems. I demonstrate that many cases of algorithmic harm are less straightforward, with complicated tradeoffs and conflicting stakeholders.

In Chapter 3, I look at the opportunities for repair when it comes to algorithmic harms. One go-to strategy is to follow an AI Code of Ethics or a set of ethical guidelines. We see these from large institutions and big tech companies, and such AI principles are often the first line of defense for creating responsible AI. However, research suggests they are ineffective in practice and difficult to operationalize. I synthesize the literature on the ineffectiveness of AI ethics guidelines and provide recommendations for more practical AI ethics instruction. Next, I test an intervention for embedding ethics into a technical lesson on recommender systems, again using learners’ own personal data to help them surface algorithmic harms on social media.
We find that social media users were again able to comprehend both the technical components and the social impact of algorithms when taught this way, drawing on their lived experiences to surface algorithmic harms while reasoning about the technicalities of the algorithm. While this research demonstrated the value of lived experiences for AI ethics instruction, I also had concerns about in-group favoritism: people tend to care only about what directly impacts them. Especially given the lack of diversity in the AI field, would we only ever care about issues of algorithmic harm that impact majority groups?

In Chapter 4, I test the notion of in-group favoritism and explore how students make decisions about various cases of algorithmic harm. I find that for some cases there is an apparent “moral threshold”: all students found the issue urgent even when they could not personally relate. These tended to be clear-cut cases of bias that were blatantly unjust. For more nuanced cases, such as gender bias in Google Translate, there was a significant effect of gender and relatability on how urgent the issue was perceived to be. Students were much more attuned to issues of life-or-death, but struggled to make decisions about more nuanced harms unless they personally related to them.

In Chapter 5, I provide resources and recommendations for educators interested in embedding ethics throughout their AI courses. I present the LEADERS Framework for teaching AI Ethics, as well as an example activity on the Societal Impacts of Generative AI.

Overall, I have demonstrated the benefits of situated learning in AI contexts, with a focus on advocacy and ethical consideration. I have shown that what is missing from current AI ethics education is how to grapple with nuanced tradeoffs and justify decisions that may differentially impact conflicting stakeholders. We will not be able to satisfy all constraints or permanently avoid harms, but we can prepare our future Data Scientists to properly investigate their models with ethical foresight and justify their decisions. We can train our future Data Scientists to repair algorithmic mistakes, and to create technology in more human-centered and compassionate ways.
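To make the excerpt a little more concrete, here is the kind of “use your own data” regression activity Chapter 1 describes, sketched in Python. This is a quick illustrative sketch rather than an activity lifted from the dissertation; the numbers are hypothetical journaling data. Imagine a learner bringing in a week of their own records and fitting a line through them.

```python
# A minimal "use your own data" regression sketch.
# All numbers are hypothetical journaling data, invented for illustration.
import numpy as np

hours_of_sleep = np.array([5.0, 6.5, 7.0, 8.0, 6.0, 7.5, 9.0])
mood_rating = np.array([3.0, 5.0, 6.0, 8.0, 4.5, 7.0, 8.5])  # self-rated, 1-10

# Ordinary least squares fit: mood = slope * sleep + intercept
slope, intercept = np.polyfit(hours_of_sleep, mood_rating, deg=1)
print(f"mood ≈ {slope:.2f} * sleep + {intercept:.2f}")

# The learner can now interrogate their own model: what does it predict,
# where does it fail, and what did it leave out?
print(f"predicted mood after 7.25h of sleep: {slope * 7.25 + intercept:.2f}")
```

The fit itself is beside the point; what matters is that the learner can question a model whose data they fully understand, which is exactly the leverage the situated-learning findings above depend on.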
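In the same spirit, a lesson like the Chapter 3 recommender intervention could open with a toy engagement matrix before turning to learners’ own feeds. Again, this is an illustrative sketch with invented post names and numbers, not the lesson’s actual materials:

```python
# A toy item-to-item recommender for classroom discussion.
# Rows are hypothetical users, columns are posts; 1 = engaged, 0 = didn't.
import numpy as np

posts = ["fitness", "cooking", "diet_tips", "travel"]
engagement = np.array([
    [1, 0, 1, 0],
    [1, 1, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 1],
])

# Cosine similarity between post columns: posts engaged with by the
# same people get recommended alongside each other.
norms = np.linalg.norm(engagement, axis=0)
similarity = (engagement.T @ engagement) / np.outer(norms, norms)

# What rides along with "fitness"?
i = posts.index("fitness")
for post, score in sorted(zip(posts, similarity[i]), key=lambda p: -p[1]):
    print(f"{post}: {score:.2f}")
```

Even a toy this small surfaces the conversation the chapter is after: “diet_tips” rides along with “fitness” purely through co-engagement, a concrete opening for discussing the mental health impacts of recommender systems.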


Even more exciting, in my opinion, is that my dissertation is also filled with artwork and poetry along the way, as well as textbook-like guideposts that distill each chapter’s Summary and Contributions. We take a journey through my empirical findings, policy recommendations, best educational practices, and creative activities (both unplugged and web applications), all with the purpose of bringing the learner into the conversation about algorithmic harms. When we care about our communities, our planet, and our passions, I believe we can build technology that truly reflects that. This dissertation is a hope that someday we will break out of the capitalist chains that force us into cycles of more, more, more, and into a state of being where we create for the pillars of joy and justice. And by training the future AI leaders of the world with the specific techniques I have discovered, we can ensure that the future of AI can be kind.


Below are the chapter titles and their summaries, along with various art endeavors and activities I developed over the years. If you skim the titles and their accompanying summaries, I assure you it will be like reading the dissertation for the gist. I’ve also included bonus artwork that didn’t make it into the final published document.

You can find a PDF version of my dissertation here. Stay safe out there, my friends.
