The Rookie Primer on Responsible A.I.
A list of resources for anyone curious about fairness, accountability, ethics, social justice, transparency, and/or explainability as they pertain to Data Science. These are mostly Yim-centric for now, so I gladly invite collaboration and edits!
Algorithmic Bias
Ideas around fairness and accountability are swirling around the tech sphere. Through a combination of pressures, more and more people are looking into algorithmic bias: when algorithms have preferences “baked into” them, based on the training data or model setup, that result in discrimination against some population. We see racial disparities in health technology, racial bias in facial recognition, sexism in Google Translate, racial bias in criminal justice technology, fatphobia on Instagram, sexist hiring AI from Amazon, and more than we can even keep track of these days. These are just some of the headlines that made waves, but algorithmic bias can also be baked into seemingly benign software. It is our responsibility to be critical about what goes into, and what comes out of, our algorithms.
Vocabulary Crash Course
To get started, you’re gonna need to know a few words. Remember all those buzzwords I literally listed at the beginning of this article? Let’s do a crash course (*note: these definitions are mostly my own and are not the definitions, just what I’ve picked up working in this space).
Fairness: in the study How Do Fairness Definitions Fare?: Examining Public Attitudes Towards Algorithmic Definitions of Fairness, the authors asked people what they think algorithmic fairness means. Respondents seemed to converge on something called “calibrated fairness,” which sounds like an “equity over equality” approach. In other words, probabilities should favor people in proportion to their merit: the probability of granting someone a loan should be proportional to their probability of paying it back. Beyond this context, perhaps fairness should mean uplifting those who need help the most, or not always splitting things perfectly equally and instead being sensitive to the inequality that already exists and trying to account for it?
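If it helps to see that idea in code, here is a minimal sketch (in Python, with numbers I made up) of what “proportional to merit” could look like for loan decisions. The repayment probabilities and the proportional-selection policy below are purely illustrative assumptions on my part, not the method from the paper:

```python
import numpy as np

# Hypothetical estimated merit: each applicant's probability of repaying a loan.
# These numbers are made up for illustration only.
repayment_prob = np.array([0.9, 0.6, 0.3])

# "Calibrated fairness" sketch: the chance of being granted the loan is
# proportional to estimated merit, rather than winner-take-all.
selection_prob = repayment_prob / repayment_prob.sum()
print(selection_prob)   # [0.5   0.333  0.167]

# Compare with a policy that always picks the single highest score:
argmax_policy = np.zeros_like(repayment_prob)
argmax_policy[np.argmax(repayment_prob)] = 1.0
print(argmax_policy)    # [1. 0. 0.] -- the 0.6 applicant never gets a chance
```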
Accountability: You can read this awesome primer on What is Algorithmic Accountability? and explore how algorithms should be auditable, and the role of rules and regulations in enforcing that. Personally, this reminds me of the work of the Tech Policy Lab at UW. Audits are important for social and legal change, and more audit studies are shedding light on algorithmic bias and other issues, such as Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube.
Ethics: Try out the Hitchhiker’s Guide to AI Ethics for a good overview. They approach big ideas like “what AI is”, “what AI does”, “what AI impacts”, and “what AI can be” in their piece.
Social Justice: You don’t have to be a social justice warrior to include social justice in your Data Science practice or teaching. I cover a little bit about how to integrate social justice into the Data Science classroom in another article. “Social justice” can be defined as “justice in terms of the distribution of wealth, opportunities, and privileges within a society.” Social justice covers many topics, from whom we incarcerate and why, to how we treat disabled members of society. You may not be able to understand all realms of social justice. I suggest you pick one or two issues you really care about and start there.
Transparency: Access to the details of the inner workings of A.I. algorithms being used in the world. We face a “transparency paradox”: increased transparency can help us spot discrimination and bias, but it also opens us up to increased risk and misuse. This piece covers the Transparency Paradox.
Explainability: From my understanding, explainability is not always just about explaining how A.I. algorithms work (though that’s important!). Sometimes it is about gaining insight into how algorithms came up with what they did, even when we can’t explain every step of the algorithm. Machine learning models learn, and sometimes they come across weird local minima or parts of the hypothesis space we couldn’t have even imagined, or use correlations across hundreds of dimensions that we would never be able to understand. So we need ways for our algorithms to self-report what they are doing, and ways for us to communicate that back to the stakeholders of models (while also preserving privacy and anonymity in many cases). This paper talks a bit about how we went from rule-based AI (glass box) to more unintuitive ML systems (black box): From Machine Learning to Explainable AI.
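To make “self-reporting” a little more concrete, here is a minimal sketch using permutation importance from scikit-learn on one of its built-in datasets. This is just one common, model-agnostic explainability technique that I’m using as an example; it is not the method from the paper above, and a real project would need much more care:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Built-in dataset used purely as a stand-in for "your" data.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the score drops:
# a crude but model-agnostic report of which inputs the model leans on.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, importance in sorted(zip(X.columns, result.importances_mean),
                               key=lambda pair: -pair[1])[:5]:
    print(f"{name}: {importance:.3f}")
```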
It’s more than just the algorithms; it’s the whole pipeline
The entire data pipeline contributes to bias, not just the way the models work. This means exploring bias in the steps leading up to the models:
- collecting data — Where did you get it from? How did you sample? Is the sample representative? Who is missing? Did they know what it would be used for? Would they give different answers under different circumstances? Is this something that changes over time? (a tiny sketch of a representativeness check follows this list)
- coding data — What are the categories? Who came up with them? How much do societal definitions play into how we code data (e.g. gender)? Was it some poor undergrad who originally coded this data? Did they get paid? How much of this relies on Mechanical Turk, aka the Ghost Workers of AI? Would people from other cultures agree on the categories? Should we be letting people categorize themselves or know how they get sorted? Should we even be going down this rabbit hole of a problem at all?
- storing data — Who has access? How is it anonymized? Is it protected? Is it expensive? Read more in Penn State’s The Ethics of Data Management: Data Storage and Protection.
- using data — What data gets excluded? Why? Is it out of convenience? Low sample size? Do you drop your outliers or keep them? Did you use the right model for this type of data? Did you do the appropriate tests to avoid p-hacking or multiple comparisons? What are you using the data to make or do? Should you be?
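As a tiny, made-up example of the “collecting data” questions above, here is one way to check who is over- or under-represented in a sample relative to a reference population. The column name, group labels, and reference shares are all hypothetical placeholders:

```python
import pandas as pd

# Hypothetical sample: in practice this would be your collected dataset.
sample = pd.DataFrame(
    {"gender": ["woman", "man", "man", "man", "nonbinary", "man"]}
)

# Share of each group in the sample vs. a (made-up) reference population.
sample_share = sample["gender"].value_counts(normalize=True)
reference_share = pd.Series({"woman": 0.50, "man": 0.49, "nonbinary": 0.01})

comparison = pd.DataFrame(
    {"sample": sample_share, "reference": reference_share}
).fillna(0)
comparison["gap"] = comparison["sample"] - comparison["reference"]

# Large positive gaps = over-represented; large negative gaps = missing voices.
print(comparison.sort_values("gap"))
```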
Papers to read (please suggest more!)
Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.
D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.
Give back to your community, you won’t regret it
One of the best ways to work on your skills in these areas of Responsible A.I. is to work with the stakeholders of the models. In academia, we often find ourselves surrounded by other academics. This is pretty normal and makes sense, but it also leads to insular groups. Before COVID, I would interview every Uber driver I rode with. I would ask them about the app, about the algorithms, about their pay, about their lives. What worked, what didn’t? I ask kids how they feel about the ads on their social media, and I ask non-academic friends how they think algorithms work. I ask, I ask, I ask. It has been a hard thing for me to learn: that I don’t have all the answers (no one does). But together, we can figure out what isn’t working, what people need, and what we care about. Along the way, we find our loyalties and our passions. My hope is that everyone gets to be part of the conversation. Right now I’m very involved in the anti-diet eating disorder recovery communities, the #actuallyautistic online spaces, and with survivors of domestic violence.
- Give lots of talks or write a lot about your findings and ideas.
- Stop talking and writing for a bit, and start asking, reading, and listening.
- Find what you care about; what lights your fire; what makes you want to make the world a better place.
- Build community around that passion, preferably totally outside of academia if appropriate.
- Build up your academic ideas to responsibly serve the communities you care about. See if it changes you. See what you learn.
Don’t be too hard on yourself
Especially when first exploring the world of social justice, we can bring in a lot of shame about what we don’t know or mistakes we have made in the past. Too often, that fear and shame result in apathy. But instead of doing nothing, you’re already taking the first steps toward being part of the Responsible A.I. community. Get ready to unlearn a lot of things! Think of it like a new adventure. We are all always learning and unlearning, making mistakes, and disagreeing about things. The worst thing you can do is “leave it up to somebody else”, which is usually what happens with Critical Data Studies (one professor gets stuck with the “ethics” course and everyone else thinks the problem is solved). That doesn’t mean you should go in without background and just wing it, but it does mean that even you trying to make a difference will probably make a difference. Release the shame of not being the first to the party, and be happy to be there now. You are welcome here.