The Rookie Primer on Responsible A.I.

Yim Register (they/them)
10 min read · Jan 26, 2021

A list of resources for anyone curious about fairness, accountability, ethics, social justice, transparency, and/or explainability as it pertains to Data Science. These are mostly Yim-centric for now, so I gladly invite collaboration and edits!

Cartoon image of someone holding up a sign that says “social justice begins with me!”

Algorithmic Bias

Ideas around fairness and accountability are swirling around in the tech sphere. Through a combination of pressures, more and more people are looking into algorithmic bias: when algorithms have preferences “baked in” to them, based on the training data or model setup, that result in discrimination against some population. We see racial disparities in health technology, racial bias in facial recognition, sexism in Google Translate, racial bias in criminal justice technology, fatphobia on Instagram, sexist hiring AI from Amazon, and more than we can even keep track of these days. These are just some of the headlines that made waves, but algorithmic bias can also be baked into seemingly benign software. It is our responsibility to be critical about what goes in, and what comes out of, our algorithms.

Vocabulary Crash Course

To get started, you’re gonna need to know a few words. Remember all those buzzwords I literally listed at the beginning of this article? Let’s do a crash course (*note these definitions are mostly my own and are not the definitions, just what I’ve picked up by working in this space).

Fairness: in the study How Do Fairness Definitions Fare?: Examining Public Attitudes Towards Algorithmic Definitions of Fairness, the authors asked people what they think algorithmic fairness means. Respondents seem to converge on something called “calibrated fairness,” which sounds like an “equity over equality” approach. In other words, probabilities should favor people in proportion to their merit: the probability of granting someone a loan should be proportional to their probability of paying it back. Beyond this context, perhaps fairness should mean uplifting those who need help the most, or not always splitting things perfectly equally and instead being sensitive to the inequality that already exists and trying to account for it?
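To make that concrete, here is a tiny sketch (my own toy illustration, not from the paper; the applicants and repayment probabilities are made up) of the difference between an equal split and a merit-calibrated approach:

```python
# Toy illustration of "calibrated fairness": the chance of a positive
# decision (e.g. loan approval) tracks each person's estimated merit
# (e.g. predicted probability of repayment), rather than being split evenly.

applicants = [
    {"name": "A", "p_repay": 0.9},
    {"name": "B", "p_repay": 0.6},
    {"name": "C", "p_repay": 0.3},
]

def equal_lottery(applicants):
    # "Equality": everyone gets the same 50/50 shot, ignoring merit.
    return {a["name"]: 0.5 for a in applicants}

def calibrated_fairness(applicants):
    # "Calibrated fairness": approval probability proportional to the
    # estimated probability of paying the loan back.
    return {a["name"]: a["p_repay"] for a in applicants}

print(equal_lottery(applicants))        # {'A': 0.5, 'B': 0.5, 'C': 0.5}
print(calibrated_fairness(applicants))  # {'A': 0.9, 'B': 0.6, 'C': 0.3}
```

Of course, the catch is that p_repay usually comes from a model trained on historical data, so any bias in how “merit” was measured gets calibrated right back in.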

Accountability: You can read this awesome primer on What is Algorithmic Accountability? to explore how algorithms should be auditable and what role rules and regulations play in enforcing that. Personally this reminds me of the work of the Tech Policy Lab at UW. Audits are important for social and legal change, and more and more audit studies are shedding light on algorithmic bias and other issues, such as Measuring Misinformation in Video Search Platforms: An Audit Study on YouTube.
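To make “audit” a little more concrete, here is a minimal sketch (my own toy example, not from any of the studies linked above) of one of the simplest audit measurements: comparing how often an algorithm hands out a favorable outcome across groups.

```python
# Minimal audit sketch: compare positive-outcome rates across groups.
# A disparate-impact ratio well below 1.0 (e.g. under 0.8, the
# "four-fifths rule" used in US employment contexts) is a red flag
# worth investigating, not proof of discrimination on its own.
from collections import defaultdict

decisions = [
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": True},
    {"group": "group_a", "approved": False},
    {"group": "group_b", "approved": True},
    {"group": "group_b", "approved": False},
    {"group": "group_b", "approved": False},
]

totals, positives = defaultdict(int), defaultdict(int)
for d in decisions:
    totals[d["group"]] += 1
    positives[d["group"]] += d["approved"]

rates = {g: positives[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates)   # approx. {'group_a': 0.67, 'group_b': 0.33}
print(f"disparate impact ratio: {ratio:.2f}")
```

Real audit studies control for much more than this, but the basic move (measure outcomes by group and look for unexplained gaps) is the same.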

Ethics: Try out the Hitchhiker’s Guide to AI Ethics for a good overview. They approach big ideas like “what AI is”, “what AI does”, “what AI impacts”, and “what AI can be” in their piece.

Social Justice: You don’t have to be a social justice warrior to include social justice in your Data Science practice or teaching. I cover a little bit about how to integrate social justice into the Data Science classroom in another article. “Social Justice” can be defined as “justice in terms of the distribution of wealth, opportunities, and privileges within a society.” Social justice covers many topics, from whom we incarcerate and why to how we treat disabled members of society. You may not be able to understand all realms of social justice. I suggest you pick one or two issues you really care about and start there.

Transparency: Access to details of the inner workings of A.I. algorithms being used in the world. We face a “transparency paradox”: increased transparency can help us spot discrimination and bias, but it also opens us up to increased risk and misuse. This piece covers the Transparency Paradox.

Explainability: From my understanding, explainability is not always just about explaining how A.I. algorithms work (though that’s important!). Sometimes it is about gaining insight into how algorithms came up with what they did, even when we can’t explain every step of the algorithm. Machine learning models learn, and sometimes they come across weird local minima or parts of the hypothesis space we couldn’t have even imagined, or use correlations across hundreds of dimensions that we would never be able to understand. So we need ways for our algorithms to self-report what they are doing, and ways for us to communicate that back to the stakeholders of models (while also preserving privacy and anonymity in many cases). This paper talks a bit about how we went from rule-based AI (glass box) to more unintuitive ML systems (black box): From Machine Learning to Explainable AI.
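One common, fairly model-agnostic way to get that kind of self-report is permutation importance: shuffle one feature at a time and watch how much the model’s performance drops. Here is a minimal sketch with scikit-learn on synthetic data (my example, not from the paper above):

```python
# Post-hoc explanation sketch: permutation importance on a black-box model.
# Shuffling a feature the model relies on should hurt its score;
# shuffling an irrelevant one should not.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# n_repeats shuffles each feature several times to average out noise.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature {i}: importance {imp:.3f}")
```

Note that these importances describe what this model relies on for this data; they don’t explain the underlying phenomenon, which is exactly the kind of caveat worth passing along to stakeholders.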

It’s more than just the algorithms, it’s the whole pipeline

The entire data pipeline contributes to bias, not just the way the models work. This means exploring bias in the steps leading up to the models:

  • collecting data — Where did you get it from? How did you sample? Is the sample representative? Who is missing? Did they know what it would be used for? Would they give different answers under different circumstances? Is this something that changes over time?
  • coding data — What are the categories? Who came up with them? How much do societal definitions play into how we code data (e.g. gender)? Was it some poor undergrad who originally coded this data? Did they get paid? How much of this relies on Mechanical Turk, aka the Ghost Workers of AI? Would people from other cultures agree on the categories? Should we be letting people categorize themselves or know how they get sorted? Should we even be going down this rabbit hole of a problem at all?
  • storing data — Who has access? How is it anonymized? Is it protected? Is it expensive? Read more from Penn State’s The Ethics of Data Management: Data Storage and Protection.
  • using data — What data gets excluded? Why? Is it out of convenience? Low sample size? Do you drop your outliers or keep them? Did you use the right model for this type of data? Did you run the appropriate tests to avoid p-hacking and multiple-comparisons problems (see the sketch after this list)? What are you using the data to make or do? Should you be?

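On that last point, here is a minimal sketch of what “appropriate tests” can look like in practice (my example; the subgroups and data are simulated): if you slice your data into many subgroups and test each one against a baseline, adjust the significance threshold for the number of comparisons instead of reporting every p < 0.05 as a finding.

```python
# Multiple-comparisons sketch: testing many subgroups inflates the
# chance of a false positive, so adjust the significance threshold.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
baseline = rng.normal(loc=0.0, scale=1.0, size=200)

# Imagine slicing outcomes by many demographic subgroups and testing each.
subgroups = {f"subgroup_{i}": rng.normal(loc=0.0, scale=1.0, size=50)
             for i in range(10)}

alpha = 0.05
bonferroni_alpha = alpha / len(subgroups)  # correct for 10 comparisons

for name, sample in subgroups.items():
    t, p = stats.ttest_ind(baseline, sample)
    flag = "significant" if p < bonferroni_alpha else "not significant"
    print(f"{name}: p={p:.3f} ({flag} at corrected alpha={bonferroni_alpha:.3f})")
```

A Bonferroni correction is the bluntest option (false-discovery-rate methods are gentler), but the point is the same: the more slices you test, the more “discoveries” you will make by chance.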
Papers to read (please suggest more!)

Abebe, R., Barocas, S., Kleinberg, J., Levy, K., Raghavan, M., & Robinson, D. G. (2020, January). Roles for computing in social change. In Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (pp. 252–260).

Alkhatib, A., & Bernstein, M. (2019, May). Street-level algorithms: A theory at the gaps between policy and decisions. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–13).

Amini, A., Soleimany, A. P., Schwarting, W., Bhatia, S. N., & Rus, D. (2019, January). Uncovering and mitigating algorithmic bias through learned latent structure. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society (pp. 289–295).

Barbosa, N. M., & Chen, M. (2019, May). Rehumanized crowdsourcing: a labeling framework addressing bias and ethics in machine learning. In Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems (pp. 1–12).

Benjamin, R. (2019). Race after technology: Abolitionist tools for the new Jim Code. Polity.

Boratto, L., Fenu, G., & Marras, M. (2019, April). The effect of algorithmic bias on recommender systems for massive open online courses. In European Conference on Information Retrieval (pp. 457–472). Springer, Cham.

Bucher, T. (2012). Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New media & society, 14(7), 1164–1180.

Carton, S., Mei, Q., & Resnick, P. (2020, May). Feature-Based Explanations Don’t Help People Detect Misclassifications of Online Toxicity. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 14, pp. 95–106).

Chakraborty, A., Messias, J., Benevenuto, F., Ghosh, S., Ganguly, N., & Gummadi, K. (2017, May). Who makes trends? understanding demographic biases in crowdsourced recommendations. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 11, No. 1).

Danks, D., & London, A. J. (2017, August). Algorithmic Bias in Autonomous Systems. In IJCAI (Vol. 17, pp. 4691–4697).

D’Ignazio, C., & Klein, L. F. (2020). Data feminism. MIT Press.

Edizel, B., Bonchi, F., Hajian, S., Panisson, A., & Tassa, T. (2020). FaiRecSys: mitigating algorithmic bias in recommender systems. International Journal of Data Science and Analytics, 9(2), 197–213.

Eslami, M., Vaccaro, K., Karahalios, K., & Hamilton, K. (2017, May). “Be careful; things can be worse than they appear”: Understanding Biased Algorithms and Users’ Behavior around Them in Rating Platforms. In Proceedings of the International AAAI Conference on Web and Social Media (Vol. 11, No. 1).

Garcia, M. (2016). Racist in the machine: The disturbing implications of algorithmic bias. World Policy Journal, 33(4), 111–117.

Hajian, S., Bonchi, F., & Castillo, C. (2016, August). Algorithmic bias: From discrimination discovery to fairness-aware data mining. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 2125–2126).

Holstein, K., Wortman Vaughan, J., Daumé III, H., Dudik, M., & Wallach, H. (2019, May). Improving fairness in machine learning systems: What do industry practitioners need? In Proceedings of the 2019 CHI conference on human factors in computing systems (pp. 1–16).

Iliadis, A., & Russo, F. (2016). Critical data studies: An introduction. Big Data & Society, 3(2), 2053951716674238.

Johnson, G. M. (2020). Algorithmic bias: on the implicit biases of social technology. Synthese, 1–21.

Kingsley, S., Wang, C., Mikhalenko, A., Sinha, P., & Kulkarni, C. (2020). Auditing Digital Platforms for Discrimination in Economic Opportunity Advertising. arXiv preprint arXiv:2008.09656.

Lambrecht, A., & Tucker, C. (2019). Algorithmic bias? an empirical study of apparent gender-based discrimination in the display of stem career ads. Management Science, 65(7), 2966–2981.

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447–453.

Register, Y., & Ko, A. J. (2020, August). Learning Machine Learning with Personal Data Helps Stakeholders Ground Advocacy Arguments in Model Mechanics. In Proceedings of the 2020 ACM Conference on International Computing Education Research (pp. 67–78).

Smith-Renner, A., Fan, R., Birchfield, M., Wu, T., Boyd-Graber, J., Weld, D. S., & Findlater, L. (2020, April). No explainability without accountability: An empirical study of explanations and feedback in interactive ml. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (pp. 1–13).

Tanweer, A., Bolten, N., Drouhard, M., Hamilton, J., Caspi, A., Fiore-Gartland, B., & Tan, K. (2017). Mapping for accessibility: A case study of ethics in data science for social good. arXiv preprint arXiv:1710.06882.

Usher, N., Holcomb, J., & Littman, J. (2018). Twitter makes it worse: Political journalists, gendered echo chambers, and the amplification of gender bias. The international journal of press/politics, 23(3), 324–344.

Give back to your community, you won’t regret it

One of the best ways to work on your skills in these areas of Responsible A.I. is to work with stakeholders of the models. In academia, we often find ourselves surrounded by other academics. This is pretty normal and makes sense, but it also leads to insular groups. Before COVID, I would interview every Uber driver I rode with. I would ask them about the app, about the algorithms, about their pay, about their lives. What worked, what didn’t? I ask kids how they feel about the ads on their social media, and I ask non-academic friends how they think algorithms work. I ask, I ask, I ask. It has been a hard thing for me to learn that I don’t have all the answers (no one does). But together, we can figure out what isn’t working, what people need, and what we care about. Along the way, we find our loyalties and our passions. My hope is that everyone gets to be part of the conversation. Right now I’m very involved in the anti-diet eating disorder recovery communities, the #actuallyautistic online spaces, and with survivors of domestic violence.

  1. Give lots of talks or write a lot about your findings and ideas.
  2. Stop talking and writing for a bit, and start asking, reading, and listening.
  3. Find what you care about; what lights your fire; what makes you want to make the world a better place.
  4. Build community around that passion, preferably totally outside of academia if appropriate.
  5. Build up your academic ideas to responsibly serve the communities you care about. See if it changes you. See what you learn.

Don’t be too hard on yourself

Especially when first exploring the world of social justice, we can bring in a lot of shame about what we don’t know or mistakes we have made in the past. Too often, that fear and shame results in apathy. Don’t let it: just by reading this, you’re already taking the first steps to be a part of the Responsible A.I. community. Get ready to unlearn a lot of things! Think of it like a new adventure. We are all always learning and unlearning, making mistakes, and disagreeing about things. The worst thing you can do is “leave it up to somebody else”, which is usually what happens with Critical Data Studies (one professor gets stuck with the “ethics” course and everyone else thinks the problem is solved). That doesn’t mean you should go in without background and just wing it, but it does mean that even you trying to make a difference will probably make a difference. Release the shame of not being the first to the party, and be happy to be there now. You are welcome here.
