Building everyday algorithmic literacy: marginalized users point out how ad recommender systems can fail

Yim Register (they/them)
6 min read · Jun 6, 2022
[Image: a computer (FB ads) and a hand holding a magnet coming out of it, attracting messages, people, and thumbs-up “likes” toward it]

This work was recently published in ICWSM ’22: Developing self-advocacy skills through Machine Learning Education: the Case of Ad Recommendation on Facebook

Feel free to watch this 5-minute talk instead of reading this blog post:

The One Sentence Summary

We showed 77 social media users their own Facebook Ad Interest data, taught them about the recommender algorithm User-Based Collaborative Filtering, and they learned it successfully and then pointed out how ad recommender systems can cause harm.

The Making Of

I started this project in March of 2020. Being totally honest, I have actually shifted away from using Facebook entirely since then. However, a lot of the ideas and implementations are still useful in my current dissertation work, and they likely apply to other social media platforms: the basic idea that “you are grouped into a demographic based on your friends and connections” is highly consequential no matter what social media platform you frequent. But taking us back to 2020 (sorry!): we were watching our social worlds shift almost entirely to a virtual presence. I don’t like the idea that “the world stopped,” because for many essential workers and healthcare providers, the pace only quickened. And the people moving their businesses (and livelihoods) onto social media depended on this digital world more than ever before.

I’d been thinking a lot about algorithmic literacy and how algorithms shape our realities, warping who we are through our “digital doubles”. I had a vision for my PhD work to build a series of tools that helped illuminate common social media algorithms, specifically for marginalized people to critique, steer, and resist algorithmic influence. I often bring up the case of targeted alcohol ads (which you can now hide on many platforms), eating disorder ‘rabbit holes’, political radicalization, and even just the everyday impact of being exposed to non-representative images and ads. Thanks to GDPR, there is now increased access to one’s own data on social media platforms; in fact, on Facebook, it’s downloadable! I wanted to build something that allowed people to explore their own data, as well as question the assumptions of a common recommender algorithm: User-Based Collaborative Filtering.
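If you’d like to poke around your own data while reading, here’s a minimal sketch of what that exploration can look like in Python. It assumes you’ve requested your Facebook data download in JSON format and that the inferred ad interests live in a file like ads_interests.json with a top-level list of topics; the exact file name and structure vary by export version, so treat this as illustrative rather than canonical.

```python
import json

# Illustrative only: the path and JSON structure of the Facebook data export
# vary by export version. Adjust these to match your own download.
EXPORT_PATH = "facebook-export/ads_information/ads_interests.json"

with open(EXPORT_PATH, encoding="utf-8") as f:
    data = json.load(f)

# Many exports store the inferred interests as a flat list of topic strings.
topics = data.get("topics") or data.get("topics_v2") or []

print(f"Facebook has inferred {len(topics)} ad interests about you, for example:")
for topic in topics[:10]:
    print(" -", topic)
```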

For anyone looking at their own ad data or simply reading this post, I encourage you to ask yourself:

What does Facebook think about you, and how does it make you feel?
What are the potential issues with using your network to mine for more recommendations?
What can we do to support your agency?

The TL;DR (too long; didn’t read)

Machine learning algorithms are all around us, informing our decisions and shaping our lives. I’m particularly interested in how social media algorithms perpetuate harm, and how to design and explain more trauma-informed ML systems. This means educating both Data Scientists and non-experts on the basics of how algorithms work and how they may cause harm, so that users can advocate for policy or platform change while Data Scientists become more aware of the potential impacts of algorithmic decisions.

This study tested an educational tool with both Data Scientists and social media users with no Data Science experience (N = 77). The tool focused on illuminating a rudimentary version of how Facebook ad recommendation works (likely relying on some form of User-Based Collaborative Filtering, or UB-CF). Participants were able to upload their own Facebook Ad Interests data and explore what Facebook thinks they’re interested in. They then used this data in a guided tutorial on how UB-CF ‘mines’ recommendations from their network of friends. They completed pre/post surveys as well as a comprehension test at the end. Finally, we asked users about the potential harms of Facebook ad recommendation, and the kinds of issues that might arise from algorithmically targeted ads.
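To make the ‘mining from your network’ idea concrete, here’s a minimal toy sketch of User-Based Collaborative Filtering over sets of ad interests. This is not Facebook’s actual system (which is proprietary and far more complex); it just illustrates the core assumption the tutorial teaches: find the friends whose interests overlap most with yours, then recommend whatever else they’re interested in.

```python
# Toy sketch of User-Based Collaborative Filtering (UB-CF) over interest sets.
# Illustrative only; not Facebook's actual recommender.

def jaccard(a: set, b: set) -> float:
    """Overlap between two interest sets (0 = nothing shared, 1 = identical)."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(user_interests: set, friends: dict, k: int = 2) -> list:
    """Recommend interests held by the k most similar friends but not by the user."""
    ranked = sorted(friends.items(),
                    key=lambda item: jaccard(user_interests, item[1]),
                    reverse=True)
    suggestions = set()
    for _, interests in ranked[:k]:
        suggestions |= interests - user_interests
    return sorted(suggestions)

me = {"hiking", "veganism", "climate change"}
friends = {
    "Alex":  {"hiking", "climate change", "essential oils", "detox teas"},
    "Sam":   {"veganism", "hiking", "crossfit"},
    "Jesse": {"poker", "crypto"},
}

# Sam and Alex overlap with me the most, so their other interests get
# recommended to me, whether or not I actually want them.
print(recommend(me, friends))  # ['crossfit', 'detox teas', 'essential oils']
```

That “whether or not I actually want them” is exactly the dynamic the participant quote below captures.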

We found that non-experts successfully learned about UB-CF from the tutorial, and that simply looking at your own data was not enough to facilitate understanding of harm. For me, this iconic quote from the data summarizes UB-CF in a single sentence:

“A friend with other interests in common is into some weird shit and FB assumed that I’m probably into the same weird shit.”

Novice participants significantly increased their comprehension of UB-CF following the Tutorial.

Still Reading? Let’s Talk Results

With special attention to participants who self-identified as marginalized, we summarized the themes of potential harm they mentioned.

Eating Disorders

“Probably recommending diets to someone who has a history of eating disorders. I keep trying to hide those ads and mark them as “sensitive topic”. Someone looking up a bunch of diets online is probably interested in diets, but recommending more diets might actually be harmful.”

LGBTQ safety/privacy

“Some of my friends/family are still extremely religious. If there’s not a way to see if a recommended interest does not work with my current interests, I, a queer person (who fb knows is queer) could get like….recommended conversion therapy because of my conservative family.” (as of July 2020, Instagram and Facebook have banned ads for conversion therapy)

Health Misinformation

“As I’m apparently put in a category of people caring about the environment, I get spammed with all things “natural”, so a lot of scam and potentially dangerous “cures”. I’d like to see ads making unsubstantial claims gone. Greenwashing should also be forbidden.”

Participants also commented on political echo chambers, ‘hate following’, and suggestions for how Facebook should integrate a more ethical process into the ads it allows and the systems it relies on for targeted advertising. One non-expert participant described their understanding following the tutorial:

“I would assume that someone else liked or had an interest in that thing, because people can hold harmful values without knowing it.. then those harmful ideas are spread around by Facebook’s algorithms and if they go unchecked that can be really problematic. I think more transparency is good, too.”

The Walkthrough

The app itself was a Shiny app that collected data using MongoDB. Here I’ve included some figures and their captions to show a little bit of how the app worked.
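For the curious, here’s a rough sketch of the kind of backend logic involved: accept an uploaded ad-interests file, parse it, and store it alongside a participant’s survey responses. It’s written in Python with pymongo purely for illustration (the real study app was built in R Shiny), and the database, collection, and field names are invented for the example.

```python
import json
from pymongo import MongoClient

# Illustrative sketch only: the real study app was an R Shiny app.
# Database, collection, and field names here are invented for the example.
client = MongoClient("mongodb://localhost:27017")
db = client["ad_literacy_study"]

def store_participant(participant_id: str, uploaded_json: str, survey: dict) -> None:
    """Parse an uploaded ad-interests file and save it with the survey responses."""
    interests = json.loads(uploaded_json).get("topics", [])
    db.responses.insert_one({
        "participant": participant_id,
        "ad_interests": interests,
        "survey": survey,
    })
```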

The Philosophical

For many people, their realities and worldviews are shaped by social media. It’s a hub for news, opinions, connection, information, education, and inspiration. I’m very interested in how algorithmic systems impact our experiences as human beings, and how some basic understanding of those systems may allow for collective advocacy and policy change. A simple tutorial tested on 77 people could never “fix” our broken systems — but it might be a small step towards illuminating issues of algorithmic influence. I present a path forward for trusting users to contribute their own situated experiences and explore their own data to identify potential harms and impacts. I believe that the more we facilitate user understanding of social media algorithms, the more active and vocal users can be in shaping policy and platform practices.

