Can you match the company to its Misinformation policy?

Which company is doing what to combat misinformation and fake news? Is it an algorithmic solution, or are there people involved? What do you think they should do?

Ahh, misinformation. From conspiracy theories to well-meaning panic, the internet gives everyone a voice. Some misinformation takes the form of rumors that spread quickly across the internet with disastrous consequences, such as the Hawaii false missile alert in 2018. Other misinformation is intentionally fabricated content, like deepfakes, which use neural networks to create entirely fabricated videos, often of politicians or celebrities. Still more comes from small pockets of conspiracy theorists online, often unchecked by the platforms they post on.

So I made a simple matching game to help people think about how different companies are trying to combat misinformation. Keep in mind, each of these companies faces different pressures and serves a different user population. When I first presented this game, people would cry out, “Ugh, I bet Facebook is the worst!” There is some truth to that, but recall the social pressure they have been under to at least appear to be addressing the issues on their platform. I encourage you to look critically at each of these companies and all the pressures involved in their decisions, so that you can critique what they are doing in an empowered and informed way! Find out more from the Center for an Informed Public at the University of Washington.

Here are the companies I chose to use for this simple proof-of-concept game:

Twitter

Snapchat

Reddit

Facebook

Google

Instagram

Youtube

Pinterest

A. Introduced a “False Information” button so users can report suspect posts themselves.

Introducing a button to report False Information means that any user can flag potential misinformation at the tap of a finger. This could drive a lot of participation, but it also opens the feature up to being gamed by malicious groups. What do you think? (See Answer Key)

B. Restricted searches about “vaccines” and other medical terms to only information from the CDC or WHO.

This is one of the strongest instances of combating misinformation. Some might call it censorship, but I personally think it is a brilliant realization that none of these social media companies are health organizations, and they should say so. When you search “vaccines” on this platform, or any other term on a long list of specifically chosen health-related concepts, the results are restricted to information from only the CDC or WHO. A notification at the top of the page even says, “If you’re looking for medical advice, please contact a healthcare provider.” (See Answer Key)

C. Made a feature where you swipe left to see your friends’ posts and swipe right to see posts curated by human editors at the company.

This company thinks that you usually don’t go to your friends for news or the most factual information. You’re on the platform to have fun and engage with friends. So they implemented a feature where you swipe left to see posts from friends, and swipe right to see vetted content, curated by human editors at the company. They haven’t had a lot of problems with misinformation, according to them. Why would this be? What kinds of factors make a company a breeding ground for misinfo? Do you believe that this platform really doesn’t have a big misinfo problem? (See Answer Key)

D. Have done very little to combat misinformation, chasing profits and “platform engagement” instead.

This company has had a problem with misinformation since the beginning, often harboring pockets of people with large followings who spread conspiracy theories or false information online. The CEO has denied many misinformation problems, and has admitted that the company is much more interested in stickiness and profits than in making policies to control the content. For some, that could be seen as a commitment to never censoring users, but I see it as a danger, given what I know exists on the platform. (See Answer Key)

E. May introduce a community points system, encouraging users to act like “good neighbors” and report harmful misinformation.

We all want “good neighbors,” right? This potential rollout of a community points system is apparently similar to Wikipedia, where users can vet and verify information on the platform. They could earn badges and points for acting in good faith and reporting misinformation. Apparently there would also be some protections against people who report, as misinformation, content that simply differs from popular opinion. Personally, that worries me. How do you stop people from gaming the system? Hint: this platform also puts orange badges reading “Harmfully Misleading” on any content marked as misinformation. (See Answer Key)

F. Relies on algorithmic signals; for example, if a story is shared far more often than it is clicked on, it’s probably misleading and gets flagged.

Algorithms, algorithms, algorithms. Our saving grace, right? Maybe. But they can also get us in a lot of trouble, removing the insight and intuition of social, competent human beings and relying instead on opaque, confusing systems that can behave in unexpected ways. (If you follow me, you know I’m very pro-algorithm and ML technology, but that doesn’t mean I’m not skeptical!) Imagine the scenario where you see a headline and immediately share the article without ever reading the story. This happens all the time! In this particular example of algorithmic weighting, a story that gets shared far more often than it gets clicked on probably contains something misleading or wrong, so the story gets flagged as possible misinfo. (See Answer Key)
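To make the idea concrete, here is a minimal sketch of that share-vs-click heuristic in Python. The function name, threshold, and minimum-traffic cutoff are all my own illustrative assumptions, not the platform’s actual system:

```python
# Hypothetical sketch of the share-vs-click heuristic described above.
# All names and thresholds are illustrative assumptions.

def flag_possible_misinfo(shares: int, clicks: int,
                          ratio_threshold: float = 3.0,
                          min_shares: int = 100) -> bool:
    """Flag a story when it is shared far more often than it is clicked.

    A high share-to-click ratio suggests people are spreading the
    headline without ever reading the article itself.
    """
    if shares < min_shares:          # ignore low-traffic stories
        return False
    ratio = shares / max(clicks, 1)  # avoid division by zero
    return ratio >= ratio_threshold

# A story shared 500 times but clicked only 40 times gets flagged;
# one shared 500 times and clicked 400 times does not.
print(flag_possible_misinfo(shares=500, clicks=40))   # True
print(flag_possible_misinfo(shares=500, clicks=400))  # False
```

A real system would of course weigh many more signals, but even this toy version shows the trade-off: the threshold is a human judgment hidden inside the algorithm.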

G. Relies on the community to flag things as misinformation, which they can do anonymously. But believes “communities set their own norms.”

This company has essentially said it hopes academics and reporters on the platform will combat misinformation on their own: it would be nice if more informed users reported misinformation, but communities set their own norms, and the company won’t interfere with that. Please note there was less information for me to find about this company and its efforts, and it may be doing more than I realize. I’m happy to be corrected. (See Answer Key)

H. Uses “algorithms, not humans” and takes action against malicious people exploiting those algorithms, aka “gaming the system.”

“Algorithms, not humans.” Not sure if that’s reassuring or concerning! This company seems particularly proud of having algorithms that can combat this problem without involving human curators. They are especially aware of how people can “game” the algorithms, which has been a constant battle since the start of the company, and they say they will take action to prevent such gaming. They also promote good journalism and have several efforts to increase the quality of news in general. (See Answer Key)

Answer Key

A. Instagram

B. Pinterest

C. Snapchat

D. Youtube

E. Twitter

F. Facebook

G. Reddit

H. Google
