The Impact of Algorithms on People’s Entertainment

This week I read an interesting paper by Myojung Chung and John Wihbey, published in the Harvard Kennedy School Misinformation Review, on how to combat misinformation perpetuated by algorithms.

Chung and Wihbey describe a new kind of digital divide. The term originally referred to the gap between people who have access to modern information and communications technology (ICT) and those who do not. The divide they explore, however, is based on knowledge of algorithms.

Algorithms increasingly control information flow on social media, determining what users see in their feeds. Social media algorithms filter and present online content in ways designed to maximize user engagement and retention. This often produces echo chambers, in which users are constantly fed misinformation that aligns with their existing biases and beliefs, and the consequences can be disastrous. Chung and Wihbey provide a recent example I had not been aware of: Facebook's algorithm pushed out hate-filled misinformation targeting the Rohingya people in Myanmar, which contributed to their genocide in 2017.

There is already scholarly research exploring the divide in knowledge of content algorithms. Like civic, political, or economic knowledge, algorithmic knowledge is unevenly distributed across socio-demographic backgrounds, and it affects people's ability to evaluate information and make informed decisions. Interestingly, Chung and Wihbey found that the socio-demographic factors most predictive of algorithmic knowledge differed by country.

In the United States, political ideology and ethnicity were the primary predictors of algorithmic knowledge. Age, political ideology, and social media use were the significant factors in the United Kingdom. In South Korea, the main predictors included age, gender, education, and social media use, whereas in Mexico, only education and social media use were associated with algorithmic knowledge…

In the United States and the United Kingdom, where political ideology was a major factor, political polarization is at an all-time high. The gap in algorithmic knowledge between liberals and conservatives points to the dangers this divide poses for public discourse. While the specific predictors varied across countries, Chung and Wihbey found that algorithmic knowledge was unevenly distributed across education level, ethnicity, gender, and political ideology.

Chung and Wihbey’s study also provides data supporting the idea that individuals with greater algorithmic knowledge are more inclined to take action against misinformation. When people understand that social media algorithms serve them information based on their previous behaviors and preferences, they may be more aware that diverse ideas and information are in effect curated out and that they risk being trapped in echo chambers. This knowledge can lead them to actively seek out opposing views and to avoid spreading misinformation when they encounter it. In contrast, research has shown that people who lack algorithmic knowledge, such as the elderly or the less educated, were more likely to spread misinformation and more vulnerable to harm.

Implications

Currently, a variety of approaches are being pursued to fight the spread of misinformation. These include steps to moderate or regulate platforms, fact-checking content, and labels indicating content quality. Chung and Wihbey’s research, however, suggests that algorithmic knowledge itself could play a role in fighting the spread of misinformation. Programs to improve general knowledge and literacy around algorithmic mechanisms, particularly among vulnerable populations such as the elderly and the less educated, could be a promising approach.
