The annual Americas Conference on Information Systems (AMCIS), hosted by the Association for Information Systems (AIS), is a leading venue for academic information systems research across North and South America. This year’s theme was “Elevating Life Through Digital Social Entrepreneurship.” The conference highlighted disparities in access to technology, especially in marginalized communities, and emphasized the importance of providing equal digital opportunities to promote inclusive growth. It also explored the challenges of developing sustainable business models that prioritize social impact.

From August 15 to 17, 2024, the 30th AMCIS was held in Salt Lake City, Utah. This year, COSMOS had five studies at AMCIS! Several cosmographers traveled to Salt Lake City to present them. The following papers were published at the conference:

  • Studying the Influence of Toxicity Intensity on Its Propagation Using Epidemiological Models
  • Toxicity Prediction in Reddit
  • A Computational Approach to Analyze Identity Formation: A Case Study of Brazil Insurrection
  • Role of Co-occurring Words on Mobilization in Brazilian Social Movements 
  • Analyzing Anomalous Engagement and Commenter Behavior on YouTube

In this first article of a two-part series, we summarize two papers on online toxicity, “Studying the Influence of Toxicity Intensity on Its Propagation Using Epidemiological Models” and “Toxicity Prediction in Reddit.” While both are concerned with tracking and classifying toxic online content, the former compares epidemiological models of toxicity spread on Twitter/X, while the latter predicts toxicity from the hierarchy of comments on Reddit.

“Studying the Influence of Toxicity Intensity on Its Propagation Using Epidemiological Models” compares epidemiological models that segment the population into groups based on their “infection” status with respect to toxic posting. Specifically, the authors compare the SIR (Susceptible-Infected-Recovered), SIS (Susceptible-Infected-Susceptible), and STRS (Susceptible-Toxic-Recovered-Susceptible) models. As the names suggest, each model segments the population according to whether individuals are susceptible to, currently transmitting, or recovering from the effects of online toxicity. They address two key research questions:

  1. Which epidemiological model is most effective in explaining toxicity diffusion on social media?
  2. How do different levels of toxicity affect the spread of toxic content?

They found that the SIS model, while providing some insight, was the least accurate of the three. The Recovered state in the SIR model improved accuracy, but still not to the level of the STRS model: the latter’s ability to capture the cyclical nature of reinfection made it the most robust model for understanding toxicity diffusion. Even when the datasets were split into moderately and highly toxic posts, the STRS model achieved lower error rates for both kinds of toxicity.
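
To make the compartmental framing concrete, the sketch below simulates a generic STRS-style model in Python. It follows the standard SIRS formulation with “Toxic” in place of “Infected”; the transition rates and initial conditions are illustrative placeholders, not values fitted in the paper.

```python
# Minimal sketch of an STRS-style compartmental model: standard SIRS dynamics
# with a "Toxic" compartment in place of "Infected". Rates are illustrative
# placeholders, not the values estimated by the authors.

def simulate_strs(beta=0.3, gamma=0.1, xi=0.05, days=120, dt=1.0):
    """Euler integration of S (susceptible), T (toxic), R (recovered) fractions."""
    s, t, r = 0.99, 0.01, 0.0             # initial population fractions
    history = []
    for _ in range(int(days / dt)):
        new_toxic = beta * s * t          # susceptible users start posting toxic content
        recoveries = gamma * t            # toxic users stop posting toxic content
        relapses = xi * r                 # recovered users become susceptible again
        s += dt * (relapses - new_toxic)
        t += dt * (new_toxic - recoveries)
        r += dt * (recoveries - relapses)
        history.append((s, t, r))
    return history

if __name__ == "__main__":
    s, t, r = simulate_strs()[-1]
    print(f"After 120 days: S={s:.3f}, T={t:.3f}, R={r:.3f}")
```

Setting xi to zero reduces this to SIR, and removing the Recovered compartment gives SIS; the relapse term from Recovered back to Susceptible is what lets an STRS-style model express the cyclical reinfection the authors found so important.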

Dr. Agarwal said, “Such results suggest that policymakers and social media platforms could benefit from science-driven strategies tailored to combat the toxic behaviors of different user groups.” 

“Toxicity Prediction in Reddit” proposed a novel approach to predicting and detecting toxic comments on Reddit by analyzing the hierarchical structure of the platform’s conversations. Since Reddit organizes discussions in a hierarchical format akin to a family tree (i.e., structured by parent-child relationships), a comment can be traced back through multiple levels of parent comments. The researchers studied how this threaded discourse structure influences the spread of toxicity; a brief sketch of the parent-chain idea appears after the research questions below. They address two key research questions:

  1. Can the toxicity of a Reddit comment be predicted by analyzing the toxicity of its immediate parent, grandparent, and great-grandparent comments, as well as the average toxicity of the entire conversation?
  2. Which generational parent comment has the most influence on the toxicity of a child comment?
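
The parent-chain idea is straightforward to picture in code. The sketch below is our own illustration, not the authors’ code: each comment holds a toxicity score and a link to its parent, and a short walk up the tree collects the ancestor toxicities used as predictors.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Comment:
    """A Reddit comment with a toxicity score and a link to its parent comment."""
    toxicity: float
    parent: Optional["Comment"] = None

def ancestor_toxicities(comment: Comment, depth: int = 3) -> list[float]:
    """Collect toxicity of the parent, grandparent, and great-grandparent (if present)."""
    scores = []
    node = comment.parent
    while node is not None and len(scores) < depth:
        scores.append(node.toxicity)
        node = node.parent
    return scores

# Example thread: root -> reply -> sub-reply -> last comment
root = Comment(toxicity=0.10)
reply = Comment(toxicity=0.40, parent=root)
subreply = Comment(toxicity=0.75, parent=reply)
last = Comment(toxicity=0.80, parent=subreply)

print(ancestor_toxicities(last))   # [0.75, 0.40, 0.10]
```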

Their approach represented conversations as trees and applied machine learning to investigate these questions. They found that toxic conversations tend to diminish over time, since toxic content often discourages further participation, a pattern consistent with previous research showing that toxic comments can deter individuals from engaging. The authors also analyzed how the generational parent comments influence the toxicity of the last comment in a conversation. Across several machine learning algorithms, the immediate parent comment was the most significant predictor of the last comment’s toxicity, with accuracies ranging from 69% to 75% across communities with different toxicity levels.
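
As a rough illustration of that prediction setup (the feature layout, the synthetic data, and the random-forest classifier are our assumptions, not the paper’s exact pipeline), the ancestor toxicities and the conversation average can be fed to any standard classifier:

```python
# Hedged sketch: predict whether the last comment in a thread is toxic from
# ancestor toxicities plus the conversation average. Synthetic data stands in
# for real labeled threads; the classifier choice is illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Columns: parent, grandparent, great-grandparent, conversation-average toxicity
rng = np.random.default_rng(0)
X = rng.random((500, 4))                # placeholder scores in [0, 1]
y = (X[:, 0] > 0.5).astype(int)         # toy label: toxic when the parent is toxic

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
print("accuracy:", model.score(X_test, y_test))
print("feature importances:", model.feature_importances_)  # parent column dominates here by construction
```

On real data, comparing per-feature importances (or training on one ancestor level at a time) is one way to quantify which generation of parent comment carries the most signal.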

Dr. Agarwal said, “Both of these studies show meaningful trends policymakers and social media platforms can take advantage of when combating online toxicity.”