This week, COSMOS researcher Zachary Stine presented the paper “Agent-based models for assessing social influence strategies” at the 9th International Conference on Complex Systems, hosted by the New England Complex Systems Institute and held July 22–27 in Cambridge, MA. The conference is a unique interdisciplinary forum that unifies and bridges the traditional domains of science and a multitude of real-world systems. The paper has been published in the conference proceedings, available here: https://link.springer.com/chapter/10.1007/978-3-319-96661-8_14

Stine, the lead author of the paper, is a Ph.D. student in Computer and Information Science at UA Little Rock. Dr. Nitin Agarwal, Jerry L. Maulden-Entergy Endowed Chair and Distinguished Professor of Information Science, is the paper’s co-author.

The paper argues that agent-based models (a class of computational models that simulate the actions and interactions of autonomous agents in order to assess their effects on a system as a whole) of opinion dynamics provide a useful environment for understanding and assessing social influence strategies. This approach allows researchers to build theory about the efficacy of various influence strategies, to be precise and rigorous about the assumptions surrounding such strategies, and to highlight potential gaps in existing models.
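To give a sense of what such a model looks like, the minimal Python sketch below implements a toy opinion dynamics simulation in which agents repeatedly shift their opinions toward randomly chosen partners. The function names, parameters, and update rule are illustrative assumptions and are not taken from the paper.

```python
# A minimal, hypothetical sketch of an agent-based opinion dynamics model.
# Each agent holds a continuous opinion in [0, 1] and, at every step, moves
# part of the way toward the opinion of a randomly chosen partner.
import random

def run_opinion_model(n_agents=100, n_steps=1000, mu=0.3, seed=None):
    rng = random.Random(seed)
    opinions = [rng.random() for _ in range(n_agents)]  # random initial opinions
    for _ in range(n_steps):
        i, j = rng.sample(range(n_agents), 2)            # pick an interacting pair
        # Agent i shifts a fraction mu of the way toward agent j's opinion.
        opinions[i] += mu * (opinions[j] - opinions[i])
    return opinions

if __name__ == "__main__":
    final = run_opinion_model(seed=42)
    print(f"mean opinion after simulation: {sum(final) / len(final):.3f}")
```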

For this paper, the researchers examined a strategy called amplification, commonly employed by so-called ‘bots’ on social media, and treated it as a simple agent strategy situated within three models of opinion dynamics, each using a different mechanism of social influence. While many published studies show how bots propagate misinformation on social media, very few examine how bots actually affect a population’s opinions.
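One plausible way to encode amplification as a simple agent strategy is sketched below: bot agents hold a fixed opinion, never update it, and are selected to “speak” more often than ordinary agents, modeling repetition. The class names, selection weights, and update rule here are illustrative assumptions and do not reproduce the paper’s actual implementation.

```python
# Hedged sketch: amplification as a bot strategy in a simple opinion model.
import random

class OrdinaryAgent:
    def __init__(self, opinion):
        self.opinion = opinion

    def influenced_by(self, other, mu=0.3):
        # Ordinary agents shift toward whoever influences them.
        self.opinion += mu * (other.opinion - self.opinion)

class AmplifierBot(OrdinaryAgent):
    def influenced_by(self, other, mu=0.3):
        # Bots ignore incoming influence; they only repeat their fixed opinion.
        pass

def step(agents, rng, bot_weight=5.0):
    # Bots are chosen as the "speaker" more often, modeling repetition.
    weights = [bot_weight if isinstance(a, AmplifierBot) else 1.0 for a in agents]
    speaker = rng.choices(agents, weights=weights, k=1)[0]
    listener = rng.choice([a for a in agents if a is not speaker])
    listener.influenced_by(speaker)

if __name__ == "__main__":
    rng = random.Random(7)
    agents = [OrdinaryAgent(rng.random()) for _ in range(50)] + \
             [AmplifierBot(1.0) for _ in range(5)]
    for _ in range(2000):
        step(agents, rng)
    ordinary = [a.opinion for a in agents if not isinstance(a, AmplifierBot)]
    print(f"mean ordinary-agent opinion: {sum(ordinary) / len(ordinary):.3f}")
```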

The three broad classes of social influence models identified for this study are (1) assimilative influence, (2) similarity-biased influence, and (3) repulsive influence. In total, 91 unique sets of conditions were tested; for each, 500 simulation runs were performed and analyzed, for a total of 45,500 simulation runs.
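The three mechanisms can be thought of as different pairwise update rules. The Python sketch below gives one hedged interpretation of each; the thresholds, step sizes, and exact functional forms are assumptions for illustration and do not reproduce the specific models used in the paper.

```python
# Illustrative pairwise update rules for the three broad influence classes.

def assimilative(o_i, o_j, mu=0.3):
    # Always move toward the other agent's opinion.
    return o_i + mu * (o_j - o_i)

def similarity_biased(o_i, o_j, mu=0.3, epsilon=0.2):
    # Move toward the other agent only if opinions are already close
    # (bounded-confidence style); otherwise stay put.
    return o_i + mu * (o_j - o_i) if abs(o_j - o_i) <= epsilon else o_i

def repulsive(o_i, o_j, mu=0.3, epsilon=0.2):
    # Move toward similar opinions, but away from very distant ones,
    # clamped to the [0, 1] opinion interval.
    delta = mu * (o_j - o_i)
    if abs(o_j - o_i) <= epsilon:
        return o_i + delta
    return max(0.0, min(1.0, o_i - delta))

if __name__ == "__main__":
    # Compare how each rule reacts to a distant opinion (0.2 vs. 0.9).
    for rule in (assimilative, similarity_biased, repulsive):
        print(f"{rule.__name__}: {rule(0.2, 0.9):.2f}")
```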

It had previously been assumed that bot-driven campaigns were highly effective at changing agents’ opinions through simple repetition. The experiments showed instead that influencing a group of agents under the three computational models is not easily achieved. Simple repetition of an opinion proved far less reliable at changing agents’ opinions than expected, succeeding only under very narrow circumstances. The researchers also observed that a population that begins with two distinct clusters of opinion is especially difficult to influence: under the simple repetition strategy, such a population continues to bipolarize, and only more complex strategies were found to influence the agents. In other words, agents with strong inherent opinions are far less likely to be influenced.

“While the findings presented in this paper are theoretical, they illustrate how small changes in our assumptions about how people influence each other’s opinions can dramatically affect the success or failure of a campaign that tries to manipulate a population’s opinions. It is often assumed that when bots on social media amplify some opinion, that inevitably more people will adopt the opinion being amplified. Our findings suggest that this assumption only holds under very specific and rigid assumptions about how people influence each other. Our findings will ideally show people studying the role of bots and online information campaigns that they must acknowledge their assumptions about social influence before they can accurately assess the threat of online information campaigns,” notes Stine.

In conclusion, the researchers expect that influencing a real human audience through repetition tactics will be extremely challenging, given the inherent complexities of how people influence one another.

This research is funded in part by the U.S. National Science Foundation, U.S. Office of Naval Research, U.S. Air Force Research Lab, U.S. Army Research Office, U.S. Defense Advanced Research Projects Agency and the Jerry L. Maulden/Entergy Endowment at the University of Arkansas at Little Rock.