In the digital age, international conflicts rely more and more on cognitive warfare: mis- and disinformation campaigns and, more generally, influence campaigns propagated online. Mitigating these threats requires significant research in computer and information science, research that the COSMOS Research Center has pioneered and continues to pioneer. Most recently, COSMOS Director Dr. Nitin Agarwal presented such research in a talk titled “Developing Socio-computational Approaches to Mitigate Socio-cognitive Security Threats in a Multi-platform Multimedia-rich Information Environment” at the 7th NYU Computational Disinformation Symposium, held on December 5, 2023.

This symposium is a high-profile event in the field of computational disinformation, funded by the DARPA Semantic Forensics (SemaFor) program and hosted by NYU’s Center for Cybersecurity (CCS). It is made even more notable by being invite-only, bringing together a range of experts from industry, the research community, government, civil society, and think tanks on the topic of disinformation. As Justin Hendrix, Nasir Memon, Stef Daley, and the NYU team put it in their introduction prior to the symposium,

“The explosion of activity and interest in generative AI applications has been accompanied by a commensurate amount of speculation about what it will mean for society and for democracy. For those more narrowly concerned with how to analyze media for signals of veracity, provenance, or intent, it is unclear how the rapid development and deployment of generative AI systems will impact their work.”

Seeking to address this problem, the symposium examines how to forecast what the future of AI and cyberthreats might look like and what frameworks can sharpen problem definition. It addresses the signals in the current digital sphere, whether technological, social, industrial, or political, that show where things are headed, and it considers how the forensic ecosystem of researchers and organizations can come together to identify and address these issues. Past installments have each carried a theme beyond disinformation itself, such as the 3rd year’s focus on the “business of disinformation,” which treated the network as an ‘ecosystem’ that propagates disinformation. This 7th installment’s theme was “forecasts” or, as the NYU team put it, “talks, panels, and exercises that allow us to synthesize ideas and cast our minds forward to better understand where things are headed.” Participation took four forms: interactive discussions such as panels and debates; virtual exercises and workshops engaging the community in ideas and questions; creative lightning rounds showcasing what media forensics and disinformation countering could look like in 2033; and 15-minute lightning talks summarizing recent research and investigative developments.

Dr. Agarwal presented his most recent research as one of the 15-minute lightning talks. The research, titled “Developing Socio-computational Approaches to Mitigate Socio-cognitive Security Threats in a Multi-platform Multimedia-rich Information Environment,” parallels the work he presented just last November at the NATO Science & Technology Organization Symposium on Mitigating and Responding to Cognitive Warfare (STO-MP-HFM-361). During the NYU symposium, Dr. Agarwal highlighted the notable contribution of his research: a multi-model, multi-theoretic approach that blends modeling, large datasets, and social science theories to outline and characterize influence campaigns conducted across multiple platforms. In particular, the research identifies key actors and the groups or mobs that form either as a result of these campaigns or through the efforts of bad actors. Predicting these forces matters significantly to cybersecurity since, as Dr. Agarwal puts it, “Propaganda disseminated through social media could potentially be used to persuade susceptible targets to disrupt or delay military operations through protests or other ‘non-lethal’ resistance.” Compounding the issue, narrative propaganda can “be easily manipulated and influenced by bots, trolls, and other influence operation tactics,” he says.

This research thus matters significantly to many international military operations, but it also advances the theoretical and intellectual understanding of disinformation. While his NATO presentation examined how the research might be applied practically by NATO and allied nations, Dr. Agarwal’s presentation at the symposium highlighted the theoretical and methodological gaps in the discipline that academicians, industry leaders, and policymakers ought to be thinking about.

Dr. Agarwal and COSMOS are grateful for the support of this research by the U.S. Air Force Research Laboratory, the Office of Naval Research, the U.S. Army DEVCOM Army Research Laboratory, the National Science Foundation (NSF), Entergy, and the Arkansas Research Alliance.