
Don’t Feed the Trolls: A Public Health Guide To Countering Twitter Bots and Trolls


Social media presents a promising opportunity for promoting public health, but as bots and trolls become an increasingly large part of the online landscape, public health researchers, practitioners and communicators need the right knowledge and tools to combat the misinformation these malicious actors spread online.

A new guide, developed by Faculty Research Assistant Amelia Jamison and Professor Sandra Quinn of the University of Maryland School of Public Health, describes the types and behaviors of malicious actors on Twitter and introduces strategies to combat them both on and offline. George Washington University’s David Broniatowski, an assistant professor in the Department of Engineering Management and Systems Engineering, was a co-author. The idea for the guide arose from the team’s earlier research on vaccines and weaponized health communication. It was published in the American Journal of Public Health in April.

“We’ve seen the spread of vaccine misinformation across social media platforms and how that can very quickly undermine public health and confidence in public health,” says lead author Amelia Jamison. “These malicious actors are not new, but they’re also not very well known. As a public health researcher, I started to keep a list and this guide is meant to bridge the knowledge gap for other public health researchers.”

Using publicly available datasets from sources including NBC, the United States Congress, Twitter and a pipeline of bot and troll accounts from their research partners at Johns Hopkins University, the researchers found that the threat malicious actors pose to public health is multifaceted. They identify and describe five types of malicious actors on Twitter: automated accounts (traditional spambots, social spambots, content polluters and fake followers) and human users (general trolls and state-sponsored trolls).

They found that these malicious actors can directly influence users by spreading content that works against public health goals. For some, this may be a primary motivation. For others, the topic of vaccines may be used as “clickbait” to snare new followers, sell products or spread malware. These tactics distort the social media data that public health researchers use to gauge public sentiment or conduct surveillance, and they also erode public confidence in online public health communications.

To combat these malicious actors online, the researchers caution that simply increasing the presence of “official” health narratives may not be enough.

“Shutting down bots and trolls is not a viable solution,” explains Jamison. “We should be combating their messages, but not in a way that ‘feeds the trolls.’” Thinking like a bot may be the first step in combating them, both online and off.

Recommendations include increasing social media literacy to help alert the public to malicious users and improve recognition of bot-driven narratives, as well as offline efforts in which public health practitioners use trusted relationships to dispel misinformation directly.

Most importantly, the researchers urge public health researchers, practitioners and communicators to form interdisciplinary partnerships with computer scientists, systems engineers and other technology experts.

They hope that this guide will raise awareness of the different malicious actors and their tactics, and they call for more research to fully understand how misinformation spreads online.

The study is part of the cooperative Supplementing Survey-Based Analyses of Group Vaccination Narratives and Behaviors Using Social Media project, co-led by Dr. Sandra Quinn and Dr. David Broniatowski.
