Combatting Social Media Manipulation—Globally
August 2020
Topics: Artificial Intelligence, Public Health (General), Election Integrity, Social Networking, Information Security, Social and Behavioral Sciences (General), Information Management (General)
There's a plague of disinformation and misinformation infecting our information sources and social media exchanges today. And it's having profound and wide-ranging effects—from undermining the legitimacy of elections to influencing our personal healthcare decisions.
"Propaganda has been used as a tool of manipulation for centuries. What's new is the scale of the problem," says Jennifer Mathieu. She leads MITRE Social IntegrityTM Platform, a hosted ecosystem that provides dis- and misinformation threat detection and sharing services to users in near-real time.
While social media networks enable the broad dissemination of information, these same networks also allow disinformation (intentional manipulation) and misinformation (unintentional) to propagate at lightning speed.
"Various actors take advantage of that fact," Mathieu says, "and the dis/misinformation they introduce can muddy the waters and create barriers to trusting what people see online."
Attacking the Problem Globally
Currently, the world's citizens must discern for themselves which information is accurate and which sources can be trusted.
"No single organization can keep up with the scale of dis/misinformation, so we have to come up with creative ways to address it," Mathieu says. From MITRE's perspective, that means mobilizing organizations from across the world to work on solutions together.
"Social media platforms are international, so we need to engage globally," she explains. "We've spoken to over 40 organizations in the last two years—including think tanks, nonprofits, fact checkers, academics, industry, and the social media platforms—and everyone agrees with that."
Stopping the Spread at Its Source
MITRE and our partners have identified a near-term effort to focus on: creating a threat detection service that works across social media channels, along with a mechanism for sharing the identified threats broadly.
"One use of our platform is to identify topics that are propagated by inauthentic accounts," says Mike Fulk, technical lead for the MITRE Social Integrity Platform. "These include algorithm-based, or 'bot' accounts, as well as individuals masquerading as someone other than themselves, or 'sock puppets.'"
To achieve this goal, we're partnering with many of the world's leading software tool providers. "We serve as an integrator of cross-social media platform efforts," Mathieu explains. "Our platform includes a unique set of features to support topic-based monitoring across sources.
"For example, for COVID-19 we're monitoring emerging topics and identifying instances where inauthentic accounts are manipulating content. We're also measuring the amplification of dis/misinformation across social media platforms. We then share with users the topic and emerging trends that are being manipulated."
MITRE's next steps, she says, will be to recommend science-based communication strategies to address the identified dis/misinformation issues and resulting behavior.
"For instance, we might recommend a health communication persuasion technique based on the inoculation theory," says Denise Scannell, who leads MITRE's Health Communication Science team. That's a strategy to protect attitudes from change, aimed at reducing the effect of dis/misinformation."
These threat detection services will allow for earlier detection and sharing of dis- or misinformation topics and eventually enable a coordinated response by users of the services.
"The services will also help inform online integrity policies, procedures, and standards, and ultimately increase trust in the online and news infrastructure," Mathieu says.
A Decade of Experience in Social Media Analytics
MITRE's role stems from two things: expertise in the field and a unique vantage point as the operator of seven federally funded research and development centers (FFRDCs).
"Our role is an objective one, so working collaboratively with a broad spectrum of stakeholders is natural for us," Mathieu says. "And we've worked on online and social media analytics for 10 years—far longer than most organizations."
During that time, we've developed a number of tools to detect and combat online manipulation. In 2019, we presented a Transatlantic Vision for Addressing Disinformation at the EU DisinfoLab Annual Conference.
"Recently, we designed a prototype Coronavirus Subreddit Dashboard to help detect dis/misinformation about COVID-19," Mathieu says. "For instance, we identified an app claiming to allow users to self-diagnose for the coronavirus."
We're also active in another crucial area prone to online manipulation—election integrity. Our SQUINT™ app lets election officials easily report dis/misinformation from online and social media sources about the voting or registration process—with a swipe, tap, or click. "We support the app by providing trending information for the topics identified by election officials," Mathieu says.
But independently developed tools alone, Mathieu acknowledges, aren't enough.
"Only by working together to combat the spread of dis/misinformation will we truly be successful in curtailing the harm it can cause. That's our focus and our vision."
—by Marlis McCollum