By Olivia Shoemaker
Having grown up in Minnesota, I was confronted with my own white privilege and complicity in white supremacy as the systems of racism I had participated in as a white Minnesotan came into sharper focus following the killing of George Floyd at the hands of local police. My first thought – how could this happen in Minnesota? – betrayed my blindness and ignorance towards the pervasive racism in my community.
At the same time, I was disoriented by the churning of rumors and conspiracy theories that distracted from the movement for justice as I sought to support the protests.
Platforms like Twitter, Facebook, and Instagram have elevated the voices of critical advocates. They have also provided a breeding ground for social disinformation, likely driven in part by foreign actors who seek to exploit social divisions in the U.S., undermine local grassroots initiatives, and selectively support divisive narratives.
The U.S. government, social media companies, and Internet users must all work to mitigate the influence of social disinformation and bot activity in order to address deep-rooted and long-standing racial inequalities that are also quickly becoming national security concerns.
Five days after Floyd’s death, Minneapolis Mayor Jacob Frey tweeted that the city was possibly confronting “foreign actors” who sought to “destroy and destabilize [the] city.” On the same day, Senator Marco Rubio reported similar concerns through his role on the Senate Select Committee on Intelligence, noting that he was seeing “counter-reactions from social media accounts linked to at least 3 foreign adversaries.” The Department of Homeland Security officially “warned of Chinese, Russian and Iranian action in spreading disinformation in the wake of the protests” two weeks after the killing.
While the information operations surrounding the killing of George Floyd are still being investigated, the playbook is already well established.
In 2016, Russian trolls intervened in the aftermath of the death of Philando Castile near St. Paul, Minn., by targeting posts and narratives surrounding the Black Lives Matter movement. Fake accounts created Facebook invitations to demonstrations that conflicted with previously organized events, and the merging of real and false narratives created an environment of distrust and confusion. While misinformation is inaccurate information that is passed on accidentally, disinformation is false information spread intentionally and often maliciously.
In other cities like Baltimore and Ferguson, Russia’s Internet Research Agency created fake accounts like “Don’t Shoot,” “BlackToLive,” and “Black Matters US” designed to inflame violence and, separately, “encourag[e] extreme right-wing voters to be more confrontational” in the run-up to the 2016 election. These Russian trolls posed as legitimate political parties and local media publications and geographically targeted Americans through carefully timed campaigns in cities with high-profile cases of police brutality.
While social media disinformation campaigns are largely coordinated by humans, foreign actors have increasingly turned to the use of automated social bots to expand the scope and sustainability of disinformation campaigns.
Researchers from Cornell University define social bots as “social media accounts controlled completely or in part by computer algorithms.” Social bots make up a significant share of most major platforms’ user bases, including up to 15% of Twitter accounts in 2017. Many of these bots are harmless, used to push out scheduled content or to serve as fake followers for celebrities and aspiring influencers, but problems arise when human-mimicking accounts seek to opaquely manipulate public opinion or behavior.
Increasingly, social bots use advanced AI to mimic real profiles by stealing the profile pictures of unsuspecting users, engaging with other users, amplifying misinformation and disinformation, and building follower bases by cultivating specific interests and following popular accounts. On their own, these bots can still be spotted by wary social media users, which is why generators of social disinformation now coordinate individual bots into “botnets.” These vast networks of largely real-looking accounts work together to promote selected content and are often detectable only through machine learning, if at all.
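To make the idea of coordination-based detection concrete, here is a minimal sketch in Python with entirely synthetic data. It assumes a simple, hypothetical feature for each account (the fraction of its posts made in each hour of the day) and flags groups of accounts whose schedules are implausibly similar; the feature design and thresholds are illustrative assumptions, not any platform’s actual detection pipeline.

```python
# Illustrative sketch: surfacing a coordinated "botnet" by clustering accounts
# whose posting schedules are suspiciously similar. All data here is synthetic.
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Hypothetical per-account feature: fraction of the account's posts made in
# each hour of the day (24 numbers summing to roughly 1).
organic = rng.dirichlet(np.ones(24), size=200)          # varied, human-like schedules

# A coordinated botnet: 30 accounts that all follow one shared, spiky schedule,
# differing only by small random noise.
template = rng.dirichlet(np.ones(24) * 0.3)
botnet = np.clip(template + rng.normal(0, 0.01, size=(30, 24)), 0, None)

features = np.vstack([organic, botnet])

# DBSCAN groups accounts whose activity profiles are unusually similar;
# a label of -1 means no coordinated cluster was found for that account.
labels = DBSCAN(eps=0.1, min_samples=5).fit_predict(features)
for cluster_id in sorted(set(labels) - {-1}):
    members = np.where(labels == cluster_id)[0]
    print(f"cluster {cluster_id}: {len(members)} accounts posting on a near-identical schedule")
```

Real systems draw on far richer signals (shared links, retweet timing, account creation dates, network structure), but the underlying idea of surfacing improbable coordination for human review is the same.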
Bots use algorithms to convince us that certain opinions or events on social media are more popular than they actually are; they can also amplify protests and demonstrations, including events that were themselves created by other fake accounts.
Local organizer Mia Grimm recalls taking over a demonstration in 2016 that suddenly popped up on social media in a random part of the city after the death of Philando Castile. The event split attendance away from BLM-organized protests, and Grimm felt compelled to dispatch another BLM advocate to the new event to make sure that anyone who showed up would be protected by organizers. The Facebook page that created the event was later traced back to the Russian Internet Research Agency.
While it will take time to tease out the impact of foreign intervention in 2020, my friends and family in Minnesota have already witnessed the ways that social media and the amplification of disinformation can influence their communities and the fight for racial justice.
A family friend chased cars with white supremacist stickers and no license plates off of his street after rumors spread on social media that his neighborhood was unguarded by police, ripe for looting, and a safe haven for Klan members.
My colleagues gossiped and grew angry about the text messages of a semi-truck driver who nearly plowed into a crowd of protesters on an I-35 highway bridge. It turned out later that the screenshots had been faked; law enforcement had never accessed his phone.
I watched in shock as students I had attended high school with retweeted and reposted conspiracy theories that Bill Gates or George Soros were the grand orchestrators behind George Floyd’s death (or claims that he had never died at all), rather than confronting the reality of police brutality.
Foreign-controlled social bots are not creating racial divisions out of thin air; rather, these countries are taking advantage of pre-existing inequalities that America must tackle.
In Minnesota alone, 32 percent of black residents fall under the poverty line, compared with 7 percent of their white counterparts. White fourth-graders score 20 percent higher than African American students on standardized reading tests, indicating educational and socio-economic disparities. The homeownership gap is one of the largest in the country, with around 25 percent of black families owning homes compared to 76 percent of white families.
Racism in the U.S. is an asset to foreign actors, who see the country’s preoccupation with internal divisions and growing violence as a blow to America’s international reputation as a country of peace and justice. Russia’s success in its 2016 election campaign to “exacerbate social divisions in U.S. culture” gives foreign actors no good reason to stop, and actors like China, Iran, and North Korea are also getting involved. With the increasing automation of bot activity, a small group of people could wield extraordinary influence over our country.
Social disinformation campaigns are constantly evolving to become less detectable. It is also increasingly difficult to trace where posts originate and whether they are connected to coordinated foreign efforts. As tech companies and governments become more attuned to organizations like the IRA, these actors turn to creative workarounds like outsourcing disinformation campaigns to proxy groups in countries like Ghana and Nigeria.
Regulators and individuals must take action.
The first and most obvious solution is better policing of disinformation by law enforcement agencies and social media platforms. Algorithms have allowed regulators to move beyond crowdsourced detection of bot activity into the realm of supervised machine learning, where tools like Botometer differentiate human and bot behavior by looking at factors like time zone, language metadata, device metadata, and content deletion patterns. To detect botnets that transcend traditional algorithmic patterns, researchers are now exploring evolutionary algorithms that learn with human input over time.
[Image: the Botometer bot-detection tool. Credit: https://botometer.iuni.iu.edu/#!/]
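For readers who want a sense of what supervised detection looks like in code, below is a minimal sketch in Python in the spirit of tools like Botometer, trained on synthetic, labeled accounts. The four features (posting volume, content deletion rate, number of distinct devices, and a default time zone flag) are illustrative stand-ins loosely inspired by the factors mentioned above, not the actual feature set of Botometer or any real classifier.

```python
# Illustrative sketch of supervised bot classification on synthetic accounts.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(1)

def synth_accounts(n, is_bot):
    """Generate hypothetical per-account features for humans (0) or bots (1)."""
    tweets_per_day   = rng.gamma(2, 40 if is_bot else 5, n)        # bots post far more
    deletion_rate    = rng.beta(5 if is_bot else 1, 20, n)         # bots purge content often
    distinct_devices = rng.integers(1, 2 if is_bot else 5, n)      # bots stick to one client
    default_timezone = rng.binomial(1, 0.8 if is_bot else 0.3, n)  # unset profile metadata
    X = np.column_stack([tweets_per_day, deletion_rate, distinct_devices, default_timezone])
    return X, np.full(n, is_bot)

X_h, y_h = synth_accounts(500, 0)
X_b, y_b = synth_accounts(500, 1)
X, y = np.vstack([X_h, X_b]), np.concatenate([y_h, y_b])

# Train on labeled accounts, then check how well the model separates the rest.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["human", "bot"]))
```

In practice, the hardest part is obtaining trustworthy labels and account-level data in the first place, which is one reason researchers keep pressing platforms for better access.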
To aid researchers, social media platforms must provide open APIs so that social scientists can investigate trends. As a University of Oxford report notes, “sharing data about public problems should be more than performative.” Social media companies must also make bot detection methods more publicly available to their users to increase awareness of disinformation. Twitter is already taking a small step by warning users before they retweet misleading information.
The U.S. government can follow suit by considering changes to Section 230 of the Communications Decency Act, which currently shields platforms from liability for the content posted on them.
Even in a world of exceptional AI detection efforts, there is simply not enough time to flag or remove all disinformation on the internet. With hostile, foreign-run information operations becoming “regular and acknowledged parts of the social media ecosystems of the future,” we need greater individual attentiveness to information consumption. This more conscientious social media consumption cannot and should not be enforced by any entity except the individual. We all need to verify the information we consume, especially if it confirms our preconceived biases.
A final solution lies outside of cyberspace. We need strong leadership, both political and grassroots, that takes meaningful change and racial justice reform seriously. If we continue to ignore issues of racial justice and inequality that are fundamental to our history as Americans, then we play directly into the hands of those who spread disinformation in the first place.
Disinformation plays no legitimate role in the truth-seeking process necessary to American democracy, and strong American leaders must look past the veil of confusion on social media to the tangible problems and racial injustices that plague our society and my home, Minnesota.
Olivia Shoemaker is an intern at the Lobo Institute and senior at Yale University where she studies Global Affairs. After graduation, she hopes to work in the field of national security. You can find her on LinkedIn or contact her at olivia@loboinstitute.org.
Zack Baddorf, Senior Fellow at Lobo Institute, contributed to this report.
The views and opinions expressed in this paper are the views of the author and not necessarily the views of Lobo Institute. For more information on the institute or to get on the mailing list for our papers and LoboCasts, please go to Lobo Institute.