AXIOS Media Trends
Illustration: Aïda Amer/Axios
Misinformation bots are deploying increasingly sophisticated techniques to game social media platforms, even as the platforms make changes to weed them out, according to new studies.
Why it matters: Most Americans say they can't distinguish bots from humans on social media, according to a recent Pew Research Center survey.
Driving the news:
- Focus on speed: Social bots spread low-credibility content very quickly, according to a new study from Indiana University published in the journal Nature. The study suggests that bots amplify questionable content in the early spreading moments before it goes viral, such as the first few seconds after an article is first posted on Twitter.
- Using specific targets: Bots target specific social influencers, who are more likely to engage with bots, according to a new study in the Proceedings of the National Academy of Sciences (PNAS). This elevates content more quickly than exposing it to everyday users or other bots, and it suggests that bots are more strategic about whom they target than previously thought.
- Elevating human content: Bots aim to exploit human-generated content because it is more prone to polarization, according to the PNAS study. "They promote human-generated content from (social) hubs, rather than automated tweets, and target significant fractions of human users," the report said. This helps social bots accentuate the exposure of opposing parties to negative content, which can exacerbate social conflict online.
- Targeting original posts, not replies: Bots spread bad content through initial tweets and postings rather than replies, according to the Nature study. "Most articles by low-credibility sources spread through original tweets and retweets, while few are shared in replies," per the study. "This is different from articles by fact-checking sources, which are shared mainly via retweets but also replies."
- Gaming metadata: Bots are using more metadata to mimic authentic human engagement, not just the way humans post, according to a new study from Data & Society. As platforms get better at detecting inauthentic activity, bots are using metadata (photo captions, followers, comments, etc.) to make their posts seem more human-like.
Social platforms have been trying to reduce the content-elevating signals that bots can easily game. Twitter, for example, made follower counts less prominent on its iOS app by shrinking their font size in a recent redesign, per The Verge.
What's next: The best way to tackle the problem at scale is by identifying the source of inauthentic behavior, says Joshua Geltzer, executive director of Georgetown University's Institute for Constitutional Advocacy and Protection.
"Although it's improved over the past two years, there needs to be an even better collaboration between the government and the private sector about detection of bad activity in the early stages. While the government doesn't normally share this type of information with the private sector, they should be doing so in order for platforms to act on it and vice versa."
— Joshua Geltzer