Dive into "The Gatekeepers," a gripping exploration of the urgent challenges social media platforms face in an era rife with digital disinformation. As host Jamie Bartlett and guests such as Maria Ressa reveal, foreign governments, particularly Russia, have leveraged these platforms to meddle in political affairs, as seen in the US elections. Discover how Facebook and Twitter executives grapple with the use of their sites for sophisticated voter manipulation—a startling testament to the complexities of curbing foreign interference.
As the episode unfolds, the pressing issue of misinformation and its rampant spread on platforms like Facebook takes center stage. Delve into the economic underpinnings that fuel fake news, where profit-driven individuals outpace foreign actors in spreading falsehoods, amplified by algorithms that prioritize engagement over accuracy. Meanwhile, tech giants like Facebook and Google confront the harsh reality of being unprepared for such disinformation campaigns, with executives like Zuckerberg facing the consequences before US Senate committees. Through this compelling dialogue, witness the transformative journey of tech companies as they transition from resistance to reluctant acceptance of their pivotal role in moderating content.
Sign up for Shortform to access the whole episode summary along with additional materials like counterarguments and context.
Foreign governments, particularly Russian entities, have employed social media platforms to influence political outcomes, such as the US elections. The Internet Research Agency, a Russian troll farm, sent agents posing as tourists across the US to collect information to craft a social media strategy aimed at election interference. Facebook and Twitter executives have admitted their platforms were used in unexpected and sophisticated ways to manipulate voters, highlighting a significant challenge in detecting and preventing such foreign interference.
Misinformation and fake news, particularly on Facebook, have proliferated, with malicious actors exploiting the platform's emphasis on engagement over truth. In Macedonia, teenagers discovered that substantial financial gains could be made by generating fake news favoring Donald Trump during the 2016 election. Their success was due in part to Facebook's engagement-prioritizing algorithms, which often amplified false stories—a prime example being the fabricated claim that Pope Francis endorsed Trump. This phenomenon was driven more by individuals seeking profit than by foreign interference, although both played notable roles.
Tech platforms, including Facebook and Google, have been largely unprepared for digital disinformation campaigns. Executives like Facebook's Mark Zuckerberg underestimated the threat, dismissing early warnings from journalists like Maria Ressa regarding manipulative activities on the platform. Meanwhile, early concerns raised by figures such as Roger McNamee about election interference were ignored. The recognition of these campaigns as a form of warfare came too late, prompting a belated investment in security measures, including the expansion of Facebook’s security team to approximately 40,000 employees.
Tech companies face growing challenges in content moderation, having to act as gatekeepers—a role they initially resisted but now embrace due to the prevalence of disinformation and legal pressures. Content moderators like Yoel Roth at Twitter have found themselves overwhelmed by the volume and nature of harmful content. The awakening to their platforms' role in shaping public opinion came sharply after US intelligence reports detailed Russian interference using Facebook. Following this, tech CEOs like Mark Zuckerberg were called before the Senate committee, marking a turning point in social media companies' acceptance of the need to manage content more actively and decisively.
1-Page Summary
Social media platforms have been manipulated by foreign entities, notably Russian troll farms, with the intention of influencing political outcomes.
In an alarming revelation of cyber interference tactics, the episode discloses how Russian agents targeted US electoral processes.

Agents from the Internet Research Agency, a Russian troll farm, traversed the United States deceptively posing as tourists and collected numerous photographs. After their return to St. Petersburg, their true purpose came to light: influencing the US presidential election through a concerted social media strategy.
Facebook's Mark Zuckerberg has expressed regret over the platform’s slow response in recognizing the sophisticated information operations conducted by Russian operatives in 2016. The platform had been on guard for traditional cyberattacks but was unprepared for this kind of manipulation.
S ...
Manipulation of social media by foreign governments
The growth of misinformation and the phenomenon of fake news, particularly as it relates to Facebook, highlight an urgent problem in the digital age where truth is often overshadowed by engagement.
In 2016, teenagers from Veles, Macedonia, discovered that they could earn significant income from crafting and distributing fake news on Facebook.
Craig Silverman reports a significant spike in engagement with fake news stories on Facebook as the 2016 U.S. election approached. Marco, a young man from Veles, created pro-Trump English language websites that spread misinformation. These teenagers, including Marco, were noticed for driving new cars and wearing designer suits—all financed by the ad revenue from their websites.
Jamie Bartlett notes that fake news stories, especially those with a right-wing slant during the 2016 U.S. election, were particularly successful. The most viral story, which falsely claimed Pope Francis endorsed Donald Trump, exemplifies how these stories were designed to appeal to certain demographics and exploit emotional reactions.
The websites creat ...
Spread of misinformation and fake news
Tech giants are now grappling with the reality of being unprepared for the digital disinformation campaigns that have had a global impact. Despite implementing new policies, earlier warning signs were either dismissed or underestimated by these platforms.
There was a clear trend of company executives underestimating the threat posed by disinformation campaigns, with notable figures like Maria Ressa and Roger McNamee raising early alarms.
Jamie Bartlett reports on how platforms like Facebook initially failed to distinguish between fake news and legitimate content, focusing on engagement without considering the consequences. Nobel Peace Prize-winning journalist Maria Ressa ran Rappler, the Philippines' biggest online news site, and noticed suspicious pro-Duterte activity on Facebook that indicated manipulation rather than genuine community concern. When Ressa brought her data to Facebook's Singapore office showing how sock puppet accounts reached 3 million people and targeted her personally, the executives did not initially grasp the significance of her warnings.
Craig Silverman notes that Mark Zuckerberg thought it a "crazy idea" that fake news on Facebook could influence the election outcome. By contrast, Roger McNamee, a former mentor of Zuckerberg's, recognized the detrimental effects of Facebook's culture on democracy and elections. He attempted to alert Zuckerberg and Sheryl Sandberg, providing an article draft outlining hi ...
Tech platforms unprepared for disinformation campaigns
The evolving challenges tech companies face in content moderation underscore their reluctant transformation into gatekeepers—a role they initially tried to avoid but one they're increasingly pressured into by disinformation campaigns and legal concerns.
Yoel Roth, a former content moderator at Twitter, recalled an incident that highlighted the severe nature of content moderation when he was shaken by the graphic content he had to review. This incident underscores the difficulties moderators face as they sort through harmful content that proliferates on social media platforms.
After Donald Trump's election, Facebook employees expressed distress, signaling a growing awareness of the platform's influence on public opinion. US intelligence reports revealed that Russian propagandists used Facebook to interact with vast numbers of users, pushing those users' existing beliefs toward more extreme positions rather than directly instilling false convictions.
Tech company CEOs, including Facebook's Mark Zuckerberg, were summoned before the Senate committee to address these issues. ...
Struggles with content moderation