Research Indicates Facebook’s Design Hinders Misinformation Control

Amidst the proliferation of online misinformation during the COVID-19 pandemic, various platforms instituted policies and measures to counter the spread of false information. The effectiveness of these initiatives, particularly Facebook’s vaccine misinformation policies, has come into question in research published in Science Advances. The study, titled “The Efficacy of Facebook’s Vaccine Misinformation Policies and Architecture During the COVID-19 Pandemic,” was led by researchers at the George Washington University with contributions from researchers at Johns Hopkins University.

The study found that Facebook’s efforts were hindered by the design of the platform itself. David Broniatowski, the lead study author and an associate professor of engineering management and systems engineering at GW, emphasized that combating misinformation requires looking beyond content and algorithms. He pointed out that Facebook’s core purpose of connecting individuals with common interests played a significant role in sustaining vaccine hesitancy and the search for misinformation.

Facebook’s structural features, such as fan pages that promote brands and community influencers, allowed a relatively small group of influencers to reach broad audiences. These influencers could create groups explicitly designed to foster communities whose members exchanged information, including misinformation or other engaging content from outside the platform.

Intriguingly, despite Facebook’s substantial efforts to eliminate anti-vaccine content during the pandemic, engagement with such content did not decline significantly compared to prior trends, and, in some cases, it even increased. This alarming trend underscored the difficulty society faces in eradicating health misinformation from public spaces. Additionally, the content that remained on the platform increasingly linked to off-platform, low-credibility websites and misinformation on alternative social media platforms, further fueling vaccine hesitancy.

The unrestrained spread of misinformation was compounded by the fact that the anti-vaccine content remaining on Facebook became more misleading, often featuring sensationalized false claims about vaccine side effects that were difficult to fact-check in real time. The study also noted “collateral damage”: pro-vaccine content may have been removed under the same policies, ultimately leaving a more polarized landscape of vaccine-related content.

Furthermore, producers of anti-vaccine content demonstrated more effective coordination in distributing content across pages, groups, and users’ news feeds compared to pro-vaccine content producers. Despite Facebook’s adjustments to its algorithms and content removal efforts, the platform’s inherent architecture continued to counteract these interventions.

Broniatowski’s work suggested that social media platforms should collaborate to establish a set of “building codes” to mitigate online harms and promote public health and safety. Like the codes governing physical structures, these should be developed through partnerships among industry, government, and community organizations, and grounded in robust science and best practices.

Notably, this research stands as the first and only scientific evaluation of the world’s largest social media platform’s systematic attempt to remove misinformation and the accounts that spread it.

Source: George Washington University