The Ripon Forum

Volume 51, No. 5

November 2017

Reactions & Regulation in the Age of Computational Propaganda

November 13, 2017

by NICK MONACO & SAMUEL WOOLLEY

Washington’s scrutiny of Silicon Valley’s biggest tech firms has reached its peak in recent weeks, as various committees work to discern the ways social media platforms were used to manipulate public opinion during the 2016 U.S. election.

Facebook and Twitter have revealed the extent to which the Internet Research Agency, a contracted arm of the Russian government, penetrated and exploited social networks during the contest. The platforms have disclosed hundreds of thousands of dollars in online advertising and 671 accounts and pages linked directly to the Agency, which has been disseminating disinformation online with varying degrees of success since at least 2014.

In response to repeated requests from lawmakers and expert researchers, Facebook and Twitter have announced measures aimed at curbing the influence of disinformation on their networks, including greater transparency in online political advertising on both platforms. While these moves are steps in the right direction, more still needs to be done to illuminate how American citizens are being politically coerced, harassed, and silenced on social media. Specifically, how are the algorithms that curate public news feeds manipulated, and how is computational propaganda carried out at scale? Regulation against certain uses of disinformation tools, such as bots, would be a welcome development. Legislation targeting digital tools themselves, however, could be detrimental to the internet and free speech.


The use of social media bots has been at the forefront of congressional inquiries into Russian manipulation in 2016. Bots are computer programs built to carry out automated tasks online. They can be programmed to interact with users, promote messages, or perform more mundane work such as managing permissions in a chatroom. Recent media coverage has focused on social bots: iterations of this automated technology that pose as humans or interact with humans online. Some of the more unscrupulous social bots on sites like Twitter, Facebook, and YouTube promote political messages and game social media algorithms to drive online trends. The effect is to manufacture political consensus: by creating artificial trends or manipulating news feeds, they make particular information or people appear to be supported by real human traffic.
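The mechanics of gaming a trend-ranking algorithm can be illustrated with a toy simulation. This is a hypothetical sketch, not any platform's actual ranking logic: it assumes a simple volume-based trending score, under which a small botnet posting at high frequency easily outweighs a much larger group of genuine users.

```python
from collections import Counter

def trending_score(posts):
    """Toy trending metric: raw mention volume per hashtag.
    Real platforms weigh many more signals, but volume is a core input."""
    return Counter(tag for _, tag in posts)

# 200 genuine users each mention a protest hashtag once.
organic = [(f"user{i}", "#protest") for i in range(200)]

# 20 bot accounts each post 50 times to swamp it with a rival tag.
botnet = [(f"bot{i}", "#counter") for i in range(20) for _ in range(50)]

scores = trending_score(organic + botnet)
print(scores.most_common(2))  # the bot-driven tag tops the chart
print(len({u for u, _ in organic}), "organic accounts vs",
      len({u for u, _ in botnet}), "bots")
```

Twenty accounts generate five times the volume of two hundred real people, which is why unique-account analysis, rather than raw message counts, is central to detecting this kind of manufactured consensus.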

Bots themselves, however, are neither good nor bad; they are simply an infrastructural part of the internet. In fact, bots generate slightly over half of all internet traffic. Tools that everyday users know and love, such as search engines, Wikipedia, and chatrooms, would not be possible without these automated agents. For that reason, any policy concerning bots should target specific, malevolent uses, such as driving political messages or rendering protest hashtags irrelevant, rather than banning bots altogether. Social bots can be used to hide the identity of those behind political manipulation and to massively amplify digital attacks. They can, however, also serve as a social prosthesis for democracy, allowing journalists and civil society groups to scour large datasets and automate aspects of reporting and political communication that would otherwise have to be done manually.

Some argue that private companies alone should be the ultimate arbiters of what is regulated on their networks. While there is merit to this view, especially in terms of preserving users’ freedom of expression, there are important caveats that cannot be ignored. The track record of these companies suggests that self-regulation is not sufficient: big tech’s conflicts of interest tend to go unchecked until it is too late. The tough truth is that it is tenuous at best to claim that tech giants, left to their own devices, will curb computational propaganda on their networks. With Google and Facebook accounting for 77% of digital ad revenue in the United States and nearly all of that market’s new growth, democratically oriented software design remains an abstraction, plainly secondary to concrete profits.


A litany of events over the past year alone has made this evident: Mark Zuckerberg dismissed out of hand as “pretty crazy” the idea that online disinformation may have influenced the 2016 election; Google stifled criticism in the U.S. of its business practices abroad; and Twitter deleted data critical to understanding the exploitation of its platform, mere days after publicly criticizing third-party research over data limits the company itself imposes. A fundamentalist defense of purely private regulation also ignores the very real danger of regulatory capture: tech giants have spent over $150 million on lobbying in the past decade, with a vast increase in the past five years.

Civil society groups and expert researchers — such as the Alliance for Securing Democracy, the Atlantic Council’s Digital Forensics Lab, Bellingcat, and ComProp — have all made significant contributions to illuminating the dark underbelly of online disinformation, even as tech giants have been coy about their knowledge or critical of such research. They’ve revealed that there’s no marketplace of ideas when bots can amplify or dampen any message. Only after sufficient outcry from legislators, researchers, and the public have private companies even begun to openly acknowledge these problems. Even after Facebook admitted the presence of Russian disinformation on its network, for instance, Columbia Professor Jonathan Albright was quick to point out that it had vastly underestimated the number of users the propaganda had reached. Bots and false amplifiers can even be said to represent a new form of censorship – one based on content amplification rather than content suppression. Legal experts have astutely observed that our current legal framework is ill-equipped to handle such challenges.

It is plain that all sides have their blind spots: scholars and researchers grasp technology’s impact on democracy but lack the access to data, the computing power, and the business savvy of the private sector; business interests can be naïve and myopic about the political and social harm their platforms inflict on democracy; and policymakers can lack the technological literacy to craft effective policy.

What is needed is more cooperation between experts, private industry and policymakers to craft both public and private policies that will preserve the inviolate American principle of free speech, while also limiting the insidious harm that abuse of online networks can incur.

Private self-regulation, uninformed and heavy-handed public policy (such as the measures currently being proposed in Brazil and Germany), and maintaining the anarchic status quo are all undesirable options. A blue-ribbon commission composed of members from all parties would be a proper first step, and would have the highest probability of benefiting all stakeholders and ensuring that the internet remains a positive-sum game for everyone involved.

Informed legislators, advised by the academic experts who best understand these technologies, can prohibit their malicious uses by law. This would give private companies the necessary latitude to manage their proprietary software while also preventing regulatory capture. As Tim Wu writes, “If we believe in liberty, it must be freedom from both private and public coercion.”

Samuel Woolley is the Director of Research of the Computational Propaganda Project at the Oxford Internet Institute, University of Oxford. He is also a fellow at Alphabet’s think-tank and technology incubator, Jigsaw. Nick Monaco is a research associate on the Computational Propaganda Project. He is also a research associate at Jigsaw.
