Tarleton Gillespie (Microsoft Research), Jillian York (Electronic Frontier Foundation), Sarah Myers West (University of Southern California), José van Dijck (Royal Netherlands Academy of Arts and Sciences), Sarah Roberts (UCLA).
Tarleton Gillespie opened by noting that most users never run into content moderation rules: they never confront what gets deleted or suspended, particularly not the details. Other users run into those rules again and again. It’s also important to recognise that there’s a whole apparatus: not just the rules, but also the people who evaluate content, handle complaints, and revise policies. This apparatus is largely opaque to users, even when they interact with parts of it. It’s also opaque to scholars. How do we begin to open this up?
Sarah Roberts talked about the shift from our online presence being confined to a server in someone’s closet to being hosted by multinational entities. Firms have outgrown any policy frameworks they (or others) can come up with, and they actively seek outside assistance in developing policy. Content moderation is often outsourced: what is the experience of those working with this content? We also need to recognise that one person’s censorship is another’s security. Some suggest that total automation of these processes would fix these concerns, but it wouldn’t solve the opacity of moderation processes, and it wouldn’t prevent the production of problematic content in the first place.
Sarah Myers West voiced her frustration with the limits of research on moderation policies. Platforms are meant to give users greater voice, but in practice it can be incredibly difficult for marginalised users to get their content restored when it’s taken down – doing so usually requires help from an NGO. At-risk users tend not to find transparency reports and policy statements helpful, as they’re often lacking in necessary detail (especially when translated) and not adequately localised.
Jillian York discussed onlinecensorship.org, which came out of conversations she had with Palestinian activists about the kinds of content they were seeing censored. The project tries to track terms-of-service takedowns, which are much murkier than government censorship practices. There are very different contexts here: when content is taken off a platform, for example, it may still exist elsewhere. This might not matter for some users, or for some kinds of content. But some users’ Internet use is heavily (or exclusively) focused on particular platforms, and as platforms like Facebook encourage users to stay within the bounds of the site, content may genuinely not exist anywhere else.
José van Dijck reiterated some of the ideas from her keynote, discussing the shift from companies and institutions that (for example) specialised in news, to companies like Google and Facebook that are data companies, even while they’re clearly incredibly important sources of news. We need to discuss this discrepancy.
After the initial provocations, the discussion opened up, with contributions from the audience as well. Some great points came out of this, including around the ability of users to create change, and whether news organisations and journalists should be treated differently from ordinary users.