The Responsive Policies session began with Nathan Fisk’s work on ‘Vile pornography, sexual miscreants, and electronic stalkers: policy discourse of youth internet safety’. Fisk argued that we are in a general mode of crisis, in which we’re seeing a transition from ways of controlling society that focus on segmented, regimented space and time (the panopticon) to forms of control that rely much more on continuous streams of surveillance and checking in. As such, we see frequent discussion of ‘choice’ and ‘individual freedoms’, although in many ways these are illusory.
In this context, Fisk discussed the ways in which media create moments of panic that have been used to extend policies of regulation. For example, movies like War Games prompted a shift from the idea of the Internet as just a bunch of data being moved around to the idea of it as a space you can go to (and also a space that can be attacked).
Similarly, recent concerns about ‘cyberbullying’ have led to regulators pressing social media platforms to extend their own mechanisms for dealing with complaints, and to report data on ‘bullying’ to regulators.
Next, Stacey Blasiola presented ‘You [don’t] gotta pay the toll troll: a transaction costs model of online harassment’, considering ways to change platform design to make it harder to engage in harassment. One of the interesting differences between Fisk’s work and Blasiola’s is that where Fisk talks about top-down pressure from regulators during moments of panic, Blasiola emphasised that the pressure on platforms like Twitter to deal with harassment has mostly come from users. A while ago Dick Costolo, Twitter’s CEO, acknowledged that Twitter has sucked at dealing with harassment; Blasiola pointed out that there’s a difference between sucking at something and not trying.
Rather than asking, “what can victims do to respond to trolling?”, we should be thinking about ways that platforms and communities can make trolling [or harassment, which partially overlaps with but isn’t entirely the same as trolling] harder. One aspect of this might involve introducing transaction costs: friction that makes it more difficult to engage in trolling behaviour.
At the moment, the costs of experiencing and trying to respond to harassment are high. Targets of harassment might have to deal with large volumes of abuse, lose the audience and reputation they’ve accrued online as they shift offline or try to protect their privacy, and spend a lot of time reporting abusive tweets. For attackers using anonymous accounts, the costs are low: losing an account’s audience or reputation matters little when they can simply create more accounts, and sending an abusive tweet takes far less time than reporting one.
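To make that asymmetry concrete, here’s a toy back-of-the-envelope model. This is purely my own illustration, not something from Blasiola’s talk, and every number in it is invented; the point is just the shape of the imbalance.

```python
# Toy model of the cost asymmetry between harassers and targets.
# All numbers are invented, purely for illustration.

SECONDS_TO_SEND_ABUSIVE_TWEET = 10      # compose and send from a throwaway account
SECONDS_TO_REPORT_A_TWEET = 120         # find the report flow, fill it in, submit
SECONDS_TO_REPLACE_BANNED_ACCOUNT = 60  # attacker's cost if an account is removed

abusive_tweets = 50  # a modest harassment campaign

attacker_cost = (abusive_tweets * SECONDS_TO_SEND_ABUSIVE_TWEET
                 + SECONDS_TO_REPLACE_BANNED_ACCOUNT)
target_cost = abusive_tweets * SECONDS_TO_REPORT_A_TWEET

print(f"Attacker: ~{attacker_cost / 60:.0f} min; target: ~{target_cost / 60:.0f} min")
# Attacker: ~9 min; target: ~100 min -- and the target's lost audience,
# reputation, and privacy aren't recovered by reporting at all.
```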
Twitter doesn’t want to lose users, and users don’t want to lose their audience (or they’d just be using a private network). Solutions might therefore include the following (roughly sketched in code after the list):
- Options to flag one’s own account as under attack, which could activate protective features, like blocking tweets from new accounts.
- Auto-blocks and shared block lists, like blocktogether.
- Increasing the difficulty of tweeting @mentions (here Blasiola is drawing on research on spammers, where increasing friction can have a significant impact).
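As a way of thinking through what these kinds of friction might look like in practice, here’s a minimal sketch in Python. Everything in it is hypothetical: the `Account` type, the `should_deliver_mention` function, and the thirty-day threshold are my own invention, and none of this reflects how Twitter or blocktogether actually work.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class Account:
    user_id: str
    created_at: datetime
    under_attack: bool = False  # set when the user flags their account as under attack

# Hypothetical cutoff: mentions from accounts younger than this are held back
# when the recipient has flagged themselves as under attack.
NEW_ACCOUNT_THRESHOLD = timedelta(days=30)

def should_deliver_mention(sender: Account, recipient: Account,
                           shared_blocklist: set[str]) -> bool:
    """Decide whether an @mention reaches the recipient's notifications."""
    # Shared block lists (blocktogether-style): one user's block protects many.
    if sender.user_id in shared_blocklist:
        return False
    # 'Under attack' mode: suppress mentions from newly created accounts,
    # raising the cost of harassing from an endless supply of throwaways.
    if recipient.under_attack:
        if datetime.utcnow() - sender.created_at < NEW_ACCOUNT_THRESHOLD:
            return False
    return True

# Example: a flagged account stops seeing mentions from a day-old account.
target = Account("target", datetime(2015, 1, 1), under_attack=True)
throwaway = Account("egg123", datetime.utcnow() - timedelta(days=1))
print(should_deliver_mention(throwaway, target, shared_blocklist=set()))  # False
```

The design point, on my reading, is that each check shifts a transaction cost from the target (reporting after the fact) onto the attacker (waiting out account-age thresholds, or maintaining older accounts that are costly to lose).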
In part, this drew on Blasiola’s experience with attacks on video game communities she was involved in, where relatively simple tactics were enough to end the attacks.
I think there are a bunch of interesting ideas here, including in drawing out the juxtaposition between top-down and bottom-up forms of regulation (which are not always clearly differentiated). I’m kind of curious, in a half-baked-ideas-just-forming kind of way, about what the idea of shifting forces of friction in social media would look like if we drew on Tsing’s work on friction (as both a slowing force and necessary for traction) instead of Coase’s transaction costs. And about how many hurdles would need to be introduced to actually significantly reduce harassing behaviour (having just read This Is Why We Can’t Have Nice Things, in which Whitney Phillips tracks some of the ways in which trolls shift tactics to accommodate technical changes).
It’s not actually about ethics in games journalism

I ended the day with the Gamergate session. As you may have noticed reading yesterday’s evasive notes, I don’t feel entirely comfortable writing about Gamergate. Accurate threat assessments seem challenging. Not being able to write about stuff happening at a conference feels wrong. Potentially putting other people at risk by making them more visible, or by writing about their work in ways that might expose them to abuse, feels wrong. So does not highlighting the excellent research and theorisation being done by people working in the area.
In the end, thinking through all this, I didn’t take a lot of notes, so here’s some of the work that people on the panel have published in the area:
- Chess, S., & Shaw, A. (2015). A Conspiracy of Fishes, or, How We Learned to Stop Worrying About #GamerGate and Embrace Hegemonic Masculinity. Journal of Broadcasting & Electronic Media, 59(1), 208–220. [Closed access.]
- Shaw, A. (2012). Do you identify as a gamer? Gender, race, sexuality, and gamer identity. New Media & Society, 14(1), 28–44.
- Shaw, A. (2015). Gaming at the Edge: Sexuality and Gender at the Margins of Gamer Culture. University of Minnesota Press.
- Consalvo, M., & Paul, C. A. (2013). Welcome to the discourse of the real: Constituting the boundaries of games and players. FDG.
- Shepherd, T., Harvey, A., Jordan, T., Srauy, S., & Miltner, K. (2015). Histories of Hating. Social Media + Society, 1(2).
- Massanari, A. (2015). #Gamergate and The Fappening: How Reddit’s algorithm, governance, and culture support toxic technocultures. New Media & Society.
One of the things that struck me listening to this panel was the parallels with social movement researchers’ ethical and methodological difficulties in studying ‘unlikeable’ (or awkward) movements: neo-nazis, the Christian right, and others that make for difficult engagements. It seems like some of this experience and ethical reflection might be useful in thinking about how to approach research on gaming’s latest round of toxicity.