October 31, 2013
The second day of the Compromised Data colloquium was fascinating, and I’m looking forward to chasing down further work from many of the presenters.
The opening session started with Lisa Blackman discussing experiments with repurposing commercial software tools to explore contagion in complex environments, drawing on controversies around psychic research of the nineteenth century (including work on automatic writing). I liked the idea of ‘haunted data’: the ways in which research takes on a new life after publication, and may begin to be circulated by non-academic networks in ways that the original researchers never intended.
Ingrid M. Hoofd raised some interesting questions about the ways in which academic institutions, researchers, media, and activists may be becoming implicated in problematic representational regimes in their use of social media. She discussed The Guardian’s Reading the Riots project, which she argued simultaneously made claims to build an empirically-based analysis of the reasons behind the riots while also being based in, and reinforcing, existing stereotypes around class and race.
Yuk Hui's work on self-archiving the massive amounts of digital objects we generate notes the difference between merely storing this material and archiving it: archiving requires the addition of contextual framing. The theoretical framework of Hui's work is accompanied by attempts to design self-archiving tools which will allow users to create physical objects through which to share their archives.
The following session explored other attempts to combine analysis with software design. Fenwick McKelvey discussed network diagnostic tools, some of which may be helpful in better understanding NSA surveillance. He also raised questions about the structure of crowdsourced research: often, he notes, researchers set their aims and create the infrastructure for crowd participation, rather than allowing the ‘crowd’ (however that might be defined) to do more in setting research goals and processes.
Robert W. Gehl's presentation focused on critical reverse engineering approaches, including suggestions about how these may be applied in the humanities. He argued that critical reverse engineering allows us to understand the ways in which new technologies and systems are not radical breaks with the past, but rather come from a particular history and series of struggles, looking in detail at how this applies to attempts to create an alternative to Twitter, TalkOpen.
Anatoliy Gruzd talked about some of the work currently happening at the Dalhousie University Social Media Lab, including the creation of the Netlytic tool, which may be useful for visualizing networks and is currently being used to explore a number of different online communities and discussions.
In the session on audience engagement, Gavin Adamson looked at some of the ways in which social media is affecting mental health coverage (noting that audiences much prefer to share positive news stories, rather than those framed through the lens of violence/risk); Mariluz Sanchez discussed the use of social media in transmedia storytelling, and Kamilla Pietrzyk gave a thought-provoking presentation on the research she’s beginning on the effects of read receipts on online communication.
Alessandra Renzi and Ganaele Langlois kicked off the final session with a conversation about some of the issues involved in data/activism, exploring the ways in which militant research methods might be combined with critical software studies. They argued that much of the discussion around participatory culture takes a celebratory approach to understanding political participation, and that we need to think about the ways in which being ‘active’ differs from resisting existing systems and building alternatives. They also raised many of the questions around the relationship between researchers and activists that Tim and I covered in our talk, including some we hadn’t considered.
David Karpf challenged the idea that online activism, particularly petitions, offers spontaneous examples of ‘organising without organisations’. Instead, he argues, a closer look at online petition sites demonstrates that we are seeing organising with different organisations. The organisations behind MoveOn.org and Change.org both make choices about their platforms which shape the kinds of petitions created (those on MoveOn tend to be more political). MoveOn’s prompts guiding members’ creation of petitions also serve as an educational tool, drawing in part on (Saul Alinsky’s?) ideas about political organising.
‘Compromised Data?’ Social media research: methodological challenges, unexamined niches, and the politics of big data
October 28, 2013
Today’s presentations on big data research at Compromised Data? raised some important questions about the role that big data is playing in academic research and government policy, as well as about the methodological challenges faced by big data researchers.
Greg Elmer's opening remarks positioned the ‘compromised data?’ theme in the broader context of neoliberal policies and the Canadian government’s anti-environmental policies. Joanna Redden's work on the increasing incorporation of big data research into Canadian policy-making and government service provision expanded on this theme. Redden pointed out that the turn towards big data is framed in the language of efficiency and money-saving, but that we should be concerned about the quality of the data being used, including the erasure of poverty as those who are not online (or online less) become invisible, and as services which generate oppositional forms of knowledge have their funding cut. We should also remain aware of the ways in which a reliance on big data research can change government processes, the role of bureaucrats, and the relationship between citizens and the government. We need to recognise that neoliberalism is not just a political project, but also one which aims to change how we think: big data is not neutral, but rather is easily incorporated within this system.
Taina Bucher's exploration of shifts in the Twitter APIs complemented this well, inviting us to look more deeply at the role of APIs in shaping how we interact with data. Bucher argues that while there’s a risk of seeing APIs as just another convenient tool to gather data, we need to critically analyse software tools and understand the power relations embedded in their design. Her empirical research in 2010 and 2011 focused on shifts in the Twitter APIs, in which the initial openness which helped Twitter to grow was increasingly shut down.
Jean Burgess and Axel Bruns also touched on the consequences of Twitter’s API as they discussed Twitter research and the politics of data access. To begin with, they point out, there’s a disproportionate focus on Twitter in academic research because it’s the easiest social media data to access. At the same time, much of the work is biased by limitations in the software tools used to study the platform: key tools like TwapperKeeper and DataSift were constrained in important ways by the changes to Twitter APIs. There are also biases that come from a focus on the low-hanging fruit, such as a focus on hashtags rather than on more complex layers of interaction like follower networks and @replies. Burgess and Bruns argue that we need to be reaching beyond the easily-available data in order to build a better picture of how people are using Twitter.
Carolin Gerlitz provided one model for doing this, outlining an approach based on a model of social media as multivalent: producing data that is both standardised and vague, and that therefore allows for multiple readings. Gerlitz argued that more research needs to be open to the multiple use practices involved in social media. Frauke Zeller's work also provided useful templates for research which is open to the multiple meanings of social media texts, suggesting that there are benefits to an iterative approach in which qualitative and quantitative analysis mutually inform each other.
Daniel Pare and Mary Francoli‘s research raised concerns about existing approaches in big data research, particularly focusing on the literature on political engagement and mobilisation. Like others, they pointed out that the data which is most easily available is not necessarily the most accurate; a focus on big data research on social media is problematic when it’s used as a simple measure of broader political trends. There’s also far too little recognition of the ways in which assumptions about what ‘democracy’ means shape research on political mobilisation and engagement online, and of the inherently political nature of social media platforms.
Asta Zelenkauskaite’s work on mainstream media’s approaches to big data also highlighted the contested nature of these platforms, inviting us to consider the difference between social media engagement as a top-down process and what it might look like if it was driven by consumer interests. Sidneyeve Matrix’s presentation served as a useful complement to this, examining the shift towards niche social networks—often paid, gated communities—that support consumers’ use of their geolocative data.
The day’s presentations opened up some vital questions that are being asked far too infrequently in big data research, and in the broader big data community, about the political and methodological issues involved in the push towards big data as a magical cure-all. I’m looking forward to tomorrow’s presentations, as well as to talking about how these concerns relate to the research Tim and I are doing.
October 7, 2013
Today when I logged into Facebook I got a message letting me know that I was banned from posting any content for the next 24 hours. Another contributor from a group I help to moderate had posted ‘inappropriate content’ and so all moderators for that group were temporarily locked from posting to Facebook at all.
This would be mildly annoying most of the time, but at the moment I’m teaching a unit where a substantial proportion of the discussion takes place through a Facebook group. Ironically, the unit is on ‘power and politics’ and the Internet.
While there are compelling reasons to experiment with Facebook in teaching (including students’ preference for the site over universities’ official learning management systems), doing so will inevitably raise issues like this. Should I leave the group? Should I, and other educators, avoid posting to Facebook about issues that may lead to bans? Should I try to create a teaching profile and a personal profile (which is against Facebook policy)? I and other contributors have touched on some of these issues in the chapter I contributed to An Education in Facebook?, but we need to be thinking more about ownership and control as we explore new teaching tools.
September 11, 2013
Early next year I’ll be discussing strategies for opposing the TPPA at Linux Conference Australia, in Perth:
This presentation suggests a variety of strategies and tactics that the Linux community might adopt when acting on political issues, with the Trans Pacific Partnership Agreement (TPPA) being of particular concern at the moment. The TPPA is a multinational free trade agreement (FTA), and will probably build on and extend the damaging provisions imposed by the 2004 Australia-US FTA. The extent of damage likely to be done by the TPPA is not yet known, as only draft copies have been leaked and the negotiations remain secret.
Currently, free and open source communities often find ways to deal with problematic laws, such as the copyright extensions and restrictions on circumventing technological restrictions brought in by the 2004 Australia-US FTA, with clever hacks of the legal system (such as copyleft and Creative Commons licenses); workarounds which meet the letter of the law (such as providing Linux installations without potentially-illegal codecs); or ignoring laws which seem unlikely to be enforced. However, all of these strategies have problems. Hacks can only go so far; relying on a lack of enforcement is risky; and workarounds make free and open source software less accessible for novice users and others who would prefer software that works out of the box. Part of the work of promoting free and open source software must therefore involve activism that is directly aimed at the TPPA and other FTAs.
Important activism did take place around the 2004 Australia-US FTA, including work within Linux Australia led by Rusty Russell, Kimberlee Weatherall and others. Much of this took a similar form to activism currently happening around the TPPA: the focus has been on lobbying, letter-writing, and media relations. Coalition-building and other activism around the TPPA, as with the 2004 FTA, has predominantly taken place within tech communities. However, while this work has been valuable, it may be useful to explore ways to build alliances with other communities and to draw on a broader range of activist tactics. This discussion will draw on some of the lessons learned from relatively successful attempts to oppose FTAs in the past, including protests in the late 1990s around the Multilateral Agreement on Investment and World Trade Organization negotiations, as well as more recent FTAs such as those between the US and Malaysia and the Free Trade Area of the Americas proposed by the US. Drawing on this work, I will suggest tactics for effective action, including use of a spectrum of allies model, organizational models which facilitate tiered levels of participation, and creative use of the Overton window. I will also outline some of the key groups opposing the TPPA outside of the tech community in both Australia and the US.
I really enjoyed last year’s LCA, including the commitment by the conference organisers to creating a safe and inclusive space, and it looks like there are some great speakers this year (even from my non-tech perspective). The miniconfs also look well worth checking out.