October 4, 2014 § Leave a comment
For part one of my SMSociety wrap up, look here!
The first day of SMSociety 2014 continued with a panel on cultural acceptance. Irfan Chaudhry opened by discussing the Twitter Racism Project, which explores different ways of tracking and analysing expressions of racism online. Twitter, he notes, is interesting because it’s highly visible and easily tracked, although there are still plenty of challenges, including the ways in which racism takes on new forms and expressions with changing social conditions, and the necessity of telling the difference between a racist tweet and an affectionate self-identification. The first round of research has focused on particular racist terms in the Canadian cities with the highest number of reported race-hate attacks (I found it interesting that one of the terms, ‘white trash’, is more classist than racist).
Tweets were then categorised to see whether they were a casual slur, a discussion of racism, an expression of a negative stereotype, and so on, with about 50 per cent of tweets being real-time responses to an event (such as a racist response to being seated next to someone of a different ethnicity on a plane). Chaudhry’s hypothesis that these tweets represent the externalisation of thoughts which could not be expressed in person presents an interesting contrast to Hampton’s work showing that only 0.3% of people were willing to discuss potentially-controversial perspectives online but not offline: this might be due to a different sample set, but it might also suggest that those expressing their racism online are also comfortable expressing it in person. Chaudhry finished by noting that there’s also a need to track racism that’s expressed in more subtly coded ways, such as through the #whiteresistance tag used by white supremacists.
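As a rough illustration of this kind of content analysis (a minimal sketch with hypothetical category labels, not the project’s actual codebook or tooling), hand-coded tweets can be tallied into per-category proportions:

```python
from collections import Counter

# Hypothetical category labels, loosely echoing the distinctions
# described above -- not the Twitter Racism Project's actual codebook.
CATEGORIES = ["casual_slur", "discussion_of_racism",
              "negative_stereotype", "realtime_response"]

def tally(coded_tweets):
    """coded_tweets: list of (tweet_text, category) pairs, hand-coded.
    Returns the proportion of the sample falling into each category."""
    counts = Counter(category for _, category in coded_tweets)
    total = len(coded_tweets)
    return {cat: counts.get(cat, 0) / total for cat in CATEGORIES}
```

Applied to a coded sample, this would show directly whether real-time responses really account for about half the corpus.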
The next presentation was from Abigail Oakley, looking at the online discourse around plus-sized women (and, to a lesser extent, the fat acceptance movement). Oakley noted that much of the abuse faced by plus-sized women sharing images of themselves online came from other plus-sized women. By exploring this through public sphere theory, Oakley proposed that factors such as strength of social ties and emotional involvement play significant roles in participation in this type of negative online discourse. I’d be curious to see whether this research could connect with some of the practical work around online abuse, including efforts to use moderation to create healthier online communities.
The session wrapped up with Daria Dayter’s work on the ways in which complaints are used to build rapport online, focusing particularly on tweets about ballet. This research is grounded in linguistics, drawing on debates about whether language on Twitter is standard or in the intimate register. Dayter discussed the ways in which language can also be action, so, for example, “I will be there tomorrow” is doing promise, “My foot hurts” is doing complaint. Some of the results include the prevalence of complaint as a form of self-praise; the tweet “Iced two ankles 9:07AM 31 DEC 2013” implies dedication both through the timestamp and through the minimal information and emotional content (which indicates that the poster experiences this often, and doesn’t consider it a big deal). Dayter also noted that gender was not a factor: male dancers complained as much as women. This research really highlighted the benefit of in-depth qualitative analysis of online content.
The final session of the day looked at social media and activism, opening with Brett Caraway‘s discussion of the ways in which Canadian labour unions are using social media. This work draws on Bennett and Segerberg‘s distinction between ‘collective’ and ‘connective’ action, the former being linked to more hierarchical, professionalised movement organising, as opposed to the more individualised, complex, and horizontal forms of connective action. Caraway argues that while connective and collective approaches to organising overlap, in general unions with higher levels of membership, established histories, and an emphasis on servicing unionism are likely to have organisationally-brokered networks (in which social media is more centrally controlled). In contrast, unions focusing on recruitment, activism, and issue awareness are likely to have organisationally-enabled networks (which are more open and horizontal). Unions in either context may benefit from integrating social media platforms with their campaigns; however, the logic of action is fundamentally different in self-organising networks and organisationally-enabled networks.
Finally, Alfred Hermida discussed recent research with Candis Callison on Idle No More’s use of social media, focused on the period between December 2012 and January 2013, which included several big peaks in Twitter activity around national days of action. Hermida and Callison’s research shows that much of the content on Twitter was directed at others within the Idle No More network, rather than being appeals to the mainstream media. This is in large part because Idle No More protesters are aware of the terrible mainstream coverage of Indigenous issues in Canada: journalists will only cover these issues if Indigenous people are ‘dead, drunk, drumming, or dancing’, and thus easily incorporated within dominant racist narratives.
Hermida and Callison used two different methods for measuring influence within the #idlenomore network. The first was similar to the Topsy algorithm, and showed the highest influence to be from institutional elites (such as mainstream news journalists). However, using a different measure that prioritised retweets told a different story, with far higher visibility for ‘alternative voices’, including visible Indigenous Twitter users such as @âpihtawikosisân and @deejayNDN. Retweets were not only a way of sharing information, but also a form of contestation, affirmation, and identity-building, a way of reaffirming (for example) the Indigenous character and leadership of the movement. This research shows social media as a ‘contested middle ground’, which is both affected by other power structures and open to people’s efforts to reshape the network around the hashtag. (This has some interesting connections with the Mapping Movements project I’m doing with Tim Highfield).
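The contrast between the two measures can be sketched like this (a toy illustration under assumed field names, not Hermida and Callison’s actual method): a follower-count ranking tends to surface institutional elites, while a retweet-count ranking surfaces whoever the hashtag community itself chooses to amplify.

```python
from collections import Counter

def rank_by_followers(users):
    """Proxy for an audience-size measure: accounts with big
    followings (e.g. mainstream journalists) rise to the top."""
    return sorted(users, key=lambda u: u["followers"], reverse=True)

def rank_by_retweets(tweets):
    """Proxy for a network-endorsement measure: who the community
    amplifies, regardless of follower count."""
    rt_counts = Counter(t["retweeted_user"] for t in tweets
                        if t.get("retweeted_user"))
    return [user for user, _ in rt_counts.most_common()]
```

On the same dataset the two rankings can diverge sharply, which is exactly the ‘contested middle ground’ point: the network around a hashtag can elevate voices that audience-size metrics render invisible.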
Unfortunately, I wasn’t able to attend the second day of SMSociety, but you can check out the schedule on the SMSociety website (I’m particularly curious about James Cook’s work on The Bear Club). I also wasn’t able to bring promotional materials for my book, Global Justice and the Politics of Information: The struggle over knowledge, which is now out with Routledge – please do check it out!
September 29, 2014 § Leave a comment
This was my first time attending the Social Media and Society Conference, and sadly I could only participate in the first day, being keen to get back to Montreal to help Claire prepare for the imminent arrival of BabyClaire. Despite feeling a little anxiety that BabyClaire might decide to make an early appearance, I enjoyed the opportunity to catch up on some of the latest research around social media use, particularly given the heavy focus on issues around social justice, race, and gender.
The morning opened with a keynote from Keith Hampton, which began with an amusing overview of some of the moral panics that have accompanied previous technological developments (including the horror of women on bicycles). After a discussion of ways in which social media facilitates increasing connection and other benefits, Hampton turned to addressing some of the costs of social media. Drawing on work by Noelle-Neumann on ‘The Spiral of Silence’, Hampton discussed recent research he’s carried out with others around the potential of social media to facilitate more lively online discourse. Surprisingly, research on Americans’ discussions of Snowden showed that only 0.3% of people were willing to discuss the topic online but not offline. Twitter and Facebook users who felt their online connections didn’t agree with their opinions were also less willing to talk about those opinions offline, across contexts. Overall, this undermines claims that people will turn to online forums to voice opinions that might be unpopular or controversial offline.
The second potential cost of social media that Hampton discussed was the increased stress that comes from learning more about bad news experienced by close connections. Results here were highly gendered, beginning with the base measures of stress: women are, on average, more stressed than men. (Race also plays a role, unsurprisingly – Jenny Korn noted the need for more discussion on this.) Men, on the whole, experienced no change in stress levels associated with increasing social media use, while women generally experienced lessened stress with more social media use. However, the contagion effects of bad news for close connections were significantly higher for women than for men.
This was interesting research (which my short summary does little justice to), but I did experience an odd moment of grunching during this talk – a sudden sensation of being othered. In discussing women’s higher levels of awareness of stressful events in close connections’ lives, Hampton made a throwaway joke about his wife having ‘some theories as to why this might be’. This is not, obviously, a glaring instance of sexism, but the smattering of polite laughter did, suddenly, throw me out of my sense of ease and curiosity about the research. Some of the tweets that followed helped to crystallise the source of my unease: the expectation that we could all laugh along at the disproportionate burden of emotional labour that women bear, and the lack of interrogation about why we bear that burden, or how we might shift it.
I experienced a few other moments of this sudden grunching throughout the conference (including when a participant well above forty joked on the conference hashtag about the difficulty of verifying age of consent in singles bars). I’ve decided to start writing about them despite my anxiety that, as an early career researcher, such reflections will have negative impacts on my work, because I think it’s important to name and discuss these small moments of alienation and otherness, as well as the big ones.
After the keynote presentation, I presented Tim’s and my research in the ‘Politics’ stream (we’re currently working on writing this up, so hopefully we’ll be able to share more soon). Next up, ‘Mapping Iran’s Online Public’, by Xiaoyi Ma and Emad Khazraee, laid out a useful methodology for capturing and automatically categorising tweets in Farsi. While this research does tend to support the common assumption that Twitter in Iran is dominated by young progressives (probably because Twitter is banned in Iran), Khazraee noted that the Iranian blogosphere is much more evenly divided.
Catherine Dumas’ presentation on Political mobilisation of online communities through e-petitioning behaviour in WeThePeople focused on the wake of the Sandy Hook shooting, demonstrating signs of organised counter-mobilisation against gun control, including several e-petitions attempting to shift the focus to mental health services and armed guards in schools.
The final presentation of the session focused on issues of archiving and trust related to government use of social media, particularly around the Canadian Truth and Reconciliation Commission on residential schools. Elizabeth Shaffer spoke about the importance of archives to those trying to prove their experiences at residential schools and seek redress, and noted that records will continue to be important as we look back on the Commission over coming years. She suggested that social media is likely to play a key role in the discussions around the Commission, and has the potential to be used for more horizontal engagement and information sharing. This research is still at an early stage, albeit a fascinating one, bringing together literature on social media, archiving, and governance: I’m very curious to learn more about how the process of archiving social media around the Commission progresses, and whose voices are (and aren’t) included.
The next panel addressed Twitter and Privacy, with all three panelists noting that this issue is inherently gendered. Siobhan O’Flynn addressed the ways in which Twitter’s terms of service create a legal grey zone. O’Flynn argued, in part, that the existence of hashtags as a means of joining a broader conversation sets up an implicit expectation of privacy for non-hashtagged content – I’m curious about the empirical data around this, and whether users base their actions on this expectation. Nehal ElHadi, like O’Flynn, discussed the appropriation of tweets in response to Christine Fox‘s question to her followers about what they were wearing when they were assaulted, using this as a starting-point for exploring what it means for Twitter content to be ‘public’. ElHadi’s theoretical framework draws on a range of literature, including postcolonial work on the politicisation of space, bringing in vital attention to race and power online, which is often neglected in academia.
Finally, Ramona Pringle spoke briefly on some of her transmedia storytelling projects (including Avatar Secrets, which looks like a super-cool exploration of what it means to live in a wired world, told through a personal lens). Pringle emphasised that Twitter, like other social media, isn’t just a device like a VCR; it’s not a tool we read the manual for, operate, and then put down. Instead, it’s a space we hang out in – we may not understand all of the implications and potential consequences of being there, in much the same way that we may not understand all of the laws governing public spaces like a library or coffee shop. She also spoke about the inherent messiness of human relationships, which includes human relationships online, and why this means that it’s not reasonable to draw lines like, ‘adults just shouldn’t sext’, or ‘if you don’t want people to see naked images of you, don’t ever take them’.
In tomorrow’s installment of the SMSociety14 wrap up: cultural acceptance, social media use by unions, and Idle No More!
July 29, 2014 § Leave a comment
The first day of CLSI 2014 started with Ron Deibert talking about the state of the field and the attempt currently under way to build an inter-disciplinary research community around monitoring Internet openness and rights. Fenwick McKelvey has also put up a reading list of papers mentioned at CLSI 2014.
The opening panel looked at Network Measurement and Information Controls, and was facilitated by Meredith Whittaker of Google Research. Phillipa Gill gave an outline of the ICLAB project [slides]. This project is trying to develop better automation techniques for measuring censorship, which would allow a better understanding of not just what is blocked, but also how it’s being blocked, who’s blocking it, and which circumvention methods might be most effective. At the moment the tool is still in pre-alpha, and is having some success with block-page detection: early findings will come out at IMC later this year.
Nick Feamster from Georgia Tech then discussed another project which is attempting to build a more nuanced picture of Web filtering than the data currently available. He argued that censorship takes many forms beyond blocking: performance degradation, personalisation, and other tactics. This means that measuring Web filtering is harder than it appears, and what is required is “widespread, continuous measurements of a large number of censored sites”. Issues with this include the problem of distributing client software to look for censorship, potentially through the browser. This is possible, but leads to ethical issues.
Jeffrey Knockel of the University of New Mexico talked about moving, ‘Toward Measuring Censorship Everywhere All the Time’ [slides]. The method discussed here was to use side channels, which allows measuring IP censorship off-path without running any software on the server or the client or anywhere in between. This can be done completely in Layer 3, which has enough side channels. Almost 10% of IPv4 addresses respond to large pings, higher in some countries – this allows for more vantage points. [I have no idea what this means.]
Finally, Collin Anderson talked about studying information controls inside Iran. He discussed the use of mass-scale continuous data collection as a way to show themes of political discourse within the country. This requires content-specific, context-specific knowledge. For example, when Iraq started to clamp down on the Internet, Islamist content was specifically blocked, as well as an odd assortment of pornographic sites. Anderson argued that this research will be more effective when people avoid references to “censorship”, which can be divisive, and instead talk about “interference” and “information controls”. (This was also a theme that came up in the Q&A, as Meredith discussed the need to avoid an ‘inflammatory activist tinge’ in language, project titles, and so on, because this can discourage use and endanger anyone accessing services.)
The Q&A for this last session focused quite a bit on ethics issues, and on the problems with managing these given the limitations of current ethics research boards and the challenges involved in the research itself. For example, while university ethics boards tend to prioritise ‘informed consent’, this can create problems for users of circumvention tools as it removes plausible deniability. Similarly, the idea of using anonymity to protect activists may not always match activists’ experience: some participants want their real names used because they feel this offers the protection of international visibility. Gill argued that part of what we need is better models of risk: frameworks for predicting how censors are likely to react to measurement.
The next session of the day focused on Mobile Security and Privacy. David Lie of the University of Toronto began with a discussion of ‘Pscout: Analyzing the Android Permission Specification’. This tool uses two-factor attestation as a way to improve data security, combining two-factor authentication with malware protection across both laptops and mobiles/authentication tokens. (I have some concern about the focus here on ‘trusted computing’, which takes devices further out of their users’ control).
Jakub Dalek of Citizen Lab talked next about the Asia Chats project, which focuses on chat apps that are popular outside the western context: in this case, Line, FireChat, and WeChat. Line implements blocking for users registered with a Chinese number, although there are a number of ways to circumvent this blocking. FireChat, which has been popular in Iraq, is promoted as being anonymous, but the actual content of messages is very poorly protected. Finally, Dalek noted that there was a lot of Chinese government interest in regulating WeChat.
Jason Q. Ng, also of Citizen Lab, shared his work on the same project, this time focusing on Weixin. One of the interesting trends here is the emergence of messages which place the blame on other users for blocked content, such as: “This content has been reported by multiple people, the related content is unable to be shown”. Looking at the specific kinds of content blocked suggests that even if ‘users’ are blocking this material, there’s some link with the Chinese government (or at least with government interests). More work is needed, perhaps, which looks at these kinds of indirect forms of information control.
Finally, Bendert Zevenbergen of the Oxford Internet Institute outlined the Ethical Privacy Guidelines for Mobile Connectivity Measures, the outcome of a workshop held with ten lawyers and ten technical experts. He also raised the potential helpfulness of a taxonomy of Internet Measurement ethics issues, and invited people to begin collaborating in the creation of a draft document.
The next session focused on Transparency and Accountability in Corporations and Government. Chris Prince of the Office of the Privacy Commissioner of Canada talked about the annual report in Canada on the use of electronic surveillance which has been made available since 1974. A paper analysing this data, Big Brother’s Shadow, was published in 2013, and suggested important shifts in targets and sites of surveillance.
Jon Penney of the Berkman Center, Citizen Lab, and Oxford Internet Institute, outlined three major challenges for transparency reporting in ‘Corporate Transparency: the US experience’. These include the need for more companies to be willing to share transparency reports with more and better data (including standardised data); better presentation and communication of transparency reports which balance advocacy and research and provide contextualisation; and more work on the legal and regulatory space impacting transparency reporting.
Nathalie Marechal of USC Annenberg talked about the ‘Ranking Digital Rights‘ project, which is developing and testing criteria for particular privacy-protections from companies (such as whether they allow users to remain anonymous), working within an international human rights framework. This work has the potential to be useful not only for civil society actors advocating for better corporate behaviour, but also for corporations lobbying for policy change. The initial phase of the project is looking at geographically-based case studies to better understand themes across different locations, and during this phase there’s an interest in understanding how to assess multinational corporations operating across multiple regulatory contexts, including those which are acquired by other companies. Marechal and other researchers on the project are seeking feedback on the work so far.
Chris Parsons of Citizen Lab spoke on the need for better data about online privacy and related issues in the Canadian context: at the moment, we’re aware that, “an eye is monitoring Canadian communications”, but don’t have full details. This work began by sending surveys to leading Canadian companies in order to get more information on data retention. Results mainly indicated a generalised refusal to engage in any depth with the questions. The work has also been crowdsourcing ‘right of access’ information through an open request tool [try it out, if you’re Canadian!]. Unlike the surveys, these requests are legally binding, and through the data generated, they’re trying to figure out how long online data is stored, how it is processed, and who it is shared with. Collaborations with MP Charmaine Borg have also led to more information about how Canadian intelligence and police agencies are engaging in data surveillance. From this initial research, they’re now trying to use this data to develop a transparency template to more effectively map what we still need to know.
In the final talk of the session, Matt Braithwaite of Google talked about work around Gmail to build a better understanding of increasing encryption of email in transit. Google also has useful data available on this, and their report on it received significant attention, which resulted in a spike in encryption of email.
The final panel for day one looked at Surveillance.
Seth Hardy of Citizen Lab talked about ‘Targeted Threat Index: Characterizing and Quantifying Politically Motivated Malware’. This is a way of measuring the combination of social targeting (for example, the use of specific language and internal group knowledge to convince activists to open attachments) and technical sophistication to build a better understanding of how politically-motivated malware is developing. Research from this project will be presented at USENIX Security on August 21st, 2014.
Bill Marczak (UC Berkeley and Citizen Lab) and John Scott-Railton (UCLA and Citizen Lab) talked about the growth of state-sponsored hacking. They described the growth of mercenaries: companies selling tools to governments (such as FinFly). Some of the challenges for this research include the lack of people available to contact targeted groups and find out about the issues they might be having, and the fact that targeted users may not even realise they’re under attack in some cases. There is some information available on malware that users are accessing, but metadata on this is limited: researchers get a file name, country of submitter, and time submitted, which doesn’t give information about the context in which malware was accessed.
Ashkan Soltani spoke on how technological advances enable bulk surveillance. One of the important differences between traditional surveillance techniques and new methods is the cost. For example, Soltani estimates that for the FBI to tail someone, it’s about $50/hour on foot, $105/hour by car, and covert auto pursuit with five cars is about $275/hour. Mobile tracking might work out to between 4c and $5/hour. This means that the FBI has been able to use mobile tracking to watch 3,000 people at a time, which would be totally impossible otherwise. This is vital when we think about how different forms of surveillance are (or aren’t) regulated.
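Scaling those hourly figures up makes the point vividly. This is just simple arithmetic on Soltani’s estimates; the one-month, round-the-clock scenario is my own assumption for illustration:

```python
# Rough monthly cost comparison using Soltani's hourly estimates.
# Assumption (mine): one month of round-the-clock coverage of a single target.
HOURS = 24 * 30  # hours in a 30-day month

five_car_pursuit = 275 * HOURS   # covert auto pursuit with five cars
foot_tail = 50 * HOURS           # single agent tailing on foot
mobile_high = 5.00 * HOURS       # mobile tracking, high-end estimate
mobile_low = 0.04 * HOURS        # mobile tracking, low-end estimate (4c/hour)

print(f"Five-car pursuit: ${five_car_pursuit:>10,.2f}")
print(f"Foot tail:        ${foot_tail:>10,.2f}")
print(f"Mobile tracking:  ${mobile_low:,.2f} to ${mobile_high:,.2f}")
```

Even at the high-end mobile estimate, the cost is a tiny fraction of physical surveillance, which is why tracking thousands of people at once becomes feasible only with the new methods.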
Nicholas Weaver is based at the International Computer Science Institute, and emphasised that this gives him more freedom to look at NSA-relevant areas: he is free to read leaked documents that US government employees are prohibited from accessing. He advises us not to trust any Tim Hortons near any government buildings. He gave a brief overview of NSA surveillance, arguing that it’s not particularly sophisticated and opens up a lot of vulnerabilities. Weaver said that anyone with a knowledge of the kinds of surveillance that the US’s allies (such as France and Israel) are engaging in will find them more worrying than the actions of the US’s opponents (e.g. Russia and China).
Cynthia Wong discussed work by Internet Research and Human Rights Watch on documenting the harms of surveillance. One of the organisation’s case studies has focused on Ethiopia, which is interesting because of the network of informants available, and the extreme hostility to human rights documentation and research on the part of the Ethiopian government. Surveillance in Ethiopia is complex but not necessarily sophisticated, often relying on strategies like beating people up and demanding their Facebook passwords. However, the state also buys surveillance tools from foreign companies, and documenting the harms of surveillance may help in bringing action against both companies and Ethiopia itself. The organisation also has a new report out which looks at surveillance in the US, where it’s harder to document both surveillance and resultant harms: this report highlights the chilling effects of surveillance on lawyers and journalists.
July 23, 2014 § Leave a comment
I didn’t get much (well, any) training in the ethics of research during my formal studies, apart from the documents that came along with my first ethics application. Over the years, I’ve been thinking more about what constitutes ethical research involving activists, and how I can use my relatively-privileged position within academia. The work I’ve been doing with Tim Highfield on the Mapping Movements project has also raised new challenges as we’ve tried to think about how to use social media material ethically (and rigorously). Over recent years, I’ve also come across more critiques by feminists and people of colour of the ways in which their lives, analyses, and work have been appropriated by academia and the media.
In response to this, I’ve tried to put together a rough public document about the ethical guidelines for my research. This is intended as a public statement of accountability, and I hope to continue updating it in response to further thinking and self-education.
July 10, 2014 § Leave a comment
Today I went to an advance screening and Q&A on Preempting Dissent, about the application of the ‘Miami model’ of policing to the G20 protests in Toronto.
The documentary started with an overview of changes to policing tactics in North America, looking at how Giuliani’s ‘broken windows’ theory got applied to protest policing: no loss of control should be allowed, lest it get out of hand. In particular, following the 1999 anti-WTO protests in Seattle, police have begun to treat events which might attract protest as national security events. The US PATRIOT Act exacerbated this tendency, with a further shutting-down of the space for protest.
The ‘Miami model’ includes cooperation between police and federal agencies; extensive surveillance in the lead-up to protests, including raids on meeting spaces, preemptive arrests and searches; the (mis)use of ‘less than lethal‘ weapons during protests; and the development of ‘free speech zones’ which separate protesters from events and contain them.
Preempting Dissent points out that a Canadian government investigation has acknowledged that protesters had no way to know in advance that the police were enacting previous wartime legislation to, in effect, bring in martial law during the G20. Even protesters who took care to educate themselves about the law surrounding protests ‘had no way of knowing they were walking into a trap’.
The documentary ends by talking about the need to challenge the ‘security logic’ that underpins the Miami model of policing. In the Q&A session afterwards Greg Elmer talked about the need to move away from planned events which lead to protesters walking into a trap, suggesting that more mobile and fluid protest tactics are one way of responding to changes in policing. He also emphasised the need to respond to intimidation tactics which try to scare protesters off the streets.
One of the questions about the shift from surveillance to preemption brought up an important point: that ‘surveillance’ often isn’t about information-gathering. It’s simply another form of intimidation, a way of letting activists know that they’re being watched, and of undermining organisation. The discussion session brought up quite a few other interesting issues: the need to consider race (and particularly racism) when thinking about the dynamics of protest policing; the ethics of showing images of protesters in the current surveillance environment; and the ethics of making sensitive footage available under a creative commons license which might allow for problematic uses.
There was also some useful sharing of resources: I liked the suggestion that protesters carry a self-addressed envelope with them so that if necessary they can mail their SD cards back to themselves to prevent police wiping phones used to document violence, and someone from the What World Productions team mentioned their documentary on police violence against homeless, poor, and other marginalised groups in Toronto.
Trigger warnings: there’s some quite intense footage here, including of protesters being kettled, tased, pepper-sprayed, and violently arrested.