Theorizing the Web Day 2: here comes every body + h8 + lockscreen + algorithms + technologies and pathologies

April 19, 2015

Pretty Fat II, Spring 2013 ~ Kadejah H.

The second day of Theorizing the Web was as intense as the first, and many of the presentations discussed potentially distressing issues, including anti-fat prejudice, online harassment and abuse, police violence against people of colour, suicide, and transmisogyny. This post will only give a short overview of the presentations (and conversations) that happened. My notes from day one are here – I also recommend checking out the TtW15 website and hashtag for more information.


Day two began with Here comes every body, and Apryl Williams‘ discussion of fat activism online. Like most movements, fat activism is fractured: ‘body positivity’ is often still very much about healthiness, with strong moral undercurrents (for example, attempts to counter the idea that fat people are lazy by showing fat people exercising). ‘Fat acceptance’ rejects the idea that fat people have to prove their worth through performances of health, instead insisting that fat people (whether healthy or not) are valuable and retain autonomy over their own bodies. Williams notes, however, that fat activist spaces reproduce hegemonic ideology: fat activism often continues to frame women within the male gaze (“fat women are sexy too!”), and fat positive spaces are often dominated by white women. The Fat People of Colour tumblr provides an alternative space that includes men and genderqueer people, resists the fetishising of fat people, and invokes intersectional approaches (including considering class and disability).


Legacy Russell followed with a presentation on feminism and the glitch body politic, asking how experimentation with sex and gender in the digital arena can act to undermine the discourse of sex and gender. Art by glitch feminists like Amalia Ulman, AK Burns, Ann Hirsch, Mykki Blanco, and Fannie Sosa creates cracks in the glossy narrative of the patriarchal gaze and invites us to consider ways of disrupting platforms at the same time as we use them. Glitch feminism is not just about individual projects, but about the connections and spaces in between them.


Mariam Naziripour‘s ‘Craft of Beauty: Make-up after the Internet’ tracked some of the ways in which technology (including older technologies like photography and black and white film) has changed our approach to make-up. Jenna Marbles’ early vlogs demonstrate the strange tensions in how modern Western society views make-up: women are meant to ‘look natural’ at the same time as we’re expected not to look like ourselves. We’re pushed to engage in constant attempts to meet particular (unachievable) standards of beauty, at the same time as we’re criticised for artifice and deception. This also reveals tensions in many people’s relationship to make-up, which is in some ways an imposition (to look a certain way, often at significant economic and personal cost), but also a source of creativity and experimentation. Online communities like Makeup Alley have created one of the richest archives of make-up practices ever to exist, documented by the people who use make-up (rather than poets or essayists writing misogynistic critiques of make-up, as was often the case in the past).



Image search results for ‘virtual agents’ on Google

Finally, Emily Bick talked about the ways in which ‘virtual agents’ (virtual assistants, customer support bots, and so on) reproduce and enforce gender roles. These programs are often gendered, a shift from the more gender-neutral agents of previous decades (like Microsoft’s infamous Clippy), and are subservient and obedient. They represent, and help reinforce, an ideology of a feminised support worker who is constantly available and deferential. Thinking about this now, I’m curious about the ways in which this is additionally racialised (with the idealised Western virtual agent usually presented as white, at the same time as a significant proportion of caring work in Western countries is undertaken by women of colour), and about the ways in which glitches or limitations of these programs might be understood as acts of resistance by virtual agents.


The h8 session opened with Alison Annunziata’s discussion of Love and Terror in the Digital Age. She outlined two central problems with dealing with cyberstalking and digital harassment: firstly, that technology shifts more rapidly than the law, and secondly, that both the law and individual police officers are often not capable of understanding the language of threat (and of terror). Antistalking laws, for example, often have a requirement of ‘credible threat’: would a ‘reasonable person’ see this as genuinely dangerous? Victims are often the only ones with the right intelligence to understand why a particular action is threatening or violating, and they bear a heavy burden of proof.


Caroline Sinders extended this by talking about Twitter’s UX problem, starting with its very real impact: her mother was recently swatted, which led to a painful conversation in which Sinders was asked by her mother, and by local police, what gg is and why they’re mad at her…which is kind of hard to explain, when the answer is “I tweet about feminism sometimes”. (This reminds me of some of the discussions at AdaCamp around resources to give to therapists: for people experiencing online harassment and abuse, it can be useful, and even necessary, to have an information pack that explains to therapists and other support people the background and kinds of abuse that are happening. Sinders mentioned abusive tweets, doxxing, swatting, sealioning, and dogpiling as particular issues.) Sinders notes that Twitter has a very specific problem with harassment, in part because it was never designed from a perspective that recognised and aimed to prevent harassment. Legal frameworks (as Annunziata explained) don’t deal well with misogynistic stalking and harassment, and particularly haven’t kept up with online abuse, but Sinders argues that there’s a lot Twitter could do to become safer: rewriting community guidelines to recognise and ban emerging uses of the platform for abuse; looking at and learning from Block Together; redesigning the interface to allow more user agency; using algorithms and data better (for example, adapting the PGP Web of Trust model, recognising that friends of friends are often safe to interact with); and allowing batched submissions of abusive tweets. Twitter should also be drawing on the knowledge of people who have experienced these forms of abuse in developing its responses.
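The Web of Trust suggestion is easy to make concrete. Below is a minimal sketch of a friends-of-friends filter; the data structures and function here are mine (hypothetical), not Twitter’s API or Block Together’s actual implementation:

```python
def within_two_hops(follows, me, stranger):
    """Return True if `stranger` is someone I follow, or is followed by
    someone I follow: the PGP Web of Trust intuition that accounts vouched
    for by people I already trust are probably safe to let into my mentions.

    follows: dict mapping each account to the set of accounts it follows.
    """
    direct = follows.get(me, set())
    if stranger in direct:
        return True
    return any(stranger in follows.get(friend, set()) for friend in direct)

# Mentions from accounts outside the two-hop circle could be held for
# review rather than landing straight in the notifications feed.
follows = {"me": {"alice", "bob"}, "alice": {"carol"}, "bob": {"dave"}}
print(within_two_hops(follows, "me", "carol"))    # True: friend of a friend
print(within_two_hops(follows, "me", "mallory"))  # False: no trust path
```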


Thomas Rousse explored two case studies in implementing moderation systems for online communities using peer judgement: Wikipedia and League of Legends. He notes that this isn’t an issue of free speech: it’s about the management of bounded online communities, not about the forms of speech that the state controls or represses. Rousse outlined two major models of community management: moderation and ‘online vigilantism’. Many communities start without clear rules for behaviour and end up defaulting to a vigilante approach as users try to find their own solutions: often these are incredibly inventive, and really terrible. Moderation offers better possibilities, but often requires a lot of work from community managers. Peer-judgement systems offer one alternative. However, democracy is not an inherent good, and majoritarian spaces can lead to ‘a majority of assholes’. Neither Wikipedia’s nor League of Legends’ system is without problems; in fact, Wikipedia’s requests-for-comment system ended during Rousse’s research. League of Legends’ system has been more successful: it allows players to look at transcripts of reported games and decide whether the player should be punished or pardoned (94% of those who were reported were punished). But human adjudication wasn’t fast enough, so the developers took the body of data accumulated and used machine learning to create a machine judge. This opens up lots of interesting (and worrying) questions about the ways in which peer-judgement processes and machine learning might be deployed in other spaces.
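As far as I know the internals of that machine judge were never published, but the general recipe, training a text classifier on transcripts that player-judges have already labelled, can be sketched in a few lines. Everything below (data, labels) is invented for illustration:

```python
# A toy version of the 'machine judge': learn punish/pardon verdicts from
# transcripts already labelled by the peer-judgement system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

transcripts = [
    "uninstall the game you are garbage",     # human verdict: punish
    "gg wp everyone, well played",            # human verdict: pardon
    "report mid, feeding on purpose, idiot",  # human verdict: punish
    "sorry team, rough start, let's group",   # human verdict: pardon
]
verdicts = ["punish", "pardon", "punish", "pardon"]

judge = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
judge.fit(transcripts, verdicts)

print(judge.predict(["you are all garbage, uninstall"]))  # -> ['punish']
```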



Click here for my slides

I closed the session by exploring some of the ways in which geek feminist activism is challenging the predominantly liberal and libertarian politics of the digital liberties movement (which I’ve written more about here and here). This was a very brief sketch of a complex movement that I’ll be writing about in more detail later, but I hope it brought up some useful reflections on the ways in which we approach, or might approach, issues around online harassment. While Rousse referred back to liberal democratic frameworks (talking about being judged by ‘juries of our peers’ and noting that Wikipedia’s system looked more like a ‘kangaroo court than the Supreme Court’), women, trans people, and people of colour are often very aware that existing liberal democratic frameworks do not work for us. Anarchafeminist praxis offers an alternative source of experience to draw on in considering how we might deal with abuse and harassment, silencing, and structural inequality, within communities that are frequently male-dominated, and in spaces shaped by the broader context of the capitalist system.


The Lockscreen: Control and Resistance extended the discussion of many of these themes. Harry Halpin kicked off by arguing that ‘only cryptography can save us’. With the failure of the liberal state and the capitalist order, he says, we’ll be seeing hundreds of revolutions still to come. Technology won’t determine the shape or outcome of these, but it will affect the possibilities available, and if technologies of communication are open to surveillance then states will be able to crush resistance before it can grow. Snowden has argued that we can’t trust liberal mechanisms of governance, so we have to find ways of inscribing the values of the society we want into technology. I’m rather dubious about this idea, however. Sinders’ talk on Twitter’s UX problem described the problems that arise from building a platform around the life experiences and priorities of a relatively homogenous set of designers (mostly white, relatively privileged men). There are some excellent women and people of colour involved in crypto communities (as there are at Twitter), but even just within TtW there were many mentions of the problems with crypto culture. So it seems that before ‘we’ ‘inscribe our values’, more work needs to go into working out who the ‘we’ is here, and into the culture within crypto communities (and into looking at the ways in which these communities overlap, or fail to overlap, with the users for whom this technology might be a life-or-death issue).


Ted Perlmutter continued the discussion of ‘Twitter revolutions’, but also noted that while people have been very enthusiastic about the platform when it seemed to be supporting progressive revolutions, it becomes more worrying when it’s used by groups like ISIS or gg as a recruitment tool (I’d also add that the US state apparatus is far less enthusiastic about movements organising on Twitter when it’s happening within the US). How should we be disrupting violent hate movements using Twitter? And if we isolate participants, are we sticking them in an echo chamber that will only radicalise them further? This was an interesting talk, but it seemed strange to me to discuss gg primarily through the lens of other male theorists, and without drawing on the experience or analysis of women and other marginalised groups that have been attacked by them.


How would we read this image if the protester didn’t have their hands up? How would we read it if they were flipping off the police?

Raven Rakia wrapped up the session with a critique of the anti-police movement’s dependency on visual images. As activists have been bringing attention to police killings of people of colour, there has been a focus on images of police in riot gear, police killings, and police brutality. These images are powerful, but they implicitly rewrite history, and they support a politics of respectability. Photographs of riot police with armoured vehicles suggest that the US police have become militarised, hiding the fact that police have always been militarised and from the beginning played a role in enforcing racist structures of control (including slavery and lynchings). These images also build a politics of respectability: they rely on an opposition between violent police and pacifist protesters, and on telling us that victims of violence were going to college or were parents (which implies that those who aren’t ‘respectable’ are suitable targets for violence). They also focus our attention on visible forms of violence while other structures remain hidden, including the prison and legal system that disproportionately affects black lives. Some of these structures are taking new forms online: for example, if children talk to each other about trying to organise resistance to police or to violence experienced from others, they can be charged under ‘gang laws’ and given much harsher sentences. Rakia argues that instead of focusing on images of police violence, we need to work to abolish the police and dismantle systems of incarceration and control.


The second keynote of TtW15, Algorithms as Social Control, brought together Zeynep Tufekci, Kate Crawford, Gilad Lotan, Amy O’Leary, and Frank Pasquale. I won’t try to summarise all of the discussion on this panel, but you can catch the Twitter feed here. There were some important points raised about the ways in which algorithms can act as architectures of control, and potentially also work in liberatory ways. There were also questions raised about appropriate points of focus: should we be examining algorithms, or are they just tools (“just like the process you use for tying your shoelaces”, as two data scientists told Amy O’Leary)? If we are interrogating algorithms, how do we actually do this using the tools and data available to us?
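On the ‘how do we actually do this’ question, one standard move in the algorithmic audit literature is a paired test: hold everything constant, vary one attribute, and compare what the black box returns. A minimal sketch, with a toy stand-in for whatever real system is under study:

```python
def paired_audit(black_box, base_profile, attribute, values, probe):
    """Query `black_box` with profiles that differ only in `attribute`."""
    return {
        value: black_box(dict(base_profile, **{attribute: value}), probe)
        for value in values
    }

# Toy black box that (badly) ranks job ads by inferred gender, standing in
# for whatever real system is being audited.
def toy_ad_server(profile, probe):
    return ["executive roles"] if profile["gender"] == "m" else ["support roles"]

print(paired_audit(toy_ad_server, {"age": 30}, "gender", ["m", "f"], "jobs"))
# {'m': ['executive roles'], 'f': ['support roles']}: a disparity worth probing.
```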

I enjoyed Kate Crawford’s discussion of what the history of the deodand can tell us about algorithms: this legal structure was a way of dealing with death or injury caused by animals or inanimate objects, and was finally replaced by negligence laws in large part due to the political power of the railway industry. Looking at that history reminds us that we have a long history to draw on in working out responsibility in complex systems, and that we can craft creative solutions, but also that the forces of capital shape the ways in which we develop structures for accountability and responsibility.


“Body and Soul: The Black Panther Party and the Fight Against Medical Discrimination,” by Alondra Nelson

Tweets about the final keynote, In Sickness and in Health: Technologies and Pathologies, can be found under the #TtW15 and #k3 hashtags; the participants were Jason Wilson, merritt kopas, Ayesha Siddiqi, Gabriella Coleman, and Alondra Nelson. Nelson’s overview of her work on the Black Panthers’ grassroots genetic screening program was amazing, and laid out a six-point theory of health and technology for the social media age which set up the frame for the panel well:

  1. Information does not want to be free, but demand it, because your life might depend on it. We need access to advanced medical and technical information.
  2. DIY is self-care.
  3. Technology needs to be for and by the people.
  4. Bringing attention to neglected or rare diseases requires an activated network. The Black Panthers had two types of network: one based on homophily (sameness), and another with well-connected nodes that could bring in celebrity (around the campaign on sickle cell anaemia).
  5. Access to and strategic use of tech must be coupled with vigilance about its excesses. For example, the Black Panthers actively challenged racist assumptions about genetic difference and built a multifaceted understanding of the politics of genetics and race.
  6. Disruptive innovation can move the state: one outcome of the Black Panthers’ campaign was increased funding for sickle cell anaemia research.


‘Videogames for humans’, edited by merritt kopas

merritt kopas followed this with a discussion of games as a site for exploring complex ideas around interiority, mental health, gender, and sexuality. Online games can be produced and distributed easily, and the format allows for non-linear narratives. Games like Depression Quest that explore these issues are getting more attention, and much of this work is being done by women, and especially trans women. Previously, trans people have mainly been allowed to occupy the literary space of the memoir (specifically around transition), which makes trans lives consumable for cis audiences. New games formats allow space for trans women to explore and share their experiences in ways that are more challenging, and these games are frequently made for other trans people rather than for a broader cis audience. This is important, particularly when being trans online means hearing about suicides (but being told not to talk about them in case you spread suicide), hearing about the murder of trans people (and realising that most of society doesn’t care), and being purposefully and continually misgendered, harassed, and doxxed. Even in queer or feminist spaces, trans people cannot assume they are safe. merritt also notes that while gg has received a lot of attention, this attention usually centres on the experience of cis white women; trans people (and especially trans women) have been experiencing these forms of harassment from trans-exclusionary radical feminists for years.


Ayesha Siddiqi talked about the ways in which marginalised people are building narratives of self care. Posts and tweets sharing tips for self care, or even telling others that they deserve self care, can be seen as a way of sharing amateur mental health resources. We need to be asking why people are turning to these to try to survive: what is it about our communities that creates this need for self care, and why are people forced to look after themselves (rather than being looked after by those around them)?


Finally, Biella Coleman talked about a question that’s come out of her previous project on Anonymous: how did, and do, those who are deemed ‘crazy’ gain a voice, when the very category of being ‘mad’ makes you ‘irrational’? She notes that disability marks the past and present of hacking in dramatic ways. While this has many negative impacts, it also creates spaces where people with disabilities (or people who identify with different neurodiversities) are able to find a place where they are accepted (although I would argue that this space is far more welcoming for some people than others).


The discussion that followed emphasised the ways in which ‘madness’ is socially constructed: Siddiqi pointed out that traits that would mark others as ‘crazy’ are sentimentalised when they occur in white bodies; Coleman argued that in order to resist categorisations of madness you need strong communities of mutual aid; and Nelson noted that the Black Panthers knew you can’t be healthy in a pathological society, and that there’s been a pathologisation of anyone who poses a threat to the state and the market.


I’ll do one more post about Theorizing the Web, but I want to end this one with Alondra Nelson’s words (or as close as I could get to them while typing frantically):

I don’t feel optimistic at all, but people make do and keep going. But we can find a glimmer of hope in spaces and moments, not fully autonomous, of community, and of gathering.

Citizen Lab Summer Institute on Monitoring Internet Openness and Rights, Day 1

July 29, 2014

The first day of CLSI 2014 started with Ron Deibert talking about the state of the field and the attempt currently under way to build an inter-disciplinary research community around monitoring Internet openness and rights. Fenwick McKelvey has also put up a reading list of papers mentioned at CLSI 2014.

The opening panel looked at Network Measurement and Information Controls, and was facilitated by Meredith Whittaker of Google Research. Phillipa Gill gave an outline of the ICLAB project [slides]. This project is trying to develop better automation techniques for measuring censorship, allowing a better understanding not just of what is blocked, but also of how it’s being blocked, who’s blocking it, and which circumvention methods might be most effective. At the moment the tool is still in pre-alpha and is having some success with block-page detection; early findings will appear at IMC later this year.
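My notes don’t capture how ICLAB’s block-page detection actually works, but one common heuristic from the measurement literature is easy to sketch: censors tend to serve the same short page for many unrelated blocked URLs, so near-identical response lengths across diverse sites are suspicious. A rough illustration (the tolerance is an arbitrary placeholder):

```python
import requests

def fetch_length(url):
    try:
        return len(requests.get(url, timeout=10).text)
    except requests.RequestException:
        return None  # timeouts and resets are signals worth logging too

def looks_like_blockpage(urls, tolerance=50):
    """Flag when many unrelated pages collapse to near-identical lengths,
    which suggests a single injected page is being served for all of them."""
    lengths = [n for n in map(fetch_length, urls) if n is not None]
    return len(lengths) >= 2 and max(lengths) - min(lengths) < tolerance

print(looks_like_blockpage(["http://example.com", "http://example.org"]))
```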

Nick Feamster from Georgia Tech then discussed another project attempting to build a more nuanced picture of Web filtering than the data currently available allows. He argued that censorship takes many forms beyond blocking: performance degradation, personalisation, and other tactics. This means that measuring Web filtering is harder than it appears, and what is required is “widespread, continuous measurements of a large number of censored sites”. One issue is how to distribute the client software that looks for censorship; doing this through the browser is possible, but raises ethical issues.
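Feamster’s point that degradation can censor as effectively as blocking suggests measuring performance as well as reachability. A crude sketch of that idea, comparing fetch times against a baseline site (the 5x threshold is an arbitrary placeholder, and real measurement systems control for far more than this):

```python
import time
import requests

def median_fetch_seconds(url, trials=3):
    times = []
    for _ in range(trials):
        start = time.monotonic()
        try:
            requests.get(url, timeout=30)
        except requests.RequestException:
            continue  # failed fetches are a separate signal
        times.append(time.monotonic() - start)
    return sorted(times)[len(times) // 2] if times else None

def looks_degraded(url, baseline_url, factor=5.0):
    """Flag a site that loads dramatically slower than a baseline."""
    target = median_fetch_seconds(url)
    baseline = median_fetch_seconds(baseline_url)
    return None if not (target and baseline) else target > factor * baseline
```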

Jeffrey Knockel of the University of New Mexico talked about moving ‘Toward Measuring Censorship Everywhere All the Time’ [slides]. The method discussed here uses side channels, which allow measuring IP censorship off-path without running any software on the server, the client, or anywhere in between; it can be done completely in Layer 3, which has enough side channels. Almost 10% of IPv4 addresses respond to large pings (more in some countries), which allows for more vantage points. (As I understand it, hosts that answer large pings can serve as unwitting measurement vantage points, because large replies must be fragmented, and the fragments leak IP-layer state, such as the IPID counter, that an off-path observer can read.)
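The screening step, finding hosts that answer large pings, is simple to sketch with scapy (run as root; the addresses below are documentation placeholders). The real probes are large enough to force fragmented replies, since the fragmentation behaviour is where the side channel lives; this sketch only tests responsiveness:

```python
from scapy.all import IP, ICMP, sr1  # pip install scapy; run as root

def answers_large_ping(addr, payload_bytes=1200):
    # 1200 bytes is 'large' next to the usual 56-byte ping; the real probes
    # go further, forcing fragmented replies that expose IP-layer counters.
    probe = IP(dst=addr) / ICMP() / (b"\x00" * payload_bytes)
    return sr1(probe, timeout=2, verbose=False) is not None

candidates = [a for a in ["192.0.2.1", "198.51.100.7"] if answers_large_ping(a)]
print(candidates)  # hosts that could serve as off-path vantage points
```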

Finally, Collin Anderson talked about studying information controls inside Iran. He discussed the use of mass-scale continuous data collection as a way to show themes of political discourse within the country. This requires content-specific, context-specific knowledge. For example, when Iraq started to clamp down on the Internet, Islamist content was specifically blocked, as well as an odd assortment of pornographic sites. Anderson argued that this research will be more effective when people avoid references to “censorship”, which can be divisive, and instead talk about “interference” and “information controls”. (This was also a theme that came up in the Q&A, as Meredith discussed the need to avoid an ‘inflammatory activist tinge’ in language, project titles, and so on, because this can discourage use and endanger anyone accessing services.)

The Q&A for this last session focused quite a bit on ethics issues, and on the problems with managing these given the limitations of current ethics research boards and the challenges involved in the research itself. For example, while university ethics boards tend to prioritise ‘informed consent’, this can create problems for users of circumvention tools as it removes plausible deniability. Similarly, the idea of using anonymity to protect activists may not always match activists’ experience: some participants want their real names used because they feel this offers the protection of international visibility. Gill argued that part of what we need is better models of risk: frameworks for predicting how censors are likely to react to measurement.

The next session of the day focused on Mobile Security and Privacy. David Lie of the University of Toronto began with a discussion of ‘PScout: Analyzing the Android Permission Specification’, and of two-factor attestation as a way to improve data security, combining two-factor authentication with malware protection across both laptops and mobiles/authentication tokens. (I have some concerns about the focus here on ‘trusted computing’, which takes devices further out of their users’ control.)

Jakub Dalek of Citizen Lab talked next about the Asia Chats project, which focuses on chat apps that are popular outside the western context: in this case, Line, FireChat, and WeChat. Line implements blocking for users registered with a Chinese number, although there are a number of ways to circumvent this. FireChat, which has been popular in Iraq, is promoted as being anonymous, but the actual content of messages is very poorly protected. Finally, Dalek noted that there is a lot of Chinese government interest in regulating WeChat.

Jason Q. Ng, also of Citizen Lab, shared his work on the same project, this time focusing on Weixin. One of the interesting trends here is the emergence of messages which place the blame for blocked content on other users, such as: “This content has been reported by multiple people, the related content is unable to be shown”. Looking at the specific kinds of content blocked suggests that even if ‘users’ are reporting this material, there’s some link with the Chinese government (or at least with government interests). More work is needed, perhaps, looking at these kinds of indirect forms of information control.

Finally, Bendert Zevenbergen of the Oxford Internet Institute outlined the Ethical Privacy Guidelines for Mobile Connectivity Measures, the outcome of a workshop held with ten lawyers and ten technical experts. He also raised the potential helpfulness of a taxonomy of Internet Measurement ethics issues, and invited people to begin collaborating in the creation of a draft document.

The next session focused on Transparency and Accountability in Corporations and Government. Chris Prince of the Office of the Privacy Commissioner of Canada talked about the annual report in Canada on the use of electronic surveillance which has been made available since 1974. A paper analysing this data, Big Brother’s Shadow, was published in 2013, and suggested important shifts in targets and sites of surveillance.

Jon Penney of the Berkman Center, Citizen Lab, and Oxford Internet Institute, outlined three major challenges for transparency reporting in ‘Corporate Transparency: the US experience’. These include the need for more companies to be willing to share transparency reports with more and better data (including standardised data); better presentation and communication of transparency reports which balance advocacy and research and provide contextualisation; and more work on the legal and regulatory space impacting transparency reporting.

Nathalie Marechal of USC Annenberg talked about the ‘Ranking Digital Rights‘ project, which is developing and testing criteria for particular privacy protections from companies (such as whether they allow users to remain anonymous), working within an international human rights framework. This work has the potential to be useful not only for civil society actors advocating for better corporate behaviour, but also for corporations lobbying for policy change. The initial phase of the project is looking at geographically-based case studies to better understand themes across different locations; during this phase there’s an interest in understanding how to assess multinational corporations operating across multiple regulatory contexts, including those acquired by other companies. Marechal and other researchers on the project are seeking feedback on the work so far.

Chris Parsons of Citizen Lab spoke on the need for better data about online privacy and related issues in the Canadian context: at the moment, we’re aware that “an eye is monitoring Canadian communications”, but we don’t have full details. This work began by sending surveys to leading Canadian companies in order to get more information on data retention; results mainly indicated a generalised refusal to engage with the questions in any depth. The work has also been crowdsourcing ‘right of access’ information through an open request tool [try it out, if you’re Canadian!]. Unlike the surveys, these requests are legally binding, and through the data generated, they’re trying to figure out how long online data is stored, how it is processed, and who it is shared with. Collaborations with MP Charmaine Borg have also led to more information about how Canadian intelligence and police agencies are engaging in data surveillance. From this initial research, they’re now developing a transparency template to more effectively map what we still need to know.

In the final talk of the session, Matt Braithwaite of Google talked about work around Gmail to build a better understanding of the increasing encryption of email in transit. Google has useful data available on this, and its report received significant attention, resulting in a spike in encryption of email.
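The underlying measurement is straightforward to sketch: does the mail server for a domain advertise STARTTLS when greeted? A minimal standard-library version, eliding the MX lookup and probing a known Gmail inbound host directly (note that many networks block outbound port 25):

```python
import smtplib

def advertises_starttls(mail_host):
    """Check whether an SMTP server offers STARTTLS for mail in transit."""
    try:
        with smtplib.SMTP(mail_host, 25, timeout=10) as smtp:
            smtp.ehlo()
            return smtp.has_extn("starttls")
    except (smtplib.SMTPException, OSError):
        return None  # unreachable, or not speaking SMTP

print(advertises_starttls("gmail-smtp-in.l.google.com"))  # Gmail inbound MX
```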

The final panel for day one looked at Surveillance. Seth Hardy of Citizen Lab talked about ‘Targeted Threat Index: Characterizing and Quantifying Politically Motivated Malware’. This is a way of measuring the combination of social targeting (for example, the use of specific language and internal group knowledge to convince activists to open attachments) and technical sophistication, to build a better understanding of how politically-motivated malware is developing. Research from this project will be presented at USENIX Security on August 21st, 2014.
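As I understand the index, a social-targeting base score is scaled by a technical-sophistication multiplier, so a perfectly tailored lure carrying commodity malware can outrank fancier malware sent with a generic lure. A sketch, with the exact scales left to the paper:

```python
def targeted_threat_index(social_targeting, technical_multiplier):
    """social_targeting: how well the lure fits the victim group (e.g. 0-5).
    technical_multiplier: >= 1, how advanced the malware itself is."""
    return social_targeting * technical_multiplier

print(targeted_threat_index(5, 1.25))  # 6.25: sharp targeting, basic malware
print(targeted_threat_index(2, 2.0))   # 4.0: generic lure, fancier malware
```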

Bill Marczak (UC Berkeley and Citizen Lab) and John Scott-Railton (UCLA and Citizen Lab) talked about the growth of state-sponsored hacking. They described the growth of digital mercenaries: companies selling tools (such as FinFly) to governments. Some of the challenges for this research include the lack of people available to contact targeted groups and find out about the issues they might be having, and the fact that targeted users may not even realise they’re under attack in some cases. There is some information available on malware that users are accessing, but metadata on this is limited: researchers get a file name, the country of the submitter, and the time submitted, which doesn’t give information about the context in which the malware was accessed.

Ashkan Soltani spoke on how technological advances enable bulk surveillance. One of the important differences between traditional surveillance techniques and new methods is the cost. For example, Soltani estimates that for the FBI to tail someone, it’s about $50/hour by foot, $105/hour by car, and about $275/hour for covert auto pursuit with five cars. Mobile tracking might work out to between 4c and $5/hour. This means that the FBI has been able to use mobile tracking to watch 3,000 people at a time, which would be totally impossible otherwise. This is vital when we think about how different forms of surveillance are (or aren’t) regulated.
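The economics become stark when multiplied out. Quick arithmetic with the figures quoted above, for a month of round-the-clock coverage of 3,000 people:

```python
hours = 24 * 30   # one month of continuous coverage
targets = 3000

cost_per_hour = {
    "foot tail": 50.00,
    "single-car tail": 105.00,
    "five-car covert pursuit": 275.00,
    "mobile tracking (high estimate)": 5.00,
    "mobile tracking (low estimate)": 0.04,
}

for method, rate in cost_per_hour.items():
    print(f"{method}: ${rate * hours * targets:,.0f} per month")
# Foot tails for 3,000 people come to ~$108m a month; mobile tracking at
# the low estimate is under $100k, three orders of magnitude cheaper.
```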

Nicholas Weaver is based at the International Computer Science Institute, and emphasised that this gives him more freedom to work on NSA-relevant areas, because he is free to read leaks that US government employees are prohibited from accessing. He advises us not to trust any Tim Hortons near any government buildings. He gave a brief overview of NSA surveillance, arguing that it’s not particularly sophisticated and opens up a lot of vulnerabilities. Weaver said that anyone with a knowledge of the kinds of surveillance that the US’s allies (such as France and Israel) are engaging in will find them more worrying than the actions of the US’s opponents (e.g. Russia and China).

Cynthia Wong discussed work by Internet Research and Human Rights Watch on documenting the harms of surveillance. One of the organisation’s case studies has focused on Ethiopia, which is interesting because of the network of informants available, and the extreme hostility to human rights documentation and research on the part of the Ethiopian government. Surveillance in Ethiopia is complex but not necessarily sophisticated, often relying on strategies like beating people up and demanding their Facebook passwords. However, the state also buys surveillance tools from foreign companies, and documenting the harms of surveillance may help in bringing action against both companies and Ethiopia itself. The organisation also has a new report out which looks at surveillance in the US, where it’s harder to document both surveillance and resultant harms: this report highlights the chilling effects of surveillance on lawyers and journalists.

Security for the real world

August 7, 2013

I’m kicking myself for missing Observe. Hack. Make. – it sounds like it was an amazing event that brought together geek and activist communities in a really interesting and valuable way. Coverage coming through on Twitter also suggested that #OHM2013 hosted political discussions informed by a more complex political analysis than the ones I often see around digital security and civil rights. There was a lot of excitement around Eleanor Saitta’s talk in particular, Ethics and Power in the Long War. I encourage you to read the full transcript, but there were a few stand-out points that are worth emphasising.

  • Saitta talked about the need for those involved in developing digital security to stop harassing each other and have “a polite technical conversation like professionals do in the real world”. (Sarah Sharp’s recent calls for civility on the Linux mailing list give good insight into some of the culture surrounding this.) This is especially important to me because poor communication and unwelcoming discussion are among the barriers to better inter-community engagement I’ve noticed coming up over and over in my research and activism. Aggressive communication styles within a community are not only unproductive and tiring for those involved, they also make it harder for those outside the community to consider joining, or coming in and saying, “hey, we need some help with this tool” or “can we link up on this issue”.
  • She also argued that “the user model is the thing that needs to come first”. There are some really useful security tools out there that people I know would benefit from, but they’re not using them because they require investing too much time and energy to learn, and the benefits aren’t clear.
  • Linked to this is her injunction to value the “incredibly complex and very powerful pattern matching CPU hooked-up to your system that you are not using … the user”. Many activists on the ground don’t have the skills (or the interest) to work through complicated tools that aren’t user-friendly, but they do have other important skills and knowledge, including an awareness of their own needs and an informed political analysis.
  • Saitta argued that we need new tools to be informed by a theory of change, an understanding of the larger battles and overall landscape in which tools will be deployed. Although her example focused on the brittleness of security systems (once stuff breaks, it really breaks), I’d argue that we also need to think about this in terms of a political theory of change. The theory of change for a lot of digital rights activism at the moment is, ‘more information will necessarily change politics’. More information helps, but we also need to understand that the system is sustained by powerful interests, not just ignorance, and our theory of change needs to be informed by that. (Which I think is happening, increasingly.)
  • She also calls out the tech community’s claims to being apolitical: “we don’t get to be apolitical anymore. Because if you’re doing security work, if you’re doing development work and you are apolitical, then you are aiding the existing centralizing structure. If you’re doing security work and you are apolitical, you are almost certainly working for an organization that exists in a great part to prop up existing companies and existing power structures.”

In response to this, Saitta lays out her own politics, noting that the increased surveillance we’re seeing these days is an inherent function of the state as it exists today:

if we want to have something that resembles democracy, given that the tactics of power and the tactics of the rich and the technology and the typological structures that we exist within, have made that impossible, then we have to deal with this centralizing function. As with the Internet, so the world. We have to take it all apart. We have to replace these structures. And this isn’t going to happen overnight, this is a decades-long project. We need to go build something else. We need to go build collective structures for discussion and decision making and governance, which don’t rely on centralized power anymore. If we want to have democracy, and I am not even talking about digital democracy, if we want to have democratic states that are actually meaningfully democratic, that is simply a requirement now.

Conversations which make this their starting point are incredibly important right now. It’s necessary, but not sufficient, to talk about decentralising political power. We need to also be talking about what that means in practice, how it will work, what kinds of tools and systems will support it.

New article out: the emergence of the digital liberties movement

August 7, 2012

My new article is now out on First Monday. This work complements my previous conference paper, which looked at the need to see the Internet and other digital technologies as a contested space which activists must work to protect.

The digital liberties movement is an emerging social movement that draws together activism around online censorship and surveillance, free/libre and open source software, and intellectual property. This paper uses the social movement literature’s framework to build an understanding of the movement, expanding the dominant framework by including a focus on the networks which sustain the movement. While other communities and movements have addressed these issues in the past, activists within the digital liberties movement are beginning to build a sense of a collective identity and a master frame that ties together these issues. They are doing this in online spaces, including blogs, and through campaigns around landmark issues, which also help to build the network which the movement relies upon. The 2012 campaign against the U.S. Stop Online Piracy Act has highlighted the movement’s strength, but will also, perhaps, raise challenges for digital liberties activists as they confront the tension between attempts to disavow politics and a profoundly political project. [Read it all.]

Putting the Trans Pacific Partnership Agreement into context

March 16, 2011

There’s been a lot of excitement among digital liberties types about the TPPA recently, as the US IP proposals were leaked last week. There’s an excellent analysis by Kim Weatherall over at LawFont, more analysis over at techdirt, and some opposition starting up from groups like the Pirate Party and EFA. Most of these activists have raised some great points – I particularly recommend Kim Weatherall’s article, which identifies some areas that might be particularly problematic, especially relating to copyright extensions and anti-circumvention provisions.

What I find strange about a lot of this, though, is the lack of connection with other anti-trade-agreement activism. Left-wing activists have been critiquing “free trade” agreements for decades: the protests in Seattle in 1999 were some of the most visible examples of this in the global North, but they certainly haven’t been the only ones. When it comes to the TPPA, there are a number of groups continuing on from previous rounds of global justice activism, including TPPWatch (NZ) and AFTINET (AU). I’m not particularly well-linked to this activist scene, so I’m sure there are also plenty of less-visible groups.

There are a few reasons why digital liberties activists might not be connecting up with other strands of global justice activism, as I argued in my PhD. These include:

  • Many (but not all!) digital liberties activists come from “geeky” backgrounds – they know a lot about copyright, or software, but not necessarily a lot about non-institutional politics or protest movements.
  • Many digital liberties activists seem to want to avoid any association with left-wing politics, and often identify as libertarian, or as “apolitical” (despite the fact that they’re involved in intensely political projects).
  • A significant proportion of digital liberties activism comes from a pro-capitalist perspective and is based on the assumption that we need to expand the economy and encourage more “innovation”. See, for example, techdirt‘s complaints that the TPPA is “against the basic principles of the free market and consumer rights”. This doesn’t tend to mesh well with anarchist/socialist perspectives, although there are some overlaps.
  • As I’m learning more about digital liberties groups, it’s becoming clearer to me that many of those involved want to be identified as “serious” and capable of consultation. In fact, I suspect that many of them would resist the “activist” label, and would prefer to stick to formal lobbying activity, trying for inclusion in decision-making bodies.

However, while I can see the reasons that digital liberties activists might not want to link up with global justice activism against “free trade” agreements, I do think there are important arguments that they should at least consider:

  • There’s no point reinventing the wheel. Activists around the world have been involved in building critiques of the processes used to create free trade agreements, bringing attention to the fact that these processes are undemocratic and opaque. Digital liberties activists might not fully agree with the critiques put forward by global justice activists, but they can draw on them.
  • Building coalitions can be helpful, especially if they bring together a range of demographics. Demonstrating that proposed agreements are likely to have effects on people beyond a relatively small band of “knowledge workers” is a good way to put pressure on governments.
  • If you want to bring attention to intellectual property issues, you need to convince people that these issues will have some effect on their lives. Analysing them within the broader context of other provisions of free trade agreements is one way to do this.

I’ve argued elsewhere that global justice activists should be paying attention to digital liberties. I think it’s also important that digital liberties activists pay attention to what global justice activists are doing.
