October 6, 2016
Tarleton Gillespie (Microsoft Research), Jillian York (Electronic Frontier Foundation), Sarah Myers West (University of Southern California), José van Dijck (Royal Netherlands Academy of Arts and Sciences), Sarah Roberts (UCLA).
Tarleton Gillespie opened, noting that most users never run into content moderation rules: they never confront what gets deleted or suspended, particularly not the details. Other users run into those rules again and again. It’s also important to recognise that there’s a whole apparatus at work: not just the rules, but also the people who evaluate content, handle complaints, and make policy changes. This apparatus is largely opaque to users, even when they interact with parts of it. It’s also opaque to scholars. How do we begin to open this up?
Sarah Roberts talked about the shift from our online presence being confined to a server in someone’s closet to being hosted by multinational entities. Firms have outgrown any policies they (or others) can come up with, and they actively seek outside assistance in developing policy. Content moderation is often outsourced: what is the experience of those working with this content? We also need to recognise that one person’s censorship is another’s security. Total automation of these processes, some suggest, would fix these concerns, but it wouldn’t solve the opacity of moderation processes, and it wouldn’t prevent the production of problematic content.
Sarah Myers West voiced her frustration with the limits of research on moderation policies. Platforms are meant to be giving us greater voice, but in practice it can be incredibly difficult for marginalised users to get their content back up when it’s taken down – doing so usually requires getting help from an NGO. At-risk users tend not to find transparency reports and policy statements helpful, as they’re often lacking in necessary detail (especially when translated) and not adequately localised.
Jillian York discussed onlinecensorship.org, which came out of conversations she had with Palestinian activists about the kinds of content they were seeing censored. The project tries to track terms-of-service takedowns, which are much murkier than government censorship practices. There are very different contexts here: for example, when content is taken off a platform, it may still exist elsewhere. This might not matter for some users and some kinds of content. But some users’ Internet use is heavily (or exclusively) focused on particular platforms, and as platforms like Facebook encourage users to stay within the bounds of the site, content may not exist anywhere else.
José van Dijck reiterated some of the ideas from her keynote, talking about the shift from companies and institutions that (for example) specialised in news to companies like Google and Facebook that are data companies, even when they’re clearly incredibly important sources of news. We need to discuss this discrepancy.
After the initial provocations, the discussion opened up, with contributions from the audience as well. Some great points came out of this, including around the ability of users to create change, and whether news organisations and journalists should be treated differently to ordinary users.
July 29, 2014
The first day of CLSI 2014 started with Ron Deibert talking about the state of the field and the attempt currently under way to build an interdisciplinary research community around monitoring Internet openness and rights. Fenwick McKelvey has also put up a reading list of papers mentioned at CLSI 2014.
The opening panel looked at Network Measurement and Information Controls, and was facilitated by Meredith Whittaker of Google Research. Phillipa Gill gave an outline of the ICLAB project [slides]. This project is trying to develop better automation techniques for measuring censorship, allowing a better understanding not just of what is blocked, but also of how it’s being blocked, who’s blocking it, and which circumvention methods might be most effective. The tool is still in pre-alpha, but is already having some success with block-page detection: early findings will appear at IMC later this year.
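ICLAB’s actual classifiers weren’t detailed in the talk, but one common first-pass heuristic for block-page detection is worth sketching: block pages tend to be short, templated documents, so a fetched page whose size deviates sharply from what an uncensored vantage point sees is suspicious. This is a minimal sketch of that idea (the function name and threshold are my own, not ICLAB’s):

```python
def looks_like_block_page(fetched_html: str, expected_length: int,
                          threshold: float = 0.3) -> bool:
    """Flag a response as a likely block page when its length differs
    sharply from the length seen from an uncensored vantage point.

    Block pages are typically short, uniform templates, so a large
    relative deviation in size is a cheap first-pass signal (real
    systems combine this with HTML structure and keyword features).
    """
    if expected_length == 0:
        return False
    deviation = abs(len(fetched_html) - expected_length) / expected_length
    return deviation > threshold

# A tiny "Access Denied" stub standing in for a ~20 kB article is
# flagged; a page of roughly the expected size is not.
print(looks_like_block_page("<html>Access Denied</html>", 20000))  # True
print(looks_like_block_page("x" * 19500, 20000))                   # False
```

Length alone produces false positives (dynamic pages vary in size), which is part of why the project pairs automation with measurements of how and by whom blocking is done.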
Nick Feamster from Georgia Tech then discussed another project attempting to build a more nuanced picture of Web filtering than the data currently available allows. He argued that censorship takes many forms, not just blocking: performance degradation, personalisation, and other tactics. This means that measuring Web filtering is harder than it appears, and what is required is “widespread, continuous measurements of a large number of censored sites”. One issue is how to distribute client software to look for censorship; doing this through the browser is possible, but raises ethical questions.
Jeffrey Knockel of the University of New Mexico talked about moving ‘Toward Measuring Censorship Everywhere All the Time’ [slides]. The method discussed was to use side channels, which allow measuring IP censorship off-path, without running any software on the server, the client, or anywhere in between. This can be done entirely in Layer 3, which offers enough side channels. Almost 10% of IPv4 addresses respond to large pings (more in some countries); as I understand it, such machines can serve as measurement vantage points, because quirks in how they handle packets leak whether traffic between them and a censored host is being dropped, without any cooperation from either end.
Finally, Collin Anderson talked about studying information controls inside Iran. He discussed the use of mass-scale continuous data collection as a way to surface themes of political discourse within the country. This requires content-specific, context-specific knowledge. For example, when Iraq started to clamp down on the Internet, Islamist content was specifically blocked, as well as an odd assortment of pornographic sites. Anderson argued that this research will be more effective when people avoid references to “censorship”, which can be divisive, and instead talk about “interference” and “information controls”. (This was also a theme that came up in the Q&A, as Meredith discussed the need to avoid an ‘inflammatory activist tinge’ in language, project titles, and so on, because this can discourage use and endanger anyone accessing services.)
The Q&A for this last session focused quite a bit on ethics issues, and on the problems with managing these given the limitations of current ethics research boards and the challenges involved in the research itself. For example, while university ethics boards tend to prioritise ‘informed consent’, this can create problems for users of circumvention tools as it removes plausible deniability. Similarly, the idea of using anonymity to protect activists may not always match activists’ experience: some participants want their real names used because they feel this offers the protection of international visibility. Gill argued that part of what we need is better models of risk: frameworks for predicting how censors are likely to react to measurement.
The next session of the day focused on Mobile Security and Privacy. David Lie of the University of Toronto began with a discussion of ‘PScout: Analyzing the Android Permission Specification’, a static analysis of which Android API calls require which permissions, and of two-factor attestation as a way to improve data security: this combines two-factor authentication with malware protection across both laptops and mobiles/authentication tokens. (I have some concern about the focus here on ‘trusted computing’, which takes devices further out of their users’ control).
Jakub Dalek of Citizen Lab talked next about the Asia Chats project, which focuses on chat apps popular outside the Western context: in this case, LINE, FireChat, and WeChat. LINE implements blocking for users registered with a Chinese number, although there are a number of ways to circumvent this. FireChat, which has been popular in Iraq, is promoted as anonymous, but the actual content of messages is very poorly protected. Finally, Dalek noted that there was a lot of Chinese government interest in regulating WeChat.
Jason Q. Ng, also of Citizen Lab, shared his work on the same project, this time focusing on Weixin. One of the interesting trends here is the emergence of messages which place the blame on other users for blocked content, such as: “This content has been reported by multiple people, the related content is unable to be shown”. Looking at the specific kinds of content blocked suggests that even if ‘users’ are flagging this material, there’s some link with the Chinese government (or at least with government interests). More work is perhaps needed on these kinds of indirect forms of information control.
Finally, Bendert Zevenbergen of the Oxford Internet Institute outlined the Ethical Privacy Guidelines for Mobile Connectivity Measures, the outcome of a workshop held with ten lawyers and ten technical experts. He also raised the potential helpfulness of a taxonomy of Internet Measurement ethics issues, and invited people to begin collaborating in the creation of a draft document.
The next session focused on Transparency and Accountability in Corporations and Government. Chris Prince of the Office of the Privacy Commissioner of Canada talked about Canada’s annual report on the use of electronic surveillance, which has been published since 1974. A paper analysing this data, Big Brother’s Shadow, was published in 2013 and suggested important shifts in the targets and sites of surveillance.
Jon Penney of the Berkman Center, Citizen Lab, and Oxford Internet Institute, outlined three major challenges for transparency reporting in ‘Corporate Transparency: the US experience’. These include the need for more companies to be willing to share transparency reports with more and better data (including standardised data); better presentation and communication of transparency reports which balance advocacy and research and provide contextualisation; and more work on the legal and regulatory space impacting transparency reporting.
Nathalie Marechal of USC Annenberg talked about the ‘Ranking Digital Rights’ project, which is developing and testing criteria for particular privacy-protections from companies (such as whether they allow users to remain anonymous), working within an international human rights framework. This work has the potential to be useful not only for civil society actors advocating for better corporate behaviour, but also for corporations lobbying for policy change. The initial phase of the project is looking at geographically-based case studies to better understand themes across different locations, and during this phase there’s an interest in understanding how to assess multinational corporations operating across multiple regulatory contexts, including those which are acquired by other companies. Marechal and other researchers on the project are seeking feedback on the work so far.
Chris Parsons of Citizen Lab spoke on the need for better data about online privacy and related issues in the Canadian context: at the moment, we’re aware that “an eye is monitoring Canadian communications”, but we don’t have full details. This work began by sending surveys to leading Canadian companies in order to get more information on data retention. Results mainly indicated a generalised refusal to engage in any depth with the questions. The work has also been crowdsourcing ‘right of access’ information through an open request tool [try it out, if you’re Canadian!]. Unlike the surveys, these requests are legally binding, and through the data generated the researchers are trying to figure out how long online data is stored, how it is processed, and who it is shared with. Collaborations with MP Charmaine Borg have also led to more information about how Canadian intelligence and police agencies are engaging in data surveillance. From this initial research, they’re now trying to develop a transparency template to more effectively map what we still need to know.
In the final talk of the session, Matt Braithwaite of Google talked about work to build a better understanding of the increasing encryption of email in transit to and from Gmail. Google has made useful data on this available, and its report received significant attention, which resulted in a spike in the encryption of email.
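Google’s data measures STARTTLS use between mail servers at Gmail scale; a small DIY approximation (a heuristic of my own, not Google’s methodology) is to inspect a message’s Received headers, where receiving servers commonly record ‘with ESMTPS’ or a TLS version string for encrypted hops and plain ‘with ESMTP’ for cleartext ones:

```python
from email import message_from_string

def hops_with_tls(raw_message: str) -> list[bool]:
    """Return, per Received header (outermost hop first), whether the
    receiving server recorded a TLS-protected SMTP connection.

    This is a heuristic: header wording varies by mail server, so
    'esmtps'/'tls' markers catch common cases rather than all of them.
    """
    msg = message_from_string(raw_message)
    markers = ("esmtps", "tls")
    return [any(m in header.lower() for m in markers)
            for header in msg.get_all("Received", [])]

# A fabricated two-hop message: the outer hop was encrypted, the
# inner one was not.
sample = (
    "Received: from mx.example.org by mail.example.com with ESMTPS\r\n"
    "\t(version=TLS1_2 cipher=AES128-GCM-SHA256);\r\n"
    "Received: from [10.0.0.5] by mx.example.org with ESMTP;\r\n"
    "Subject: test\r\n\r\nbody\n"
)
print(hops_with_tls(sample))  # [True, False]
```

The hostnames and header values above are invented for illustration; a real message’s trace headers are messier, which is partly why aggregate reporting like Google’s was so valuable.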
The final panel for day one looked at Surveillance.
Seth Hardy of Citizen Lab talked about ‘Targeted Threat Index: Characterizing and Quantifying Politically Motivated Malware’. This is a way of measuring the combination of social targeting (for example, the use of specific language and internal group knowledge to convince activists to open attachments) and technical sophistication, to build a better understanding of how politically motivated malware is developing. Research from this project will be presented at USENIX Security on August 21st, 2014.
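The paper’s exact scoring isn’t reproduced in these notes, but as a sketch of how such an index can combine the two dimensions, my reading is that a base score for social targeting is scaled by a multiplier for technical sophistication (the ranges and example values here are illustrative, not the paper’s):

```python
def targeted_threat_index(social_base: int, technical_multiplier: float) -> float:
    """Combine social-targeting and technical scores into a single index.

    social_base: how well-tailored the lure is to the target group
      (0 = untargeted ... 5 = uses specific internal group knowledge).
    technical_multiplier: scales the base by the malware's technical
      sophistication (1.0 = off-the-shelf, higher = more advanced).
    Multiplying rather than adding means a highly targeted lure counts
    for nothing if there is no malware, and commodity malware still
    scores high when the social engineering is very specific.
    """
    if not 0 <= social_base <= 5:
        raise ValueError("social_base must be between 0 and 5")
    return social_base * technical_multiplier

# A well-targeted lure (4) carrying slightly customised commodity
# malware (x1.25):
print(targeted_threat_index(4, 1.25))  # 5.0
```

The design choice matters for the panel’s theme: much politically motivated malware is technically unremarkable, and a pure technical score would miss how dangerous a well-researched lure is.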
Bill Marczak (UC Berkeley and Citizen Lab) and John Scott-Railton (UCLA and Citizen Lab) talked about the growth of state-sponsored hacking. They described the rise of mercenaries: companies selling tools to governments (such as FinFly). Among the challenges for this research are the lack of people available to contact targeted groups and find out about the issues they might be having, and the fact that targeted users may not even realise they’re under attack. There is some information available on the malware that users are encountering, but the metadata on this is limited: researchers get a file name, the country of the submitter, and the time submitted, which doesn’t reveal the context in which the malware was accessed.
Ashkan Soltani spoke on how technological advances enable bulk surveillance. One of the important differences between traditional surveillance techniques and new methods is cost. For example, Soltani estimates that for the FBI to tail someone, it’s about $50/hour by foot, $105/hour by car, and about $275/hour for covert pursuit with five cars. Mobile tracking might work out to between four cents and $5/hour. This means that the FBI has been able to use mobile tracking to watch 3,000 people at a time, which would be totally impossible otherwise. This is vital when we think about how different forms of surveillance are (or aren’t) regulated.
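The scale difference is easy to make concrete with Soltani’s own figures, taking his 3,000-person example:

```python
# Soltani's per-hour estimates for tailing a single person (USD).
costs = {
    "foot pursuit": 50,
    "car pursuit": 105,
    "five-car covert pursuit": 275,
    "mobile tracking (upper end)": 5,
}

targets = 3000  # the scale at which the FBI has used mobile tracking
for method, per_hour in costs.items():
    print(f"{method}: ${per_hour * targets:,}/hour for {targets:,} targets")

# Even at the $5/hour upper end, mobile tracking is a tenth the cost
# of a foot tail; at the 4-cent lower end, the ratio is 1,250 to one.
print(round(50 / 0.04))  # 1250
```

At $50/hour, tailing 3,000 people on foot would cost $150,000 per hour, which makes clear why cost, not just capability, is what turns targeted surveillance into bulk surveillance.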
Nicholas Weaver is based at the International Computer Science Institute, and emphasised that this gives him more freedom to work on NSA-relevant areas: he can look at leaks that US government employees are prohibited from accessing. He advised us not to trust any Tim Hortons near government buildings. He gave a brief overview of NSA surveillance, arguing that it’s not particularly sophisticated and opens up a lot of vulnerabilities. Weaver said that anyone with knowledge of the kinds of surveillance that the US’s allies (such as France and Israel) are engaging in will find them more worrying than the actions of the US’s opponents (e.g. Russia and China).
Cynthia Wong discussed work by Internet Research and Human Rights Watch on documenting the harms of surveillance. One of the organisation’s case studies has focused on Ethiopia, which is interesting because of the network of informants available, and the extreme hostility to human rights documentation and research on the part of the Ethiopian government. Surveillance in Ethiopia is complex but not necessarily sophisticated, often relying on strategies like beating people up and demanding their Facebook passwords. However, the state also buys surveillance tools from foreign companies, and documenting the harms of surveillance may help in bringing action against both companies and Ethiopia itself. The organisation also has a new report out which looks at surveillance in the US, where it’s harder to document both surveillance and resultant harms: this report highlights the chilling effects of surveillance on lawyers and journalists.
October 30, 2012
Yesterday’s opening presentation and discussions focused on Internet governance: new challenges, different perspectives, and the lack of public awareness. This will, sadly, only be a very truncated version of the evening’s discussions, as I had to cut down the many pages of notes into a more readable form.
John Kampfner‘s keynote gave a broad overview of the issues, which I’m sure was particularly welcomed by those who weren’t already familiar with the ‘acronym soup’ of Internet governance (for a quick briefing, try: WSIS, ITU, ICANN and IGF). The upcoming meeting of the ITU in Dubai has led to some panic about an ‘Internet Armageddon’ (at least on the part of the US and some others in the West) if the ITU, a UN agency, takes a greater role in regulating the Internet. Kampfner (like everyone else on the panel) sees the ITU as an inappropriate, and possibly dangerous, body for this role, especially given the current push from the ITU towards decreasing anonymity online and strengthening government sovereignty over the Net, and the fact that only governments really have a seat at the table at the ITU. However, the other main players in Internet governance at the international level are also limited: the IGF, while it is most open to civil society engagement, remains a “cumbersome talking shop”.
Kampfner also emphasised that while the original dream of the Internet was for a freer world, the Net is becoming increasingly fettered, in large part by national governments. This is partly a response to the stepping-back of opinion-makers in society from dealing with the new questions about boundaries opened up by the vast amount of information shared online. We have yet to draw firm lines around what is and is not appropriate behaviour online, and frequently the response is to send the police out rather than engaging in a more nuanced discussion distinguishing unacceptable content (which Kampfner argues should include direct incitement to violence) from offensive but not actionable content (which he argues should include blasphemous and ‘mean’ content).
Christian Mihr‘s response to the keynote focused on two points: firstly, a defence of the IGF as both the most inclusive process for Internet governance currently available and as currently under threat from the London process; secondly, a reminder that we also need to look critically at the role of corporations. (He, very politely, did not link this to Kampfner’s role as a consultant to Google and the GNI.) Personally, I think this is crucial: when so many of us access the Internet through Apple’s walled garden, and Google is our main way of finding information in the vast mess of online material, the role of private corporations matters very much.
Ben Scott, former policy advisor for innovation to the US state department in the Obama administration, disagreed with Kampfner’s claim that all governments were seeking more control over the Internet. He said that discussions within the Obama administration during his time there had led to the conclusion that, whether or not more control was desirable, it was impossible. They started with the assumption that while the government could control the information system some of the time, they certainly couldn’t do so all of the time, and they needed to adapt accordingly. (Of course, this raises the question about surveillance and censorship provisions in the NDAA, ACTA, and other legislation.)
The discussion then shifted towards the upcoming IGF meeting in Baku. Moez Chakchouk had a fascinating perspective here, having served as the CEO of the Tunisian Internet Agency (ITA) under the previous Tunisian government and continuing his work today. Previously, he and others in the ITA had not been able to participate in the IGF because it was likely to lead to punishment from the regime. The 2011 Nairobi IGF was the first such forum he was able to attend, and this was a step in the process of learning how to build trust and communicate with civil society. While Tunisia is interested in getting involved in discussions around Internet governance, the issue is complex and the main focus at the moment is on promoting transparent debate around Internet freedoms after years of censorship.
The moderator of the panel, Geraldine de Bastion, encouraged the panellists to reflect on how governments in the West are pushing for more control and asked what Western governments would be pushing for at the next ITU meeting in Dubai.
Ben Scott’s reply was simple: nothing at all. Not, however, because Western governments don’t want to control the Internet, but because they don’t want to control it at the UN. Scott argues that the best, and most likely, outcome for international Internet governance is that there will be decades of slow work through multistakeholder institutions, building norms and negotiating, before international regulation is more thoroughly in place. This is not unprecedented: most international coordination efforts look very similar. Scott also acknowledged that this process will have to involve an internationalisation of current institutions, which remain largely US-centric (because of the origins of the Internet, rather than any conspiracy).
During the question time, I asked the panellists how effective grassroots-level campaigning around these issues had been, including the campaign around SOPA and PIPA. Ben Scott said that Internet-based campaigns are very good at mobilising against things, stopping bad legislation from happening, but not so good at the kind of long-term constructive engagement required to build alternatives: those who opposed SOPA and PIPA aren’t creating alternative legislation (my reply that they are trying was not met with enthusiasm). Similarly, Scott said, the young people who were in Tahrir Square are now not represented in the structures of power, aren’t working to build the new system (of course, not everyone sees that as a problem – see Mohammed Bamyeh’s comments in my previous post, and today’s panels have demonstrated that some young people are involved in the political process).
Another question focused on the perceived balance between national security and freedom of speech online, asking how the Tunisian government is planning on dealing with this.
Chakchouk’s answer was that for years under the Ben Ali regime they had censored large portions of the Internet, but had tried to undermine the censorship regime by demonstrating to the courts that censoring content only increased its popularity as people found other ways to share it. So because censorship is ineffective, and because they have had enough of censorship, they have been refusing all requests to censor information since August 2011. Chakchouk acknowledges that there are times that information-sharing is problematic, such as when rumours are spread or when this feeds into tensions between different Tunisian communities. But the answer is not censorship: it’s countering rumours, and making better information available.
He also, in response to another excellent question from de Bastion, pointed out that the censorship software Tunisia had been buying from the West had cost the country a great deal (and those selling the software were making even more from countries in the region with deeper pockets than Tunisia). The hypocrisy of Western governments condemning censorship and surveillance abroad while allowing companies to sell software used for this, and indeed engaging in their own censorship and surveillance, was not lost on any of the panellists.
This opening formed a good basis for the region- and country-specific sessions that follow. Mixing presentations and panel discussions is also a useful format (even if it is harder to summarise!). I wish I could also summarise some of the debates I’ve had over tea breaks – there are so many people with interesting perspectives to share, and I’m really enjoying the post-session debriefings. The posts that follow will look at how activists in South East Asia, Latin America, the Middle East and Africa are using the Internet.
If you want to follow along, you can follow the #activism2action tag on Twitter or look at more of Cucchiaio’s amazing comic-form summaries (which I only just discovered when looking for photos to illustrate this).