AoIR2016: Forced migration and digital connectivity in(to) Europe – communicative infrastructures, regulations and media discourses

October 8, 2016

Mark Latonero (USC Annenberg School) spoke on the ways in which data is being collected around forced migration flows. Latonero is interested in the technologies that are being used to track and manage refugees’ movements across borders. People were stopping at the short border between Serbia and Croatia for a variety of reasons, including to get medical treatment, food, money transfers, and wireless access.

As we research these infrastructures, we also need to examine which actors are inserting themselves into these flows (or being drawn into them). Platforms like Facebook, Whatsapp, and Viber are being used to organise travel, while others, including Google and IBM, are developing responses aimed at supporting refugees or managing refugee flows. Coursera is offering online study for refugees, and there are also other edutech responses.

Aid agencies like UNHCR are teaming up with technology companies to try to develop support infrastructures: the World Food Program, for example, is coordinating with Mastercard. The ‘tech for good’ area, including Techfugees, is also getting involved. Latonero is deeply doubtful that a lot of the hackathons in the West are going to produce systems that can help in meaningful ways.

We need to think about the social, political, and ethical consequences of the ways in which these technological structures of support, management, and surveillance are emerging.

Paula H. Kift (New York University, NY). In search of safe harbors: privacy and surveillance of refugees at the borders of Europe

There are two important EU regulations: Eurosur (drone and satellite surveillance of the Mediterranean Sea), and Eurodac (which governs biometric data).

At the moment, the EU engages in drone and satellite surveillance of boats arriving, arguing that this doesn’t impinge on privacy because it tracks boats, not individuals. However, Kift argues that the right to privacy should extend to non-identifiability as well, and the data currently being gathered does have the ability to identify individuals in aggregate.

There are claims that data on boats may be used for humanitarian reasons, to respond to boats in distress, but the actual regulations don’t specify anything about how this might happen, or who would be responsible, which suggests that humanitarian claims are tacked on, rather than central to Eurosur.

Similarly, biometric data is being collected and stored indefinitely, and this is justified with claims that it will be used to help deal with crime. This is clearly discriminatory, as refugees are no more likely to be involved in crime than citizens. Extensive biometric data is now being collected on children as young as six. This is particularly worrying for people who are fleeing government persecution.

The right to privacy should apply to refugees: blanket surveillance is discriminatory, has the potential to create serious threats to refugee safety, and is frequently being used for surveillance and control rather than any humanitarian purposes.

Kift suggests that the refusal to collect personally identifiable information can also be seen as problematic: states are refusing to process refugee claims, which creates further flow-on effects in terms of a lack of support and access to services.

Emerging coordination with tech firms creates further concerns: one organisation suggested creating an app that offered to give information on crossing borders and resettlement, but actually tracked refugee flows.

Çiğdem Bozdağ (Kadir Has University, Turkey) and Kevin Smets (Vrije Universiteit Brussel, Belgium and Universiteit Antwerpen, Belgium). Discourses about refugees and #AylanKurdi on Social Media

After the image of Aylan Kurdi was shared, research showed huge peaks in online discussions of refugees, and searches for information on refugees and Syria. However, these findings also raise further questions. Did this actually alter the debate on refugees? How did different actors use the impact of the image? And how did this take shape in different local and national contexts?

This research focused on Turkey and Belgium (and specifically on Flanders). Belgium has taken far fewer refugees than Turkey, but nevertheless there are significant debates about refugee issues in Belgium. In Maximiliaanpark, a refugee camp was set up outside the immigration offices in response to slow processing times.

In the tweets studied, there were a lot of ironic/cynical/sarcastic tweets, which would be hard to code quantitatively: qualitative methods were more appropriate to understanding these practices.

Among the citizen tweets studied, the two dominant narratives were refugees as victims, or refugees as threats. In Turkey, anti-government tweeters blame the government for victimising refugees, while pro-government tweeters blame the opposition, Assad, or humanity as a whole. In Belgium, refugees were mostly seen as victims of a lack of political action, or as the victims of instrumentalisation (by politicians, media, and NGOs). When refugees were seen as a threat, in Turkey this focused on Aylan’s Kurdish ethnicity, whereas in Belgium this drew on far-right frames.

Research also looked at reasons given for the refugee ‘crisis’: those who are against migration tended to focus on economic pull factors, those in favour tended to give more vague reasons (‘failure of humanity’). When solutions were provided, those employing a victim representation called for action and solidarity, whereas those seeing refugees as threats called for measures like closing borders.

When the image of Aylan emerged, it was usually incorporated into existing narratives, rather than changing them. The exception was ‘one-time tweeters’: people who had affective responses (a single tweet about their sadness about Aylan, and then a return to their non-refugee tweets). Both Belgian and Turkish users tended to see Gulf countries as bad ‘others’ who do not take refugees. There was little focus on Daesh.

Twitter users who were opposed to immigration tended to employ the clearest vocabulary and framework: they were very strong in expressing what they saw as the problem, and the solutions.

Unfortunately, the conclusion is pessimistic: the power of this image (on Twitter) is limited: it didn’t disrupt existing discourses, and there were also great similarities with how refugees and refugee issues are portrayed in the mainstream media.

Eugenia Siapera (Dublin City University, Ireland) and Moses Boudourides (University of Patras, Greece) presented work on the representation of refugee issues on Twitter.

There are two important theoretical frameworks: digital storytelling (Nick Couldry) and affective publics (Zizi Papacharissi). Affective publics both reflect and reorganise structures of feeling: the culture, mood, and feel of given historical moments. The refugee issue is a polymedia event, but this research focuses specifically on Twitter.

What are the affective publics around the ‘refugee issue’? There wasn’t one debate, but overlapping rhythms. Here, there were four key events: the Paris attacks, the Cologne station sexual assaults, the Idomeni crisis, and the Brussels bombing.

This research used large-scale capture of relevant tweets across many different languages. The overall story is about crisis, about European countries and their response, and about children and human rights, told in many languages. It concerns political institutions and politicians, as well as terrorist attacks and US right-wing politics. Canada and Australia are also very much involved.

Incidents in particular countries rapidly become entangled with narratives elsewhere, as they were incorporated into national debates. There’s a tendency for discussions on Twitter to fit into existing narratives and discourses.

Kaarina Nikunen, University of Tampere. Embodied solidarities: online participation of refugees and migrants as a political struggle

By drawing together the public and private, campaigns build affective engagement that can be thought of as media solidarities. This research looks at ‘Once I was a refugee’, where refugees use their own voices and bodies to embody solidarity.

In Finland, the refugee population is very low: since 1973, the country has taken in only around 42,000 people with refugee status. In 2015, 30,000 refugees came, which was a significant change. The refugee presence in the public debate is very small: debates are really between politicians and some NGOs, and refugees are silent in the mainstream media.

‘Once I was a refugee’ was initiated by two journalists, following examples in other European countries. It began in June 2015, which was crucial timing: August and September saw attacks on several reception centres, and anti-refugee rallies calling for borders to be closed. Public debates focused on the economic cost to Finland’s welfare state. The campaign tried to build a counter-narrative to these claims.

Within a few days, many young Finns shared their photos on the site: there are now 172 stories on the Facebook site. The format for stories is the same: “Once I was a refugee, now I’m a …” The site gained national attention, including in the mainstream press. It provided alternative images of labour, education, and value. The narratives are united by optimism: while they may have a sense of struggle, they highlight successful integration.

Most end with gratitude: “thank you, Finland”. This highlights the sense that refugees had (and have) of having to earn their citizenship. Uniforms are used to signal order and belonging. In particular, there are many images of people wearing army uniforms – these also gain the most shares. This can be seen as an attempt to counter claims of ‘dangerous’ refugee bodies.

Responses sometimes drew divisions between these ‘acceptable’ refugees and the need to refuse others. We should also recognise that the campaign requires former refugees to become vulnerable and visible: this is clear from the ways in which images become the focus of discussion for those against immigration. The campaign didn’t disrupt the narrative of refugees as primarily an economic burden which needed to be dealt with, merely promising that individual refugees can become productive and valuable.

However, ‘Once I was a refugee’ did open space where refugees spoke up in their defence (when others weren’t), emphasising their value and agency, and engaging in the national political debate.

AoIR2016: Activism

October 7, 2016

Digital Unmasking: the Ethical Issue of Crowd Surveillance
Mathias Klang, University of Massachusetts Boston

[This talk opened, rather jarringly, with a quotation from a guy who recently left Tor after multiple accusations of long-term predatory behaviour. I admit that this unsettled me substantially and probably didn’t help with my note-taking.]

Is there a right to protest anonymously? Anti-masking laws suggest otherwise. There is, in most jurisdictions, no legal right to anonymity, but there are some cases in which we’ve developed a commitment to anonymity, for example, in voting. Anonymity in voting shouldn’t be taken for granted: it was characterised as ‘cowardly’ in US history. We have this idea that democracy should be open.

If every device has politics, what is the politics of a device that captures mobile data? This is a technology that silences uncomfortable discourse.

Collateral Visibility
Bryce Newell, Tilburg Institute for Law, Technology, and Society

Some key questions here concern body cameras and automatic license plate recognition (ALPR) systems. Newell cited several examples of the tracking of police behaviour, and of the videotaping of police killings. Police talk about feeling victimised, or about a ‘witch-hunt’ against them. In interviews around the filming of police violence, themes of context and control emerged. This is also leading to attempts by police to limit access to footage.

In other jurisdictions, police are making data more available instead, for example, putting bodycam footage online. However, this leads to its own issues, including ‘collateral visibility’, as citizens interacting with police have their interactions shared online.

Data privacy in commercial uses of municipal location data
Meg Young, University of Washington

This research asks how data privacy is enacted by Seattle’s municipal government. Data collection drew on interviews, focus groups, and other ethnographic research. In Seattle, the state freedom of information law is grounded in a strong presumption of citizens’ right to know.

The Acyclica company collects data (MAC addresses), aggregates it, and uses it to track travel patterns within the city. If the raw data were a public record, it would be requestable. Since it’s outsourced, it’s not. But analysis of the contract suggests that the data can be resold. Data collection for this was rationalised in a variety of ways. For example, one employee said that people were ‘opting in’ by having their phones’ wifi turned on in public space.
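
My notes don’t capture the technical detail of Acyclica’s system; as a rough sketch of the kind of processing this sort of travel-time analysis involves (salted hashing of MAC addresses seen at roadside sensors, then matching hashes between sensor pairs), with all names, record formats, and the hashing scheme being my own assumptions rather than the company’s actual pipeline:

```python
import hashlib

# Hypothetical per-day salt and (sensor_id, timestamp, mac) records; illustrative only.
DAILY_SALT = b"2016-10-08"

def pseudonymise(mac: str) -> str:
    """Replace a raw MAC address with a salted hash, keeping linkage but not identity."""
    return hashlib.sha256(DAILY_SALT + mac.encode()).hexdigest()

def travel_times(records, sensor_a, sensor_b):
    """Estimate travel times for devices seen first at sensor_a and later at sensor_b."""
    first_seen = {}
    times = []
    for sensor_id, timestamp, mac in sorted(records, key=lambda r: r[1]):
        device = pseudonymise(mac)
        if sensor_id == sensor_a and device not in first_seen:
            first_seen[device] = timestamp
        elif sensor_id == sensor_b and device in first_seen:
            times.append(timestamp - first_seen.pop(device))
    return times
```

Even in this toy form, the privacy question is visible: the hashes still link one device’s appearances across the city, which is exactly the kind of data the public-records question turns on.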

Sandra Braman provided some closing comments. One key question: what would you do (as an individual activist and as a community), assuming all of this is true, to be as politically effective as possible? We have to recognise that no matter what we do, it will be unpredictable. Activists can use big data (and other) analysis as well as researchers. [And somewhere in there discussion shifted to another skeevy JA from the tech activist world and I unfortunately ran entirely out of energy].

Citizen Lab Summer Institute on Monitoring Internet Openness and Rights, Day 1

July 29, 2014

The first day of CLSI 2014 started with Ron Deibert talking about the state of the field and the attempt currently under way to build an inter-disciplinary research community around monitoring Internet openness and rights. Fenwick McKelvey has also put up a reading list of papers mentioned at CLSI 2014.

The opening panel looked at Network Measurement and Information Controls, and was facilitated by Meredith Whittaker of Google Research. Phillipa Gill gave an outline of the ICLAB project [slides]. This project is trying to develop better automation techniques for measuring censorship, which would allow a better understanding of not just what is blocked, but also how it’s being blocked, who’s blocking it, and which circumvention methods might be most effective. At the moment the tool is still running in pre-alpha, and having some successes with block page detection: early findings will come out at IMC later this year.
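
The project’s actual detection pipeline isn’t recorded in my notes; one common heuristic for block-page detection is to compare the response fetched from a test vantage point against a known-good control copy and flag large differences. A minimal sketch of that single heuristic, with an arbitrary threshold:

```python
def looks_like_block_page(test_html: str, control_html: str, threshold: float = 0.3) -> bool:
    """Flag a response whose length differs sharply from a known-good control copy.

    Length comparison is only one signal among several (status codes, redirects, and
    known block-page fingerprints are others); the 30% threshold is a placeholder.
    """
    if not control_html:
        return False
    ratio = len(test_html) / len(control_html)
    return abs(1 - ratio) > threshold
```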

Nick Feamster from Georgia Tech then discussed another project which is attempting to build a more nuanced picture of Web filtering than the data currently available. He argued that censorship takes many forms, not just blocking: performance degradation, personalisation, and other tactics. This means that measuring Web filtering is harder than it appears, and what is required is, “Widespread, continuous measurements of a large number of censored sites.” Issues with this include the problem of distributing client software to look for censorship, which is potentially done through the browser. This is possible, but leads to ethical issues.

Jeffrey Knockel of the University of New Mexico talked about moving, ‘Toward Measuring Censorship Everywhere All the Time’ [slides]. The method discussed here was to use side channels, which allows measuring IP censorship off-path without running any software on the server or the client or anywhere in between. This can be done completely in Layer 3, which has enough side channels.  Almost 10% of IPv4 addresses respond to large pings, higher in some countries – this allows for more vantage points. [I have no idea what this means.]
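
As best I can tell, the point about large pings is about finding usable measurement points: hosts that answer oversized ICMP echo requests can potentially serve as off-path vantage points without any software being installed on them. A rough sketch of that discovery step only (using the system ping binary with Linux-style flags; this does not perform the side-channel measurement itself):

```python
import subprocess

def responds_to_large_ping(addr: str, payload_bytes: int = 1400, timeout_s: int = 2) -> bool:
    """Return True if `addr` answers a single ICMP echo request with a large payload."""
    result = subprocess.run(
        ["ping", "-c", "1", "-s", str(payload_bytes), "-W", str(timeout_s), addr],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0
```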

Finally, Collin Anderson talked about studying information controls inside Iran. He discussed the use of mass-scale continuous data collection as a way to show themes of political discourse within the country. This requires content-specific, context-specific knowledge. For example, when Iraq started to clamp down on the Internet, Islamist content was specifically blocked, as well as an odd assortment of pornographic sites. Anderson argued that this research will be more effective when people avoid references to “censorship”, which can be divisive, and instead talk about “interference” and “information controls”. (This was also a theme that came up in the Q&A, as Meredith discussed the need to avoid an ‘inflammatory activist tinge’ in language, project titles, and so on, because this can discourage use and endanger anyone accessing services.)

The Q&A for this last session focused quite a bit on ethics issues, and on the problems with managing these given the limitations of current ethics research boards and the challenges involved in the research itself. For example, while university ethics boards tend to prioritise ‘informed consent’, this can create problems for users of circumvention tools as it removes plausible deniability. Similarly, the idea of using anonymity to protect activists may not always match activists’ experience: some participants want their real names used because they feel this offers the protection of international visibility. Gill argued that part of what we need is better models of risk: frameworks for predicting how censors are likely to react to measurement.

The next session of the day focused on Mobile Security and Privacy. David Lie of the University of Toronto began with a discussion of ‘Pscout: Analyzing the Android Permission Specification’. This tool uses two-factor attestation as a way to improve data security, combining two-factor authentication with malware protection across both laptops and mobiles/authentication tokens. (I have some concern about the focus here on ‘trusted computing’, which takes devices further out of their users’ control.)

Jakub Dalek of Citizen Lab talked next about the Asia Chats project, which focuses on chat apps that are popular outside the western context. In this case, Line, Firechat, and WeChat. Line implements blocking for users registered with a Chinese number, although there are a number of ways to circumvent this blocking. Firechat, which has been popular in Iraq, is promoted as being anonymous, but the actual content of messages is very poorly protected. Finally, Dalek noted that there was a lot of Chinese government interest in regulating WeChat.

Jason Q. Ng, also of Citizen Lab, shared his work on the same project, this time focusing on Weixin. One of the interesting trends here is the emergence of messages which place the blame on other users for blocked content, such as: “This content has been reported by multiple people, the related content is unable to be shown”. Looking at the specific kinds of content blocked suggests that even if ‘users’ are blocking this material, there’s some link with the Chinese government (or at least with government interests). More work is needed, perhaps, which looks at these kinds of indirect forms of information control.

Finally, Bendert Zevenbergen of the Oxford Internet Institute outlined the Ethical Privacy Guidelines for Mobile Connectivity Measures, the outcome of a workshop held with ten lawyers and ten technical experts. He also raised the potential helpfulness of a taxonomy of Internet Measurement ethics issues, and invited people to begin collaborating in the creation of a draft document.

The next session focused on Transparency and Accountability in Corporations and Government. Chris Prince of the Office of the Privacy Commissioner of Canada talked about the annual report in Canada on the use of electronic surveillance which has been made available since 1974. A paper analysing this data, Big Brother’s Shadow, was published in 2013, and suggested important shifts in targets and sites of surveillance.

Jon Penney of the Berkman Center, Citizen Lab, and Oxford Internet Institute, outlined three major challenges for transparency reporting in ‘Corporate Transparency: the US experience’. These include the need for more companies to be willing to share transparency reports with more and better data (including standardised data); better presentation and communication of transparency reports which balance advocacy and research and provide contextualisation; and more work on the legal and regulatory space impacting transparency reporting.

Nathalie Marechal of USC Annenberg talked about the ‘Ranking Digital Rights‘ project, which is developing and testing criteria for particular privacy-protections from companies (such as whether they allow users to remain anonymous), working within an international human rights framework. This work has the potential to be useful not only for civil society actors advocating for better corporate behaviour, but also for corporations lobbying for policy change. The initial phase of the project is looking at geographically-based case studies to better understand themes across different locations, and during this phase there’s an interest in understanding how to assess multinational corporations operating across multiple regulatory contexts, including those which are acquired by other companies. Marechal and other researchers on the project are seeking feedback on the work so far.

Chris Parsons of Citizen Lab spoke on the need for better data about online privacy and related issues in the Canadian context: at the moment, we’re aware that “an eye is monitoring Canadian communications”, but we don’t have full details. This work began by sending surveys to leading Canadian companies in order to get more information on data retention. Results mainly indicated a generalised refusal to engage in any depth with the questions. The work has also been crowdsourcing ‘right of access’ information through an open request tool [try it out, if you’re Canadian!]. Unlike the surveys, these requests are legally binding, and through the data generated, they’re trying to figure out how long online data is stored, how it is processed, and who it is shared with. Collaborations with MP Charmaine Borg have also led to more information about how Canadian intelligence and police agencies are engaging in data surveillance. From this initial research, they’re now trying to use this data to develop a transparency template to more effectively map what we still need to know.

In the final talk of the session, Matt Braithwaite of Google talked about work around Gmail to build a better understanding of increasing encryption of email in transit. Google also has useful data available on this, and their report on it received significant attention, which resulted in a spike in encryption of email.
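
Google’s report covers STARTTLS support for mail in transit between providers; as a small illustrative sketch (not Google’s methodology) of checking whether one domain’s inbound mail server advertises STARTTLS, assuming the dnspython package for the MX lookup:

```python
import smtplib
import dns.resolver  # pip install dnspython

def inbound_mx_supports_starttls(domain: str) -> bool:
    """Check whether the highest-priority MX host for `domain` advertises STARTTLS."""
    mx_records = sorted(dns.resolver.resolve(domain, "MX"), key=lambda r: r.preference)
    mx_host = str(mx_records[0].exchange).rstrip(".")
    with smtplib.SMTP(mx_host, 25, timeout=10) as smtp:
        smtp.ehlo()
        return smtp.has_extn("starttls")
```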

The final panel for day one looked at Surveillance.

Seth Hardy of Citizen Lab talked about ‘Targeted Threat Index: Characterizing and Quantifying Politically Motivated Malware’. This is a way of measuring the combination of social targeting (for example, the use of specific language and internal group knowledge to convince activists to open attachments) and technical sophistication, to build a better understanding of how politically-motivated malware is developing. Research from this project will be presented at USENIX Security on August 21st, 2014.
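
My notes don’t record the index’s actual scale; as a toy sketch of the general idea (a social-targeting base score scaled by a technical-sophistication multiplier), with made-up ranges rather than Citizen Lab’s published values:

```python
def targeted_threat_index(social_targeting: float, technical_sophistication: float) -> float:
    """Toy targeted-threat score: a social-targeting base value scaled by a
    technical-sophistication multiplier. The 0-5 and 1-2 ranges are placeholders,
    not Citizen Lab's published scale."""
    assert 0 <= social_targeting <= 5, "social targeting score out of range"
    assert 1 <= technical_sophistication <= 2, "technical multiplier out of range"
    return social_targeting * technical_sophistication
```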

Bill Marczak (UC Berkeley and Citizen Lab) and John Scott-Railton (UCLA and Citizen Lab) talked about the growth of state-sponsored hacking. They described the growth of mercenaries: companies selling tools to governments (such as FinFly). Some of the challenges for this research include the lack of people available to contact targeted groups and find out about the issues they might be having, and the fact that targeted users may not even realise they’re under attack in some cases. There is some information available on malware that users are accessing, but metadata on this is limited: researchers get a file name, country of submitter, and time submitted, which doesn’t give information about the context in which malware was accessed.

Ashkan Soltani spoke on how technological advances enable bulk surveillance. One of the important differences between traditional surveillance techniques and new methods is the cost. For example, Soltani estimates that for the FBI to tail someone, it’s about $50/hr by foot, $105/hour by car, and covert auto pursuit with five cars is about $275/hour. Mobile tracking might work out to between 4c and $5/hour. This means that the FBI has been able to use mobile tracking to watch 3,000 people at a time, which would be totally impossible otherwise. This is vital when we think about how different forms of surveillance are (or aren’t) regulated.
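
A quick back-of-the-envelope comparison with the figures Soltani quoted makes the scale of the difference obvious (the 24-hour, 3,000-target scenario is my own illustrative assumption):

```python
targets = 3000
hours = 24

car_tail    = 105  * hours * targets   # $105/hour per target by car
mobile_low  = 0.04 * hours * targets   # 4 cents/hour per target
mobile_high = 5    * hours * targets   # $5/hour per target

print(f"Tailing {targets:,} people by car for a day: ${car_tail:,.0f}")
print(f"Mobile tracking, low estimate:             ${mobile_low:,.0f}")
print(f"Mobile tracking, high estimate:            ${mobile_high:,.0f}")
```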

Nicholas Weaver is based at the International Computer Science Institute, and emphasised that this gives him more freedom to look at NSA-relevant areas, because he is free to look at leaks that US government employees are prohibited from accessing. He advises us not to trust any Tim Horton’s near any government buildings. He gave a brief overview of NSA surveillance, arguing that it’s not particularly sophisticated and opens up a lot of vulnerabilities. Weaver said that anyone with a knowledge of the kinds of surveillance that the US’s allies (such as France and Israel) are engaging in will find them more worrying than the actions of the US’s opponents (e.g. Russia and China).

Cynthia Wong discussed internet research work at Human Rights Watch on documenting the harms of surveillance. One of the organisation’s case studies has focused on Ethiopia, which is interesting because of the network of informants available, and the extreme hostility to human rights documentation and research on the part of the Ethiopian government. Surveillance in Ethiopia is complex but not necessarily sophisticated, often relying on strategies like beating people up and demanding their Facebook passwords. However, the state also buys surveillance tools from foreign companies, and documenting the harms of surveillance may help in bringing action against both companies and Ethiopia itself. The organisation also has a new report out which looks at surveillance in the US, where it’s harder to document both surveillance and resultant harms: this report highlights the chilling effects of surveillance on lawyers and journalists.

Security for the real world

August 7, 2013

I’m kicking myself for missing Observe. Hack. Make. – it sounds like it was an amazing event that brought together geek and activist communities in a really interesting and valuable way. Coverage coming through on Twitter also suggested that #OHM2013 hosted political discussions that were informed by a more complex political analysis than the ones I often see surrounding issues about digital security and civil rights. There was a lot of excitement around Eleanor Saitta’s talk in particular, Ethics and Power in the Long War. I encourage you to read the full transcript, but there were a few stand-out points that are worth emphasising.

  • Saitta talked about the need for those involved in developing digital security to stop harassing each other and have “a polite technical conversation like professionals do in the real world”. (Sarah Sharp’s recent calls for civility on the Linux mailing list give good insight into some of the culture surrounding this.) This is especially important to me because poor communication and unwelcoming discussion are one of the barriers to better inter-community engagement I’ve noticed coming up over and over in my research and activism. Aggressive communication styles within a community are not only unproductive and tiring for those involved, they also make it harder for those outside the community to consider joining, or coming in and saying, “hey, we need some help with this tool” or “can we link up on this issue”.
  • She also argued that “the user model is the thing that needs to come first”. There are some really useful security tools out there that people I know would benefit from, but they’re not using them because they require investing too much time and energy to learn, and the benefits aren’t clear.
  • Linked to this is her injunction to value the “incredibly complex and very powerful pattern matching CPU hooked-up to your system that you are not using … the user”. Many activists on the ground don’t have the skills (or the interest) to work through complicated tools that aren’t user-friendly, but they do have other important skills and knowledge, including an awareness of their own needs and an informed political analysis.
  • Saitta argued that we need new tools to be informed by a theory of change, an understanding of the larger battles and overall landscape in which tools will be deployed. Although her example focused on the brittleness of security systems (once stuff breaks, it really breaks), I’d argue that we also need to think about this in terms of a political theory of change. The theory of change for a lot of digital rights activism at the moment is, ‘more information will necessarily change politics’. More information helps, but we also need to understand that the system is sustained by powerful interests, not just ignorance, and our theory of change needs to be informed by that. (Which I think is happening, increasingly.)
  • She also calls out the tech community’s claims to being apolitical: “we don’t get to be apolitical anymore. Because If you’re doing security work, if you’re doing development work and you are apolitical, then you are aiding the existing centralizing structure. If you’re doing security work and you are apolitical, you are almost certainly working for an organization that exists in a great part to prop up existing companies and existing power structures.”

In response to this, Saitta lays out her own politics, noting that the increased surveillance we’re seeing these days is an inherent function of the state as it exists today:

if we want to have something that resembles democracy, given that the tactics of power and the tactics of the rich and the technology and the typological structures that we exist within, have made that impossible, then we have to deal with this centralizing function. As with the Internet, so the world. We have to take it all apart. We have to replace these structures. And this isn’t going to happen overnight, this is decades long project. We need to go build something else. We need to go build collective structures for discussion and decision making and governance, which don’t rely on centralized power anymore. If we want to have democracy, and I am not even talking about digital democracy, if we want to have democratic states that are actually meaningfully democratic, that is simply a requirement now.

Conversations which make this their starting point are incredibly important right now. It’s necessary, but not sufficient, to talk about decentralising political power. We need to also be talking about what that means in practice, how it will work, what kinds of tools and systems will support it.
