Wrapping up Mapping Movements

Over the last few years, I’ve been working with Tim Highfield on Mapping Movements, a project exploring the connections and disjunctions of activism that crosses online and offline spaces. We had a book contract to bring the research together and write up material that hasn’t made it into other publications, but we’ve decided to withdraw it. It was the right choice to make, and it means wrapping up the project.

I learned a lot doing this research, and even though not all of it will see publication, it will continue to weave through my understanding of the myriad ways people are trying to create change in the world. This post is an awkward goodbye, and a chance to reflect on some of what I learned.

A large part of what I found valuable (as in many of my collaborations) was working out how our approaches fit: how to bring together quantitative and qualitative data from the Internet and the streets to show more than we might see otherwise. We wrote a bit about our methodology in ‘Mapping Movements – Social Movement Research and Big Data: Critiques and Alternatives’ (in Compromised Data: From Social Media to Big Data) and a chapter in the forthcoming Second International Handbook of Internet Research. I continue to reflect on how academics can engage in research that’s safe, and hopefully eventually also useful, for activists. Internet research poses particular challenges in this respect, in part because of the increased online surveillance of many social movements.

Fieldwork I carried out for Occupy Oakland and #oo: uses of Twitter within the Occupy movement was particularly instructive when it came to thinking about surveillance and oppression. There were important debates happening in Occupy at the time about livestreaming and the ways in which citizen journalism might feed into claims to represent or lead the movement. And the open police violence made it clear what the stakes might be. I won’t forget being teargassed, seeing someone carried away on a stretcher, being kettled, running with a group of friends as we got away, desperately trying to work out where the bulk of the marchers were and whether there was anything we could do to help them. This violence was a large part of what dispersed the Occupy movement, but activists also spoke about how it prompted them towards a deeper understanding of the problems with the US state and the lengths to which it will go to protect capitalism.

My second round of fieldwork, in Athens, led to Harbouring Dissent: Greek Independent and Social Media and the Antifascist Movement. Activists there are doing vital work resisting fascism and racism and, increasingly, working to support refugees seeking safety. I am so grateful for the people I met through a friend-of-a-friend-of-a-friend who were willing to talk to me, help me improve my shoddy classroom Greek, make introductions, and argue with my analyses. Getting the opportunity to talk about some of my work at Bfest and in small workshops made me feel like there’s some hope for this research to be useful beyond academia.

Finally, research at the 2015 World Social Forum in Tunis is unlikely to be published. However, it did feed into my continuing reflections on the way the WSF is constituted and contested.

Mapping Movements helped me grow a lot as a researcher and let me connect and better understand movements that I often feel very far from in Perth. Ending the project opens up space to consider what comes next. Whatever that is, I know it will continue to be influenced by the work we’ve done over the last few years.

AIES: AI for social good, human machine interactions, and trustworthy AI

If you want to read more about any of these, accepted papers are here.

AI for Social Good

On Influencing Individual Behavior for Reducing Transportation Energy Expenditure in a Large Population, Shiwali Mohan, Frances Yan, Victoria Bellotti, Ahmed Elbery, Hesham Rakha and Matthew Klenk

Transportation is a huge drain on energy use: how can we develop multi-modal planning systems to improve this? We need systems that humans find useful and will actually adopt, which means finding timely, acceptable, and compelling ways to suggest transport options.

Guiding Prosecutorial Decisions with an Interpretable Statistical Model, Zhiyuan Lin, Alex Chohlas-Wood and Sharad Goel

District attorneys will often hold arrestees in jail for several business days (which may mean many days if it’s over the weekend or a holiday) while they decide whether to press charges. Most reports on cases arrive shortly after booking, but they aren’t always processed in time. This research proposes a system to sort cases from most likely to be dismissed to least likely, allowing faster processing (with the district attorney having final discretion). [Note: this seems to introduce some worrying possibilities for bias, including racial bias. When I asked about this, the presenters said that the model was trained on historical data, which was “fair across races”. This seems to require much more careful interrogation, given all the evidence on incarceration and racism in the US. In answer to another question, the presenters said that they didn’t expect the DA would be influenced by the system’s recommendations; the DA would still carefully evaluate each case. Again: this seems to require further interrogation, especially given the work (cited in a bunch of other talks here) on bias in machine learning models used for sentencing.]
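[Note: to make the ranking idea concrete, here is a minimal sketch of the general approach as I understood it: an interpretable model scores cases by predicted probability of dismissal, and the most-likely-dismissed cases are reviewed first. The features, data, and model choice are my own hypothetical stand-ins, not the authors’ system.]

```python
# Minimal sketch (not the authors' model): rank booked cases so that those
# most likely to be dismissed are reviewed first. Features and data are
# hypothetical placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical cases: [prior_arrests, evidence_items, days_since_booking]
X_train = np.array([[0, 1, 1], [5, 4, 2], [1, 1, 3], [3, 2, 1], [0, 2, 2]])
y_train = np.array([1, 0, 1, 0, 1])  # 1 = case was ultimately dismissed

model = LogisticRegression().fit(X_train, y_train)

# New bookings awaiting a charging decision
new_cases = {"case_A": [0, 1, 1], "case_B": [4, 3, 2], "case_C": [1, 2, 1]}
scores = {cid: model.predict_proba([feats])[0, 1] for cid, feats in new_cases.items()}

# Review queue: most likely to be dismissed first; the DA retains final discretion.
for cid, p in sorted(scores.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{cid}: predicted P(dismissal) = {p:.2f}")
```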

Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithm and experimentation, Cristina Cornelio, Lucrezia Furian, Antonio Nicolò and Francesca Rossi

This research looks at ways of introducing chains of transplants: starting from a deceased-donor organ, continuing with consecutive donations among incompatible donor-recipient pairs, and ending with a final donation to recipients who would otherwise be less likely to receive one. The findings suggest that such chains of donation could be useful.
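[Note: a toy sketch of how such a chain might be assembled: the deceased-donor kidney goes to the recipient of an incompatible pair, freeing that pair’s living donor to give to the next pair, and so on, with the last living donor able to donate to the waiting list. The blood-type rule and the greedy search are my own simplifications, not the authors’ algorithm.]

```python
# Toy sketch (not the authors' algorithm): grow a donation chain that starts
# with a deceased-donor kidney and hops through incompatible donor-recipient
# pairs. Pair data and the compatibility test are hypothetical.
BLOOD_COMPATIBLE = {"O": {"O", "A", "B", "AB"}, "A": {"A", "AB"},
                    "B": {"B", "AB"}, "AB": {"AB"}}

def compatible(donor_type, recipient_type):
    return recipient_type in BLOOD_COMPATIBLE[donor_type]

# Each incompatible pair: (pair_id, living donor's blood type, recipient's blood type)
pairs = [("P1", "A", "B"), ("P2", "B", "AB"), ("P3", "O", "A")]

def build_chain(deceased_donor_type, pairs):
    """Greedily extend a chain: each step's kidney goes to a pair's recipient,
    freeing that pair's living donor to give the next kidney."""
    chain, current_donor, remaining = [], deceased_donor_type, list(pairs)
    while True:
        nxt = next((p for p in remaining if compatible(current_donor, p[2])), None)
        if nxt is None:
            break
        chain.append(nxt[0])
        current_donor = nxt[1]  # this pair's living donor donates next
        remaining.remove(nxt)
    return chain, current_donor  # the final living donor could give to the waiting list

chain, final_donor = build_chain("A", pairs)
print("Chain of pairs served:", chain, "| last living donor (type", final_donor + ") to waitlist")
```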

Inferring Work Task Automatability from AI Expert Evidence, Paul Duckworth, Logan Graham and Michael Osborn

We’re currently unsure about what is automatable, and why some tasks are more automatable than others. Looking at tasks (rather than jobs) is one way to evaluate this. The research looked at 150+ experts’ evaluations of different tasks. Work automatability was unevenly distributed across jobs, and disproportionately affects those least able to adjust (people with less education and in lower-paid jobs). This is exploratory research! Please write papers that explore real-world validation of this work, the differences between the potential for work to be automatable and whether that work should be automated, and other related issues. [Note: like maybe how to use this as a basis for decreasing standard working hours?]

Human and Machine Interaction

Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots, Arifah Addison, Kumar Yogeeswaran and Christoph Bartneck

This research examines whether racial biases demonstrated towards humans transfer to robots, using a modified version of the ‘police officer’s dilemma’ study. The previously-demonstrated shooter bias (US participants across all groups being more likely to shoot Black targets) did transfer to robots. In follow-up studies, the researchers asked whether anthropomorphism and racial diversity would modify this. It would be useful to expand this research, including to consider whether bias can be transferred from robots to humans (as well as from humans to robots), and whether there are human-robot interaction strategies that can decrease bias. It also seems that as robots become more human-like, they’re also designed to reflect their creators’ racial identification more.

Tact in Noncompliance: The Need for Pragmatically Apt Responses to Unethical Commands, Ryan Blake Jackson, Ruchen Wen and Tom Williams

This research looks at moral competence in social robots (drawing on Malle and Scheutz, 2014). Natural language capability seems very useful for robots, especially when we think about robots in caring roles. However, robots shouldn’t follow every command: there are a range of reasons for rejecting commands, but how should a robot phrase its refusal? If the rejection is too impolite it might have social consequences, and if it’s too polite it may imply tacit approval of norm violations. Robots’ responses influence humans’ perceptions of the robots’ likeability, and future research may show other ways that responses can feed back into human behaviour. [Note: I wonder how this would be affected by humans’ perceptions of robots as gendered?]


AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI, Karina Vold and Jose Hernandez-Orallo

How would our approach to AI change if we saw it as part of us? And how would it change our potential for impacting on society? This isn’t merely abstract: AI systems can be thought of as ‘cognitive extenders’ which are outside our skull but are still part of how we think. We can see AI as existing on a continuum between autonomous and internalised. This work draws on Hutchins’ (1999) definition of cognitive extenders. This opens up a range of issues about dependency, interference, and control.

Human Trust Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simulator, Shervin Shahrdar, Corey Park and Mehrdad Nojoumian

This study considered two groups of trust-damaging incidents, drawing on substantial data that was carefully gathered with regard to IRB guidelines and laws. But also my gosh I am tired by now, sorry.

 

The Value of Trustworthy AI, David Danks

We’re using the word ‘trust’ to mean radically different things, and this has important consequences. Trust is the thing we should seek in our AI. We can understand ‘trust’ as a function of the trustor making themself vulnerable because of positive expectations about the behavior or intentions of the trustee. For example, we might trust that the car will start in the morning, allowing us to get to work on time.

Psychological literature gives several different understandings of trust, including behavioural reliability and understanding of the trustee. There are a couple of themes in this literature. The first is a focus on ‘what is entrusted’ (the trustee should have, or act as if she has, the same values as the trustor). The second is a predictive gap (trust requires that expectations or hopes are not certainties). If you’re going to use a system ethically, you need to have a reasonable expectation that it will behave (at least approximately) as intended.

This has a variety of implications. For example, explainability is important for trust because it provides relevant information about dispositions. Simple measures of trust are insufficient – we need to understand trust in deeper and more nuanced ways.

AIES: Human-AI collaboration, social science approaches to AI, measurement and justice

Specifying AI Objectives as a Human-AI Collaboration Problem, Anca Dragan

Dragan describes some problems with self-driving cars, like the example of a car giving up on merging when there was no gap. After adding some more aggressive driving tactics, researchers then also had to add some courtesy to moderate those. One odd outcome of this was that when the car got to an uncontrolled intersection with another car, it would back up slightly to signal to the other driver that it could go first. Which actually worked fine! It mostly led to the other driver crossing the intersection more quickly (probably because they felt confident that the self-driving car wasn’t going to go)… except if there’s another car waiting behind the self-driving car, or a very unnerved passenger in the car. It’s a challenge to work out what robots should be optimising for when it comes to human-robot interactions. Generating good behaviour requires specifying a good cost function, which is remarkably difficult for most agents.
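[Note: to illustrate why specifying a good cost function is so hard, here is a toy weighted cost for candidate merge plans. The terms, weights, and numbers are mine, not Dragan’s; the point is just that small changes to the weights flip the behaviour.]

```python
# Toy illustration (not Dragan's actual cost function): trajectory cost as a
# weighted sum of competing terms. Tuning the weights trades off assertiveness
# against courtesy, which is where unintended behaviours creep in.
def trajectory_cost(progress, risk, inconvenience_to_others,
                    w_progress=1.0, w_risk=5.0, w_courtesy=0.5):
    """Lower is better: reward progress, penalise collision risk and the
    inconvenience the plan imposes on other drivers."""
    return -w_progress * progress + w_risk * risk + w_courtesy * inconvenience_to_others

# Two candidate merge plans (hypothetical numbers)
timid = trajectory_cost(progress=0.2, risk=0.01, inconvenience_to_others=0.0)
pushy = trajectory_cost(progress=0.9, risk=0.05, inconvenience_to_others=0.6)

print("timid:", round(timid, 3), "| pushy:", round(pushy, 3))
# With w_courtesy=0.5 the pushy merge wins; raise w_courtesy to 2.0 and the
# timid plan wins again, so small weight changes flip the behaviour.
```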

Designers need to think about how robots can work in partnership with humans to work out what their goals actually are (because humans are often bad at this). Robots that can go back to humans and actively query whether they’re making the right choices will be more effective. This framework also lets us think about humans as wanting the robots to do well.

Social Science Models for AI
Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures, Daniel Susser

This talk focused specifically on individual (rather than structural) issues in AI ethics. It drew on behavioural economics, philosophy of technology, and normative ethics to connect a set of abstract ethical principles to a (somewhat) concrete set of design choices.

Susser draws on an understanding of online manipulation as the use of information technology to impose hidden influences on another person’s decision-making: this undermines their autonomy, which can produce the further harm of diminishing their welfare. Thaler and Sunstein’s Nudge discusses choice architecture: the framing of our decision-making. We act reflexively and habitually on the basis of subtle cues, so choice architecture can have an enormous impact on our decisions. Adaptive choice environments are highly personalised choice environments that draw on user data.

What kind of world are we building with these tools? Technological transparency: once we become adept at using technologies they recede from conscious awareness (this is kind of the opposite of how we talk about transparency in a governance context). Our environment is full of tools that are functionally invisible to us, but which shape our choices in significant ways. Adaptive choice architectures create vulnerabilities in our decision-making, and there are few reasons to assume that the technology industry shaping those architectures is trustworthy. Manipulation is harmful even when it doesn’t change people’s behaviour, because of the threat it poses to our autonomy.

Reinforcement learning and inverse reinforcement learning with system 1 and system 2, Alexander Peysakhovich

We might think of ourselves as a dual system model: system one is fast, effortless, emotional and heuristic; system two is slower and more laborious. We often need to balance short-term desires (EAT THE DONUT) against longer-term goals (HOARD DONUTS INTO A GIANT PILE TO ATTRACT A DONUT-LOVING DRAGON). [Note: these are my own examples.]

How do we deal with this? We need to have good models for understanding how irrational we are. We also need to balance these two systems against each other.
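[Note: my own gloss, not Peysakhovich’s model: you can picture the two systems as the same stream of rewards evaluated with different discount factors.]

```python
# Toy gloss (not Peysakhovich's model): the same choice scored by a myopic
# "system 1" and a patient "system 2", i.e. two different discount factors.
def discounted_value(rewards, gamma):
    return sum((gamma ** t) * r for t, r in enumerate(rewards))

# Rewards over time for each option (hypothetical numbers)
eat_the_donut = [10, -2, -2, -2, -2]    # instant joy, lingering regret
hoard_the_donut = [0, 0, 0, 0, 25]      # delayed dragon-attracting payoff

for gamma, label in [(0.5, "system 1 (impulsive)"), (0.95, "system 2 (patient)")]:
    eat = discounted_value(eat_the_donut, gamma)
    hoard = discounted_value(hoard_the_donut, gamma)
    choice = "eat" if eat > hoard else "hoard"
    print(f"{label}: eat={eat:.1f}, hoard={hoard:.1f} -> {choice}")
```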

Incomplete Contracting and AI Alignment, Dylan Hadfield-Menell and Gillian Hadfield

Problem: there’s a misalignment between individual and social welfare in many cases. AI research can draw on economic research around the contract design problem. Economists have discovered that contracts are always incomplete, failing to consider important factors like the expenditure of effort. Misspecification in contract design is unavoidable and pervasive, and it’s useful for AI research to learn from this: it’s not just an engineering error or a mistake. Economic theory offers insights for weakly strategic AI. Human contracts are incomplete, and relational – they’re always shaped by and interpreted by the wider context. Can we build AIs that can similarly draw on their broader context?

Then our talk!

AIES Day 1: Artificial Agency, Autonomy and Lethality, Rights and Principles.

Sadly I missed the first few talks of the Artificial Agency session because we had to wander around a bunch to find lunch. Conference organisers: I cannot emphasise enough the value of easily-available and delicious snacks. Also, I tend to be pretty dazed during afternoon talks these days because of Jetlag + Nonsense Toddler. Luckily, accepted papers are available here!

Speaking on Behalf: Representation, Delegation, and Authority in Computational Text Analysis, Eric Baumer and Micki McGee [Note: Baumer referred to ASD, I’m aware that framing this as a ‘disorder’ is contested, including by people with autism who are part of the neurodiversity movement.]
Baumer discusses analysing Autism Spectrum Disorder (ASD) Parenting blogs, and becoming unsure whether it was ethical to publish the results. Initial data gathering seems innocent. However, we should think about the ways in which objects can ‘speak for’ people (drawing on Latour and others). Computational text analysis has the potential to become the lens through which we see the bloggers, and the topic itself. Claims about what a group of people are ‘really’ saying can have important ramifications, particularly when we look at ASD. For example, research of these blogs might be convincing to policymakers, either for policy based on the assumption that vaccines cause ASD, or at the other extreme, for policy that removes financial and educational supports on the basis that Autism is part of normal human neurodiversity.

In one of the more unsettling talks in Session 4: Autonomy and Lethality, Killer Robots and Human Dignity, Daniel Lim argued that the arguments which seem to underpin claims that being killed by a robot offends human dignity are unconvincing. These arguments seem to rest on the idea that robots may not feel the appropriate emotions and cannot understand the value of human life (among other reasons). But humans might not feel the right emotions either. This doesn’t mean that we should make killer robots, just that there doesn’t seem to be an especially compelling reason why being killed by a robot is worse than being killed by a human.

In Compensation at the Crossroads: Autonomous Vehicles and Alternative Victim Compensation Schemes, Tracy Pearl argues that autonomous vehicles will be an incredible positive net gain for society. However, the failure of the US legal system (from judges through to law through to juries) to provide a reasonable framework for dealing with injuries from autonomous vehicles threatens this, in part because all of US law is designed with the idea that it will be applied to humans.  The US Vaccine Injury Compensation Program provides one paradigm for law dealing with autonomous vehicles: it’s based on the idea that vaccines overall are beneficial, but there are a small number of people who will be harmed (fewer than would be harmed without vaccines), and they should be compensated. A similar fund for autonomous vehicles may be useful, although it would need to come with regulations and incentives to promote safety development. A victim compensation fund would offer much greater stability than relying on private insurance.

Session 5: Rights and Principles

The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions, Jess Whittlestone, Rune Nyrup, Anna Alexandrova and Stephen Cave
This discusses a forthcoming report from the Leverhulme Centre for the Future of Intelligence. Principles have limitations: they’re subject to different interpretations (for example, what does ‘fairness’ mean?), they’re highly general and hard to assess, and they frequently come into conflict with each other. Many of these tensions aren’t unique to AI: they also overlap with ethical principles at play in discussions of climate change, medicine, and other areas.

AIES: how we talk about AI, algorithmic fairness, norms and explanations

A whole lot of drones in the sky above trees

My brief notes from today’s talks: for more details, check the program.

Ryan Calo: How we talk about AI (and why it matters)

There are several studies which demonstrate the ways in which language might shape approaches to policy. For example, one showed that people were more likely to recommend punitive measures when a threat was described as “a predator stalking the city” rather than “an illness plaguing the city”. There are legal precedents in the US of language about “robots” being used to talk about people who have no choice (and therefore no liability).

Calo notes that there are some trends in AI that he’s “upset about but not going to discuss at length”, particularly the tendency for both supporters and critics of AI to talk about it as if it’s magic. For example, Calo mentioned a billboard displaying a line of identical people with backpacks claiming that “AI has already found the terrorist.” On the other hand, we should consider language about “killer robots coming door to door to kill us” with caution.

Rhetorical choices about AI influence policy, often in very subtle ways. For example, do we talk about AI research as a “race”, or do we talk about it as a global collaborative effort that works towards human flourishing? And how do these different frames shape different concrete policies? Current US policy (including restrictions on sharing particular technologies) only makes sense if we understand AI research as a high-stakes competition.

Language around “ethics” and “governance” also plays a role here. This rhetoric is familiar, and therefore palatable. Efforts to bring in ethical governance of AI research are laudable. Ethics has a critical role in shaping technology. However, we should also pay attention to the power of these words. Before we start imposing requirements and limits, we need to be sure that we actually understand the ethical frameworks we’re working with.

Both proponents and critics of AI think that it will change everything. We should be thinking about a hypothetical future existential threat posed by AI, but we should also be thinking about more immediate concerns (and possibilities?). If it’s true that AI is the next world-shaping technology, like the steam engine, then policy needs to shift radically to meet this. And we need to start changing the way we talk. That project begins with conferences like this one.

We should also be looking at specific measures, like impact assessments and advisory bodies, for implementing AI tools. Unfortunately, the US government will probably not refrain from the use of any AI weapons that are seen to be effective.

We absolutely should be talking about ethics, guided by the folks who are deeply trained in ethics. Lawyers are contractors building the policies, but ethicists should be the architects.

Note: One of the main questions that I have regarding Calo’s talk, and that Peter and I partially – albeit implicitly – address in our own talk, is how we decide who counts as ‘deeply trained in ethics’ and how the AI community should reach out to ethicists. There is an ongoing under-representation of women and minorities in most university philosophy departments. Mothers (and not fathers) are also less likely to be hired and less likely to progress within academia. This is partially shaped by, and shapes, dominant framings of what is valued and promoted as expertise in ethics. This is fairly obvious when we look at the ethical frameworks cited in AI research ethics: most philosophers cited are white, male, and Western.

The spotlight session giving brief overviews of some of the posters presented included a few that particularly stood out (for various reasons) to me:

  • In ‘The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users‘, Emanuelle Burton, Kristel Clayville, Judy Goldsmith and Nicholas Mattei argue that medical ethics have useful parallels with AI research. For example, when might inefficiency enable users to have an experience that better matches their goals and wishes?
  • In ‘Toward the Engineering of Virtuous Machines’, Naveen Sundar Govindarajulu, Selmer Bringsjord and Rikhiya Ghosh (or maybe Hassan?) talk about ‘virtue ethics’: a focus on virtuous people rather than on actions, e.g. Zagzebski’s theory: we admire exemplar humans, study their traits, and attempt to emulate them. (I’m curious what it would look like to see a machine that we admire and hope to emulate.)
  • Perhaps the most interesting and troubling paper was ‘Ethically Aligned Opportunistic Scheduling for Productive Laziness’, by Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel and Cyril Leung. They discussed developing an ‘efficient ethically aligned personalized scheduler agent’ that would have workers (including those in the ‘sharing’ economy) work when they are highly efficient and rest when they’re not, for better overall efficiency. Neither workers nor the company testing the system were that keen on it: it was a lot of extra labour for workers, and company managers seemed to have been horrified by the amount of ‘rest’ time that workers were taking.
  • In ‘Epistemic Therapy for Bias in Automated Decision-Making’, Thomas Gilbert and Yonatan Mintz draw on distinctions between ‘aliefs‘ and ‘beliefs’ to suggest ways of identifying and exploring moments when these come into tension around AI.
The second session, on Algorithmic Fairness, was largely too technical for me to follow easily (apart from the final paper, below), but there were some interesting references to algorithms currently in use which are demonstrably and unfairly biased (like COMPAS, which is meant to predict recidivism, and which recommends harsher sentences for minorities). Presenters in this panel are working on attempts to build fairer algorithms.

In ‘How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness‘, Nripsuta Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, David Parkes and Yang Liu discuss different understandings of ‘fairness’. This research looks at loan scenarios, drawing on research on Moral Machines, and used crowdsourcing via Amazon Mechanical Turk. Participants were asked to choose whether to allocate the entire $50,000 amount to the candidate with the greater loan repayment rate; divide it equally between the candidates; or divide the money between candidates in proportion to their loan repayment rates.
There are three different ways of understanding fairness examined in this paper:
  • meritocratic fairness,
  • treat similar people similarly,
  • calibrated fairness.
This research found that race affected participants’ perceptions of fair allocations of money, but people broadly perceived allocations proportional to repayment rates to be fairest, regardless of race.
The presenters hope that this research might spark a greater dialogue between computer scientists, ethicists, and the general public in designing algorithms that affect society.
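[Note: for concreteness, here is a quick sketch of the three allocation rules as I understood the scenario. The dollar split and repayment rates are hypothetical, and this is not the authors’ code.]

```python
# Sketch of the loan-allocation scenario as I understood it (not the authors'
# code): $50,000 split between two candidates under the three rules
# participants were offered. Repayment rates are hypothetical.
BUDGET = 50_000
repayment = {"candidate_1": 0.7, "candidate_2": 0.5}

def all_to_best(rates, budget):
    best = max(rates, key=rates.get)
    return {c: (budget if c == best else 0) for c in rates}

def equal_split(rates, budget):
    return {c: budget / len(rates) for c in rates}

def proportional_split(rates, budget):
    total = sum(rates.values())
    return {c: budget * r / total for c, r in rates.items()}

for name, rule in [("all to higher repayment rate", all_to_best),
                   ("equal split", equal_split),
                   ("proportional to repayment rate", proportional_split)]:
    print(name, rule(repayment, BUDGET))
```
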
Session 2: Norms and Explanations
Learning Existing Social Conventions via Observationally Augmented Self-Play, Alexander Peysakhovich and Adam Lerer
This looks at social AI. At the moment, social AI is mainly trained through reinforcement learning, which is highly sample-inefficient. Instead, the authors suggest ‘self-play’: during training time, the AI might draw on a model of the world to learn before test time. If self-play converges, it converges at a Nash equilibrium. In two-player zero-sum games, every equilibrium strategy is a minimax strategy. However, many interesting situations are not two-player zero-sum games, for example traffic navigation. The solution to this is: quite technical!
Legible Normativity for AI Alignment: The Value of Silly Rules, Dylan Hadfield-Menell, Mckane Andrus and Gillian Hadfield
A lot of conversations right now focus on how we should regulate AI, but we should also ask how we can regulate AI. AIs can’t (just) be given the rules; they will need to learn to interpret them. For example, there’s often a gap between formal rules and the rules that are actually enforced. Silly rules are (sometimes) good for societies, and AIs might need to learn them. Hadfield discusses the Awa society in Brazil, and what it might look like to drop a robot into the society that would make arrows (drawing on anthropological research). Rules include: use hard wood for the shaft, use a bamboo arrowhead, put feathers on the end, use only dark feathers, make and use only personalised arrows, etc. Some of these rules seem ‘silly’, in that more arrows are produced than are needed and much of the hunting actually relies on shotguns. However, these rules are all important – there are significant social consequences to breaking them.
A 1960s advertisement for “the Scaredy Kit”, encouraging women to start shaving by buying a soothing shaving kit.

This paper looked at the role of ‘silly rules’. To understand this, it’s useful to look at how such rules affect group success, the chance of enforcement, and the consequences for breaking rules. The paper measured the value of group membership, the size of the community over time, and sensitivity to the cost and density of silly rules. As long as silly rules are cheap enough, the community can maintain its size. It’s useful to live in a society with a bunch of rules around stuff you don’t care about, because it allows a lot of observations of whether rule infraction is punished. AIs may need to read, follow, and help enforce silly as well as functional rules.
Note: Listening to this talk I was struck by two things. Firstly, how much easier it seems to be to identify ‘silly’ rules when we look at societies that seem very different from our own. (I think, for example, of wondering this morning whether I was wearing ‘suitable’ conference attire, whether I was showing an inappropriate amount of shoulder, and so on.) Secondly, I wondered what this research might mean for people trying to change the rules that define and constrain our society, possibly in collaboration with AI agents?
TED: Teaching AI to Explain its Decisions, Noel Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush Varshney, Dennis Wei and Aleksandra Mojsilovic
Understanding the basis for AI decisions is likely to be important, both ethically and possibly legally (for example, as an interpretation of the GDPR’s requirements for providing meaningful information about data use). How can we get AI to meaningfully explain its decisions? One way is to get users (‘consumers’) to train the AI about what constitutes a meaningful explanation. The solution to this is: quite technical!
Understanding Black Box Model Behavior through Subspace Explanations, Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Jure Leskovec
Discussing a model for decisions on bail. Important reasons to understand the model’s behaviour:
  • decision-makers readily trust models they can understand,
  • it will allow decision-makers to override the machine when it’s wrong,
  • it will be easier to debug and detect biases.

How to facilitate interpretability? The solution to this is: quite technical!

ICA18 Day 4: labour in the gig economy; resistant media; feminist peer review; love, sex, and friendship; illiberal democracy in Eastern and Central Europe

Voices for Social Justice in the Gig Economy: Where Labor, Policy, Technology, and Activism Converge
Voices for Social Justice in the Gig Economy, Michelle Rodino-Colocino.
This research discusses the App-Based Driver Association, looking specifically at Seattle. There’s no “there” for gig economy work: previous spaces of organising, such as the shop floor, aren’t available. One space is a parking lot where drivers sit waiting for ride requests. There’s one shady tree, where people tend to converge. Another space is an Ethiopian grocery store, as many drivers are East African. The ABDA is largely funded and supported by the Teamsters. Drivers interviewed definitely understand that they’re producing for Uber, and that they’re being exploited. They spoke about the challenges of planning – they can’t go watch a movie. Above all, Uber sells drivers’ availability. One driver was told: “we can always get another Mohammed”. Drivers feel dehumanized. They’re not provided with toilets, and there’s nowhere to pray. They’re also cautious about organising, as Uber is clearly anti-union.

Work in the European Gig Economy. Kaire Holts, University of Hertfordshire. This research aims to survey and measure the extent and characteristics of crowd work in Europe. Working conditions are characterised by precariousness (including frequent changes to pay levels), unpredictability, work intensity, the impact of customer ratings, abuse from customers, and poor communication with platform staff (including a lack of face-to-face contact and little social etiquette). One driver was asked to deliver drugs to a criminal gang late at night; when she told the platform about it, they said it was her responsibility to check what was in the bags. Workers face both physical risks and stresses, and issues with mental health. There are some attempts at collective representation of platform workers in Europe. In the UK, for example, the Independent Workers Union of Great Britain represents Deliveroo drivers, and the United Private Hire Drivers (UPHD) represents Uber drivers.

Reimagining Work [didn’t quite catch the current title], Laura Forlano. This draws on a project with Megan Halpern, using workshops and games that helped people collaborate to imagine what work might look like in the future. One participant spoke about the importance of the shift from talking around each other to needing to actually physically move as part of the workshop process. Shifts in work are linked to reimagining the city as a (new, urban) factory, so we need to reimagine relationships between work, technology, and the city to embed social justice values into our future.

Information and the Gig Economy. Brian Dolber.
Talks about shifting from a tenure-track position to adjunct work, and then taking up work with Uber and Unite Here (campaigning against Airbnb). From 2008 to 2012, Silicon Valley received little of the broader critique directed at capitalism more generally. Silicon Valley can be seen within Nancy Fraser’s concept of ‘progressive neoliberalism’, but we’re also seeing a shift towards an emergent neofascism. Airbnb’s valuation is greater than that of all the hotel chains, which is odd when we think about ‘hosts’ as small business owners. Airbnb has created online communities called ‘Airbnb citizen’ which aim to mobilise hosts to affect city policy. The narrative is very much about facilitating people staying in their homes, paying medical bills, and supporting the creative industries, which Dolber argues cultivates a petit bourgeois attitude that shifts us towards an emergent neofascism.

Power Politics of Resistant Media: Critical Voices From Margins to Center

The opening speaker (whose name I unfortunately didn’t get) discusses the ways in which pop feminism works, and the complexity of vulnerability. There’s a distorted mirroring of vulnerability between popular feminism and white misogyny.

Polemology: counterinsurgency and culture jamming, Jack Bratich.
We need a genealogy to elaborate and understand the persistence and connection of struggles across time.

Rosemary Clark-Parsons (University of Pennsylvania) discusses de Certeau’s concept of “tactics” within the context of her ethnographic work among grassroots feminist collectives in Philadelphia. She focuses on ‘Girl Army’, a secret Facebook group developed as a space for women and nonbinary people to share experiences. Tilly and Tarrow’s definition of contentious politics would exclude this group, which doesn’t do justice to women and nonbinary people’s solidarity and organising work within the group. De Certeau’s concept of tactics allows us to take the everyday seriously; can teach us about strategies; and allows explicit recognition of agency within systems of power. There are limitations, too, including issues with addressing differential access to agency, and theorizing structural change over time. The strategies/tactics binary can be reductive and reify power relations.

#HashtagActivism: Race and Gender in America’s Networked Counterpublics. Sarah J. Jackson (Northeastern University). Networked counterpublics theory is one way to understand how marginalised communities create their own public spheres. Mainstream media coverage of the public response to #myNYPD mostly treated it as ‘trolling’, or a PR disaster that could happen to anyone. In the coverage of #Ferguson, there was a flow of the narrative from ordinary people’s framing through to social movement organisations, and finally the media. #GirlsLikeUs is a useful case, because even within counterpublics there are people at the margins, who produce their own counter-counterpublics.

Jessa Lingel (University of Pennsylvania) focused on “mainstream creep,” referring to the uneasy relationships between countercultural communities and dominant media platforms, where the former uses the latter reluctantly or in highly-limited ways. How do we construct particular bodies as vulnerable: the language of ‘marginalised people’ is important for understanding structures of power, but does it also construct people as essentially weaker?

Gendered Voices and Practices of Open Peer Review
I opened this panel by reflecting on some of the ways in which I am currently trying to understand, and reconfigure, my approaches to both mothering and academia. I’ll put up a blog post about this later.

The Fembot Collective’s Global South Initiatives. Radhika Gajjala, Bowling Green State University. Problems for women in academia in the Global South start with the much-more-oppressive system of neocolonialism. To participate in autoethnography or other feminist methodologies would be a problem because it’s devalued within universities that see it as navel-gazing. Women need to publish in top-tier journals in order to be successful (or even survive) within their academic spaces. How do we as feminist publishers work with women in the Global South to help them access the resources that their institutions value? How do we support them without asking them to do a lot of extra activist work within their institutions? We need to think about power differences within the networks of solidarity and resistance we build across borders. It’s a messy terrain. We need to work to allow women in academia in the Global South to get access to a space where they can speak (and be heard).

Voicing New Forms of Scholarly Publishing. Sarah Kember, Goldsmiths, University of London. There’s a seismic shift happening at the moment in academic publishing. Revolution and disruption are not the same thing. We need to understand this within the context of efforts to police and politicise scholarly practices: there’s no distinction between these two at the moment. We need both to uphold something (the trust in academic work) and to change something (the opacity of peer review processes). We’re currently seeing a “pay to say” model of academic publishing in open access, at least in the UK. “Openness” works in different ways, with an asymmetrical structure: Goldsmiths has to be open, Google doesn’t. “Open access” publishing is often incredibly expensive, especially where academics are pushed to continue publishing with traditional academic publishers. Kember cites ADA as a big intervention in these models. The disruption of scholarly publishing models is a by-product of neoliberalism; the disruption of academia isn’t. We need to restate the university press mission, revise it, and rethink it. The policies around scholarly publishing need careful examination. The issue is not about adding ever-more OA panels, which are entrepreneurial and technicist.

Peer Review is Dead, Long Live Peer Review: Conflicts in the Field of Academic Production. Bryce Peake, University of Maryland, Baltimore County. Academics often undertake review because it gives access to particular networks. Women tend to receive much more negative feedback from review, and to engage in (be asked to do?) more peer review. There are different ways of understanding peer review: as enforcer (for example, of particular norms), networker, gatekeeper (of one particular journal), and/or mentor.

Ada and Affective Labor. Roopika Risam, Salem State University. ADA and the peer review process intervenes in scholarly systems, but is at risk particularly because of that. Risam talks about an experience drawing on theory from the margins: journal editors for a journal with a more experimental peer review process decided to shift from post-publication review to the traditional peer review process. Generosity in peer review is not the same as being ‘nice’: it’s about the level of engagement in the process. It means that the community takes seriously the project that the author is engaged in, rather than what they think the author should be doing. This means that the community has developed and perpetuated a set of norms. Even when editors are advising authors that their text is not ready for publishing, they are kind. Too often, ‘rigor’ has been set up as opposing kindness. This kind of peer review presents a challenge to the masculinist mode of academic production: it’s collectivist rather than individualist, seeing knowledge as an open system rather than a closed hierarchy. How can we look at the intersection of rigor and kindness? Scholarship is more rigorous when it makes its multiple genealogies visible, writing voices which have been made invisible back into academia.

Carol Stabile, in beginning discussion, prompted us to read Toward a Zombie Epistemology by Deanna Day, asking whether we should be considering a nonreproductive (or even antireproductive) approach to academia: one not concerned with leaving behind a specific legacy, either institutional or theoretical. Radhika’s answer was very much in line with my thinking on this: that in trying to rethink our approach not only to academia but also to mothering, she (and I) want to think of mothering not as a process of reproducing ourselves, but as a way of making space for children (and students, and colleagues) to be their own people. Thinking about the important challenges and prompts I’ve been given by (re)reading Revolutionary Mothering and The Argonauts, and by more informal conversations with the many amazing people I know who are reflecting on their parenting experiences, I’d add that it’s also important to consider the ways in which feminist practices of peer review (and academia more generally) should not only not be about reproducing ourselves, but should be about allowing ourselves to be changed.

There was also some excellent discussion about the role of institutions (like the committees that evaluate promotions and tenure), and citation practices. In response to a question about how to balance attempts to create change against the requirements of tenure, Carol and Sarah spoke on the importance of joining evaluation panels, both to get a better understanding of how they work and to intervene in them. Sarah notes that when we’re forced to write and research more quickly, it can be hard to find sources to draw on beyond the standard offerings. (I’ve particularly noted this myself: after managing not to cite any men, I think, in my last publication before giving birth, my writing since returning to work has relied far more heavily on the most well-known literature.) Sarah prompts peer reviewers to actively consider the breadth of sources that research draws on.

Love, Sex, Friendship: LGBTQ Relationships and Intimacies
Lover(s), Partner(s), and Friends: Exploring Privacy Management Tactics of Consensual Non-Monogamists in Online Spaces. Jade Metzger, Wayne State University. In 1986 a researcher surveyed around 3,000 people and found that 15-28% of that population didn’t define themselves as monogamous, and more recent research has also found that many young people don’t define themselves as strictly monogamous. Consensual non-monogamy is often stigmatised. How do we understand disclosure of consensual non-monogamy? Metzger notes that one of the main researchers in this area doesn’t engage in consensual non-monogamy herself. Metzger’s research, which drew on open-ended interviews about self-disclosure, found that disclosure practices varied, including ‘keeping it an open secret’, using ambiguous terms (like ‘friend’ or ‘partner…s’), or using terms open to interpretation (‘cuties’, ‘comets’, ‘cat’). Reasons cited for privacy included family disapproval, repercussions at work, harm to parental custody, and general discomfort. Privacy is often negotiated at the small-group community level: self-disclosure often implicates others. For some, social media is a risk that has to be navigated carefully: blocking family, for example, or using multiple accounts. Often, it can be hard not to be connected online, and it can be painful not to be able to acknowledge people important to you online. Some sites don’t allow you to list multiple partners, embedding heteronormativity into their structure. We need to see privacy as negotiated at the community level (as opposed to individually, as many neoliberal approaches to privacy understand it). The transparency of networks on social media places risks and burdens on those wanting (or needing) to remain private.

Does Gender Matter? Exploring Friendship Patterns of LGBTQ Youth in a Gender-Neutral Environment. Traci Gillig, USC Annenberg, and Leila Bighash, USC Annenberg School for Communication and Journalism. Gender is not a binary, but we constantly encounter spaces structured by the social gender binary and by gender stereotypes. Gender is a major driver of peer relationships among youth, including LGBTQ people. This research looked at the Brave Trails LGBTQ youth camp, which is gender neutral. Gillig and Bighash found that here, where students weren’t separated out by gender, friendship groupings didn’t cluster by gender.

Hissing and Hollering: Performing Radical Queerness at Dinner. Greg Niedt, Drexel University. The word ‘radical’ is often seen as a confrontational challenge to the mainstream, which is certainly a part of it. But radical queerness can also be about more quiet, everyday moments of queerness: the queer ordinary. In discussing radical queer ‘family dinners’, there is an act of radical queerness to reconstituting family as chosen family. Radical Faeries came out of activism in the 1970s, borrowing – or appropriating – from various forms of paganism and spirituality. Harry Hay was particularly central (and some of his statements about what it means to be queer are kind of what you might expect from a relatively privileged white man). Existing research is limited, and focuses on the high ritual and performativity. Niedt focuses, instead, on weekly fa(e)mily dinners in Center City Philadelphia. The research methodology drew on Dell Hymes (1974).

Music in Queer Intimate Relationships. Marion Wasserbauer, Universiteit Antwerpen. Tia DeNora discusses music as a touchstone of social relations, but there’s a dearth of biographical analysis in sociological studies of music consumption. Wasserbauer talked about one interview in which a 44-year-old woman tracked the entanglement of her relationship with music, and how after the breakup she’d never experienced that music in the same way again. Another 27-year-old woman, who mostly enjoyed classical and 1920s music, found herself almost crying at a Bryan Adams concert she attended because a woman she was in a relationship with loved him so much.

I rounded out the day at an excellent panel with Maria Bakardjieva, Jakub Macek, Alena Macková, and Monika Metykova (I think – the last two were not listed in the program), discussing attacks on media and political freedoms in the Czech Republic, Hungary, and Bulgaria. Metykova outlined the incredibly worrying range of attacks on independent press and political opposition in Hungary (some of which are outlined here), noting that these have been legal and difficult to fully track, let alone resist. Because there was a small audience (the last panel on the last day sadly often suffers), it was more of a discussion and I didn’t take notes, but I strongly encourage you to follow up the speakers’ work – and the situation in Central and Eastern Europe. It was a bit strange to me that ICA as an institution did little to address the specific situation of communications in the Czech Republic – the odd floating ‘placelessness’ of Western-centric academia (with numerous panels addressing US politics).

ICA18, Day 3: activism, subalterns, more activism, post/colonial imaginations, and cultural symbols

Activism and Social Media
Mamfakinch: From Protest Slogan to Mediated Activism. Annemarie Iddins, Fairfield University. [CN: rape.]
Iddins argues that the digital must be understood as part of a network of different media – the Mamfakinch collective only makes sense as a response to the limitations of the Moroccan media (which combines strong state influence with neoliberal tendencies). Morocco’s uprising, referred to as M20, used “Mamfakinch” (no concessions) as a slogan. Mamfakinch was developed as a citizen media portal, modelled on Nawaat. M20 was largely focused on reform of the existing political system. Protests were mostly planned online. The collective moves effectively between online and offline locations, supporting some campaigns and sparking others. Amina Filali was a 16-year-old who swallowed rat poison after marrying her rapist. Protests took place in physical space and online to change the laws, and nearly two years after Filali’s death the laws that allowed rapists to escape prosecution if they married those they’d raped were changed. Mamfakinch was closed in 2014 after a government-backed spyware attack and a loss of momentum. Founders started the Association for Digital Rights (ADN), which is still attempting to register as an organisation. What began as an attempt to establish a viable opposition in Morocco has resulted in a restructuring of the norms of how Moroccans interact with power.

The Purchase of Witnessing in Human Rights Activism. Sandra Ristovska, University of Colorado Boulder. Witnessing is often associated with notions of ‘truth-telling’: this paper maps out two different modes of witnessing. The first is witnessing an event: bearing witness for historical and ethical reasons. Today, we see a shift towards witnessing for a purpose. This second mode means that witnessing is very much shaped by a sense of strategic framing for a particular audience. If your end-goal is to appeal to a public audience, or a court, the imperatives are different: do you focus on a particular aesthetic, or on making sure that you get key details (such as police badge numbers, or landmark shots to show where an event takes place)? The push towards shaping witnessing for particular audiences and institutional contexts can constrain, or even silence, the voices of activists. Activists may feel they can’t let their own passion, or own voice, speak through as they attempt to meet institutional needs to be heard.

Citizen Media and Civic Engagement. Divya C. McMillin, University of Washington – Tacoma. This research examined the conditions that support particular forms of mobilisation and engagement on the ground: how do movements endure, and how do grassroots movements reclaim local spaces? There were two local case studies of grassroots tourism efforts which aim to preserve heritage and promote eco-friendly environments: Anthony’s Kolkata Heritage Tours, and Native Place in Bangalore. McMillin draws on Massey’s understanding of place as not already-existing but as becoming – place is transformed by use. Indian cities are changing massively, with seven major Indian cities targeted for “megacity” or “smart city” development, which makes them sites of urgent struggle for those living there. Using translation as a theoretical framework allows us to understand negotiations within the global economy: a translation of meaning through the opportunities of encounter. The way in which a space is translated into a place of consumption can also work to reclaim places in ways that the government doesn’t facilitate.

Whose Voices Matter? Digital Media Spaces and the Formation of New Publics in the Global South
What Happens When the Subaltern Speaks?: Worker’s Voice in Post-Socialist China. Bingchun Meng, London School of Economics. It is important to emphasise the class dimension of how we understand the subaltern. Chinese migrant workers can be understood as the subaltern (drawing on Sun 2014). The Hukou system divides and discriminates against the rural population. There is a concentration of symbolic resources and an exercise of epistemic violence, with the marginalisation of migrant workers within China. Migrant workers are represented as the other: the looming spectre of social slippage for the children of middle-class urban people, a force for social instability that needs to be contained. Xu Lizhi’s poetry explores the experiences of migrant workers (he died by suicide while working for Foxconn). Fan Yusu’s writing is, however, better known within China, and some is available in English translation. She’s in her mid-40s, from rural Hubei, and works in Beijing as a domestic helper. Her writing draws extensively on Chinese literary tradition, and demonstrates a strong egalitarian view. Responses to her writing have included an outpouring of sympathy from the urban middle-class (which positions the subaltern as disadvantaged); warnings from urban elites against mixing literary criteria with moral judgement (seeing the subaltern as uneducated); and criticism of Fan’s writing about her employer (seeing the subaltern as ungrateful). Fan Yusu’s responses to journalists are not always what they expect: for example, she refuses the valuing of intellectual over physical work.

Social Media and Censorship: the Queer Art Exhibition Case in Brazil. Michel Nicolau Netto, State University of Campinas, and Olívia Bandeira, Federal University of Rio de Janeiro. [CN: homophobia.] Physical violence cannot be understood if we don’t take into account symbolic violence. As an emblematic example, we see the murder of Marielle Franco, which can be understood as a violent response to seeing the subaltern voice start to be valued. This research looks at the Queermuseum Art Exhibition. After the exhibition opened, a man visited wearing a shirt reading “I’m a sexist, indeed”, and recorded a video calling visitors names such as “perverted” and “pedophile”, which he shared on a right-wing Facebook group (“Free Brazil Movement”). After this was further shared, the Santander bank hosting the exhibition cancelled it. Posts about the exhibition were then shared even more widely: right-wing groups were empowered by their success. Most-shared posts in Brazil are disproportionately those from the right wing. The bank’s actions can be seen as a way of supporting the extension of neoliberalism in Brazil, via the strengthening of right-wing extremism.

Sound Clouds: Listening and Citizenship in Indian Public Culture. Aswin Punathambekar, University of Michigan, Ann Arbor.

This paper examines the centrality of sound in conveying voice. Sound technologies and practices serve as a vital infrastructure for political culture. The sonic dimensions of the digital turn have received comparatively little attention. This work disagrees with Tung-Hui Hu’s claims that the prehistory of the cloud is one of silences [I may have misunderstood this], focusing on Kolaveri – a song which was widely shared and remixed. Kolaveri became a sonic text that sparked discussion of inequality, violence, and caste.

Selfies as Voice?: Digital Media, Transnational Publics and the Ironic Performance of Selves. Wendy Willems, London School of Economics and Political Science. African digital users are often seen as being on the other side of the digital divide, not contributing to digital culture. This research looks at responses to boastful selfies from a Zimbabwean businessman, Philip Chiyangwa, mostly in Shona and aimed at discussion within the Zimbabwean diaspora (rather than aimed at an external public). There’s an online archive of 3000 images – often playful and ironic selfies and videos exploring the idea of zvirikufaya (“things are fine”). Discussions between diasporic and home-based Zimbabweans played with the history of colonisation, and reinforced or subverted the idea that diasporic Zimbabweans take on demeaning work overseas (for example, a woman in Australia filming herself being served in a cafe by a white man). Willems is keen to situate discussions of the transnational within a particular historical context, and to shift from ‘flowspeak’ to thinking more about mediated encounters. Diasporas can be seen as fundamentally postcolonial, understanding shifts as being responses specifically to the impacts of colonisation (“we are here because you were there” – A. Sivanandan). How do we understand the role of digital media in transnationalising publics?

Digital Constellations: The Individuation of Digital Media and the Assemblage of Female Voices in South Korea. Jaeho Kang, SOAS, University of London. We need to go beyond the limitations of ‘network’ theory, which reduces the social world to ‘actor-constellations’. One alternative is to understand protests in terms of assemblages of social individuals: non-conscious cognitive assemblages, collective individuation, the connective action of affect, and non-representative democracy.

In the response, Nick Couldry invited us to think more about the metaphors around sound, including not only sonic resonance but also interference. We also need to think about the ways in which the theoretical language we use reinforces neoliberal values, rather than subverting them.

Hashtag Activism
#BlackLivesMatter and #AliveWhileBlack: A Study of Topical Orientation of Hashtags and Message Content. Chamil Rathnayake, Middlesex University, Jenifer Sunrise Winter, University of Hawaii at Manoa, and Wayne Buente, University of Hawaii at Manoa. The use of hashtags can be seen within the context of collective coping, which can increase resiliency (while not necessarily leading to political change).

The Voices of #MeToo: From Grassroots Activism to a Viral Roar. Carly Michele Gieseler. Tarana Burke’s original goals for the #metoo mission can be seen as largely silenced (or pushed aside) as the roar grew around the hashtag, echoing broader patterns in white feminism. Outrage is selectively deployed – the wall between white women and Black women within feminism isn’t new, but perhaps the digital space can do something to change it. We need to think about the ways in which white feminisms within academia have ignored or appropriated the work of women of colour. Patricia Hill Collins talks about the painstaking process of collecting the ideas and experiences of thrown-away Black women, even when these women started the dialogue.

Voice, Domestic Violence, and Digital Activism: Examining Contradictions in Hashtag Feminism. Jasmine Linabary, Danielle Corple, and Cheryl Cooky, Purdue University. This research looks at #WhyIStayed and #WhyILeft through a postfeminist lens, supplementing data gathered online with interviews. It highlighted the importance of inviting voice (opening spaces for sharing experiences – but with a focus on the individual, which often led to victim-blaming); multivocality (with openings for a multitude of identities – though this also opened the conversation up to trolling and co-opting); immediacy in action (which allows responses to current events); and the creation of visibility around domestic violence (unfortunately often neglecting broader structural context). Looking at these hashtags with reference to postfeminist contradictions allows an understanding both of how they were important for those participating and of the limitations of the focus on the individual.

Women’s Voices in the Saudi Arabian Twittersphere. Walaa Bajnaid, Einar Thorsen, and Chindu Sreedharan, Bournemouth University. This research focuses on women’s resistance to the system of male guardianship, asking how Twitter facilitated cross-gender communication during the campaign. Women’s tweets connected online and offline mobilisation, for example by posting videos of themselves walking in public unaccompanied. Protesters actively tried to keep the hashtag trending and to gain international attention. Tweets from male opponents defended the status quo by attempting to derail the campaign, accusing the protesters of being atheists and/or foreign agents trying to destabilise Saudi Arabia. Men frequently seemed hesitant to support the campaign to end male guardianship.

The Mediated Life of Social Movements: The Case of the Women’s March. Katarzyna Elliott-Maksymowicz, Drexel University. This research draws on the literature on new social movement theory, collective identity, and visuality in social movements. The changing dynamics of hashtags and embedded images are a useful way of understanding how the movement changed over time.

Colonial Imaginations, Techno-Oligarchs, and Digital Technology
(The discussion here was interesting and important, but I struggled a bit to take good notes given the flow of the format. Please excuse the especially fragmentary notes gathered under each presenter, as that seemed easier than taking notes following the flow of discussion.)

[Correction: I initially attributed Payal Arora’s excellent prompts to discussion to Radhika Gajjala.]

Discussant: Payal Arora, Erasmus University Rotterdam
We have to remember that colonial theory is buried in different areas, including development discourse. It’s also important to remember that ‘the margins’ aren’t always positive – the extreme right was also once on the margins (though it is being brought to the centre in many places, including Brazil). Is identity politics toxic to our cause, or should we be leveraging aspects of it? When we talk about visibility in the Global South, we largely celebrate it (“They’ve gained visibility! They’re speaking for themselves!”), without recognising the complicated nature of different identities within nations. There’s a lot of talk about data activism and data justice – we need to also look at data resistance. How do we conceptualise resistance in a broader way without moralising it? We also need to think not just about values in design, but also about who the curators of design are (and how they are embedded within particular territorial spaces and power structures), and about who is operationalising design.

Digital Neo-Colonization: A Perspective From China, Min Jiang, University of North Carolina – Charlotte.
Min Jiang talks about the challenge of working out: is China the colonised, or the coloniser? Looking at the role of large digital companies, we could see Google as colonising China… but also see Chinese companies as having largely replaced Google now, and as colonising Africa. China has its own colonial history. In China today, there has been a heavy crackdown on resistance: colleagues in China working in journalism are forbidden from even mentioning the word ‘resistance’.

Islamic State’s Digital Warfare and the Global Media System, Marwan M. Kraidy, Annenberg, University of Pennsylvania
North American white supremacists use digital technologies to mess around with spatial perceptions. Social media platforms are working in tandem with all kinds of techniques of spatial control and surveillance. There’s something about the ways in which these platforms claim innocence from the kinds of feelings that they spark, and we shouldn’t release them from responsibility. Kraidy notes the environmental, social, and economic issues tied up in the ways that data works, using data centres that need to be air-conditioned as an example.

Non-Spectacular Politics: Global Social Media and Ideological Formation, Sahana Udupa, LMU Munich
We need to understand not just intersectional oppression, but also nested inequalities, and the ways in which the digital has led to increased expressions of nationalism. A decolonial approach requires that we recognise the resurgence of previous forms of racism. Is digital media just a tool for discourses of racism and neonationalism that exist outside it? Udupa argues that we should see digital media cultures as inducing effects on users themselves. In India, Facebook is having a huge (but largely invisible) impact on politics. For example, the BJP uses data extensively in crafting particular political narratives.

Decolonial Computing and the Global Politics of Social Media Platforms, Wendy Willems, London School of Economics and Political Science.
A decolonial approach means bringing structures back in, and seeing colonisation as fundamental (rather than additive) to processes of identity formation. It resists claims to speak ‘from nowhere’, and helps us to understand the global aspects of platforms. How might we understand the colonisation of digital space by platforms, including the extraction of data? These platforms are positioned as beneficial (‘connecting the unconnected’) – Willems mentions Zuckerberg visiting Africa in shorts and a t-shirt, and the image of white innocence this portrays. There’s a challenge around provoking more discussion of these platforms in Africa. There’s a discussion of Internet shut-downs – the state is seen as the enemy when it shuts down particular services, but we’re not turning the same critical eye on the platforms themselves. She also distinguished between the use of digital media in resistance and resistance to digital media and datafication itself – there’s been less of the latter. In South Africa, there was #datamustfall in the wake of #RhodesMustFall (focusing on the costs of accessing digital media, rather than contesting platforms themselves). Operators are crucial gatekeepers in accessing the Internet – we need to look at the relationship between operators, platforms, and the state.

Media Representation of Cultural Symbols, Nationalism and Ethnic and Racial Politics
Framing the American Turban: Media Representations of Sikhs, Islamophobia, and Racialized Violence. Srividya Ramasubramanian and Angie Galal, Texas A&M University.
Sikhism is the fifth-largest religion in the world. There have been several waves of Sikh immigration to the US, met with varying degrees of control. There’s a history of hate crimes against Sikhs in the US, but disaggregated data only began to be collected (by the FBI) in 2015. Anti-Sikh views, and violence, are tied to the othering and dehumanisation of Muslims. There’s a long history of negative portrayals of Sikhs (tangled in with portrayals of Hindus and Muslims) before 9/11. This research located three key moments of rupture in US media portrayals: 9/11, the Wisconsin shootings, and the Muslim Ban/Trump era. Going on from this research, it’s also important to look at how Sikhs are resisting negative media portrayals.

Selfie Nationalism: Twitter, Narendra Modi, and the Making of a Symbolically Hindu/Ethnic Nation. Shakuntala Rao, SUNY, Plattsburgh.
Modi’s use of Twitter has been seen as particularly strategic, with extensive use of selfies. He always presents himself as someone who can speak to the layperson as “I”. Rao’s methods involve reading, rather than quantifying, tweets, including replies. For example, as soon as Modi starts ‘praying’ online, people upload videos of themselves praying. He tweets in seven languages (using local languages when he travels), but mostly a combination of Gujarati, Hindi, and English. He portrays himself as a Hindu god – some people talk about the ‘banalisation of Hindutva’. Part of this is portraying “every Indian” as special. ‘Selfie Nationalism’ has four characteristics: Modi’s personification of a symbolic self (driven by him, not others); a rejection of plural religious/cultural narratives of India; a discourse with a short shelf life, driven by optics, as in the frequent launch of new policy initiatives (which are then discarded); and less concern with media access than with media use.

Representing the Divine Cow: Indian Media, Cattle Slaughter and Identity Politics. Sudeshna Roy, Stephen F. Austin State University. What discursive strategies are used to generate, resist, sustain, or reify discourses of Hindu nationalism surrounding the Divine Cow? Modi has had a lifelong association with the Hindu nationalist organisation RSS. He has been providing the conditions to support the growth of violent identity politics. In 2014, as Gujarat chief minister, he started attacking the beef export industry. In 2017 he instituted a ban that hit small-time Muslim and low-caste Dalit leather-workers. Some low-caste Dalit Hindus do eat beef. Roy notes that while we commonly understand culture as private, our common associations and larger context shape how we understand culture. There have been several cases of Hindu mobs murdering Muslim people for (allegedly) eating beef. Newspaper articles on these events frequently refer to the ceremonial, ritual, and religious roles of the cow, including its sanctity and ahimsa (harmlessness), and the pastoral Krishna. There is, however, no monolithic adherence to the sanctity of the cow among Hindus. There’s a forced conflation of private and public culture in the media’s coverage of the symbolic cow. Hindutva is being presented as a way of life.

Do We Truly Belong: Ethnic and Racial Politics of Post-Disaster News Coverage of Puerto Rico. Sumana Chattopadhyay, Marquette University. In surveys, only a slim majority of people in the mainland US knew that Puerto Ricans are American citizens. However, Puerto Ricans can’t vote in presidential elections because they’re not represented in the Electoral College. US mainstream media coverage of Hurricane Maria in Puerto Rico resembled its coverage of foreign countries.