April 20, 2015 § 5 Comments
Theorizing the Web was an amazing conference: the organisers and volunteers did a great job of finding a diverse mix of speakers and putting together a well-run event with a minimal budget. As my voluminous notes suggest, I came away with a bunch of new ideas and information which I’m sure I’ll be processing for a while to come. I was also very impressed by the organisers’ commitment to making attendance economically accessible: this is far too rare for academic (and academic-ish) conferences.
I do want to reflect a bit, however, on some of the ways in which my experience of TtW contrasted with AdaCamp. I enjoyed both tremendously, and there were aspects of TtW that weren’t present at AdaCamp (including a very strong analysis of the relationship between capitalism and technology). AdaCamp also has a very different format (it’s an unconference requiring applications to participate, and with much smaller attendance) than TtW, so I don’t want to imply that every part of AdaCamp’s policies could, or should, be transferred to TtW. I’m sure, also, that there are important ways in which AdaCamp’s policies are incomplete.
I have to note that I’m speaking from my own experience, which comes with its own limitations. So, for example, microaggressions related to gender are much more visible to me than those related to race or accessibility issues. Similarly, if I’m more aware of the ways in which many spaces are unsafe for trans people, it’s because I’ve been lucky enough to be exposed to the work of trans activists to highlight these problems (often at significant personal cost): many cis people will not be aware of this work. So this discussion will be missing a bunch of stuff, because I’m still learning myself.
Discussions at Theorizing the Web paid a lot of attention to the politics of technology: what does it mean to think of algorithms as architecture that shape our experience of the Web in particular ways? What does it mean if key communications technologies are privately owned? How do particular design choices contribute to platforms being safe or unsafe? A conference is also a technology, though. So it is useful, and important, to be thinking about how the conference-as-technology shapes particular relationships and power structures.
This becomes especially important when an event is likely to be attended by people who are at risk of harassment or abuse, either within the conference or from external threats. At the moment, that potentially includes anyone who is writing about feminism and the Internet. Two of the women on the h8 panel yesterday had experienced sustained and potentially life-threatening online abuse (which transferred to offline spaces). There’s also clear evidence that gg is paying attention to academia, as well as to feminist game developers and game critics.
Dealing with this shouldn’t be left up to people who are at risk; it should be something that the whole community works at. It also shouldn’t require having to take the potentially-alienating step of opting out of a practice established as an event norm. The photo policy at each event is one example of this: at TtW the deal was basically, “you’re going to be photographed, and your photograph will be used however people want”, although presumably individual participants might have attempted to avoid being photographed, and speakers might start their talk by asking that the audience not photograph them. In contrast, at AdaCamp the lanyards used for name badges were colour-coded red (never photograph), yellow (ask before photographing), or green (go ahead!). This allowed participants to pick the option that most suited them without having to feel like they were making a fuss if they chose not to be photographed.
It also made a huge difference to me that these policies were reinforced by the organiser at AdaCamp. As well as reminders that it was okay to change your lanyard colour at any point, Skud stepped in to say that it wasn’t okay to approach yellow-lanyard people and say, “I’m going to take a photo, okay?” and assume that a lack of dissent meant consent. She also gave reminders about ableist language (suggesting alternatives), and about the scent policy. TtW has an anti-harassment policy, which is an excellent first step, but as far as I could tell it wasn’t mentioned in the introduction to the event or followed up with reminders in-session.
Building better cultures around conferences and other events should not just be up to conference organisers and volunteers. Participants also need to step in and say something (when they feel safe doing so). But this is much easier to do if organisers have already helped to set up a culture in which there are clear expectations around behaviour (including language). It’s much easier to say, “please remember that in the opening session the organiser asked us to avoid that language” than, “I’m very uncomfortable with that language”.
There’s a tendency in some places to see codes of conduct and safer spaces policies as autocratic, a way of enforcing rules in a top-down way. In some ways a top-down approach is necessary for temporary spaces (in the same way that conference organisers choose speakers and create a program, or choose participants, or a location). But safer spaces policies aren’t usually created in isolation: they grow out of discussions, challenges, and praxis built by feminist and trans activists, PoC, accessibility advocates, and others.
There were a few policies that would be relatively simple to implement at TtW, and which would not put too much pressure on resources, including:
- A clear photograph policy, including an easy way for people to signal that they don’t want to be photographed.
- A request that participants write their pronouns on their name badges.
- A scent policy (I nearly left this out because my first response was, “oh, scents don’t bother me”, and then I realised how ridiculous that response was).
- Opt-in options for sexual content that don’t require participants to avoid important talks or spaces: while the looping video featuring disembodied genitalia was an interesting addition to the conference, playing it next to the keynote (and somewhere visible from the street) made it hard to avoid. Marking it as an option in the program (and with signs in the physical space) and playing it somewhere easier to avoid would be more appropriate.
- Announcements, including reminders if necessary, about key aspects of the anti-harassment policy.
- Announcements, including reminders, about who to contact about problems (there were reminders about this online, which was great, but with the wifi issues not everyone was online during the conference).
- [Edit: also clear policies and moderation in question sessions, which I discuss in more detail at the bottom of this post.]
There are also some steps which feel important to me but might require more resources to implement, including:
- Childcare (because our activist and academic spaces should have room for parents).
- Accessible spaces and pathways. (I’m not sure how, or whether, participants with mobility issues could reach the basement talks. [Edit: organisers have clarified that there was an elevator!])
- Access to enough bathrooms, drinking water, and snacks. This might seem extravagant, but for participants with mobility issues, fatigue, or related issues, it can make a huge difference to have basic requirements easily available. (I did really like that the organisers reminded people repeatedly that those without mobility issues should help out by going across the street for bathrooms when possible.)
It is awkward to write about these things, or raise these issues in academic or activist spaces. It is awkward to feel like I’m making trouble and being a bother. But it feels important to me. Honestly, I have minimal ability to effect change around many of the issues discussed at the conference: it’s interesting to consider the impact of Facebook’s algorithms, to examine new legal frameworks for regulating online spaces, or to consider the role of crypto in activist work in the Middle East…but I have limited ability to do anything about those things. The technologies and processes of academic conferences, though, are an area where I do have some agency, and I hope this contribution is useful in thinking about what we want our academic spaces to look like.
Theorizing the Web Day 2: here comes every body + h8 + lockscreen + algorithms + technologies and pathologies
April 19, 2015 § 1 Comment
The second day of Theorizing the Web was as intense as the first, and many of the presentations discussed potentially-distressing issues, including anti-fat prejudice, online harassment and abuse, police violence against people of colour, suicide, and transmisogyny. This post will only give a short overview of the presentations (and conversations) that happened. My notes from day one are here – I also recommend checking out the TtW15 website and hashtag for more information.
Day two began with Here comes every body, and Apryl Williams‘ discussion of fat activism online. Like most movements, fat activism is fractured: ‘body positivity’ is often still very much about healthiness, with strong moral undercurrents (for example, attempts to counter the idea that fat people are lazy by showing fat people exercising). ‘Fat acceptance’ rejects the idea that fat people have to prove their worth through performances of health, instead insisting that fat people (whether healthy or not) are valuable and retain autonomy over their own bodies. Williams notes, however, that fat activist spaces reproduce hegemonic ideology: fat activism often continues to frame women within the male gaze (“fat women are sexy too!”), and fat positive spaces are often dominated by white women. The Fat People of Colour tumblr provides an alternative space that includes men and genderqueer people, resists the fetishising of fat people, and invokes intersectional approaches (including considering class and disability).
Legacy Russell followed with a presentation on feminism and the glitch body politic, asking how experimentations with sex and gender in the digital arena can act to undermine the discourse of sex and gender. Art by glitch feminists like Amalia Ulman, AK Burns, Ann Hirsch, Mykki Blanco, and Fannie Sosa creates cracks in the glossy narrative of the patriarchal gaze and invites us to consider ways of disrupting platforms at the same time as we use them. Glitch feminism is not just about individual projects, but about the connections and spaces in between them.
Mariam Naziripour‘s ‘Craft of Beauty: Make-up after the Internet’ tracked some of the ways in which technology (including older technologies like photography and black and white film) have changed our approach to make-up. Jenna Marbles’ early vlogs demonstrate the strange tensions in how modern Western society views make-up: women are meant to ‘look natural’ at the same time as we’re expected not to look like ourselves. We’re pushed to engage in constant attempts to meet particular (unachievable) standards of beauty, at the same time as we’re criticised for artifice and deception. This also reveals tensions in many people’s relationship to make-up, which is in some ways an imposition (to look a certain way, often at significant economic and personal cost), but also a source of creativity and experimentation. Online communities like Makeup Alley have created one of the richest archives of make-up practices ever to exist, documented by the people who use make-up (rather than poets or essayists writing misogynistic critiques of makeup, as was often the case in the past).
Finally, Emily Bick talked about the ways in which ‘virtual agents’ (virtual assistants, customer support bots, and so on) reproduce and enforce gender roles. These programs are often gendered, shifting from the more gender-neutral agents of previous decades (like Microsoft’s infamous Clippy), and are subservient and obedient. They represent, and help reinforce, an ideology of a feminised support worker who is constantly available and deferential. Thinking about this now, I’m curious about ways in which this is additionally racialised (with the idealised Western virtual agent usually presented as white, at the same time as a significant proportion of caring work in Western countries is undertaken by women of colour), and about the ways in which glitches or limitations of these programs might be understood as acts of resistance by virtual agents.
The h8 session opened with Alison Annunziata’s discussion of Love and Terror in the Digital Age. She outlined two central problems with dealing with cyberstalking and digital harassment: firstly, that technology shifts more rapidly than the law, and secondly, that both the law and police as individuals are often not capable of understanding the language of threat (and of terror). Antistalking laws, for example, often have a requirement of ‘credible threat’ – would a ‘reasonable person’ see this as genuinely dangerous? Victims are often the only ones with the right intelligence to understand why a particular action is threatening or violating, and they bear a heavy burden of proof.
Caroline Sinders extended this by talking about Twitter’s UX problem, starting with the very real impact of this: her mother was recently swatted, which led to a painful discussion in which Sinders was asked by her mother, and local police, what gg is and why they’re mad at her…which is kind of hard to explain, when the answer is “I tweet about feminism sometimes”. (This reminds me about some of the discussions at AdaCamp around resources to give to therapists: for people experiencing online harassment and abuse, it can be useful – and even necessary – to have an information pack to give therapists and other support people to explain the background and kinds of abuse that are happening. Sinders mentioned abusive tweets, doxxing, swatting, sealioning, and dog piling as particular issues.) Sinders notes that Twitter has a very specific problem with harassment, in part because it was never designed from a perspective that recognised and aimed to prevent harassment. Legal frameworks (as Annunziata explained) don’t deal well with misogynistic stalking and harassment, and particularly haven’t kept up with online abuse, but Sinders argues that there’s a lot that Twitter could do to become safer, including rewriting community guidelines to recognise and ban emerging uses of the platform for abuse, looking at and learning from Block Together, redesigning their interface to allow more user agency, using algorithms and data better (for example, drawing on the PGP web of trust, recognising that often friends of friends are safe to interact with), and allowing batched submissions of abusive tweets. They should also be drawing on the knowledge of people who have experienced these forms of abuse in developing their responses.
Thomas Rousse explored two case studies in implementing moderation systems for online communities using peer judgements: Wikipedia and League of Legends. He notes that this isn’t an issue of free speech: it’s about the management of bounded online communities, and not about the forms of speech that the state controls or represses. Rousse outlined two major models of community management: moderation, and ‘online vigilantism’. Many communities start without clear rules for behaviour, and end up defaulting to a vigilante approach as users try to find their own solutions: often these are incredibly inventive, and really terrible. Moderation offers better possibilities, but often requires a lot of work from community managers. Peer-judgement systems offer one alternative. However, democracy is not an inherent good, and majoritarian spaces can lead to ‘a majority of assholes’. Neither Wikipedia’s nor League of Legends’ system is without problems; in fact, the Wikipedia requests for comment system ended during Rousse’s research. League of Legends’ system has been more successful: it allows players to look at transcripts when players are reported, and decide if they should be punished or pardoned. 94% of those who were reported were punished. But using human adjudicators isn’t fast enough, so they took the body of data accumulated and used machine learning to create a machine judge. This opens up lots of interesting (and worrying) questions about the ways in which peer judgement processes and machine learning might be deployed in other spaces.
I closed the session by exploring some of the ways in which geek feminist activism is challenging the predominantly liberal and libertarian politics of the digital liberties movement (which I’ve written more about here and here). This was a very brief sketch of a complex movement that I’ll be writing about in more detail later, but I hope it brought up some useful reflections on the ways in which we approach, or might approach, issues around online harassment. While Rousse referred back to liberal democratic frameworks (talking about being judged by ‘juries of our peers’ and noting that Wikipedia’s system looked more like a ‘kangaroo court than the Supreme Court’), women, trans people, and people of colour are often very aware that existing liberal democratic frameworks do not work for us. Anarchafeminist praxis offers an alternative source of experience to draw on in considering how we might deal with abuse and harassment, silencing, and structural inequality, within communities that are frequently male-dominated, and in spaces shaped by the broader context of the capitalist system.
The Lockscreen: Control and Resistance extended the discussion on many of these themes. Harry Halpin kicked off by arguing that ‘only cryptography can save us’. With the failure of the liberal state and the capitalist order, he says, we’ll be seeing hundreds of revolutions still to come. Technology won’t determine the shape or outcome of these, but it will affect the possibilities available, and if technologies of communication are open to surveillance then states will be able to crush resistance before it can grow. Snowden has argued that we can’t trust liberal mechanisms of governance, so we have to find ways of inscribing the values of the society we want into technology. I’m rather dubious about this idea, however. Sinders’ talk on Twitter’s UX problem described the problems that arise from building a platform based around the life experiences and priorities of a relatively homogenous set of designers (mostly white, relatively privileged, men). There are some excellent women and people of colour involved in crypto communities (as there are at Twitter), but even just within TtW there were many mentions of the problems with crypto culture. So it seems like before ‘we’ ‘inscribe our values’, more work needs to go into working out who the ‘we’ is here, and giving attention and hard work to the culture within crypto communities (and looking at the ways in which these communities overlap – or fail to overlap – with those of users for whom this technology might be a life-or-death issue).
Ted Perlmutter continued the discussion of ‘Twitter revolutions’, but also noted that while people have been very enthusiastic about the platform when it seemed to be supporting progressive revolutions, it becomes more worrying when it’s used by groups like ISIS or gg as a recruitment tool (I’d also add that the US state apparatus is far less enthusiastic about movements organising on Twitter when it’s happening within the US). How should we be disrupting violent hate movements using Twitter? And if we isolate participants, are we sticking them in an echo chamber that will only radicalise them further? This was an interesting talk, but it seemed strange to me to discuss gg primarily through the lens of other male theorists, and without drawing on the experience or analysis of women and other marginalised groups that have been attacked by them.
Raven Rakia wrapped up the session with a critique of the anti-police movement’s dependency on visual images. As activists have been bringing attention to police killings of people of colour, there has been a focus on images of police in riot gear, police killings, and police brutality. These images are powerful, but they implicitly rewrite history, and support a politics of respectability. Photographs of riot police with armoured vehicles suggest that the US police have become militarised, hiding the fact that police have always been militarised, and from the beginning played a role in enforcing racist structures of control (including slavery and lynchings). These images also build a politics of respectability: they rely on an opposition between violent police and pacifist protesters, on telling us that victims of violence were going to college or parents (which implies that those who aren’t ‘respectable’ are suitable targets for violence). These images also focus our attention on visible forms of violence while other structures are hidden, including the prison and legal system that disproportionately affects black lives. Some of these structures are also taking new forms online: for example, if children talk to each other about trying to organise resistance to police or violence experienced from others, they can be charged under ‘gang laws’ and given much harsher sentences. Rakia argues that instead of focusing on images of police violence, we need to work to abolish the police and dismantle systems of incarceration and control.
The second keynote of TtW15, Algorithms as Social Control, brought together Zeynep Tufekci, Kate Crawford, Gilad Lotan, Amy O’Leary, and Frank Pasquale. I won’t try to summarise all of the discussion on this panel, but you can catch the Twitter feed here. There were some important points raised about the ways in which algorithms can act as architectures of control, and potentially also work in liberatory ways. There were also questions raised about appropriate points of focus: should we be examining algorithms, or are they just tools (“just like the process you use for tying your shoelaces”, as two data scientists told Amy O’Leary)? If we are interrogating algorithms, how do we actually do this using the tools and data available to us?
I enjoyed Kate Crawford’s discussion about what the history of the deodand can tell us about algorithms: this legal structure was a way of dealing with death or injury caused by animals or inanimate objects, and was finally replaced by negligence laws in large part due to the political power of the railway industry. Looking at that history reminds us that we have a long history to draw on in working out responsibility in complex systems, and that we can make creative solutions, but also that the forces of capital shape the ways in which we develop structures for accountability and responsibility.
Tweets about the final keynote, In Sickness and in Health: Technologies and Pathologies, can be found at the #TtW15 #k3 hashtags, with participants Jason Wilson, merritt kopas, Ayesha Siddiqi, Gabriella Coleman, and Alondra Nelson. Nelson’s overview of her work on the Black Panthers’ grassroots genetic screening program was amazing, and laid out a six-point theory of health and technology for the social media age which set up the frame for the panel well:
- Information does not want to be free, but demand it because your life might depend on it. We need access to advanced medical and technical information.
- DIY is self-care.
- Technology needs to be for and by the people.
- Bringing attention to neglected or rare diseases requires an activated network. The Black Panthers had two types of network: one based on homophily (sameness), and another with well-connected nodes that could bring in celebrity (around the campaign on sickle cell anaemia).
- Access to and strategic use of tech must be coupled with vigilance about its excesses. For example, the Black Panthers actively challenged racist assumptions about genetic difference and built a multifaceted understanding of the politics of genetics and race.
- Disruptive innovation can move the state: one outcome of the Black Panthers’ campaign was increased funding for sickle cell anaemia.
merritt kopas followed this with a discussion of games as a site for exploring complex ideas around interiority, mental health, gender, and sexuality. Online games can be produced and distributed easily, and the format allows for non-linear narratives. Games like Depression Quest that explore these issues are getting more attention, and much of this work is being done by women, and especially trans women. Previously, trans people have mainly been allowed to occupy the literary space of the memoir (specifically around transition), which makes trans lives consumable for cis audiences. New games formats allow space for trans women to explore and share their experiences in ways that are more challenging, and frequently are made for other trans people rather than for a broader cis audience. This is important, particularly when being trans online means hearing about suicides (but being told not to talk about them in case you spread suicide), hearing about the murder of trans people (and realising that most of society doesn’t care), being purposefully and continually misgendered, and being harassed and doxxed. Even when in queer or feminist spaces, trans people cannot assume they are safe. merritt also notes that while gg has received a lot of attention, this attention usually centres on the experience of cis white women. However, trans people (and especially trans women) have been experiencing these forms of harassment from trans-exclusionary radical feminists for years.
Ayesha Siddiqi talked about the ways in which marginalised people are building narratives of self care. Posts and tweets sharing tips for self care, or even telling others that they deserve self care, can be seen as a way of sharing amateur mental health resources. We need to be asking why people are turning to these to try to survive: what is it about our communities that creates this need for self care, and why are people forced to look after themselves (rather than being looked after by those around them)?
Finally, Biella Coleman talked about a question that’s come out of her previous project on Anonymous: how did those, and do those, who are deemed ‘crazy’, gain a voice when the very category of being ‘mad’ makes you ‘irrational’? She notes that disability marks the past and present of hacking in dramatic ways. While this has many negative impacts, it also creates spaces where people with disabilities (or people who identify with different neurodiversities) are able to find a place where they are accepted (although I would argue that this space is far more welcoming for some people than others).
The discussion that followed emphasised the ways in which ‘madness’ is socially-constructed: Siddiqi pointed out that traits that would mark others as ‘crazy’ are sentimentalized when they occur in white bodies; Coleman argued that in order to resist categorisations of madness you need strong communities of mutual aid; and Nelson noted that the Black Panthers knew you can’t be healthy in a pathological society, and that there’s been a pathologization of anyone who poses a threat to the state and the market.
I’ll do one more post about Theorizing the Web, but I want to end this one with Alondra Nelson’s words (or as close as I could get to them while typing frantically):
I don’t feel optimistic at all, but people make do and keep going. But we can find a glimmer of hope in spaces and moments, not fully autonomous, of community, and of gathering.
Theorizing the Web, Day 1: cache flow & code queering & racial standpoints & magic & music & concrete dust
April 18, 2015 § 2 Comments
Theorizing the Web has been fascinating, but a bit of a shock to the system after AdaCamp. TtW is gloriously DIY, which has a lot of benefits: it’s particularly great to see an academic(ish) conference that’s open to activists and artists, and not hideously expensive to attend. I did miss the efforts AdaCamp went to in building a safe and inclusive space (including having a clear photo policy, pronouns on badges, and marked walkways for accessibility) – TtW has an anti-harassment policy, which is a great start, but I’d love to see a few more active steps around publicising and extending this policy.
As usual with events like this, I’ve tried to summarise a few of my notes for those who couldn’t make it (and Future Me), but I strongly suggest you check out the program, tweets, and livestream for the conference: there were so many great sessions I couldn’t go to, and of course my notes have been edited down (and tend to get shorter and shorter as the conference progresses).
The first session I went to, Cache Flow, kicked off with Zac Zimmer’s historical perspective on Bitcoin, linking the economic, environmental, and social impacts of sixteenth-century silver mining in the South American region of Potosí with Bitcoin. Zimmer pointed out that the ideology behind Bitcoin reveals a very particular (and circular) understanding of currency: Bitcoin is modelled on gold (and therefore scarce, and increasingly difficult to mine) because gold is seen as an archetypical currency, and gold is seen as an archetypical currency because it is scarce and increasingly difficult to mine. At the same time, this model demonstrates a lack of awareness of the environmental and social externalities involved in mining, which was horrifically destructive in Potosí.
Trebor Scholz lightened the mood briefly by opening his talk with “Okay, tardigrades”, and pointing out that these microscopic animals are much better suited to the rigours of capitalism than us unsteady, exhausted humans. Scholz outlined some of the ways in which digital technologies are allowing for increasing surveillance and atomisation of workers, from Amazon warehouse workers fired for spending a few minutes standing ‘inactive’ to the Mechanical Turk. Online platforms become digital bottlenecks for insecure and precarious workers. Scholz ended by outlining some of the ways in which we might “rip out the algorithmic model” at the heart of the ‘sharing economy’ and make something different, taking the corporate mediation out of the picture and using apps or other digital technologies to build worker-run and/or unionised alternatives. Examples to check out include Turkopticon and the Transunion car service in NYC.
Next up, Andrea Hunter talked about crowdfunding, Crackstarter, and changing journalistic norms. She argued that while many journalists are trying out crowdfunding, this isn’t a sustainable alternative to funding problems in the long term. Crowdfunding requires negotiating new ways of engaging with funders/audiences, and new ways of trying to preserve autonomy while building this engagement. Many journalists currently using crowdfunding are hoping to use it as a step towards setting up new arrangements with advertisers (based on crowdfunding as evidence of a substantial audience).
Finally, Reuben Binns explored the idea of selling our own data as the answer to our privacy concerns. This talk raised some thought-provoking ideas about how we respond to and resist the incredible levels of data-gathering taking place today, often with the goal of more effectively marketing at us. He argued that while selling our data ourselves can be tempting, doing so undermines our autonomy (as it gives marketers tools with which to more effectively manipulate our desires). However, in doing so he referred to a set of goods and services which it is ‘inherently morally problematic’ to exchange, citing sex work along with voting, indentured labour, selling organs, and other examples – this reference to sex work as inherently problematic (and particularly the reference to sex work as ‘prostitution’) wasn’t necessary for the argument, and has many fierce critics.
The second session, Code Queering, opened with Dorian Adams and Steven Losco‘s discussion of ‘Viral Martyrs: Gender Identity, Race, and the Digital Construction of Victimhood’. They argued that allies and media brought attention to the 2014 suicide of 17-year-old Leelah Alcorn, while violence against so many trans people of colour is largely ignored, in part because she was white, young, middle-class, and from the suburbs, and her parents could afford conversion therapy. This meant that coverage and support for Alcorn “did not require acknowledging existing networks of domination beyond a bounded notion of transphobia”. In contrast, despite the fact that trans people of colour (and particularly Black women) make up 70% of LGBT-related murders in the US, public attention to these victims is limited, with media coverage frequently misgendering them and either implying or explicitly referring to a real or imagined history of sex work.
Max Thornton continued the discussion of trans issues, beginning by noting that Leelah Alcorn’s suicide note talked about being isolated from her online communities by her parents’ confiscation of her devices. The Web, Thornton argues, can become a prosthesis for trans people, not just in the sense of extending or supplementing the self, but also in a more transformative way. Social media accounts and online communities can offer trans people who are not able to safely come out a space in which they can explore their identity, and be recognised by others. The web doesn’t just extend the borders of the self, it dissipates them (we are all cyborgs now). This encourages us to divest ourselves of the fallacy of the discrete, atomised, individual self. Thornton argues that this isn’t just theoretical: we need to take trans people’s gender identities seriously, which means recognising that a laptop and wifi can keep people alive.
Next, Chelsea Summers (standing in for Fuck Theory) talked about gay cruising apps. She/they argued that while common understandings of cruising apps tend to create a binary between cruising online and cruising in person, the actual shift is from cruising in specific times and places to a mode of constant, ever-present cruising.
Finally, Dorothy Howard talked about gynoids and geminoids: falling in love with machines. She asked why, when we think about robots and AI, we’re usually asking questions about whether we’ll lose our humanity, rather than about the new forms of intimacy we might be creating. How do algorithms change love? And how, when we think about loving machines, might we explore issues of intimacy, social function, and alienation? (For those interested in these issues, I also recommend my colleague Eleanor Sandry’s Robots and Communication.)
The Racial Standpoints panel was in one of the upper rooms with pretty poor acoustics, so please excuse brevity/errors in my notes. Kyra Gaunt opened by dedicating her work on ‘The Bottomlines Project: YouTube, Segregation and Black Girls’ to Jaime Adedro Moore, who was involved in one of the original YouTube twerk teams and was murdered in 2014. Gaunt and her students have found and watched over 800 hours of twerking videos by black girls on YouTube. She notes that as twerking (which comes out of a number of different African-American and African dance traditions) has become more popular, there are more white girls sharing twerking videos online. Videos by white girls tend to get more views, and more supportive comments, than those by black girls. Perhaps most worryingly, videos by black girls are often posted by older white male users, and/or might share identifying information or receive comments from men trying to make contact with the dancers. Gaunt notes that there are some important ethical issues with this research, including how to present it without revealing information about the girls themselves.
In the next presentation Julia Michiko Hori discussed the ways in which TripAdvisor reveals (or conceals) the relationship between tourism and traumatic histories. Reviews on the site unmask both an anxiety about, and the banality of, systemic historical erasure. Even those who are engaging in ‘cultural heritage tourism’ often post about their experiences within a colonialist framework, in which they are explorers overcoming the challenges of mosquito bites, uncovered food, and overpriced gift shops. These reviews reveal a desire for all places to be welcoming to (Western) tourists, no matter how historically haunted they are.
Louis Philippe Römer‘s Caribbean Visions of Digital Dystopia looked at Facebook demons and trickster prostitutes. He opened by reviewing the history of the Caribbean as the ground zero of European colonisation, and talking about the ways in which this has shaped ICT infrastructures in the Caribbean today: telegraph networks integral to colonial trade have been replaced by internet cable networks. This has enabled rapid adoption of the internet and other ICTs in the Caribbean. However, at the same time there’s often little support for, or recognition of, a local manifestation of Web communities: Facebook, for example, doesn’t even recognise Curaçao as a location.
Mikhel Proulx closed the session talking about ‘Digital Natives: Indigenous Cultures on the Early Web’. He opened with an acknowledgement of the Native history of Manhattan (the only acknowledgement of country I’ve heard at a North American conference, as far as I can remember). Proulx spoke both about the colonialism embedded in many Internet spaces (such as the resonances of browser names like ‘Explorer’ and ‘Navigator’), and about early attempts by Native artists in particular to make room for indigenous perspectives online, including on CyberPowWow and in the Zapatistas’ Internet presence.
I was quite curious to see what Magic, Machines, and Metaphors would be about, and it turned out to be a fascinating exploration of the overlaps and disjunctures between how we think about (and practice?) magic and technology. I really can’t do justice to the beautiful, rambling conversation here, and I recommend checking out the tweets from the session. Participants Ingrid Burrington, Melissa Gira Grant, Karen Gregory, Damien Williams, and Deb Chachra invoked magic as a metaphor for structures of power, but also for resistance. Williams spoke of both magic and technology as systems that are unknown to us, unworkable by us, unless we take the time to become initiated, and Chachra pointed out that for technology, that process of initiation is often made pointlessly difficult in ways that exclude many people.
Chachra has no interest in making technology seem like magic, making it more arcane and inaccessible than it already is. Burrington talked about how this technology-as-magic frame is criticised by the crypto community (“crypto’s not magic, why don’t people use it properly?”) at the same time as many of its members imply that they’re wizards in the area. She also did a cool project using the NSA and the occult as a lens to talk about surveillance, after seeing an astrology magazine do star charts for Snowden and the NSA. “What does it mean to make a star chart for an institution? You have to give it a birthday for a start.” That might seem ridiculous, she says, but at the same time it makes about as much sense as killing people based on metadata.
I also liked the efforts to think through relationships between magic and capitalism. Karen Gregory’s work on Tarot practitioners tracked the ways in which taking up Tarot was often a response to being pushed out of a precarious economy, with Tarot becoming a means of survival. Magic as a means of survival and resistance can take many forms – Burrington’s mention of bots as a way of conjuring familiars made me think of this recent anti-troll campaign, or heartbot. At the same time, we can’t forget that capital is always seeking expansion and enclosure, so talking about magic (or otherwise exposing our spaces of resistance) always risks their commodification.
This linked in with discussions about anglocentrism and appropriation: what does it mean that many of the magical traditions that we draw on are so Western? What does it mean that when tech culture draws on other spiritual traditions, it often does it in ways that are appropriative, or about turning them into tools for productivity?
The keynote to wrap day one focused on Music and the Web. I admit I was a little exhausted at this stage and so I’m not going to try to draw on my rather-incoherent notes too much: again, I highly recommend checking out tweets from the session. Participants Sasha Geffen, Gavin Mueller, Robin James, Reggie Ugwu, and Naomi Zeichner brought up some great points about the changing nature of celebrity and fan labour, and about how social media is shifting practices around not just the sharing of music, but also how it’s composed and produced.