November 8, 2015
The Creating Knowledge session opened with Julian Unkel and Alexander Haas’s work on ‘Credibility and Search Engines. The Effects of Source Reputation, Neutrality and Social Recommendations on the Selection of Search Engine Results.’ Using a mock-up of search engine results, they added different credibility cues, including markers of the reputation of the source, neutrality of the source, and social recommendations. Students participating in the experiment tended to choose ‘high neutrality’ sources, and also preferred links with a high reputation (news sites).
Reputation influences the probability of selecting a result, but has a weaker effect than rank (how high a source appears in the search list): other credibility cues don’t have as much of an impact. This suggests two possible theoretical conclusions: either people treat credibility as a secondary consideration, or search rank is itself read as a credibility cue. Future research will include modelling images with Google rather than DuckDuckGo, focusing on source cues, and looking at dwell time on different sources.
Next up, Colin Doty talked about ‘Believing the Internet: user comments about vaccine safety’. This research tries to understand misinformation on the Internet. The general theory is that the Internet increases misinformation (because anyone can post; and/or it’s easy to retrieve; and/or information spreads rapidly; and/or echo chambers develop). Doty, instead, focuses on understanding why people believe what they do. He focuses on vaccines because this isn’t a case where there’s uncertainty in the research: instead, as with climate change, there’s a strong consensus in the research community, with only a tiny minority disputing its claims (much of that dissent, like Wakefield’s study, discredited).
Thinking about the kinds of claims being made online opposing vaccines, one issue is the way risk/benefit analyses are framed (for example, claims that only one person in a thousand dies of measles while vaccines are “putting everyone at risk of autism”). Searching for “vaccines” on Google leads to autocomplete options that include “vaccines cause autism”, and search results over-represent the risks of vaccines compared to a straight literature search or meta-analysis (turning up a much higher proportion of anti-vaccine results than exist in the research).
Other routes to misinformation online include the use of ‘common sense’ reasoning (“it stands to reason that vaccines must…”), motivated reasoning (people’s desire to hold onto ideas they’re emotionally attached to, and the emotional nature of concerns around children’s safety), the spread of personal stories and claims to authority around this (sharing personal anecdotes about vaccination – “my child got vaccinated and the next day they had extreme behaviour changes” – that are used to push back against doctors’ claims of authority). There’s also a new claim to authority being made: parents “do their own research”, arguing that the Internet is leading them to an unobscured truth. This kind of motivated reasoning can be linked to echo chambers theories: that people go looking for information that will support their felt beliefs. One notable trend found here is the rise in the perception of the ability to know, as anti-vaccination advocates claim that “the internet has empowered me with knowledge/research”.
Nicholas Proferes followed with ‘A heuristic for tracing user knowledge of information flow on SMSs’. The problem he’s addressing is the vast user misunderstanding of how social media platforms actually work. For example, users not knowing that Twitter is public by default; Facebook users’ lack of knowledge that their newsfeed is based on an algorithm; Occupy accusing Twitter of censoring Trending Topics – subsequent analysis showed this was actually because the Trending Topic algorithms measure changes in velocity, not ongoing volume; and user responses to the Library of Congress Twitter archive – many users didn’t realise that Twitter was saving their tweets.
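The velocity-versus-volume distinction behind the Occupy/Trending Topics dispute can be sketched in a few lines. This is an illustrative toy, not Twitter’s actual algorithm, and the hourly counts are invented for the example: the point is simply that a first-difference (velocity) score rewards acceleration, so a huge but steady topic scores lower than a small but rapidly growing one.

```python
# Toy sketch (NOT Twitter's real code) of velocity-based trending:
# score a topic by its recent *change* in volume, not its absolute volume.

def trending_score(hourly_counts):
    """Return the change in volume over the most recent interval."""
    if len(hourly_counts) < 2:
        return 0
    return hourly_counts[-1] - hourly_counts[-2]  # simple first difference

# A sustained movement: enormous but steady volume -> low velocity.
occupy = [50_000, 50_500, 50_200, 50_400]
# A breaking topic: small but accelerating volume -> high velocity.
breaking = [200, 400, 900, 2_500]

print(trending_score(occupy))    # 200
print(trending_score(breaking))  # 1600
```

Under this kind of scoring, a topic that stays loud for weeks never “trends” again after its initial spike, which is consistent with the subsequent analysis Proferes cites: no censorship is needed for #Occupy to disappear from the list.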
All of these issues relate to users’ knowledge of information flow. This matters because our knowledge of information flows on SMS allows us to gauge risks for information disclosure, make meaningful decisions about use, and participate in governance decisions. In part, our knowledge about information flows allows us to push back: our power is limited, but we can participate in networked power (like organising with friends not to use Facebook). However, there’s comparatively little research on the intersection between how information flows online, and how users think information flows online.
Understanding how information flows online is a difficult task: it requires understanding algorithms and design, but also policies, and economic structures. Drawing on Jose Van Dijck’s critical history of social media, Proferes understands information flow as constituted by both technocultural and socioeconomic flows. For Twitter, understanding its system of information flows requires looking at Twitter’s development, user guides, EDGAR search results (NYSE filings), and source code, among other things.
Finally, Leah Scolere and Lee Humphreys’ work on ‘Pinning originality’ examined the curation practices of creative professionals. This starts by understanding Pinterest as a visual discovery tool for finding ideas, and one which privileges curation over creation. This research drew on interviews and participant observation with professional designers. Pinning practices among this group highlighted the idea of originality as performance, process, and product.
Originality was defined differently from how we might expect, here. Rather than being about pinning images taken by the users themselves, it was about taking content from outside Pinterest and pinning it (rather than repinning others’ images). It was also about taking offline design strategies online, for example, by collecting and effectively curating inspirational images.
Pinboards are a means for designers to present themselves, as a performance of their identities as designers. Therefore they include a lot of design-related imagery (and a distance between this and what they saw non-designers as pinning: for example, there were no health, recipe, or workout tip pins). Originality as a process was limited by how you can curate a pinboard, so designers would take a large private pinboard and repin onto smaller pinboards. Pinterest allows three private boards: designers used these as a space to ‘safeguard process’ and try out more ‘edgy’ ideas. There were also links between offline practices and Pinterest use, including face-to-face discussions between designers about group pinboards, and conversations about the effort involved in developing pinboards. Finally, the visibility of pinboards made them into a product presented to others: a way of inspiring imagined audiences.
The next session, Design, opened with Ben Light’s ‘Anyone Here Around Now Today: digitally mediated public sexual cultures’. The ‘real name Web’ is often posed as establishing trust (somewhat disingenuously, given companies’ commercial interests in users providing their real names), and presented as a passport to authentic connection. However, there are still many spaces where connection happens through pseudonymity. Light draws on Nancy Fraser’s work on subaltern politics (shared practices that are culturally unacceptable and often also illegal) and work on public sexual cultures from Frankis and Flowers to understand Grindr and other apps as tools to help you connect with other people. Frankis and Flowers differentiate between ‘public sex environments’ (spaces not intended for sex but used for it) and ‘public sex venues’ (spaces meant for public sex). Light instead talks about ‘public sex locations’, as the distinction is not so neat.
This research obviously poses significant methodological and ethical challenges. Data collection draws on user comments and geolocated sites. Data is scraped and anonymised, with pseudonyms from the site removed. There are many decisions not to use data in particular ways, and comments aren’t directly quoted in case they eventually become searchable. Getting participant consent was neither possible nor desirable.
Next, James Malazita talked about ‘Non-Humans as meaning makers: Elizabeth as a co-designer of Bioshock Infinite’. Malazita asks, “who counts as a who?”, arguing that technologies (and specifically the non-player character Elizabeth) have affordances and agency, but also a subject position. He talks about Elizabeth as a ‘her’, a meaning-making subject [and, somewhat jarringly, Malazita also only referred to the hypothetical male player as ‘he’]. The original plan for Elizabeth was for her to be saved by the player, but for the player to rapidly find out that Elizabeth’s in-game power eclipsed theirs.
However, there’s a contrast between the potential of Elizabeth’s power and the actual gameplay (in which she mostly hides in corners). Ken Levine, talking about the design of Bioshock, described her as ‘the shark in Jaws’, as ‘falling through the ground’, ‘staring creepily’…a designed object, but also a ‘she’ who didn’t do what they wanted her to. ‘Elizabeth contributed to her own design’.
Jeffrey Holmes followed with ‘Teaching as designing: creating game-inspired courses’. Holmes notes that experiences, and specifically good experiences, are important for learning. A lot of teaching is about designing good experiences, which means students should have:
- Something at stake (affective involvement),
- Specific actions to complete,
- Clear goals,
- The ability to plug in to other tools and minds.
These are all also found in video games, which has led to a large literature on gamification. The problem with this is that there is often too much focus on ‘the game’ (including the game mechanics, which means that students end up playing the game rather than the course, and there are metaphoric layers that interfere with learning). We ask teachers to be game designers (which requires skills that take a long time to learn), and end up with games that may not align well with course goals.
Instead, we might ask what video games can tell us about teaching. Holmes does this by looking at two courses he’s taught that draw on lessons from video games. Some of these lessons include the value of:
- Using a World of Warcraft party model to cultivate and resource distributed knowledge skills.
- Allowing customisation and problems with multiple solutions.
- Treating learners as co-designers and agentive participants.
- Structuring ways to gauge how a learner is doing, and where to go next (with where to go next being the far more important part).
- Providing ways to develop a critical narrative for their learning (including how to think of their learning as meaningful; and progression not just of skills but as a journey through identities).
Finally, Helen Kennedy presented research on ‘The Role of Convention in Visualising and Imagining Data’. With the growth in available data, access is often through visualisations: this means we need to think critically about how visualisations are produced, and about how they produce data. Part of the skill in understanding visualisations is understanding that something (data) has been transformed; there’s a difference between seeing visualisations as “windows into data” and visualisations as purposeful mediations of data. Visualisations are purposeful acts: results of decisions. But the resulting visualisation pretends to be coherent and tidy, and removes traces of the interpretation involved.
The power of charts is that they communicate numbers, which people see as trustworthy. There’s an ongoing belief in ‘doing good with data’, and an idea that visualisation makes data transparent and accessible. In interviews with visual designers, they talked about trying to empower people with their visualisations, in part by representing data accurately; including links to sources; and recognising that choices are involved in creating visualisations. We need to take seriously what visual designers say, including their idealism about their work.
Visualisation conventions constrain what visualisations do. Conventions do rhetorical work, play a persuasive role, and hide the messiness of visualisation. For example, the use of two-dimensional viewpoints creates a sense of objectivity (use of three-dimensional views is frowned upon, as it makes it harder to view data…this makes sense, but also ‘encodes objectivity’ in the two-dimensional viewpoint). Geometric shapes and lines create a sense of order. Citing data sources makes the data look transparent, which does persuasive work: it gives an aura of truthfulness (which means many of us don’t feel we need to go back to the source, and couldn’t understand it anyway). We need to think about all of this critically to understand practices surrounding the production and consumption of visualisations.
Theorizing the Web, Day 1: cache flow & code queering & racial standpoints & magic & music & concrete dust
April 18, 2015
Theorizing the Web has been fascinating, but a bit of a shock to the system after AdaCamp. TtW is gloriously DIY, which has a lot of benefits: it’s particularly great to see an academic(ish) conference that’s open to activists and artists, and not hideously expensive to attend. I did miss the efforts AdaCamp went to in building a safe and inclusive space (including having a clear photo policy, pronouns on badges, and marked walkways for accessibility) – TtW has an anti-harassment policy, which is a great start, but I’d love to see a few more active steps around publicising and extending this policy.
As usual with events like this, I’ve tried to summarise a few of my notes for those who couldn’t make it (and Future Me), but I strongly suggest you check out the program, tweets, and livestream for the conference: there were so many great sessions I couldn’t go to, and of course my notes have been edited down (and tend to get shorter and shorter as the conference progresses).
The first session I went to, Cache Flow, kicked off with Zac Zimmer’s historical perspective on Bitcoin, linking the economic, environmental, and social impacts of sixteenth-century silver mining in the South American region of Potosí with Bitcoin. Zimmer pointed out that the ideology behind Bitcoin reveals a very particular (and circular) understanding of currency: Bitcoin is modelled on gold (and therefore scarce, and increasingly difficult to mine) because gold is seen as an archetypical currency, and gold is seen as an archetypical currency because it is scarce and increasingly difficult to mine. At the same time, this model demonstrates a lack of awareness of the environmental and social externalities involved in mining, which was horrifically destructive in Potosí.
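Bitcoin’s gold-like scarcity that Zimmer describes isn’t just ideological framing: it’s hard-coded. The block subsidy starts at 50 BTC and halves every 210,000 blocks, so total issuance converges on just under 21 million coins, and each coin becomes “harder to mine” in the sense that the same work yields less reward. A minimal sketch of that schedule (the constants are Bitcoin’s published protocol parameters; the function name is my own):

```python
# Sketch of Bitcoin's engineered scarcity: a geometric halving schedule
# caps total issuance just below 21 million BTC.

def total_supply_btc():
    subsidy = 50 * 100_000_000   # initial block subsidy, in integer satoshis
    blocks_per_era = 210_000     # halving interval, in blocks
    total = 0
    while subsidy > 0:
        total += blocks_per_era * subsidy
        subsidy //= 2            # halving uses integer division, as the protocol does
    return total / 100_000_000   # convert satoshis back to BTC

print(total_supply_btc())  # prints roughly 20999999.9769
```

Because the subsidy is an integer number of satoshis and halving is integer division, the series terminates exactly, which is why the cap is slightly under 21 million rather than exactly at it.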
Trebor Scholz lightened the mood briefly by opening his talk, “Okay, tardigrades”, and pointing out that these microscopic animals are much more well-suited to the rigours of capitalism than us unsteady, exhausted humans. Scholz outlined some of the ways in which digital technologies are allowing for increasing surveillance and atomisation of workers, from Amazon warehouse workers fired for spending a few minutes standing ‘inactive’ to the Mechanical Turk. Online platforms become digital bottlenecks for insecure and precarious workers. Scholz ended by outlining some of the ways in which we might “rip out the algorithmic model” at the heart of the ‘sharing economy’ and make something different, taking the corporate mediation out of the picture and using apps or other digital technologies to build worker-run and/or unionised alternatives. Examples to check out include: Turkopticon and the Transunion car service in NYC.
Next up, Andrea Hunter talked about crowdfunding, Crackstarter, and changing journalistic norms. She argued that while many journalists are trying out crowdfunding, this isn’t a sustainable alternative to funding problems in the long term. Crowdfunding requires negotiating new ways of engaging with funders/audiences, and new ways of trying to preserve autonomy while building this engagement. Many journalists currently using crowdfunding are hoping to use it as a step towards setting up new arrangements with advertisers (based on crowdfunding as evidence of a substantial audience).
Finally, Reuben Binns explored the idea of selling our own data as the answer to our privacy concerns. This talk raised some thought-provoking ideas about how we respond to and resist the incredible levels of data-gathering taking place today, often with the goal of more effectively marketing at us. He argued that while selling our data ourselves can be tempting, doing so undermines our autonomy (as it gives marketers tools with which to more effectively manipulate our desires). However, in doing so he referred to a set of goods and services which it is ‘inherently morally problematic’ to exchange, citing sex work along with voting, indentured labour, selling organs, and other examples. This reference to sex work as inherently problematic (and particularly the reference to sex work as ‘prostitution’) wasn’t necessary for the argument, and has many fierce critics.
The second session, Code Queering, opened with Dorian Adams and Steven Losco‘s discussion of ‘Viral Martyrs: Gender Identity, Race, and the Digital Construction of Victimhood’. They argued that allies and media brought attention to the 2014 suicide of 17-year-old Leelah Alcorn while violence against so many trans people of colour is largely ignored in part because she was white, young, middle-class, and from the suburbs, and her parents could afford conversion therapy. This meant coverage and support for Alcorn “did not require acknowledging existing networks of domination beyond a bounded notion of transphobia”. In contrast, despite the fact that trans people of colour (and particularly Black women) make up 70% of LGBT-related murders in the US, public attention to these victims is limited, with media coverage frequently misgendering them, and either implying or explicitly referring to a real or imagined history of sex work.
Max Thornton continued the discussion of trans issues, beginning by noting that Leelah Alcorn’s suicide note talked about being isolated from her online communities by her parents’ confiscation of her devices. The Web, Thornton argues, can become a prosthesis for trans people, not just in the sense of extending or supplementing the self, but also in a more transformative way. Social media accounts and online communities can offer trans people who are not able to safely come out a space in which they can explore their identity, and be recognised by others. The web doesn’t just extend the borders of the self, it dissipates them (we are all cyborgs now). This encourages us to divest ourselves of the fallacy of the discrete, atomised, individual self. Thornton argues that this isn’t just theoretical: we need to take trans people’s gender identities seriously, which means recognising that a laptop and wifi can keep people alive.
Next, Chelsea Summers (standing in for Fuck Theory) talked about gay cruising apps. She/they argued that while common understandings of cruising apps tend to create a binary between cruising online and cruising in person, the actual shift is from a mode of cruising in specific times and places to constantly and ever-presently cruising.
Finally, Dorothy Howard talked about gynoids and geminoid: falling in love with machines. She asked why, when we think about robots and AI, we’re usually asking questions about whether we’ll lose our humanity, rather than about the new forms of intimacy we might be creating? How do algorithms change love? And how, when we think about loving machines, might we explore issue of intimacy, social function, and alienation. (For those interested in these issues, I also recommend my colleague Eleanor Sandry’s Robots and Communication.)
The Racial Standpoints panel was in one of the upper rooms with pretty poor acoustics, so please excuse brevity/errors in my notes. Kyra Gaunt opened by dedicating her work on ‘The Bottomlines Project: YouTube, Segregation and Black Girls’ to Jaime Adedro Moore, who was involved in one of the original YouTube twerk teams and was murdered in 2014. Gaunt and her students have found and watched over 800 hours of twerking videos by black girls on YouTube. She notes that as twerking (which comes out of a number of different African-American and African dance traditions) has become more popular, there are more white girls sharing twerking videos online. Videos by white girls tend to get more views, and more supportive comments, than those by black girls. Perhaps most worryingly, videos by black girls are often posted by older white male users, and/or might share identifying information or receive comments from men trying to make contact with the dancers. Gaunt notes that there are some important ethical issues with this research, including how to present it without revealing information about the girls themselves.
In the next presentation Julia Michiko Hori discussed the ways in which TripAdvisor reveals (or conceals) the relationship between tourism and traumatic histories. Reviews on the site unmask both an anxiety about, and the banality of, systemic historical erasure. Even those who are engaging in ‘cultural heritage tourism’ often post about their experiences within a colonialist framework, in which they are explorers overcoming the challenges of mosquito bites, uncovered food, and overpriced gift shops. These reviews reveal a desire for all places to be welcoming to (Western) tourists, no matter how historically haunted they are.
Louis Philippe Römer‘s Caribbean Visions of Digital Dystopia looked at Facebook demons and trickster prostitutes. He opened by reviewing the history of the Caribbean as the ground zero of European colonisation, and talking about the ways in which this has shaped ICT infrastructures in the Caribbean today: telegraph networks integral to colonial trade have been replaced by internet cable networks. This has enabled rapid adoption of the internet and other ICTs in the Caribbean. However, at the same time there’s often little support for, or recognition of, local manifestations of Web communities: Facebook, for example, doesn’t even recognise Curaçao as a location.
Mikhel Proulx closed the session talking about ‘Digital Natives: Indigenous Cultures on the Early Web’. He opened with an acknowledgement of the Native history of Manhattan (the only acknowledgement of country I’ve heard at a North American conference, as far as I can remember). Proulx spoke both about the colonialism embedded in many Internet spaces (such as the colonial resonances of the browser names ‘Explorer’ and ‘Navigator’), and of early attempts by Native artists in particular to make room for indigenous perspectives online, including on CyberPowWow and the Zapatistas’ Internet presence.
I was quite curious to see what Magic, Machines, and Metaphors would be about, and it turned out to be a fascinating exploration of the overlaps and disjunctures between how we think about (and practice?) magic and technology. I really can’t do justice to the beautiful, rambling, conversation here, and I recommend checking out the tweets from the session. Participants Ingrid Burrington, Melissa Gira Grant, Karen Gregory, Damien Williams, and Deb Chachra invoked magic as a metaphor for structures of power, but also for resistance. Williams spoke of both magic and technology as systems that are unknown to us, unworkable to us, unless we take the time to become initiated, and Chachra pointed out that for technology, that process of initiation is often made pointlessly difficult in ways that exclude many people.
Chachra has no interest in making technology seem like magic, making it more arcane and inaccessible than it already is. Burrington talked about how this technology-as-magic frame is simultaneously criticised by the crypto community (“crypto’s not magic, why don’t people use it properly?”) at the same time as many people imply that they’re wizards in the area. She also did a cool project using the NSA and the occult as a lens to talk about surveillance, after seeing an astrology magazine do star charts for Snowden and the NSA. “What does it mean to make a star chart for an institution? You have to give it a birthday for a start.” That might seem ridiculous, she says, but at the same time it makes about as much sense as killing people based on metadata.
I also liked the efforts to think through relationships between magic and capitalism. Karen Gregory’s work on Tarot practitioners tracked ways in which this was often a response to being pushed out of a precarious economy, with Tarot becoming a means of survival. Magic as a means of survival and resistance can take many forms – Burrington’s mention of bots as a way of conjuring familiars made me think of this recent anti-troll campaign, or heartbot. At the same time, we can’t forget that capital is always seeking expansion and enclosure, so talking about magic (or otherwise exposing our spaces of resistance) is always risking their commodification.
This linked in with discussions about anglocentrism and appropriation: what does it mean that many of the magical traditions that we draw on are so Western? What does it mean that when tech culture draws on other spiritual traditions, it often does it in ways that are appropriative, or about turning them into tools for productivity?
The keynote to wrap day one focused on Music and the Web. I admit I was a little exhausted at this stage and so I’m not going to try to draw on my rather-incoherent notes too much: again, I highly recommend checking out tweets from the session. Participants Sasha Geffen, Gavin Mueller, Robin James, Reggie Ugwu, and Naomi Zeichner brought up some great points about the changing nature of celebrity and fan labour, and about how social media is shifting practices around not just the sharing of music, but also how it’s composed and produced.