AoIR16 Day 3: Creating Knowledge and Design

The Creating Knowledge session opened with Julian Unkel and Alexander Haas’ work on ‘Credibility and Search Engines. The Effects of Source Reputation, Neutrality and Social Recommendations on the Selection of Search Engine Results.’ Using a model of search engine results, they added different credibility cues, including markers of the source’s reputation, the source’s neutrality, and social recommendations. Students participating in the experiment tended to choose ‘high neutrality’ sources, and also preferred links with a high reputation (news sites).

Reputation influences the probability of selecting a result, but has a weaker effect than rank (how high a source turns up in the search list); other credibility cues don’t have as much of an impact. This points to two possible theoretical conclusions: either people treat credibility as a secondary consideration, or search rank is itself read as a credibility cue. Future research will include modelling the result images on Google rather than DuckDuckGo, focusing on source cues, and looking at dwell time on different sources.

Next up, Colin Doty talked about ‘Believing the Internet: user comments about vaccine safety’. This research tries to understand misinformation on the Internet. The general theory is that the Internet increases misinformation (because anyone can post, and/or it’s easy to retrieve, and/or information spreads rapidly, and/or echo chambers develop). Doty, instead, focuses on understanding why people believe what they do. He focuses on vaccines because this isn’t a case where there’s uncertainty in the research: instead, as with climate change, there’s a strong consensus, with only a tiny minority in the research community disputing its claims (much of that work, like Wakefield’s study, discredited).

IPV vaccination scene, Sanofi Pasteur

Thinking about the kinds of claims being made online opposing vaccines, one issue is the way risk/benefit analyses are framed (for example, claims that only one person in a thousand dies of measles while vaccines are “putting everyone at risk of autism”). Searching for “vaccines” on Google produces autocomplete suggestions that include “vaccines cause autism”, and the search results over-represent the risks of vaccines compared with a straight literature search or meta-analysis (turning up a much higher proportion of anti-vaccine results than exist in the research).

Other routes to misinformation online include ‘common sense’ reasoning (“it stands to reason that vaccines must…”); motivated reasoning (people’s desire to hold onto ideas they’re emotionally attached to, and the emotional nature of concerns around children’s safety); and the spread of personal stories and claims to authority (sharing personal anecdotes about vaccination – “my child got vaccinated and the next day they had extreme behaviour changes” – that are used to push back against doctors’ claims of authority). There’s also a new claim to authority being made: parents “do their own research”, arguing that the Internet is leading them to an unobscured truth. This kind of motivated reasoning can be linked to echo chamber theories: people go looking for information that will support their felt beliefs. One notable trend here is a rise in the perceived ability to know, as anti-vaccination advocates claim that “the internet has empowered me with knowledge/research”.

Nicholas Proferes followed with ‘A heuristic for tracing user knowledge of information flow on SMSs’. The problem he’s addressing is widespread user misunderstanding of how social media platforms actually work: users not knowing that Twitter is public by default; Facebook users not knowing that their newsfeed is curated by an algorithm; Occupy activists accusing Twitter of censoring Trending Topics (subsequent analysis showed this was actually because the Trending Topics algorithms measure changes in velocity, not ongoing volume); and user responses to the Library of Congress Twitter archive, where many users didn’t realise that Twitter was saving their tweets.
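
Twitter’s actual implementation isn’t described in the talk, but the velocity-versus-volume distinction is easy to sketch. A minimal, purely hypothetical illustration (the function, thresholds, and numbers are all made up):

```python
from collections import Counter

def trending_by_velocity(counts_prev_hour, counts_this_hour,
                         min_growth=2.0, min_mentions=50):
    """Flag topics whose mention rate is accelerating, not topics with high steady volume."""
    trending = []
    for tag, now in counts_this_hour.items():
        before = counts_prev_hour.get(tag, 1)  # avoid division by zero for brand-new tags
        growth = now / before
        if now >= min_mentions and growth >= min_growth:
            trending.append((tag, growth))
    return sorted(trending, key=lambda t: t[1], reverse=True)

# A steadily huge tag never 'trends' here; a smaller tag that suddenly spikes does.
prev = Counter({"#occupywallstreet": 10_000, "#newfilm": 40})
now = Counter({"#occupywallstreet": 10_200, "#newfilm": 900})
print(trending_by_velocity(prev, now))  # [('#newfilm', 22.5)]
```

Under this kind of logic, a hashtag with large but constant volume is outranked by one growing quickly from a small base, which is consistent with Occupy’s tags failing to trend despite heavy use.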

All of these issues relate to users’ knowledge of information flow. This matters because our knowledge of information flows on SMSs allows us to gauge risks of information disclosure, make meaningful decisions about use, and participate in governance decisions. In part, knowledge about information flows allows us to push back: our individual power is limited, but we can participate in networked power (like organising with friends not to use Facebook). However, there’s comparatively little research on the intersection between how information actually flows online and how users think it flows.

Understanding how information flows online is a difficult task: it requires understanding algorithms and design, but also policies and economic structures. Drawing on José van Dijck’s critical history of social media, Proferes understands information flow as constituted by both technocultural and socioeconomic flows. For Twitter, understanding its system of information flows requires looking at Twitter’s development, user guides, EDGAR search results (SEC filings), and source code, among other things.

Finally, Leah Scolere and Lee Humphreys’ work on ‘Pinning originality’ examined the curation practices of creative professionals. It starts by understanding Pinterest as a visual discovery tool for finding ideas, one which privileges curation over creation. The research drew on interviews and participant observation with professional designers. Pinning practices among this group highlighted the idea of originality as performance, process, and product.

Originality was defined differently here from how we might expect. Rather than being about pinning images taken by the users themselves, it was about taking content from outside Pinterest and pinning it (rather than repinning others’ images). It was also about bringing offline design strategies online, for example by collecting and effectively curating inspirational images.

Pinboards are a means for designers to present themselves, as a performance of their identities as designers. They therefore include a lot of design-related imagery, and maintain a distance from what designers saw non-designers as pinning (there were no health, recipe, or workout-tip pins, for example). Originality as a process was limited by how a pinboard can be curated, so designers would take a large private pinboard and repin from it onto smaller pinboards. Pinterest allows three private boards, and designers used these as a space to ‘safeguard process’ and try out more ‘edgy’ ideas. There were also links between offline practices and Pinterest use, including face-to-face discussions between designers about group pinboards, and conversations about the effort involved in developing pinboards. Finally, the visibility of pinboards made them into a product presented to others: a way of inspiring imagined audiences.

The next session, Design, opened with Ben Light’s ‘Anyone Here Around Now Today: digitally mediated public sexual cultures’. The ‘real name Web’ is often posed as establishing trust (somewhat disingenuously, given companies’ commercial interests in users providing their real names), and presented as a passport to authentic connection. However, there are still many spaces where connection happens through pseudonymity. Light draws on Nancy Fraser’s work on subaltern politics (shared practices that are culturally unacceptable and often also illegal) and on Frankis and Flowers’ work on public sexual cultures to understand Grindr and other apps as tools that help people connect with each other. Frankis and Flowers differentiate between ‘public sex environments’ (not meant for sex) and ‘public sex venues’ (meant for public sex). Light instead talks about ‘public sex locations’, as the distinction is not so neat.

This research obviously poses significant methodological and ethical challenges. Data collection draws on user comments and geolocated sites. Data is scraped and anonymised, with pseudonyms from the site removed. There are many decisions not to use data in particular ways, and comments aren’t quoted directly in case they eventually become searchable. Getting participant consent was neither possible nor desirable.
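
The talk didn’t go into the mechanics of the anonymisation, but the basic step it describes (stripping site pseudonyms before analysis) might look something like this rough, hypothetical sketch (the salt, field names, and sample record are invented for illustration):

```python
import hashlib

SALT = "project-specific-secret"  # hypothetical; kept out of any published material

def anonymise(records):
    """Replace site pseudonyms with stable but non-reversible placeholder IDs."""
    out = []
    for rec in records:
        digest = hashlib.sha256((SALT + rec["pseudonym"]).encode()).hexdigest()[:10]
        out.append({"user": f"user_{digest}", "comment": rec["comment"]})
    return out

sample = [{"pseudonym": "examplehandle", "comment": "…"}]
print(anonymise(sample))  # pseudonym replaced; comment text retained for analysis only
```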

Next, James Malazita talked about ‘Non-Humans as meaning makers: Elizabeth as a co-designer of Bioshock Infinite’. Malazita asks, “who counts as a who?”, arguing that technologies (and specifically the non-player character Elizabeth) have affordances and agency, but also a subject position. He talks about Elizabeth as a ‘her’, a meaning-making subject [and, somewhat jarringly, Malazita only ever referred to the hypothetical player as ‘he’]. The original plan for Elizabeth was for her to be saved by the player, but for the player to rapidly find out that Elizabeth’s in-game power eclipsed theirs.

However, there’s a contrast between the potential of Elizabeth’s power and the actual gameplay (in which she mostly hides in corners). Ken Levine, talking about the design of Bioshock, described her as ‘the shark in Jaws’, as ‘falling through the ground’, as ‘staring creepily’… a designed object, but also a ‘she’ who didn’t do what the designers wanted her to. ‘Elizabeth contributed to her own design’.

Jeffrey Holmes followed with ‘Teaching as designing: creating game-inspired courses’. Holmes notes that experiences, and specifically good experiences, are important for learning. A lot of teaching is about designing good experiences, which means students should have:

  • Something at stake (affective involvement),
  • Specific actions to complete,
  • Clear goals,
  • The ability to plug in to other tools and minds, and
  • Constraints.

These are all also found in video games, which has led to a large literature on gamification. The problem with this is that there is often too much focus on ‘the game’ (including the game mechanics, which means that students end up playing the game rather than the course, and metaphoric layers interfere with learning). We ask teachers to be game designers (which requires skills that take a long time to learn), and end up with games that may not align well with course goals.

Instead, we might ask what video games can tell us about teaching. Holmes does this by looking at two courses he’s taught that draw on lessons from video games. Some of these lessons include the value of:

  • Using a World of Warcraft party model to cultivate and resource distributed knowledge skills.
  • Allowing customisation and problems with multiple solutions.
  • Treating learners as co-designers and agentive participants.
  • Structuring ways to gauge how a learner is doing, and where to go next (with where to go next being the more important part).
  • Providing ways to develop a critical narrative for their learning (including how to think of their learning as meaningful; and progression not just of skills but as a journey through identities).

Finally, Helen Kennedy presented research on ‘The Role of Convention in Visualising and Imagining Data’. With the growth in available data, access to it often comes through visualisations: this means we need to think critically about how visualisations are produced, and about how they produce data. Part of the skill in understanding visualisations is understanding that something (data) has been transformed; there’s a difference between seeing visualisations as “windows into data” and seeing them as purposeful mediations of data. Visualisations are purposeful acts, the results of decisions, but the resulting visualisation pretends to be coherent and tidy, and removes traces of the interpretation involved.

The power of charts is that they communicate numbers, which people see as trustworthy. There’s an ongoing belief in ‘doing good with data’, and an idea that visualisation makes data transparent and accessible. In interviews, visual designers talked about trying to empower people with their visualisations, in part by representing data accurately, including links to sources, and recognising that choices are involved in creating visualisations. We need to take seriously what visual designers say, including their idealism about their work.

Visualisation conventions constrain what visualisations do. Conventions do rhetorical work, play a persuasive role, and hide the messiness of visualisation. For example, the use of two-dimensional viewpoints creates a sense of objectivity (three-dimensional views are frowned upon because they make it harder to read the data… this makes sense, but it also ‘encodes objectivity’ into the two-dimensional viewpoint). Geometric shapes and lines create a sense of order. Citing data sources makes the data look transparent, which does persuasive work: it gives an aura of truthfulness (and means many of us don’t feel we need to go back to the source, and couldn’t understand it anyway). We need to think about all of this critically to understand practices surrounding the production and consumption of visualisations.
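
None of this is tied to a particular tool, but the conventions Kennedy describes are visible in everyday charting code. A minimal sketch (hypothetical numbers; matplotlib assumed) that enacts them: a flat two-dimensional view, tidy geometric bars, and a citation of a data source:

```python
import matplotlib.pyplot as plt

# Hypothetical figures, purely for illustration.
years = ["2012", "2013", "2014", "2015"]
share = [34, 41, 47, 52]

fig, ax = plt.subplots(figsize=(6, 3))
ax.bar(years, share, color="#4c72b0")      # clean geometric shapes create a sense of order
for side in ("top", "right"):
    ax.spines[side].set_visible(False)     # the flat 2D view that 'encodes objectivity'
ax.set_ylabel("Share of adults (%)")
ax.set_title("Adults getting news online")
# Citing a source lends an aura of transparency and truthfulness,
# whether or not readers ever follow it back to the data.
fig.text(0.01, 0.01, "Source: hypothetical survey data", fontsize=8, color="grey")
plt.tight_layout()
plt.show()
```

The tidy result carries no trace of the cleaning, selection, and design decisions that produced it, which is exactly the point Kennedy is making.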
