November 8, 2015 § Leave a comment
The Creating Knowledge session opened with Julian Unkel and Alexander Haas’ work on ‘Credibility and Search Engines. The Effects of Source Reputation, Neutrality and Social Recommendations on the Selection of Search Engine Results.’ Using a model of search engine results they added different credibility cues, including markers of the reputation of the source, neutrality of the source, and social recommendations. Students participating in the experiment tended to choose ‘high neutrality’ sources, and also preferred links with a high reputation (news sites).
Reputation influences the probability of selecting a result, but has a weaker effect than rank (how high a source turns up in the search list); other credibility cues don’t have as much of an impact. This allows two theoretical conclusions: either people treat credibility cues as secondary, or search rank is itself read as a credibility cue. Future research will include modelling the result images on Google rather than DuckDuckGo, focusing on source cues, and looking at dwell time on different sources.
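The relative weight of rank and reputation can be pictured with a toy logistic model of result selection. This is a minimal sketch with illustrative coefficients of my own, not the estimates from Unkel and Haas’ study: the point is only that a larger (negative) weight on rank swamps the reputation cue.

```python
import math

def selection_probability(rank, high_reputation,
                          b_rank=-0.6, b_rep=0.25, intercept=1.0):
    """Toy logistic model of result selection.

    rank: position in the results list (1 = top).
    high_reputation: 1 if the result carries a reputation cue, else 0.
    Coefficients are illustrative, not the study's estimates."""
    logit = intercept + b_rank * (rank - 1) + b_rep * high_reputation
    return 1 / (1 + math.exp(-logit))

# Moving a result down the list costs far more selection
# probability than removing its reputation cue does.
print(round(selection_probability(1, 1), 3))  # 0.777
print(round(selection_probability(1, 0), 3))  # 0.731
print(round(selection_probability(4, 1), 3))  # 0.366
```

Dropping from rank 1 to rank 4 roughly halves the toy selection probability, while losing the reputation cue barely moves it, which is the shape of the finding reported above.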
Next up, Colin Doty talked about ‘Believing the Internet: user comments about vaccine safety’. This research tries to understand misinformation on the Internet. The general theory is that the Internet increases information (because anyone can post; AND/OR it’s easy to retrieve; AND/OR spread of information is rapid; AND/OR echo chambers develop). Doty, instead, focuses on understanding why people believe what they do. He focuses on vaccines because this isn’t a case where there’s uncertainty in the research: instead, like climate change, there’s a strong consensus on the scientific claims, with a tiny minority in the research community disputing them (much of that work, like Wakefield’s study, discredited).
Thinking about the kinds of claims being made online opposing vaccines, one issue is the way risk/benefit analyses are framed (for example, claims that only one person in a thousand dies of measles while vaccines are “putting everyone at risk of autism”). Searching for “vaccines” on Google leads to autocomplete options that include “vaccines cause autism”, and search results lead to breakdowns that over-represent the risk of the vaccines, compared to a straight literature search/meta-analysis (turning up a much higher proportion of anti-vaccine search results than exist in the research).
Other routes to misinformation online include the use of ‘common sense’ reasoning (“it stands to reason that vaccines must…”), motivated reasoning (people’s desire to hold onto ideas they’re emotionally attached to, and the emotional nature of concerns around children’s safety), the spread of personal stories and claims to authority around this (sharing personal anecdotes about vaccination – “my child got vaccinated and the next day they had extreme behaviour changes” – that are used to push back against doctors’ claims of authority). There’s also a new claim to authority being made: parents “do their own research”, arguing that the Internet is leading them to an unobscured truth. This kind of motivated reasoning can be linked to echo chambers theories: that people go looking for information that will support their felt beliefs. One notable trend found here is the rise in the perception of the ability to know, as anti-vaccination advocates claim that “the internet has empowered me with knowledge/research”.
Nicholas Proferes followed with ‘A heuristic for tracing user knowledge of information flow on SMSs’. The problem he’s addressing is the vast user misunderstanding of how social media platforms actually work. For example, users not knowing that Twitter is public by default; Facebook users’ lack of knowledge that their newsfeed is based on an algorithm; Occupy accusing Twitter of censoring Trending Topics – subsequent analysis showed this was actually because the Trending Topic algorithms measure changes in velocity, not ongoing volume; and user responses to the Library of Congress Twitter archive – many users didn’t realise that Twitter was saving their tweets.
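The Occupy example turns on the difference between ongoing volume and changes in velocity. A minimal sketch of that distinction (my own illustration, not Twitter’s actual Trending Topics algorithm, whose details are proprietary) might look like this:

```python
def is_trending(counts, window=3, spike_factor=2.0):
    """Flag a topic as trending only if its latest count is a sharp
    acceleration relative to its own recent baseline, no matter how
    large its ongoing volume is. Illustrative sketch only."""
    if len(counts) <= window:
        return False
    recent = counts[-1]
    baseline = sum(counts[-window - 1:-1]) / window
    return baseline > 0 and recent / baseline >= spike_factor

# A hashtag with huge but steady volume never trends...
steady = [10000, 10100, 9900, 10050, 10000]
# ...while a small topic with a sudden spike does.
spiking = [50, 60, 55, 58, 180]

print(is_trending(steady))   # False: constant high volume
print(is_trending(spiking))  # True: sudden acceleration
```

Under a velocity-based measure like this, a topic that stays uniformly popular for weeks (as Occupy’s hashtags did) falls out of the trending list without any censorship being involved.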
All of these issues relate to users’ knowledge of information flow. This matters because our knowledge of information flows on SMS allows us to gauge risks for information disclosure, make meaningful decisions about use, and participate in governance decisions. In part, our knowledge about information flows allows us to push back: our power is limited, but we can participate in networked power (like organising with friends not to use Facebook). However, there’s comparatively little research on the intersection between how information flows online, and how users think information flows online.
Understanding how information flows online is a difficult task: it requires understanding algorithms and design, but also policies and economic structures. Drawing on José van Dijck’s critical history of social media, Proferes understands information flow as constituted by both technocultural and socioeconomic flows. For Twitter, understanding its system of information flows requires looking at Twitter’s development, user guides, EDGAR search results (SEC filings), and source code, among other things.
Finally, Leah Scolere and Lee Humphreys’ work on ‘Pinning originality’ examined the curation practices of creative professionals. This starts by understanding Pinterest as a visual discovery tool for finding ideas, and one which privileges curation over creation. This research drew on interviews and participant observation with professional designers. Pinning practices among this group highlighted the idea of originality as performance, process, and product.
Originality was defined differently from how we might expect here. Rather than being about pinning images taken by the users themselves, it was about bringing content from outside Pinterest onto the site (rather than repinning others’ images). It was also about taking offline design strategies online, for example, by collecting and effectively curating inspirational images.
Pinboards are a means for designers to present themselves, as a performance of their identities as designers. Therefore they include a lot of design-related imagery (and a distance between this and what they saw non-designers as pinning: for example, there were no health, recipe, or workout tip pins). Originality as a process was limited by how you can curate a Pinboard, so designers would take a large private Pinboard and repin onto smaller Pinboards. Pinterest allows three private boards, and designers used these as a space to ‘safeguard process’ and try out more ‘edgy’ ideas. There were also links between offline practices and Pinterest use, including face to face discussions between designers about group Pinboards, and conversations about the effort involved in developing Pinboards. Finally, the visibility of pinboards made them into a product presented to others: a way of inspiring imagined audiences.
The next session, Design, opened with Ben Light’s ‘Anyone Here Around Now Today: digitally mediated public sexual cultures’. The ‘real name Web’ is often posed as establishing trust (somewhat disingenuously, given companies’ commercial interests in users providing their real names), and presented as a passport to authentic connection. However, there are still many spaces where connection happens through pseudonymity. Light draws on Nancy Fraser’s work on subaltern politics (which shapes practices that are culturally unacceptable and often also illegal) and work on public sexual cultures from Frankis and Flowers to understand Grindr and other apps as tools to help you connect with other people. Frankis and Flowers differentiate between ‘public sex environments’ (not meant for sex) and ‘public sex venues’ (meant for public sex). Light instead talks about ‘public sex locations’, as the distinction is not so neat.
This research obviously poses significant methodological and ethical challenges. Data collection draws on user comments and geolocated sites. Data is scraped and anonymised, with pseudonyms from the site removed. There are many decisions not to use data in particular ways, and comments aren’t directly quoted in case they eventually become searchable. Getting participant consent was neither possible nor desirable.
Next, James Malazita talked about ‘Non-Humans as meaning makers: Elizabeth as a co-designer of Bioshock Infinite’. Malazita asks, “who counts as a who?”, arguing that technologies (and specifically the non-player character Elizabeth) have affordances and agency, but also a subject position. He talks about Elizabeth as a ‘her’, a meaning-making subject [and, somewhat jarringly, Malazita also only referred to the hypothetical male player as ‘he’]. The original plan for Elizabeth was for her to be saved by the player, but for the player to rapidly find out that Elizabeth’s in-game power eclipsed theirs.
However, there’s a contrast between the potential of Elizabeth’s power and the actual gameplay (in which she mostly hides in corners). Ken Levine, talking about the design of Bioshock, described her as ‘the shark in Jaws’, as ‘falling through the ground’, as ‘staring creepily’… a designed object, but also a ‘she’ who didn’t do what they wanted her to: ‘Elizabeth contributed to her own design’.
Jeffrey Holmes followed with ‘Teaching as designing: creating game-inspired courses’. Holmes notes that experiences, and specifically good experiences, are important for learning. A lot of teaching is about designing good experiences, which means students should have:
- Something at stake (affective involvement),
- Specific actions to complete,
- Clear goals,
- The ability to plug in to other tools and minds.
These are all also found in video games. This leads to a lot of literature on gamification. The problem with this is that there is often too much focus on ‘the game’ (including the game mechanics, which means that students end up playing the game rather than the course, and there are metaphoric layers that interfere with learning). We ask teachers to be game designers (which requires skills that take a long time to learn), and end up with games that may not align well with course goals.
Instead, we might ask what video games can tell us about teaching. Holmes does this by looking at two courses he’s taught that draw on lessons from video games. Some of these lessons include the value of:
- Using a World of Warcraft party model to cultivate and resource distributed knowledge skills.
- Allowing customisation and problems with multiple solutions.
- Treating learners as co-designers and agentive participants.
- Structuring ways to gauge how a learner is doing, and where to go next (the latter being the far more important part).
- Providing ways to develop a critical narrative for their learning (including how to think of their learning as meaningful; and progression not just of skills but as a journey through identities).
Finally, Helen Kennedy presented research on ‘The Role of Convention in Visualising and Imagining Data’. With the growth in available data, access is often through visualisations: this means we need to think critically about how visualisations are produced, and about how they produce data. Part of the skill in understanding visualisations is understanding that something (data) has been transformed; there’s a difference between seeing visualisations as “windows into data” and visualisations as purposeful mediations of data. Visualisations are purposeful acts: results of decisions. But the resulting visualisation pretends to be coherent and tidy, and removes traces of the interpretation involved.
The power of charts is that they communicate numbers, which people see as trustworthy. There’s an ongoing belief in ‘doing good with data’, and an idea that visualisation makes data transparent and accessible. In interviews with visual designers, they talked about trying to empower people with their visualisations, in part by representing data accurately; including links to sources; and recognising that choices are involved in creating visualisations. We need to take seriously what visual designers say, including their idealism about their work.
Visualisation conventions constrain what visualisations do. Conventions do rhetorical work: they play a persuasive role and hide the messiness of visualisation. For example, the use of two-dimensional viewpoints creates a sense of objectivity (use of three-dimensional views is frowned upon, as it makes it harder to view data… this makes sense, but also ‘encodes objectivity’ in the two-dimensional viewpoint). Geometric shapes and lines create a sense of order. Citing data sources makes the data look transparent, which does persuasive work: it gives an aura of truthfulness (which means many of us don’t feel we need to go back to the source, and couldn’t understand it anyway). We need to think about all of this critically to understand practices surrounding the production and consumption of visualisations.
October 7, 2013 § Leave a comment
Today when I logged into Facebook I got a message letting me know that I was banned from posting any content for the next 24 hours. Another contributor from a group I help to moderate had posted ‘inappropriate content’ and so all moderators for that group were temporarily locked from posting to Facebook at all.
This would be mildly annoying most of the time, but at the moment I’m teaching a unit where a substantial proportion of the discussion takes place through a Facebook group. Ironically, the unit is on ‘power and politics’ and the Internet.
While there are compelling reasons to experiment with Facebook in teaching (including students’ preference for the site over universities’ official learning management systems), doing so will inevitably raise issues like this. Should I leave the group? Should I, and other educators, avoid posting to Facebook about issues that may lead to bans? Should I try to create a teaching profile and a personal profile (which is against Facebook policy)? I and other contributors have touched on some of these issues in the chapter I contributed to An Education in Facebook?, but we need to be thinking more about ownership and control as we explore new teaching tools.
May 3, 2013 § Leave a comment
Recently, professors at San Jose State refused to use a lecture series by Michael Sandel at their university: it’s well worth reading their explanation of this decision. After a long and somewhat frustrating discussion about this, I think it’s worth teasing out some of the issues surrounding MOOCs. Much of this draws on that conversation, mostly because the views expressed there are representative of much more widely-held opinions.
There’s the assumption that just because something is ‘open source’, it must be good. This is tied to other assumptions about what openness means, such as the assumption that ‘open source’ necessarily means more participatory and more accessible. While MOOCs certainly have the potential to make interesting, useful, learning material widely available so that students (and others) can enrich their learning, we do need to bear in mind the context in which they’re being developed. Context matters. ‘Open source’ doesn’t necessarily mean ‘good’ in all contexts, because other considerations must be taken into account.
In this case, we need to remember that MOOCs are being developed in the context of cuts to university funding around the world, and in the context of university systems which tend to privilege publishing over teaching, with ever-increasing class sizes and workloads for lecturers. We’re seeing a massive casualisation of the workforce as we shift from full-time lecturers doing most of the teaching to the use of underpaid teaching assistants who are usually on short-term, precarious contracts. Funding for students is also limited, making it harder for students from disadvantaged backgrounds to get a university education (in the US far more than in Australia).
What does this mean for our evaluation of MOOCs? Firstly, we need to be aware of the probability that requests (or demands) that lecturers use content from MOOCs hosted at other universities are motivated more by a desire on the part of university management to cut costs than by a concern for quality teaching. Secondly, there is a strong chance that the use of lecture content from MOOCs will be used to justify further casualisation of the academic workforce on the basis that as the backbone of the unit is there, all that’s needed will be teaching assistants/tutors rather than full-time lecturers. Thirdly, this is likely to contribute to and reinforce the existing two-tier system (more so in the US than Australia): some students will have access to lecturers who develop units, have funding for research, and engage in hands-on teaching, while poorer students at under-resourced universities will get content developed elsewhere, taught by tutors who are unlikely to have the resources and support necessary to develop themselves as teachers and as researchers.
There’s also the issue of what we use as the standard. While I’m sure Sandel is an engaging lecturer with many valuable points to make, the outline for the ‘Justice’ unit which the San Jose professors declined to use states that, “principal readings for the course are texts by Aristotle, John Locke, Immanuel Kant, John Stuart Mill, and John Rawls.” In a unit covering “affirmative action, income distribution, same-sex marriage, the role of markets, debates about rights (human rights and property rights), arguments for and against equality, dilemmas of loyalty in public and private life”, it’s worth questioning whether a backbone consisting purely of dead white men is most appropriate.
Universities, and particularly the most prestigious and well-funded US universities, are still disproportionately accessible to privileged groups within society. If unit content is increasingly produced primarily by these universities, and then farmed out to other places, we are likely to hear a more and more narrow range of perspectives. The existing constraints on marginalised voices within academia will be reinforced: women and minority groups will, in all likelihood, be those who are pushed (further) into precarious employment as short-term teaching staff unable to create their own units.
I’m not against the idea of MOOCs. But we need to think about the broader context in which they’re developed, and take active steps to shape them in positive directions. We need to hold open spaces for participatory, accessible learning that values a diversity of voices – including those of both students and teachers. In order to do this, we can’t take the discourse of ‘openness’ associated with MOOCs at face value.
March 12, 2011 § 6 Comments
There was an interesting debate about referencing over on the OUA Coffee Shop page on Facebook recently. I didn’t have time to participate, since my recent datapocalypse* meant I had to remark a heap of papers. I also feel a little uncomfortable participating in, or even reading, debates in what I think of as “student spaces”, which I’ll have to write/think more about later.
But back to my point, which is referencing! Many students are uncomfortable with the referencing requirements at university, for a range of reasons. For some, it’s difficult to work out how to use the referencing system correctly, or to work out what needs referencing and what doesn’t. For others, it’s the concept itself they’re uncomfortable with, often because they think it means we don’t value their own ideas and experience. This is understandable – I remember having similar complaints when I first started university.
Reading Evgeny Morozov’s The Net Delusion has been a great reminder, for me, about why I care about referencing. It’s well-written, passionate, and obviously informed by significant research. (Hopefully I’ll write more about Morozov’s arguments later.) From an academic viewpoint, and even from an activist viewpoint, though, it’s also tremendously frustrating.
This is because The Net Delusion has, quite sensibly, been written for a popular audience. Rather than including clear in-text references, Morozov’s included a bibliography at the end and indicated many of his sources within the flow of the text (for example, “In 1914 Popular Mechanics thought that…” (p. 286)), but some sources aren’t clearly indicated. This means that when I’m reading it, and come to an argument that I find unlikely or an idea I’d like to explore further, it’s occasionally quite difficult to find more information.
What methodology was used in that study? Which organisation carried out that work? What were the details of that author’s argument? In this case, some careful scanning of the bibliography (and reading near an Internet-connected computer) would let me find the sources used and look into them more deeply, but it’s more difficult than I’m accustomed to. Without references, it wouldn’t be possible at all.
The main reason that referencing matters to me is that I don’t see the lives of texts as ending once they’re written. Even brilliant research needs to be tested, added to, updated. While this might not be true for many university essays, which are often written, read by tutors once, and then gather (metaphorical) dust, I want students to learn to reference so that they can contribute to ongoing debates in a way that other people can question and build on.
When Morozov writes, “Revolutions prize centralization and require fully committed leaders, strict discipline, absolute dedication, and strong relationships based on trust” (p. 196), for example, I want to see his sources! If this is just something he worked out through personal (second-hand) experience, well, then, I can say, “ah, but my own personal (second-hand) experience is quite different” – and then what’s left but to stare at each other awkwardly? But if he cites particular examples or research studies, I can provide counter-examples, cite contrasting research, question the methodology of the studies cited… and then we have at least the potential for a conversation, and for the work to grow into something new.
Although for those of you struggling to remember where the comma goes and which titles go in italics, this may not be much consolation!
* My external hard drive died, I put off getting a new one for backups, my internal hard drive died, and most of my data was saved by some friends – but sadly not all of the recent batches of marking I’d done. I’ll be returning to the paranoid back-everything-up-in-three regime of my PhD days.