Sadly I missed the first few talks in the Artificial Agency session because we had to wander around a bunch to find lunch. Conference organisers: I cannot emphasise enough the value of easily-available and delicious snacks. Also, I tend to be pretty dazed during afternoon talks these days because of Jetlag + Nonsense Toddler. Luckily, accepted papers are available here!
Speaking on Behalf: Representation, Delegation, and Authority in Computational Text Analysis, Eric Baumer and Micki McGee [Note: Baumer referred to ASD; I’m aware that framing this as a ‘disorder’ is contested, including by people with autism who are part of the neurodiversity movement.]
Baumer discusses analysing Autism Spectrum Disorder (ASD) parenting blogs, and becoming unsure whether it was ethical to publish the results. The initial data gathering seemed innocuous. However, we should think about the ways in which objects can ‘speak for’ people (drawing on Latour and others). Computational text analysis has the potential to become the lens through which we see the bloggers, and the topic itself. Claims about what a group of people are ‘really’ saying can have important ramifications, particularly in the case of ASD. For example, research on these blogs might be convincing to policymakers, either for policy based on the assumption that vaccines cause ASD or, at the other extreme, for policy that removes financial and educational supports on the basis that autism is part of normal human neurodiversity.
In one of the more unsettling talks in Session 4: Autonomy and Lethality, Killer Robots and Human Dignity, Daniel Lim argued that the arguments underpinning claims that being killed by a robot offends human dignity are unconvincing. These arguments seem to rest on the idea that robots may not feel the appropriate emotions and cannot understand the value of human life (among other reasons). But humans might not feel the right emotions either. This doesn’t mean that we should build killer robots, just that there doesn’t seem to be an especially compelling reason why being killed by a robot is worse than being killed by a human.
In Compensation at the Crossroads: Autonomous Vehicles and Alternative Victim Compensation Schemes, Tracy Pearl argues that autonomous vehicles will be an enormous net gain for society. However, the failure of the US legal system (from judges to the law itself to juries) to provide a reasonable framework for dealing with injuries from autonomous vehicles threatens this, in part because US law is designed around the assumption that it will be applied to humans. The US Vaccine Injury Compensation Program provides one paradigm for law dealing with autonomous vehicles: it’s based on the idea that vaccines are beneficial overall, but that a small number of people will be harmed (fewer than would be harmed without vaccines), and they should be compensated. A similar fund for autonomous vehicles may be useful, although it would need to come with regulations and incentives to promote ongoing safety development. A victim compensation fund would offer much greater stability than relying on private insurance.
Session 5: Rights and Principles