AIES Day 1: Artificial Agency, Autonomy and Lethality, Rights and Principles.

Sadly I missed the first few talks of the Artificial Agency session because we had to wander around a bunch to find lunch. Conference organisers: I cannot emphasise enough the value of easily available and delicious snacks. Also, I tend to be pretty dazed during afternoon talks these days because of jetlag + Nonsense Toddler. Luckily, accepted papers are available here!

Speaking on Behalf: Representation, Delegation, and Authority in Computational Text Analysis, Eric Baumer and Micki McGee [Note: Baumer referred to ASD, I’m aware that framing this as a ‘disorder’ is contested, including by people with autism who are part of the neurodiversity movement.]
Baumer discusses analysing Autism Spectrum Disorder (ASD) parenting blogs, and becoming unsure whether it was ethical to publish the results. Initial data gathering seems innocent. However, we should think about the ways in which objects can ‘speak for’ people (drawing on Latour and others). Computational text analysis has the potential to become the lens through which we see the bloggers, and the topic itself. Claims about what a group of people are ‘really’ saying can have important ramifications, particularly when we look at ASD. For example, research on these blogs might be convincing to policymakers, either for policy based on the assumption that vaccines cause ASD, or at the other extreme, for policy that removes financial and educational supports on the basis that autism is part of normal human neurodiversity.

In one of the more unsettling talks in Session 4: Autonomy and Lethality, Killer Robots and Human Dignity, Daniel Lim argued that the arguments which seem to underpin claims that being killed by a robot offends human dignity are unconvincing. These arguments seem to rest on the idea that robots may not feel the appropriate emotions and cannot understand the value of human life (among other reasons). But humans might not feel the right emotions either. This doesn’t mean that we should make killer robots, just that there doesn’t seem to be an especially compelling reason why being killed by a robot is worse than being killed by a human.

In Compensation at the Crossroads: Autonomous Vehicles and Alternative Victim Compensation Schemes, Tracy Pearl argues that autonomous vehicles will be an incredible positive net gain for society. However, the failure of the US legal system (from judges through to law through to juries) to provide a reasonable framework for dealing with injuries from autonomous vehicles threatens this, in part because all of US law is designed with the idea that it will be applied to humans. The US Vaccine Injury Compensation Program provides one paradigm for law dealing with autonomous vehicles: it’s based on the idea that vaccines overall are beneficial, but there are a small number of people who will be harmed (fewer than would be harmed without vaccines), and they should be compensated. A similar fund for autonomous vehicles may be useful, although it would need to come with regulations and incentives to promote safety development. A victim compensation fund would offer much greater stability than relying on private insurance.

Session 5: Rights and Principles

The Role and Limits of Principles in AI Ethics: Towards a Focus on Tensions, Jess Whittlestone, Rune Nyrup, Anna Alexandrova and Stephen Cave
This discusses a forthcoming report from the Leverhulme Centre for the Future of Intelligence. Principles have limitations: they’re subject to different interpretations (for example, what does ‘fairness’ mean?), they’re highly general and hard to assess, and they frequently come into conflict with each other. Many of these tensions aren’t unique to AI: they also overlap with ethical principles at play in discussions of climate change, medicine, and other areas.
