AIES: AI for social good, human machine interactions, and trustworthy AI

If you want to read more about any of these, accepted papers are here.

AI for Social Good

On Influencing Individual Behavior for Reducing Transportation Energy Expenditure in a Large Population, Shiwali Mohan, Frances Yan, Victoria Bellotti, Ahmed Elbery, Hesham Rakha and Matthew Klenk

Transportation is a huge drain on energy use: how can we develop multi-modal planning systems that reduce it? We need systems that people find useful and will actually adopt, which means finding timely, acceptable, and compelling ways to suggest transport options.
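
As a rough illustration only (the modes, numbers, and scoring rule below are my own hypothetical assumptions, not the authors' system), a planner might rank alternative trip suggestions by the energy they would save, weighted by how likely the traveller is to accept each mode:

```python
# A minimal, hypothetical sketch (not the authors' system): rank alternative
# trip suggestions by energy saved relative to driving alone, weighted by
# how likely the traveller is to accept each mode.
options = [
    # (mode, energy_kwh, predicted_acceptance)
    ("drive alone", 8.0, 0.95),
    ("carpool", 4.0, 0.60),
    ("bus + walk", 1.5, 0.40),
    ("bike", 0.1, 0.20),
]
BASELINE_KWH = 8.0  # energy of the default drive-alone trip

def expected_saving(option):
    _mode, energy, acceptance = option
    return (BASELINE_KWH - energy) * acceptance

# Suggest the options with the highest expected energy saving first.
for mode, energy, acceptance in sorted(options, key=expected_saving, reverse=True):
    print(f"{mode}: expected saving {(BASELINE_KWH - energy) * acceptance:.2f} kWh")
```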

Guiding Prosecutorial Decisions with an Interpretable Statistical Model, Zhiyuan Lin, Alex Chohlas-Wood and Sharad Goel

District attorneys will often hold arrestees in jail for several business days (which may mean many days if it’s over a weekend or a holiday) while they decide whether to press charges. Most reports on cases arrive shortly after booking, but they aren’t always processed in time. This research proposes a system that sorts cases from most likely to be dismissed to least likely, allowing faster processing (with the district attorney retaining final discretion). [Note: this seems to introduce some worrying possibilities for bias, including racial bias. When I asked about this, the presenters said that the model was trained on historical data, which was “fair across races”. This seems to require much more careful interrogation, given all the evidence on incarceration and racism in the US. In answer to another question, the presenters said that they didn’t expect the DA would be influenced by the system’s recommendations: the DA would still carefully evaluate each case. Again, this seems to require further interrogation, especially given the work (cited in a bunch of other talks here) on bias in machine learning models used for sentencing.]
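
To make the core idea concrete — an interpretable model ranking cases by predicted probability of dismissal — here's a minimal sketch. The features, data, and choice of logistic regression are hypothetical assumptions on my part, not the authors' actual model:

```python
# A minimal, hypothetical sketch (not the authors' model): train an
# interpretable classifier on historical cases, then sort new cases from
# most to least likely to be dismissed. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical cases: [prior_arrests, charge_severity, evidence_score]
X_train = np.array([[0, 1, 0.2], [5, 3, 0.9], [1, 2, 0.4], [3, 1, 0.3]])
y_train = np.array([1, 0, 1, 0])  # 1 = case was eventually dismissed

model = LogisticRegression().fit(X_train, y_train)

# New arrests awaiting review: score and sort, most-likely-dismissed first.
# The district attorney still reviews every case and retains final discretion.
X_new = np.array([[2, 1, 0.1], [4, 3, 0.8]])
dismissal_prob = model.predict_proba(X_new)[:, 1]
review_order = np.argsort(-dismissal_prob)
print(review_order, dismissal_prob[review_order])
```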

Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithm and experimentation, Cristina Cornelio, Lucrezia Furian, Antonio Nicolò and Francesca Rossi

This research looks at ways of initiating chains of transplants from a deceased-donor organ: the chain continues with consecutive donations among incompatible donor-recipient pairs, and ends with the final living donor giving to a patient on the waiting list who would otherwise be less likely to receive an organ. The research suggests that such chains of donation could be useful.
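
As a rough sketch of the chain-finding idea (this is my own illustrative toy version, not the authors' algorithm, which handles far more constraints), you can treat donors and recipients as a compatibility graph and search for chains starting from the deceased-donor kidney:

```python
# A minimal, hypothetical sketch (not the authors' algorithm): depth-first
# search for the longest donation chain in a compatibility graph that starts
# with a deceased-donor kidney. All compatibility data here is invented.

# 'START' is the deceased-donor kidney; 'p1'..'p3' are incompatible
# donor-recipient pairs. An edge a -> b means a's kidney (or living donor)
# is compatible with b's recipient.
compat = {
    "START": ["p1", "p3"],  # deceased-donor kidney -> these pairs' recipients
    "p1": ["p2"],           # p1's living donor -> p2's recipient
    "p2": [],               # p2's donor gives to the waiting list, closing the chain
    "p3": [],
}

def longest_chain(node, visited):
    """Return the longest feasible chain of pairs reachable from `node`."""
    best = []
    for nxt in compat.get(node, []):
        if nxt not in visited:
            chain = [nxt] + longest_chain(nxt, visited | {nxt})
            if len(chain) > len(best):
                best = chain
    return best

print("Chain of pairs:", longest_chain("START", set()))  # -> ['p1', 'p2']
```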

Inferring Work Task Automatability from AI Expert Evidence, Paul Duckworth, Logan Graham and Michael Osborne

We’re currently unsure about what is automatable, and why some tasks are more automatable than others. Looking at tasks (rather than whole jobs) is one way to evaluate this. The research drew on 150+ AI experts’ evaluations of different tasks, and found that work automatability is unevenly distributed across jobs, disproportionately affecting those least able to adapt (workers with less education and lower-paid jobs). This is exploratory research! Please write papers that explore real-world validation of this work, the difference between whether work can be automated and whether it should be, and other related issues. [Note: like maybe how to use this as a basis for decreasing standard working hours?]
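
For a flavour of the task-level approach, here's a minimal sketch of aggregating expert ratings per task and rolling them up to jobs. The tasks, jobs, and ratings are all invented, and the paper's actual probabilistic model is considerably more sophisticated:

```python
# A minimal, hypothetical sketch (much simpler than the paper's model):
# average expert ratings per task, then roll task scores up to jobs.
from statistics import mean

# Invented expert ratings of task automatability on a 0-1 scale.
expert_ratings = {
    "data entry": [0.9, 0.8, 0.95],
    "patient counselling": [0.1, 0.2, 0.15],
    "driving": [0.7, 0.6, 0.8],
}
task_score = {task: mean(ratings) for task, ratings in expert_ratings.items()}

# Jobs as bundles of tasks: a job's automatability is the mean of its tasks'.
jobs = {
    "clerk": ["data entry", "driving"],
    "nurse": ["patient counselling", "data entry"],
}
job_automatability = {job: mean(task_score[t] for t in tasks)
                      for job, tasks in jobs.items()}
print(job_automatability)  # clerks score much higher than nurses here
```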

Human and Machine Interaction

Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots, Arifah Addison, Kumar Yogeeswaran and Christoph Bartneck

This research examines whether racial bias demonstrated towards humans transfers to robots, using a modified version of the ‘police officer’s dilemma’ study. The previously-demonstrated shooter bias (an increased likelihood of shooting Black people, found among US participants across all groups studied) did transfer to robots. In follow-up studies, the researchers asked whether anthropomorphism and racial diversity would modify this effect. It would be useful to expand this research, including to consider whether bias can transfer from robots to humans (as well as from humans to robots), and whether there are human-robot interaction strategies that can decrease bias. It also seems that as robots become more human-like, they’re increasingly designed to reflect their creators’ racial identification.

Tact in Noncompliance: The Need for Pragmatically Apt Responses to Unethical Commands, Ryan Blake Jackson, Ruchen Wen and Tom Williams

This research looks at moral competence in social robots (drawing on Malle and Scheutz, 2014). Natural language capability seems very useful for robots, especially when we think about robots in caring roles. However, robots shouldn’t follow every command. There are a range of reasons for rejecting commands, but how should a robot phrase its refusal? If the rejection is too impolite it may have social consequences; if it’s too polite it may imply tacit approval of norm violations. Robots’ responses influence humans’ perceptions of the robots’ likeability, and future research may show other ways that responses feed back into human behaviour. [Note: I wonder how this would be affected by humans’ perceptions of robots as gendered?]

[Image: still from the film Robot & Frank (2012)]

AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI, Karina Vold and Jose Hernandez-Orallo

How would our approach to AI change if we saw it as part of us? And how would that change our potential for impact on society? This isn’t merely abstract: AI systems can be thought of as ‘cognitive extenders’, which sit outside our skulls but are still part of how we think. We can see AI as existing on a continuum between autonomous and internalised. This work draws on Hutchins (1999) in defining cognitive extenders. This opens up a range of issues about dependency, interference, and control.

Human Trust Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simulator, Shervin Shahrdar, Corey Park and Mehrdad Nojoumian

This study considered two groups of trust-damaging incidents, drawing on substantial data gathered in compliance with IRB guidelines and laws. But also, my gosh, I am tired by now, sorry.


The Value of Trustworthy AI, David Danks

We’re using the word ‘trust’ to mean radically different things, and this has important consequences. Trust is the thing we should seek in our AI. We can understand ‘trust’ as the trustor making themselves vulnerable because of positive expectations about the behavior or intentions of the trustee. For example, we might trust that the car will start in the morning, allowing us to get to work on time.

Psychological literature offers several different understandings of trust, including behavioural reliability and understanding of the trustee. There are a couple of themes in this literature on trust. The first is a focus on ‘what is entrusted’ (the trustee should have, or act as if she has, the same values as the trustor). The second is a predictive gap (trust requires that expectations or hopes are not certainties). If you’re going to ethically use a system, you need to have a reasonable expectation that it will behave (at least approximately) as intended.

This has a variety of implications. For example, explainability is important for trust because it provides relevant information about dispositions. Simple measures of trust are insufficient – we need to understand trust in deeper and more nuanced ways.
