AIES: AI for social good, human-machine interactions, and trustworthy AI

If you want to read more about any of these, accepted papers are here.

AI for Social Good

On Influencing Individual Behavior for Reducing Transportation Energy Expenditure in a Large Population, Shiwali Mohan, Frances Yan, Victoria Bellotti, Ahmed Elbery, Hesham Rakha and Matthew Klenk

Transportation is a huge drain on energy use: how can we develop multi-modal planning systems that can improve this? We need to find systems that humans find useful and actually implement, which means finding timely, acceptable, and compelling ways to suggest transport options.

Guiding Prosecutorial Decisions with an Interpretable Statistical Model, Zhiyuan Lin, Alex Chohlas-Wood and Sharad Goel

District attorneys will often hold arrestees in jail for several business days (which may mean many days if it’s over a weekend or a holiday) while they decide whether to press charges. Most reports on cases arrive shortly after booking, but they aren’t always processed in time. This research proposes a system that sorts cases from most likely to be dismissed to least likely, allowing faster processing (with the district attorney retaining final discretion). [Note: this seems to introduce some worrying possibilities for bias, including racial bias. When I asked about this, the presenters said that the model was trained on historical data, which was “fair across races”. This seems to require much more careful interrogation, given all the evidence on incarceration and racism in the US. In answer to another question, the presenters said that they didn’t expect the DA would be influenced by the system’s recommendations: the DA would still carefully evaluate each case. Again, this seems to require further interrogation, especially given the work (cited in a bunch of other talks here) on bias in machine learning models used for sentencing.]
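To make the ranking idea concrete, here is a toy sketch of that kind of pipeline: an interpretable model scores each case’s probability of dismissal, and cases are reviewed in that order. Every feature, number, and model choice below is my own invention for illustration, not the authors’ actual system or data.

```python
# Illustrative only: rank incoming cases by predicted probability of dismissal
# so that likely-dismissals are reviewed first. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical per-case features, e.g. charge category, prior record, number of reports filed.
X_train = np.array([[1, 0, 2], [0, 1, 0], [1, 1, 1], [0, 0, 3]])
y_train = np.array([1, 0, 1, 0])  # 1 = the case was eventually dismissed

model = LogisticRegression().fit(X_train, y_train)

new_cases = np.array([[1, 0, 1], [0, 1, 2]])
p_dismiss = model.predict_proba(new_cases)[:, 1]
review_order = np.argsort(-p_dismiss)  # most-likely-dismissed first; the DA keeps final discretion
print(review_order, p_dismiss)
```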

Using deceased-donor kidneys to initiate chains of living donor kidney paired donations: algorithm and experimentation, Cristina Cornelio, Lucrezia Furian, Antonio Nicolò and Francesca Rossi

This research looks at ways of introducing chains of transplants: starting from a deceased-donor organ, continuing with consecutive donations among pairs of incompatible donor-recipients, and ending with donations to recipients who would otherwise be less likely to receive an organ. The research suggests that such chains of donation could be useful.

Inferring Work Task Automatability from AI Expert Evidence, Paul Duckworth, Logan Graham and Michael Osborne

We’re currently unsure about what is automatable, and why some tasks are more automatable than others. Looking at tasks (rather than jobs) is one way to evaluate this. The research drew on 150+ experts’ evaluations of different tasks. Work automatability was unevenly distributed across jobs, and disproportionately affects those least able to adjust (people with less education and in lower-paid jobs). This is exploratory research! Please write papers that explore real-world validation of this work, the differences between the potential for work to be automatable and whether that work should be automated, and other related issues. [Note: like maybe how to use this as a basis for decreasing standard working hours?]

Human and Machine Interaction

Robots Can Be More Than Black And White: Examining Racial Bias Towards Robots, Arifah Addison, Kumar Yogeeswaran and Christoph Bartneck

This research examines whether racial bias demonstrated towards humans transfers to robots, using a modified version of the police officer’s dilemma study. The previously-demonstrated shooter bias (an increased likelihood, among US participants of all groups, of shooting Black targets) did transfer to robots. In follow-up studies, the researchers asked whether anthropomorphism and racial diversity would modify this. It would be useful to expand this research, including to consider whether bias can be transferred from robots to humans (as well as from humans to robots), and whether there are human-robot interaction strategies that can decrease bias. It also seems that as robots become more human-like, they’re increasingly designed to reflect their creators’ racial identification.

Tact in Noncompliance: The Need for Pragmatically Apt Responses to Unethical Commands, Ryan Blake Jackson, Ruchen Wen and Tom Williams

This research looks at moral competence in social robots (drawing on Malle and Scheutz, 2014). Natural language capability seems very useful for robots, especially when we think about robots in caring roles. However, robots shouldn’t follow every command: there are a range of reasons for rejecting commands, but how should a robot phrase a rejection? If the rejection is too impolite it might have social consequences, and if it’s too polite it may imply tacit approval of norm violations. Robots’ responses influence humans’ perceptions of the robots’ likeability, and future research may show other ways that responses can feed back into human behaviour. [Note: I wonder how this would be affected by humans’ perceptions of robots as gendered?]

[Image: still from Robot & Frank (2012)]

AI Extenders: The Ethical and Societal Implications of Humans Cognitively Extended by AI, Karina Vold and Jose Hernandez-Orallo

How would our approach to AI change if we saw it as part of us? And how would it change our potential for impact on society? This isn’t merely abstract: AI systems can be thought of as ‘cognitive extenders’ which sit outside our skulls but are still part of how we think. We can see AI as existing on a continuum between autonomous and internalised. This work draws on Hutchins’ (1999) definition of cognitive extenders. This opens up a range of issues about dependency, interference, and control.

Human Trust Measurement Using an Immersive Virtual Reality Autonomous Vehicle Simulator, Shervin Shahrdar, Corey Park and Mehrdad Nojoumian

This study considered two groups of trust-damaging incidents, drawing on substantial data that was carefully gathered with regard to IRB guidelines and laws. But also my gosh I am tired by now, sorry.

 

The Value of Trustworthy AI, David Danks

We’re using the word ‘trust’ to mean radically different things, and this has important consequences. Trust is the thing we should seek in our AI. We can understand ‘trust’ as a function of the trustor making themself vulnerable because of positive expectations about the behavior or intentions of the trustee. For example, we might trust that the car will start in the morning, allowing us to get to work on time.

The psychological literature gives several different understandings of trust, including behavioural reliability and understanding of the trustee. There are a couple of themes in this literature on trust. The first is a focus on ‘what is entrusted’ (the trustee should have, or act as if she has, the same values as the trustor). The second is a predictive gap (trust requires that expectations or hopes are not certainties). If you’re going to ethically use a system, you need to have a reasonable expectation that it will behave (at least approximately) as intended.

This has a variety of implications. For example, explainability is important for trust because it provides relevant information about dispositions. Simple measures of trust are insufficient – we need to understand trust in deeper and more nuanced ways.

AIES: Human-AI collaboration, social science approaches to AI, measurement and justice

Specifying AI Objectives as a Human-AI Collaboration Problem, Anca Dragan

Dragan described some problems with self-driving cars, like the example of a car giving up on merging when there was no gap. After adding some more aggressive driving tactics, researchers then also had to add some courtesy to moderate those. One odd outcome of this was that when the car got to an uncontrolled intersection with another car, it would back up slightly to signal to the other driver that they could go first. Which actually worked fine! It mostly led to the other driver crossing the intersection more quickly (probably because they felt confident that the self-driving car wasn’t going to go)… except if there was another car waiting behind the self-driving car, or a very unnerved passenger inside it. It’s a challenge to work out what robots should be optimising for when it comes to human-robot interactions. Generating good behaviour requires specifying a good cost function, which is remarkably difficult for most agents.
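To make the cost-function point concrete, here is a minimal invented sketch of the kind of trade-off a planner might score when choosing a trajectory. Every term and weight is an assumption of mine, not the system described in the talk.

```python
# A toy trajectory cost for an autonomous car, illustrating why "what to
# optimise" is hard to pin down: every term and weight is a design choice.
def trajectory_cost(progress, min_gap_m, hesitation_s,
                    w_progress=1.0, w_safety=5.0, w_courtesy=0.5):
    """Lower is better. `progress` is metres gained towards the goal,
    `min_gap_m` the smallest gap kept to other cars, `hesitation_s`
    time spent blocking others (a crude 'courtesy' proxy)."""
    safety_penalty = max(0.0, 2.0 - min_gap_m) ** 2   # penalise gaps under 2 m
    courtesy_penalty = hesitation_s                    # penalise making others wait
    return (-w_progress * progress
            + w_safety * safety_penalty
            + w_courtesy * courtesy_penalty)

# Tuning w_courtesy too high reproduces the too-timid-to-merge behaviour;
# tuning it too low gives the over-aggressive behaviour that had to be moderated.
print(trajectory_cost(progress=30.0, min_gap_m=1.5, hesitation_s=4.0))
```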

Designers need to think about how robots can work in partnership with humans to work out what their goals actually are (because humans are often bad at this). Robots that can go back to humans and actively query whether they’re making the right choices will be more effective. This framework also lets us think about humans as wanting the robots to do well.

Social Science Models for AI
Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures, Daniel Susser

This talk focused specifically on individual (rather than structural) issues in AI ethics. It drew on behavioural economics, philosophy of technology, and normative ethics to connect a set of abstract ethical principles to a (somewhat) concrete set of design choices.

Susser draws on an understanding of online manipulation as the use of information technology to impose hidden influences on another person’s decision-making: this undermines their autonomy, which can produce the further harm of diminishing their welfare. Thaler and Sunstein’s Nudge discusses choice architecture: the framing of our decision-making. We act reflexively and habitually on the basis of subtle cues, so choice architecture can have an enormous impact on our decisions. Adaptive choice environments are highly-personalised choice environments that draw on user data.

What kind of world are we building with these tools? Technological transparency: once we become adept at using technologies, they recede from conscious awareness (this is roughly the opposite of how we talk about transparency in a governance context). Our environment is full of tools that are functionally invisible to us but shape our choices in significant ways. Adaptive choice architectures create vulnerabilities in our decision-making, and there are few reasons to assume that the technology industry shaping those architectures is trustworthy. Moreover, manipulation is harmful even when it doesn’t change people’s behaviour, because of the threat to our autonomy.

Reinforcement learning and inverse reinforcement learning with system 1 and system 2, Alexander Peysakhovich
We might think of ourselves as a dual-system model: system one is fast, effortless, emotional and heuristic; system two is slower and more laborious. We often need to balance short-term desires (EAT THE DONUT) against longer-term goals (HOARD DONUTS INTO A GIANT PILE TO ATTRACT A DONUT-LOVING DRAGON). [Note: these are my own examples.]

How do we deal with this? We need to have good models for understanding how irrational we are. We also need to balance these two systems against each other.
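As a toy illustration of “balancing the two systems” (my own sketch, not the speaker’s model), an agent might score candidate actions by blending an impulsive, immediate-reward estimate with a slower, long-horizon one:

```python
# Illustrative only: blend a fast, reward-hungry "system 1" estimate with a
# slower, farsighted "system 2" estimate; the weight is the knob an agent
# (or a model of a human) would need to get right.
def dual_system_score(immediate_reward, long_term_value, weight_system1=0.4):
    return weight_system1 * immediate_reward + (1 - weight_system1) * long_term_value

eat_the_donut = dual_system_score(immediate_reward=10.0, long_term_value=-2.0)
hoard_the_donut = dual_system_score(immediate_reward=0.0, long_term_value=8.0)
best = max([("eat", eat_the_donut), ("hoard", hoard_the_donut)], key=lambda kv: kv[1])
print(best)
```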

Incomplete Contracting and AI Alignment, Dylan Hadfield-Menell and Gillian Hadfield

Problem: there’s a misalignment between individual and social welfare in many cases. AI research can draw on economic research on the contract design problem. Economists have found that contracts are always incomplete, failing to account for important factors like the expenditure of effort. Misspecification in contract design is unavoidable and pervasive, and it’s useful for AI research to learn from this: it’s not just an engineering error or a mistake. Economic theory offers insights for weakly strategic AI. Human contracts are incomplete and relational – they’re always shaped by and interpreted within a wider context. Can we build AIs that can similarly draw on their broader context?

Then our talk!

AIES: how we talk about AI, algorithmic fairness, norms and explanations

[Image: a whole lot of drones in the sky above trees]

My brief notes from today’s talks: for more details, check the program.

Ryan Calo: How we talk about AI (and why it matters)

There are several studies which demonstrate the ways in which language might shape approaches to policy. For example, one showed that people were more likely to recommend punitive measures when a threat was described as “a predator stalking the city” rather than “an illness plaguing the city”. There are also legal precedents in the US of language about “robots” being used to describe people who have no choice (and therefore no liability).

Calo notes that there are some trends in AI that he’s “upset about but not going to discuss at length”, particularly the tendency for both supporters and critics of AI to talk about it as if it’s magic. For example, Calo mentioned a billboard displaying a line of identical people with backpacks and claiming that “AI has already found the terrorist.” On the other hand, we should also treat language about “killer robots coming door to door to kill us” with caution.

Rhetorical choices about AI influence policy, often in very subtle ways. For example, do we talk about AI research as a “race”, or do we talk about it as a global collaborative effort that works towards human flourishing? And how do these different frames shape different concrete policies? Current US policy (including restrictions on sharing particular technologies) only makes sense if we understand AI research as a high-stakes competition.

Language around “ethics” and “governance” also plays a role here. This rhetoric is familiar, and therefore palatable. Efforts to bring in ethical governance of AI research are laudable, and ethics has a critical role in shaping technology. However, we should also pay attention to the power of these words. Before we start imposing requirements and limits, we need to be sure that we actually understand the ethical frameworks we’re working with.

Both proponents and critics of AI think that it will change everything. We should be thinking about a hypothetical future existential threat posed by AI, but we should also be thinking about more immediate concerns (and possibilities?). If it’s true that AI is the next world-shaping technology, like the steam engine, then policy needs to shift radically to meet this. And we need to start changing the way we talk. That project begins with conferences like this one.

We should also be looking at specific measures, like impact assessments and advisory bodies, for implementing AI tools. Unfortunately, the US government will probably not refrain from the use of any AI weapons that are seen to be effective.

We absolutely should be talking about ethics, guided by the folks who are deeply trained in ethics. Lawyers are contractors building the policies, but ethicists should be the architects.

Note: One of the main questions that I have regarding Calo’s talk, and that Peter and I partially – albeit implicitly – address in our own talk, is how we decide who counts as ‘deeply trained in ethics’ and how the AI community should reach out to ethicists. There is an ongoing under-representation of women and minorities in most university philosophy departments. Mothers (and not fathers) are also less likely to be hired and less likely to progress within academia. This is partially shaped by, and shapes, dominant framings of what is valued and promoted as expertise in ethics. This is fairly obvious when we look at the ethical frameworks cited in AI research ethics: most philosophers cited are white, male, and Western.

The spotlight session giving brief overviews of some of the posters presented included a few that particularly stood out (for various reasons) to me:

  • In ‘The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users‘, Emanuelle Burton, Kristel Clayville, Judy Goldsmith and Nicholas Mattei argue that medical ethics have useful parallels with AI research. For example, when might inefficiency enable users to have an experience that better matches their goals and wishes?
  • In ‘Toward the Engineering of Virtuous Machines’, Naveen Sundar Govindarajulu, Selmer Bringsjord and Rikhiya Ghosh (or maybe Hassan?) discuss ‘virtue ethics’: a focus on virtuous people, rather than on actions. E.g. Zagzebski’s theory: we admire exemplar humans, study their traits, and attempt to emulate them. (I’m curious what it would look like to see a machine that we admire and hope to emulate.)
  • Perhaps the most interesting and troubling paper was ‘Ethically Aligned Opportunistic Scheduling for Productive Laziness’, by Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel and Cyril Leung. They discussed developing an ‘efficient ethically aligned personalized scheduler agent’ that has workers (including those in the ‘sharing’ economy) work when they are highly efficient and rest when they’re not, for better overall efficiency. Neither workers nor the company testing the system were that keen on it: it was a lot of extra labour for workers, and company managers seemed to have been horrified by the amount of ‘rest’ time that workers were taking.
  • In ‘Epistemic Therapy for Bias in Automated Decision-Making’, Thomas Gilbert and Yonatan Mintz draw on distinctions between ‘aliefs‘ and ‘beliefs’ to suggest ways of identifying and exploring moments when these come into tension around AI.
The second session, on Algorithmic Fairness, was largely too technical for me to follow easily (apart from the final paper, below), but there were some interesting references to algorithms currently in use which are demonstrably and unfairly biased (like COMPAS, which is meant to predict recidivism, and which recommends harsher sentences for minorities). Presenters in this panel are working on attempts to build fairer algorithms.
In ‘How Do Fairness Definitions Fare? Examining Public Attitudes Towards Algorithmic Definitions of Fairness’, Nripsuta Saxena, Karen Huang, Evan DeFilippis, Goran Radanovic, David Parkes and Yang Liu discuss different understandings of ‘fairness’. This research looks at loan scenarios, drawing on the Moral Machine research, and used crowdsourcing via Amazon Mechanical Turk. Participants were asked to choose whether to allocate the entire $50,000 amount to the candidate with the greater loan repayment rate; divide it equally between candidates; or divide the money between candidates in proportion to their loan repayment rates.
There are three different ways of understanding fairness examined in this paper:
  • meritocratic fairness,
  • treat similar people similarly,
  • calibrated fairness.
This research found that race affected participants’ perceptions of fair allocations of money, but that people broadly perceive allocations in proportion to repayment rates to be fairest, regardless of race.
The presenters hope that this research might spark a greater dialogue between computer scientists, ethicists, and the general public in designing algorithms that affect society.
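For concreteness, here is a toy sketch of the three allocation rules as I understood them from the loan scenario; mapping them onto the named fairness notions above is my own reading, not the paper’s formal definitions.

```python
# Illustrative only: the three allocation rules for the $50,000 loan scenario.
def allocate(total, rate_a, rate_b, rule):
    """rate_a, rate_b are the two candidates' loan repayment rates."""
    if rule == "all_to_better":   # give everything to the better payer (roughly 'meritocratic')
        return (total, 0) if rate_a >= rate_b else (0, total)
    if rule == "equal":           # treat the two candidates the same
        return (total / 2, total / 2)
    if rule == "ratio":           # split in proportion to repayment rates (roughly 'calibrated')
        share_a = rate_a / (rate_a + rate_b)
        return (total * share_a, total * (1 - share_a))
    raise ValueError(rule)

print(allocate(50_000, 0.9, 0.6, "ratio"))  # (30000.0, 20000.0)
```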
Session 2: Norms and Explanations
Learning Existing Social Conventions via Observationally Augmented Self-Play, Alexander Peysakhovich and Adam Lerer
This looks at social AI. At the moment, social AI is mainly trained through reinforcement learning, which is highly sample-inefficient. Instead, the authors suggest ‘self-play’: during training time, the AI can draw on a model of the world to learn before test time. If self-play converges, it converges to a Nash equilibrium. In two-player zero-sum games, every equilibrium strategy is a minimax strategy. However, many interesting situations are not two-player zero-sum games, for example traffic navigation. The solution to this is: quite technical!
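As an aside on the minimax point: for a two-player zero-sum matrix game, the row player’s minimax (maximin) strategy can be computed with a small linear program. The sketch below is my own illustration of that standard result, not code from the paper.

```python
import numpy as np
from scipy.optimize import linprog

def minimax_strategy(payoff):
    """Row player's maximin mixed strategy for a zero-sum matrix game.

    payoff[i, j] is the row player's payoff when row i meets column j.
    Solves: max v  s.t.  sum_i x_i * payoff[i, j] >= v for all j,
            sum_i x_i = 1, x_i >= 0.
    """
    m, n = payoff.shape
    c = np.zeros(m + 1)
    c[-1] = -1.0                                   # variables [x_1..x_m, v]; minimise -v
    A_ub = np.hstack([-payoff.T, np.ones((n, 1))]) # v - sum_i payoff[i, j] x_i <= 0
    b_ub = np.zeros(n)
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])  # probabilities sum to one
    b_eq = np.array([1.0])
    bounds = [(0, 1)] * m + [(None, None)]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[:m], res.x[-1]  # strategy, game value

# Matching pennies: the minimax strategy is 50/50 with game value 0.
strategy, value = minimax_strategy(np.array([[1.0, -1.0], [-1.0, 1.0]]))
print(strategy, value)
```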
Legible Normativity for AI Alignment: The Value of Silly Rules, Dylan Hadfield-Menell, Mckane Andrus and Gillian Hadfield
A lot of conversations right now focus on how we should regulate AI: but we should also ask how we can regulate AI. AIs can’t (just) be given the rules; they will need to learn to interpret them. For example, there’s often a gap between formal rules and the rules that are actually enforced. Silly rules are (sometimes) good for societies, and AIs might need to learn them. Hadfield discussed the Awa society in Brazil, and what it might look like to drop a robot into the society to make arrows (drawing on anthropological research). Rules include: use hard wood for the shaft, use a bamboo arrowhead, put feathers on the end, use only dark feathers, make and use only personalised arrows, and so on. Some of these rules seem ‘silly’, in that more arrows are produced than are needed and much of the hunting actually relies on shotguns. However, these rules are all important – there are significant social consequences to breaking them.
[Image: a 1960s advertisement for “the Scaredy Kit”, encouraging women to start shaving by buying a soothing shaving kit.]

This paper looked at the role of ‘silly rules’. To understand this, it’s useful to look at how such rules affect group success, the chance of enforcement, and the consequences for breaking rules. The paper measured the value of group membership, the size of the community over time, and the sensitivity to the cost and density of silly rules. As long as silly rules are cheap enough, the community can maintain its size. It’s useful to live in a society with a bunch of rules about stuff you don’t care about, because it allows a lot of observations of whether rule infractions are punished. AIs may need to read, follow, and help enforce silly as well as functional rules.
Note: Listening to this talk I was struck by two things. Firstly, how much easier it seems to be to identify ‘silly’ rules when we look at societies that seem very different from our own. (I think, for example, of wondering this morning whether I was wearing ‘suitable’ conference attire, whether I was showing an inappropriate amount of shoulder, and so on.) Secondly, I wondered what this research might mean for people trying to change the rules that define and constrain our society, possibly in collaboration with AI agents?
TED: Teaching AI to Explain its Decisions, Noel Codella, Michael Hind, Karthikeyan Natesan Ramamurthy, Murray Campbell, Amit Dhurandhar, Kush Varshney, Dennis Wei and Aleksandra Mojsilovic
Understanding the basis for AI decisions is likely to be important, both ethically and possibly legally (for example, under one interpretation of the GDPR’s requirements for providing meaningful information about data use). How can we get AI to meaningfully explain its decisions? One way is to get users (‘consumers’) to train the AI on what constitutes a meaningful explanation. The solution to this is: quite technical!
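One simple way to realise the “consumers teach the explanation” idea, sketched from my understanding rather than the paper’s exact method, is to attach a consumer-provided explanation code to each training example and have the model predict decision and explanation together as a composite label:

```python
from sklearn.ensemble import RandomForestClassifier

# Hypothetical training data: each example has features, a decision label,
# and a consumer-provided explanation code (all invented for illustration).
X = [[25_000, 0.9], [90_000, 0.2], [30_000, 0.8], [120_000, 0.1]]
decisions = ["deny", "approve", "deny", "approve"]
explanations = ["low_income", "good_ratio", "high_debt", "good_ratio"]

# Encode (decision, explanation) pairs as composite classes, so the model
# learns to predict an explanation alongside the decision.
composite = [f"{d}|{e}" for d, e in zip(decisions, explanations)]
model = RandomForestClassifier(random_state=0).fit(X, composite)

pred = model.predict([[28_000, 0.85]])[0]
decision, explanation = pred.split("|")
print(decision, explanation)
```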
Understanding Black Box Model Behavior through Subspace Explanations, Himabindu Lakkaraju, Ece Kamar, Rich Caruana and Jure Leskovec
This talk discussed a model for decisions on bail. There are important reasons to understand the model’s behaviour:
  • decision-makers readily trust models they can understand,
  • it will allow decision-makers to override the machine when it’s wrong,
  • it will be easier to debug and detect biases.

How to facilitate interpretability? The solution to this is: quite technical!
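The paper’s subspace-explanation method itself is beyond these notes, but a common related idea gives the flavour: restrict attention to a region (subspace) of the data and fit a small, readable surrogate model to the black box’s predictions there. The sketch below illustrates that generic idea, not the authors’ algorithm.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data and a stand-in "black box" model (all invented for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 4))
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0.3).astype(int)
black_box = GradientBoostingClassifier().fit(X, y)

# Restrict attention to a subspace of interest (here: examples with feature 2 > 0)
# and fit a shallow, readable tree to the black box's *predictions* there.
mask = X[:, 2] > 0
surrogate = DecisionTreeClassifier(max_depth=3).fit(X[mask], black_box.predict(X[mask]))
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```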