My brief notes from today’s talks: for more details, check the program.
Ryan Calo: How we talk about AI (and why it matters)
Several studies demonstrate the ways in which language can shape approaches to policy. For example, one showed that people were more likely to recommend punitive measures when a threat was described as “a predator stalking the city” rather than “an illness plaguing the city”. There are also legal precedents in the US where language about “robots” is used to talk about people who have no choice (and therefore no liability).
Calo notes that there are some trends in AI that he’s “upset about but not going to discuss at length”, particularly the tendency for both supporters and critics of AI to talk about it as if it’s magic. For example, Calo mentioned a billboard showing a line of identical people with backpacks and claiming that “AI has already found the terrorist.” On the other hand, we should also treat language about “killer robots coming door to door to kill us” with caution.
Rhetorical choices about AI influence policy, often in very subtle ways. For example, do we talk about AI research as a “race”, or do we talk about it as a global collaborative effort that works towards human flourishing? And how do these different frames shape different concrete policies? Current US policy (including restrictions on sharing particular technologies) only makes sense if we understand AI research as a high-stakes competition.
Language around “ethics” and “governance” also plays a role here. This rhetoric is familiar, and therefore palatable. Efforts to bring in ethical governance of AI research are laudable; ethics has a critical role in shaping technology. However, we should also pay attention to the power of these words. Before we start imposing requirements and limits, we need to be sure that we actually understand the ethical frameworks we’re working with.
Both proponents and critics of AI think that it will change everything. We should be thinking about a hypothetical future existential threat posed by AI, but we should also be thinking about more immediate concerns (and possibilities?). If it’s true that AI is the next world-shaping technology, like the steam engine, then policy needs to shift radically to meet this. And we need to start changing the way we talk. That project begins with conferences like this one.
We should also be looking at specific measures, like impact assessments and advisory bodies, for implementing AI tools. Unfortunately, the US government will probably not refrain from the use of any AI weapons that are seen to be effective.
We absolutely should be talking about ethics, guided by the folks who are deeply trained in ethics. Lawyers are contractors building the policies, but ethicists should be the architects.
Note: One of the main questions that I have regarding Calo’s talk, and that Peter and I partially – albeit implicitly – address in our own talk, is how we decide who counts as ‘deeply trained in ethics’ and how the AI community should reach out to ethicists. There is an ongoing under-representation of women and minorities in most university philosophy departments. Mothers (and not fathers) are also less likely to be hired and less likely to progress within academia. This is partially shaped by, and shapes, dominant framings of what is valued and promoted as expertise in ethics. This is fairly obvious when we look at the ethical frameworks cited in AI research ethics: most philosophers cited are white, male, and Western.
The spotlight session, which gave brief overviews of some of the posters presented, included a few that particularly stood out to me (for various reasons):
- In ‘The Heart of the Matter: Patient Autonomy as a Model for the Wellbeing of Technology Users’, Emanuelle Burton, Kristel Clayville, Judy Goldsmith and Nicholas Mattei argue that medical ethics offers useful parallels for AI research. For example, when might inefficiency enable users to have an experience that better matches their goals and wishes?
- In ‘Toward the Engineering of Virtuous Machines’, Naveen Sundar Govindarajulu, Selmer Bringsjord and Rikhiya Ghosh (or maybe Hassan?) talk about ‘virtue ethics’: a focus on virtuous people, rather than on actions. E.g. Zagzebski’s exemplarist theory: we admire exemplary humans, study their traits, and attempt to emulate them. (I’m curious what it would look like to see a machine that we admire and hope to emulate.)
- Perhaps the most interesting and troubling paper was ‘Ethically Aligned Opportunistic Scheduling for Productive Laziness’, by Han Yu, Chunyan Miao, Yongqing Zheng, Lizhen Cui, Simon Fauvel and Cyril Leung. They discussed developing an ‘efficient ethically aligned personalized scheduler agent’ that has workers (including those in the ‘sharing’ economy) work when they are highly efficient and rest when they are not, for better overall efficiency. Neither workers nor the company testing the system were that keen on it: it was a lot of extra labour for workers, and company managers seemed horrified by the amount of ‘rest’ time that workers were taking. (A toy sketch of the basic idea follows this list.)
- In ‘Epistemic Therapy for Bias in Automated Decision-Making’, Thomas Gilbert and Yonatan Mintz draw on distinctions between ‘aliefs’ and ‘beliefs’ to suggest ways of identifying and exploring moments when these come into tension around AI.
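The “work when efficient, rest when not” idea from the productive-laziness paper can be caricatured in a few lines. This is purely my own toy sketch (the threshold, the per-hour efficiency predictions and the function names are all made up), not the authors’ ethically aligned scheduler:

```python
# Toy illustration of "work when efficient, rest when not" scheduling.
# This is my own simplification, not the authors' scheduler agent.

from typing import List, Tuple

def schedule_day(predicted_efficiency: List[float],
                 threshold: float = 0.6) -> List[Tuple[int, str]]:
    """Label each hour 'work' if predicted efficiency clears the threshold,
    otherwise 'rest'. predicted_efficiency is one value per hour in [0, 1]."""
    return [(hour, "work" if eff >= threshold else "rest")
            for hour, eff in enumerate(predicted_efficiency)]

if __name__ == "__main__":
    # Hypothetical per-hour efficiency predictions for an 8-hour shift.
    efficiency = [0.9, 0.8, 0.75, 0.5, 0.4, 0.7, 0.85, 0.3]
    for hour, action in schedule_day(efficiency):
        print(f"hour {hour}: {action}")
```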
Some notions of fairness that came up:
- meritocratic fairness,
- treat similar people similarly (a small sketch of this one is below),
- calibrated fairness.
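The ‘treat similar people similarly’ notion (individual fairness in the Lipschitz sense) can be written down very compactly. A minimal sketch, with a distance metric, scores and Lipschitz constant entirely of my own choosing:

```python
# Toy check of "treat similar people similarly": a decision function is
# individually fair (in the Lipschitz sense) if differences in outcomes
# are bounded by how different the two individuals actually are.
# The scores, distance and Lipschitz constant here are illustrative only.

def is_individually_fair(score_a: float, score_b: float,
                         distance_ab: float, lipschitz: float = 1.0) -> bool:
    """True if |score_a - score_b| <= lipschitz * distance_ab."""
    return abs(score_a - score_b) <= lipschitz * distance_ab

# Two applicants judged very similar (distance 0.1) should not get
# wildly different scores.
print(is_individually_fair(0.80, 0.78, distance_ab=0.1))  # True
print(is_individually_fair(0.80, 0.30, distance_ab=0.1))  # False
```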

Why interpretability matters:
- decision-makers readily trust models they can understand,
- it allows decision-makers to override the machine when it’s wrong,
- it makes it easier to debug and detect biases.
How do we facilitate interpretability? The answer is: quite technical!
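Just to give a flavour of what that technical work can look like (my own illustration, not something presented at the conference): one common approach is to fit an interpretable surrogate, such as a shallow decision tree, to mimic a black-box model, and then read off the surrogate’s rules. A rough sketch using scikit-learn on synthetic data:

```python
# Rough sketch of one interpretability technique: a global surrogate model.
# A shallow decision tree is trained to mimic a black-box classifier, and the
# tree's rules are then readable by a decision-maker. Illustrative only.

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic data standing in for a real decision-making problem.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)

# The "black box" we want to explain.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Surrogate: a shallow tree trained on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The surrogate's rules are short enough to read and debug.
print(export_text(surrogate, feature_names=[f"f{i}" for i in range(5)]))
```

The depth limit is the design choice doing the work here: a shallower tree is easier to read, but mimics the black box less faithfully.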