JOHN TASIOULAS ON ETHICS IN AI

John Tasioulas. Credit: Rebecca Lowe
The Director of Oxford's new Institute for Ethics in AI outlines his strategy

Published: 2 November 2020

Author: Matt Pickles

Professor John Tasioulas has been appointed as the first Director of Oxford University’s Institute for Ethics in AI. Ahead of starting his new role in October, he sat down with Matt Pickles to explain why he is excited about the job and what he hopes the Institute will achieve.

Professor Tasioulas is currently the inaugural Chair of Politics, Philosophy and Law and Director of the Yeoh Tiong Lay Centre for Politics, Philosophy & Law at King’s College London. He has strong links to Oxford, having studied as a Rhodes Scholar, completed a doctorate in philosophy and taught philosophy from 1998-2010. He is also a Distinguished Research Fellow of the Oxford Uehiro Centre and Emeritus Fellow of Corpus Christi College, Oxford. He has held visiting appointments at the Australian National University, Harvard University, the University of Chicago, the University of Notre Dame, and the University of Melbourne, and acted as a consultant on human rights to the World Bank.

What role do you envision for the Institute?

My aim is for the Institute to bring the highest standards of academic rigour to the discussion of AI ethics. The Institute is strongly embedded in philosophy and I do not know of any other centre along those lines. At Oxford, we have the largest Philosophy department in the English-speaking world and it has historically been a very powerful presence in the discipline. We will also draw on other disciplines like literature, medicine, history, music, law, and computer science. This is a radical attempt to bridge the divide between science and humanities in this area and Oxford is uniquely placed to pull it off.

Why is Oxford a good place for the Institute?

Oxford is an outstanding environment for the Institute not only because of its great academic strengths generally, and especially in philosophy, but also because in Oxford the study of philosophy at undergraduate level has always been pursued in tandem with other subjects, in joint degrees such as PPE, Physics and Philosophy, and Computer Science and Philosophy. The Institute can reap the benefits of this long historical commitment to the idea that the study of philosophy is enriched by other subjects, and vice versa. Add to this the interdisciplinary connections fostered by the collegiate system, and also the high regard in which Oxford is held throughout the world, and I think we have the ideal setting for an ambitious interdisciplinary project of this kind.

Why is AI ethics important?

AI has transformative potential for many parts of life, from medicine to law to democracy. It raises deep ethical questions – about matters such as privacy, discrimination, and the place of automated decision-making in a fulfilling human life – that we inevitably have to confront both as individuals and as societies. I do not want AI ethics to be seen as a narrow specialism, but to become something that anyone seriously concerned with the major challenges confronting humanity has to address. AI ethics is not an optional extra or a luxury; it is absolutely necessary if AI is to advance human flourishing and social justice.

Given that AI is here to stay, we must raise the level of debate around AI ethics and feed into the wider democratic process among citizens and legislators. AI regulation and policy are ultimately matters for democratic decision-making, but the quality of the deliberative process is enhanced by the arguments and insights of experts working on questions of AI ethics.

How does COVID-19 make you think about AI ethics and the Institute?

COVID-19 demonstrates that it is never going to be enough just to ‘follow the science’. There are always value judgements that have to be made, about things like the distribution of risk across society and trade-offs between prosperity and health. Science can tell us the consequences of our actions but it does not tell us which goals we should pursue or what sacrifices are justified to achieve them. In so far as we are going to have AI as part of the technological solution to societal challenges, we inevitably have to address the ethical questions too. AI ethics is a way to get clearer about the value judgements involved and to encourage a more rigorous and inclusive debate.

What are your priorities for the Institute?

There are many things I want to get done. I want to embed within Oxford the idea of AI ethics as an important, high-quality area of research and discussion that is open to all interested parties. Not everyone has it at the forefront of their minds, but I want people to become aware that there is a lively and rigorous discussion going on about the very pressing questions it raises, one which bears on the topics they are already interested in, such as health care, climate change, migration, and so on. If we can secure this high-quality culture of research and debate, it will be the platform on which we can achieve everything else. Vital to all this is getting serious intellectual buy-in from the broader Oxford community.

At King’s, you led and developed a centre that was also new when you became the Director. What lessons can you bring from that experience?

The first challenge is getting people from different disciplines to talk to each other in a productive way. This is not easy because the meanings of words, and the methods adopted, can differ significantly from one discipline to another, so people can talk past each other. And then there is just the inertia of staying in your intellectual comfort zone. We need to generate an environment of goodwill in which people feel comfortable talking about things with those from other disciplines and to learn from each other.

Another important challenge is that this discussion must not be confined to academics. It is important that whatever we do must also be presented in a way that is accessible to a broader community, whether that is legislators, scientists or ordinary citizens. However profound or sophisticated our research is, we must convey it in a way that can be engaged with by a non-specialist community. Otherwise we will not be fulfilling our task. I want us to hold events where the general public feels very free to come along, engage and make points in the discussions.

What aims do you have for teaching AI ethics in Oxford?

It looks like AI will become an inescapable feature of ordinary human life. In so far as an undergraduate degree equips students to cope with life in a critical and intelligent way, it would seem natural that the ethical dimension of AI is one of the aspects of life they should be able to engage with in the course of their degrees. AI ethics can be seen through the lens of any given discipline, whether it is classics or medicine or something else.

What is your aim for the field of AI ethics as a whole?

Bioethics is a good example of the role of ethics in tackling major issues facing society, but it is also a cautionary tale. Bioethics has truly outstanding figures with a strong philosophical background who drew on deeper expertise in moral and political philosophy in order to advance that discipline. But at the moment, a lot of what you hear about AI ethics lacks this kind of depth; too much is a rehash of the language of corporate governance, or even just soundbites and buzzwords. A sustainable AI ethics needs to be grounded in something deeper and broader, and that must include philosophy and the humanities more generally. The Institute can serve to channel this intellectual rigour and clarity into the sphere of public debate and decision-making.

In the past, philosophers have played an active role in government reports on matters such as censorship, IVF or gambling, but no philosopher was involved in the recent House of Lords report on AI, for example. This is unfortunate and can lead to an unnecessarily limited perspective. Often what happens is people are tempted to use the law as a framework for evaluating various options in AI. Law is, of course, extremely important as a tool of regulation. But ethics goes deeper than law, because we can always ask whether existing law is acceptable, and because we need more than legal rules in order to live good lives.

Finally, how do you feel about "returning" to Oxford?

Although this is a new and exciting challenge, it’s also a homecoming because I have always regarded Oxford as my intellectual home. I have such great admiration for Oxford because it manages to combine a commitment to the highest intellectual standards with a broadly democratic academic culture. In that sense, too, I think Oxford is unique in the world and this combination equips us well to pursue our aims for the Institute.

You can find more information on the Institute here.