I have worked in AI since 1999, specialising in multiagent systems, a research area concerned with designing the algorithms and architectures needed to coordinate the activities of multiple independent actors in environments where they are interdependent. These actors can be artificial software agents, for example the bots investment companies use in algorithmic trading, or the bots advertisers use to bid for space on the results pages users see when searching the web.
More often than not, however, these multiagent systems involve human actors – for example, drivers on ridesharing or food delivery platforms, buyers and sellers in online retail, or businesses advertising their products on recommendation systems that offer anything from hotel bookings to entertainment.
What is fascinating about this area is that we’re dealing with individual agents, each aiming to achieve their own independent objectives while interacting with each other. Just as in human society, these individual objectives may conflict with one another, so if we want to make sure the whole “society” functions as we would like it to, the question becomes one of designing the rules of engagement, incentives, and mechanisms that ensure smooth and productive collaboration between agents. In essence, the techniques we develop in this area aim to achieve “decentralised intelligence” in situations where it is not possible to control the whole system centrally.
Whether we’re looking at the “gig economy”, complex supply chains and transportation systems, or even crowdsourcing and social media platforms, in a more data-driven world these multiagent systems have now become an important part of our daily lives. In recent years, this has led me to focus my research on investigating the ethical challenges such systems create in the real world, and to designing algorithms and architectures that can underpin their ethical design.
This research involves both theoretical and very applied work. At the more theoretical end, you can model these systems as games, and ask questions such as “what would mechanisms for distributing benefit fairly look like, such that the participants in the game would voluntarily agree to them?”. This is a bit like asking taxpayers to design their own taxation system, but in global decentralised networks like the web or social media, the mechanisms we put in place need to give (possibly selfish, or even malicious) agents incentives to behave appropriately, rather than create situations in which they might be led to “game” the system.
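To give a flavour of what such a fairness mechanism can look like in practice, one classic answer from cooperative game theory is the Shapley value, which pays each participant their average marginal contribution over every order in which the group could have formed. The sketch below is purely illustrative – the toy game and player names are my own invented example, not code or results from my research:

```python
from itertools import permutations

def shapley_values(players, value):
    """Average each player's marginal contribution over all join orders."""
    totals = {p: 0.0 for p in players}
    orders = list(permutations(players))
    for order in orders:
        coalition = set()
        for p in order:
            before = value(frozenset(coalition))
            coalition.add(p)
            # Marginal contribution: how much p adds by joining now.
            totals[p] += value(frozenset(coalition)) - before
    return {p: totals[p] / len(orders) for p in players}

# A hypothetical three-player game: any coalition of two or more
# agents earns 100 between them; an agent alone earns nothing.
def v(coalition):
    return 100.0 if len(coalition) >= 2 else 0.0

print(shapley_values(["a", "b", "c"], v))  # symmetric players, so each gets 100/3
```

Because the three players are interchangeable in this toy game, the mechanism splits the benefit equally – and that symmetry is part of why participants might voluntarily accept it.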
At the more practical end, I have done a lot of work with users to understand what their actual perceptions and strategies are when they use these kinds of platforms, and to develop computational methods that support transparency, fairness, and diversity for them. This requires stepping away from simple mathematical models and diving into the human factors that determine perceptions and behaviours, and it also involves a heavy element of educating the public about how these systems work.
Apart from my role as Professor of AI in the School of Informatics, which is focused on my own research and teaching, I am also Director of the Bayes Centre, one of the five hubs established by the Data-Driven Innovation Programme. In this role I work closely with many internal and external partners to help turn their ideas into practice by facilitating new research and innovation collaborations, and the delivery of new training and entrepreneurship programmes in data science and AI.
I also act as a liaison between the University and the Alan Turing Institute, the UK’s national institute for data science and AI, and occupy a similar role within our recently established UNA Europa network of European universities. Finally, as Deputy Vice Principal Research, I oversee the University’s wider AI strategy, helping make sure the University continues and expands its leadership in this area.