Hardwiring ethics into a future worth wanting

How can we fuse technology and ethics to create the kind of future we want? Professor Shannon Vallor, Director of the Centre for Technomoral Futures in Edinburgh, talks to us about why we must act urgently.

Listen to our podcast with Shannon here.

Every day, we all make a multitude of instant and often emotional online decisions, which leave behind a digital footprint.

We might have an uneasy sense about that digital footprint, a feeling that the data we generate by these online decisions might be used in ways we don’t like, or don’t fully understand.

Yet how often do we stop to think how we might shape technology in an image we want, and not just allow it to shape us?

Professor Shannon Vallor arrived at the University of Edinburgh as the first Baillie Gifford Chair in the Ethics of Data and AI, and Director of the new Centre for Technomoral Futures at the Edinburgh Futures Institute – determined to accelerate the debate about creating what she calls “a future worth wanting”.

“The digital environments we have built are not conducive to the kind of community, democratic structures and types of leadership that we want for our futures,” says Professor Vallor, who has worked at the intersection of ethics and emerging technologies for 15 years.

“So what kind of digital environments, platforms, processes and systems do we need to enable a future worth wanting, where we can flourish together?”

Professor Vallor, who came to Edinburgh from Santa Clara University in the heart of Silicon Valley, says we shouldn’t accept that powerful technology will inevitably mould our fate.

“We must avoid technological determinism that says technology leads and society follows,” she says. “That’s a lie, a convenient lie for those building their values into these technologies.

“Technology and Artificial Intelligence are human all the way – built to promote, optimise or systematise; built to create power and realise specific values in the world.

“Humans are the creators of technology and technologies are therefore reflections of human power, will and values. We need to ensure human accountability isn’t lost.”

This is at the heart of the mission of the Centre for Technomoral Futures.

It starts from the premise that technology and morality, technology and ethics, are inseparable.

“The aim is to move away from that artificial, damaging split between technology and society,” says Professor Vallor.

“Doing technology right is no different to doing society right,” she says. “Technology does not live outside our social world; it’s interwoven.

“I want to figure out, using a blend of data-driven and humanistic tools, what are the forms of expertise, technological and moral, that can design and manage systems that work better for people, to build better worlds.

“It’s not the tools themselves that can build those better worlds, it’s people and the moral and social intelligence they use.”

How might this work in practice?

“The Centre is about reuniting forms of expertise cleaved off from each other in universities, and encouraged to develop in a relationship of antagonism,” says Professor Vallor. “That needs to end.”

“Technomoral virtues are skills that allow us to guide our technologies wisely, and use them for purposes that are good. This conversation is already under way and I want the Centre to take it in a constructive direction.

“In the past, we’ve seen ethicists telling technologists where they are wrong, or technologists telling ethicists they are deluded. Then we need regulators to break the impasse.

“How can we intervene earlier? How can we create something which isn’t just a battle between technology and ethics, a clash of incompatible approaches and methods, but something truly collaborative?”

The Edinburgh Futures Institute was designed to be precisely the kind of environment where this approach could flourish.

And Professor Vallor believes the opportunity is there to fuse technology and moral intelligence from the ground up at the Centre for Technomoral Futures.

“Everything from the teaching spaces being built in EFI’s new home in the old Royal Infirmary, to the content and delivery of our courses, to research, is designed with that in mind,” she says.

“I want to take the energy and desire out there and give it a path to action – to provide resources to those people with a desire to use their technical and moral intelligence, or to bring one or the other into their work more.

“How can we use data and AI in socially and politically constructive ways to build systems and institutions that actually support people? We can see technological progress, but humanity needs to make progress in step with it.”

Before coming to Edinburgh, Professor Vallor observed a significant change in Silicon Valley, one that could help bridge the technomoral divide.

She explains: “You started to see graduates at elite, tech-focused universities like Stanford who would only interview with tech companies they felt took ethics seriously. They started asking ‘What are your values? Can I build something to make the world safer and more just? I want to do work valued by society.’

“Go back 10-15 years and a degree, then a job at a big tech company would have received nothing but admiration and enthusiasm from everyone you knew. Fast forward and the response is very different, especially if it’s a big tech company that has run into ethical challenges, like Facebook or Uber.

“When the tech-lash started, it was seen as media-driven. Actually, it was more about very real harms not being responsibly addressed. That failure of responsibility and accountability led to internal industry damage. I knew people at tech companies who refused to put where they worked on dating profiles!”

But today, she says, many tech companies recognise the importance of restoring public trust, and are starting to invest more in ethical and responsible design practices.

While she sees hopeful signs, Professor Vallor fears time is tight to create a future worth wanting.

“The clock is ticking ever faster to make those transformational changes to the way technology and society interact,” she says.

“We have to move quickly to use our technological and moral intelligence to remove obstacles to a sustainable and flourishing future. The window of opportunity is here, but will not be open indefinitely.”

Without action, Professor Vallor believes the damaging impacts of technology will include far greater global inequality: “For many people on this planet, their opportunities to create new and better ways of life are shrinking. Science and technology should be unleashing human opportunities at every turn to allow us to be sustainable and flourishing. In reality, those opportunities are declining for the majority of the world’s population. That’s a fundamental crisis.

“My goal for the Centre for Technomoral Futures is to effectively communicate the urgency of that project and give people the data, support and resources to help tackle it. And I hope that leads to a future worth wanting.

“Technology will not save us – because technology IS us. We have to save ourselves, using technological and moral intelligence.”
