New ethical standards for robots

From self-driving cars to carebots that can assist the elderly, advances in robotics are set to change our world. But as robots are designed to handle ever more complex situations, public debate is growing over the safe and ethical use of artificial intelligence.

The University of Hertfordshire’s internationally renowned researchers in this area are contributing to this debate. Artificial intelligence experts Professor Daniel Polani and Dr Christoph Salge are developing a new concept, Empowerment, which could lead to a set of generic, situation-aware guidelines to help robots work successfully alongside humans.

As a social concept, empowerment represents the ability to change one’s environment and to be aware of that possibility. Researchers in the University’s Centre for Computer Science and Informatics Research have been working for more than a decade to translate this social concept into a quantifiable and operational technical language.
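In the information-theoretic formulation used in this line of research, empowerment is the channel capacity between an agent's actions and its subsequent sensor states: how many bits of influence the agent has over what it will perceive next. The sketch below is illustrative only, not the researchers' implementation; it computes the capacity of a toy action-to-successor-state channel with the standard Blahut-Arimoto iteration, and the two example channels (open floor vs. against a wall) are invented for the demonstration.

```python
import numpy as np

def empowerment_bits(p_next_given_action, iters=100):
    """Channel capacity (in bits) of p(s'|a), via Blahut-Arimoto.
    Rows index actions, columns index successor states."""
    p = np.asarray(p_next_given_action, dtype=float)
    q = np.full(p.shape[0], 1.0 / p.shape[0])    # action distribution
    for _ in range(iters):
        marginal = q @ p                          # p(s') under current q
        # Per-action divergence D( p(s'|a) || p(s') ), in bits;
        # zero-probability entries contribute nothing.
        ratio = np.divide(p, marginal, out=np.ones_like(p), where=p > 0)
        d = np.sum(p * np.log2(ratio), axis=1)
        q *= np.exp2(d)                           # Blahut-Arimoto update
        q /= q.sum()
    marginal = q @ p
    ratio = np.divide(p, marginal, out=np.ones_like(p), where=p > 0)
    d = np.sum(p * np.log2(ratio), axis=1)
    return float(np.sum(q * d))

# Mid-corridor: left/stay/right each reach a distinct cell.
open_floor = np.eye(3)
# Against a wall: "left" and "stay" end in the same cell.
at_wall = np.array([[1.0, 0.0], [1.0, 0.0], [0.0, 1.0]])

print(empowerment_bits(open_floor))  # ≈ 1.585 bits (log2 of 3 options)
print(empowerment_bits(at_wall))     # ≈ 1.0 bit (only 2 distinct outcomes)
```

The numbers illustrate the intuition in the paragraph above: an agent with more distinguishable ways to affect its world has higher empowerment, and losing options (a wall, an obstacle) measurably lowers it.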

They recognise that robots increasingly do not work in defined scenarios, separated from humans, but may act as companions or co-workers. While robots such as those assembling cars in factories shut down when a human comes near, care robots might need to catch an elderly person about to fall.

Flexible decisions

Rather than seeking to restrict robot behaviour to ensure human safety, the Hertfordshire researchers aim to empower robots to maximise the possible ways they can act, so they can make flexible decisions, choosing the best solution for any given scenario.

The Empowerment concept has the potential to equip a robot with motivations that cause it to protect itself and keep itself functioning, to do the same for a human partner, and to follow a human's lead. This has tremendous potential for robots that interact closely with humans in challenging settings: care and hospital robots, driverless cars, or exploration robots.
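To make the idea of empowerment as a motivation concrete, here is a minimal, hypothetical sketch: a one-dimensional corridor world (the layout, cell indices, and blocked cell are invented for illustration) in which the agent greedily picks the action whose successor state keeps the most distinct options open. In a deterministic world, one-step empowerment reduces to the log of the number of distinct reachable states.

```python
import math

# Hypothetical 1-D corridor of 5 cells; cell 3 is blocked (a dead end).
CELLS = 5
BLOCKED = {3}
ACTIONS = (-1, 0, +1)  # left, stay, right

def step(cell, action):
    """Deterministic world model: move unless the target is a wall or blocked."""
    target = cell + action
    if 0 <= target < CELLS and target not in BLOCKED:
        return target
    return cell  # bump into wall: stay put

def empowerment(cell):
    """One-step empowerment in bits: log2 of the number of distinct
    states reachable from `cell` (deterministic special case)."""
    return math.log2(len({step(cell, a) for a in ACTIONS}))

def best_action(cell):
    """Greedy empowerment maximisation: act so that the *next*
    state leaves the most options open."""
    return max(ACTIONS, key=lambda a: empowerment(step(cell, a)))

print(best_action(2))  # -1: the agent moves away from the dead end
```

Note that no task-specific rule ("avoid dead ends") was coded; retreating from the blocked cell falls out of the generic drive to preserve options, which is the sense in which empowerment yields situation-aware rather than hand-coded behaviour.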

As science fiction becomes fact, Hertfordshire’s innovative research means it is well placed to work with tech firms, policymakers and industry to address the ethical and societal implications of our increasing use of autonomous systems and AI.

Professor Daniel Polani

Professor Daniel Polani's interest lies in understanding and imitating the processes that allow animals and humans to make flexible decisions in complex and difficult environments, and enable them to adapt gracefully to different conditions. Can we do so while incorporating the ability to learn, without compromising the ability to generalise, and without hand-coding every necessary rule into a system? Are there general principles underlying intelligent information processing in living beings that we can exploit without having to resort to specialised solutions that vary from task to task?

For this purpose, his work employs methods from Artificial Life and, especially, Information Theory, applying them to Sensor Evolution and to Collective and Complex Systems.

Meet the team