AI-based decision-making systems offer a great opportunity to make the working world more efficient in the future. Which applicants are suitable for the job a company is currently advertising? What treatment do the patients in the overcrowded waiting room at the GP need? These are questions that AI could one day answer automatically on the basis of large amounts of known data. But what if human prejudices also come into play and lead to unchecked discriminatory decisions?
A dystopian scenario that Maximilian Kiener, Junior Professor of Ethics in Technology at Hamburg University of Technology, does not want to see become reality. His solution: apply not only technical but also ethical standards to the development of AI.
Incorporating ethical considerations into technological innovation from the outset
In his inaugural lecture, entitled "Ethics in Technology and the Future of Morality" and held this Wednesday in Audimax II as part of the open event series "Future Lectures", Kiener drew a comparison: just as every child learns the difference between right and wrong from the very beginning of life, this distinction must also be built into new technologies from the outset - for example through so-called "reinforcement learning", in which an AI learns from feedback and positively evaluated incentives.
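The "reinforcement learning" idea Kiener refers to - an AI adjusting its behavior based on feedback and positively rewarded incentives - can be sketched with a minimal tabular learning loop. Everything below is a hypothetical illustration, not part of the lecture: the toy "candidate" states and the reward table standing in for human feedback are invented for this example.

```python
import random

# Hypothetical toy setting: states 0..2 are candidate profiles, actions are
# 0 = reject, 1 = invite. The reward table simulates the "positively
# evaluated incentives": +1 when a human reviewer approves the decision,
# -1 otherwise. This table is invented purely for illustration.
REWARD = {
    (0, 0): 1, (0, 1): -1,
    (1, 0): -1, (1, 1): 1,
    (2, 0): -1, (2, 1): 1,
}

def train(episodes=2000, alpha=0.1, epsilon=0.1, seed=0):
    """Tabular learning: estimate which action earns positive feedback."""
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(3) for a in range(2)}
    for _ in range(episodes):
        s = rng.randrange(3)                      # a new candidate arrives
        if rng.random() < epsilon:                # explore occasionally
            a = rng.randrange(2)
        else:                                     # otherwise act greedily
            a = max((0, 1), key=lambda act: q[(s, act)])
        r = REWARD[(s, a)]                        # simulated human feedback
        q[(s, a)] += alpha * (r - q[(s, a)])      # move estimate toward feedback
    return q

q = train()
policy = {s: max((0, 1), key=lambda a: q[(s, a)]) for s in range(3)}
print(policy)  # learned decision per candidate profile
```

The point of the sketch is the feedback loop itself: whatever values the reward table encodes - including any human prejudice hidden in the "approval" signal - is exactly what the system learns to reproduce, which is why Kiener argues the ethical standards must be set before training begins.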
"Ethics is not just an optional component, but an unavoidable one," says Kiener. "However, this inevitability also presents an opportunity to interlink ethics and technology in such a way that synergies create fair and sustainable progress."
"ConsentGPT" instead of "ChatGPT"? Robots could take over patient conversations
To keep humans continuously involved and ensure human oversight, as called for by the EU AI Act, Kiener proposes a new model of responsibility: if everyone involved in developing and deploying powerful AI continually commits to being accountable for new developments, this should enable an open societal dialog about which technologies people actually want to create.
Kiener's remarks were complemented by contributions from guest speaker Prof. Dominic Wilkinson, who spoke about the possible use of AI in consultations between doctors and patients before operations - called "ConsentGPT" - and a moderated discussion with the audience led by Dr. Andrew Graham (both University of Oxford).
About the "Future Lectures" series
In the public "Future Lectures", TU Hamburg researchers present their forward-looking research topics and ideas. The aim is to explain the challenges facing society and research, as well as the positive changes that research at TU Hamburg could initiate in society.