Today’s AI systems are limited to following explicit instructions. Users must often phrase their requests with great precision, which Murena explains can be challenging, time-consuming, or even impossible, especially when users are unsure of their desired outcomes or make errors in their initial instructions.
The goal: Enhanced understanding between AI and its human users
“The next generation of AI should be capable of adapting in such situations, interpreting the instructions they receive, and discerning what we want and how we think,” Murena explains. This is the foundation of a human-centric approach, which Murena believes can be achieved by equipping AI with more insights into human reasoning.
To further illustrate this approach, Prof. Murena invited Prof. Andrew Howes from the University of Exeter to join his lecture. A specialist in human cognition, Prof. Howes studies computer models of human thought and reasoning. In his presentation, “Towards Machines that Understand People,” he discussed how machines must predict how people adapt to their own internal processing limits and to the changing state of the world. He explained, in particular, how to equip machines with some understanding of human rationality, and how to make their predictions more personalized.
About Prof. Pierre-Alexandre Murena
Prof. Murena studied applied mathematics and computer science at École Polytechnique and École Normale Supérieure in Cachan (France), and earned his doctorate at Université Paris-Saclay. He then spent four years in Finland as a postdoc at Aalto University and the University of Helsinki, where he led a research group at the Finnish Center for Artificial Intelligence (FCAI) focusing on human-machine collaboration. In 2023, he joined Hamburg University of Technology as a junior professor.
Further details on his work are available here: www.tuhh.de/human-ai