Events

Welcome to our Events page, the source for all upcoming talks, conferences, and other events hosted by the Institute. We design our events to foster dialogue, spark curiosity, and facilitate interdisciplinary collaboration. In addition to expert-led lectures and panel discussions, we host hands-on workshops and international conferences, offering a wide range of opportunities for learning, networking, and intellectual engagement. Keep an eye on this space for event details, registration information, and updates.

Ethics in Practice

The Institute for Ethics in Technology is proud to announce its upcoming event series, "Ethics in Practice". This series represents a unique and vital opportunity to delve into the pressing ethical challenges that emerge at the intersection of technology and society. Our primary objective is to initiate a rich, insightful dialogue among academic scholars, industry leaders, policy-makers, and society at large, fostering an environment where these different spheres can learn from and enrich each other.

At TUHH, "Ethics in Practice" is a crucial element of our research environment. It provides a dynamic platform where researchers discuss real-world ethical issues with various practitioners, enhancing the relevance and impact of their work.

Moreover, “Ethics in Practice” is a cornerstone of our Institute’s bespoke teaching approach, ensuring that our curriculum is not only academically robust but also closely aligned with contemporary ethical challenges in technology. By incorporating these dialogues into our courses, we equip students with critical thinking skills and an ethical perspective essential for their future careers.

So, join us in bridging the gap between research and practice! Registration details and further information on the events in summer term 2024 will follow soon.

Conference 'Ethics by Design: Implications, Prospects, Limitations'

Venue: Hamburg University of Technology, Am Schwarzenberg Campus 1, 21073 Hamburg. Room: I-0053/54

Dates: 13 June 2024, 09:00 to 14 June 2024, 16:00

Contemporary AI systems create a wide variety of risks, ranging from bias, discrimination, and misinformation to privacy concerns and socioeconomic harms. In reaction to these issues, some call for stricter regulation (see the EU’s AI Act). Yet others argue that regulation runs the risk of stifling innovation (see the UK’s hands-off take on AI regulation).

A relatively new approach in the ethics of technology, however, promises to help build safe and trustworthy AI without hampering innovation – the so-called ethics by design approach. Simply put, ethics by design commonly refers to incorporating ethical principles into all phases of the design and development process of technology. With respect to the current AI landscape, this means weaving ethics into the different phases of the AI lifecycle. According to this approach to responsible innovation, ethics is an integral part of technological development, not merely an afterthought. In other words, ethics by design aims to merge ethics and innovation.

However, the specifics of what ethics by design entails and how it should be pursued are still widely debated. This conference explores the methodological and practical implications of this new approach and examines its potential and limitations. It focuses not only on clarifying the theoretical and conceptual underpinnings of the approach but also on its feasibility and practical value. Since ethics by design represents a way of doing tech ethics through engineering, it demands a diverse set of skills, comprising both technical and philosophical competencies. For that reason, the conference will also explore the role of collaboration and interdisciplinary exchange in the context of this approach, including related challenges. Lastly, the conference reflects on how ethics by design can complement regulatory efforts and how it relates to other instruments in the responsible innovation toolbox, such as usability testing, expert interviews, auditing, red-teaming, socio-technical scenarios, or thought experiments.

Speakers include: Joanna Bryson (Hertie School Berlin), David Storrs-Fox (University of Oxford), Jan Christoph Bublitz (University of Hamburg), Sven Nyholm (Ludwig Maximilian University of Munich), Judith Simon (University of Hamburg), Ibo van de Poel (Delft University of Technology), Jonas Bozenhard (Hamburg University of Technology).

Register here: https://www.eventbrite.de/e/ethics-by-design-conference-tuhh-tickets-908187291637 

We thank the Fritz Thyssen Foundation and the Society for Analytic Philosophy (GAP) for their generous support of this conference.

Talks

"Stell Dir vor, Du bist eine Literaturwissenschaftlerin"
Zur Einsetzbarkeit von künstlicher Intelligenz in den Geisteswissenschaften (talk in German)
- Evelyn Gius (Technische Universität Darmstadt)

7 November 2024: 16:30-18:00 CET (hybrid)
Room: E - 4.022
Zoom: https://tuhh.zoom.us/j/87200163616?pwd=oVGTuDZPN9LHh2MulqNC7MBbkNn64l.1

Abstract: 

Since the release of ChatGPT in November 2022, artificial intelligence based on large language models (LLMs) has been accessible to everyone via a web browser. At universities, this development is primarily regarded as problematic, as a rise in attempts at academic deception is feared. The opportunities that LLMs open up for research and teaching, by contrast, have hardly been examined outside the computer science community. This is where my talk comes in: I examine the usability of AI tools in the humanities, using LLMs as a case in point. I first present my experiences with the active use of chatbots in humanities teaching. I then turn to the question of whether researchers, too, can, and indeed should, let artificial intelligence support them in complex tasks such as text analysis and interpretation. Alongside reflections on how to evaluate the output of LLMs, I demonstrate the use of particular prompting strategies, such as role prompting ("Stell Dir vor, Du bist eine Literaturwissenschaftlerin", i.e. "Imagine you are a literary scholar"), and discuss the results of experiments in which we trained large language models more directly, via an API.
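As a concrete illustration of the role-prompting strategy mentioned in the abstract, here is a minimal Python sketch using the OpenAI client library; the model name and both prompts are our own illustrative assumptions, not materials from the talk:

    from openai import OpenAI  # pip install openai

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Role prompting: the system message assigns the model a persona
    # ("Imagine you are a literary scholar") before posing the actual task.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice, not from the talk
        messages=[
            {"role": "system",
             "content": "Stell Dir vor, Du bist eine Literaturwissenschaftlerin."},
            {"role": "user",
             "content": "Analysiere die Erzählperspektive in Kafkas 'Die Verwandlung'."},
        ],
    )
    print(response.choices[0].message.content)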

Evelyn Gius is Professor of Digital Philology and Modern German Literary Studies at Technische Universität Darmstadt. There she heads the fortext lab, which conducts research on the application and methodology of computational text analysis and, among other things, provides the annotation tool CATMA (https://catma.de/). Further information at https://evelyngius.de

"Big Tech and Responsible AI: Can self-regulation work?" - Lucy Davis (ex-Google)

04 June 2024: 15:00-16:30 CEST Room: H-0.09 (in-person only)

Lucy Davis is an accomplished professional with over a decade of experience at Google, specialising in AI ethics, regulation, and reputation management. Before leaving the company in March 2024, she held several distinguished roles, including Head of Responsible AI (Marketing, EMEA), Head of Regulation & Reputation (Strategic Partnerships, EMEA), and Head of Brand & Reputation Programmes (Marketing, UK). Lucy holds a degree in Philosophy, Politics, and Economics (PPE) from the University of Oxford and is currently pursuing a Master's in Practical Ethics, also at Oxford. Her extensive background and ongoing education make her a leading voice in responsible AI and ethical practices in technology.

Lucy’s talk will be part of our 'Ethics in Practice' series and will focus on the ethical challenges in the industry where she worked. The event will be under the Chatham House Rule.

"Ethics by Design in Complex Corporate Environments: Balancing Innovation, Compliance, and Cost Efficiency" – Mai Do & Dr Sergei Bobrovskyi (Airbus)

18 June 2024: 15:00 - 16:30 CEST Room: H-0.09 (in-person only)

This session will explore strategies for embedding ethics by design in a heterogeneous corporate environment, emphasising the need to find common ground among stakeholders with divergent interests. These interests include strict formal compliance, cost containment, and low overhead, alongside the innovative ambitions of AI developers. By examining these dynamics, we aim to provide insights into effectively harmonising ethical practices with the multifaceted demands of a large organisation.

This talk will be part of our 'Ethics in Practice' series and focus on Airbus. The event will be under the Chatham House Rule.


"Queere Betriebssysteme anwenden" (talk in German) - Elisa Linseisen (University of Hamburg)

UNFORTUNATELY, THE LECTURE IS CANCELLED DUE TO ILLNESS!
AN ALTERNATIVE DATE WILL BE ANNOUNCED SOON.

(19 June 2024: 15:00-16:30 CEST, hybrid)
Room: E - 1.022
Zoom: https://tuhh.zoom.us/j/84912361967?pwd=L1dnZWFIVVhaNDRvSElPZXZObTN4QT09

Abstract: In 2016, a group of (media) scholars, theorists, and artists published an IT handbook titled "Queer OS. A User's Manual". In it, the authors set out, speculatively and guided by their wishes, how computer technology can be used in a non-discriminatory way. In my talk, I respond to the manual's guiding question, namely what computer technology running on a queer operating system, a Queer OS, would look like, with the concept of the "app(lication)". My thesis is that the app(lication), the application in the computational sense, makes it possible to grasp media technology in its entanglement with media culture and thus to expose the gendered, racialised, ableist, and class-related dimensions of computing.

Elisa Linseisen is Professor of Digital and Audiovisual Media at the Institute for Media and Communication at the University of Hamburg. Her research focuses on the aesthetics and episteme of digital media. For more information: https://elisalinseisen.com

Past Events

"A Minimalist Account of the Right to Explanation" - Thomas Grote (University of Tübingen) (joint work with N. Paulo)

08 May 2024: 16:00-17:30 CEST (hybrid)
Room: E - 1.022

Abstract: Critiques of opaque machine learning models, used to guide consequential decisions, are gaining traction in moral philosophy. According to the received view, the legitimacy of algorithmic decisions is threatened on the grounds that they undermine the rights of decision-subjects to informed self-advocacy (Vredenburgh, 2022). The appropriate mitigation strategy, in turn, is to grant decision-subjects a right to explanation (via explanation of the model output). This paper challenges the received view. More precisely, we have two objectives. The first is a critical one: we argue that existing accounts of the right to explanation prove unsatisfactory to ameliorate concerns about the moral illegitimacy of algorithmic decision-making. This is due in particular to their individualist framing, which overburdens decision-subjects in two ways: (i) the relevant explanations are likely to be epistemically over-demanding, since their correct interpretation requires a combination of domain knowledge and statistical proficiency that cannot be presumed for laypersons; and (ii) shifting the task of detecting inadequacies to decision-subjects makes scrutinizing explanations very costly for them. Weakening the epistemic requirements of the right to explanation also lays the ground for our positive contribution. If providing explanations to decision-subjects turns out to be an inadequate amelioration strategy for opaque algorithmic decision-making, alternative moral guardrails are necessary. We outline the basic features of our proposal by discussing literature on model auditing in machine learning.

Thomas Grote is a research fellow at the Cluster of Excellence: “Machine Learning: New Perspectives for Science” at the University of Tübingen. He is also Co-PI in a project on certification and safety of ML models in healthcare, funded by the Carl-Zeiss Stiftung.

Inaugural Lecture

On 17 April 2024, 5:00 PM - 7:00 PM at Audimax II, TUHH, the "Future Lecture" series featured the inaugural lecture by Prof. Maximilian Kiener on "Ethics in Technology and the Future of Morality". This event included a contribution from Prof. Dominic Wilkinson (University of Oxford) and a moderated discussion led by Dr. Andrew Graham (University of Oxford).


"On the Quest for Effectiveness in Human Oversight" - Kevin Baum (CERTAIN, DFKI)

10 April 2024: 10:00 CEST (hybrid)
Room: E - 1.022

Abstract: Human oversight is currently discussed as a potential safeguard to counter some of the negative aspects of high-risk AI applications. This prompts a critical examination of the role and conditions necessary for what is prominently termed effective or meaningful human oversight. We investigate this by synthesizing insights from legal, philosophical, and technical domains. Based on the claim that the main objective of human oversight is risk mitigation, we propose a viable understanding of effectiveness in human oversight: for human oversight to be effective, the human overseer has to have (a) sufficient causal power with regard to the system and its effects, (b) suitable epistemic access to relevant aspects of the situation, (c) self-control over their own actions, and (d) fitting intentions for their role. Furthermore, we argue that this is equivalent to saying that a human overseer is effective if and only if they are morally responsible and have fitting intentions. Against this backdrop, we suggest facilitators and inhibitors of effectiveness in human oversight when striving for practical applicability and scrutinize the upcoming AI Act of the European Union – in particular Article 14 on Human Oversight – as an exemplary regulatory framework in which we study the practicality of our understanding of effective human oversight.

Kevin Baum is a philosopher and computer scientist. He is currently head of the Center for European Research in Trusted AI (CERTAIN) at the German Research Center for Artificial Intelligence (DFKI), one of the six German competence centers for AI. He is part of the NGO Algoright e.V., a think tank for good digitalization and interdisciplinary science communication. In his talk, Kevin will present current interdisciplinary work from the Center for Perspicuous Computing (CPEC), to which he is associated.


"Do Large Language Models Have Credences?" - Geoff Keeling (Google)

28 February 2024, 16:00 - 17:30 (CET) (online)

Abstract: Do Large Language Models (LLMs) have credences or degrees of belief? This question matters because a growing body of empirical research aims at quantifying LLM confidence in propositions, with downstream implications for calibrating user trust in LLM assertions and combatting LLM-generated misinformation. An important question here is whether techniques for quantifying confidence in LLMs measure degrees of belief on the part of the LLM and, if not, what is being measured and how it relates to credences. These questions are especially significant in relation to empirical studies that compare LLM confidence scores with human degrees of belief. In this paper we argue against the view that LLMs have credences. We consider three plausible accounts of what makes it the case that an LLM has a credence in a proposition: the reported confidence view, the output probabilities view, and the logits view. We argue that each account fails to adequately capture what it means to have a credence. The upshot is to clarify the interpretation of quantitative metrics for LLM confidence by providing a philosophical basis for denying that LLMs have credences. In doing so, we not only call into question empirical comparisons between measurements of LLM confidence and reported degrees of belief in humans, but also orient discourse on confidence measurement in LLMs towards a non-mentalistic interpretation of confidence measures.
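To make the difference between the last two views tangible, here is a minimal, self-contained Python sketch; the tokens and numbers are invented for illustration and do not come from the paper:

    import math

    # Hypothetical raw scores (logits) a model assigns to candidate answers.
    logits = {"yes": 2.0, "no": 0.5, "unsure": -1.0}

    # Output probabilities view: softmax-normalise the logits and read the
    # resulting probability as the model's 'credence' in the proposition.
    total = sum(math.exp(v) for v in logits.values())
    probs = {tok: math.exp(v) / total for tok, v in logits.items()}

    print(round(probs["yes"], 2))  # ~0.79: candidate 'degree of belief' in "yes"
    print(logits["yes"])           # logits view: the unnormalised score itself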

Bio: Geoff Keeling is a senior research scientist at Google, specialising in machine learning ethics. Prior to this role, Geoff served as a bioethicist at Google Health. His academic background includes a postdoctoral position at Stanford University, where he was part of the Institute for Human-Centered AI and the McCoy Family Center for Ethics in Society, and at the Leverhulme Centre for the Future of Intelligence at the University of Cambridge.

"Ubuntu as our Pathway to Inner Development" - Wakanyi Hoffman (The New Institute, Hamburg)

07 December 2023, 16:00 - 17:30

Venue: Institute for Ethics in Technology, TUHH. Am Schwarzenberg Campus 3, 21073 Hamburg. Building E, 1st floor.

Wakanyi Hoffman, Research Fellow at The New Institute Hamburg, explores the transformative concept of Ubuntu and its pivotal role in shaping our approach to the climate and moral crises facing our world. In a world intricately woven with diverse narratives, Africa's rich heritage presents the profound concept of "Ubuntu" - a philosophy emphasising our interconnected humanity. Ubuntu, a term that resonates beyond mere words, is encapsulated in the African ethos as "I am because we are." This concept highlights the interconnectedness of all life and the belief that our individual and collective well-being are inextricably linked.

Popularised by Archbishop Desmond Tutu in post-apartheid South Africa, Ubuntu serves as a unifying cry across various African cultures. Its essence lies in the ethical principles of survival, solidarity, compassion, respect, dignity, and the pivotal concept of reciprocity. Reciprocity, or treating all life as we wish to be treated, is a universal principle found in numerous disciplines and indigenous worldviews.

Hoffman delves into the three levels of inner development under Ubuntu: Independence, Interdependence, and Interconnectedness. These stages guide us toward a deeper understanding of our role in the natural world and emphasise the need for a new moral framework in addressing the current climate and inner climate crises.

This talk is not just a presentation of an idea; it's an invitation to embrace a new operating system, a logic of the heart, that aligns with the shared principles of Ubuntu and reciprocity.