
Why Artificial General Intelligence (AGI) is a hindrance for responsible AI that truly benefits societies

Wednesday, Jul 24 · 1:00 PM – 2:00 PM

Hosted by

TU Eindhoven

Location

Eindhoven, Netherlands

About

Joel Fischer, Professor of Human-Computer Interaction (HCI) at the School of Computer Science, University of Nottingham, UK, is a guest of Minha Lee, Assistant Professor in the Future Everyday group at the Department of Industrial Design, TU/e.

Title  |  Why Artificial General Intelligence (AGI) is a hindrance for responsible AI that truly benefits societies

Since the recent astonishing rise to prominence of Generative AI based on Large Language Models (LLMs), AI doomers have been warning of the existential risk of Artificial General Intelligence (AGI) on the one hand, while AI evangelists have been foretelling the colossal value AGI will create for humankind on the other. In this talk I will side with the authors of the TESCREAL bundle to argue that these extremes are two sides of the same coin [Gebru & Torres, 2024]. While warning of the potential risks of AGI in order to raise more funding for their companies, the proponents push the development of ‘safe’ AGI that, they argue, only they are capable of building. For the rest of us, there is an urgent need to “cut through the bullshit” and focus on the real issues with current AI, such as energy consumption, bias, hallucinations, exploitation of workers, and intellectual property infringement. We need truly responsible AI if society is to reap the benefits of AI while mitigating and addressing these real issues.

Recent legislative texts and political declarations, such as the EU AI Act, the UK’s Bletchley Declaration, and the US Presidential Executive Order, all contain language around trustworthy and responsible AI. However, it is less clear which guidelines decision makers can and should turn to in order to achieve trustworthy and responsible AI in practice. Surveying some of the key concepts that have been proposed to underpin responsible AI, such as explainability [Arrieta et al., 2020], alignment [Christian, 2021], and responsibility [Vallor, 2023], I will argue that we need multidisciplinary socio-technical approaches to develop a culture of responsible AI that is as inclusive and diverse as the societies it is supposed to benefit.

Program (location: Neuron 0.262)

14.45 - 15.00   Doors open
15.00 - 15.45   Lecture
15.45 - 16.00   Q&A
