Session type:

Case Study

Presented by:

Kaimar Karu


Session duration:

45 minutes

About the session

Bias in AI-supported decision-making is just one of the challenges we face in this buzzword-heavy domain. As world superpowers, both countries and technology corporations, race to lead AI research and application, we are (re-)discovering more and more ethical challenges that need to be addressed as part of the design of AI systems, rather than as an afterthought.

Make no mistake: this is a philosophical session, and an applied one at that. We will look at widely discussed ethical challenges, such as bias in AI-supported decision-making, as well as traditional philosophical questions that have gained new relevance (and require a new approach) as countries and corporations race from Narrow AI (ANI) towards General AI (AGI).

Among other topics, we will discuss consciousness, free will, belief, and intentionality. We will look at why ethical challenges should be addressed as part of AI system design, and at the problems with AI-related legislation and attempts to create universal normative AI principles. We will take a pragmatic approach to resolving bias-related challenges, discuss the relevance of empathy, and cover some of the immediate AI-related threats already affecting our societies.

It is also worth noting that this is not a buzzword session. We will call things out for what they are, rather than what they are sold as. We will not focus on FUD or FOMO, but on the practical "what should I be aware of" and "what should I do" questions. The objective is to give participants enough ammunition to start having informed discussions at their workplace, avoid their own FUD and FOMO moments, and help advance the industry towards a responsible and pragmatic approach to AI.

About the speaker(s)