SEEDIG 7 Series

Regulating AI: Which approach to follow? | 23 September 2021

Event description

The third event within the SEEDIG 7 Series framework, a town hall on “Regulating AI: Which approach to follow?”, took place on 23 September 2021 from 10:00 to 12:00 CEST via Zoom.

Artificial intelligence is not only about the future: it is already delivering breakthrough advances today. While communities are ready to embrace AI’s potential, society faces complex questions about its use. The international community has already set out on the path towards defining a comprehensive regulatory framework for AI, dealing with essential aspects such as the definition of high-risk applications, regulatory obligations for providers, and many others.

During the SEEDIG 7 Series event #3, we reflected on a global tipping point towards the regulation of artificial intelligence and its potential influence on the SEE region. We discussed whether AI needs to be regulated at all, regulations seeking accountability for unfair or biased algorithms, and the potential risks associated with recent policy developments.

Who was invited to attend?

Everyone interested in getting the most recent insights on:

  • Which international and regional actors are proposing what on AI regulation?

  • Are those approaches complementary or conflicting?

  • How should we regulate AI to both sustain innovation and protect human rights?

Agenda

Keynote speaker

Gregor Strojin | Chair | Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI)

Guest speakers

Prateek Sibal | Programme Specialist | Digital Innovation and Transformation, Communication and Information Sector | UNESCO

Laura Galindo-Romero | AI Policy Analyst | OECD Artificial Intelligence Policy Observatory

Cezara Panait | Head of Digital Policy | Europuls – Centre of European Expertise

Event moderator

Kristian Bartholin | Secretary | Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI)

Messages from the event

"There is a consensus that we need an AI framework consisting of horizontal and vertical legal instruments. Some of them need to be binding and enforceable, while others can remain at the level of recommendations.

If we want to be effective, coherent, and comprehensive, we need to look at the work of international organisations and other initiatives in the field of AI as complementary, not competitive. […] They should focus on the areas where each is the strongest, where it has the mandate and capacity."

"AI regulation should be approached in a complementary manner where a legally binding framework, accountability, transparency, redress mechanisms, ethical technical standards, and self-regulation are all balanced. Among the essential elements in AI regulation are the following: strengthening human rights safeguards; raising awareness, AI literacy, and capacity building around AI; understanding different developments and policies around AI; a multistakeholder approach to regulation; flexibility (to foster innovation); future-proof regulation; large-scale applicability."

"Since the OECD AI Principles were adopted, the OECD has been working on helping policy makers to implement them. For this purpose, it launched an AI Policy Observatory in early 2020. Helping policymakers and AI actors to move from principles to practice is the OECD’s next step. It is advanced through sharing of policy practices (e.g. the OECD.AI Network of Experts), establishing an evidence base for AI policy (e.g. the OECD AI Policy Observatory), and developing tools and insights to put policy into practice."

"To agree on AI global governance, we need to have a common vision and then find different approaches to regulating AI (national, regional, international).

There are risks in both not regulating and overregulating AI. Regulation is necessary, and we are confident that there are more opportunities than risks in regulating AI. However, we need to find a balance between the regulatory approaches that we will promote further. At the same time, overregulation could hinder innovation, decelerate AI competitiveness, delay technological growth, cause us to miss out on innovative solutions, and drive away potential innovators and investors."