AQuA: Get ready for the AI Act. Implications for Researchers, AI, Data & Robotics

October 25 | 14:00 – 15:00 CET, online

The upcoming AI Act will radically transform the way Artificial Intelligence is governed and regulated, with a strong impact on researchers, universities and companies. Join this session to get acquainted with the AI Act, what it will include, and some of its key aspects. We will also explore what is missing from an ethical, legal and sustainability perspective. The session aims to show what the future of responsible AI, Data and Robotics research across Europe will likely look like.

Moderator: Carl Mörch (FARI)

Speakers:
– Giovanni Sartor (University of Bologna)
– Frederic Heymans (Knowledge Centre Data & Society at imec-SMIT-VUB)
– Kevin Baum (German Research Center for Artificial Intelligence)
– Alex Moltzau (The Norwegian Artificial Intelligence Research Consortium)

Inspired by Turing Award winner Donald E. Knuth’s All Questions Answered lecture (…), CLAIRE All Questions Answered Events (AQuAs) are relaxed, one-hour online events that bring together a small group of panellists to discuss current hot topics in AI and beyond and to answer questions from the community. These events are usually held via Zoom Webinar for CLAIRE members and members of co-hosting organisations, and live streamed to the CLAIRE YouTube channel, allowing the community at large to get involved and be part of the discussion.

The CLAIRE SIG Ethics, Social, and Legal, in collaboration with the CLAIRE SIG Policy, hosted an in-depth exploration of the upcoming AI Act and its transformative potential.

The event delved into topics such as the categorisation of AI risk, the definition of autonomy, the regulation of foundation models, the expansion of the high-risk category, the right to an explanation of individual decision-making, obligations for providers, and the need to integrate sustainability requirements into the Act.

The key takeaways concerned the implications of the AI Act for innovation, the associated challenges and opportunities, the requirements for both providers and users, and the need to integrate sustainability requirements into the Act. They not only shed light on the intricate nuances of the legislation but also serve as a practical guide for stakeholders navigating the multifaceted landscape of AI governance.


  1. Implications for innovation: The AI Act emerges not merely as a regulatory framework but as a catalyst for innovation, with implications for researchers, AI developers, data, and robotics.
  2. Challenges and opportunities: The Act faces challenges related to its definitions, potential overregulation, and concerns among various stakeholders. Speakers stressed the need for reasonable implementation and for addressing issues of harm, fairness, responsibility, and transparency.
  3. The AI Act’s categories of risk: The Act defines four categories of risk: unacceptable, high, limited, and minimal/none. The discussion focused on the high-risk category, which includes systems used for biometric identification, management of critical infrastructure, employment, and similar purposes.
  4. Obligations and requirements for providers and users of foundation models: Like providers of high-risk systems, foundation model providers must supply intelligible instructions; a proposed change would add a fundamental rights assessment to identify impacts and plan mitigations.
  5. Sustainability requirements in the AI Act: The discussion emphasised the crucial need to integrate sustainability requirements into the Act, focusing on eco-design principles and precise measurement of energy consumption.