The second draft of the guiding document for providers of general-purpose AI models to demonstrate compliance with the AI Act was released right before Christmas. Now we can all easily make comments and recommendations during the New Year break, when everybody is focused on such things. Or on parties?
As the website states, on December 19 the group of independent experts presented the second draft of the General-Purpose AI Code of Practice, based on the feedback received on the first draft, which was published on November 14, 2024. Allegedly, in just over a month, numerous activities took place as part of the Working Group meetings and interactions with the Chairs.
The EU is composed of various nations and various religions. In some parts of the EU, the Orthodox faith celebrates Christmas according to the Julian calendar, so the non-working days fall in January; by that old calendar, the Orthodox New Year falls on January 13. The whole thing smells awkward: if you want to bury the public debate on some lousy piece of legislation, this is the most perfect timing in Europe. Nobody will comment, and you can tick the box that the legislation passed “a comprehensive public debate”.
But why pay attention to this legislation at all? Firstly, remember the GDPR, which made the whole IT world sweat and was soon simply copied into local legislation around the world? The AI Act is already in that position and will soon be copied into your legislation and jurisdiction. Similar legislation has already been enacted in South Korea, a country very much influenced by EU legislation: on December 26, the South Korean National Assembly voted to approve and adopt the AI Basic Act.
But the crucial question is: how might this particular extension of the AI Act affect you, or even cause you complications? Based on what is written, quite easily, and in several ways:
- O tempora, o mores!
Are we prioritizing speed over substance with this rushed timeline for the Code of Practice? Or is there some deadline driving it? The first draft came in November, the second in December, and the comment period runs over the New Year holidays. If this is not a rushed process, then what is? Without proper time for the public to comment, or for policymakers to take those comments into consideration, it looks like tick-boxing. Remember, this piece of legislation is supposed to incorporate feedback from private companies and industry leaders to ensure it is mutually beneficial. Moreover, the second draft was released despite being incomplete: policymakers openly admitted they had not yet had time to address all the comments on the first draft.
A logical question, then: considering all of the above, why do we have a second draft at all?
- Alteri stipulari nemo potest
In such a rushed process, errors are bound to occur. This draft suffers from significant flaws in its scope: it demands much more than the AI Act itself, especially in the copyright section, where it tries to impose additional regulation. Yes, the Code should specify certain aspects, but by regulating beyond what is in the AI Act itself, it creates confusion and overlapping legislation. Let me give you an example: “Make reasonable efforts to assess the copyright compliance of third-party datasets”. This provision raises serious concerns, such as its potential extraterritorial application to agreements between non-EU entities for the use of data in non-EU markets. It could also become a breaking point for non-EU companies working with EU-based counterparts.
Instead of such a provision, the Code should recommend that providers make reasonable efforts to obtain contractual assurances that the data does not infringe third-party copyrights; that is the common practice. This provision, however, is just one questionable example among many, and together they make the Code a nightmare for EU AI start-ups. Why?
- Qualis rex, talis grex
The purpose of the legislation is to create an open, inclusive, and regulated market that keeps prosperity and innovation at the forefront and enables everyone to participate. Can you imagine an AI start-up in Europe that must first prepare such a pile of (un)necessary papers and documentation before even developing its own AI model? The complexity and bureaucracy of the Code can quickly kill innovation in Europe. And if a particular start-up is stubborn enough to complete the job, it will simply move outside the EU, because the Code is overkill. There is a certain achievement in that: if this Code represents the approach to the EU AI market, the market will surely be regulated, but mostly empty.
- Argumentum ad temperantiam
What if the Code specifically exempted small and medium-sized enterprises (SMEs) from provisions like the one in our example? According to the definition provided in the linked document, SMEs make up 99 percent of the whole EU market. In that case, 99 percent of the market would be held to lower safety standards purely because of the provider’s size, not the actual risk profile of the AI model. At the same time, the AI Act aims to ensure that all AI models are safe, regardless of the provider’s size. Exemptions or loose provisions would go against the fundamentals of the AI Act and create a loophole in the market. The Code should align with the AI Act: balanced, neither overly stringent as it is currently, nor too lenient.
- Sub rosa
The release notes on the website explicitly state: “The second draft builds upon previous work while aiming to provide a ‘future-proof’ Code”. What does that mean? If you count on future standards not yet defined by the AI Office, the chance of being mistaken is high. Yet this is exactly how the Code is written, and such uncertainty is highly problematic. This is not how a reliable AI market should function, nor can it be considered good practice. How can companies rely on the Code when future standards might flip its provisions upside down? At the very least, the Code should wait for those standards to be in place and then add the corresponding provisions, rather than play fortune teller.
Regulation is important, but overregulation stifles the market and hinders its growth. This Code is not only overregulating but also demanding unnecessary paperwork and creating a bureaucratic monster. Coming from an ex-communist country, I am well aware of the consequences, but Franz Kafka’s “Der Prozess” illustrates the absurdity even more clearly. Under these conditions, companies working with AI models will face the same fate as Josef K., or, like my company, be forced to relocate outside the EU.
Is that what the EU needs right now, in light of the current economic challenges? Obviously, this is not the right environment for fair and vivid competition in AI innovation. Ah, and I almost forgot the best part: the public comment period ends in just a couple of days, on January 15. It is no surprise that Christmas and New Year were chosen as the public comment window for this draft. If policymakers do not extend it until at least the end of January, it will be nothing more than tick-boxing.
One of the key messages from the AI session at SEEDIG 8 in Zagreb in 2023 was: “AI models represent a paradigm shift in innovation, but pose serious risks including data breaches, bias, and manipulation, and should be designed and deployed in a responsible and trustworthy manner based on principles of data protection, privacy, human control, transparency, and democratic values.” Is this the approach being taken? Companies, like users and individuals, have an equal right to engage in discussions about legislation, and the process must be comprehensive and inclusive to avoid creating consequences that could take years to rectify. Otherwise, to people and companies alike: play with AI while you can; soon you will be producing paperwork instead!
Dusan Stojicevic
SEEDIG