New EU rules on large AI models are underway

Information sharing, copyright compliance, and transparency: these are the new obligations that companies developing general-purpose AI (GPAI) models must comply with in Europe starting today. One year after the Artificial Intelligence Act came into force, and following the bans on prohibited AI practices and the introduction of mandatory training, it is now the turn of the rules on large AI models.
What is a GPAI?
These are models trained on large amounts of data and capable of performing a wide range of tasks, often integrated into other AI systems. In the GPAI guidelines, the Commission clarified that an algorithm falls into this category if it was trained with computational resources exceeding 10^23 FLOPs and is capable of generating language (in the form of text or audio), images, or video from text.
The model must also exhibit significant generality and be able to perform a wide range of distinct tasks competently. GPAIs therefore include language models trained with the indicated computing power, but not models designed to play chess or video games or to enhance photo resolution.
Then there are systemic risk models, for which the AI Act provides additional obligations. These are algorithms capable of having a significant impact on the European market. In this case, the threshold to be considered is 10^25 FLOPs, and the Commission has the power to designate them ex officio.
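The two compute thresholds can be made concrete with a small sketch. The 6 × parameters × training-tokens approximation of training FLOPs used below is a common heuristic and an assumption here, not the Commission's prescribed calculation method; the threshold values come from the article.

```python
# Hedged sketch: classifying a model against the AI Act's FLOP thresholds.
# The estimate_training_flops() heuristic (~6 FLOPs per parameter per
# training token) is an assumption for illustration only.

GPAI_THRESHOLD = 1e23           # presumption of general-purpose capability
SYSTEMIC_RISK_THRESHOLD = 1e25  # presumption of systemic risk

def estimate_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate: ~6 FLOPs per parameter per token."""
    return 6 * params * tokens

def classify(flops: float) -> str:
    if flops >= SYSTEMIC_RISK_THRESHOLD:
        return "GPAI with systemic risk"
    if flops >= GPAI_THRESHOLD:
        return "GPAI"
    return "below GPAI threshold"

# Hypothetical example: a 70-billion-parameter model trained on
# 15 trillion tokens lands at roughly 6.3e24 FLOPs.
flops = estimate_training_flops(70e9, 15e12)
print(classify(flops))  # → GPAI
```

Under this rough estimate, such a model would exceed the 10^23 FLOP GPAI threshold but remain below the 10^25 FLOP systemic-risk presumption.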
Transparency obligations
First and foremost, providers are required to be more transparent. The AI Act requires them to prepare and maintain technical documentation relating to the model, including details of the training process, which they can then provide to authorities upon request.
Companies will also have to provide information and documentation to downstream suppliers – those who intend to integrate a GPAI into an AI system – to help them understand the model's capabilities and limitations and comply with their legal obligations.
The AI Act lists the minimum information to be collected. Furthermore, the GPAI Code of Conduct includes a standard, easy-to-use template for meeting these requirements.
Copyright protection
The law also requires providers to implement effective policies to comply with European copyright law, using cutting-edge technologies to detect and enforce reservations of rights.
The Code of Conduct also lists a series of practical solutions to meet this new obligation.
Model training report
AI companies must also publish a sufficiently detailed summary of the content used to train the algorithm. To help providers comply, the Commission has published a template that companies can use as a reference and that will also allow citizens and copyright holders to inform and protect themselves.
Rules in case of systemic risk
Starting today, additional requirements must also be met for systemically risky models. Providers must perform a model assessment using standardized protocols and cutting-edge tools. Potential systemic risks arising from these models must also be assessed and mitigated, and serious incidents must be reported to the authorities.
These companies are also required to ensure adequate cybersecurity protection, both for the model and its physical infrastructure, to prevent theft, misuse, or widespread consequences resulting from malfunctions.
The Code of Conduct dedicates an entire chapter to AI models with systemic risk, listing concrete practices for managing these risks.
La Repubblica