Using AI: How is it regulated in Europe and Italy?

The widespread adoption of artificial intelligence systems capable of generating output from human-provided input, and of influencing the external environment and society, has highlighted the need to regulate the production, training, use and development of such systems. The European Union in particular has taken a leading role at the international level in defining common rules for the regulation of AI, including generative AI such as ChatGPT and others.

The AI Act, which the European Commission intends to finalize by the end of 2023, is meant to apply to providers from any country that place AI systems on the EU market, to EU citizens who use AI systems, and to users in other countries who use AI-generated output within the EU. It sounds like a complex path of rules and regulations for the legislator to navigate. And indeed it is.

AI Regulation in Europe

In 2018, the potential of AI for society, people, businesses and institutions was first presented to the European Commission. Although generative AI was not yet under discussion, the Commission had already begun a long journey toward common rules for the use and development of safe and transparent AI systems.

In 2020, the European Commission launched an online consultation involving over 1,200 individuals, associations and companies. It found a near-unanimous consensus on the need to intervene to fill the legislative gaps on artificial intelligence. The responses stressed the need to avoid conflicting obligations across EU countries, as well as excessive regulation, while emphasizing the importance of a proportionate and technologically neutral regulatory framework that takes into account both the many positive contributions and the potential risks of artificial intelligence.

In fact, AI was already delivering, and still delivers, important benefits: in healthcare, more accurate diagnosis of many diseases and improved prevention; in industry, more efficient production systems and predictive maintenance. At the same time it carries a number of potential risks, such as "opaque decision-making mechanisms, discrimination based on gender or other factors, intrusions into our private lives or criminal use" [1].

In 2021, Europe proposed that the first AI regulation be based on a risk classification system for users. On June 14, 2023, the European Parliament adopted its position on the AI Act, starting the legislative process for the first set of EU rules to regulate artificial intelligence. The text will then undergo a three-way negotiation with the European Council and the European Commission, with the aim of reaching final approval by the end of the current year, or at least before the 2024 elections. Entry into force, however, is expected to be postponed to at least 2025, to give economic operators time to adapt to the rules, which, as a European regulation, will apply to all member countries, with the possibility for the latter to make only minimal changes. It is worth highlighting that this legislative action puts the EU at the forefront worldwide in AI regulation: comparable initiatives in other countries either do not exist or are only at an early stage, as for example in the US [8].

As mentioned, the AI Act introduces different rules depending on the level of risk involved, namely unacceptable, high, limited and finally minimal or null, and establishes obligations for producers, suppliers and users according to the risk level of the AI system. In general, and by way of example, artificial intelligence systems are considered to be at unacceptable risk, and therefore prohibited, when they constitute a "clear threat to the safety, livelihoods and rights of people".

According to the AI Act, the prohibition extends to all systems used for "cognitive behavioral manipulation of specific vulnerable people or groups, social scoring and real-time and remote biometric identification systems, such as facial recognition". An exception is made for real-time remote biometric identification used by law enforcement agencies, such as the State Police, to prosecute serious crimes, and in any case with prior judicial authorization.

Artificial intelligence systems that have a negative impact on health, fundamental rights and security are considered to be at high risk and are therefore subject to an evaluation process as well as mitigation of potential negative effects both before being placed on the market and throughout their life cycle.

In this regard, the European Commission adopts a broader concept of personal safety than the one already in use for some time in European rules on data protection, privacy, non-discrimination, liability, product safety and other consumer protections.

The concept of safety needs broadening because risks not contemplated by the current rules on AI usage may arise in the future, for example in products such as household appliances, or in services, from loss of connectivity, software upgrades, or machine learning that continues while the product or service is in use [2].

Significant examples of high-risk systems include the use of AI in critical infrastructure (such as transport and water or energy networks), product safety, education, essential services and personnel selection [8].

For the limited-risk category, which includes AI applications that create or manipulate images, video and audio content [9], and therefore all generative AI tools [10], the Act imposes obligations of transparency and the communication of adequate information to users.

Given the considerable interest and topicality of the issue, a dedicated discussion follows in the next section. Finally, for the minimal-risk class, which includes for example video games or AI-enhanced spam filters, there are no restrictions on use [8].
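As a reading aid only, the four risk tiers described above can be summarized as a simple lookup table. The sketch below is the author's paraphrase of the Act's tiers, with illustrative examples drawn from the text; the names and structure are not the Act's own wording.

```python
# Illustrative summary of the AI Act's four risk tiers (author's paraphrase,
# not legal text): each tier maps to example systems and the obligation
# the Act attaches to that tier.
RISK_TIERS = {
    "unacceptable": {
        "examples": ["social scoring", "real-time remote biometric identification"],
        "obligation": "prohibited",
    },
    "high": {
        "examples": ["critical infrastructure", "education", "personnel selection"],
        "obligation": "evaluation and risk mitigation before market entry and over the life cycle",
    },
    "limited": {
        "examples": ["generative AI", "image/video/audio manipulation tools"],
        "obligation": "transparency: inform users and disclose AI-generated content",
    },
    "minimal": {
        "examples": ["video games", "AI-enhanced spam filters"],
        "obligation": "no restrictions",
    },
}


def obligation_for(tier: str) -> str:
    """Return the obligation attached to a given risk tier."""
    return RISK_TIERS[tier]["obligation"]
```

For instance, `obligation_for("minimal")` returns "no restrictions", matching the last tier discussed above.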

For an objective assessment of the regulatory process underway in Europe, it should be emphasized that the version of the AI Act voted on by the Strasbourg Parliament, while representing a fundamental step forward at the legislative level, according to some observers overlooks or even omits specific issues of primary importance related to the use of artificial intelligence.

In this regard, various commentators [8] [11] [12] [13] have noted the lack of adequate protection for migrants against potential AI tools for control and profiling based on sensitive characteristics; since such tools are generally prohibited for the rest of the population, their use on migrants raises concerns of discrimination.

Another criticism concerns facial recognition, which is prohibited in real time but allowed after the fact, with the consequent risk of miscarriages of justice in cases of resemblance or disguise [14].

Some perplexity has also been raised by the proposal of the European Parliament rapporteur, Brando Benifei, to bring the part of the regulation concerning generative AI into force before the conclusion of the normal negotiation process [15]. This would set a questionable precedent at the level of legislative procedure, and it would oblige companies to adapt, on very tight deadlines, to a set of rules that could still change in the immediate future [16].

Generative AI Regulation

Since the launch of OpenAI's ChatGPT, several generative AI producers, including Microsoft, have over the past year been the subject of various copyright infringement lawsuits.

The accusations mainly concern AI training on data, images, code and text protected by copyright. They have led to petitions in several countries around the world, including the United States, calling for a suspension of the development of artificial intelligence, generative AI included, at least until there is greater knowledge and transparency about how AI makes its decisions and how sensitive data can be protected.

By contrast, some platforms that work with images and videos, such as Adobe and Shutterstock, have trained and deployed AI systems based only on fully licensed or public-domain data [3].

Transparency about generative AI, its training and the generated content of all types and forms guides the entire EU regulation. The aim is to allow users to be informed when they are interacting with generative AI and to make informed decisions about what is generated by artificial intelligence.

In this regard, the AI Act requires providers to "declare whether the content has been generated by artificial intelligence, to publish summaries of the copyrighted data used for training, but also to design the model in such a way as to prevent the generation of illegal content", with the aim of building on this basis to reach an agreement with all EU countries for the final drafting of the law by the end of 2023 [4].

AI Regulation in Italy

To date, there are no specific laws or decrees in Italy regulating the use of artificial intelligence. There are, however, a number of general laws that can be applied to AI, such as the privacy rules (GDPR), the copyright law (law no. 633 of April 22, 1941) and the cybersecurity law (law 109/2021) [5].

There are, on the other hand, several AI bills, one even written by ChatGPT at the "provocative" request of a Lombardy regional councilor [6], but they are still under discussion.

For example, after an initial block of the best-known generative AI tool for violating GDPR rules on privacy and personal data management, the regulatory discussion in Italy, in line with the European one, aims to promote the development and use of AI in a responsible and sustainable way, and to provide a series of measures guaranteeing safety, ethics and privacy, together with the transparency of AI, that is, the comprehensibility, knowability and explainability of how its algorithms work [7].

The debate is open: regulate without banning. Finally, it is worth noting that the government has recently established both a commission of thirteen experts, coordinated by Gianluigi Greco, director of the Department of Mathematics and Computer Science of the University of Calabria, which by January 31, 2024 should formulate a set of guidelines to be followed at national level in the field of artificial intelligence, and a committee, chaired by Giuliano Amato, tasked with assessing the impact of AI algorithms on the publishing world [17] [18].


[1] Libro bianco sull'intelligenza artificiale (White Paper on Artificial Intelligence: a European approach to excellence and trust), European Commission, 2020

[4] 2021

By Luigi Simeone, Chief Technology Officer Moxoff