Artificial intelligence law: over 150 companies ask EU to review proposed law

Have you already tried an AI tool? In this post, we look at the risks behind this cutting-edge technology.


Have you already used the famous ChatGPT for work or to look up information in your personal life? If so, you will have seen that its speed and efficiency are as fascinating as they are unsettling. And if this technology can deliver these results while still in its infancy, imagine what it will be like in a few years, when a single tool can do all the work and the market becomes a red ocean full of sharks competing for the AI oligopoly.

To head off this near-apocalyptic scenario, and taking into account the collateral problems already emerging around privacy, data and the disappearance of many current jobs, the EU has drawn up a proposal for a law to regulate AI development.

What is generative AI and how is it different from artificial intelligence?

Before delving into the foundations and reasons behind this bill, it is important to first clarify the difference between conventional artificial intelligence and generative artificial intelligence, since it is the latter that has sparked all the controversy.

Conventional artificial intelligence focuses on analysing and interpreting data, whereas generative artificial intelligence can create original, unique content from scratch. That this technology has burst onto the scene with a creative capability is an unexpected advance, but also a threat: until now, we all felt safer behind the barrier that human creativity placed between technology and people. Well, that barrier no longer exists.
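To make the distinction concrete, here is a minimal, purely illustrative sketch (not taken from any real system mentioned in this post): a toy keyword classifier stands in for conventional, analytical AI, and a toy Markov-chain text generator stands in for generative AI. Real tools such as ChatGPT use large neural networks, so treat this only as an analogy.

    # Purely illustrative sketch, not from the article: contrasts "analytical" AI
    # (interpreting existing data) with "generative" AI (producing new content).
    import random
    from collections import defaultdict

    # Conventional AI: analyse and interpret existing data (a toy sentiment check).
    POSITIVE = {"fascinating", "efficient", "great"}
    NEGATIVE = {"disturbing", "threat", "risk"}

    def classify_sentiment(text):
        words = text.lower().split()
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "neutral"

    # Generative AI: create new content from scratch (a toy Markov-chain generator).
    def build_model(corpus):
        words = corpus.split()
        model = defaultdict(list)
        for current, nxt in zip(words, words[1:]):
            model[current].append(nxt)
        return model

    def generate(model, start, length=8):
        word, output = start, [start]
        for _ in range(length):
            options = model.get(word)
            if not options:
                break
            word = random.choice(options)
            output.append(word)
        return " ".join(output)

    corpus = "artificial intelligence can analyse data and artificial intelligence can create new content"
    model = build_model(corpus)
    print(classify_sentiment("this tool is fascinating and efficient"))  # interprets existing text
    print(generate(model, "artificial"))                                 # produces new text

The point of the contrast is that the first function only labels the input it is given, while the second produces text that did not exist before.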

Artificial Intelligence Law: different rules for each level of risk

To shed light on the uncertainty that has spread among companies and users around the world, the European Union wants to protect its citizens from data misuse and privacy violations on the Internet with the new Artificial Intelligence Law. Do you want to know what it consists of? We will explain it point by point below!

The priority of this law is that AI systems used within the EU be safe, traceable, transparent and inclusive. To achieve this, AI systems will be analyzed and classified according to the risk they pose to users, with stricter or more flexible rules depending on the level of danger, dividing them into: unacceptable risk, high risk, generative AI and limited risk. Once approved, these will be the world’s first AI rules, so let’s look at what each level consists of.

What are the different types of AI risks contemplated by the new Artificial Intelligence Law?

Unacceptable risk: AI systems at this level are considered a direct threat to people and will be prohibited. They include the cognitive manipulation of the behaviour of vulnerable people or groups and the classification of people based on their behaviour, status or personal characteristics. Real-time biometric identification systems are also included.

High risk: these are AI systems that may negatively affect safety or fundamental rights. They are divided into two groups: AI systems used in products subject to EU product safety legislation, and AI systems belonging to the following eight areas, which must be registered in an EU database:

  • Biometric identification and categorization of natural persons.
  • Management and operation of critical infrastructure.
  • Education and vocational training.
  • Employment, worker management and access to self-employment.
  • Access to and enjoyment of essential private services and public services and benefits.
  • Law enforcement.
  • Migration, asylum and border control management.
  • Assistance in legal interpretation and application of the law.

Generative AI: Generative AI systems will be able to continue operating as long as they meet the following requirements:

  • Disclose that the content has been generated by AI.
  • Design the model so that it does not generate illegal content.
  • Publish summaries of the copyright-protected data used for training.

Limited risk: these are systems that only have to meet minimal transparency requirements, so that users are always aware of what they are dealing with. For example, users must be informed when the content they are consuming is a deepfake or has been otherwise manipulated.
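As a purely illustrative summary (a hypothetical simplification, not part of the law or of this post), the four tiers described above can be laid out as a simple lookup in code, paraphrasing the obligations listed in this section:

    # Hypothetical, simplified summary of the risk tiers described above.
    # Tier names and obligations paraphrase this post; this is not legal guidance.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "unacceptable"   # prohibited outright
        HIGH = "high"                   # must be registered and assessed
        GENERATIVE = "generative"       # transparency and copyright obligations
        LIMITED = "limited"             # minimal transparency requirements

    OBLIGATIONS = {
        RiskTier.UNACCEPTABLE: "Prohibited (e.g. cognitive manipulation, classification of people).",
        RiskTier.HIGH: "Register in the EU database and comply with product safety legislation.",
        RiskTier.GENERATIVE: "Disclose AI-generated content, avoid illegal content, publish copyright summaries.",
        RiskTier.LIMITED: "Inform users when content is AI-generated or manipulated (e.g. deepfakes).",
    }

    def obligations_for(tier):
        """Return the simplified obligation summary for a given risk tier."""
        return OBLIGATIONS[tier]

    print(obligations_for(RiskTier.HIGH))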

Some practices prohibited in the Artificial Intelligence Law

The legal framework for artificial intelligence may vary from country to country, but there are nevertheless certain prohibited practices common to most legislation, such as the following:

Unfair discrimination: The law may prohibit the use of Artificial Intelligence algorithms that perpetuate discrimination based on characteristics such as race, gender, religion or sexual orientation. This includes automated decisions in areas like hiring, credit and criminal justice.

Mass surveillance without consent: The law may place restrictions on the use of artificial intelligence technologies for mass surveillance of people without their consent. This includes recording or tracking individuals without a strong legal basis or adequate safeguards to protect privacy.

Manipulation of information: The use of Artificial Intelligence to deliberately disseminate false or misleading information with the aim of influencing people’s opinions or actions may be prohibited. This includes the manipulation of recommendation algorithms or the creation of profiles that generate filter bubbles or information biases.

Risks to safety and human life: The law will establish specific regulations to ensure safety and minimize the risks associated with the use of Artificial Intelligence systems. This may include banning autonomous systems that pose an unacceptable danger to human life or requiring adequate safeguards in sectors such as autonomous transportation or AI-assisted healthcare.

If your company is already one of the pioneers using AI in its business, or you simply think AI could be a great step forward for your workflow or your team’s, don’t hesitate to subscribe to Educa.Pro, where you will find all kinds of related training and information. We’ll be waiting for you!
