Regulation of facial recognition and other Artificial Intelligence technologies: controversies and questions.

In April of this year, the European Commission presented its first proposal for a regulation on Artificial Intelligence, introducing a legal framework for these new technologies.

This regulation seeks to guarantee security and provide a legal framework for the introduction of these new technologies, establishing levels of risk and, among other provisions, prohibiting facial recognition under certain circumstances.

The European Commission’s proposal to regulate Artificial Intelligence establishes four levels of risk: minimal risk, limited risk, high risk and unacceptable risk. The Commission also states that the regulation must be implemented by all member states, while excluding military uses of Artificial Intelligence from its scope.

Facial recognition raises many questions regarding its application and must be subject to strict obligations, including risk analysis, traceability of results, detailed documentation, human supervision and a high level of system robustness.

One of the practices the regulation seeks to prevent, already carried out by some governments in other countries, is so-called social scoring, which is classified as unacceptable in the regulation. It consists of determining a person's credibility or reputation based on several factors, such as their personal data or their activity on social networks, in order to assign a score that can generate inequality and other forms of discrimination in many areas of life (a bank refusing a loan, denial of a scholarship, negative evaluations when applying for a job, ...).

While facial recognition can help fight terrorism and many types of crime, the truth is that current systems do not offer the reliability needed to apply them with certainty and without harming the personal rights of those involved.

Leaving the recognition of individuals and the unsupervised attribution of certain information in the hands of an Artificial Intelligence can be considered negligent and even, if I may use the expression, terrifying. Who does not remember those works of fiction in which the AI achieves self-awareness and rebels against humans?

For my part, I think it is still too early to apply it to such sensitive areas, and we will have to wait either for this technology to advance to the point where it is considered infallible and we have full confidence in it, or for us to simply accept it with its defects, whichever comes first.

Rubén Rodríguez Miguélez, DLTCode Consultant

Copyright © Grupo Edefa SA. Total or partial reproduction of this article is prohibited without permission and prior authorization from the publishing company.
