The European Commission does not sufficiently understand the need for a better AI Act
- 24 February 2022
- Syd Sujuaan on Unsplash
The Dutch Senate shares the concerns we have about the Artificial Intelligence Act and wrote a letter to the European Commission about the need to better protect people from harmful uses of AI such as through biometric surveillance. The Commission has given a response to this which is not exactly reassuring.
The gaps in the AI Act
At the end of last year, the Senate expressed its concerns about the proposed Artificial Intelligence Act in a letter to the European Commission. In that letter, they questioned the Commission about the formulated prohibited applications, including social scoring, the exploitation of the vulnerabilities of groups of people, and the use of remote biometric identification. The Senate also expressed concerns about the risk-based approach and the lack of legal protection. We are pleased to read that the Senate has voiced the same concerns we have expressed before. Unfortunately, we are less happy with the European Commission's answers.
Leaving the job half done
While more and more institutions and organisations plead for a complete ban on biometric surveillance in publicly accessible spaces (want to join that call? Sign our ECI Reclaim Your Face!), the European Commission maintains its position that the ban should be limited to real-time remote biometric identification by the police: uses of artificial intelligence systems that can recognise people instantly, or at least "without considerable delay". This still allows all kinds of very harmful applications, including those where identification takes place afterwards, which can just as easily be used to follow people through public space. A number of risks are even specific to these 'post' remote biometric identification systems and not present with real-time applications. For example, the images can be tampered with. Or large quantities of footage can be searched afterwards to see whether someone appears in them, where they have been and with whom, potentially over a period of months or years. This poses a great risk to whistle-blowers and to the source protection of journalists.
Contradictory answers
On the one hand, the European Commission seems to want to reassure the Dutch Senate by pointing out that European privacy law already bans the use of biometric data. However, the current rules are less restrictive for law enforcement purposes. Some countries in the European Union have legislation specifying this more permissive approach for law enforcement; others have not. That is why the European Commission now finds it necessary to ensure a 'level playing field' by providing the same rules for law enforcement throughout the European Union. There is, of course, something to be said for this.
What is worrying, however, is that a few paragraphs later the European Commission lists all kinds of 'advantages' of biometric identification that are deeply questionable, to say the least. For example, it says that recognising emotions could be useful for blind people and neurodivergent people. It could warn drivers who are falling asleep at the wheel. And it could verify the age of children who play games, to protect them from harmful content.
The idea is very noble, but as yet these types of applications appear to be full of errors. Such systems recognise the faces of people of colour poorly or not at all, and there is no credible scientific basis for deriving emotions from the external features of a face. Moreover, emotion recognition rests on the assumption that all people express emotions in the same way and that facial expressions always mean the same thing, which is not the case. People from different cultures and environments express their emotions differently, which is arguably part of the richness of being human. And is biometric identification really what we want to use to protect our children?
It is not reassuring that in listing what it sees as advantages, the European Commission completely ignores the serious, well-established risks to human rights associated with these applications.
In the coming period, the European Union will discuss the legislative proposal in the Parliament and in the Council. We remain committed to achieving a law that effectively protects fundamental rights and prevents those uses that are so risky that no safeguards can mitigate their harms.