Navigating AI and Digital Identity Regulations: Insights from the EU's eIDAS and AI Act


Key Takeaways:

 

Implement Third-Party Assessments: To enhance transparency and independent verification, the AI Act should incorporate conventional assessments by notified bodies, similar to the eIDAS framework.

 

Mandate Third-Party Classification for High-Risk AI: The AI Act should require third-party classification of high-risk AI systems to prevent misuse and ensure a thorough understanding of AI capabilities and risks.

 

Standardize AI System Classification: A third party should determine the classification of AI systems based on their use-case context, ensuring consistent classification across the market while still allowing rapid innovation and upholding fairness and transparency.

 

1. Background on eIDAS and the AI Act

 

The eIDAS regulation, adopted in 2014, was the first pillar of an EU-driven digitalisation strategy that officially launched in 2015. The regulation paved the way for trust services that today serve as the backbone for high-assurance identification, signing, and supporting services such as timestamps, seals, and e-delivery. Most importantly, eIDAS ushered in an era of transparency in the methods used for identity verification, both for eIDs and qualified electronic signatures. As the industry developed, more and more solutions began using machine learning and artificial intelligence to improve their identification methods.

 

In parallel, AI became increasingly relevant in virtually every service and product globally, driving significant efficiency gains and even scientific breakthroughs. This, however, does not come without the risk of abuse: generative AI models can be used to produce falsified information at scale. To get ahead of possible abuse and increase transparency, the EU released the so-called AI Act.

 

The EU Artificial Intelligence Act aims to provide guidelines for AI system classification and establish governance for high-risk AI systems. It is refreshing to observe that the governance mechanism for high-risk AI calls for third-party assessment as well as the wider adoption of information security management systems.

 

2. eIDAS conformity assessments and resulting products

 

When eIDAS came into existence, it was not initially clear what the certification schemes would look like, or whether member states would differ in their acceptance of each other's reports. Over time, the certification industry has settled on a generally interoperable conformity assessment, and there are few obstacles to submitting a report produced by a Conformity Assessment Body (CAB) in one country to a Supervisory Body (SB) residing in another.

 

The unification of the industry led to highly interesting product development opportunities for trust service providers. Today we have a very clear level playing field, established by common standards produced by ETSI and CEN, various RFCs, and even a handful of standards that agree on which cryptographic algorithms to use for encryption and security. A company is then given almost free rein to develop its product as it sees fit, as long as it takes those standards into account. The result is a highly transparent market for the consumer. For example, it is clear to consumers what the benefits of a qualified electronic signature are, and that the result will be the same regardless of whether they spent 10 minutes registering for one or two weeks and a drive to another city.
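To make the "common cryptographic building blocks" point concrete, here is a minimal sketch of signing and verifying a document with ECDSA over P-256 and SHA-256, one of the algorithm combinations commonly referenced in the relevant standards, using the Python cryptography library. It is an illustration only, not a qualified electronic signature in the eIDAS sense, which additionally requires a qualified certificate and a qualified signature creation device.

```python
# Minimal sketch: sign and verify data with ECDSA P-256 / SHA-256.
# Illustration of standardised primitives only; NOT a qualified electronic
# signature, which also needs a qualified certificate and a QSCD.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.exceptions import InvalidSignature

document = b"Contract text to be signed"

# Generate a signing key (in practice the key would live in an HSM / QSCD).
private_key = ec.generate_private_key(ec.SECP256R1())
public_key = private_key.public_key()

# Sign the document and verify the signature.
signature = private_key.sign(document, ec.ECDSA(hashes.SHA256()))
try:
    public_key.verify(signature, document, ec.ECDSA(hashes.SHA256()))
    print("signature valid")
except InvalidSignature:
    print("signature invalid")
```

Because the algorithms, formats, and verification procedures are fixed by common standards, any conforming provider's output can be validated by any other party, which is precisely what makes the market transparent to the consumer.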

 

3. Conformity assessment of an AI system

 

The AI Act aims to implement a conformity assessment scheme similar to the one present in eIDAS; however, there are some notable differences.

 

Most importantly, conformance can be achieved without an external assessment. By itself, this provision allows for much faster iterative development of the AI systems in question. On the other hand, it creates a grey area in which a company can claim compliance without necessarily fully understanding its own AI.

 

The second route to conformance is more conventional: an external conformity assessment carried out by a notified body, as defined in the AI Act, generally following the same auditing cadence present in eIDAS. Although this is a more cumbersome process than self-assessment, it provides major transparency gains and, most importantly, creates independent verification of whether the system is considered high-risk.

 

4. Conformance Assessment and Transparency in AI Regulation: Challenges and Improvements

 

The EU has created what is considered a landmark legal text when it comes to regulating AI. However, in seeking to retain the high pace of innovation demanded by advances in the hardware and software models powering cutting-edge AI systems, two clear faults were introduced.

 

  1. An AI system that would ordinarily be considered high-risk can be downgraded from that status by redefining its role in the process or adding some level of human supervision.
  2. The conformance assessment scheme allows for self-audit.

 

Let’s add some context to this. 

 

A company creates an AI that is exceptionally good at using biometrics to determine whether a person is under duress at a given moment. Most companies dealing with remote identity proofing or qualified electronic signatures have to address this risk in one way or another. One could argue that such an AI falls under the prohibited use cases, as it uses cues and observations to determine a person's emotional state. That said, it is relatively easy to counterargue that this is not to the detriment of some specific group of people, but is aimed at improving fraud detection. Such arguments would move the AI to the high-risk category. However, suppose we narrow the context further and add some level of human supervision: say, the AI will be used by a qualified trust service provider to issue qualified certificates, and the system will be only a small part of an identity verification process that also includes a manual supervision element. Then such an AI system becomes generally unsupervised, as the provider of the solution is only required to conduct self-assessments. This is exactly the scenario that the Act should aim to prevent.
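To make the loophole concrete, here is a minimal sketch of a self-classification routine. The rules and field names are invented for this illustration and do not reproduce the AI Act's actual classification criteria; the point is only to show how the same biometric duress-detection system can slide out of the externally verified categories once the provider declares human oversight and a narrow supporting role.

```python
# Hypothetical illustration only: a simplified self-classification routine.
# The rules and fields below are invented for this example and do not
# reproduce the AI Act's actual classification criteria.
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited / self-assessment only"


@dataclass
class DeclaredContext:
    uses_biometrics: bool
    infers_emotional_state: bool
    targets_specific_group: bool
    human_oversight: bool
    narrow_supporting_role: bool  # e.g. one step in a trust service's workflow


def self_classify(ctx: DeclaredContext) -> RiskClass:
    """Classification as declared by the provider itself."""
    if ctx.infers_emotional_state and ctx.targets_specific_group:
        return RiskClass.PROHIBITED
    if ctx.uses_biometrics and not (ctx.human_oversight and ctx.narrow_supporting_role):
        return RiskClass.HIGH_RISK
    # Declaring oversight and a narrow role pushes the system out of the
    # externally verified categories entirely.
    return RiskClass.LIMITED


# The same duress-detection AI, classified twice with different declared context.
standalone = DeclaredContext(True, True, False, False, False)
embedded = DeclaredContext(True, True, False, True, True)

print(self_classify(standalone))  # RiskClass.HIGH_RISK
print(self_classify(embedded))    # RiskClass.LIMITED
```

Because the provider both declares the context and performs the assessment, nothing in this route independently checks whether the declared context matches how the system is actually used.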

 

The improvement is rather simple: a third party should determine the classification of the AI system based on its use-case context. This would allow companies to keep iterating at great speed, but would level the playing field when it comes to AI system classification, leading to greater transparency across the whole market.