Experimentation under new AI law puts European startups ahead of the game

News

The development and use of artificial intelligence have been "curbed" by Brussels in the EU Artificial Intelligence (AI) Act, which took effect Aug. 1, 2024. But, experts say, the new law is not a restriction but an opportunity. It invites startups to experiment and learn, and makes clear what is and is not allowed: a fertile basis for innovation. Moreover, by getting to know the law now, European developers will soon have a big head start on competitors from, for example, the United States and China.

What is the difference between case A, a bank that decides whether someone gets a mortgage based on that person's facial expression, and case B, a web builder that wants to test the user experience of a website based on someone's facial expression?
Go figure. It is precisely to enable fair choices in dilemmas like this that Europe introduced the AI Regulation (in English: the AI Act) in August 2024, a law to ensure that AI is used only for "good purposes." The AI Act is intended to regulate the use of algorithms and AI applications and to minimize risks to people and society.


New law offers opportunities
Mirjam Elferink (Elferink & Kortier Advocaten) is a lawyer specializing in intellectual property, ICT law and privacy. Lucas Noldus is, among other things, a creator of AI software that can read people's emotions, a field known as affective computing.
What the two have in common: they are not shocked by the new law, but see it as an opportunity for, among others, startups that develop or use useful AI models with the best of intentions.

On Sept. 10, 2024, AI-hub Oost-Nederland is organizing the event “The EU AI Act - From a Legal and Investor Perspective,” where startups will be made aware of the rules arising from the new law, but especially of the opportunities it offers, through a lecture by Elferink, among others.

The bank versus the web builder
As an entrepreneur, Noldus is pleased that there are rules. 'That bank, that's dubious,' he says, referring to case A at the beginning of this article. 'Social scoring, the evaluation of individuals with potentially disproportionate consequences, lurks in that example. What emotion recognition may be used for, however, is research. For example, for measuring user experience. If a web builder has redesigned the payment page of a site, he or she wants to know whether that actually makes things easier for the user. One way to measure that is to monitor the user's facial expression.'

Testing AI invention against the law
In cases A and B, about the bank and the web builder, questions may arise. Such as: what if the technology is misused by another company (the customer), who is liable for that? And do the people testing such a website even know that their faces are being read? The AI Regulation provides guidelines for such matters.
But where do you start if you want to test your AI invention against the law? Elferink recommends the following four-step plan to startups getting started with AI: 'First, determine whether it is an AI system as defined in the AI Regulation. If so, what role does your organization take: are you the developer, distributor or user? Then, using the criteria in the AI Regulation, assess what risks your AI solution poses given that role. And finally, determine what obligations follow from that.'
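As a purely hypothetical illustration, the four-step triage could be sketched as a small decision function. The role names, risk tiers and obligations below are simplifications invented for this sketch, not legal categories or legal advice:

```python
# Hypothetical sketch of the four-step triage described above.
# Roles, risk tiers and obligations are illustrative simplifications,
# not a statement of what the AI Act actually requires.

RISK_TIERS = ("prohibited", "high", "limited", "minimal")

def triage(is_ai_system: bool, role: str, risk_tier: str) -> list[str]:
    """Return an illustrative list of follow-up obligations."""
    # Step 1: is it an AI system as defined in the regulation?
    if not is_ai_system:
        return []  # outside the definition: no AI Act duties in this sketch
    # Step 2: which role does the organization take?
    if role not in ("developer", "distributor", "user"):
        raise ValueError(f"unknown role: {role}")
    # Step 3: which risk tier applies?
    if risk_tier not in RISK_TIERS:
        raise ValueError(f"unknown risk tier: {risk_tier}")
    # Step 4: obligations follow from role and risk tier (illustrative only)
    if risk_tier == "prohibited":
        return ["do not place on the market"]
    obligations = []
    if risk_tier == "high":
        obligations += ["risk management", "conformity assessment"]
    if risk_tier in ("high", "limited"):
        obligations += ["transparency to users"]
    return obligations
```

The point of the sketch is only the ordering: definition first, then role, then risk, and only then obligations.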

An AI testing ground for startups
For startups that “just have a good idea” and have no grasp of legal prerequisites, the above can be quite daunting. 'Understandable,' says Elferink, 'but I would also like to draw startups' attention to the privilege they have surrounding the introduction of the new law. After all, they are allowed to participate in a living lab for startups. This allows them to learn about the possibilities and test the viability of their own product.'
Elferink is referring to the so-called “regulatory sandboxes,” intended to practice with the law on the one hand and to stimulate innovation on the other. 'With the arrival of a new law, people often see the limitations, but you can also look at it in such a way that frameworks become clearer and so you can immediately position your product well, because you are certain of what is and is not allowed.'
Noldus submitted his case for a "sandbox." 'Very clever,' says Elferink. 'The law requires companies to test themselves. Compare it to the CE marking of a kitchen stepladder: manufacturers themselves test that the steps will not give way under users. In the sandbox, Noldus can get that "CE mark for AI," to call it that for convenience, right away.'

Getting clarity quickly
For Noldus, participation in the sandboxes has a second goal: getting clarity as quickly as possible about what exactly is and is not allowed under the EU AI Regulation. 'We currently do not know to what extent we may have to adjust our products. By "we" I mean my own company and industry peers in Europe. The law states that emotion recognition using AI systems is prohibited in the workplace and in education, unless it is for medical or safety applications. But what is meant by "emotion recognition," "workplace," "education," "medical" and "safety" is not clear from the legal text.'
Elferink adds that she expects the European AI regulators to create policies that further elaborate and clarify certain concepts: 'It is then the courts that will have to review the interpretation of the law and those policies in concrete issues, which will lead to further law formation and clarify the frameworks of the law. It is ultimately the European Court of Justice that will have the final say in this.'

Preventing abuse of AI applications
To keep up the momentum, Noldus, together with two peers in affective computing, founded the working group "Affective Activists," which has since been joined by more than 120 institutes and experts from a total of 16 European countries. 'A real badge of honor,' he says with a laugh, 'but we take a constructive attitude, mind you. We want to help work toward clear guidelines for the application of the law, so that the benefits of AI can be exploited while abuse is prevented.'

Making your product 'legal proof'
Startups normally focus on bringing in funding. Elferink predicts that, in addition, more legal effort will be needed in the future. 'The availability of money can make or break a startup. But complying with the rules, being "compliant," is at least as important. Especially now that violating those rules carries sky-high fines and AI regulators are actively enforcing. Fines can reach 35 million euros per violation or 7 percent of a company's global annual turnover. For smaller administrative violations, it is up to 7.5 million euros or 1.5 percent of annual turnover. So my advice to startups is: seek timely legal advice and also look beyond the AI Regulation. After all, with AI applications all kinds of laws come together: intellectual property law, ICT law, privacy and, of course, the new AI Regulation itself. Making your product "legal proof" is a task in itself and necessary for the viability of the product.'
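To make the scale of those ceilings concrete: a minimal sketch of the arithmetic, assuming (as the AI Act specifies for these penalty ceilings) that the higher of the fixed amount and the turnover percentage applies. The function name and interface are invented for illustration:

```python
# Illustrative sketch of the fine ceilings quoted above: the higher of a
# fixed amount and a percentage of global annual turnover applies.
# Not legal advice; actual fines are set case by case by regulators.

def max_fine(turnover_eur: float, serious: bool = True) -> float:
    """Upper bound of a fine under the quoted ceilings."""
    if serious:
        # Up to 35 million euros or 7% of global annual turnover
        return max(35_000_000, 0.07 * turnover_eur)
    # Smaller administrative violations: up to 7.5 million euros or 1.5%
    return max(7_500_000, 0.015 * turnover_eur)

# A company with 1 billion euros turnover: 7% = 70 million, above the 35M floor
print(max_fine(1_000_000_000))  # 70000000.0
# A company with 100 million euros turnover: 7% = 7 million, so the floor applies
print(max_fine(100_000_000))    # 35000000
```

For a large company the percentage dominates; for a small one, the fixed floor does, which is why the article calls the fines "sky-high" even for startups.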

Ethics within AI
At Noldus, time and money have been set aside since 2019 to base the AI solutions the company develops on an ethics policy. 'In consultation with professors of ethics and communication, we have put on paper what our AI may and may not be used for. Out of that came our sales policy. That includes "end-use statements," on which customers fill in what they are going to do with the product. Moreover, we have built into the software that the user, such as a researcher at the police academy, agrees to the rules we have set. For example, that our software may not be used in public spaces (where people have not given permission to have their facial expression read) or as a kind of lie detector.'

Advantage over America and China
According to Elferink, the head start that companies gain by starting to work with the European AI Regulation now is worth its weight in gold. Noldus adds: 'If an American manufacturer wants to put an AI product on the Dutch market, it must comply with the AI Regulation. We are already working on it and are therefore ahead of the game!'

Stay tuned:
Sign up for the Affective Activists working group newsletter at: lucas.noldus@noldus.com
Sign up for updates and blogs from Mirjam Elferink and her colleagues at Elferink & Kortier Lawyers on the EU AI Regulation.