
Europe wants to set tough rules for AI. Not everyone thinks it's a good idea

Navigating between business needs and citizen concerns is difficult. The European Commission's new rules on AI give it a go – and they are inevitably raising skepticism.
Written by Daphne Leprince-Ringuet, Contributor

The European Commission has published a new legal framework that will regulate the use of AI in the bloc.  

Image: Getty Images/iStockphoto

After years of consulting with experts, a few leaked drafts, and plenty of petitions and open letters from activist groups, the European Union has finally unveiled its new rules on artificial intelligence – a world-first attempt to temper fears that the technology could lead to an Orwellian future in which automated systems will make decisions about the most sensitive aspects of our lives. 

The European Commission has published a new legal framework that will apply to both the public and private sectors, for any AI system deployed within the bloc or affecting EU citizens, whether the technology is imported or developed inside member states. 

At the heart of the framework is a hierarchy comprising four levels of risk, topped by what the Commission describes as "unacceptable risk": those uses of AI that violate fundamental rights, and which will be banned.  


They include, for example, automated systems that manipulate human behavior to make users act in a way that might cause them harm, as well as systems that allow governments to socially score their citizens. 

But all eyes are on the contentious issue of facial recognition, which has stirred much debate in recent years because of the technology's potential to enable mass surveillance. The Commission proposes a ban on facial recognition, and more widely on biometric identification systems, when used in public spaces, in real time, and by law enforcement agencies.  
 
This comes with some exceptions: on a case-by-case basis, law enforcement agencies will still be able to carry out surveillance thanks to technologies like live facial recognition to search for victims of a crime (such as missing children), to prevent a terror attack, or to detect the perpetrator of a criminal offence. 

The rules, therefore, fall short of the blanket ban that many activist groups have been pushing for on the use of facial recognition for mass surveillance, and criticism is already mounting of a proposal that is deemed too narrow, and that allows for too many loopholes. 

"This proposal does not go far enough to ban biometric mass surveillance," tweeted the European digital rights network EDRi

For example, biometric identification systems that are not used by law enforcement agencies, or which are not carried out in real-time, will slip from "unacceptable risk" to "high risk" – the second category of AI described by the Commission, and which will be authorized subject to specific requirements. 

High-risk systems also include emotion recognition systems, as well as AI models that determine access to education, employment, or essential private and public services such as credit scoring. Algorithms used to manage immigration at the border, to administer justice, or to operate critical infrastructure equally fall under the umbrella of high-risk systems. 

For those models to be allowed onto the EU market, strict criteria will have to be met, from carrying out adequate risk assessments and training algorithms on high-quality datasets to providing high levels of transparency, security and human oversight. All high-risk systems will have to be registered in a new EU database. 

Crucially, the providers of high-risk AI systems will have to make sure that the technology goes through assessments to certify that the tool complies with legal requirements of trustworthy AI. But this assessment, except in specific cases such as for facial recognition technology, will not have to be carried out by a third party. 

"In effect, what this is going to do is allow AI developers to mark their own homework," Ella Jakubowska, policy and campaigns officer at EDRi, tells ZDNet. "And of course the ones developing it will be incentivized to say that what they are developing does conform.

"It's a real stretch to call it regulation if it's being outsourced to the very entities that profit from having their AI in as many places as possible. That's very worrying." 

A world first

Despite the proposal's shortcomings, Jakubowska observes that the European Commission's recognition that some uses of AI should be prohibited is a positive step in a field that has so far lacked regulation – one that has at times been described as a "Wild West". 

To date, businesses have mostly relied on self-imposed codes of conduct to drive their AI initiatives – that is, when they weren't held to account by employee activists voicing concerns about the development of harmful algorithms. 

The evidence suggests that these existing methods – or rather, the lack thereof – have shortcomings. From biometric technologies tracking the Muslim Uighur minority in China to policing algorithms unfairly targeting citizens on the basis of race, examples abound of AI systems informing high-stakes decisions with little oversight but often dramatic, life-changing consequences for those affected. 

Calls to develop clear rules to control the technology have multiplied over the years, with a particular focus on restricting AI models that can automatically recognize sensitive characteristics such as gender, sexuality, race and ethnicity, health status or disability.  

This is why facial recognition has been in the spotlight – and in this context, the Commission's proposed ban is likely to be welcomed by many activist groups. For Jakubowska, however, the rules need to go one step further, with a more extensive list of prohibited uses of AI. 

"Civil society is being listened to, to some extent," she says. "But the rules absolutely don't go far enough. We'd like to see, for example, predictive policing, uses of AI at the border for migration, and the automated recognition of people's protected characteristics, also prohibited – as well as a much stronger stance against all forms of biometric mass surveillance, not just the limited examples covered in the proposal." 

But while Jakubowska's stance will be shared by many digital rights groups, it is by no means universal within the industry.  

In effect, what some see as an attempt to prevent AI from wreaking social havoc can also be perceived as placing barriers in the way of the best-case scenario – one in which innovative businesses are incentivized to develop AI systems in the EU that could hugely benefit society, from improving predictions in healthcare to combatting climate change. 

The case for AI no longer needs to be made: the technology is already known to contribute significantly to economic and social growth. In sales and marketing, AI could generate up to $2.6 trillion worldwide, according to analysts, while World Bank reports show that data companies have up to 20% higher operating margins than traditional companies.  

It's not only about revenue. AI can help local councils deliver more efficient public services, assist doctors in identifying diseases, tell farmers how to optimize crop yields and bring about the future smart city with, among other things, driverless cars. 

For all this and more to happen, businesses have to innovate, and entrepreneurs need a welcoming environment in which to launch their startups. This is why the US, for example, has adopted a lax approach to regulation, with a "do what it takes" position that promotes light-touch rules that won't get in the way of new ideas. 

It's easy to argue that the EU Commission is doing the exact opposite. With AI still a young and fast-evolving technology, any attempt at prematurely regulating some use cases could stop many innovative projects from even being given a chance. 

For example, the rules ban algorithms that manipulate users into acting in a way that might cause them harm; but the nuances of what does and does not constitute harm are yet to be defined, even though they could determine whether a system should be allowed on the EU market. 

For Nick Holliman, professor at Newcastle University's school of computing, the vagueness of the EU's new rules reflects a lack of understanding of a technology that takes many different shapes. "There are risks of harm from not regulating AI systems, especially in high-risk areas, but the nature of the field is such that regulations are being drafted onto a moving target," Holliman tells ZDNet. 

In practice, says Holliman, the regulation seems unworkable, or designed to be defined in detail through case law – and the very idea of having to worry about this type of overhead is likely to drive many businesses away.  

"It seems that it will push EU AI systems development down very different risk-averse directions to that in the UK, US and China," says Holliman. "While other regions will have flexibility, they will have to account for EU regulations in any products that might be used in the EU." 

The race for AI 

When it comes to leading in AI, the EU is not winning. In fact, it is falling behind: the bloc is rarely cited as even participating in the race for AI, which is rather more often shown as a competition between the US and China.  

The EU's tendency to embrace the regulation of new technologies has previously been pointed to as the reason for the bloc's shortcomings. A recent World Bank report showed that the EU accounted for 38% of the investigations into data compliance launched in 2019, compared with only 12% in North America. For some economists, this "business-unfriendly" environment is the reason that many companies pick other locations in which to grow. 

"The EU has wider issues to do with the tech ecosystem: it's very bureaucratic, it's hard to get funding, it's a top-down mentality," Wolfgang Fengler, lead economist in trade and competitiveness at the World Bank, tells ZDNet. "The challenge is that these new rules can be seen as business-unfriendly – and I'm not talking for Google, but for small startups operating in the EU." 

In its new AI regulations, the Commission lays out the expected costs of compliance. Supplying a high-risk system could cost up to €7,000 ($8,400), with another €7,500 ($9,000) to be spent on verification.  


Perhaps more importantly, penalties are prohibitive. Commercializing a banned system could lead to fines of up to €30 million ($36 million) or 6% of worldwide annual turnover, whichever is higher; failing to comply with the requirements tied to high-risk systems could cost €20 million ($24 million) or 4% of turnover; and supplying incorrect information about the models could lead to fines of €10 million ($12 million), or 2% of turnover. 
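To see how those penalties scale, here is a minimal Python sketch of the fine structure described above. It assumes, per the proposal's wording, that for companies the applicable maximum is the fixed cap or the percentage of worldwide annual turnover, whichever is higher; the tier names and the max_fine function are this article's own illustration, not terms from the regulation.

```python
# Illustrative sketch, not legal advice: the three fine tiers described in
# the Commission's proposal, as (fixed cap in EUR, share of turnover).
FINE_TIERS_EUR = {
    "banned_system": (30_000_000, 0.06),            # prohibited AI practices
    "high_risk_noncompliance": (20_000_000, 0.04),  # high-risk requirements
    "incorrect_information": (10_000_000, 0.02),    # misleading filings
}

def max_fine(violation: str, annual_turnover_eur: float) -> float:
    """Maximum possible fine: the fixed cap or the share of worldwide
    annual turnover, whichever is higher (per the proposal's wording)."""
    fixed_cap, turnover_share = FINE_TIERS_EUR[violation]
    return max(fixed_cap, turnover_share * annual_turnover_eur)

# A company with €1 billion in turnover that commercializes a banned system
# faces up to €60 million, since 6% of turnover exceeds the €30 million cap.
print(f"€{max_fine('banned_system', 1_000_000_000):,.0f}")  # €60,000,000
```

For large firms, in other words, the turnover-based percentage, not the headline euro figure, is the binding number. 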

For Fengler, the lesson is clear: talented AI engineers and entrepreneurs will be put off by the potential costs of compliance, which only adds to an existing mentality that stifles innovation. And without talent, Europe will find it hard to compete against the US and China. 

"We don't want Big Brother societies, and there is a clear danger of that," says Fengler. "It's good to protect against fears, but if that's your main driver, then we'll never get anywhere. You think you can plan this out exactly, but you can't. We don't know how some AI experiments are going to end, and there are many examples where AI will make the world a much better place." 

Competing at all costs 

For digital rights experts like EDRi's Jakubowska, however, AI systems have gone past raising fears, and have already demonstrated tangible harms that need to be addressed.  

Rather than calling for a ban on all forms of AI, she says, EDRi is pushing for restrictions on use cases that have been shown to impact fundamental rights. Just as knives are allowed but using a knife as a weapon is illegal, so problematic uses of AI should be banned, she argues.  

More importantly, the EU should not seek to compete against other nations for the development of AI systems that might threaten fundamental rights. 

"We shouldn't be competing at all costs. We don't want to compete for the exceptionally harmful applications of AI," says Jakubowska. "And we shouldn't see boundless innovation as being on the same level as fundamental rights. Of course, we should do whatever we can to make sure that European businesses can flourish, but with the caveat that this has to be within the bounds of fundamental rights protections." 

This is certainly the narrative adopted by the European Commission, which cites the need to remain human-centric while not unnecessarily constraining businesses. In striving to achieve this delicate, near-impossible balance, however, the bloc seems to have inevitably failed to satisfy either end of the spectrum. 

For Lilian Edwards, professor of law, innovation and society at Newcastle University, the Commission's new rules are hardly surprising given the EU's long-established positioning as the regulator of the world. More importantly, like all laws, they will continuously be debated and defied. 

"As an academic, I'll say: What did you expect?" she tells ZDNet. "The devil is going to be in the detail. That is the nature of the law, and people will fight for years on the wording." 

Whether the strategy will bear fruit is another question entirely. The European Parliament and the member states will now need to adopt the Commission's proposals on AI following the ordinary legislative procedure, after which the regulations will be directly applicable across the EU. 
