
The trouble with AI: Why we need new laws to stop algorithms ruining our lives

Ethical guidelines are, to date, the only tool for controlling AI systems, and they're not doing much for our rights, finds a new report.
Written by Daphne Leprince-Ringuet, Contributor

Stronger action needs to be taken to stop technologies like facial recognition from being used to violate fundamental human rights, because the ethics charters currently adopted by businesses and governments won't cut it, warns a new report from digital rights organization Access Now. 

The past few years have seen "ethical AI" become a hot topic, with requirements such as oversight, safety, privacy, transparency, and accountability being added to the codes of conduct of private and public organizations alike. Accordingly, the proportion of organizations with an AI ethics charter has jumped from 5% in 2019 to 45% in 2020.

The EU's guidelines for "Trustworthy AI" have informed many of these documents; in addition, the European bloc recently published a white paper on artificial intelligence presenting a so-called "European framework for AI", with ethics at its core. 


How much real change has come of those ethical guidelines is up for debate. Access Now's new report describes the spread of "trustworthy AI" over the past few years as nothing more than a "branding exercise" that has failed to create any real accountability for the contents of charters and codes, however carefully drafted they are.

The problem, argues Fanny Hidvégi, Europe policy manager at Access Now and co-author of the report, is that ethics can mean anything and everything. "In ethics, the moral impetus comes from the inside of a human or an organization," she tells ZDNet. "It's about your own internal commitment to that framework. 

"Ethics is not enforceable, it's not binding, it's not a legal system," she continues. "We need that external legal system that can be enforced in a transparent and accessible way, and that individuals, businesses and organizations can turn to." 

According to the researcher, the regulation of AI applications is not an ethical issue, but a human rights one. As such, it shouldn't be up to each individual organization to draft its own ethical framework for the technology. In the EU, AI applications should instead come under the umbrella of the existing, and legally binding, EU Charter of Fundamental Rights.

There are countless cases of algorithms interfering with citizens' fundamental rights. Some high-profile examples have involved facial-recognition technology, to the extent that European Commission vice president Margrethe Vestager has called applications such as predictive policing "not acceptable" in the EU. The European Data Protection Supervisor, Wojciech Wiewiórowski, has also said he hopes to convince the European Commission to implement a moratorium on the use of facial recognition in public spaces.

But algorithms can also step on human rights in more pernicious ways. The UK's Centre for Data Ethics and Innovation (CDEI) recently found that biased AI systems deployed in welfare, transportation, education and housing could produce unfair outcomes that discriminate against certain citizens. This is especially the case in policing, where analytics tools trained on historical data are likely to perpetuate unfair patterns and biases.

Even the most well-intentioned ethics charters have little power to prevent the harms caused by these AI systems. From an algorithm that unfairly downgraded students' exam results in the UK, to search engines that failed to show ads for high-paying jobs to women: the past few years have shown that when it comes to AI, existing ethical frameworks are often a case of too little, too late.

Access Now argues that a legal framework based on human rights – and not self-ascribed ethics – needs to be enforced to control AI systems. The organization has called, for example, for an outright ban on applications like facial recognition, which the researchers argue requires red lines rather than risk mitigation. 

"We see this as a layered approach," says Hidvégi. "For certain areas, where there are no remedies to redress the violations, we think a ban is appropriate, as the highest level of protection. But there could be other layers with different safeguards, for example a mandatory human rights impact assessment as a tool to enforce human rights, or a public registry to record the automated systems used in the public sector.

"What we want is to translate the different safeguards that are currently in the EU Charter on a principle level, into practical terms," continues the researcher. 


How well the idea will go down with industry players is uncertain. The regulation of AI has often been condemned as potentially stifling innovation and restricting the freedom of businesses. Earlier this year, the US government actively advised the European bloc against "heavy-handed" laws that might hamper the growth of the sector.

Hidvégi has an entirely different perspective on the matter. She says that regulating AI, far from putting European businesses at a disadvantage, is an opportunity for the EU. Taking the lead in AI governance can only increase the European bloc's influence in the industry, eventually turning EU standards into a guarantee of quality.

"There is renewed interest among consumers in digital rights," says Hidvégi. "These rights matter to people, therefore businesses must care too. There is a competitive advantage we hope to provide in Europe by creating a higher trust in services and products." 

The General Data Protection Regulation (GDPR), which came into force two years ago, has grown into a gold standard for digital privacy, and Hidvégi hopes that a similar trend could emerge for AI systems, with the EU in the lead.

Taking an even stronger stance on the matter could let the EU distinguish itself from China and the US in the race for AI, while providing an extra layer of protection for citizens directly impacted by algorithmic systems. "For the EU, it's both an opportunity and an obligation to follow through," says Hidvégi. Whether the call will be heard, however, remains to be seen.
