EU framework for AI usage will soon be established

During Sweden's current presidency of the Council of the European Union, the upcoming AI Act is one of the issues on the table. And it is a regulation of great importance, according to several researchers at the Department of Law, Stockholm University.

Close-up of a human hand shaking hands with a robot. In the foreground, a judge's gavel is visible.
The European Commission presented a first draft of the proposed AI regulation as early as April 2021, and the issue now lies with the European Parliament. The hope is to reach a vote in the first half of 2023. Photo: Andrey Popov/Mostphotos

"It's an exciting time! Right now, a lot is happening both in terms of the development of AI and in legislation when it comes to regulating AI at the EU level. There are many interesting questions to investigate, both from practical and legal perspectives," says Sonia Bastigkeit Ericstam, a doctoral student in civil law at Stockholm University.

For Sonia, the upcoming AI regulation will be of great importance for the questions in her ongoing dissertation project, Algorithms in the workplace, as it classifies the use of AI in employment relationships as a so-called "high-risk" use. This means that employers who want to introduce AI will need to take several safety measures, such as conducting risk assessments. The regulation's requirements for AI systems classified as high-risk have also led Claes Granmar, associate professor in European law, to take a closer look:

Claes Granmar

"I am currently writing an article on the requirement for human oversight of high-risk systems found in proposed Article 14 of the regulation. In addition to a requirement for technical solutions to handle 'black boxes' - that is, cases where it is not possible to understand after the fact what led a machine to make a certain decision - the provision sets an organizational requirement that there be people who can easily correct an action and switch off the AI system in order to then find out what happened."

Claes explains that even though it may still seem a bit futuristic that an AI system would run amok and cause damage and devastation, it could be a fairly significant societal problem within a few years. Most likely, everything from household cleaning to waste collection, transportation, heavy industry, and public elections will be handled by AI systems in the future. This will also apply to simpler forms of legal decision-making.

"The EU's desire to create a common system for approving, monitoring, and sanctioning AI use in society is good. The opposite - allowing AI use to occur completely unregulated - would be incompatible with the rule of law and the Union's values," says Claes.

The first law in the world to define AI

Stanley Greenstein, associate professor of legal informatics at the Department of Law, has several ongoing research projects that are affected by the upcoming regulation. One of them is "Regulating Artificially Intelligent Diagnostic Algorithms in Orthopaedic Medicine." The project deals with the application of AI in healthcare, for example, to assist doctors in diagnosing patients and suggesting optimal treatments for an individual. He explains that even though we don't yet know what the final version of the law will look like, the proposal already gives a good indication of where we are headed in terms of regulating AI.

"The EU's AI regulation is the first law in the world that attempts to define the concept of an 'AI system,'" says Greenstein. "I would say that it's very difficult, if not impossible, to define what AI is. It can be very subjective. One risk is that AI is defined so broadly that it captures a lot of techniques that aren't really considered to be AI. This, in turn, could lead to a lot more actors being forced to comply with the AI regulation even if they're not really developing or applying the kind of AI that is actually the subject of the regulation. It could be very burdensome and resource-intensive, which could hit smaller actors, organizations, and companies hard."

Claes Granmar is on the same track regarding the potential negative consequences of the proposed regulation. He says that the proposal may need to be adjusted so that what the rules actually capture matches the intended scope of application; as it stands, the risk is that uncertainties will create problems and costs for both market actors and supervisory authorities.

"The new AI regulation of course places an enormous regulatory burden on providers and users of AI systems in professional activities. Few companies have any deeper understanding of what the EU's Charter of Fundamental Rights means, or of what the requirements for social responsibility, environmental responsibility, and more entail."

Aims to promote the emergence of AI

According to the European Commission, the purpose of the proposed regulation is to increase trust in AI and thereby promote innovation and the growth of AI in Europe. This would make Europe "a global center for trustworthy artificial intelligence," but there are differing opinions on this within the Council of Ministers. An ongoing discussion concerns whether the safety measures required for AI classified as high-risk may disproportionately hinder the potential of AI development.

"It is of course important that it is possible to predict how the rules will be applied. At the same time, given that this is an area under development, it is important that there is an opportunity to classify, for example, new areas of use as high-risk or as completely prohibited. But my impression is that the EU legislator has put a lot of energy into striking this balance, not least given that the text of the regulation is now being revised," says Sonia Bastigkeit Ericstam.

Claes Granmar explains that the regulation also provides for specific exemptions from product and usage requirements in order to promote research and development of AI systems. The proposed "regulatory sandboxes" aim to ensure that the Union's integrated regulatory framework does not hinder the growth of AI. Through such sandboxes, a controlled environment is created to test innovative technology for a limited time based on a test plan agreed with the competent authorities.

Balancing rights and innovation

According to the three researchers, it is difficult to predict whether the regulation will impose such heavy requirements on those who use AI that it ultimately slows down the development of AI in the EU compared with countries that do not have to comply with the same rules. In addition, it is ultimately a question of balance and of what is valued most: human rights or technological innovation.

Stanley Greenstein

"AI is a technology that consists of powerful algorithms that can analyze incredibly large amounts of data in a short time through complex mathematical rules and calculations. We humans will never be able to fully understand how AI works. But we will continue to rely on it because it will lead to increased efficiency in many operations," says Stanley Greenstein, continuing,

"AI will become better than humans at making certain decisions in certain contexts because it is self-learning and has the capacity to process large amounts of data, but we will never really be able to understand how it has come to these decisions. This may eventually lead to a situation where, in the absence of cognitive ability to explain how AI works, we hand over control to machines. This would lead to problems in terms of how we apply already established legal concepts, such as liability. Ultimately, this is rooted in philosophical considerations about what kind of society we actually want to live in and how much we value being treated as humans and not as a digital representation of ourselves in the form of a collection of data points."

Claes Granmar adds,

"I would say that we are not facing a technological shift, but a humanity shift. Human existence will be changed by the various technological solutions that are now just around the corner, and the human hunger for 'knowledge' is always risky. It is to this changing reality that the law must relate, while also being a tool for shaping a new reality."

Text: Natalie Oliwsson, Department of Law