The European Union’s law on artificial intelligence came into force on 1 August. The new AI Act essentially regulates what artificial intelligence can and cannot do in the EU. A team led by computer science professor Holger Hermanns from Saarland University and law professor Anne Lauber-Rönsberg from Dresden University of Technology has examined how the new legislation impacts the practical work of programmers. The results of their analysis will be published in the autumn.
‘The AI Act shows that politicians have understood that AI can potentially pose a danger, especially when it impacts sensitive or health-related areas,’ said Holger Hermanns, professor of computer science at Saarland University. But how does the AI Act affect the work of the programmers who actually create AI software? According to Hermanns, there is one question that almost all programmers are asking about the new law: ‘So what do I actually need to know?’ After all, there aren’t many programmers with the time or inclination to read the full 144-page regulation from start to finish.
An answer to this frequently asked question can be found in the research paper ‘AI Act for the Working Programmer’, which Holger Hermanns has written together with his doctoral student Sarah Sterz, postdoctoral researcher Hanwei Zhang, Anne Lauber-Rönsberg, professor of law at TU Dresden, and her research assistant Philip Meinel. Sarah Sterz summarized the main conclusion of the paper as follows: ‘On the whole, software developers and AI users won’t really notice much of a difference. The provisions of the AI Act only really become relevant when developing high-risk AI systems.’
The European AI Act aims to protect future users of a system from the possibility that an AI could treat them in a discriminatory, harmful or unjust manner. If an AI does not intrude in sensitive areas, it is not subject to the extensive regulations that apply to high-risk systems. Holger Hermanns offered the following concrete example as an illustration of what this means in practice: ‘If AI software is created with the aim of screening job applications and potentially filtering out applicants before a human HR professional is involved, then the developers of that software will be subject to the provisions of the AI Act as soon as the program is marketed or becomes operational. However, an AI that simulates the reactions of opponents in a computer game can still be developed and marketed without the app developers having to worry about the AI Act.’
High-risk systems, by contrast, must conform to a strict set of rules set out in the AI Act. In addition to the applicant-screening software described above, these include algorithmic credit-scoring systems, medical software and programs that manage access to educational institutions such as universities. ‘Firstly, programmers must ensure that the training data is fit for purpose and that the AI trained on it can actually perform its task properly,’ explained Holger Hermanns. For example, it is not permissible for a group of applicants to be discriminated against because of representational biases in the training data. ‘These systems must also keep records (logs) so that it is possible to reconstruct which events occurred at what time, similar to the black box recorders fitted in planes,’ said Sarah Sterz. The AI Act also requires software providers to document how the system functions, much like a conventional user manual. The provider must also make sufficient information available to the deployer so that the system can be properly overseen during use and errors can be detected and corrected. (The researchers have recently discussed strategies for effective ‘human oversight’ in another paper.)
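To make the logging requirement more concrete, here is a minimal Python sketch of how a developer of a hypothetical applicant-screening system might record each automated decision so that it can be reconstructed later. The function, field names and file format are illustrative assumptions on our part, not something the AI Act itself prescribes.

```python
# Minimal sketch (hypothetical example): structured event logging for a
# high-risk AI system, so that individual decisions can be reconstructed
# later, in the spirit of a flight recorder.

import json
import logging
from datetime import datetime, timezone

# Write one JSON record per line to an append-only audit log file.
logging.basicConfig(filename="decision_log.jsonl", level=logging.INFO,
                    format="%(message)s")
logger = logging.getLogger("screening_ai.audit")


def log_decision(applicant_id: str, model_version: str,
                 inputs: dict, score: float, outcome: str) -> None:
    """Append one self-contained record per automated decision."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": applicant_id,    # pseudonymous identifier
        "model_version": model_version,  # which trained model produced the result
        "inputs": inputs,                # features actually seen by the model
        "score": score,
        "outcome": outcome,              # e.g. "forwarded_to_HR" or "rejected"
    }
    logger.info(json.dumps(record))


# Example usage: every automated screening decision leaves a traceable entry.
log_decision(
    applicant_id="A-2024-0815",
    model_version="screening-v1.3",
    inputs={"years_experience": 4, "degree": "BSc"},
    score=0.72,
    outcome="forwarded_to_HR",
)
```

Appending one self-contained, timestamped record per decision keeps such a log both machine-readable and straightforward to audit, which is the point of the traceability obligation described above.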
Holger Hermanns summarized the impact of the AI Act in the following way: ‘The AI Act introduces a number of very significant constraints, but most software applications will barely be affected.’ ‘Things that are already illegal today, such as the use of facial recognition algorithms to interpret emotions, will remain prohibited. Uncontentious AI systems, such as those used in video games or spam filters, will hardly be impacted by the AI Act. And the high-risk systems mentioned above only become subject to the regulation when they enter the market or become operational,’ added Sarah Sterz. There will continue to be no restrictions on research and development, in either the public or private sphere.
‘I see little risk of Europe being left behind by international developments as a result of the AI Act,’ said Hermanns. In fact, Hermanns and his colleagues take an overall favourable view of the AI Act – the first piece of legislation that provides a legal framework for the use of artificial intelligence across an entire continent. ‘The Act is an attempt to regulate AI in a reasonable and fair way, and we believe it has been successful.’
Original publication
Hermanns, H., Lauber-Rönsberg, A., Meinel, P., Sterz, S., Zhang, H. (2024). AI Act for the Working Programmer. Preprint; to appear in AISoLA 2024, Springer LNCS.
Questions can be addressed to:
Prof. Dr. Holger Hermanns
Tel.: +49 681 302-5630
Email: hermanns@cs.uni-saarland.de
Sarah Sterz
Tel.: +49 681 302-5589
Email: sterz@depend.uni-saarland.de
Method of Research
Systematic review
Subject of Research
Not applicable
Article Title
AI Act for the Working Programmer (Preprint)
Article Publication Date
23-Jul-2024