6th Oct 2023
Introduction
In recent years, the development of artificial intelligence (AI) has advanced rapidly, promising economic and societal benefits and raising expectations for innovation across a wide spectrum of industries. However, the increasing use of AI has also raised concerns about its ethical and legal implications. The scale at which AI can be used, the speed at which AI systems operate and the complexity of the underlying models may pose challenges to both market participants and supervisory authorities. Consequently, most regulators are still in the early stages of developing AI-specific governance principles or guidance. For instance, in 2021 the European Insurance and Occupational Pensions Authority (EIOPA) published a report on AI governance principles, addressing the importance of digital ethics in insurance. Similarly, in November 2021 the European Banking Authority (EBA) published a discussion paper on machine learning (ML) for internal ratings-based (IRB) models, aiming to address the challenges and opportunities of using ML in IRB models for calculating regulatory capital for credit risk.
Notwithstanding the above, the most important EU effort to develop a regulatory framework on AI came from the European Commission, which introduced its AI strategy package in April 2021. In order to promote the development of AI and address the potentially high risks it poses to safety and fundamental rights, the Commission presented the following:[1]
(a) the Proposal for a Regulatory Framework on Artificial Intelligence (the “Proposal” or the “AI Act”);
(b) the Communication on Fostering a European approach to Artificial Intelligence; and
(c) the review of the Coordinated Plan on Artificial Intelligence.
The AI Act is expected to be a landmark piece of legislation governing the use of AI in the EU. According to the explanatory memorandum of the Proposal, the AI Act aims to achieve the twofold objective of promoting the development and use of AI technology and addressing the risks associated with certain uses of such technology, by proposing a legal framework for “trustworthy” AI. The Commission further clarified that the Proposal is based on EU values and fundamental rights (“the rules available in the EU market… shall be human centric”), and that the AI Act aims to motivate users to embrace AI-based solutions, while simultaneously encouraging businesses to develop them.
Main objectives and subject matter of the AI Act
The AI Act has the following specific objectives, as outlined in the explanatory memorandum of the Proposal:
(a) to ensure that AI systems placed on the EU market and used in the EU are safe and respect existing law on fundamental rights and EU values;
(b) to ensure legal certainty in order to facilitate investment and innovation in AI;
(c) to enhance governance and the effective enforcement of existing law on fundamental rights and of the safety requirements applicable to AI systems; and
(d) to facilitate the development of a single market for lawful, safe and trustworthy AI applications and prevent market fragmentation.
Scope and Definitions of the AI Act (Title I)
The AI Act is expansive in its scope of application, since it applies to providers that place on the market or put into service AI systems in the EU (irrespective of whether those providers are established within the EU or in a non-EU country). It also applies to users of AI systems located within the EU, as well as to providers and users of AI systems located in a third country, “where the output produced by the system is used in the Union”.
With respect to the definitions, the AI Act introduces a definition of “artificial intelligence system” (“AI system”).[2] According to the explanatory memorandum, this definition aims to be “as technology neutral and future proof as possible”, taking into account the fast technological and market developments related to AI. This is achieved through the inclusion of Annex I in the AI Act, which lists the techniques and approaches for the development of AI systems, such as machine learning approaches, logic- and knowledge-based approaches and statistical approaches.
The AI Act also introduces a definition of “provider”, which covers not only natural and legal persons but also public authorities and other bodies that develop an AI system, or that have an AI system developed, with a view to placing it on the market or putting it into service.
Prohibited Artificial Intelligence Practices (Title II)
Title II contains a list of prohibited artificial intelligence practices. In this respect, the AI Act follows a risk-based approach, distinguishing between uses of AI that create:
(a) an unacceptable risk;
(b) a high risk; and
(c) a low or minimal risk.
High-Risk AI Systems (Title III)
Title III contains special regulations for AI systems that may pose a high risk to the health and safety or the fundamental rights of natural persons. These high-risk systems are allowed on the EU market, provided that certain mandatory requirements are met (under certain EU harmonisation legislation listed in Annex II of the AI Act) and an ex-ante conformity assessment is completed.
According to the explanatory memorandum of the AI Act, Chapter 1 sets the classification rules and identifies two main categories of high-risk AI systems:
(a) AI systems intended to be used as safety components of products that are subject to third-party ex-ante conformity assessment under the New Legislative Framework (NLF)[3] or other harmonised EU legislation listed in Annex II;[4] and
(b) other stand-alone AI systems with mainly fundamental rights implications, which are explicitly listed in Annex III.
Annex III lists AI systems which relate to the following specific areas (and which the Commission would be able to update by way of a delegated act in accordance with Article 7 of the AI Act):
(a) biometric identification and categorisation of natural persons;
(b) management and operation of critical infrastructure;
(c) education and vocational training;
(d) employment, workers management and access to self-employment;
(e) access to and enjoyment of essential private services and public services and benefits;
(f) law enforcement;
(g) migration, asylum and border control management; and
(h) administration of justice and democratic processes.
Chapter 2 sets out several legal requirements for high-risk AI systems in relation to data and data governance (Article 10), technical documentation (Article 11), record keeping (Article 12), transparency and the provision of information to users (Article 13), human oversight (Article 14), and accuracy, robustness and cybersecurity (Article 15).
Chapter 3 provides a set of obligations for providers of high-risk AI systems, which extend also to users and other participants such as importers, distributors and other third parties.
Chapter 4 sets out the regulatory framework for notified bodies, i.e. the conformity assessment bodies that perform third-party conformity assessments. Notified bodies act as independent third parties in conformity assessment activities, which include the testing, certification and inspection of high-risk AI systems. In addition, Chapter 4 obliges all member states to designate or establish a notifying authority that will be responsible for setting up and carrying out the necessary procedures for the registration and monitoring of the notified bodies.
Chapter 5 explains in detail the conformity assessment procedures to be followed for each type of high-risk AI system.
It is clarified in the Proposal that the provisions of the AI Act which relate to the conformity assessment will be integrated into existing safety legislation (as outlined in Annex II) to ensure consistency and minimise additional burdens. Therefore, the requirements of the AI Act will be assessed as part of the existing conformity assessment procedures under the relevant legislation of Annex II.
Transparency Obligations for Certain AI Systems (Title IV)
Title IV stipulates the transparency obligations of AI system providers that apply to systems that: (i) interact with natural persons; (ii) are used to detect emotions or determine the social category of a person via biometric data (biometric categorisation systems); or (iii) generate or manipulate content (deep fakes). The general principle underlying the transparency obligations is that, when an AI system performs the abovementioned functions, people must be informed accordingly. For instance, if an AI system is used to generate or manipulate video or audio content that resembles authentic content, there should be an obligation to disclose that the content has been generated through an AI system.
There are exceptions to the transparency obligations where the AI systems are used for legitimate purposes, such as law enforcement or the exercise of the right to freedom of expression.
Measures in Support of Innovation (Title V)
The AI Act aims to strike a fair balance between the regulation of AI systems and innovation. To that end, it encourages national competent authorities to establish regulatory sandboxes, which would allow businesses to test and experiment with new and innovative products and services under the supervision of the national competent authority.
Moreover, an important feature of this part of the AI Act encourages member states to take measures to reduce the regulatory burden on SMEs and start-ups (small-scale providers). Such measures include the reduction of fees for conformity assessment and the establishment of dedicated channels for communication with small-scale providers.
Governance and Implementation (Titles VI, VII and VIII)
Title VI sets up the governance systems at EU and national levels. At EU level, the AI Act requires the establishment of a European Artificial Intelligence Board (composed of representatives from the member states and the Commission), whose aim will be to ensure the effective and harmonised implementation of the AI Act by supporting the national supervisory authorities and providing advice and expertise to the Commission.[5]
At national level, the AI Act requires member states to designate one or more national competent authorities and, among them, the national supervisory authority, for the purpose of supervising the application and implementation of the AI Act.
Title VII aims to facilitate the monitoring work of the Commission and national authorities through the establishment of an EU-wide database for stand-alone high-risk AI systems.
Title VIII requires providers of AI systems to establish a post-market monitoring[6] system to ensure that possible risks emerging from AI systems which continue to “learn” after being placed on the market or put into service can be efficiently traced and handled. This part sets out the post-market monitoring and reporting obligations for providers of AI systems, as well as obligations for investigating AI-related incidents and malfunctioning.
To ensure the appropriate and effective enforcement of the above, the system of market surveillance and product conformity created by Regulation (EU) 2019/1020 on Market Surveillance and Compliance of Products[7] should be applied in its entirety. The aforementioned regulation and the AI Act empower public authorities to intervene where AI systems generate unexpected risks that require prompt action.
Codes of Conduct (Title IX)
Title IX aims to encourage providers of non-high-risk AI systems to create and implement their own codes of conduct. Such codes of conduct may relate, inter alia, to environmental sustainability, accessibility for persons with disabilities and stakeholders’ participation in the design and development of AI systems.
Confidentiality and Penalties (Title X)
Title X, Article 70 stresses the requirement for all parties to uphold the confidentiality of information and data, and sets out rules for the exchange of information obtained during the implementation of the regulation. Article 71 requires member states to lay down rules for effective and proportionate penalties, including administrative fines, for violations of the AI Act.
The AI Act provides that administrative fines of up to EUR 30 million or, if the offender is a company, up to 6% of its total worldwide annual turnover (for the preceding financial year), whichever is higher, will be imposed for the following infringements:
(a) non-compliance with the prohibition of the artificial intelligence practices referred to in Article 5; and
(b) non-compliance of the AI system with the requirements laid down in Article 10.
Non-compliance of the AI system with any requirements or obligations under the AI Act, other than those laid down in Articles 5 and 10, will be subject to administrative fines of up to EUR 20 million or, if the offender is a company, up to 4% of its total worldwide annual turnover (for the preceding financial year), whichever is higher.
The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request will be subject to administrative fines of up to EUR 10 million or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
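In practical terms, each penalty tier is a “whichever is higher” comparison between an absolute cap and a turnover-based cap. The short Python sketch below illustrates that calculation using the figures set out above; the tier labels and the function itself are our own illustrative shorthand and do not form part of the AI Act.

```python
# Illustrative sketch only: computes the upper limit of the administrative
# fine under the three penalty tiers described above. The tier labels and
# this helper function are hypothetical, not terms used in the AI Act.

TIERS = {
    # tier name: (absolute cap in EUR, share of total worldwide annual turnover)
    "articles_5_or_10": (30_000_000, 0.06),
    "other_requirements": (20_000_000, 0.04),
    "incorrect_information": (10_000_000, 0.02),
}

def max_fine_cap(tier: str, annual_turnover_eur: float | None) -> float:
    """Return the fine ceiling: the higher of the absolute cap and the
    turnover-based cap (the latter applies only where the offender is a company)."""
    absolute_cap, turnover_share = TIERS[tier]
    if annual_turnover_eur is None:  # offender is not a company
        return absolute_cap
    return max(absolute_cap, turnover_share * annual_turnover_eur)

# Example: a company with EUR 1 billion worldwide annual turnover breaching
# Article 5 faces a ceiling of max(EUR 30m, 6% of EUR 1bn) = EUR 60 million.
print(max_fine_cap("articles_5_or_10", 1_000_000_000))  # 60000000.0
```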
Conclusion
To sum up, the proposed AI Act represents a major step towards addressing the ethical, legal and societal challenges posed by the rapid advancement of AI technologies. The AI Act aims to balance the potential risks and benefits of AI, while also safeguarding human rights, privacy and transparency.
The proposed framework recognises the need for comprehensive regulation to ensure accountability, fairness and safety in the development, deployment and use of AI systems. It highlights the importance of human-centric AI, emphasising that AI should always serve human interests and should not compromise fundamental rights or exacerbate social inequalities.
However, it is important to acknowledge that implementing a comprehensive regulatory framework on AI is a complex task, since balancing innovation with regulation requires continuous assessment and adaptation to keep pace with fast-evolving AI technology. The AI Act seeks to strike this balance between innovation and regulation, although it has attracted considerable criticism, mainly on the grounds that overregulation may hinder innovation and that compliance with the AI Act could impose a significant burden on EU companies, especially SMEs.
Next Steps
It is anticipated that the AI Act will be finalised after the conclusion of the negotiations among the three EU policymaking bodies (the Parliament, the Council and the Commission), and it is expected to be enacted by the end of 2023.
[1] The Proposal for a Regulatory Framework on Artificial Intelligence, the Communication on Fostering a European approach to Artificial Intelligence and the review of the Coordinated Plan on Artificial Intelligence can all be found at: https://digital-strategy.ec.europa.eu/en/library/communication-fostering-european-approach-artificial-intelligence
[2] ‘artificial intelligence system’ (AI system) means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with;
[3] Adopted in 2008, the New Legislative Framework (NLF) aims to improve the internal market for goods and strengthen the conditions for placing a wide range of products on the EU market. It is a package of measures that aim to improve market surveillance and boost the quality of conformity assessments. It also clarifies the use of CE marking and creates a toolbox of measures for use in product legislation.
[4]In Annex II-A, AI systems intended to be used as a safety component of a product, or themselves a product, which are already regulated under the NLF (e.g. machinery, toys, medical devices) and, in Annex II-B, other categories of harmonised EU law (e.g. boats, rail, motor vehicles, aircraft, etc.).
[5] It will also collect and share best practices among the member states.
[6] ‘post-market monitoring’ means all activities carried out by providers of AI systems to proactively collect and review experience gained from the use of AI systems they place on the market or put into service for the purpose of identifying any need to immediately apply any necessary corrective or preventive actions.
[7] The regulation aims to protect consumers’ health and safety, the environment and other public interests by improving and modernising market surveillance.