The European Commission released draft legislation on artificial intelligence (AI). The proposed text lays out a framework for addressing critical issues related to the implementation of AI practices. The draft legislation is thoughtfully crafted and develops key criteria for mitigating the risks of AI use while recognizing its benefits.
The legislation proposes jurisdiction over providers that offer AI-based systems in Europe, regardless of the providers’ location, and over users of AI-based systems located in Europe. Significantly, jurisdiction also extends to providers and users located outside Europe if the output of the AI-based system is used in Europe. Application of such a principle can become complex and consequential, for example, for businesses that use data on European citizens and entities without actively operating in Europe. The scope and definitions of key terms, such as “provider,” “user,” “output,” and others, may also be re-evaluated during the legislative process. The legislation does not seek jurisdiction over AI-based systems developed or used exclusively for military purposes.
The legislation creates a risk-based categorization of AI uses: unacceptable risk, high risk, and minimal risk. It prohibits certain AI practices as posing unacceptable risk. These include, among others, (i) deploying “subliminal techniques beyond a person’s consciousness” or “exploiting vulnerabilities of a specific group of persons due to their age, [and] physical or mental disability” to materially change a person’s behavior, leading to physical or psychological harm to that person or another person; (ii) evaluating or classifying the trustworthiness of a person based on social behavior or known/predicted personal characteristics, leading to harmful treatment of the person; and (iii) using “real-time” biometric identification systems in public places for policing, unless related to the prevention of crimes, terrorist attacks, and the like.
The legislation mandates specific requirements for high-risk uses of AI, including uses that may have a significant harmful impact on the health, safety, and fundamental rights of persons. These include, among others, use of AI as a product or as a safety component of a product, for screening employment applications, for evaluating credit scores, and the like. Implementation of high-risk AI-based systems would, under the proposal, require (i) establishment and maintenance of a risk management system; (ii) use of appropriate training, validation, and testing data; (iii) preparation of specific technical documentation related to the AI-based system; (iv) record-keeping of incidents and accidents; (v) transparency and provision of information to users; (vi) enablement of human oversight; and (vii) achievement of an appropriate level of accuracy, robustness, and cybersecurity. Additionally, providers would be required to comply with a number of measures, such as undergoing a “conformity assessment procedure” before implementing high-risk AI-based systems, registering the AI-based system in a public database in Europe, and taking appropriate corrective action when needed.
The legislation provides guidelines for enforcing the regulation of AI-based systems and for imposing penalties for non-compliance. The proposed penalties appear substantial and may be deliberated during the legislative process. Finally, the legislation proposes establishing a European Artificial Intelligence Board to provide advice and assistance in the development and consistent application of the regulations.
The draft legislation is currently open for comments and will be presented to the European Parliament for further consideration. The legislative framework in the proposal, with its underlying focus on ethical principles and risk mitigation, may well be largely adopted. However, some definitions and provisions will likely change, subject to negotiations during the legislative process. We will monitor any such developments and assess their potential impact on the business decisions of companies developing or using AI-based systems.