EU Commission Grapples with Regulation of Artificial Intelligence

By Aya M. Hoffman

A leaked white paper reveals that the European Commission is grappling with the question of how to regulate artificial intelligence across the European Union (EU). The Commission's paper outlines the key elements of a future comprehensive European legislative framework for artificial intelligence.

Although artificial intelligence is subject to a broad spectrum of existing legislation in the EU, including data protection, gender equality, consumer law, and product safety, the Commission acknowledged that the current state of the law "might not fully cover all of the specific risks that artificial intelligence brings." In particular, the Commission raised concerns regarding a lack of regulatory tools to effectively ensure that artificial intelligence complies with current requirements.

The white paper set forth five regulatory options for implementation across the EU: (i) a voluntary labeling framework for certified "ethical/trustworthy artificial intelligence," (ii) specific requirements for the use of artificial intelligence by public authorities, (iii) the application of a risk-based approach to regulation of artificial intelligence applications, (iv) targeted amendments to existing EU safety and liability legislation, and (v) a system of public oversight for artificial intelligence, building on the existing network of authorities for product and consumer safety.

Participation in the proposed labeling framework would be voluntary, but once developers opt in, they would be bound to comply with certain requirements for ethical and trustworthy artificial intelligence. The white paper suggests that the labeling requirements would build upon the Ethics Guidelines for Trustworthy Artificial Intelligence, published in April 2019. The guidelines established seven criteria necessary for artificial intelligence systems to be considered trustworthy: (i) human agency and oversight, (ii) technical robustness and safety, (iii) privacy and data governance, (iv) transparency, (v) diversity, non-discrimination, and fairness, (vi) societal and environmental well-being, and (vii) accountability.

Regulation of the use of artificial intelligence by public authorities would focus on a particular area of public concern and could have "an important signaling effect on the private sector." Of particular note, the white paper suggests that requirements for public authorities could be coupled with specific rules on the use of facial recognition technology, in both the public and private sectors, and dovetail with the provisions of the General Data Protection Regulation. To allow adequate time to consider the risks of facial recognition technology, the Commission proposed a three-to-five-year ban on the use of such technology in public spaces. However, the white paper acknowledged that such a ban could hamper the development of facial recognition technologies.

The white paper also included a proposal for a differentiated, risk-based approach to regulation, in which certain sectors and applications designated "high-risk" would be subject to greater regulation than "low-risk" uses of artificial intelligence systems. The types of industries and activities subject to greater regulation might include health care, transportation, infrastructure, and predictive policing.

While recognizing that the EU has an existing body of product safety and liability laws, the white paper proposes the development of a new horizontal piece of legislation to establish transparency and accountability requirements, in conjunction with targeted amendments to existing legislation. The white paper envisions that each EU member state would appoint authorities to monitor and enforce the regulatory framework.

The final version of the European Commission's white paper on artificial intelligence is anticipated to be released in February 2020.
