Different levels of government struggle with artificial intelligence regulation

BY Daniel J. Schwartz

Recognizing the importance of artificial intelligence (AI) to the economy and our daily lives, several branches of the federal government, along with various state executive, legislative, and rule-making bodies, have recently addressed whether and how to regulate the development, use, and protection of AI. The developments discussed below underscore one core principle: governments are still grappling with when and how to regulate and legislate AI technologies. These issues will become more prevalent as the economy emerges from COVID-19 shutdowns.

As COVID-19 began sweeping across the country, the White House Office of Science and Technology Policy published its 2020 Annual Report on the “American Artificial Intelligence Initiative” (“AI Initiative”), highlighting its most recent developments. While the AI Initiative outlines long-term goals concerning the United States’ prominence in AI, the report focuses primarily on internal governmental use and regulation of AI. For instance, the White House has proposed doubling the government’s FY2021 budget dedicated to non-defense AI research and development and has provided guidance to all agencies to further the “safe development, testing, deployment, and adoption of AI technologies” by “driving the development of appropriate AI technical standards” and by increasing access to federal datasets for AI R&D.

In addition, the U.S. Patent & Trademark Office (USPTO) has embarked on its own Artificial Intelligence Initiative, issuing two separate requests for public comments on various AI issues. The requests have drawn substantial interest and feedback from industry, academia, and legal practitioners. The USPTO has not yet instituted any substantive changes in response to the recent public submissions (a summary and analysis will be provided here shortly), but it has made certain rulings affecting AI and patent issues. For instance, the USPTO recently decided that only natural persons qualify as inventors, addressing the question of whether a “machine” could invent something and be awarded a patent for that invention. The USPTO’s decision is similar to decisions by the European Patent Office and the UK Patent Office.

Similar to the federal government, local governments are looking at unique issues related to AI. New York City created the first-of-its-kind “Algorithm Task Force” to address the use of automated decision-making across various functions of local government (from determining which schools children attend to deciding whether individuals should be released on bail before trial). The Task Force completed its work in late 2019 with few tangible developments to its credit, but it underscored the difficulties governmental units have in determining when and how to regulate these technologies.

Litigation involving AI technologies has also begun. In addition to patent litigation (discussed in a separate post), litigation has spanned various areas of law. In Wisconsin, the state Supreme Court addressed the use of AI technologies in sentencing determinations, finding that criminal defendants do not have a right to know how algorithms produced the risk factors the trial court used in determining a prison sentence. In Michigan, litigation is ongoing over the state’s use of an automated system that wrongly identified more than 20,000 individuals as having committed unemployment benefits fraud. Using the results of that system, which was eventually determined to have been designed with flawed assumptions about who would commit fraud and how, the state began enforcement proceedings against thousands of Michigan residents. While litigation continues in these cases, Michigan is also considering legislation to address the use of such automated systems.

We will continue to monitor these developments and others as they impact how AI is developed and used throughout the United States.
