The GAO recently published “Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities” (along with a highlights document), and government contractors should heed the insights from the Comptroller General’s Forum on the Oversight of Artificial Intelligence.
Recognizing that AI “inputs and operations are not always visible,” the GAO sought to “identify key practices to help ensure accountability and responsible AI use by federal agencies and other entities involved in the design, development, deployment, and continuous monitoring of AI systems.” Highlights at 1. The GAO’s AI accountability framework is organized around four principles:
- “Governance: Promote accountability by establishing processes to manage, operate, and oversee implementation;”
- “Data: Ensure quality, reliability, and representativeness of data sources and processing;”
- “Performance: Produce results that are consistent with program objectives;”
- “Monitoring: Ensure reliability and relevance over time.”
According to the GAO, each principle should be considered at two levels.
Governance must be considered at both the organizational level and the system level. At the organizational level, AI must have defined goals, and stakeholders must ensure sustained oversight to foster public trust and mitigate risk. Framework at 5. At the system level, AI must not only comply with relevant laws and standards but must also give external stakeholders access to information about the system's design, operation, and limitations. Id.
Data must be considered at the model development level and the system operation level. At the model development level, sources of data for developing the models must be documented, and the collection and augmentation of such data should be detailed to confirm reliability. Id. at 6. At the system operation level, biases and security must be regularly tested and assessed. Id.
Performance must be considered at the component level and the system level. At the component level, the performance of each component should be measured against defined metrics to confirm that outputs are appropriate within the AI's operational context. Id. at 7. At the system level, AI must be subject to human supervision to ensure accountability and to assess its performance, outputs, and potential biases. Id.
Monitoring must be considered in terms of both continuous monitoring and expanded use. AI must be monitored continuously to ensure that it is performing as intended, including by documenting all monitoring activities and corrective actions. Id. at 8. Changes in data and models must also be assessed on an ongoing basis to confirm that the AI continues to perform appropriately and to determine whether its use can be expanded into other settings. Id.
As international organizations, Congress, and the Executive Branch continue to weigh in on the development, implementation, and oversight of AI, Nixon Peabody will continue to monitor these developments and offer practical considerations.