
NIST Works to Create AI Risk Management Framework

Agency Now Seeks Feedback to Help Address Governance Challenges
(Photo: Gerd Altmann/Pixabay)

Citing a need to secure artificial intelligence technologies, the National Institute of Standards and Technology is developing risk management guidance for the use of AI and machine learning, the agency has announced.


A NIST request for information appearing in the Federal Register seeks stakeholder input to shape a risk management framework and accompanying guidance that will help technology developers, users and evaluators improve the trustworthiness of AI systems.

The request from NIST follows direction from Congress to the agency, which is part of the U.S. Commerce Department, as well as an executive order signed in February 2019 by then-President Donald Trump.

The congressional action and executive order direct NIST to create "a plan for federal engagement in the development of technical standards and related tools in support of … systems that use AI technologies."

"Each day it becomes more apparent that artificial intelligence brings us a wide range of innovations and new capabilities that can advance our economy, security and quality of life. It is critical that we are mindful and equipped to manage the risks that AI technologies introduce, along with their benefits," Deputy Commerce Secretary Don Graves says. "This [framework] will help designers, developers and users of AI take all of these factors into account - and thereby improve U.S. capabilities in a very competitive global AI market."

Meeting a Major Need

NIST says it hopes to understand how organizations and individuals involved with developing and using AI systems might be able to address cybersecurity, privacy and safety risks.

"The [framework] will meet a major need in advancing trustworthy approaches to AI to serve all people in responsible, equitable and beneficial ways," says Lynne Parker, director of the National AI Initiative Office in the White House Office of Science and Technology Policy. "AI researchers and developers need and want to consider risks before, during and after the development of AI technologies, and this framework will inform and guide their efforts."

In its request for information, NIST notes: "Trust is established by ensuring that AI systems are cognizant of and are built to align with core values in society, and in ways which minimize harm to individuals, groups, communities, and societies at large."

"Inside and outside the U.S., there are diverse views about what that entails, including who is responsible for installing trustworthiness" throughout the product life cycle, NIST says.

In response, the standards body endeavors to "cultivate the public's trust in … AI in ways that enhance economic security," NIST continues.

AI Framework

Components of the framework, the agency says, will focus on trustworthiness, explainability and interpretability, data reliability, privacy, robustness, safety, security and mitigation of unintended or harmful uses; an illustrative sketch of how teams might track those characteristics follows the list below. The framework will be:

  • Consensus-driven and regularly updated: It will also offer common definitions.
  • In plain language: It will be understandable to a broad audience, with sufficient technical depth.
  • Adaptable: It will apply to a variety of organizations, technologies, life cycle phases, sectors and uses.
  • Risk-based: It will be voluntary and nonprescriptive.
  • Readily usable and consistent: It will be usable as part of an organization's broader risk management strategy and consistent with other approaches to managing risk.
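Because those characteristics are qualitative, it may help to see how a team could begin tracking them. The following minimal Python sketch is purely hypothetical - it is not from NIST, and every name in it is invented for illustration - but it shows one way a development team might record per-system assessments against the characteristics the agency lists.

```python
# Hypothetical sketch only: NIST has not yet published the framework, and none
# of these names come from the agency. This shows one way a team might track
# per-system assessments against the characteristics named in NIST's RFI.
from dataclasses import dataclass, field

# Trustworthiness characteristics as listed in the article above.
CHARACTERISTICS = (
    "explainability",
    "interpretability",
    "data reliability",
    "privacy",
    "robustness",
    "safety",
    "security",
    "mitigation of unintended or harmful uses",
)

@dataclass
class AIRiskProfile:
    """Illustrative per-system record of trustworthiness assessments."""
    system_name: str
    # Maps each characteristic to a short, human-readable assessment note.
    assessments: dict = field(default_factory=dict)

    def unassessed(self):
        """Return the characteristics not yet reviewed for this system."""
        return [c for c in CHARACTERISTICS if c not in self.assessments]

# Example usage: flag what still needs review before deployment.
profile = AIRiskProfile(system_name="document-classifier")
profile.assessments["privacy"] = "Training data de-identified; review pending."
print(profile.unassessed())  # prints every characteristic still awaiting review
```

Because the framework is to be voluntary and nonprescriptive, any real tooling of this kind would be shaped by an organization's own risk management processes rather than dictated by NIST.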

"For AI to reach its full potential as a benefit to society, it must be a trustworthy technology," says Elham Tabassi, federal AI standards coordinator for NIST and a member of the National AI Research Resource Task Force. "While it may be impossible to eliminate the risks inherent in AI, we are developing this guidance framework [in a way] we hope will encourage its wide adoption."

Responses to NIST's request for information are due Aug. 19. The standards agency will hold a workshop in September to shape the framework.

Other AI Activity

AI has remained a substantial focus for the U.S. government in recent weeks.

At a National Security Commission on Artificial Intelligence conference this month, U.S. Secretary of State Antony Blinken emphasized the importance of collaborating with like-minded democratic nations on AI and emerging technologies. Blinken said the U.S. must develop an AI governance model that embodies common values.

At the same conference, Defense Secretary Lloyd Austin told attendees that more than 600 AI research, development and testing efforts are in progress across the Defense Department, and that AI remains a top research and development priority.

In a web briefing this month, the National Security Agency's Jason Wang, technical director for the Computer and Analytic Sciences Research Group, called AI the next cybersecurity frontier, touting its ability to respond to dynamic problems faster than humans can, while citing a need to "mature" AI security mechanisms.

In addition, the Government Accountability Office recently developed a framework for the use of AI by federal agencies - covering governance, data, performance and monitoring.

"As a nation, we have yet to grasp the full benefits or unwanted effects of artificial intelligence," the GAO's report says.

Its four parts address accountability around implementation (governance), the quality and reliability of data sources (data), consistency of results (performance), and ongoing reliability and relevance (monitoring).

In response to directives included in President Joe Biden's May executive order on cybersecurity, NIST also recently defined and later published best practices for "critical software" (see: NIST Publishes 'Critical Software' Security Guidance).


About the Author

Dan Gunderman

Former News Desk Staff Writer

As a staff writer on the news desk at Information Security Media Group, Gunderman covered governmental and geopolitical cybersecurity updates from across the globe. Previously, he was editor of Cyber Security Hub, or CSHub.com, covering enterprise security news and strategy for CISOs, CIOs and other top decision-makers. Earlier, he was a reporter for the New York Daily News, where he covered breaking news, politics, technology and more. Gunderman has also written and edited for such news publications as NorthJersey.com, Patch.com and CheatSheet.com.



