New White House directive prods DOD, intelligence agencies to move faster adopting AI capabilities
A new national security memorandum from President Biden aims to speed up the Pentagon’s and intelligence community’s adoption of emerging artificial intelligence capabilities while addressing security concerns associated with the technology.
The document, released Thursday, includes provisions to accelerate the U.S. government’s use of AI to further national security missions, including by tapping into fast-moving innovation in the private sector.
Speaking to military service members and others at the National Defense University during the rollout of the guidance, White House National Security Adviser Jake Sullivan said the United States is currently the top dog when it comes to “latent” capabilities that could be applied to these types of missions, but America risks squandering its lead if it doesn’t move faster in fielding new tools to its forces.
“The core insight that we’ve come to in the last couple of years is that we’re ahead when it comes to the latent capability, the United States has the best latent capability on AI in the world. How do we transform that into actual application on the battlefield, in our logistics, in our intelligence enterprise?” Sullivan said.
“If you think about the … national security memorandum, what it actually tries to do is put down a roadmap that says: Here’s how the national security enterprise, the joint force, the intelligence community should work with private sector partners, and here’s how that you can work in a transparent, effective and, yes, legal way to make sure that we are adopting private sector developed technologies, capabilities and solutions into the force, into our intelligence community, in a way that’s also shared across the entire national security enterprise. So the whole design of this second pillar is about answering … how do we take a solution that [a company like] IBM has developed that has applications for warfighting or for logistics or for intelligence analysis, incorporate it, and then also make sure that it is available on a consistent basis across the board and that we’re not setting up multiple different, competing or inconsistent solutions?” he said.
Sullivan added: “Before this NSM was passed, some of this work was getting done in a patchwork way by, you know, entrepreneurial people within the various services, but for the first time, we now have a framework to say: Here is a demand signal to industry, we want what you are offering, we want to incorporate it rapidly, effectively, comprehensively and in a way that reduces overlap, gaps and conflicts.”
The Pentagon is pursuing new artificial intelligence tools with the hope of deploying them across its vast enterprise, from back offices to the battlefield.
AI-enabled applications will transform the way the U.S. military trains and fights, but it’s not easy for government officials to predict exactly what form they will take and how fast they will come, Sullivan noted.
“Bottom line: Opportunities are already at hand and more soon will be. So we’ve got to seize them quickly and effectively, or our competitors will first,” Sullivan said, adding that significant technical, organizational and policy changes are needed to ease collaboration with the innovators driving the development of the tech.
Agencies are being directed to look for ways to boost collaboration with nontraditional vendors, such as leading AI companies and cloud computing providers.
“In practice, that means quickly putting the most advanced systems to use in our national security enterprise just after they’re developed, like how many in private industry are doing. We need to be getting fast adoption of these systems, which are iterating and advancing, as we see every few months,” Sullivan said.
The new memo highlights the need for more coordinated and effective acquisition and procurement systems across national security agencies, including a bolstered capacity to assess, define and articulate AI-related requirements and greater accessibility for companies in this sector that lack significant prior experience working with Uncle Sam.
The guidance directs the Defense Department and Office of the Director of National Intelligence, in cooperation with the White House Office of Management and Budget and other agencies, to establish a working group within 30 days to address issues involving the procurement of artificial intelligence technologies by DOD and the intel community and develop new recommendations for acquiring them for use on national security systems.
Within 210 days of the memo’s issuance, the working group is tasked with providing written recommendations to the Federal Acquisition Regulatory Council for changing existing regulations.
“DOD and ODNI shall seek to engage on an ongoing basis with diverse United States private sector stakeholders — including AI technology and defense companies and members of the United States investor community — to identify and better understand emerging capabilities that would benefit or otherwise affect the United States national security mission,” the memo states.
However, officials also note that there are numerous risks associated with adopting artificial intelligence tools for national security missions.
The Pentagon previously laid out a plan for implementing “responsible AI” and updated its autonomous weapons policy, both of which are intended to provide safeguards for making sure artificial intelligence-enabled systems don’t go off the rails after they’re developed and fielded.
The new White House memo outlines a number of concerns related to the deployment of AI tech on national security systems, including risks to physical safety; privacy; discrimination and bias; inappropriate use; lack of transparency and accountability; data spillage; poor performance; and deliberate manipulation and misuse.
Operators may not fully understand the capabilities and limitations of AI tools, including in warfighting scenarios. That could hinder their ability to exercise appropriate levels of human judgment. Additionally, insufficient training programs and guidance could result in over-reliance on these types of systems, including so-called “automation bias,” the memo noted.
There are also concerns that, without proper safeguards, U.S. national security agencies’ use of the technology could end up benefiting adversaries.
“AI systems may reveal aspects of their training data — either inadvertently or through deliberate manipulation by malicious actors — and data spillage may result from AI systems trained on classified or controlled information when used on networks where such information is not permitted,” the memo states. Additionally, “foreign state competitors and malicious actors may deliberately undermine the accuracy and efficacy of AI systems, or seek to extract sensitive information from such systems.”
Within 180 days of the issuance of the memorandum, the heads of the Defense Department, ODNI and other relevant agencies are tasked with updating guidance to their components on AI governance and risk management for national security systems, which will subsequently be reviewed on an annual basis and updated as needed.
A “Framework to Advance AI Governance and Risk Management in National Security” is to be approved by the NSC Deputies Committee and reviewed “periodically” to determine whether changes are needed to address risks identified in the memo.