U.S. Navy Not Ready to Set Sail on AI Just Yet

Uncle Sam is generally bullish on all things artificial intelligence – among other ventures, the federal government is leveraging AI to safeguard vital government software, such as the code that manages the internet and runs the nation’s infrastructure.

When it comes to the armed forces, government leaders are more reluctant to embrace AI unless significant guardrails are in place.

Take the U.S. Navy, which is reportedly proceeding with caution on full-scale AI implementation.

In an internal memo, the Navy’s acting Chief Information Officer, Jane Rathbun, issued “interim guardrail guidance” for the use of generative artificial intelligence (AI) and large language models (LLMs).

“Generative AI models present unique and exciting opportunities for the Department of the Navy,” the memo stated. “These models have the potential to transform mission processes by automating and executing certain tasks with unprecedented speed and efficiency. Other service components are already performing extensive experiments with this advancing technology. This has led to an extensive demand for their adoption across the DON.”

Citing “powerful” artificial intelligence-based algorithms with the “extraordinary ability to return human-like responses to user-created prompts,” Rathbun says some LLMs warrant close scrutiny before being deployed by the Navy.

Such AI models include OpenAI’s ChatGPT, Google’s Bard, and Meta’s LLaMA, the memo stated. “While offering great promise, LLMs warrant a cautious approach as indicated by the concern expressed by the National Command Authority, emerging technology industry leaders, and academia,” it noted.

A Path to Compliance

In the memo, Rathbun laid out guidance across several areas.

On Guidance. Generative AI tools “are not infallible and can produce ‘hallucinations’ and biased results that can lead to fabricated data that appears authentic,” the memo stated. “For this, and many other reasons, these tools must be accompanied by a robust review process which includes the critical thinking skills of human expertise.”

On Military Use. Commercial AI language models “are not recommended for operational use cases until security control requirements have been fully investigated, identified, and approved for use within controlled environments,” the memo noted.

On Data Protection and Security. AI users must understand that LLMs retain every prompt they are given. This presents a “persistent Operational Security (OPSEC) risk,” Rathbun says; a sketch of what pre-submission screening could look like appears below.

According to the memo, the use of proprietary or sensitive information “poses a unique security risk and has the potential to lead to data compromise when employed by commercial generative AI models.”

Consequently, the aggregation of individual generative AI prompts and outputs could lead to an inadvertent release of sensitive or classified information.

“In response to these security concerns, the DON is in the process of establishing rules of engagement and access to LLM technology through Jupiter, the DON Enterprise data and analytics platform,” the memo states. “This will be accompanied by established security measures to ensure continued alignment with existing policies and the protection of government data.”
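The memo does not describe how such screening would work in practice, but the underlying OPSEC concern is concrete: anything typed into a commercial model may be retained. As a purely illustrative, hypothetical sketch (the patterns and function names below are assumptions, not anything drawn from the DON or its Jupiter platform), here is what a minimal pre-submission screen might look like in Python:

```python
import re

# Illustrative patterns only; a real filter would follow the organization's
# own classification markings and data-handling guidance.
SENSITIVE_PATTERNS = [
    re.compile(r"\b(TOP SECRET|SECRET|CONFIDENTIAL|CUI|FOUO)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),                    # SSN-like numbers
    re.compile(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.mil\b"),  # .mil addresses
]

def screen_prompt(prompt: str) -> str:
    """Reject a prompt before it leaves the controlled environment.

    Commercial LLM services may retain every prompt they receive, so
    sensitive content has to be caught before submission, not after.
    """
    for pattern in SENSITIVE_PATTERNS:
        if pattern.search(prompt):
            raise ValueError("Prompt blocked: possible sensitive content.")
    return prompt

# The screen sits between the user and any outbound API call.
safe_prompt = screen_prompt("Summarize this public press release.")
```

The point of such a design is that the check happens on the government side of the wire; the memo’s premise is that once a prompt reaches a commercial service, it can no longer be clawed back.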

Risk Mitigation Against a Force Multiplier

The U.S. Navy memo also includes a verification and validation section, which calls for a “meticulous information validation process before model generation,” with “human” oversight.
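The memo does not spell out what that validation process looks like. As one illustrative possibility (all names here are hypothetical), the sketch below shows a common human-in-the-loop pattern: model output is quarantined by default and cannot be released downstream until a named reviewer approves it.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReviewedOutput:
    """Wraps model output so it cannot be used until a human signs off."""
    text: str
    approved: bool = False
    reviewer: Optional[str] = None

    def approve(self, reviewer: str) -> None:
        # A subject-matter expert validates the content, since generative
        # models can "hallucinate" plausible but fabricated details.
        self.reviewer = reviewer
        self.approved = True

def release(output: ReviewedOutput) -> str:
    """Refuse to pass along any output that has not been reviewed."""
    if not output.approved:
        raise PermissionError("Output has not passed human review.")
    return output.text

# Output enters the pipeline unapproved by default.
draft = ReviewedOutput(text="Generated summary of maintenance logs ...")
draft.approve(reviewer="analyst_on_duty")  # hypothetical reviewer ID
print(release(draft))
```

Making “unapproved” the default state is the design choice that matters here: the critical-thinking step the memo demands cannot be skipped by accident.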

Simultaneously, the memo sets out responsibility and accountability guardrails, noting that “the responsibility and accountability for user-induced vulnerabilities, violations and unintended consequences incurred by the use and adoption of LLMs ultimately resides with each individual organization’s respective leadership.”

The U.S. Navy, then, is slowing its roll with AI even as the memo calls the technology a “force multiplier.” That’s all by design.

“Gen AI . . . comes with inherent security vulnerabilities, risks, and potential secondary consequences, which should be carefully considered while we learn how to employ them safely for the benefit of the DON’s mission,” Rathbun adds.



