The U.S. Army wants to better understand how its acquisition and contracting workforce could use generative AI to improve efficiency, and it is launching a pilot next month to explore that question.

Jennifer Swanson, deputy assistant secretary of the Army for data, engineering and software, said the effort will shed light on how the service’s acquisition and logistics enterprise could take advantage of generative AI tools to make processes like contract writing and data analysis more efficient.

“The pilot’s not just about increasing our productivity, which will be great, but also — what are the other things that we can do and what are the other industry tools that are out there that we might be able to leverage or add on,” Swanson said June 18 at Defense One’s Tech Summit in Arlington, Va.

The Army is the latest Defense Department agency to announce efforts to experiment with generative AI. The Air Force and Space Force last week unveiled their own experimental tool — the Non-classified Internet Protocol Generative Pre-Training Transformer, or NIPRGPT. And in 2023, the Navy rolled out a conversational AI program called Amelia that sailors could use to troubleshoot problems or get tech support.

Swanson said she’s optimistic about the potential for generative AI, especially for laborious specialties like contract writing and policy, where automation could relieve some of the strain on the Army’s workforce.

“In the area of contracts and in the area of policy, I think there’s a huge return on investment for us,” she said. “Might [AI] one day be able to write a contract? We hope so. But we’ve got to pilot and test it and make sure everybody’s comfortable with it first.”

The large language model the service will use for the effort is different from systems like ChatGPT, Swanson said, because it is trained on Army data. It will also provide citations indicating where the information it returns originated, a feature that will help the service fact-check that output.
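The article does not describe how the Army's model attaches citations, but the general pattern it points to, generating answers from a curated corpus and tagging them with source identifiers, can be sketched simply. The Python snippet below is a hypothetical illustration only: the corpus entries, the keyword-overlap retriever and the `answer_with_citations` stub are assumptions for illustration, not the Army's implementation.

```python
from dataclasses import dataclass

@dataclass
class Document:
    doc_id: str  # e.g. a policy memo or contract file identifier
    text: str

# Hypothetical in-house corpus standing in for curated Army acquisition data.
CORPUS = [
    Document("FAR-15.203", "Requests for proposals are used in negotiated acquisitions to communicate requirements"),
    Document("POLICY-0042", "All contract actions above the threshold require legal review before award"),
]

def retrieve(query: str, corpus: list[Document], top_k: int = 2) -> list[Document]:
    """Rank documents by naive keyword overlap with the query (stand-in for a real retriever)."""
    query_terms = set(query.lower().split())
    scored = [(len(query_terms & set(doc.text.lower().split())), doc) for doc in corpus]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [doc for score, doc in scored[:top_k] if score > 0]

def answer_with_citations(query: str) -> str:
    """Build a grounded response and tag it with the IDs of the source documents."""
    sources = retrieve(query, CORPUS)
    context = "\n".join(f"[{doc.doc_id}] {doc.text}" for doc in sources)
    # A real system would pass `context` and `query` to the language model here;
    # this stub just echoes the citations so reviewers can trace claims back to sources.
    citations = ", ".join(doc.doc_id for doc in sources) or "no sources found"
    return f"(response grounded in: {citations})\n{context}"

print(answer_with_citations("What review is required for contract actions above the threshold?"))
```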

The pilot is part of a broader effort within the Army to identify both the pitfalls and the opportunities that come with widely adopting AI tools. In March, the service announced a 100-day plan focused on reducing the risk associated with integrating AI algorithms.

As part of that exercise, Swanson said, the Army reviewed its spending on AI research and found that testing and security are the two biggest gaps standing in the way of fielding these tools more broadly. The service also identified 32 risks and 66 mitigations it can implement to reduce their impact. Further, it created a generative AI policy that will set parameters for the pilot; that policy includes a requirement that there be a “human in the loop,” as sketched below.
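The article does not say how the human-in-the-loop requirement will be enforced, but one common way to implement it is a review gate: nothing the model generates is released until a named person approves it. The sketch below is a minimal, assumed illustration in Python; the `DraftAction` workflow, reviewer names and function names are hypothetical, not part of the Army's policy.

```python
from dataclasses import dataclass
from enum import Enum, auto

class ReviewStatus(Enum):
    PENDING = auto()
    APPROVED = auto()
    REJECTED = auto()

@dataclass
class DraftAction:
    """An AI-generated draft (e.g. a contract clause) awaiting mandatory human review."""
    content: str
    status: ReviewStatus = ReviewStatus.PENDING
    reviewer: str | None = None

def submit_draft(generated_text: str) -> DraftAction:
    # Model output is never released directly; it enters a review queue instead.
    return DraftAction(content=generated_text)

def human_review(draft: DraftAction, reviewer: str, approve: bool) -> DraftAction:
    """Only a named human reviewer can move a draft out of the PENDING state."""
    draft.reviewer = reviewer
    draft.status = ReviewStatus.APPROVED if approve else ReviewStatus.REJECTED
    return draft

def release(draft: DraftAction) -> str:
    if draft.status is not ReviewStatus.APPROVED:
        raise PermissionError("Draft cannot be released without human approval.")
    return draft.content

# Hypothetical usage: a contracting officer approves a generated clause before release.
draft = submit_draft("Section H: hypothetical special contract requirement text")
human_review(draft, reviewer="contracting_officer_1", approve=True)
print(release(draft))
```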

The generative AI pilot will lead into the next phase of the effort — a 16-month focus on how to use the technology operationally. Findings from that work will inform the Army’s budget for fiscal 2026.

“So the 100-day plan is setting the conditions — where are we at — and then the 500-day plan is really about operationalizing it,” she said.

Florent Groberg, vice president of strategy and optimization at private investment firm AE Industrial Partners, said that as the Army moves through these review processes and experiments with AI, it should be transparent with industry about what it wants and then move quickly to leverage the tools companies are developing.

“To me, it’s really understanding the framework of what you want to accomplish,” he said during the same panel with Swanson. “Put some boundaries out there and then go do it.”

Courtney Albon is C4ISRNET’s space and emerging technology reporter. She has covered the U.S. military since 2012, with a focus on the Air Force and Space Force. She has reported on some of the Defense Department’s most significant acquisition, budget and policy challenges.
