In short
- Federal AI use has grown quickly, but adoption remains heavily concentrated among a handful of large agencies.
- Key bottlenecks include a shortage of AI-specialized talent, a risk-averse agency culture, and procurement rules ill-suited to fast-moving AI systems.
- Public trust is a critical hurdle, with only 17% of Americans believing AI will benefit the country, making transparency essential to building confidence.
The use of artificial intelligence across the U.S. federal government has expanded dramatically in recent years, but significant obstacles, from talent shortages to public skepticism, are slowing the technology's responsible integration into government services, according to a new report from the Brookings Institution.
The Wednesday report draws on AI use case inventories from 2023 to 2025, federal jobs data, Office of Management and Budget memoranda, and interviews with current and former federal technologists across eight agencies.
The numbers tell a story of rapid acceleration. In 2025, 41 agencies documented more than 3,600 individual AI use cases, 69% above the total reported in 2024 and five times the number reported in 2023. The applications span a wide range of government functions: more than half of the Social Security Administration's reported use cases support service delivery and benefits processing, while over half of the Department of Justice's inventory supports law enforcement efforts.
Yet the growth is far from evenly distributed. For the past three years, five large agencies accounted for over half of all reported AI use cases, and large agencies contributed 76% of the total inventory in 2025. Smaller agencies are barely keeping pace: the 11 small agencies that reported in 2025 collectively submitted just 60 use cases, representing only 2% of the total inventory.
The report identifies several structural obstacles holding back broader adoption. The most pressing is a shortage of specialized talent. Of more than 56,000 technical job listings posted by the federal government since 2016, just over 1,600 (fewer than 3%) explicitly reference AI skills.
A Biden-era hiring surge aimed to address this gap, but workforce reductions in early 2025 may have undermined those efforts: at least 25% of AI-specific job listings were posted from 2024 onward, meaning many of those newly hired workers may have been among the most recently, and most easily, dismissed.
Beyond staffing, the report points to a deeply ingrained culture of risk aversion within federal agencies. Nearly 60% of all AI use cases are either in the pilot or pre-deployment stage, suggesting the federal AI landscape is still in a rapid growth phase, one that requires dedicated time for education and experimentation that many agencies struggle to carve out. The report also notes that the Trump administration's explicit linkage of AI deployment to workforce cuts through the Department of Government Efficiency (DOGE) may be reinforcing that hesitancy.
Accountability gaps are another concern. More than 85% of all high-impact deployed AI use cases in 2025 lack some required information about risk mitigation measures, despite explicit requirements from the OMB.
Public confidence poses yet another challenge. According to recent Pew Research Center data, about half of Americans now say they are more concerned than excited about the growing prominence of AI, up from 37% four years earlier, and just 17% of the American public believes AI will positively affect the U.S. over the next 20 years.
The report warns that the stakes are high. Public trust in the federal government remains near historic lows, with recent data showing only 16% of Americans saying they trust Washington to do what is right most or nearly all of the time. Against that backdrop, the authors argue that poorly executed AI deployments could cause serious damage, but that well-designed applications focused on tangible service improvements could, conversely, help rebuild confidence in government institutions.
To get there, Brookings recommends expanding AI literacy training across agencies, reforming procurement rules that were designed for more static software systems, strengthening transparency practices around high-risk AI use, and prioritizing use cases that deliver clear, positive benefits for the public.