AI In The Federal Government: A Fragmented Reality
The United States aims for AI leadership, a goal underscored by presidential directives from both the Trump and Biden administrations. President Trump's "Removing Barriers to American Leadership in Artificial Intelligence" Executive Order and President Biden's Executive Order 14110, which directed all government departments to "develop AI strategies and pursue high-impact AI use cases," reflect a bipartisan recognition of AI's transformative potential. These directives have been accompanied by significant financial investment: federal agencies spent $831 million on AI-related software contracts in 2023 alone, a figure poised for substantial growth.
Despite this clear mandate and investment, the federal government's implementation of AI remains an early work in progress. Our examination of the Department of Justice (DOJ) use cases highlights a fragmented AI strategy that can lead to inefficiencies and hinder information flows. Addressing this fragmentation presents an opportunity to leverage the benefits AI promises: improved efficiency, accelerated analysis, and enhanced decision-making for public service and national security.
Federal Use Cases
By the end of 2024, the Federal AI Use Case Repository listed more than 2,100 reported "use cases" across major government entities. That figure is likely inflated, however, because individual AI systems are often conflated with actual use cases. For instance, the Department of Energy recorded an AI-assisted word processor three separate times as distinct "use cases," even though it represents one. Our analysis accounts for these categorization differences.
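A rough sketch of how that kind of double counting can be screened out when tallying the repository: the pandas snippet below collapses entries that share a normalized system name within an agency. The column names and records are hypothetical stand-ins, not the repository's actual schema.

```python
import pandas as pd

# Hypothetical repository extract; the real Federal AI Use Case Repository
# uses a different schema and many more fields.
records = pd.DataFrame({
    "agency": ["DOE", "DOE", "DOE", "DOJ"],
    "use_case_name": [
        "AI-assisted word processor",
        "AI-Assisted Word Processor",
        "AI assisted word processor ",
        "Audio transcription",
    ],
})

# Normalize names (case, whitespace, punctuation) so near-duplicate entries
# for the same system collapse into one row per agency.
records["normalized"] = (
    records["use_case_name"]
    .str.lower()
    .str.replace(r"[^a-z0-9 ]", " ", regex=True)
    .str.split()
    .str.join(" ")
)

unique_use_cases = records.drop_duplicates(subset=["agency", "normalized"])
print(len(records), "reported entries ->", len(unique_use_cases), "unique use cases")
```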
DOJ: A Case Study in Fragmentation
The Department of Justice, with 240 reported AI systems supporting approximately 100 to 110 unique use cases, exemplifies the challenges of fragmented AI implementation. These systems span diverse applications, from pattern recognition in violent crime data to predicting inmate security levels and transforming audio into text. While 70 percent of these systems are operational, a closer look reveals fragmentation across two critical dimensions: bureaus and workflows.
This fragmentation, compounded by an outdated, four-year-old AI strategy, is also costly: each individual system demands a separate acquisition process, along with its own updates, support, and training, straining departmental budgets and personnel.
Harnessing Artificial Intelligence in the Federal Government
Advocating for AI adoption is not enough. To truly harness the power of AI, the federal government—especially agencies like the DOJ—needs a comprehensive and forward-looking strategy. This strategy must prioritize two critical areas:
Addressing these elements will position federal agencies to maximize AI's potential, improving the speed and quality of their decisions and actions, strengthening national security, and better serving the American public.
The U.S. Government Wants To Go 'All In' On AI. There Are Big Risks
Under a newly released action plan for artificial intelligence, the technology will be integrated into U.S. Government functions. The plan, announced July 23, is another step in the Trump administration's push for an "AI-first strategy."
In July, for instance, the U.S. Department of Defense awarded contracts worth up to $200 million each to Anthropic, Google, OpenAI and xAI. Elon Musk's xAI announced "Grok for Government," through which federal agencies can purchase AI products via the General Services Administration. And all of that comes after months of reports that the advisory group known as the Department of Government Efficiency has gained access to personal data, health information, tax information and other protected data from various government departments, including the Treasury Department and the Department of Veterans Affairs. The goal is to aggregate it all into a central database.
But experts worry about potential privacy and cybersecurity risks of using AI tools on such sensitive information, especially as precautionary guardrails, such as limiting who can access certain types of data, are loosened or disregarded.
To understand the implications of using AI tools to process health, financial and other sensitive data, Science News spoke with Bo Li, an AI and security expert at the University of Illinois Urbana-Champaign, and Jessica Ji, an AI and cybersecurity expert at Georgetown University's Center for Security and Emerging Technology in Washington, D.C. This interview has been edited for length and clarity.
SN: What are the risks of using AI models on private and confidential data?
Li: First is data leakage. When you use sensitive data to train or fine-tune the model, it can memorize the information. Say patient data was used to train the model, and you query the model asking how many people have a particular disease; the model may answer exactly, or it may leak the information that [a specific] person has that disease. Several people have shown that the model can even leak credit card numbers, email addresses, your residential address and other sensitive and personal information.
Second, if the private information is used in the model's training or as reference information for retrieval-augmented generation, then the model could use such information for other inferences [such as tying personal data together].
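A minimal illustration of the leakage probe Li is describing, under the assumption of a hypothetical generate() wrapper around a fine-tuned model (stubbed out here): the probe prompts with the start of a training record and checks whether the completion reproduces memorized specifics.

```python
import re

def generate(prompt: str) -> str:
    """Stand-in for a model fine-tuned on patient records; a real probe
    would call the deployed model's API here instead."""
    # Simulated memorized completion, illustrating the failure mode.
    return "Jane Doe, DOB 1984-02-17, was diagnosed with Condition X."

# Canary-style probe: prompt with the start of a training record and see
# whether the model fills in the sensitive remainder.
partial_record = "Patient record: Jane Doe, DOB"
completion = generate(partial_record)

# Flag completions that reproduce record-like specifics (dates, diagnoses).
leaks = re.findall(r"\d{4}-\d{2}-\d{2}|diagnosed with \w+", completion)
if leaks:
    print("Possible memorization leak:", leaks)
```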
SN: What are the risks associated with consolidating data from different sources into one large dataset?
Ji: When you have consolidated data, you just make a bigger target for adversarial hackers. Rather than having to hack four different agencies, they can just target your consolidated data source.
In the U.S. context, certain organizations have previously avoided combining, for example, personally identifiable information in ways that link someone's name and address with health conditions they may have.
Consolidating government data to train AI systems carries major privacy risks. The ability to establish statistical linkages between certain things in a large dataset, especially one containing sensitive information such as financial, medical and health information, carries civil liberties and privacy risks that are quite abstract. Certain people will be adversely impacted, but they may not be able to link the impacts to this AI system.
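A toy illustration of the linkage risk Ji describes, using made-up tables that stand in for previously separate agency datasets: once the records share a common key, a single join ties identity directly to health status.

```python
import pandas as pd

# Two datasets that, kept apart, limit what any one holder can infer.
identity = pd.DataFrame({
    "person_id": [101, 102],
    "name": ["A. Smith", "B. Jones"],
    "address": ["12 Oak St", "48 Elm Ave"],
})
health = pd.DataFrame({
    "person_id": [101, 102],
    "condition": ["Condition X", "Condition Y"],
})

# Consolidation makes the linkage trivial: one join ties a name and home
# address directly to a medical condition.
consolidated = identity.merge(health, on="person_id")
print(consolidated[["name", "address", "condition"]])
```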
SN: What cyberattacks are possible?
Li: One is a membership attack, which means that if you have a model trained with some sensitive data, then by querying the model you want to determine the membership, whether a particular person is in this [dataset] or not.
Second is a model inversion attack, in which you recover not only the membership but also the whole instance of the training data. For example, if there's one person with a record of their age, name, email address and credit card number, you can recover the whole record from the training data.
Then, a model stealing attack means you actually steal the model weights [or parameters], and you can recover the model [and can leak additional data].
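A simplified sketch of the membership attack Li mentions, using the common loss-threshold heuristic: records the model fits unusually well (low loss) are guessed to have been in the training set. The model and data below are synthetic stand-ins, not any real government system.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-ins for "sensitive" training data and a disjoint holdout set.
X_train, y_train = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)
X_out, y_out = rng.normal(size=(200, 5)), rng.integers(0, 2, 200)

model = LogisticRegression().fit(X_train, y_train)

def per_example_loss(X, y):
    # Cross-entropy loss for each record; training members tend to score lower.
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, None))

# Loss-threshold attack: guess "member" when the loss falls below a cutoff
# calibrated on data the attacker knows was not used for training.
threshold = np.median(per_example_loss(X_out, y_out))
guessed_member = per_example_loss(X_train, y_train) < threshold
print(f"Flagged {guessed_member.mean():.0%} of true members as members")
```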
SN: If the model is secure, would it be possible to contain the risk?
Li: You can secure the model in certain ways, like by forming a guardrail model, which identifies the sensitive information in the input and output and tries to filter them, outside the main model as an AI firewall. Or there are strategies for training the model to forget information, which is called unlearning. But it's ultimately not solving the problem because, for example, unlearning can hurt the performance and also cannot guarantee that you unlearn certain information. And for guardrail models, we will need stronger and stronger guardrails for all kinds of diverse attacks and sensitive information leakage. So I think there are improvements on the defense side, but not a solution yet.
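A bare-bones version of the "AI firewall" Li describes: a filter that sits outside the main model and screens both input and output for sensitive patterns. Real guardrail models are trained classifiers; the regex patterns and the echo_model stand-in here only illustrate the structure.

```python
import re

# Patterns standing in for a trained guardrail classifier: obvious
# identifiers such as SSNs, email addresses, and card-like numbers.
SENSITIVE = [
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),           # SSN-like
    re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),     # email
    re.compile(r"\b(?:\d[ -]?){13,16}\b"),          # card-number-like
]

def redact(text: str) -> str:
    for pattern in SENSITIVE:
        text = pattern.sub("[REDACTED]", text)
    return text

def guarded_call(model, prompt: str) -> str:
    # Screen the input before it reaches the model, and the output before
    # it reaches the user; the main model itself is untouched.
    safe_prompt = redact(prompt)
    return redact(model(safe_prompt))

# Stand-in model that parrots its input, to show the filter in action.
echo_model = lambda p: f"You said: {p}"
print(guarded_call(echo_model, "My SSN is 123-45-6789 and email is a@b.com"))
```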
SN: What would your recommendations be for the use of AI with sensitive, public, government data?
Ji: Prioritizing security, thinking about the risks and benefits, and making sure that your existing risk management processes can adapt to the nature of AI tools.
What we have heard from various organizations, both in government and the private sector, is that there is very strong top-down messaging from the CEO or the agency head to adopt AI systems right away to keep up with rivals. It's the people lower down who are tasked with actually implementing the AI systems, and oftentimes they're under a lot of pressure to bring in systems very quickly without thinking about the ramifications.
Li: Whenever we use the model, we need to pair it with a guardrail model as a defense step. No matter how good or how bad it is, at least you need to get a filter so that we can offer some protection. And we need to continue red teaming [with ethical hackers to assess weaknesses] for these types of applications and models so that we can uncover new vulnerabilities over time.
SN: What are the cybersecurity risks of using AI?
Ji: When you're introducing these models, there's a process-based risk where you as an organization have less control, visibility and understanding of how data is being circulated by your own employees. If you don't have a process in place that, for example, forbids people from using a commercial AI chatbot, you have no way of knowing if your workers are putting parts of your code base into a commercial model and asking for coding assistance. That data could potentially get exposed if the chatbot or the platform that they're using has policies that say that they can ingest your input data for training purposes. So not being able to keep track of that creates a lot of risk and ambiguity.
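One way organizations address the visibility problem Ji raises is a pre-submission check that blocks prompts containing obvious secrets before they leave the network. The sketch below is a hypothetical example of such a check, with made-up patterns and an invented internal domain, not any vendor's actual control.

```python
import re

# Patterns for material that should never leave the organization via a
# commercial chatbot: cloud keys, private key headers, internal hostnames.
BLOCKLIST = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                     # AWS access key ID format
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
    re.compile(r"\b[\w-]+\.internal\.example\.gov\b"),   # hypothetical internal domain
]

def allow_outbound_prompt(prompt: str) -> bool:
    """Return False if the prompt appears to contain restricted material."""
    return not any(p.search(prompt) for p in BLOCKLIST)

prompt = "Please debug this config: host=db1.internal.example.gov password=..."
if not allow_outbound_prompt(prompt):
    print("Blocked: prompt contains restricted content; use the approved internal tool.")
```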
Anthropic To Offer Its Products To The Government For $1
Anthropic plans to offer its products to the government for as little as $1, according to a source close to the artificial intelligence company. The source told Axios that the pricing could vary by customer or agency, and that the government has not yet been informed of the offer. This news aligns with the recent AI action plan, in which the Trump administration embraced leading AI companies and announced a focus on accelerating development to maintain an edge over China.
"Our commitment to responsible AI deployment, including rigorous safety testing, collaborative governance development, and strict usage policies, has made Claude uniquely suited for public sector and national security applications. And now, government agencies can more easily transform how they work, all while still meeting federal security and compliance requirements," Anthropic said via a blog post. The company also stated that the deployment of AI across federal agencies "enhances productivity, streamlines operations, and enables more effective and responsive government services."
The U.S. Government recently approved Anthropic, OpenAI, and Google as "official vendors" of AI tools for the federal government. The announcement came from the General Services Administration (GSA), the government's main purchasing body. These AI tools, which include OpenAI's ChatGPT, Google's Gemini, and Anthropic's Claude, will now be available through a central federal contracting platform called the Multiple Award Schedule (MAS).
This arrangement will allow the tools to be bought and deployed much more quickly. GSA officials said the approved tools met performance and security standards, though the specific terms of the contracts have not been made public. Officials also mentioned that other AI providers might be added later.
Anthropic stated that the partnership with the GSA enables it "to help government organizations harness the power of AI to modernize operations." The AI company recently announced partnerships with national laboratories for scientific research, integration into defense and intelligence workflows through technology partners, and custom Claude Gov models designed specifically for national security applications.
According to Anthropic, the Claude Gov models were "built based on direct feedback from government customers to address real-world operational needs." Unlike its consumer- and enterprise-facing models, the new custom Claude Gov models were designed to be applied to government operations like strategic planning, operational support, and intelligence analysis.
Anthropic has been increasingly engaging U.S. customers as it looks for dependable new sources of revenue. In November, the company teamed up with Palantir and AWS, the cloud computing division of Anthropic's major partner and investor Amazon, to sell Anthropic's AI to defense customers.
