Guest Post: “AI’s Sputnik Moment: What National Security Lawyers Must Do to Stay Ahead”
Today we introduce a new Lawfire® contributor, U.S. Navy Commander Delicia Gonzales Zimmerman, who teaches at the U.S. Army’s Judge Advocate General’s School and Legal Center in Charlottesville, Virginia. Her expertise includes artificial intelligence (AI), so she speaks with real authority about one of the hottest topics today: the proverbial “Sputnik Moment” occasioned by the emergence of the generative AI model, DeepSeek.
CDR Zimmerman gives us a thoughtful analysis of this phenomenon, as well as some practical advice for lawyers, particularly those practicing in the national security enterprise. Here’s CDR Zimmerman:
AI’s Sputnik Moment: What National Security Lawyers Must Do to Stay Ahead
By Delicia Gonzales Zimmerman
AI’s Sputnik Moment
In the high-stakes world of national security law, a technological tsunami just rewrote the rules of engagement for lawyers. On January 20, 2025, DeepSeek launched its latest Generative Artificial Intelligence (GenAI) model, R1, which rivals OpenAI’s ChatGPT and Meta’s Llama 3.1 in performance. Within seven days of its release, DeepSeek R1 surpassed ChatGPT in downloads and sent shockwaves through the AI market.
This has been called AI’s Sputnik Moment. Just as Sputnik demonstrated the Soviet Union’s technological prowess by surpassing U.S. space capabilities and igniting the space race, DeepSeek’s R1 breakthrough in capability and cost efficiency now serves as a global wake-up call, challenging the established technological hierarchy.
Over the past few years, the U.S. and the People’s Republic of China (PRC) have been locked in an escalating AI arms race, marked by increasingly stringent U.S. export controls on advanced chips and AI technologies. Ironically, these measures may have forced the PRC to innovate around restrictions, culminating in breakthroughs like DeepSeek’s R1 model.
Critics argue that DeepSeek’s claims lack independent validation and question its alleged use of restricted H100 GPUs and/or improperly distilled data from ChatGPT. Regardless of whether DeepSeek represents true innovation or a clever circumvention of export controls, its success highlights the flaws in the current U.S. strategy. More importantly, it underscores the urgent need for a new approach if the U.S. aims to sustain its technological and economic leadership in this critical domain.
One strategy may be diffusion. Jeffrey Ding, Associate Professor of Political Science at George Washington University, argues that a state’s innovation superiority is less important than its ability to diffuse (i.e., spread and adopt) the technology it innovates. He cites as an example the Soviet Union between 1950 and 1970, which produced many prominent scientific and technological innovations, including Sputnik.
While the Soviet Union was the world leader in introducing new technologies, its economy stagnated in the 1970s due, in part, to its failure to integrate and commercialize these advancements. Conversely, the U.S.’s space innovations during the same period were readily adopted and commercialized (e.g., GPS, LED lighting, portable cordless vacuums, freeze-dried food, and memory foam).
These inventions helped fuel the postwar economy, and the Soviet Union’s inability to diffuse its technological innovations eventually led to its collapse. Prof. Ding argues that China, like the Soviet Union, has a diffusion problem. The reasons for this are outside the scope of this article; for more information, I highly recommend reading his paper or his Congressional testimony on the topic.
So what does this have to do with national security lawyers? Well, everything. AI’s effectiveness depends not just on its existence but on its widespread implementation across sectors. Just as the U.S. must be proactive in adopting and integrating AI, so too must legal professionals, especially those in national security roles.
If the U.S. fails to effectively integrate AI into legal and national security decision-making, it risks falling into the same diffusion trap that hindered the Soviet Union. Additionally, national security lawyers who fail to integrate AI risk being outpaced not just by adversaries, but by our own AI-literate colleagues.
Lawyers Who Use AI
There has been a saying in the last couple of years that AI will not replace lawyers, but lawyers who use AI will replace those who don’t. This adage appears to be coming to fruition: AI adoption among lawyers has more than doubled since last year, with 82% now using or planning to use it, up from 39% the previous year.
This is a monumental shift for a community that is inherently risk averse. The changing attitudes toward GenAI reflect a growing understanding that, while still flawed, GenAI enhances efficiency.
We know that GenAI is skilled at processing large amounts of data, conducting legal research, drafting legal documents, and automating routine tasks. While GenAI’s ability to enhance efficiency is well known, there are still drawbacks. GenAI hallucinates, lacks contextual understanding and ethical judgment, is biased, and, most critically for national security lawyers, poses significant risks to data security and confidentiality.
Despite these challenges, embracing AI technology is crucial for national security attorneys. Early adoption offers significant advantages. Enhancing legal office efficiency and automating mundane tasks enables attorneys to provide timely legal advice during shortened decision-making cycles. The ability to deliver faster legal advice is crucial in maintaining our competitive edge.
However, to preserve and strengthen this advantage, we must consistently use and master GenAI technologies. An additional consideration is that our clients and junior attorneys are already using commercial GenAI tools without oversight, increasing the risk of inadvertent security breaches and confidential information disclosure.
To effectively advise clients and junior attorneys on these risks, it is imperative that we gain firsthand experience with these tools or, at the very least, develop a comprehensive understanding of their capabilities and potential pitfalls.
With the federal government, including the Department of Defense and military services, failing to issue comprehensive guidelines on the use of AI, government lawyers face a critical choice: passively wait and risk falling behind, or proactively develop adaptable frameworks that mitigate potential risks.
The conservative approach of inaction is paradoxically the riskiest strategy, as it allows adversarial nations to gain technological advantages while permitting uncontrolled AI usage by clients and subordinates that could compromise sensitive information.
By contrast, taking a calculated, principled approach to AI integration—establishing clear, flexible guidelines and training protocols—allows legal professionals to harness this transformative technology responsibly and strategically. The following principles are provided as a starting guide.
Basic Principles for Responsible AI Use
Rule 1. Protect Government Data
“AI can be a powerful tool, but only if used responsibly. Protect government data like national security depends on it – because it does.” – ChatGPT 4o. This fundamental principle serves as the overarching lens through which all the other rules should be interpreted and applied.
Rule 2. Think Before You Input
Never enter classified, sensitive, or operationally critical data into GenAI tools unless they are government approved for secure use. Employees should exercise caution when inputting even unclassified or seemingly harmless data into unclassified AI tools, recognizing that the aggregation of these inputs could allow the AI to construct or deduce sensitive information.
Rule 3. The AI Model Matters
When using GenAI tools for government work, employees must carefully consider the source and security implications of the model they choose. The following guidelines apply:
Green. Government employees should default to government-approved GenAI tools (e.g., NIPRGPT, Ask Sage) when conducting government work.
Yellow. Government employees should exercise caution when using commercial GenAI tools, especially public versions of such tools. Before using a commercial GenAI tool, assess its necessity and the sensitivity of the data it will process.
Review the privacy policy. Understand what data is being collected, how it is stored, and how it will be used. Before using commercial, publicly available tools, we recommend notifying your supervisor so that a proper risk assessment can be conducted.
Red. Government employees should not use DeepSeek or other Chinese GenAI models for government work (or in any capacity, if you are in the U.S. Navy). DeepSeek collects “text or audio input, prompt, uploaded files, feedback, chat history, or other content that [the user] provide[s] to our model and Services,” but it also collects information from your device, including “device model, operating system, keystroke patterns or rhythms, IP address, and system language.”
While U.S. AI companies also collect this type of information, they do not collect keystroke patterns. Moreover, DeepSeek retains this data on Chinese servers, where it is subject to the laws of the PRC. Of further concern, DeepSeek also appears to censor answers on sensitive Chinese topics, which taints its outputs.
Rule 4. Additional Rules for Public Versions of GenAI tools
Assume that everything entered into a commercial GenAI tool, including Google searches, is public. Don’t type anything into a prompt that you don’t want publicly available. Protect data before inputting: never enter classified or sensitive data (see Rules 1 and 2), and use data anonymization and/or redaction techniques before inputting government information (a simple illustration follows below).
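For offices that want to operationalize the redaction step, even a lightweight local pre-filter can help. The Python sketch below is purely illustrative and rests on my own assumptions (a handful of regular-expression patterns for common identifiers and a hypothetical redact helper); it is not an approved tool, and any real solution should be coordinated with your information security office.

```python
import re

# Illustrative patterns only; a vetted pre-filter would need a far more
# comprehensive list (names, unit designators, locations, case numbers, etc.).
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace anything matching the patterns above with a labeled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

# Example: scrub a prompt locally before it is ever sent to a commercial tool.
prompt = "Draft a memo for CDR Smith, jsmith@example.mil, 555-867-5309."
print(redact(prompt))
# Output: Draft a memo for CDR Smith, [EMAIL REDACTED], [PHONE REDACTED].
```

Note that pattern-based scrubbing catches only obvious identifiers; it is a supplement to, not a substitute for, human judgment about what should be entered at all.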
Rule 5. Verify Outputs
GenAI is a helpful assistant, not an infallible expert. Carefully review and double-check all information generated to ensure its accuracy, relevance, and appropriateness. Remember, you are ultimately responsible for any outputs you rely on; do not delegate your professional judgment to AI. Use it as a tool, but remain the final authority in your work.
Rule 6. Watch for the “Mosaic Effect”
Even unclassified data, when combined, can reveal sensitive information. Be mindful of what you input, and review AI-generated outputs to ensure they do not compile seemingly innocuous details into sensitive information.
Rule 7. Monitor and Log AI Use
Log all AI interactions, including inputs and outputs, and regularly review them for security risks.
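What such a log looks like in practice can be quite modest. The Python sketch below is again purely illustrative and assumes a local JSON-lines file and a hypothetical log_interaction helper of my own devising; an actual office would use approved storage, its own record format, and whatever retention rules apply.

```python
import json
from datetime import datetime, timezone

# Hypothetical local audit log; a real office would use approved storage.
LOG_FILE = "genai_audit_log.jsonl"

def log_interaction(user: str, tool: str, prompt: str, response: str) -> None:
    """Append one timestamped GenAI interaction to a JSON-lines audit log."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "tool": tool,
        "prompt": prompt,
        "response": response,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Example usage: record an interaction so it can be reviewed later for security risks.
log_interaction(
    user="jdoe",
    tool="NIPRGPT",
    prompt="Summarize the attached unclassified policy memo.",
    response="(model output here)",
)
```

Even a simple record of who asked what, of which tool, and what came back makes the periodic security review in this rule possible.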
Rule 8. Report and Respond Quickly
If a GenAI tool compromises government data security, report it immediately and take corrective action to prevent further breaches.
Rule 9. Keep Up with AI Tech Updates and Risks
Stay informed about evolving AI technology, AI threats, government policies, and best practices to ensure continued compliance.
Rule 10. Semper Gumby
These rules are subject to change as AI technology evolves (e.g. agentic AI).
Parting Thoughts
The principles above are meant to provide general guidance. You may have unique situations in your office that dictate more stringent guidelines. This article aims to spark crucial conversations about AI adoption and to embolden national security lawyers who have hesitated to embrace this transformative technology.
In today’s landscape of global technological competition, the adoption of AI is no longer a choice—it’s a strategic imperative. By thoughtfully integrating AI into our practices, we not only enhance our capabilities but also ensure America’s continued leadership in the digital age.
About the Author:
Delicia Gonzales Zimmerman is a Commander and Judge Advocate in the U.S. Navy. She currently serves as an Associate Professor of National Security Law at The Judge Advocate General’s School and Legal Center in Charlottesville, Virginia, where she teaches Artificial Intelligence & Ethics, Law of the Sea, Law of Naval Warfare, China Law & Strategy, and International Agreements.
She holds an MA, with distinction, in National Security Studies from the Naval War College; a JD from Southern Methodist University Dedman School of Law; and a BA in Government from the University of Texas at Austin.
Disclaimers:
The views presented in this article are those of the author and do not necessarily represent the views, positions, or policies of the Department of Defense, the U.S. Army, the U.S. Navy or any other agency of the U.S. government.
The views expressed by guest authors do not necessarily reflect my views or those of the Center on Law, Ethics and National Security, or Duke University (see also here).
Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!