Are we ready for war in the infosphere?

How can the U.S. and rule-of-law democracies counter increasingly sophisticated and weaponized disinformation?  How should they wage “information warfare” during periods of putative peace?  What are some of the rewards – and risks – of the “democratization” of intelligence that commercial satellites and advanced communications capabilities permit?  Can rule-of-law democracies compete successfully against those hyperempowered by technology?  In short, are we ready for war in the infosphere?

These are big questions that I suspect we will revisit with some frequency in future posts but let’s get the conversation underway.


Here’s some context: as things continue to heat up in Ukraine, and the specter of war seems to be growing, Russia and its proxies are working feverishly to fabricate a pretext for military action.  The latest shenanigan seems to be artillery exchanges between the separatists who control part of eastern Ukraine and Ukrainian forces.  The plan apparently is to goad Ukrainian forces into a response that harms civilians.

The New York Times reports: “As shelling intensified in the east, officials warned that Moscow might use false claims of ‘genocide’ against Russians in the region as a pretext for an attack.”  U.S. Secretary of State Antony Blinken put it bluntly:

“Russia plans to manufacture a pretext for its attack,” he said, citing a “so-called terrorist bombing” or “a fake, even a real attack” with chemical weapons. “This could be a violent event that Russia will blame on Ukraine,” he said, “or an outrageous accusation that Russia will level against the Ukrainian government.”

The information/disinformation conundrum

What we are seeing on display is a means of coercion in which the Russians have proven to have real expertise: information/disinformation warfare.  It is a serious mistake to underestimate their capabilities and how they have used technology to amplify them.  A 2016 RAND report pointed out that “Russia has taken advantage of technology and available media in ways that would have been inconceivable during the Cold War.”  Importantly, RAND finds that Russia’s “tools and channels now include the Internet, social media, and the evolving landscape of professional and amateur journalism and media outlets.”  It adds:

We characterize the contemporary Russian model for propaganda as “the firehose of falsehood” because of two of its distinctive features: high numbers of channels and messages and a shameless willingness to disseminate partial truths or outright fictions. In the words of one observer, “[N]ew Russian propaganda entertains, confuses and overwhelms the audience.”

We are certainly seeing much of that in the current crisis, so what can we do about it?  A little more than a week ago a reporter asked me about Russian information/disinformation war.  His questions included: was this a new form of warfare?  Or is this just something that has gone on forever?  Can the U.S. succeed at information warfare, or are there too many constraints?

Here’s a lightly edited and somewhat expanded version of what I said then:

The challenge

It is very difficult for any rule-of-law democracy to compete with an authoritarian regime bent upon weaponizing disinformation.  This is especially so with the Russians, who have long considered deception and disinformation to be key stratagems, and who have now become super-empowered with the advent of cyber technology, including advanced techniques like deep fakes.

For principled reasons, countries like the U.S. are loath to use deception and disinformation, particularly during periods of putative peace, but this restraint often puts them at a disadvantage.  I believe these days it is challenging to get approval for an aggressive counter-disinformation campaign, even one that relies upon accurate data.  The Russians are not constrained that way, and it gives them the opportunity to get inside the decision and response loops of the U.S. and NATO countries.

It can be easier for a military commander to get approval to use deadly defensive force than it is to get the OK to respond with an aggressive, albeit non-kinetic, information strike.  The bureaucracy may insist upon multiple approvals that simply cannot be obtained in a timely manner.  Consequently, a Russian narrative laced with disinformation can stay ahead of any effort to counter it.

One way to counter the bureaucratic lag would be to assess likely Russian information offensives and to pre-position approved responses.  In the 21st century, information warfare can take place at hyper speed, and any slower response could be doomed as too little too late.  Whole societies may rapidly form adversary-inclined opinions that may be difficult or impossible to dislodge in the necessary timeframe.

“Grey zone” war and exploiting the unsettled nature of international law

The Russians are waging what is called “grey zone” war, which is the use of coercive means that fall below the threshold that, under traditional legal analysis, permits defensive responses.  It is a period of not quite peace, but also not quite “war” – at least as that term has historically been understood.

For example, a “grey zone” technique would be to exploit the unsettled nature of international law as to what can or cannot be done in cyberspace without legally constituting an “attack.”  It is likewise unsettled exactly what kind of response is permissible when a disinformation campaign not only broadcasts falsities, but also affirmatively goes into U.S. and allied databases and alters data to hostile advantage.

Historically, for example, propaganda—however false it may be—has not been considered to meet the threshold that legally triggers a right to “self-defense” as that term is understood under the U.N. and NATO charters.

But the U.S. and other democracies may need to reconsider these interpretive norms given the unprecedented potential of cyber and high-tech communications to distribute highly volatile disinformation to billions in a matter of hours.  Because such disinformation/propaganda can instigate an armed conflict or other tragic event, there is a need to establish what responses and counters are internationally acceptable.  The appropriate answer may not be a kinetic response, but may rely on other means to, for example, stifle the disinformation’s distribution and effectiveness.

Regardless, a highly sophisticated and possibly politically decisive disinformation operation could have a strategic impact that threatens vital American interests, and could set off reactions that pose a serious threat to peace.

Propaganda and disinformation during armed conflict

Yes, rights to expression do need to be protected, but so do innocents.  And, yes, there can be appropriate limits.  For example, there is no First Amendment right for an enemy to communicate during an armed conflict, and this is especially true with respect to propaganda and disinformation.  However, in a globalized media environment where the propaganda and/or disinformation may be reported by third-party or even American news organizations, complications result.

Consequently, the challenge is to figure out how to stop such operations while respecting free speech as well as the public’s right to know during peace, war, and – yes – “grey zone” periods.  Another complication would be an adversary’s use of proxies, including unwitting ones in friendly countries, to spread false information that could have catastrophic strategic effects.

Red lines for the “grey zone”?

The U.S. and its allies need to decide exactly where they consider the international law lines to be, understanding that establishing any legal “red line” would also limit what they could do to an adversary.  Given that the U.S. is reportedly the world’s premier cyber power, some may argue that the current legal ambiguity serves American interests.  I believe deterrence is better served by being clear about what the U.S. and its allies deem legally acceptable or not.

Frankly, it is not unthinkable to imagine conflicts decided entirely in the infosphere.  Clausewitz, the great military theorist, said that war is “an act of force to compel our enemy to do our will,” but in the future, it might be possible to substitute “act of information” into that axiom.  Once a society is made to believe it cannot win, or it becomes so internally disrupted it cannot organize to fight effectively, the “war” may be over without a shot being fired.

Let’s not forget that Sun Tzu, another great theorist observed: “To win one hundred victories in one hundred battles is not the acme of skill. To subdue the enemy without fighting is the acme of skill.”

Some good news…

I do think the U.S. and its NATO allies have done a good job, for example, in exposing a Russian plan to fabricate a “video showing an attack by Ukrainians on Russian territory or Russian speakers in eastern [Ukraine]” as a “pretext for invasion.”  The Wall Street Journal observed:

Releasing information to damage or deter an enemy is an ancient tactic. What is new here is the scale of it, said Jonathan Eyal, an associate director at the Royal United Services Institute, a British defense think tank. By flagging operations early, it stops Russia’s President Vladimir Putin “resorting to the same old techniques” that Moscow used to justify incursions into Crimea in 2014 and Georgia in 2008, he said.

This is the kind of thing that the U.S. and its allies need to do more frequently, even if there are risks.  The Journal points out:

The moves aren’t without risk for U.S. and U.K. intelligence agencies. They potentially expose sources in Russia. Furthermore, if war doesn’t materialize, the U.S. and U.K. governments, which have provided little evidence for their claims, could be accused of scaremongering. It could also have no effect at all.

Of course, the calculation as to whether secret intelligence ought to be used to counter disinformation is very fact-specific, as it may put costly sources and methods at risk.  The New York Times reported today that U.S. intelligence “learned last week that the Kremlin had given the order for Russian military units to proceed with an invasion of Ukraine.”  The paper also said that “[o]fficials declined to describe the intelligence in any detail, anxious to keep secret their method of collecting the information.”

That concern is understandable, but the disclosure was the right thing to do in this instance, even if there was a price in terms of intelligence sources and methods.  Operating in the infosphere is not, and will not be, a cost-free endeavor, but the disclosure option must be “on the table” in situations where major war is possible.

The emerging opportunities and challenges of “open source intelligence” 

However, not all the data used to expose Russian disinformation comes from Western intelligence agencies.  Increasingly, commercial satellites and social media users are the source.  Writing in The Conversation, professor Craig Nazareth says that “open source intelligence” (OSINT) has “democratized” access to data once the sole province of specialized government intelligence-gathering systems.

Nazareth believes such technologies as “[s]ocial media, big data, smartphones and low-cost satellites have taken center stage, and scraping Twitter has become as important as anything else in the intelligence analyst toolkit.”  He adds:

Through information captured by commercial companies and individuals, the realities of Russia’s military posturing are accessible to anyone via internet search or news feed. Commercial imaging companies are posting up-to-the-minute, geographically precise images of Russia’s military forces. Several news agencies are regularly monitoring and reporting on the situation. TikTok users are posting video of Russian military equipment on rail cars allegedly on their way to augment forces already in position around Ukraine. And internet sleuths are tracking this flow of information.

There are at least two issues raised by this development, one of which was mentioned by the author: “sifting through terabytes of publicly available data for relevant information is difficult,” particularly since the fact that “much of the data could be intentionally manipulated to deceive complicates the task.”

A new article in The Economist expands upon the cautions.  It notes that OSINT is “not a panacea” and observes that “overhead pictures, while very useful, never show everything.”  It also says that satellite images can be “beguilingly concrete in a way that can mislead the inexperienced.”  Consequently:

Modern armed forces appreciate the role that open sources have begun to play in crises, and can use this to their advantage. An army might, for instance, deliberately show a convoy of tanks headed in the opposite direction to their intended destination, in the knowledge that the ensuing TikTok footage will be dissected by researchers. The location signals broadcast by ships can be spoofed, placing them miles from their true locations.

The second issue involves the use of “open source intelligence” during wartime specifically.  The Russians (or, really, any military) may seek, for example, to block the acquisition and/or dissemination of “geographically precise images” of their forces, which could quite obviously be used for targeting.  They may consider the commercial satellites themselves to be targetable, and the means the Russians (or others) use to attack them could have significant adverse collateral effects on a world much dependent upon satellite-sourced data.

Additionally, people who may think of themselves as civilians uninvolved in the conflict may find that the Russians consider their activities sufficient to make them targets.  Even the International Committee of the Red Cross concedes that “transmitting tactical targeting intelligence for a specific attack” is sufficient “direct involvement in hostilities” to make a civilian lawfully targetable under the law of war.

Russia, as well as other countries, may take a broader interpretation of what constitutes “direct involvement in hostilities” (e.g., transmitting information that enables targeting beyond specific tactical attacks, or that is otherwise intended to expose military operations to an adversary), as this is an area of the law that is unsettled and developing as more technologies with valuable intelligence capabilities become available.

One more thought: it won’t always be friendly countries that exploit open source intelligence.  Open societies like the U.S. are bonanzas for open source intelligence gatherers.  We have to think through what this means for our military operations: do you fight the same way when an adversary, even a relatively low-tech one, can use OSINT to track your every move?

The future of war in the infosphere

In a fascinating article in POLITICO (‘Kill Your Commanding Officer’: On the Front Lines of Putin’s Digital War With Ukraine), journalist Kenneth R. Rosen reports:

The Russians have for nearly a decade used Ukraine as a proving ground for a new and highly advanced type of hybrid warfare — a digital-meets-traditional kind of fighting defined by a reliance on software, digital hardware and cognitive control that is highly effective, difficult to counter and can reach far beyond the front lines deep into Ukrainian society. It is a type of high-tech conflict that many military experts predict will define the future of war. It has also turned Ukraine, especially its eastern provinces, but also the capital, into a bewildering zone of instability, disinformation and anxiety.

This echoes a phenomenon raised in the 2018 book War in 140 Characters, in which the author discussed a young Palestinian woman named Farah who, “armed with only a smart phone,” produced tweets espousing a view of the conflict so effective that they could defeat a militarily more powerful opponent on the “narrative battlefield.”  In the information realm this gave her power “akin to the most élite special forces unit” – power that could result in nations losing wars.  He explained:

This is because when war becomes “armed politics” and the Clausewitzian paradigm becomes less relevant, one side can win militarily but lose politically.  This idea lies at the center of Farah’s power.  She cannot shoot, but she can tweet, and the latter is now arguably more important in an asymmetric conflict that Palestinians cannot hope to win militarily.  It is this newfound ability to spread narratives via tweets and posts that allows hyperempowered, networked individuals such as Farah to affect the battlefield.

As I said above, it is not unthinkable to imagine conflicts decided entirely in the infosphere. Clausewitz, the great military theorist, said that war is “an act of force to compel our enemy to do our will,” but in the future, it might be possible to substitute “act of information” into that axiom. 

Questions to ponder: are the U.S. and other rule-of-law democracies sufficiently prepared to fight in the infosphere?  Do we have the right legal norms and policies in place to compete effectively in 21st century conflicts where a small number of individuals super-empowered by technology can potentially dominate the narrative battlespace?  Are we ready to win in the infosphere?

Remember what we like to say on Lawfire®: gather the facts, examine the law, evaluate the arguments – and then decide for yourself!
