
What Is Absent From the U.S. Cyber Command ‘Vision’

Written together with Herb Lin.

United States Cyber Command recently released a new “command vision” entitled “Achieve and Maintain Cyberspace Superiority.” The document seeks to provide: “a roadmap for USCYBERCOM to achieve and maintain superiority in cyberspace as we direct, synchronize, and coordinate cyberspace planning and operations to defend and advance national interests in collaboration with domestic and foreign partners.”

Taken as a whole, the document emphasizes continual and persistent engagement against malicious cyberspace actors. One could summarize the new U.S. vision using Muhammad Ali’s famous phrase: “Float like a butterfly, sting like a bee.” Cyber Command aims to move swiftly to dodge opponents’ blows while simultaneously creating and recognizing openings to strike.

Cyber Command’s new vision is noteworthy in many ways. Richard Harknett’s March Lawfare post provides more context on “what it entails and how it matters.”

The emergence of this new vision—coinciding with a new administration—recognizes that previous strategies for confronting adversaries in cyberspace have been less than successful:

[A]dversaries direct continuous operations and activities against our allies and us in campaigns short of open warfare to achieve competitive advantage and impair US interests. … Our adversaries have exploited the velocity and volume of data and events in cyberspace to make the domain more hostile. They have raised the stakes for our nation and allies. In order to improve security and stability, we need a new approach.

Another key realization is that activities in cyberspace that do not rise to the level of armed conflict (as traditionally understood in international law) may nevertheless have strategically significant effects:

The spread of technology and communications has enabled new means of influence and coercion. Adversaries continuously operate against us below the threshold of armed conflict. In this “new normal,” our adversaries are extending their influence without resorting to physical aggression. They provoke and intimidate our citizens and enterprises without fear of legal or military consequences. They understand the constraints under which the United States chooses to operate in cyberspace, including our traditionally high threshold for response to adversary activity. They use this insight to exploit our dependencies and vulnerabilities in cyberspace and use our systems, processes, and values against us to weaken our democratic institutions and gain economic, diplomatic, and military advantages.

Although the document never says so explicitly, it clearly contemplates Cyber Command conducting many cyber activities below the threshold of armed conflict as well.

At the same time, the vision is silent on a number of important points—after all, it is a short, high-level document. In this piece, we highlight some of these gaps to identify critical stumbling blocks and necessary areas of research. We have organized our comments below according to the basic building blocks of any good strategy: ends, ways, and means.

Ends

First, Cyber Command’s objective to “gain strategic advantage” seems obviously desirable. Yet, the vision doesn’t address what that actually means and how much it will cost. Based on Harknett and Fischerkeller’s article, strategic advantage can be interpreted as changing the distribution of power in favor of the United States. (This is in line with the observation made at the start of Harknett’s Lawfare piece: The cyber activity of adversaries that takes place below the threshold of war is slowly degrading U.S. power toward rising challengers—both state and non-state actors.)

But Cyber Command needs to be clear about the consequences of seeking this objective: A United States that is more powerful in cyberspace does not necessarily mean that it is more secure. The best-case scenario following the vision is that the United States achieves the end it desires and dramatically improves the (general or cyber) distribution of power—that is, it achieves superiority through persistence.

Yet, it remains unclear what will be sacrificed in pursuit of this optimal outcome. Some argued at Cyber Command’s first symposium that strategic persistence may first worsen the situation before improving it. This presumes that goals will converge in the future; superiority in cyberspace will in the long run also lead to a more stable environment, less conflict, norms of acceptable behavior, and so on. If this win-win situation is really the intended outcome, Cyber Command needs to provide the basis for its logic in coming to this conclusion—potentially through describing scenarios and variables that lead to future change. Also helpful would be an explanation of the timeframe in which we can expect these changes.

After all, one could equally argue that a strategy of superiority through persistence comes with a set of ill-understood escalation risks about which the vision is silent (Jason Healey has made a similar point). Indeed, it is noteworthy that neither “escalate” nor “escalation” appears in the document. Fears of escalation have accounted for much of the lack of forceful response to malicious cyber activities in the past, and it can be argued that such fears have carried too much weight with policy makers—but ignoring escalation risks entirely does not seem sensible either.

Furthermore, high-end conflict is still an issue. True, the major security issue in cyberspace today is the possibility of death by a thousand cuts, and failure to respond to that issue will over time have strongly negative consequences. But this should not blind us to the fact that serious, high-profile cyber conflict remains possible, perhaps in conjunction with kinetic conflict as well. One consequence of the post-9/11 security environment has been that in emphasizing the global war on terror, the U.S. military allowed its capabilities for engaging with near-peer adversaries to atrophy. We are on a course to rebuild those capabilities today, but we should not make a similar mistake by neglecting high-end cyber threats that may have significant consequences.

Ways

The way Cyber Command aims to accomplish its goals, as noted above, is to seize the initiative, retain momentum and disrupt adversaries’ freedom of action.

Given the low signal-to-noise ratio of policy discussions about cyber deterrence over the past several years, it is reasonable and understandable that the vision tries to shift the focus of cyber strategy toward an approach that is more closely matched to the realities of today. But in being silent about deterrence, it goes too far and implies that concepts of cyber deterrence have no relevance at all to U.S. cyber policy. At the very least, some form of deterrence is still needed to address low-probability cyber threats of high consequence.

The vision acknowledges the importance of increasing the resilience of U.S. cyber assets in order to sustain strategic advantage. But the only words in the document about doing so say that Cyber Command will share “intelligence and operational leads with partners in law enforcement, homeland security (at the federal and state levels), and the Intelligence Community.” Greater U.S. cyber asset resilience will enhance our ability to bring the cyber fight to adversaries by reducing their benefits from escalating in response. And yet, the coupling between cyber defense and offense goes unmentioned.

The vision correctly notes that “cyberspace threats … transcend geographic boundaries and are usually trans-regional in nature.” It also notes “our scrupulous regard for civil liberties and privacy.” But U.S. guarantees of civil liberties and privacy are grounded in U.S. citizenship or presence on U.S. soil. If cyber adversaries transcend geographic boundaries, how will Cyber Command engage foreign adversaries who operate on U.S. soil? The vision document is silent on this point.

Means

Of the strategy’s three dimensions, Cyber Command’s new vision is least explicit about the means required to enable and execute strategic persistence.

However, a better understanding of the available means is essential if we want to know how far the U.S. will go on the offense under this new strategy. In theory, a strategy of persistence could be the most defensive strategy out there. Think about how Muhammad Ali famously dodged punches from his opponents: the other guy in the ring punches desperately, but Ali has the upper hand and wears him out; he mentally dominates his opponent. A strategy of persistence could also be the most aggressive one. Muhammad Ali would also punch his opponents repeatedly, leaving them no opportunity to go on the offense—and sometimes knocking them out.

While the command vision has remained silent on available means, others are moving in this direction and offering some examples. In a recent Foreign Affairs article, Michael Sulmeyer argues that the U.S. should ‘hack the hacker’: “It is time to target capabilities, not calculations. […] Such a campaign would aim to make every aspect of hacking much harder: because hackers often reuse computers, accounts, and infrastructure, targeting these would sabotage their capabilities or render them otherwise useless.” Such activities would indeed increase the friction that adversaries encounter while conducting hostile cyber activities against the United States—but whether that approach will result in persistent strategic advantage remains to be seen.

Also, Muhammad Ali boxed differently against different opponents—especially if he was up against taller boxers. Analogously, there might not be a one-size-fits-all solution when it comes to strategic persistence in the cyber domain. The means used to gain superiority against ISIS aren’t the same as those that are effective against China. Future research will have to list them and parse out the value of different approaches.

What Muhammad Ali was most famous for—and what remained constant throughout all of his matches—was his amazing speed. The new vision shows that Cyber Command is well aware of the importance of speed. Operational speed and agility (each mentioned four times in the vision and central to the vision’s fourth imperative) will manifest differently against different opponents; moreover, significant government reorganization will be required to increase operational speed and agility. We should, however, guard against these concepts becoming meaningless buzzwords: An article on the meaning of an agile Cyber Command would be a welcome contribution to the field.

Prioritizing

Muhammad Ali fought 61 matches as a professional. He would not have won 56 of those fights if he had fought all of his opponents at the same time. Cyber Command is operating in a space in which it has to seize the initiative against a large and ever-growing number of actors. In seeking to engage on so many levels against so many actors, prioritization (as discussed in the strategy) will become a top issue when implementing this new vision.

What’s not in the strategy is as important as what is. Having said that, a short 12-page document cannot be expected to address all important issues. So the gaps described above should be taken as a sampling of issues that will need to be addressed as the vision is implemented.

This article was first published on Lawfare.

Contesting “Cyber” – Introduction and Part I

By Max Smeets and James Shires. More info about the series here

Introduction

Over the last few decades there has been a proliferation of the term “cyber”, and a commensurate rise in inconsistency in its use. This series argues that the inconsistent application of the prefix “cyber” stems not only from confusion, as some scholars and policymakers have proposed, but also from contest. Our goal in this series is not to resolve conceptual disputes, but instead to understand how and why contests occur, and whether, once the lines along which contests occur are identified, resolution is possible.

As the prefix “cyber” has rarely been used alone, we place the concept of cyberspace at the centre of analysis, for two reasons. First, it is considered to be the “elemental” concept in the field, and it demarcates the boundaries of relevant technical and social activity through an intuitive geographical metaphor. Second, selecting the concept “cyberspace” for analysis can be considered a least-likely (or least-obvious) study of contest. The attachment of the prefix “cyber” to various nouns has left cyber-related concepts with a variety of underlying normative connotations. On one side, some concepts describe an activity or state of affairs that is prima facie undesirable, like “cyber warfare” or “cyber threat”. On the other side, various concepts carry a more positive connotation—“cyber democracy” is a good example. The obvious normative aspects of the terms to which the cyber prefix is attached make them likely sites for contest, whereas “cyberspace” is seemingly more neutral. We suggest instead that it is the ominous calm at the heart of the storm, providing an excellent case in which to study the tension surrounding the prefix more broadly.

Over the next six days, we will publish a series of blog posts showing that cyberspace is contested in a number of ways: through its change in connotations from opportunity to threat; through the existence of substantive and implied definitions, with different rhetorical functions; and through competing understandings of the key historical exemplar for cyberspace: that of ARPANET. We therefore note that the prospects for agreement regarding cyberspace are low. Overall, this presents the choice of what we term, following Hirschman, an ‘exit’ rather than a ‘voice’ strategy: to use other concepts instead. An initial post in this series was published last Friday at Slate’s Future Tense and can be found here.

PART 1. Cyber: not just a confused but also a contested concept.

Since the early 1990s the prefix “cyber” has become widespread. As often noted, its use stretches back to Norbert Wiener’s coinage of “cybernetics” from its Greek equivalent in the 1940s. It is similarly canonical to cite novelist William Gibson as creating the “ur” metaphor for this prefix in the early 1980s by combining it with “space”. Almost three decades later, in an interview with The A.V. Club, Gibson argued that “‘cyberspace’ as a term is sort of over. It’s over in the way that after a certain time, people stopped using the prefix ‘-electro’ to make things cool, because everything was electrical. ‘Electro’ was all over the early twentieth century, and now it’s gone. I think ‘cyber’ is sort of the same way”.

In contrast to Gibson’s prediction, a simple automated content analysis using Google Trends indicates that the popularity of the prefix “cyber” has remained stable (with a spike in November each year for “cyber Monday”). There are ever more applications of this prefix, to words such as crime, law, cafe, hate, bullying, attack, war, vandalism, politics, dating, security, and power. Today, more people enter the search term “cyber” into Google than the term “democracy” or “terrorist”. Needless to say, the term “cyber” has also gained in prominence in academia and policymaking.

The proliferation of this prefix has, inevitably, led to substantial inconsistencies in its use. On one level, these contradictions may stem from simple confusion. As Michael Hayden, former director of the CIA and NSA, remarked: “rarely has something been so important and so talked about with less clarity and apparent understanding than this phenomenon.” Scholars and policy-makers, among others, are not always consistent in their own usage of cyber-related concepts, and they sometimes reinterpret the definitions employed by others, especially when given a liberal dose of cross-disciplinary fertilization.

Many hold that such disagreement is primarily caused by the apparently abstruse and multifaceted nature of the phenomenon. For example, in a Foreign Policy article, Stephen Walt notes that “the whole issue is highly esoteric—you really need to know a great deal about computer networks, software, encryption, etc., to know how serious the danger might be,” concluding that “there are lots of different problems being lumped under a single banner, whether the label is ‘cyber-terror’ or ‘cyber-war’.” If this is the case, more research can iron out the lack of clarity surrounding this relatively young concept, and then we can get to the one and only “meaning of the cyber revolution,” as Lucas Kello emphasizes in his recent book (and earlier article). However, in this article series we argue that the inconsistent application of the prefix “cyber” stems not only from confusion, but also from contestation.

In other words, the roots of disagreement go deeper than a mere struggle to absorb the collective knowledge of another discipline; they stem from underlying normative disagreements.

Understanding the nature and extent of this contestation of “cyber” is important for both policy-making and academic research. For policy-makers, the promise of what Joseph Nye Jr. calls “rules of the road” in cyberspace is much diminished if the very domain itself remains in question (also see the UK government strategy). Constructing effective international cyber-governance becomes more difficult—although not impossible—if the scope of what is to be governed is fundamentally disputed.

For academics, if the roots of disagreement are deeper, then faith in a unified understanding of the cyber-issue is utopian; and further investigation of why and how broader political disputes are translated into problems with this proliferating prefix is urgently required.

Here we will explore what it means when we talk about cyber, and address the nature of contestation from various angles.

This article was originally posted at New America.

The Word Cyber Now Means Everything—and Nothing At All

By James Shires and Max Smeets

In early October, at the launch of Stanford’s Global Digital Policy Incubator, former Secretary of State Hillary Clinton said, “We need to get serious on cybersecurity.”

It’s hard to argue with the sentiment, but what does it actually mean? Is she suggesting that companies should invest in data breach insurance? That governments should build new weapons? That police should have better decryption tools? That tech companies should write safer code, especially for critical infrastructure? That international differences in internet governance must be resolved? That individual citizens should review their online behavior? Or all of the above?

The problem is in the word cyber. At first, the word’s flexibility was a good thing—it helped raise awareness and offered an accessible gateway to discussing all kinds of security. But it has now become an obstacle to articulating credible solutions.

The term cyber has been around for decades, stretching back to MIT mathematician Norbert Wiener’s coinage of cybernetics in the 1940s. Wiener borrowed the ancient Greek adjective ‘kubernētikós’, meaning governing, piloting, or skilled in steering, to describe the then-futuristic idea that one day we would have a self-regulating computing system, running solely on information feedback. In the 1980s, novelist William Gibson married the prefix to space, creating the term so ubiquitous today. Since then, cyber has been used by anarchists and policymakers, scholars and laymen, artists and spies. It has been attached to concepts ranging from warfare to shopping, and it can denote opportunity as well as threat.

Yet, cyber is, in a way, empty: It acts like a sponge for meaning, soaking up whatever content is nearby. Gibson described this nicely in an interview with the Paris Review: “The first thing I did was to sit down with a yellow pad and a Sharpie and start scribbling—infospace, dataspace. I think I got cyberspace on the third try, and I thought, oh, that’s a really weird word. I liked the way it felt in the mouth—I thought it sounded like it meant something while still being essentially hollow.”

The hollow aesthetic captured by Gibson—the peculiar position of being both intuitively meaningful and a self-consciously strange word—is part of the appeal of cyber. The prefix is popular, and growing in use, not despite its hollowness, which is bemoaned by many, but because of it.

Thomas Rid, in his book Rise of the Machines, shows how various narratives have accompanied the prefix cyber since World War II, all of which cross boundaries between technology and society, between science and culture, and between the impetus created by war and security and more benign visions.

As Rid explains in the preface, the cyber idea is “self-adapting, ever expanding its scope and reach, unpredictable, yet threatening, yet seductive, full of promise and hope, and always escaping into the future.” In short, it is a sponge—but one that fails to clean up the conceptual problems of its terrain.

We can see this clearly in recent events. With new information seeping in on an almost daily basis about the Russian meddling in the 2016 elections, the cyber sponge has been absorbing everything related to disinformation campaigns, information warfare, social media bots, and election hacking.

Clinton’s talk demonstrates all of this. “In the 21st century, war will increasingly be fought in cyberspace. As Americans we need to approach this new threat with focus and resolve. Our security, physical or otherwise can’t be taken for granted,” she said. She went on to discuss the various new “weapons of choice” coming from “the highest bowels of the Kremlin”: email releases, probing voting systems, the industrialization of fake news, targeted use of Facebook ads, and more.

She isn’t wrong about these things, but speaking about them in this manner mashes them together with previous uses of the term in relation to militarized cyber operations, critical infrastructure attacks, DDoS attacks against Estonia and Georgia, and Stuxnet. In this case, the cyber label doesn’t improve our understanding of this influence. Instead, the generic term flattens the terrain by conflating the potential hacking of critical infrastructure systems and the buying of advertisements by foreign nations. This incorrectly implies similarities in response, suggesting that we can handle all of these things in a similar manner. But ensuring that the industrial control systems of a power plant will not be accessed by a malicious actor requires a very different set of actions than curbing the spread of fake news. Labeling both actions as cyber encourages the inappropriate transplant of policies and technologies across these issues.

Finally, cyber also masks significant political and organizational hurdles. Clinton speaks about “the need for public and private cooperation,” but this cooperation takes very different forms for critical infrastructure and social media, not to mention questions of state and commercial offensive actions—yet all fall ostensibly under the rubric of cybersecurity.

We’ve wrung all the utility we can out of the cybersecurity sponge. To address the “serious and urgent challenges” of our time, we need to acknowledge that they are indeed challenges plural—not one single, monolithic domain.

This article was first published at Slate Future Tense. Future Tense is a partnership of Slate, New America, and Arizona State University.

 

Cyber References Project

I started my graduate studies a few years ago thinking not much was published in the field of cyber conflict. I quickly found my assumption was wrong when I optimistically began a systematic literature review of ‘all’ the relevant works in the field. It was a project I had to abandon after a few weeks (although I do believe that more reviews like this should be conducted).

Even though it is true that not enough has yet been published in the top academic journals, one can hardly say that people don’t write on ‘cyber’. With relevant readings scattered across journal articles, books, blog posts, news articles, cyber security firm reports, and more, it becomes increasingly difficult to know what’s out there and to build upon earlier insights and arguments published by others.

Whereas this has led some (the Oxford Bibliographies Project and the State of the Field Conference 2016) to direct efforts towards finding the ‘core’ of the field – focusing on key readings – I have started a complementary ‘Cyber References Project’ with the aim of being much more inclusive.

The database currently includes about 800-1000 readings (and also lists a few podcasts and documentaries), which I have sorted into 48 categories. The categories are not mutually exclusive. The goal is not to search by author or title, as conventional search engines do, but to browse by topic.

This database includes the references listed on various cyber security course syllabi, the State of the Field Conference 2016, the Oxford Bibliographies Project, SSRN, Google Scholar, Oxford SOLO, PhD manuscripts, and think-tank search engines.

Where I see this project going: I plan to include another 150+ academic articles and 200+ blog posts in the near future. I also hope to improve the formatting and to sort the current list of readings (by year, and by adding categories). In addition, Olivia Lau maintains a great pool of notes and summaries of key readings on International Relations. It would be great if we could establish something similar for cyber conflict.

Please let me know if readings are missing or categorized incorrectly. Of course, any ideas on how to make this platform easier to use are also very welcome.

Organizational Integration of Offensive Cyber Capabilities: A Primer on the Benefits and Risks

Below you can find the abstract of the paper I’ll present at the 9th International Conference on Cyber Conflict (CyCon 2017) in Tallinn, Estonia. The paper will be published after the conference.

Organizational integration has become a key agenda point for policy makers as governments continue to change and create new organizations to address the cyber threat. Passing references to this topic, however, far outnumber systematic treatments. The aim of this paper is to investigate the potential effects of organizational integration of offensive cyber capabilities (OIOCC). I argue that OIOCC may lead to three key benefits: enhanced interaction efficiency, greater knowledge transfer, and improved resource allocation. There are, however, several negative effects of integration too, which have so far received little attention. OIOCC may lead to an intensification of the cyber security dilemma, increase costs in the long run, and impel what I call ‘cyber mission creep’. Though the benefits seem to outweigh the risks, I note that ignoring the potential negative effects may be dangerous – as activity is more likely to go beyond the foreign-policy goals of governments, and intrusions are more likely to trigger a disproportionate response by the defender.

Talk Global Cyberspace Cooperation Summit VII

I was part of a great panel at the Global Cyberspace Cooperation Summit VII, organized by the East West Institute.

The summit brought together policymakers, business leaders and technical experts to discuss the most pressing issues in international cyberspace, including securing the Internet of Things, balancing encryption and lawful access to data, developing norms of behavior, improving the security of information and communications technology (ICT) and strengthening the resilience of critical infrastructure.

If you’d like to know more about cyber and dinosaurs (!), start at 38.00 min. Also some great points on cyber risk, non-state actors in cyberspace and more from the other panelists.

More at http://cybersummit.info/.

 

On the transitory nature of cyberweapons

The abstract of my forthcoming article ‘A matter of time: On the transitory nature of cyberweapons’ in the Journal of Strategic Studies:

This article examines the transitory nature of cyberweapons. Shedding light on this highly understudied facet is important both for grasping how cyberspace affects international security and for policymakers’ efforts to make accurate decisions regarding the deployment of cyberweapons. First, laying out the life cycle of a cyberweapon, I argue that these offensive capabilities are both different in ‘degree’ and in ‘kind’ compared with other weapons with respect to their temporary ability to cause harm or damage. Second, I develop six propositions which indicate that not only technical features inherent to the different types of cyber capabilities – that is, the type of exploited vulnerability, access and payload – but also offender and defender characteristics explain differences in transitoriness between cyberweapons. Finally, drawing out the implications, I reveal that the transitory nature of cyberweapons benefits great powers, changes the incentive structure for offensive cyber cooperation and induces a different funding structure for (military) cyber programs compared with conventional weapon programs. I also note that the time-dependent dynamic underlying cyberweapons potentially explains the limited deployment of cyberweapons compared to espionage capabilities.

How Much Does a Cyber Weapon Cost? Nobody Knows

Can a non-state actor take down critical infrastructure with a cyberattack? If it is not possible today, will it be possible in the future? Experts disagree about the current capabilities of non-state actors in cyberspace, let alone about their future capabilities.

There is debate within the cybersecurity community and academia about whether cyber weapons are getting cheaper and thus coming within the reach of the self-proclaimed Islamic State or other non-state groups. Although there is some general consensus that offensive cyber operations will be less expensive in the future, there is very little understanding of what influences the cost of a cyber weapon. Making sense of the inputs and the defensive environment that drive the cost of a cyber weapon is essential to understanding which actors—whether state, non-state, or criminal—will attain what kinds of cyber capability in the future.

There are four processes that make cyber weapons cheaper. First, labor becomes more efficient; attackers become more dexterous in that they spend less time learning, experimenting, and making mistakes in writing code. The observation has been made that Iranian cyber activities are not necessarily the most sophisticated. Yet, since the Shamoon virus wiped the hard drives of 30,000 workstations at Saudi Aramco in 2012, there have been significant improvements in their coding. Whereas Shamoon contained at least four significant coding errors, newer malware seems to be more carefully designed.

Second, developers standardize their malware development process and become more specialized. Some parts of cyber weapons have become increasingly standardized, such as exploit tool kits, leading to an increase in efficiency. The growth of offensive cyber capabilities in militaries allows for greater specialization in cyber weapon production. The U.S. Cyber Command now has 133 teams in operation, making it easier to dedicate specialized units to specific types of cyber operations—even if these units need to be integrated within a general force structure. According to one report, Russia was able to do the same thing for its cyber campaigns against Ukraine.

Third, reusing and building upon existing malware tools allows attackers to learn to produce cyber weapons more cost effectively. The wiper cases Groovemonitor (2012), Dark Seoul (2013), and Destover (2014) are illustrative of this process. Actors who seem to have relatively limited resources have in recent years been getting more bang for their buck.

Fourth, there are shared experience effects, which allow lessons from one piece of malware to shed light on other offensive capabilities. Cyber weapons are generally part of a large collection of capabilities—sharing vulnerabilities, exploits, propagation techniques, and other features. Stuxnet’s ‘father’, for example, is thought to be the USB worm Fanny, and Stuxnet has also been linked to espionage platforms like Duqu, Flame, miniFlame, Gauss, and Duqu 2.0.

In sum, many of the drivers that can make cyber weapons cheaper stem from ‘experience’ and ‘learning curve’ effects, in which malware developers learn from their own work and that of others.
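Learning-curve effects of this kind are often modeled with Wright’s law, under which unit cost falls by a constant fraction each time cumulative output doubles. A minimal sketch, with purely illustrative numbers (the function name and parameters are ours, not drawn from any cited source):

```python
import math

def wrights_law_cost(first_unit_cost, cumulative_units, learning_rate):
    """Unit cost under Wright's law: cost falls by `learning_rate`
    (e.g. 0.2 = 20%) each time cumulative output doubles."""
    b = -math.log2(1 - learning_rate)  # progress exponent
    return first_unit_cost * cumulative_units ** (-b)

# Hypothetical numbers: the first capability costs 100 units of effort,
# and each doubling of experience cuts unit cost by 20%.
for n in (1, 2, 4, 8):
    print(n, round(wrights_law_cost(100, n, 0.2), 1))
```

Whether cyber weapon production actually follows such a curve is exactly the kind of empirical question the cost-driver analysis above calls for.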

Although attackers might rejoice at the prospect of weapons getting cheaper, significant barriers can hamper this cost reduction. The defensive measures put in place in response to advanced persistent threats have forced attackers to develop more complex capabilities to remain effective. Although it is still the case that most computer breaches could have been avoided by simple patching, basic measures such as network segmentation, firewall implementation, and secure remote access methods are becoming increasingly common. Furthermore, IT security professionals communicate with management about cyber threats more regularly than they did a decade ago.

At a recent Royal United Services Institute conference, a military cyber commander stated plainly that the main obstacle to conducting effective operations is “people, people, people.” For a government, attracting the brightest minds does not come cheap, especially when those minds can earn much higher salaries in the private sector. Historically, foreign intelligence agencies needed foreign-language professionals; today, they need people who can interpret and write code. Because coding is a highly transferable skill, such people can switch to the private sector easily, making the government’s job of retaining them much harder.

Finally, a cyber weapon program requires continuous production, not just intermittent projects. The malleability of cyberspace gives these weapons a highly transitory nature; they are effective only for a short while. The development of cyber weapons must therefore be unceasing, and resources must be constantly available. Ideally, cyber weapons would be produced on an assembly line, ensuring that when one weapon becomes ineffective, the next can be put to use. However, it is hard to estimate the costs of maintaining a cyber capability: because vulnerabilities can be patched, cyber weapons can lose their effectiveness suddenly, unlike traditional weapons, whose effectiveness decays gradually over time.
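The contrast between gradual and sudden obsolescence can be sketched as two toy value functions: a conventional capability losing value slowly, versus a cyber capability that works at full strength until the underlying vulnerability is patched and then is worth nothing. All numbers here are made up for illustration:

```python
import math

def conventional_value(t: float, half_life: float = 10.0) -> float:
    """Gradual decay: a conventional capability slowly loses value,
    halving every `half_life` years (hypothetical parameter)."""
    return math.exp(-math.log(2) * t / half_life)

def cyber_value(t: float, patch_time: float = 3.0) -> float:
    """Step decay: a cyber capability retains full value until the
    vulnerability it relies on is patched, then drops to zero."""
    return 1.0 if t < patch_time else 0.0

for t in (0, 2, 4, 8):
    print(t, round(conventional_value(t), 2), cyber_value(t))
```

The step function is the budgeting problem in miniature: the patch date is unknown in advance, so replacement capabilities must already be on the assembly line.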

In 2006, sixty-one years after the first atomic bomb was dropped on Hiroshima, Robert Harney and his colleagues published “Anatomy of a Project to Produce a First Nuclear Weapon,” outlining almost 200 tasks required to produce one. Undertaking a similar exercise to identify the costs of and barriers to developing a cyber weapon may be challenging given the rapid pace of technological change, but it should be done nonetheless. Until military strategists, policymakers, and intelligence officials understand the cost drivers for cyber weapons, they will have no basis for claims about whether cyber tools are getting cheaper or who can access them. In other words, unless policymakers better understand the cost of a cyber weapon, they cannot know whether the Islamic State has the capability to develop and deploy one.

This article was first published on the Net Politics Blog of the Council on Foreign Relations.

When Naming Cyber Threat Actors Does More Harm Than Good

Cybersecurity firms, despite their increasing prominence amid greater media attention to Russian and Chinese cyber operations, are often criticized for their biases when identifying advanced persistent threat (APT) actors. Two critiques are most often heard. Security researcher Carr put his finger on one of the sore spots:

“How is it that our largest infosec companies fail to discover APT threat groups from Western nations (w/ @kaspersky as the exception)” (Twitter)

A second issue frequently mentioned is that threat intelligence firms have an incentive to exaggerate the cyber threat: if a firm is able to discover a highly advanced threat, it must mean that the firm has advanced detection capabilities, and, by implication, a product worth buying.

There is a third and potentially more damning charge that can be levelled against cybersecurity firms. Like palaeontologists or astronomers, cybersecurity firms like to name their new discoveries. But unlike in other sciences, the liberal naming of threat actors and incidents causes a host of problems, confusing data collection and making it harder to determine whether a threat group still constitutes a threat.

First, giving different names to the same threat actor is unnecessarily confusing. Cloud Atlas is also named Inception. Saffron Rose also goes by the names Flying Kitten and Ajax Team. Dark Hotel is also called Tapaoux, Luder, or Nemim. Dyncalc is APT12 or Numbered Panda. Hangover is Viceroy Tiger. Mirage is Vixen Panda. Carbanak is Anunak. Sofacy is also called APT28, Pawn Storm, or Fancy Bear. The list goes on. Can you still keep them separate?
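The cross-referencing burden this imposes can be made concrete with a toy lookup table built from the aliases above. Which name counts as “canonical” is an arbitrary choice here, which is precisely the problem:

```python
# Toy alias table drawn from the names listed above; the choice of
# canonical name per group is arbitrary, not an industry standard.
ALIASES = {
    "Inception": "Cloud Atlas",
    "Flying Kitten": "Saffron Rose",
    "Ajax Team": "Saffron Rose",
    "Tapaoux": "Dark Hotel",
    "Luder": "Dark Hotel",
    "Nemim": "Dark Hotel",
    "APT12": "Dyncalc",
    "Numbered Panda": "Dyncalc",
    "Viceroy Tiger": "Hangover",
    "Vixen Panda": "Mirage",
    "Anunak": "Carbanak",
    "APT28": "Sofacy",
    "Pawn Storm": "Sofacy",
    "Fancy Bear": "Sofacy",
}

def canonical(name: str) -> str:
    """Map any known alias to a single canonical group name."""
    return ALIASES.get(name, name)

print(canonical("Fancy Bear"))
```

Every reader of threat reports must maintain some version of this table in their head; policymakers and journalists rarely can.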

Granted, attribution is more difficult in cyberspace. Unlike palaeontologists, cyber threat intelligence firms cannot use carbon dating to identify the origins or age of their discoveries. But that makes it all the more important that firms be cautious with their labelling.

Cybersecurity firms mostly rely on circumstantial evidence, and different firms rely on different data, techniques, and resources to extract this information. New pieces of evidence can increase the plausibility of a given attributive theory or raise doubts about it, but are not decisive by themselves. This means security researchers constantly need to link new pieces of evidence to update their beliefs about a threat actor. By giving the same threat different names, they risk missing the chance to knit the pieces of evidence together.
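One way to think about this updating is in Bayesian terms: each new artifact shifts the probability that two campaigns share an author. A minimal sketch with made-up numbers (nothing here reflects how any particular firm actually scores evidence):

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior probability of hypothesis H (e.g. 'same actor')
    after observing evidence E, via Bayes' rule."""
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Made-up numbers: start agnostic (prior 0.5) on whether two campaigns
# share an author; shared custom code is 4x as likely if they do.
posterior = bayes_update(prior=0.5, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(round(posterior, 2))
```

The point of the sketch is the fragility: if the two campaigns sit under different names in different firms’ taxonomies, the evidence never gets combined and the update never happens.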

Perhaps some in the information security community have little difficulty understanding the diverse threat landscape. However, the confusing labelling creates a barrier for others, particularly policymakers and journalists who do not have the time or knowledge to cross-reference the alphabet soup of labels. When the information security community claims that ‘others’ don’t get it, the accusation may sometimes be fair. But liberal labelling is more likely to widen than to narrow the gap.

The constant urge to (re)name also makes it more likely that cybersecurity firms present old threats as new ones. The same actor may simply have acquired new skills: a hacker group might analyze the code of another cyberattack and realize it could incorporate a component into its own platform. By being too quick to name new threat actors, firms are more likely to lose sight of how existing actors evolve. They are more likely to exaggerate network learning effects (i.e., that one threat actor learned from another) and to underestimate a single threat actor’s ability to learn (i.e., that the same actor acquired new skills).

There are a few steps cybersecurity firms could take to remedy the naming problem. First, if a competitor has already discovered a threat actor, the actor should not be renamed to fit another company’s branding. Even though renaming serves a firm’s interest in promoting its brand, it sows confusion across the cybersecurity community and frustrates efforts to obtain accurate data on incidents and threat actors.

Second, when a firm decides to name a new cyber threat, it should also publish a public threat report about it. Dmitri Alperovitch, co-founder of CrowdStrike, presented a paper in 2014 listing various adversaries. However, CrowdStrike hasn’t published technical reports on many of these APTs, such as Foxy Panda and Cutting Kitten. Additionally, when naming a cyber threat, cybersecurity firms need to be clearer about whether the name refers to a campaign (i.e., a series of activities carried out by a specific actor), a type of malware, an incident, or a specific actor.

Third, the cybersecurity industry should create a set of common criteria for determining when an APT should be classified as such. Currently, it is unclear which criteria companies apply before publicizing and categorizing the discovery of a new threat. For example, Stuxnet is often referred to as a single cyber weapon even though it consists of two separate variants with different targets: one focused on closing the isolation valves of the Natanz uranium enrichment facility, while the other aimed to change the speeds of the rotors in the centrifuges. The second variant was also equipped with four zero-day exploits and used various propagation techniques, whereas the first was not. Finally, some have hypothesized that Stuxnet changed hands a few times before it was deployed. If the target, technique, and threat actor are not the same, why do so many still refer to Stuxnet as one APT?

If cybersecurity firms were a bit more careful with labelling, they would help themselves and others in the field figure out which APTs are new and which ones are extinct.

This article was first published on the Net Politics Blog of the Council on Foreign Relations.