Category Archives: cyber security

When Routine Isn’t Enough: Why Military Cyber Commands Need Human Creativity

Former Secretary of Defense Ashton Carter recently published a report on the campaign to destroy ISIL. Particularly notable was what Carter said about the “cyber component” (or lack thereof) of the U.S. efforts:

I was largely disappointed in Cyber Command’s effectiveness against ISIS. It never really produced any effective cyber weapons or techniques. When CYBERCOM did produce something useful, the intelligence community tended to delay or try to prevent its use, claiming cyber operations would hinder intelligence collection. This would be understandable if we had been getting a steady stream of actionable intel, but we weren’t. The State Department, for its part, was unable to cut through the thicket of diplomatic issues involved in working through the host of foreign services that constitute the Internet. In short, none of our agencies showed very well in the cyber fight.

The statement sounds alarm bells about the current organizational efforts of U.S. Cyber Command. In fact, the United States is not the only one struggling. A growing number of countries are said to be establishing military cyber commands or equivalent units to develop offensive cyber capabilities, and they all seem to have their growing pains stemming from the unique nature and requirements of offensive cyber operations.

Carter’s statement primarily refers to interagency problems, for instance, how the use of militarized cyber operations by CYBERCOM may endanger current or future intelligence collection operations by the NSA. But the problems with successfully carrying out offensive cyber operations are deeper and more complicated. Specifically, military cyber commands require individual creativity, which too often is sacrificed on the altar of organizational routines.

Routines are considered to be the oil that keeps government institutions running. In the academic literature, routines are defined as “an executable capability for repeated performance in some context that has been learned by an organization in response to selective pressures.” One benefit of routines is that they provide stability, which in turn leads to predictability. In the cyber domain, where there is already considerable uncertainty and imprecise information, predictability of actions is certainly a welcome asset.

Yet offensive cyber capabilities are inherently based on unpredictability. As the RAND Corporation’s Martin Libicki observes, there is no “forced entry” when it comes to offensive cyber operations. “If someone has gotten into a system from the outside, it is because that someone has persuaded the system to do what its users did not really want done and what its designers believed they had built the system to prevent,” Libicki argues. Thus, to ensure repeated success, one must find different ways to fool a system administrator. Repetition of an established organizational routine is likely to be insufficient when conducting military cyber operations. The command must foster an environment in which operators can depart from routine and nimbly adapt their actions to stay ahead of their adversaries.

More specifically, Jon Lindsay and Erik Gartzke note that “cyber operations alone lack the insurance policy of hard military power, so their success depends on the success of deception.” Deception as a strategy is based on two tactics: dissimulation, or hiding what’s there; and simulation, or showing something that’s not. The cyber weapon Stuxnet, for example, utilized both tactics. Through what is known as a “man-in-the-middle attack,” Stuxnet intercepted and manipulated the input and output signals from the control logic of the nuclear centrifuge system in Natanz, Iran. In this way, it was able to hide its malicious payload (dissimulation) and instead replay a loop of 21 seconds of older process input signals to the control room, suggesting normal operation to the operators (simulation). To ensure that an offensive cyber attack is successful, the attacker needs to constantly find innovative ways to mislead the enemy, which may mean deviating from routines, or crafting routines that permit individuals to make adjustments at their discretion.
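The record-and-replay trick behind this kind of attack can be sketched as a toy model. Everything below is illustrative only: the class, its names, and the values are assumptions for exposition, not drawn from the actual Stuxnet code.

```python
from collections import deque


class ReplayAttack:
    """Toy model of a record-and-replay man-in-the-middle attack:
    first record a window of legitimate sensor readings, then feed
    that recorded loop to the operators while the real process
    diverges from normal behavior."""

    def __init__(self, window: int):
        # Keep only the most recent `window` readings.
        self.recorded = deque(maxlen=window)
        self.replay_index = 0

    def observe(self, reading: float) -> None:
        # Recording phase: capture normal process readings.
        self.recorded.append(reading)

    def intercept(self, real_reading: float) -> float:
        # Replay phase: discard the real reading (dissimulation) and
        # show an old recorded one instead (simulation).
        fake = self.recorded[self.replay_index % len(self.recorded)]
        self.replay_index += 1
        return fake


attack = ReplayAttack(window=3)
for normal in [1.0, 1.1, 1.0]:
    attack.observe(normal)

# The process now misbehaves (reading 9.9), but operators see the loop.
shown = [attack.intercept(9.9) for _ in range(4)]
```

The point of the sketch is the asymmetry: the defender’s view is entirely mediated by software the attacker controls, so “normal” readings prove nothing.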

There is no easy resolution of this dilemma. Few of the mechanisms organizations use to encourage creative behavior can be applied to military cyber commands. Instead, what governments can focus on to foster creativity in these organizations is workforce diversification and purpose creation.

First, a common form of encouragement is to reward risk-takers in the organization. Yet military cyber commands need to be risk-averse and cautious. It is essential for “cyber soldiers” to stick to the rules to avoid escalation and possible violations of the laws of armed conflict, just as it is for more traditional soldiers. Despite the need for unpredictable and deceptive responses, military cyber commands cannot simply try things out and see what happens. Indeed, though offensive cyber capabilities are not inherently indiscriminate, without careful design and deployment there is a high potential for severe collateral damage. The Morris Worm of 1988 is an illustrative case in this regard. Robert Morris “brought the internet to its knees” due to a supposed error in the worm’s spreading mechanism. The worm illustrated the potential for butterfly effects in cyberspace: small changes in code can escalate into large-scale crises.

Similarly, military cyber commands will find it more difficult than private companies to grant autonomy to individuals. The underlying management logic for granting personal autonomy was perhaps most famously spelled out (and radically implemented) by Brazilian entrepreneur Ricardo Semler: Let employees decide how to get something done, and they will naturally find the best way to do it. For cyber operations, while outcomes are important, precisely how the job gets done is equally relevant. After all, unlike most conventional capabilities, the modus operandi of one cyber operation may greatly affect the effectiveness of other operations.

This is partially due to what’s known as the “transitory nature” of cyber weapons. Cyber weapons are often described as having “single-use” capabilities: once a zero-day vulnerability – that is, a publicly undisclosed vulnerability – has been exploited and becomes known to the public, the weapon loses its utility. Although I’ve argued before that this view lacks nuance – in reality it often takes time before patches are installed and vulnerabilities closed, and only a minority of cyber weapons exploit zero-days – the likelihood of successfully accessing the target system does nonetheless diminish after initial use. In other words, the use of a zero-day exploit by one operator may complicate efforts for other operators.

So, what can be done? At a minimum, military cyber commands should make sure they attract a diverse group of people. Recruiting only from within government organizations, as for example the Netherlands supposedly does, should be discouraged. Conventional human resource metrics (e.g., that the candidate should have a university bachelor’s degree, good grades, courses in certain areas, etc.) should be reconsidered too.

We have already seen various encouraging initiatives on this front. The U.S. Army recently launched the cyber direct commissioning program, so (qualified) civilians can now directly apply to become officers. Countries like the United Kingdom, the Netherlands, and Estonia are also setting up cyber reserve units to attract civilians with the right skill set. Yet these programs are not yet widely adopted across states, nor do they tend to extend far enough (the responsibilities of reserve officers are often unclear).

Military cyber commands should also make sure they create an inspiring workplace to capitalize on people’s intrinsic motivation. Senior leaders have generally been good at providing a vision for their cyber command; this is normally expressed as a desire to become a world leader in offensive cyber operations (see, for instance, the UK’s cyber security strategy). They are also explicit about their mission. Yet, hardly ever do they provide purpose: how does the command fit into the big picture, and what is the strategic framework being followed? Jim Ellis, the former commander of U.S. Strategic Command, has noted the shortcomings of the cybersecurity discourse, saying the debate is “like the Rio Grande, a mile wide and an inch deep.” A deeper focus on purpose-driven values is needed to motivate people to enter a field like cyber operations.

As more countries look to get into the business of offensive cyber operations, the inherent tension between the requirements of these operations and the regimented tendencies of national security bureaucracies will become starker and starker. If governments want to bring together different minds, inspire creativity, and maximize human performance, they need to clearly communicate the value of cyber commands to their people.

This article was first published @WarontheRocks

Contesting “Cyber” – Introduction and Part I

By Max Smeets and James Shires. More info about the series here

Introduction

Over the last few decades there has been a proliferation of the term “cyber”, and a commensurate level of inconsistency in its use. This series argues that the inconsistent application of the prefix “cyber” stems not only from confusion, as some scholars and policymakers have proposed, but also from contest. Our goal in this series is not to resolve conceptual disputes, but instead to understand how and why contests occur, and whether, once the lines along which contests occur are identified, resolution is possible.

As the prefix “cyber” has rarely been used alone, we place the concept of cyberspace at the centre of analysis, for two reasons. First, it is considered to be the “elemental” concept in the field, and demarcates the boundaries of relevant technical and social activity through an intuitive geographical metaphor. Second, selecting the concept “cyberspace” for analysis can be considered a least-likely (or least-obvious) study of contest. The attachment of the prefix “cyber” to various nouns has left cyber-related concepts with a variety of underlying normative connotations. On one side, some concepts describe activities or states of affairs that are prima facie undesirable, like “cyber warfare” or “cyber threat”. On the other, various concepts carry a more positive connotation—“cyber democracy” is a good example. The obvious normative loading of such terms makes them likely sites for contest, whereas “cyberspace” seems more neutral. We suggest instead that it is the ominous calm at the heart of the storm, providing an excellent case in which to study the tension surrounding the prefix more broadly.

Over the next six days, we will publish a series of blog posts showing that cyberspace is contested in a number of ways: through its change in connotations from opportunity to threat; through the existence of substantive and implied definitions, with different rhetorical functions; and through competing understandings of the key historical exemplar for cyberspace: ARPANET. We therefore note that the prospects for agreement regarding cyberspace are low. Overall, this presents the choice of what we term, following Hirschman, an ‘exit’ rather than ‘voice’ strategy: using other concepts instead. An initial post in this series was published last Friday at Slate’s Future Tense and can be found here.

PART 1. Cyber: not just a confused but also a contested concept.

Since the early 1990s, the prefix “cyber” has become widespread. As often noted, its use stretches back to Norbert Wiener’s coinage of “cybernetics” from its Greek equivalent in the 1940s. It is similarly canonical to cite novelist William Gibson as creating the “ur” metaphor for this prefix in the early 1980s by combining it with “space”. Almost three decades later, in an interview with The A.V. Club, Gibson argued that “‘cyberspace’ as a term is sort of over. It’s over in the way that after a certain time, people stopped using the prefix ‘-electro’ to make things cool, because everything was electrical. ‘Electro’ was all over the early twentieth century, and now it’s gone. I think ‘cyber’ is sort of the same way”.

In contrast to Gibson’s prediction, a simple automated content analysis using Google Trends indicates that the popularity of the prefix “cyber” has remained stable (with a spike in November each year for “cyber Monday”). There are ever more applications of this prefix, to words such as crime, law, cafe, hate, bullying, attack, war, vandalism, politics, dating, security, and power. Today, more people enter the search term “cyber” into Google than the term “democracy” or “terrorist”. Needless to say, the term “cyber” has also gained in prominence in academia and policymaking.

The proliferation of this prefix has, inevitably, led to substantial inconsistencies in its use. On one level, these contradictions may stem from simple confusion. As Michael Hayden, former director of the CIA and NSA, remarked: “rarely has something been so important and so talked about with less clarity and apparent understanding than this phenomenon.” Scholars and policy-makers, among others, are not always consistent in their own usage of cyber-related concepts, and they sometimes reinterpret the definitions employed by others, especially when given a liberal dose of cross-disciplinary fertilization.

Many hold that such disagreement is primarily caused by the apparently abstruse and multifaceted nature of the phenomenon. For example, in a Foreign Policy article, Stephen Walt notes that “the whole issue is highly esoteric—you really need to know a great deal about computer networks, software, encryption, etc., to know how serious the danger might be,” concluding that “[t]here are lots of different problems being lumped under a single banner, whether the label is ‘cyber-terror’ or ‘cyber-war’.” If this is the case, more research can iron out the lack of clarity surrounding this relatively young concept, and then we can get to the one and only “meaning of the cyber revolution,” as Lucas Kello emphasizes in his recent book (and earlier article). However, in this article series we argue that the inconsistent application of the prefix “cyber” stems not only from confusion, but also from contestation.

In other words, the roots of disagreement run deeper than a mere struggle to absorb the collective knowledge of another discipline; they stem from underlying normative disagreements.

Understanding the nature and extent of this contestation of “cyber” is important for both policy-making and academic research. For policy-makers, the promise of what Joseph Nye Jr. calls “rules of the road” in cyberspace is much diminished if the very domain itself remains in question (see also the UK government strategy). Constructing effective international cyber-governance becomes more difficult—although not impossible—if the scope of what is to be governed is fundamentally disputed.

For academics, if the roots of disagreement are deeper, then faith in a unified understanding of the cyber-issue is utopian, and further investigation of why and how broader political disputes are translated into problems with this proliferating prefix is urgently required.

Here we will explore what it means when we talk about cyber, and address the nature of contestation from various angles.

This article was originally posted @NewAmerica

When Naming Cyber Threat Actors Does More Harm Than Good

Cybersecurity firms, despite their increasing prominence in light of greater media attention to Russian and Chinese cyber operations, are often criticized for their biases when identifying advanced persistent threat (APT) actors. Two critiques are most often heard. Security researcher Jeffrey Carr put his finger on one of the sore spots:

“How is it that our largest infosec companies fail to discover APT threat groups from Western nations (w/ @kaspersky as the exception)” (Twitter)

A second issue frequently mentioned is that threat intelligence firms have an incentive to exaggerate the cyber threat: if a firm discovers a highly advanced threat, the implication goes, it must have advanced detection capabilities, and you should buy its product.

There is a third and potentially more damning charge that can be levelled against cybersecurity firms. Like palaeontologists or astronomers, cybersecurity firms like to name their new discoveries. But unlike in those sciences, the liberal naming of threat actors and incidents causes a host of problems: it muddles accurate data collection and makes it harder to determine whether a threat group still constitutes a threat.

First, giving multiple names to the same threat actor is unnecessarily confusing. Cloud Atlas is also named Inception. Saffron Rose also goes by the names Flying Kitten and Ajax Team. Dark Hotel is also called Tapaoux, Luder, or Nemim. Dyncalc is APT12 or Numbered Panda. Hangover is Viceroy Tiger. Mirage is Vixen Panda. Carbanak is Anunak. Sofacy is also called APT28, OP Pawn Storm, or Fancy Bear. The list goes on. Can you still keep them separate?
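A machine-readable alias table, built from some of the names above, shows what cross-referencing this alphabet soup involves. The `ALIASES` table and `resolve` helper are a hypothetical illustration, not an existing industry tool, and the table covers only a handful of the groups mentioned.

```python
# Hypothetical alias table; a real one would need constant curation.
ALIASES = {
    "Sofacy": {"APT28", "OP Pawn Storm", "Fancy Bear"},
    "Cloud Atlas": {"Inception"},
    "Saffron Rose": {"Flying Kitten", "Ajax Team"},
    "Dark Hotel": {"Tapaoux", "Luder", "Nemim"},
}

# Invert the table so any reported name maps to one canonical name.
CANONICAL = {name: name for name in ALIASES}
for canonical, aliases in ALIASES.items():
    for alias in aliases:
        CANONICAL[alias] = canonical


def resolve(name: str) -> str:
    """Map any reported name to its canonical group name.

    Unknown names resolve to themselves, since there is nothing
    to merge them with yet."""
    return CANONICAL.get(name, name)
```

Even this trivial lookup only works if every firm publishes which of its names correspond to which groups; the absence of exactly that mapping is the problem described above.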

Granted, attribution is more difficult in cyberspace. Unlike palaeontologists, cyber threat intelligence firms can’t use carbon dating to identify the origins or age of their discoveries. But that makes it all the more important that firms are cautious with their labelling.

Cybersecurity firms mostly rely on circumstantial evidence, and different firms rely on different data, techniques, and resources to extract this information. New pieces of evidence can increase the plausibility of a given attributive theory or raise doubts about it, but are not decisive by themselves. This means security researchers constantly need to link (new) pieces of evidence together to update their beliefs about a threat actor. By giving the same threat different names, they might miss out on knitting the pieces of evidence together.
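This evidence-by-evidence belief updating can be made concrete with a minimal Bayesian sketch. The function and all probability values below are invented for illustration; real attribution judgments are far messier than a two-hypothesis update.

```python
def update_belief(prior: float,
                  p_e_given_h: float,
                  p_e_given_not_h: float) -> float:
    """One Bayesian update: revise the probability that a hypothesis H
    (e.g. 'this incident was carried out by group Y') is true after
    observing one new piece of circumstantial evidence.

    p_e_given_h:     probability of seeing the evidence if H is true.
    p_e_given_not_h: probability of seeing it if H is false."""
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence


# Start agnostic, then fold in two (made-up) pieces of evidence:
# e.g. a shared code fragment, then an overlapping C2 infrastructure.
belief = 0.5
for p_h, p_not_h in [(0.8, 0.2), (0.7, 0.4)]:
    belief = update_belief(belief, p_h, p_not_h)
```

The mechanism makes the naming problem visible: if the two pieces of evidence are filed under two different group names, neither update happens, and each firm is left with a weaker, fragmented belief.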

Perhaps some in the information security community have less difficulty understanding the diverse threat landscape. However, the confusing labelling creates a barrier for others, particularly policymakers and journalists who do not have the time or knowledge to cross-reference the alphabet soup of labels. When the information security community claims that ‘others’ don’t get it, the accusation might sometimes be a fair one. However, the liberal labelling behavior is more likely to widen than narrow the gap.

The constant urge to (re)name also makes it more likely that cybersecurity firms refer to old threats as new ones. The same actor may simply have acquired new skills. A hacker group might analyze the code of another cyberattack and realize it could include a certain part in its own platform as well. By being too quick to name new threat actors, firms are more likely to lose sight of how actors might have evolved. They are more likely to exaggerate network learning effects (i.e., that one threat actor learned from another actor) and underestimate a single threat actor’s ability to learn (i.e., that the same actor acquired new skills).

There are a few steps cybersecurity firms could take to remedy the naming problem. First, if a competitor has already discovered a threat actor, the threat actor shouldn’t be renamed to fit another company’s branding. Even though renaming is in a firm’s interest to promote its brand, it sows confusion across the cybersecurity community and frustrates efforts to obtain accurate data on incidents and threat actors.

Second, when a firm decides to name a new cyber threat, it should also publish a public threat report about it. Dmitri Alperovitch, co-founder of Crowdstrike, presented a paper in 2014 listing various adversaries. However, Crowdstrike hasn’t published technical reports on many of these APTs—like Foxy Panda and Cutting Kitten. Additionally, when naming a cyber threat, cybersecurity firms need to be clearer about whether the name refers to a campaign (e.g., a series of activities carried out by a specific actor), a type of malware, an incident, or a specific actor.

Third, the cybersecurity industry should create a set of common criteria to determine when an APT should be classified as such. Currently, it is unclear which criteria companies use before publicizing and categorizing the discovery of a new threat. For example, Stuxnet is often referred to as a single cyber weapon despite the fact that it is two separate entities, each with different targets. One focused on closing the isolation valves of the Natanz uranium enrichment facility and the other aimed to change the speeds of the rotors in the centrifuges. The second one was also heavily equipped with four zero-day exploits and used various propagation techniques, whereas the first one did not. Finally, some have hypothesized that Stuxnet changed hands a few times before it was deployed. If the target, technique, and threat actor are not the same, why do so many still refer to Stuxnet as one APT?
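One way such common criteria could work in practice is sketched below. The three criteria (target, technique, actor) and all field values are assumptions for illustration, derived only from the Stuxnet discussion above; they are not an existing industry standard.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Incident:
    """Minimal incident record under three assumed criteria."""
    target: str
    technique: str
    actor: str


def same_apt(a: Incident, b: Incident) -> bool:
    """Under these illustrative criteria, two incidents belong to the
    same APT only if target, technique, and actor all match."""
    return (a.target, a.technique, a.actor) == (b.target, b.technique, b.actor)


# The two Stuxnet variants described above differ on at least
# target and technique (the actor behind each remains uncertain).
stuxnet_a = Incident(
    target="Natanz isolation valves",
    technique="valve manipulation",
    actor="unknown",
)
stuxnet_b = Incident(
    target="centrifuge rotors",
    technique="rotor speed changes, four zero-day exploits",
    actor="unknown",
)
```

Under even this crude test, the two Stuxnet variants would not be classified as a single APT, which is precisely the point of the question above.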

If cybersecurity firms were a bit more careful with labelling, they would help themselves and others in the field find out which APTs are new and which ones are extinct.

This article was first published on the Net Politics Blog of the Council on Foreign Relations.