Below you can find the abstract of the paper I’ll present at the 9th International Conference on Cyber Conflict (CyCon 2017) in Tallinn, Estonia. The paper will be published after the conference.
Organizational integration has become a key agenda point for policymakers as governments continue to reshape and create new organizations to address the cyber threat. Passing references to this topic, however, far outnumber systematic treatments. The aim of this paper is to investigate the potential effects of organizational integration of offensive cyber capabilities (OIOCC). I argue that OIOCC may lead to three key benefits: enhanced interaction efficiency, knowledge transfer and improved resource allocation. There are, however, several negative effects of integration too, which have so far received little attention. OIOCC may intensify the cyber security dilemma, increase costs in the long run, and impel what I call 'cyber mission creep'. Though the benefits seem to outweigh the risks, I note that ignoring the potential negative effects may be dangerous, as activity is more likely to go beyond the foreign-policy goals of governments and intrusions are more likely to trigger a disproportionate response by the defender.
I was part of a great panel at the Global Cyberspace Cooperation Summit VII, organized by the East West Institute.
The summit brought together policymakers, business leaders and technical experts to discuss the most pressing issues in international cyberspace, including securing the Internet of Things, balancing encryption and lawful access to data, developing norms of behavior, improving the security of information and communications technology (ICT) and strengthening the resilience of critical infrastructure.
If you’d like to know more about cyber and dinosaurs (!), start at 38.00 min. Also some great points on cyber risk, non-state actors in cyberspace and more from the other panelists.
More at http://cybersummit.info/.
The abstract of my forthcoming article ‘A matter of time: On the transitory nature of cyberweapons’ in the Journal of Strategic Studies:
This article examines the transitory nature of cyberweapons. Shedding light on this highly understudied facet is important both for grasping how cyberspace affects international security and for supporting policymakers' efforts to make accurate decisions regarding the deployment of cyberweapons. First, laying out the life cycle of a cyberweapon, I argue that these offensive capabilities differ both in 'degree' and in 'kind' from other weapons with respect to their temporary ability to cause harm or damage. Second, I develop six propositions which indicate that not only the technical features inherent to the different types of cyber capabilities (that is, the type of exploited vulnerability, access and payload) but also offender and defender characteristics explain differences in transitoriness between cyberweapons. Finally, drawing out the implications, I show that the transitory nature of cyberweapons benefits great powers, changes the incentive structure for offensive cyber cooperation and induces a different funding structure for (military) cyber programs compared with conventional weapons programs. I also note that the time-dependent dynamic underlying cyberweapons potentially explains their limited deployment compared to espionage capabilities.
Can a non-state actor take down critical infrastructure with a cyberattack? If it is not possible today, will it be possible in the future? Experts disagree about the current capabilities of non-state actors in cyberspace, let alone about what those actors will be able to do in the future.
There is debate within the cybersecurity community and academia over whether cyber weapons are getting cheaper and thus coming within the reach of the self-proclaimed Islamic State or other non-state groups. Although there is some general consensus that offensive cyber operations will be less expensive in the future, there is very little understanding of what influences the cost of a cyber weapon. Making sense of the inputs and the defensive environment that drive the cost of a cyber weapon is essential to understanding which actors, whether state, non-state, or criminal, will attain what kinds of cyber capability in the future.
There are four processes that make cyber weapons cheaper. First, labor becomes more efficient; attackers become more dexterous in that they spend less time learning, experimenting, and making mistakes in writing code. The observation has been made that Iranian cyber activities are not necessarily the most sophisticated. Yet, since the Shamoon virus wiped the hard drives of 30,000 workstations at Saudi Aramco in 2012, there have been significant improvements in their coding. Whereas Shamoon contained at least four significant coding errors, newer malware seems to be more carefully designed.
Second, developers standardize their malware development process and become more specialized. Some parts of cyber weapons have become increasingly standardized, such as exploit tool kits, leading to an increase in efficiency. The growth of offensive cyber capabilities in militaries allows for greater specialization in cyber weapon production. The U.S. Cyber Command now has 133 teams in operation, making it easier to dedicate specialized units to specific types of cyber operations—even if these units need to be integrated within a general force structure. According to one report, Russia was able to do the same thing for its cyber campaigns against Ukraine.
Third, reusing and building upon existing malware tools allows attackers to learn to produce cyber weapons more cost effectively. The wiper cases Groovemonitor (2012), Dark Seoul (2013), and Destover (2014) are illustrative of this process. Actors who seem to have relatively limited resources have in recent years been getting more bang for their buck.
Fourth, there are shared experience effects, which allow lessons from one piece of malware to shed light on other offensive capabilities. Cyber weapons are generally part of a larger collection of capabilities, sharing vulnerabilities, exploits, propagation techniques, and other features. Stuxnet's 'father', for example, is thought to be the USB worm Fanny, and Stuxnet has also been linked to espionage platforms like Duqu, Flame, miniFlame, Gauss, and Duqu 2.0.
In sum, many of the drivers that can make cyber weapons cheaper come from ‘experience’ and ‘learning curve’ effects, where malware developers learn from the work of others.
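One common way to think about such experience effects (my own illustration, not part of the original argument) is Wright's learning curve, under which unit cost falls by a fixed fraction each time cumulative output doubles. A minimal sketch, with entirely notional numbers:

```python
import math

def unit_cost(first_unit_cost: float, units_produced: int, learning_rate: float) -> float:
    """Cost of the nth unit under Wright's learning curve.

    learning_rate is the fraction of cost retained per doubling of
    cumulative output (e.g. 0.8 means a 20% cost reduction each time
    production doubles).
    """
    b = -math.log(learning_rate, 2)  # progress exponent
    return first_unit_cost * units_produced ** (-b)

# Notional example: if a first capability costs 100 (arbitrary units) and
# each doubling of experience cuts costs by 20%, the 8th (three doublings)
# costs 100 * 0.8**3 = 51.2.
print(round(unit_cost(100, 8, 0.8), 1))  # 51.2
```

Whether malware development actually follows such a curve is an open empirical question; the point of the model is only that cumulative experience, not calendar time, drives the cost decline.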
Although attackers might rejoice at the prospect of weapons getting cheaper, there are significant barriers that can hamper the cost reduction. The defensive measures put in place as a result of advanced persistent threats have forced attackers to develop more complex capabilities to remain effective. Although it is still the case that most computer breaches could have been avoided by simple patching, basic measures such as network segmentation, firewall implementation, and the use of secure remote access methods are becoming increasingly common. Furthermore, IT security professionals communicate more regularly with management about cyber threats than they did a decade ago.
At a recent Royal United Services Institute conference, a military cyber commander clearly stated that the main problem for conducting effective operations is “people, people, people.” For a government, attracting the brightest minds does not come cheap—especially when a person has the opportunity to work in the private sector for a much higher salary. Historically, foreign intelligence agencies have needed foreign language professionals. Today, they need people able to interpret and write code. However, since coding is a highly transferable skill, these people are able to switch to the private sector easily—making the government’s job of retaining them much harder.
Finally, a cyber weapon program requires continuous production, not just intermittent projects. The malleability of cyberspace gives these weapons a highly transitory nature; they're only effective for a short while. Therefore, the development of cyber weapons must be unceasing and resources must be constantly available. Ideally, cyber weapons would be produced on an assembly line, ensuring that when one weapon becomes ineffective, the next can be put to use. However, it is hard to estimate the costs of maintaining a cyber capability. Because vulnerabilities can be patched, cyber weapons can suddenly lose their effectiveness, unlike traditional weapons, whose effectiveness decays gradually over time.
In 2006, sixty-one years after the first atomic bomb was dropped on Hiroshima, Robert Harney and his colleagues published “Anatomy of a Project to Produce a First Nuclear Weapon.” They outlined almost 200 tasks required to produce a nuclear weapon. Undertaking a similar exercise to identify the costs and barriers to the development of a cyber weapon may be challenging considering the rapid pace of technological change, but it should be done nonetheless. Until military strategists, policymakers and intelligence officials understand the cost drivers for cyber weapons, they will not have any basis to claim whether cyber tools are getting cheaper or who can access them. In other words, unless policymakers have a better understanding of the cost of a cyber weapon, they won’t be able to know whether the Islamic State has the capability to develop and deploy one.
This article was first published on the Net Politics Blog of the Council on Foreign Relations.
Cybersecurity firms, despite their increasing prominence in light of greater media attention to Russian and Chinese cyber operations, are often criticized for their biases when identifying advanced persistent threat (APT) actors. Two critiques are most often heard. Security researcher Carr put his finger on one of the sore spots:
“How is it that our largest infosec companies fail to discover APT threat groups from Western nations (w/ @kaspersky as the exception)” (Twitter)
A second issue frequently mentioned is that threat intelligence firms have an incentive to exaggerate the cyber threat. If a firm discovers a highly advanced threat, the implication is that it has advanced detection capabilities and that you should buy its product.
There is a third and potentially more damning charge that can be levelled against cybersecurity firms. Like palaeontologists or astronomers, cybersecurity firms like to name their new discoveries. But unlike in those sciences, the liberal naming of threat actors and incidents causes a host of problems: it hampers accurate data collection and makes it harder to determine whether a threat group still constitutes a threat.
First, giving different names to the same threat actor or incident is unnecessarily confusing. Cloud Atlas is also named Inception. Saffron Rose also goes by the name Flying Kitten and Ajax Team. Dark Hotel is also called Tapaoux, Luder or Nemim. Dyncalc is APT12 or Numbered Panda. Hangover is Viceroy Tiger. Mirage is Vixen Panda. Carbanak is Anunak. Sofacy is also called APT28, OP Pawn Storm or Fancy Bear. The list goes on. Can you still keep them separate?
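To make the cross-referencing burden concrete, the alias tangle above can be sketched as a lookup table. This is purely illustrative; the choice of canonical name for each group is arbitrary on my part, and a real mapping would be far larger:

```python
# Illustrative alias table built from the examples above. Which label counts
# as "canonical" is an arbitrary choice here; the point is how much
# cross-referencing the inconsistent vendor labels force on a reader.
ALIASES = {
    "Inception": "Cloud Atlas",
    "Flying Kitten": "Saffron Rose",
    "Ajax Team": "Saffron Rose",
    "Tapaoux": "Dark Hotel",
    "Luder": "Dark Hotel",
    "Nemim": "Dark Hotel",
    "APT12": "Dyncalc",
    "Numbered Panda": "Dyncalc",
    "Viceroy Tiger": "Hangover",
    "Vixen Panda": "Mirage",
    "Anunak": "Carbanak",
    "APT28": "Sofacy",
    "OP Pawn Storm": "Sofacy",
    "Fancy Bear": "Sofacy",
}

def canonical(name: str) -> str:
    """Resolve a reported label to its canonical group name."""
    return ALIASES.get(name, name)

# Two reports using different labels turn out to describe the same group.
print(canonical("Fancy Bear") == canonical("APT28"))  # True
```

Every firm that coins a fresh name effectively obliges everyone else to maintain a table like this.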
Granted, attribution is more difficult in cyberspace. Unlike palaeontologists, cyber threat intelligence firms can’t use carbon dating to identify the origins or age of their discoveries. But that makes it all the more important that firms are cautious with their labelling.
Cybersecurity firms mostly rely on circumstantial evidence, and different firms rely on different data, techniques and resources to extract this information. New pieces of evidence can increase the plausibility of a given attributive theory or raise doubts about it, but are not decisive by themselves. It means security researchers constantly need to link (new) pieces of evidence to update their beliefs about a threat actor. By giving the same threat different names, they might miss out on knitting the pieces of evidence together.
Perhaps some in the information security community have less difficulty understanding the diverse threat landscape. However, the confusing labelling creates a barrier for others, particularly policymakers and journalists who do not have the time or knowledge to cross-reference the alphabet soup of labels. When the information security community claims that 'others' don't get it, the accusation might sometimes be a fair one. However, the liberal labelling behavior is more likely to widen the gap than narrow it.
The constant urge to (re)name also makes it more likely that cybersecurity firms refer to old threats as new ones. The same actor may simply have acquired new skills. A hacker group might one day have analyzed the code of another cyberattack and realized it could incorporate a certain part into its own platform. By being too quick to name new threat actors, firms are more likely to lose sight of how existing actors have evolved. They are more likely to exaggerate network learning effects (i.e., that one threat actor learned from another) and underestimate a single threat actor's ability to learn (i.e., that the same actor acquired new skills).
There are a few steps that cybersecurity firms could take to remedy the naming problem. First, if a competitor has already discovered a threat actor, the threat actor shouldn't be renamed to fit another company's branding. Even though renaming serves a firm's interest in promoting its brand, it sows confusion across the cybersecurity community and frustrates efforts to obtain accurate data on incidents and threat actors.
Second, when a firm decides to name a new cyber threat, it should also publish a public threat report about it. Dmitri Alperovitch, co-founder of Crowdstrike, presented a paper in 2014 listing various adversaries. However, Crowdstrike hasn’t published any technical reports on many of these APTs—like Foxy Panda and Cutting Kitten. Additionally, when naming a cyber threat, cybersecurity firms need to be clearer whether it refers to a campaign (e.g. a series of activities carried out by a specific actor), the type of malware, the incident or a specific actor.
Third, the cybersecurity industry should create a set of common criteria to determine when an APT should be classified as such. Currently, it is unclear which criteria companies use before publicizing and categorizing the discovery of a new threat. For example, Stuxnet is often referred to as a single cyber weapon despite the fact that it is two separate entities, each with different targets. One focused on closing the isolation valves of the Natanz uranium enrichment facility and the other aimed to change the speeds of the rotors in the centrifuges. The second one was also heavily equipped with four zero-day exploits and used various propagation techniques, whereas the first one did not. Finally, some have hypothesized that Stuxnet changed hands a few times before it was deployed. If the target, technique, and threat actor are not the same, why do so many still refer to Stuxnet as one APT?
If cybersecurity firms were a bit more careful with labelling, they would help themselves and others in the field find out which APTs are new and which ones are extinct.
This article was first published on the Net Politics Blog of the Council on Foreign Relations.