
Cyber References Project

I started my graduate studies a few years ago thinking that not much had been published in the field of cyber conflict. I quickly found that my assumption was wrong when I optimistically began a systematic literature review of ‘all’ the relevant works in the field. It was a project I had to abandon after a few weeks (although I do believe that more reviews like this should be conducted).

Even though it is true that not enough has yet been published in the top academic journals, one can hardly say that people don’t write on ‘cyber’. With relevant readings scattered across journal articles, books, blog posts, news articles, cyber security firm reports, and more, it becomes increasingly difficult to know what’s out there and to build upon earlier insights and arguments published by others.

Whereas this has led some (the Oxford Bibliographies Project and the 2016 State of the Field Conference) to direct efforts towards finding the ‘core’ of the field – focusing on key readings – I have started a complementary ‘Cyber References Project’ with the aim of being much more inclusive.

The database currently includes about 800-1,000 readings (and also lists a few podcasts and documentaries), which I have sorted into 48 categories. The categories are not mutually exclusive. The goal is not to search by author (or title), as conventional search engines do, but to browse the readings by topic.
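
To make the idea of non-exclusive categories concrete, here is a minimal sketch of how such an index could be built; the titles, years, and category names are invented for illustration and do not reflect the actual database.

```python
# A minimal sketch (hypothetical fields, invented entries) of readings carrying
# non-exclusive category tags, indexed for browsing by topic rather than
# searched by author or title.
from collections import defaultdict

readings = [
    {"title": "Reading A", "year": 2013, "categories": ["attribution", "deterrence"]},
    {"title": "Reading B", "year": 2015, "categories": ["attribution", "norms"]},
    {"title": "Podcast C", "year": 2016, "categories": ["norms"]},
]

# Build a category -> titles index; a reading can appear under several categories.
by_category = defaultdict(list)
for reading in readings:
    for category in reading["categories"]:
        by_category[category].append(reading["title"])

print(by_category["attribution"])  # ['Reading A', 'Reading B']
```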

This database includes the references listed in various cyber security course syllabi, the 2016 State of the Field Conference, the Oxford Bibliographies Project, SSRN, Google Scholar, Oxford SOLO, PhD manuscripts, and think-tank search engines.

Where I see this project going: I plan to include another 150+ academic articles and 200+ blog posts in the near future. I also hope to improve the formatting and to sort the current list of readings by year and add further categories. In addition, Olivia Lau maintains a great pool of notes and summaries of key readings in International Relations. It would be great if we could establish something similar for cyber conflict.

Please let me know if readings are missing or categorized incorrectly. Of course, any ideas on how to make this platform easier to use are also very welcome.

When Naming Cyber Threat Actors Does More Harm Than Good

Cybersecurity firms, despite their increasing prominence in light of greater media attention to Russian and Chinese cyber operations, are often criticized for their biases when identifying advanced persistent threat (APT) actors. Two critiques are most often heard. Security researcher Carr put his finger on one of the sore spots:

“How is it that our largest infosec companies fail to discover APT threat groups from Western nations (w/ @kaspersky as the exception)” (Twitter)

A second issue frequently mentioned is that threat intelligence firms have an incentive to exaggerate the cyber threat. If a firm is able to discover a highly advanced threat, the reasoning goes, it must have advanced detection capabilities, and you should buy its product.

There is a third and potentially more damning charge that can be levelled against cybersecurity firms. Like palaeontologists or astronomers, cybersecurity firms like to name their new discoveries. But unlike in other sciences, the liberal naming of threat actors and incidents causes a host of problems, muddying accurate data collection and making it harder to determine whether a threat group still constitutes a threat.

First, giving different names to the same cyber threat is unnecessarily confusing. Cloud Atlas is also named Inception. Saffron Rose also goes by the names Flying Kitten and Ajax Team. Dark Hotel is also called Tapaoux, Luder or Nemim. Dyncalc is APT12 or Numbered Panda. Hangover is Viceroy Tiger. Mirage is Vixen Panda. Carbanak is Anunak. Sofacy is also called APT28, OP Pawn Storm or Fancy Bear. The list goes on. Can you still keep them separate?
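
To illustrate how quickly the aliases pile up, here is a minimal sketch of an alias table that maps each reported name to a single working label, using only the pairs mentioned above; which name is treated as ‘canonical’ is an arbitrary choice made for the example.

```python
# A minimal alias-resolution sketch: map every reported name to one working
# label so reports from different vendors can be cross-referenced. The choice
# of canonical name is arbitrary and purely illustrative.
ALIASES = {
    "Inception": "Cloud Atlas",
    "Flying Kitten": "Saffron Rose",
    "Ajax Team": "Saffron Rose",
    "Tapaoux": "Dark Hotel",
    "Luder": "Dark Hotel",
    "Nemim": "Dark Hotel",
    "APT12": "Dyncalc",
    "Numbered Panda": "Dyncalc",
    "Viceroy Tiger": "Hangover",
    "Vixen Panda": "Mirage",
    "Anunak": "Carbanak",
    "APT28": "Sofacy",
    "OP Pawn Storm": "Sofacy",
    "Fancy Bear": "Sofacy",
}

def canonical(name: str) -> str:
    """Return the working label for a reported threat-actor name."""
    return ALIASES.get(name, name)

print(canonical("Fancy Bear"))  # Sofacy
print(canonical("Anunak"))      # Carbanak
```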

Granted, attribution is more difficult in cyberspace. Unlike palaeontologists, cyber threat intelligence firms can’t use carbon dating to identify the origins or age of their discoveries. But that makes it all the more important that firms are cautious with their labelling.

Cybersecurity firms mostly rely on circumstantial evidence, and different firms rely on different data, techniques and resources to extract this information. New pieces of evidence can increase the plausibility of a given attributive theory or raise doubts about it, but they are not decisive by themselves. This means security researchers constantly need to link new pieces of evidence together to update their beliefs about a threat actor. By giving the same threat different names, they might miss out on knitting those pieces of evidence together.
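
As a toy illustration of what such belief updating can look like, here is a sketch of a single Bayesian update over competing attribution hypotheses; the hypotheses, prior probabilities, and likelihoods are invented for the example and do not describe any real case.

```python
# A toy sketch of evidence-driven belief updating over competing attribution
# hypotheses (Bayes' rule). All numbers below are invented for illustration;
# real attribution judgements are far messier.

# Prior beliefs over who is behind a set of intrusions.
priors = {"Group X": 0.4, "Group Y": 0.4, "Unknown": 0.2}

# How likely each hypothesis is to produce a newly observed artefact,
# e.g. a code-reuse overlap with an earlier, differently named campaign.
likelihoods = {"Group X": 0.7, "Group Y": 0.2, "Unknown": 0.1}

def update(priors, likelihoods):
    """Return posterior probabilities after seeing one piece of evidence."""
    unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

posteriors = update(priors, likelihoods)
print({h: round(p, 2) for h, p in posteriors.items()})
# {'Group X': 0.74, 'Group Y': 0.21, 'Unknown': 0.05}
```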

Perhaps some in the information security community have less difficulty understanding the diverse threat landscape. However, the confusing labelling creates a barrier for others, particularly for policymakers and journalists who do not have the time or knowledge to cross-reference the alphabet soup of labels. When the information security community claims that ‘others’ don’t get it, the accusation might sometimes be a fair one. However, the liberal labelling behavior is more likely to widen the gap than to narrow it.

The constant urge to (re)name also makes it more likely that cybersecurity firms refer to old threats as new ones. The same actor may simply have acquired new skills. A hacker group might, on a given day, have analyzed the code of another cyberattack and realized it could include a certain part in its own platform. By being too quick to name new threat actors, firms are more likely to lose sight of how existing actors have evolved. They are more likely to exaggerate network learning effects (i.e. that one threat actor learned from another actor) and to underestimate a single threat actor’s ability to learn (i.e. that the same actor acquired new skills).

There are a few steps that cybersecurity firms could take to remedy the naming problem. First, if a competitor has already discovered a threat actor, the threat actor shouldn’t be renamed to fit another company’s branding. Even though renaming is in a firm’s interest as a way to promote its brand, it sows confusion across the cybersecurity community and frustrates efforts to obtain accurate data on incidents and threat actors.

Second, when a firm decides to name a new cyber threat, it should also publish a public threat report about it. Dmitri Alperovitch, co-founder of CrowdStrike, presented a paper in 2014 listing various adversaries. However, CrowdStrike hasn’t published technical reports on many of these APTs, such as Foxy Panda and Cutting Kitten. Additionally, when naming a cyber threat, cybersecurity firms need to be clearer about whether the name refers to a campaign (i.e. a series of activities carried out by a specific actor), the type of malware, the incident, or a specific actor.
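
One way to make that distinction explicit is to keep actors, malware families, campaigns, and incidents as separate records that merely reference each other; the schema below is a hypothetical sketch, not an industry standard or any firm’s actual data model.

```python
# A minimal sketch of keeping actor, malware, campaign, and incident apart as
# separate records, so a published name unambiguously refers to one of them.
# Field names and example values are hypothetical.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ThreatActor:
    name: str
    aliases: List[str] = field(default_factory=list)

@dataclass
class MalwareFamily:
    name: str
    used_by: List[str] = field(default_factory=list)   # actor names

@dataclass
class Campaign:
    name: str
    actor: str                                          # who ran the activities
    malware: List[str] = field(default_factory=list)

@dataclass
class Incident:
    victim: str
    campaign: str                                       # which campaign it belonged to

actor = ThreatActor("Example Actor", aliases=["Example Bear"])
family = MalwareFamily("Example RAT", used_by=[actor.name])
campaign = Campaign("Example Campaign 2016", actor=actor.name, malware=[family.name])
incident = Incident(victim="Example Org", campaign=campaign.name)
print(campaign)
```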

Third, the cybersecurity industry should create a set of common criteria to determine when an APT should be classified as such. Currently, it is unclear which criteria companies use before publicizing and categorizing the discovery of a new threat. For example, Stuxnet is often referred to as a single cyber weapon despite the fact that it consists of two separate variants, each with a different target. One focused on closing the isolation valves of the Natanz uranium enrichment facility, while the other aimed to change the speeds of the rotors in the centrifuges. The second variant was also equipped with four zero-day exploits and used various propagation techniques, whereas the first was not. Finally, some have hypothesized that Stuxnet changed hands a few times before it was deployed. If the target, technique, and threat actor are not the same, why do so many still refer to Stuxnet as one APT?

If cybersecurity firms were a bit more careful with labelling, they would help themselves and others in the field figure out which APTs are new and which ones are extinct.

This article was first published on the Net Politics Blog of the Council on Foreign Relations.