Modeling rules of engagement in the net

I have just received a complimentary copy of the book “The Ethics of Information Warfare”, edited by Luciano Floridi and Mariarosaria Taddeo and published by Springer in the Law, Governance and Technology Series (of which I am a member of the scientific advisory board). While information ethics is not part of my current research interests, the title captured my attention.

The authors use the term ‘information warfare’ in a twofold sense (different from previous common uses of the term): the provision of unmanned weapons to be deployed on the battlefield (e.g. drones), and the creation of a radically new battlefield, the “cyberdomain”, where warfare is waged with software tools. The latter in particular is, in my opinion, a genuine field for Web (Internet) science.

[Image: cyberwarfare, by manfrancisco]

Cyberwarfare, information warfare or cyberdefence: all are catchy terms for a new phenomenon which is, again, the rebirth of a human activity in a new domain, the Internet. And in this case, the activity is war (in the broadest sense imaginable). But maybe it is not “war as usual” but something new. Floridi and Taddeo give some interesting ideas on this in their introductory chapter (emphasis is mine):

[Information Warfare] provides the means to carry out war in a completely different manner. The changes determined by IW are of astounding importance as they concern both the way the military and politicians consider and wage war, and the way war is perceived by the civil society. […] IW is very powerful and potentially highly disruptive. However, unlike traditional warfare, IW is potentially bloodless, cost effective, and does not require military expertise. In short, ICTs have modified the costs of war […].

A consequence of that cost effectiveness might be a potential increase in the number of conflicts and casualties. Also, cyberspace has the potential to make war pervasive, easily reaching personal computers and mobile devices. Further, tracing the responsibility for acts of war is in many cases challenging or impossible in pragmatic terms. Notably, no government has publicly declared responsibility for a cyberattack, violating one of the classic conditions of jus ad bellum (“right to war”). So, are there rules that govern behaviour in cyberwar? Well, there are no specific laws today, but there have always been laws and rules at least restricting acceptable behaviour for the military (jus in bello), nowadays called “rules of engagement”.

However, the new cyberdomain requires a rethinking of these rules of engagement (ROE), as initiated by the Tallinn Manual, and this is where cyber-ethics come into play. Having worked on formal models to capture rules of engagement for decision making, I know how difficult it is to make them clear and concise, especially in the changing and complex world of information security. In the words of Lt. Gen. Ricardo S. Sanchez (about the joint operations in Iraq, in his book Wiser in Battle):

The restrictions were so complex that I had to carry a five-page spreadsheet listing all the countries, their rules of engagement, and who was authorized to do what.

So the question is to what extent new non-binding studies such as the Tallinn Manual are able to account for the subtleties of cyber-strikes. And that is a matter of clarity and depth. Understanding rules of engagement requires first clarifying the ontology (in the sense of conceptualization) of the universe of discourse. The Tallinn Manual does a good job of laying out the foundations for such an ontology. Let’s take a look at an example.

Rule 11 “Definition of Use of Force” says:

A cyber operation constitutes a use of force when its scale and effects are comparable to non-cyber operations rising to the level of a use of force.

This says that uses of force are actions of the “cyber operation” kind, defined in parallel to their non-cyber counterparts. The problem comes with the demarcation criteria:

‘scale and effects’ is a shorthand term that captures the quantitative and qualitative factors to be analysed in determining whether a cyber operation qualifies as a use of force.

That is the challenge in defining use of force. There is a bunch of quantitative and qualitative factors involved, and weighing them requires human judgement, so the determination cannot be made by a computer, or by a human following rules that require no interpretation. However, the text of rule 11 then goes through a number of definitions and, in many cases, gives examples. This is important: the definition of what counts as a use of force is not provided in a form that can be formalized in a knowledge-based system, but what is possible is the representation of the particular cases. This is an example:

In Nicaragua, the International Court of Justice found that arming and training a guerrilla force that is engaged in hostilities against another State qualified as a use of force. Therefore, providing an organized group with malware and the training necessary to use it to carry out cyber attacks against another State would also qualify.

So, the case of arming a hostile organized group with weapons is a use of force. Here is a sketch, in OWL Manchester syntax, of some of the definitions in rule 11. Since Manchester syntax class frames cannot state a general class inclusion with a complex left-hand side, I introduce the named class Arming_of_hostile_group to express the last axiom; the individual Blue stands for the targeted State, in the usual blue/red convention:


Class: Action
...
Class: Weapon
...
Class: Malware
    SubClassOf: Weapon
...
Class: Organized_group
...
Class: Hacktivist_group
    SubClassOf: Organized_group
...
Class: Guerrilla_force
    SubClassOf: Organized_group
...
Class: Cyber_operation
    SubClassOf: Action
...
ObjectProperty: hasObject
ObjectProperty: hasBeneficiary
ObjectProperty: engagesWith
...
Individual: Blue
...
Class: Arming_action
    SubClassOf: Action and (hasObject some Weapon)
...
Class: Use_of_force
    SubClassOf: Cyber_operation
    DisjointWith: Economic_coercion
    DisjointWith: Political_coercion
    DisjointWith: Funding_action and (hasBeneficiary some Hacktivist_group)
...
Class: Arming_of_hostile_group
    EquivalentTo: Arming_action and (hasBeneficiary some (Organized_group and (engagesWith value Blue)))
    SubClassOf: Use_of_force

The definitions in the code sketch represent some of the knowledge that can be interpreted from the text of the manual. For example, the sketch captures that malware is a kind of weapon, and that it therefore falls under the definition of arming actions. The model would require further clarification of the role of weapons in an arming action, but that is beyond the simple illustration I wanted to sketch here. Obviously, there is also a need to frame the concrete terms and predicates within some sort of commonsense or upper ontology such as Cyc. A small example of the kind of inference the sketch supports is given below.
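To make that inference concrete, here is a hypothetical set of individuals mirroring the Nicaragua example above (the names Op_1, Group_X and CyberToolkit_1 are mine, purely for illustration, not terms from the manual):

Individual: CyberToolkit_1
    Types: Malware

Individual: Group_X
    Types: Guerrilla_force
    Facts: engagesWith Blue

Individual: Op_1
    Types: Arming_action
    Facts: hasObject CyberToolkit_1,
           hasBeneficiary Group_X

Given these assertions, a standard description logic reasoner (e.g. HermiT or Pellet) infers that Group_X is an Organized_group engaged with Blue, so Op_1 satisfies the definition of Arming_of_hostile_group and is therefore classified as a Use_of_force (and, in turn, as a Cyber_operation). This is the kind of case-based support for jus in bello decisions that such a model could provide.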

In summary, is it possible to build models to aid in decision making for jus in bello in cyberspace? It seems it is (to some extent).
