Introduction
Cyber deception is a strategy widely used in cyber conflict. Detecting it in a timely manner remains a challenge.
Caddell (2004) defines two criteria for deception in general: “first, it is intentional; and second, it is designed to gain an advantage for the practitioner”. He further distinguishes two forms of deception in the economic and political arenas, namely fabrication and manipulation. He states, “If false information is created and presented as true, this is fabrication.” “Manipulation, on the other hand, is the use of information which is technically true, but is being presented out of context in order to create a false implication.” Both forms of deception also exist in cyberspace. A hacker may create a piece of malware, attach it to a seemingly innocent email, and send the email to a specific group of potential victims. An unsuspecting user in the group may open the email and click the attachment, and the user’s system is immediately infected with the malware. This is an example of spear phishing, and also an example of fabrication in cyberspace. A hacker may also inject a genuine error message from one application into an unrelated environment, say another application. Every time the second application starts and runs normally, that error message pops up on the screen, confusing the unsuspecting user and making the application appear unusable. This is an example of manipulation. How can these forms of deception be detected in a timely manner? This is the challenge we face.
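To make the fabrication example above concrete, the following is a minimal sketch, in Python, of a naive screen for malicious email attachments of the spear-phishing kind. It is not a method proposed by Caddell or by this article; the blocklist, the list of suspicious extensions, and the file name sample.eml are hypothetical placeholders.

```python
# Minimal sketch of a naive attachment screen for the spear-phishing
# (fabrication) scenario described above. The blocklist, extension list,
# and input file name are illustrative placeholders only.
import email
import hashlib
from email import policy

# Hypothetical blocklist of SHA-256 hashes of known malicious attachments;
# in practice this would be populated from threat-intelligence feeds.
KNOWN_BAD_SHA256 = set()

# Attachment types that rarely belong in routine correspondence.
SUSPICIOUS_EXTENSIONS = (".exe", ".scr", ".js", ".vbs", ".bat", ".ps1")


def screen_message(path):
    """Return a list of warnings about attachments in one .eml file."""
    warnings = []
    with open(path, "rb") as fh:
        msg = email.message_from_binary_file(fh, policy=policy.default)
    for part in msg.iter_attachments():
        name = (part.get_filename() or "").lower()
        payload = part.get_payload(decode=True) or b""
        digest = hashlib.sha256(payload).hexdigest()
        if digest in KNOWN_BAD_SHA256:
            warnings.append(f"{name}: matches a known-bad hash")
        if name.endswith(SUSPICIOUS_EXTENSIONS):
            warnings.append(f"{name}: executable attachment type")
        if name.count(".") > 1:  # e.g. "invoice.pdf.exe"
            warnings.append(f"{name}: possible disguised double extension")
    return warnings


if __name__ == "__main__":
    for warning in screen_message("sample.eml"):
        print("WARNING:", warning)
```

Even such a naive screen illustrates the point developed later in this introduction: the more context a defender has, such as known-bad indicators and knowledge of what attachments are normal, the more likely a fabrication is detected.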
In addition, Caddell (2004) divides deception into active and passive categories. He states, “Put simply, passive deception is designed to hide real intentions and capabilities from an adversary. You are hiding something which really exists. Active deception, on the other hand, is the process of providing an adversary with evidence of intentions and capabilities which you do not, in fact, possess. Here you are showing your enemy something which is not real.” In cyberspace, a honeypot is a good example of passive deception on the defense side. A honeypot appears to be a regular system, but it contains extra functionality, such as logging an attacker’s attack pattern and collecting evidence for prosecution. Using an appropriate mutual-exclusion (mutex) indicator to confuse a zero-day worm during its propagation is a good example of active deception on the defense side. Having seen the indicator, the worm may stop its infection process because it considers the system already infected. This method helps defenders gain valuable time to work out an effective countermeasure.
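The active-deception example can be illustrated with a minimal, Windows-specific Python sketch: the defender pre-creates the named mutex a worm checks before infecting a host, so the worm concludes the host is already infected and moves on. This is not a reference implementation, and the mutex name below is a hypothetical placeholder; a real deployment would use the indicator published for the specific worm being contained.

```python
# Minimal, Windows-specific sketch of the mutex "vaccination" idea: plant
# the infection marker a worm checks for, so it believes the host is
# already infected. The mutex name is a hypothetical placeholder.
import ctypes
import sys


def plant_infection_marker(mutex_name):
    """Create and hold a named mutex so a checking worm sees it as present."""
    if sys.platform != "win32":
        raise OSError("named mutexes are a Windows facility")
    handle = ctypes.windll.kernel32.CreateMutexW(None, False, mutex_name)
    # The marker only exists while some process holds the handle, so a real
    # deployment would keep this process (or a service) running.
    return bool(handle)


if __name__ == "__main__":
    if plant_infection_marker("Global\\Hypothetical-Worm-Infection-Marker"):
        print("infection marker planted; a checking worm should skip this host")
```

Such a marker is only a stop-gap, but that is precisely its value: it buys the defender time to prepare an effective countermeasure, as noted above.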
To deal with deception, Caddell (2004) suggests understanding the “enemy’s intentions and capabilities” and “never relying on a limited number of sources of information or a limited number of collection methodologies”. He notes that “the more one knows, the harder it is for someone to manipulate information out of context. The more one knows, the more likely one will detect a fabrication”. However, he argues, “A comprehensive methodology for dealing with deception will never be written”, as “it is [a] nebulous and ever changing field of virtually infinite proportion”. “We can never be confident we are not being deceived.”