A Universal Attack Against Histogram-Based Image Forensics


Mauro Barni, Marco Fontani, Benedetta Tondi
Copyright: © 2013 |Pages: 18
DOI: 10.4018/jdcf.2013070103

Abstract

In this paper the authors propose a universal image counter-forensic scheme that counters any detector based on the analysis of the image histogram. Being universal, the scheme does not require knowledge of the detection algorithms available to the forensic analyst, and can be used to conceal traces left in the histogram of the image by any processing tool. Instead of adapting the histogram of the image to fit some statistical model, the proposed scheme makes it practically identical to the histogram of an untouched image by solving an optimization problem. In doing this, the perceptual similarity between the processed and counter-attacked image is preserved to a large extent. The validity of the scheme in countering both contrast-enhancement and splicing detection is assessed through experimental validation.
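As a rough illustration of the kind of histogram manipulation involved, the sketch below (plain NumPy, hypothetical function name) performs classic histogram specification: it remaps the grey levels of a processed image so that its histogram approximates that of a reference, untouched image. The scheme proposed in the paper instead chooses the target histogram and the pixel remapping by solving an optimization problem that also bounds the perceptual distortion, so this is only a simplified stand-in, not the authors' method.

```python
import numpy as np

def match_histogram(processed, reference):
    """Remap the grey levels of `processed` (uint8 image) so that its
    histogram approximates the histogram of `reference`, using classic
    histogram specification via cumulative distributions."""
    # Empirical histograms over the 256 grey levels
    h_proc, _ = np.histogram(processed, bins=256, range=(0, 256))
    h_ref, _ = np.histogram(reference, bins=256, range=(0, 256))

    # Cumulative distribution functions, normalised to [0, 1]
    cdf_proc = np.cumsum(h_proc) / h_proc.sum()
    cdf_ref = np.cumsum(h_ref) / h_ref.sum()

    # For each grey level of the processed image, pick the reference
    # level whose cumulative probability is closest from above
    mapping = np.searchsorted(cdf_ref, cdf_proc).clip(0, 255)
    return mapping[processed.astype(np.uint8)].astype(np.uint8)
```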
Article Preview

Previous Works in Counter-Forensics

Counter-forensics was first introduced in a seminal work by Kirchner and Böhme (2007), which presented the concept of fighting against image forensics together with a practical application, namely a method for resampling an image without introducing pixel correlations. The same work also introduced a simple yet important taxonomy, distinguishing between post-processing and integrated techniques, and between targeted and universal ones. Briefly speaking, counter-forensic techniques in the post-processing class consist of two steps: first the attacker performs the tampering, thus obtaining the desired modified content; then she processes the content so as to conceal or erase the detectable traces left during the first step. By contrast, an integrated counter-forensic technique modifies the image so that, by construction, it does not introduce detectable traces. Of course, developing integrated methods is much harder in most cases. A second distinction is based on the target of the counter-forensic method: if it aims at removing the trace searched for by a specific detector, then it belongs to the targeted family. A universal method, instead, attempts to preserve as many statistical properties as possible, so as to make the processed image hard to detect even with tools unknown to the adversary (AD).

Cao et al. (2010) proposed a targeted method to hide the traces of contrast enhancement, a common image processing operator that leaves traces in the image histogram, so as to deceive the detector developed by Stamm and Liu (2008). The method is based on the introduction of local random dithering in the enhancement step, so it can be classified as an integrated attack. Nevertheless, the authors also mention the possibility of turning this attack into a post-processing one. The gist of the idea is sketched below.
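A minimal sketch, assuming gamma-based enhancement with uniform dither added before rounding: the dither spreads pixels across neighbouring grey levels, so the output histogram no longer shows the isolated peaks and gaps that histogram-based enhancement detectors look for. The gamma value, the dither distribution and the function name are illustrative assumptions and do not reproduce Cao et al.'s exact dithering scheme.

```python
import numpy as np

def enhance_with_dither(img, gamma=0.7, dither_strength=1.0, rng=None):
    """Gamma-based contrast enhancement with small random dithering added
    before quantisation, so that the output histogram does not show the
    peak/gap pattern that pointwise enhancement normally leaves."""
    rng = np.random.default_rng() if rng is None else rng
    x = img.astype(np.float64) / 255.0
    enhanced = 255.0 * np.power(x, gamma)
    # Random dither breaks the deterministic many-to-one / one-to-many
    # mapping between input and output grey levels
    dither = rng.uniform(-dither_strength, dither_strength, size=img.shape)
    return np.clip(np.round(enhanced + dither), 0, 255).astype(np.uint8)
```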

Several works have been proposed by Stamm et al. to hide traces of JPEG compression (Stamm, Tjoa, Lin, & Liu, 2010; Stamm et al., 2010); these also make it possible to hide some kinds of tampering that are revealed through the side effects of JPEG compression. The basic idea is to remove the most important trace left by JPEG compression in the image, namely the quantization of the DCT coefficients. Since this goal is pursued by adding noise to remove the discontinuities in the DCT coefficient values, these methods can be regarded as post-processing counter-forensic attacks. However, introducing noise obviously has a cost in terms of image quality, as Valenzise et al. (2011) show.
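The underlying idea can be sketched as follows: after decompression, the DCT coefficients lie on multiples of the quantization step, producing a comb-shaped coefficient histogram; adding noise confined to each quantization bin smooths that histogram again. The published methods shape the noise according to a model of the original coefficient distribution (e.g., Laplacian) rather than a plain uniform one, so the code below, with its assumed names and uniform dither, is only a simplification.

```python
import numpy as np

def fill_quantization_gaps(dct_coeffs, q_step, rng=None):
    """Given DCT coefficients of a decompressed JPEG (clustered on
    multiples of the quantisation step `q_step`), add noise confined to
    each quantisation bin so the comb-shaped histogram becomes smooth."""
    rng = np.random.default_rng() if rng is None else rng
    # Uniform noise within each bin: every coefficient is spread back
    # over the interval it was collapsed from during quantisation.
    noise = rng.uniform(-q_step / 2.0, q_step / 2.0, size=dct_coeffs.shape)
    return dct_coeffs + noise
```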

Counter-forensic methods for video have also been proposed: Stamm and Liu (2011) proposed a method that allows frames to be removed from, or added to, an MPEG video without introducing statistical artifacts in the prediction error, the trace exploited by the detector introduced by Wang and Farid (2006) to detect video doctoring.

With the goal of devising a more theoretical and general formulation of counter-forensics, Stamm et al. (2012) and Barni and Tondi (2013) recently proposed frameworks based on game theory. Stamm et al. (2012) propose a framework to evaluate the probability that a forgery is detected, assuming that both the AD and the forensic analyst (FA) play their optimal strategies. In Barni and Tondi (2013), the source-identification problem with known statistics is modelled as a zero-sum game played by the AD and the FA: the task of the FA is to perform classification through hypothesis testing, while the AD wants to carry out the attack in such a way that the FA's classification is deceived. Under the assumption that the analyst has limited resources, the authors derive the optimal strategies of the two players and prove that the corresponding profile is a Nash equilibrium of the game. A considerable step forward in this direction is made in Barni and Tondi (2012), where the known-statistics assumption is removed. This provides the appropriate theoretical framework for casting the problem faced in Barni et al. (2012), allowing us to derive the approach proposed in this paper.
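In the spirit of that formulation, the game can be sketched as below; the notation, the distortion constraint and the exact payoff are assumptions based on the description above, not a verbatim reproduction of Barni and Tondi's definitions.

```latex
% Hypothetical sketch of the source-identification game (notation assumed).
% Under H0 the observed sequence is drawn from the source of interest;
% the FA accepts H0 whenever the sequence falls inside \Lambda.
\begin{align*}
 &\text{FA's strategy: an acceptance region } \Lambda \subseteq \mathcal{X}^n
   \text{ such that } \Pr_{H_0}\{x^n \notin \Lambda\} \le P_{fa},\\
 &\text{AD's strategy: an attack map } g(\cdot)
   \text{ such that } d\big(y^n, g(y^n)\big) \le n D_{\max},\\
 &\text{payoff to the AD (missed-detection probability): }
   u(\Lambda, g) = \Pr_{H_1}\{\, g(y^n) \in \Lambda \,\}.
\end{align*}
```

Being zero-sum, the FA's payoff is the negative of u(Λ, g), and the equilibrium profile is a saddle point of this function under the two constraints.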
