Benchmarking Steganalysis

Andrew D. Ker (Oxford University Computing Laboratory, UK)
Copyright © 2009 | Pages: 25
DOI: 10.4018/978-1-59904-869-7.ch013

Abstract

This chapter discusses how to evaluate the effectiveness of steganalysis techniques. In the steganalysis literature, numerous different methods are used to measure detection accuracy, with different authors using incompatible benchmarks. Thus it is difficult to make a fair comparison of competing steganalysis methods. This chapter argues that some of the choices for steganalysis benchmarks are demonstrably poor, either in statistical foundation or by over-valuing irrelevant areas of the performance envelope. Good choices of benchmark are highlighted, and simple statistical techniques demonstrated for evaluating the significance of observed performance differences. It is hoped that this chapter will make practitioners and steganalysis researchers better able to evaluate the quality of steganography detection methods.
Chapter Preview

Background

The terminology of steganography and steganalysis is now settled: the covert payload is embedded into a cover object, producing a stego-object. Details of the stego-system (the embedding and extraction methods) are not relevant to this chapter, but it is generally assumed that the sender and recipient share knowledge of an embedding key, and that the recipient does not have access to the original cover object. The communicating parties’ enemy is the steganalyst (often referred to as a Warden), and this is the role we take in this work: we are given steganalysis methods that try to determine whether an object is an innocent cover or a payload-carrying stego-object. Usually, a steganalysis method is specific to a particular embedding method and type of cover media.
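The steganalyst's task is thus a binary decision: cover or stego. One common way to benchmark such a detector (the chapter surveys several, and this sketch is an illustration rather than the chapter's specific recommendation) is the minimum average decision error, taken over all detection thresholds, between the false-alarm rate on covers and the missed-detection rate on stego-objects. A minimal sketch, assuming the detector outputs a score where higher values indicate "stego":

```python
def min_average_error(cover_scores, stego_scores):
    """Minimum average decision error: min over thresholds t of
    (P_FA(t) + P_MD(t)) / 2, where P_FA is the false-alarm rate on
    covers and P_MD the missed-detection rate on stego-objects.

    Assumes higher scores indicate 'stego'. Hypothetical helper for
    illustration; 0.5 corresponds to a detector no better than chance,
    0.0 to perfect separation.
    """
    best = 0.5  # classifying everything as 'cover' achieves 0.5
    for t in sorted(set(cover_scores) | set(stego_scores)):
        p_fa = sum(s >= t for s in cover_scores) / len(cover_scores)
        p_md = sum(s < t for s in stego_scores) / len(stego_scores)
        best = min(best, (p_fa + p_md) / 2)
    return best

# Perfectly separated scores give error 0.0; overlapping scores give more.
print(min_average_error([0.1, 0.2, 0.3], [0.7, 0.8, 0.9]))  # 0.0
print(min_average_error([0.1, 0.6], [0.4, 0.9]))            # 0.25
```

This single-number summary weights false alarms and missed detections equally; as the abstract notes, the choice of benchmark matters, and measures that over-value irrelevant parts of the performance envelope can mislead comparisons.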
