- Front Matter: Volume 6819
- Steganography I
- Theoretical Methods
- Physical Media
- Forensics
- Audio and Video I
- Biometrics
- Applications
- Audio and Video II
- Steganalysis
- Embedding
Front Matter: Volume 6819
This PDF file contains the front matter associated with SPIE
Proceedings Volume 6819, including the Title Page, Copyright
information, Table of Contents, Introduction, and the
Conference Committee listing.
Steganography I
Influence of embedding strategies on security of steganographic methods in the JPEG domain
In this paper, we study how specific design principles and elements of steganographic schemes for the JPEG
format influence their security. Our goal is to shed some light on how the choice of the embedding operation and
domain, adaptive selection channels, and syndrome coding influence statistical detectability. In the experimental
part of this paper, the detectability is evaluated using a state-of-the-art blind steganalyzer and the results are
contrasted with several ad hoc detectability measures, such as the embedding distortion. We also report the
first results of our steganalysis of the recently proposed YASS algorithm and compare its security to other
steganographic methods for the JPEG format.
WLAN steganography revisited
Two different approaches to using a sequence of packets of the IEEE 802.11 (WLAN) protocol as cover for a steganographic communication can be found in the literature: in 2003, Krzysztof Szczypiorski introduced a method constructing a hidden channel that uses deliberately corrupted WLAN packets for communication. In 2006, Kraetzer et al. introduced a WLAN steganography approach that works without generating corrupted network packets. This latter approach, with its hidden storage channel scenario (SCI) and its timing channel based scenario (SCII), is reconsidered here. Fixed parameter settings limiting SCI's capabilities in the implementation (already introduced in 2006) motivated an enhancement. The new implementation of SCI increases the capacity while at the same time improving the reliability and decreasing the detectability in comparison to the work described in 2006. The timing channel based approach SCII from 2006 is in this paper replaced by a completely new design based on the use of WLAN Access Point addresses for synchronization and payload transmission. This new design now allows a comprehensive practical evaluation of the implementation and of the scheme, which was not possible with the original SCII. The test results for both enhanced approaches are summarised and compared in terms of detectability, capacity and reliability.
Steganographic strategies for a square distortion function
Recent results on the information theory of steganography suggest, and under some conditions prove, that the detectability of payload is proportional to the square of the number of changes caused by the embedding. Assuming that result in general, this paper examines the implications for an embedder when a payload is to be spread amongst multiple cover objects. A number of variants are considered: embedding with and without adaptive source coding, in uniform and nonuniform covers, and embedding in both a fixed number of covers (so-called batch steganography) as well as establishing a covert channel in an infinite stream (sequential steganography, studied here for the first time). The results show that steganographic capacity is sublinear, and strictly asymptotically greater in the case of a fixed batch than an infinite stream. In the former it is possible to describe optimal embedding strategies; in the latter the situation is much more complex, with a continuum of strategies which approach the unachievable asymptotic optimum.
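The square relationship the abstract builds on can be stated compactly; the formula below is a paraphrase of the premise, not taken from the paper itself. If embedding makes $c$ changes in a cover of $n$ samples, detectability behaves as

```latex
D \;\propto\; \frac{c^2}{n},
```

so keeping $D$ below a fixed detection threshold permits only $c = O(\sqrt{n})$ changes, which is why the capacity conclusions stated in the abstract are sublinear in the cover size.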
Revisiting weighted stego-image steganalysis
This paper revisits the steganalysis method involving a Weighted Stego-Image (WS) for estimating LSB replacement
payload sizes in digital images. It suggests new WS estimators, upgrading the method's three components:
cover pixel prediction, least-squares weighting, and bias correction. Wide-ranging experimental results (over two
million total attacks) based on images from multiple sources and pre-processing histories show that the new
methods produce greatly improved accuracy, to the extent that they outperform even the best of the structural
detectors, while avoiding their high complexity. Furthermore, specialised WS estimators can be derived
for detection of sequentially-placed payload: they offer levels of accuracy orders of magnitude better than their
competitors.
Theoretical Methods
Security analysis of robust perceptual hashing
In this paper we considered the problem of security analysis of robust perceptual hashing in an authentication
application. The main goal of our analysis was to estimate the amount of trial effort needed by an attacker, who is
acting within the Kerckhoffs security principle, to reveal a secret key. For this purpose, we proposed to use
Shannon equivocation that provides an estimate of complexity of the key search performed based on all available
prior information and presented its application to security evaluation of particular robust perceptual hashing
algorithms.
A low-rate fingerprinting code and its application to blind image fingerprinting
In fingerprinting, a signature, unique to each user, is embedded in each distributed copy of a multimedia content,
in order to identify potential illegal redistributors. This paper investigates digital fingerprinting problems
involving millions of users and a handful of colluders. In such problems the rate of the fingerprinting code is
often well below fingerprinting capacity, and the use of codes with large minimum distance emerges as a natural
design. However, optimal decoding is a formidable computational problem. We investigate a design based on
a Reed-Solomon outer code modulated onto an orthonormal constellation, and the Guruswami-Sudan decoding
algorithm. We analyze the potential and limitations of this scheme and assess its performance by means of
Monte-Carlo simulations. In the second part of this paper, we apply this scheme to a blind image fingerprinting
problem, using a linear cancellation technique for embedding in the wavelet domain. Dramatic improvements
are obtained over previous blind image fingerprinting algorithms.
Improved lower bounds on embedding distortion in information hiding
Previous steganographic algorithms result in modifications of the image statistics, especially the histogram of
the coefficients. We propose a method to compensate for the histogram modification due to embedding. The
new algorithm estimates the modification before the embedding process starts and modifies the histogram
in advance so that the modification due to embedding is less noticeable. We have implemented our methods in
Java and performed extensive experiments with them. The experimental results show that our new
method improves our previous steganographic algorithms by decreasing distortion and histogram modification.
Asymptotically optimum embedding strategy for one-bit watermarking under Gaussian attacks
The problem of asymptotically optimum watermark detection and embedding has been addressed in a recent
paper by Merhav and Sabbag where the optimality criterion corresponds to the maximization of the false negative
error exponent for a fixed false positive error exponent. In particular Merhav and Sabbag derive the optimum
detection rule under the assumption that the detector relies on the second order statistics of the received signal
(universal detection under limited resources), however the optimum embedding strategy in the presence of attacks
and a closed formula for the negative error exponents are not available. In this paper we extend the analysis by
Merhav and Sabbag, by deriving the optimum embedding strategy under Gaussian attacks and the corresponding
false negative error exponent. The improvements with respect to previously proposed embedders are shown by
means of plots.
A high-rate fingerprinting code
In fingerprinting, a signature, unique to each user, is embedded in each distributed copy of a multimedia content,
in order to identify potential illegal redistributors. As an alternative to the vast majority of fingerprinting codes
built upon error-correcting codes with a high minimum distance, we propose the construction of a random-like
fingerprinting code, intended to operate at rates close to fingerprinting capacity. For such codes, the notion
of minimum distance has little relevance. As an example, we present results for a length 288,000 code that
can accommodate 33 million users and 50 colluders against the averaging attack. The encoding is done
by interleaving the users' identifying bitstrings and encoding them multiple times with recursive systematic
convolutional codes. The decoding is done in two stages. The first stage outputs a small set of possible colluders
using a bank of list Viterbi decoders. The second stage prunes this set using correlation decoding. We study
this scheme and assess its performance through Monte-Carlo simulations. The results show that at rates ranging
from 30% to 50% of capacity, we still have a low error probability (e.g. 1%).
Physical Media
Analysis of physical unclonable identification based on reference list decoding
In this paper we advocate a new approach to item identification based on physical unclonable features. Being
unique characteristics of an item, these features represent a kind of unstructured random codebook that links
the identification problem to digital communications via composite hypothesis testing. Despite the obvious
similarity, this problem is significantly different in that a security constraint prohibits the disclosure of the entire
codebook at the identification stage. Besides this, complexity, memory storage and universality constraints
should be taken into account for databases with several hundred million entries. Therefore, we attempt to find
a trade-off between performance, security, memory storage and universality constraints. A practical suboptimal
method is considered based on our reference list decoding (RLD) framework. Simulation results are presented
to demonstrate and support the theoretical findings.
Data embedding in hardcopy images via halftone-dot orientation modulation
The principal challenge in hardcopy data hiding is achieving robustness to the print-scan process. Conventional
robust hiding schemes are not well-suited because they do not adapt to the print-scan distortion channel, and hence are fundamentally limited in a detection theoretic sense. We consider data embedding in images printed with clustered dot halftones. The input to the print-scan channel in this scenario is a binary halftone image, and hence the distortions are also intimately tied to the nature of the halftoning algorithm employed. We propose a new framework for hardcopy data hiding based on halftone dot orientation modulation. We develop analytic halftone threshold functions that generate elliptically shaped halftone dots in any desired orientation. Our hiding strategy then embeds a binary symbol as a particular choice of the orientation. The orientation is identified at the decoder via statistically motivated moments following appropriate global and local synchronization to address the geometric distortion introduced by the print-scan channel. A probabilistic model of the print-scan process, which conditions received moments on input orientation, allows for Maximum Likelihood (ML) optimal decoding. Our method bears similarities to the paradigms of informed coding and QIM, but also makes departures from classical results in that constant and smooth image areas are better suited for embedding via our scheme as opposed to busy or "high entropy" regions. Data extraction is automatically done from a scanned hardcopy, and results indicate a significantly higher embedding rate than existing methods, a majority of which rely on visual or manual detection.
Secure surface identification codes
This paper introduces an identification framework for random microstructures of material surfaces. These
microstructures represent unique fingerprints that can be used to track and trace an item as well as for
anti-counterfeiting. We first consider the architecture for mobile phone-based item identification and then introduce
a practical identification algorithm enabling fast searching in large databases. The proposed algorithm is
based on reference list decoding. The link to digital communications and robust perceptual hashing is shown. We
consider a practical construction of reference list decoding, which balances computational complexity, security,
memory storage and performance requirements. The efficiency of the proposed algorithm is demonstrated on
experimental data obtained from natural paper surfaces.
Forensics
Camera identification from cropped and scaled images
In this paper, we extend our camera identification technology based on sensor noise to a more general setting when
the image under investigation has been simultaneously cropped and scaled. The sensor fingerprint detection is
formulated using hypothesis testing as a two-channel problem and a detector is derived using the generalized
likelihood ratio test. A brute force search is proposed to find the scaling factor which is then refined in a detailed
search. The cropping parameters are determined from the maximum of the normalized cross-correlation between two
signals. The accuracy and limitations of the proposed technique are tested on images that underwent a wide range of
cropping and scaling, including images that were acquired by digital zoom. Additionally, we demonstrate that sensor
noise can be used as a template to reverse-engineer in-camera geometrical processing as well as recover from later
geometrical transformations, thus offering a possible application for re-synchronizing in digital watermark detection.
On the detectability of local resampling in digital images
In Ref. 15, we took a critical view on the reliability of forensic techniques as tools to generate evidence of
authenticity for digital images and presented targeted attacks against the state-of-the-art resampling detector by
Popescu and Farid. We demonstrated that a correct detection of manipulations can be impeded by resampling
with geometric distortion. However, we constrained our experiments to global image transformations. In a more
realistic scenario, most forgeries will make use of local resampling operations, e.g., when pasting a previously
scaled or rotated object. In this paper, we investigate the detectability of local resampling without and with
geometric distortion and study the influence of the size of both the tampered and the analyzed image region.
Although the detector might fail to reveal the characteristic periodic resampling artifacts, a forensic investigator
can benefit from the generally increased correlation in resampled image regions. We present an adapted targeted
attack, which allows for an increased degree of undetectability in the case of local resampling.
Scanner identification with extension to forgery detection
Digital images can be obtained through a variety of sources including digital cameras and scanners. With rapidly
increasing functionality and ease of use of image editing software, determining authenticity and identifying forged
regions, if any, is becoming crucial for many applications. This paper presents methods for authenticating and
identifying forged regions in images that have been acquired using flatbed scanners. The methods are based on
using statistical features of imaging sensor pattern noise as a fingerprint for the scanner. An anisotropic local
polynomial estimator is used for obtaining the noise patterns. An SVM classifier is trained on statistical
features of the pattern noise to classify smaller blocks of an image. This feature vector based approach is shown
to identify the forged regions with high accuracy.
Individuality evaluation for paper based artifact-metrics using transmitted light image
Artifact-metrics is an automated method of authenticating artifacts based on a measurable intrinsic characteristic.
Intrinsic characteristics, such as microscopic random patterns made during the manufacturing process, are very difficult to
copy. Since the fiber distribution of paper is random, a transmitted light image of that distribution can be used for
artifact-metrics. Little is known about the individuality of the transmitted light image, although it is an important requirement for
intrinsic characteristic artifact-metrics. Measuring individuality requires that the intrinsic characteristic of each artifact
significantly differs, so having sufficient individuality can make an artifact-metric system highly resistant to brute force
attack. Here we investigate the influence of paper category, matching size of sample, and image-resolution on the
individuality of a transmitted light image of paper through a matching test using those images. More concretely, we
evaluate FMR/FNMR curves by calculating similarity scores, using correlation coefficients between pairs
of scanner input images, and evaluate the individuality of paper by way of the EER estimated with a probabilistic measure through a
matching method based on line segments, which can localize the influence of rotation gaps of a sample in the case of
large matching sizes. As a result, we found that the transmitted light image of paper has sufficient individuality.
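The similarity scoring described above, correlation coefficients between pairs of scanner input images, amounts to a Pearson correlation between aligned scans. A minimal sketch, with synthetic arrays standing in for real transmitted-light scans (the data and function name are illustrative, not the paper's):

```python
import numpy as np

def similarity(a, b):
    """Pearson correlation coefficient between two flattened scans."""
    a = np.asarray(a, dtype=float).ravel()
    b = np.asarray(b, dtype=float).ravel()
    a -= a.mean()
    b -= b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Two scans of the same sheet should score near 1; unrelated sheets near 0.
rng = np.random.default_rng(0)
sheet = rng.normal(size=(64, 64))                 # fiber pattern of one sheet
rescan = sheet + 0.1 * rng.normal(size=(64, 64))  # same sheet, sensor noise
other = rng.normal(size=(64, 64))                 # a different sheet
```

Thresholding such scores over genuine and impostor pairs is what yields the FMR/FNMR curves and the EER reported in the abstract.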
Camera identification from printed images
In this paper, we study the problem of identifying digital camera sensor from a printed picture. The sensor is identified
by proving the presence of its Photo-Response Non-Uniformity (PRNU) in the scanned picture using camera ID
methods robust to cropping and scaling. Two kinds of prints are studied. The first are postcard size (4" by 6") pictures
obtained from common commercial printing labs. These prints are always cropped to some degree. In the proposed
identification, a brute force search for the scaling ratio is deployed while the position of cropping is determined from
the cross-correlation surface. Detection success mostly depends on the picture content and the quality of the PRNU
estimate. Prints obtained using desktop printers form the second kind of pictures investigated in this paper. Their
identification is complicated by geometric distortion due to imperfections in the paper feed. Removing this
distortion is part of the identification procedure. From experiments, we determine the range of conditions under which
reliable sensor identification is possible. The most influential factors in identifying the sensor from a printed picture
are the accuracy of angular alignment when scanning, printing quality, paper quality, and size of the printed picture.
Audio and Video I
Toward robust watermarking of scalable video
This paper pulls together recent advances in scalable video coding and protection and investigates the impact on watermarking. After surveying the literature on the protection of scalable video via cryptographic and watermarking means, the robustness of a simple wavelet-based video watermarking scheme against combined bit stream adaptations performed on JSVM (the H.264/MPEG-4 AVC scalable video coding extension) and MC-EZBC scalable video bit streams is examined.
The video watermarking container: efficient real-time transaction watermarking
When transaction watermarking is used to secure sales in online shops by embedding
transaction-specific watermarks, the major challenge is embedding efficiency:
maximum speed at minimal workload. This is true for all types of media. Video
transaction watermarking presents a double challenge. Video files not only are larger
than for example music files of the same playback time. In addition, video
watermarking algorithms have a higher complexity than algorithms for other types of
media. Therefore online shops that want to protect their videos by transaction
watermarking are faced with the problem that their servers need to work harder and
longer for every sold medium in comparison to audio sales. In the past, many
algorithms responded to this challenge by reducing their complexity. But this usually
results in a loss of either robustness or transparency.
This paper presents a different approach. The container technology separates
watermark embedding into two stages: a preparation stage and a finalization stage.
In the preparation stage, the video is divided into embedding segments. For each
segment, one copy marked with "0" and another one marked with "1" is created. This
stage is computationally expensive but only needs to be done once. In the finalization
stage, the watermarked video is assembled from the embedding segments according
to the watermark message. This stage is very fast and involves no complex
computations. It thus allows efficient creation of individually watermarked video files.
Robust audio hashing for audio authentication watermarking
Current systems and protocols based on cryptographic methods for integrity and authenticity verification of media
data do not distinguish between legitimate signal transformation and malicious tampering that manipulates the
content. Furthermore, they usually provide no localization or assessment of the relevance of such manipulations
with respect to human perception or semantics. We present an algorithm for a robust message authentication code
in the context of content fragile authentication watermarking to verify the integrity of audio recordings by means
of robust audio fingerprinting. Experimental results show that the proposed algorithm provides both a high level
of distinction between perceptually different audio data and a high robustness against signal transformations that
do not change the perceived information. Furthermore, it is well suited for the integration in a content-based
authentication watermarking system.
Biometrics
Comparison of compression algorithms' impact on iris recognition accuracy II: revisiting JPEG
The impact of using different lossy compression algorithms on the recognition accuracy of iris recognition systems
is investigated. In particular, we consider the general purpose still image compression algorithms JPEG,
JPEG2000, SPIHT, and PRVQ and assess their impact on the ROC of two different iris recognition systems when
applying compression to iris sample data.
Biometric hashing for handwriting: entropy-based feature selection and semantic fusion
Some biometric algorithms suffer from the problem of using a great number of features extracted from the raw
data. This often results in feature vectors of high dimensionality and thus high computational complexity. However, in
many cases subsets of features do not contribute, or contribute only little, to the correct classification of biometric
algorithms. The process of choosing more discriminative features from a given set is commonly referred to as feature
selection. In this paper we present a study on feature selection for an existing biometric hash generation algorithm for the
handwriting modality, which is based on the strategy of entropy analysis of single components of biometric hash vectors,
in order to identify and suppress elements carrying little information. To evaluate the impact of our feature selection
scheme on the authentication performance of our biometric algorithm, we present an experimental study based on data of
86 users. Besides discussing common biometric error rates such as Equal Error Rates, we suggest a novel measurement
to determine the reproduction rate probability for biometric hashes. Our experiments show that, while the feature set size
may be significantly reduced by 45% using our scheme, there are only marginal changes both in the results of a verification
process and in the reproducibility of biometric hashes. Since multi-biometrics is a recent topic, we additionally
carry out a first study on a pairwise multi-semantic fusion based on reduced hashes and analyze it by the introduced
reproducibility measure.
Realization of correlation attack against the fuzzy vault scheme
User privacy and template security are major concerns in the use of biometric systems. These are serious concerns
based on the fact that, once compromised, biometric traits cannot be canceled or reissued. The Fuzzy Vault
scheme has emerged as a promising method to alleviate the template security problem. The scheme is based on
binding the biometric template with a secret key and scrambling it with a large amount of redundant data, such
that it is computationally infeasible to extract the secret key without possession of the biometric trait. It was
recently claimed that the scheme is susceptible to correlation based attacks that assume the availability of two
fuzzy vaults created using the same biometric data (e.g. two impressions of the same fingerprint) and suggest
that correlating them would reveal the biometric data hidden inside.
In this work, we implemented the fuzzy vault scheme using fingerprints and performed correlation attacks
against a database of 400 fuzzy vaults (200 matching pairs). Given two matching vaults, we could successfully
unlock 59% of them within a short time. Furthermore, it was possible to link an unknown vault to a short list
containing its matching pair, for 41% of all vaults. These results prove the claim that the fuzzy vault scheme
without additional security measures is indeed vulnerable to correlation attacks.
Error exponent analysis of person identification based on fusion of dependent/independent modalities: multiple hypothesis testing case
In this paper we analyze the performance limits of multimodal biometric identification systems in the multiple
hypothesis testing formulation. For the sake of tractability, we approximate the performance of the actual system
by a set of pairwise binary tests. We point out that the error exponent that can be achieved for such
an approximation is limited by the worst pairwise Chernoff distance between alternative hypothesis prior models.
We consider the impact of the inter-modal dependencies on the attainable performance measure and demonstrate
that, contrary to the binary multimodal hypothesis testing framework, the expected performance gain from
fusion of independent modalities no longer serves as a lower bound on the gain one can expect
from multimodal fusion.
Bridging biometrics and forensics
This paper is a survey on biometrics and forensics, especially on the techniques and applications of face recognition in forensics. This paper describes the differences and connections between biometrics and forensics, and bridges each other by formulating the conditions when biometrics can be applied in forensics. Under these conditions, face recognition, as a non-intrusive and non-contact biometrics, is discussed in detail as an illustration of applying biometrics in forensics. The discussion on face recognition covers different approaches, feature extractions, and decision procedures. The advantages and limitations of biometrics in forensic applications are also addressed.
Security issues of Internet-based biometric authentication systems: risks of Man-in-the-Middle and BioPhishing on the example of BioWebAuth
Besides the optimization of biometric error rates, the overall security system performance with respect to intentional security
attacks plays an important role for biometric enabled authentication schemes. As traditionally most user authentication
schemes are knowledge and/or possession based, we first present a methodology for a security analysis of
Internet-based biometric authentication systems by enhancing known methodologies such as the CERT attack taxonomy
with a more detailed view on the OSI model. Secondly, as proof of concept, the guidelines extracted from this
methodology are strictly applied to an open source Internet-based biometric authentication system (BioWebAuth). As
case studies, two exemplary attacks, based on the security leaks found, are investigated and the attack performance is
presented to show that, in biometric authentication schemes, security issues need to be addressed alongside
biometric error performance tuning. Finally, some design recommendations are given in order to ensure a minimum
security level.
Applications
Anticollusion watermarking of 3D meshes by prewarping
A novel pre-warping technique for 3D meshes is presented to prevent collusion attacks on fingerprinted 3D
models. By extending a similar technique originally proposed for still images, the surface of watermarked 3D
meshes is randomly and imperceptibly pre-distorted to protect embedded fingerprints against collusion attacks.
The peculiar problems set by the 3D nature of the data are investigated and solved by preserving the perceptual
quality of warped meshes. The proposed approach is independent of the chosen fingerprinting system. The
proposed algorithm can be implemented inside a watermarking chain, as an independent block, before performing
features extraction and watermark embedding. It follows that the detection algorithm is not influenced by the
anti-collusion block. The application of different collusion strategies has revealed the difficulty for colluders to
inhibit watermark detection while ensuring an acceptable quality of the attacked model.
In-theater piracy: finding where the pirate was
Pirate copies of feature films are proliferating on the Internet. DVD rip or screener recording methods involve the
duplication of officially distributed media whereas 'cam' versions are illicitly captured with handheld camcorders in
movie theaters. Several complementary multimedia forensic techniques such as copy identification, forensic tracking
marks or sensor forensics can deter those clandestine recordings. In the case of camcorder capture in a theater, the image
is often geometrically distorted, the main artifact being the trapezoidal effect, also known as 'keystoning', due to a
capture viewing axis not being perpendicular to the screen. In this paper we propose to analyze the geometric distortions
in a pirate copy to determine the camcorder's viewing angle relative to the screen perpendicular and derive the approximate
position of the pirate in the theater. The problem is first of all geometrically defined, by describing the general projection
and capture setup, and by identifying unknown parameters and estimates. The estimation approach based on the
identification of an eight-parameter homographic model of the 'keystoning' effect is then presented. A validation
experiment based on ground truth collected in a real movie theater is reported, and the accuracy of the proposed method
is assessed.
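The core of the estimation step, fitting an eight-parameter homography to screen-to-image point correspondences, can be sketched with the standard direct linear transform (DLT) least-squares method. The function names, the example matrix H_true and the coordinates below are illustrative assumptions, not the paper's data:

```python
import numpy as np

def estimate_homography(src, dst):
    """Least-squares DLT estimate of the 3x3 homography H (h33 fixed to 1)
    mapping src points to dst points; needs >= 4 correspondences."""
    rows, rhs = [], []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        rows.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        rhs += [u, v]
    h, *_ = np.linalg.lstsq(np.array(rows, float), np.array(rhs, float), rcond=None)
    return np.append(h, 1.0).reshape(3, 3)

def apply_h(H, pts):
    """Apply homography H to an iterable of (x, y) points."""
    out = []
    for x, y in pts:
        w = H[2, 0] * x + H[2, 1] * y + H[2, 2]
        out.append(((H[0, 0] * x + H[0, 1] * y + H[0, 2]) / w,
                    (H[1, 0] * x + H[1, 1] * y + H[1, 2]) / w))
    return out

# Hypothetical 'keystoned' capture of a 640x480 screen region.
H_true = np.array([[1.05, 0.08, 12.0],
                   [0.03, 0.97, -5.0],
                   [1e-4, 2e-4, 1.0]])
screen = [(0, 0), (640, 0), (0, 480), (640, 480), (320, 240)]
capture = apply_h(H_true, screen)
H_est = estimate_homography(screen, capture)
```

Decomposing the recovered homography into the camcorder viewing angles, and from those the seat position, is the paper's contribution and is not sketched here.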
A theoretical analysis of spatial/temporal modulation-based systems for prevention of illegal recordings in movie theaters
This document proposes a convenient theoretical analysis of light modulation-based systems for prevention
of illegal recordings in movie theaters. Although the work presented in this paper does not solve the problem of
camcorder piracy, people in the security community may find it interesting for further work in this area.
Toward DRM for 3D geometry data
Computationally efficient encryption techniques for polygonal mesh data are proposed which exploit the prioritization
of data in progressive meshes. Significant reduction of computational demand can be achieved as
compared to full encryption, but it turns out that different techniques are required to support both privacy-focussed
applications and try-and-buy scenarios.
Audio and Video II
Establishing target track history by digital watermarking
Automatic tracking of targets in image sequences is an important capability. Although effective algorithms exist
to implement frame-to-frame registration, connecting the tracks across frames is of recent interest. The current
approach to this problem is to build a spatio-temporal graph. In this work we argue that the same rationale
used to fingerprint multimedia content for tracing purposes can be used to follow targets across frames. Riding
on top of a tracker, tracked targets receive unique watermarks which propagate throughout the video. These
watermarks can then be searched for and used in a newly defined target adjacency matrix. The properties of
this matrix establish how target sequencing evolves across frames. The watermarked video is self-contained
and does not require building and maintaining a spatio-temporal graph.
MPEG recompression detection based on block artifacts
With sophisticated video editing technologies, it is becoming increasingly easy to tamper with digital video without
leaving visual clues. One of the common tampering operations on video is to remove some frames and then
re-encode the resulting video. In this paper, we propose a new method for detecting this type of tampering by
exploring the temporal patterns of the block artifacts in video sequences. We show that MPEG compression
introduces different block artifacts into various types of frames and that the strength of the block artifacts as
a function over time has a regular pattern for a given group of pictures (GOP) structure. When some frames
are removed from an MPEG video file and the file is then recompressed, the block artifacts introduced by the
previous compression would remain and affect the average of block artifact strength of the recompressed one in
such a way that depends on the number of deleted frames and the type of GOP used previously. We propose a
feature curve to reveal the compression history of an MPEG video file with a given GOP structure, and use it as
evidence to detect tampering. Experimental results evaluated on common video benchmark clips demonstrate
the effectiveness of the proposed method.
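A minimal illustration of the kind of block-artifact measure involved (a toy estimator, not the paper's): compare luminance jumps across 8x8 block boundaries with jumps inside blocks.

```python
import numpy as np

def block_artifact_strength(frame, block=8):
    """Simple blockiness measure: mean absolute luminance jump across
    block boundaries minus the same statistic at interior positions."""
    f = frame.astype(float)
    dh = np.abs(np.diff(f, axis=1))  # horizontal neighbour differences
    dv = np.abs(np.diff(f, axis=0))  # vertical neighbour differences
    col_boundary = (np.arange(dh.shape[1]) % block) == block - 1
    row_boundary = (np.arange(dv.shape[0]) % block) == block - 1
    boundary = dh[:, col_boundary].mean() + dv[row_boundary, :].mean()
    interior = dh[:, ~col_boundary].mean() + dv[~row_boundary, :].mean()
    return boundary - interior
```

Evaluating such a statistic frame by frame yields the temporal curve whose periodicity reflects the GOP structure.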
Cover signal specific steganalysis: the impact of training on the example of two selected audio steganalysis approaches
The main goals of this paper are to show the impact of the basic assumptions for the cover channel characteristics as well
as the impact of different training/testing set generation strategies on the statistical detectability of exemplary chosen
audio hiding approaches known from steganography and watermarking. Here we have selected five exemplary
steganography algorithms and four watermarking algorithms. The channel characteristics for two chosen audio
cover channels (an application specific exemplary scenario of VoIP steganography and universal audio steganography)
are formalised and their impact on decisions in the steganalysis process, especially on the strategies applied for
training/testing set generation, is shown. Following the assumptions on the cover channel characteristics, either
cover-dependent or cover-independent training and testing can be performed, using either correlated or non-correlated
training and test sets.
In comparison to previous work, additional frequency domain features are introduced for steganalysis and the
performance (in terms of classification accuracy) of Bayesian classifiers and multinomial logistic regression models is
compared with the results of SVM classification. We show that the newly implemented frequency domain features
increase the classification accuracy achieved in SVM classification. Furthermore it is shown on the example of VoIP
steganalysis that channel-specific evaluation performs better than tests without focus on a specific channel (i.e.
universal steganalysis). A comparison of test results for cover dependent and independent training and testing shows that
the latter performs better for all nine algorithms evaluated here with the SVM-based classifier used.
Evaluation of robustness and transparency of multiple audio watermark embedding
As digital watermarking becomes an accepted and widely applied technology, a number of concerns regarding its
reliability in typical application scenarios come up. One important and often discussed question is the robustness of
digital watermarks against multiple embedding. This means that one cover is marked several times by various users
with the same watermarking algorithm but with different keys and different watermark messages. In our paper we discuss the
behavior of our PCM audio watermarking algorithm when applying multiple watermark embedding. This includes
evaluation of robustness and transparency. Test results for multiple hours of audio content ranging from spoken words to
music are provided.
Forensic watermarking and bit-rate conversion of partially encrypted AAC bitstreams
Electronic Music Distribution (EMD) is undergoing two fundamental shifts. The delivery over wired broadband
networks to personal computers is being replaced by delivery over heterogeneous wired and wireless networks,
e.g. 3G and Wi-Fi, to a range of devices such as mobile phones, game consoles and in-car players. Moreover,
restrictive DRM models bound to a limited set of devices are being replaced by flexible standards-based DRM
schemes and increasingly forensic tracking technologies based on watermarking. Success of these EMD services
will partially depend on scalable, low-complexity and bandwidth-efficient content protection systems.
In this context, we propose a new partial encryption scheme for Advanced Audio Coding (AAC) compressed
audio which is particularly suitable for emerging EMD applications. The scheme encrypts only the scale-factor
information in the AAC bitstream with an additive one-time-pad. This allows intermediate network nodes to
transcode the bitstream to lower data rates without accessing the decryption keys, by increasing the scale-factor
values and re-quantizing the corresponding spectral coefficients. Furthermore, the decryption key for each user
is customized such that the decryption process imprints the audio with a unique forensic tracking watermark.
This constitutes a secure, low-complexity watermark embedding process at the destination node, i.e. the player.
As opposed to server-side embedding methods, the proposed scheme lowers the computational burden on servers
and allows for network level bandwidth saving measures such as multi-casting and caching.
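The interplay of encryption, keyless transcoding and watermark-imprinting decryption can be caricatured with integer scale factors and an additive one-time pad. This is a toy model of the idea, not the actual AAC bitstream processing:

```python
import numpy as np

MOD = 256  # toy scale-factor alphabet

def encrypt(sf, pad):
    """Additive one-time pad on the scale factors only."""
    return (sf + pad) % MOD

def transcode(enc_sf, delta):
    """Network node lowers the bit rate by raising scale factors
    (coarser quantization) without any decryption key."""
    return (enc_sf + delta) % MOD

def decrypt(enc_sf, pad, wm=0):
    """Per-user key: decryption imprints the forensic watermark
    offset wm into the decoded scale factors."""
    return (enc_sf - pad + wm) % MOD
```

Because the pad is additive, a rate-reducing offset applied to the ciphertext survives decryption, and a per-user offset folded into the key leaves a tracking mark in the clear audio.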
Steganalysis
Estimation of primary quantization matrix for steganalysis of double-compressed JPEG images
A JPEG image is double-compressed if it underwent JPEG compression twice, each time with a different quantization
matrix but with the same 8 × 8 grid. Some popular steganographic algorithms (Jsteg, F5, OutGuess)
naturally produce such double-compressed stego images. Because double-compression may significantly change
the statistics of DCT coefficients, it negatively influences the accuracy of some steganalysis methods developed
under the assumption that the stego image was only single-compressed. This paper presents methods for detection
of double-compression in JPEGs and for estimation of the primary quantization matrix, which is lost during
recompression. The proposed methods are essential for construction of accurate targeted and blind steganalysis
methods for JPEG images, especially those based on calibration. Both methods rely on support vector machine
classifiers with feature vectors formed by histograms of low-frequency DCT coefficients.
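The double-quantization artifact those feature vectors capture can be sketched in one dimension: quantizing a DCT mode with step q1 and then q2 leaves characteristic gaps in the histogram of the recompressed coefficients. A toy model, not the paper's detector:

```python
import numpy as np

def double_quantize(coeffs, q1, q2):
    """Model one DCT mode under double JPEG compression:
    quantize with step q1, dequantize, then quantize with step q2."""
    return np.round(np.round(coeffs / q1) * q1 / q2)

def dct_histogram(values, rng_max=15):
    """Histogram of quantized values over [-rng_max, rng_max],
    the kind of per-mode feature fed to an SVM classifier."""
    h, _ = np.histogram(values, bins=np.arange(-rng_max - 0.5, rng_max + 1.5))
    return h
```

The positions of the empty bins depend on the ratio q1/q2, which is what makes the primary quantization step recoverable from such histograms.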
Textural features based universal steganalysis
This paper takes the task of image steganalysis as a texture classification problem. The impact of steganography to an
image is viewed as the alteration of image texture at a fine scale. Our observation and analysis suggest that stochastic
textures are more likely to appear in a stego image than in a cover image. By developing a feature extraction
technique previously used in texture classification, we propose a set of universal steganalytic features, which are
extracted from the normalized histograms of the local linear transform coefficients of an image. Extensive experiments
are conducted to compare our proposed feature set with some existing universal steganalytic feature sets on
gray-scale images by using Fisher Linear Discriminant (FLD). Some classical non-adaptive spatial domain
steganographic algorithms, as well as some newly presented adaptive spatial domain steganographic algorithms that have
never been reported to be broken by any universal steganalytic algorithm, are used for benchmarking. We also report the
detection performance on JPEG steganography and JPEG2000 steganography. The comparative experimental results
show that our proposed feature set is very effective on a hybrid image database.
Isotropy-based steganalysis in multiple least significant bits
In this paper, we extend isotropy-based LSB steganalysis to detect the existence of a hidden message and estimate
its length when the embedding is performed using both of the two distinct embedding paradigms in one or
more LSBs. The extended method is based on the analysis of image isotropy, which is usually affected by secret
message embedding. The proposed method is a general framework because it encompasses a more general case of LSB
steganalysis, namely embedding in multiple LSBs under both embedding paradigms. Compared
with our previously proposed weighted stego-image based method, the detection accuracy is high. Experimental results
and theoretical verification show that this framework is effective for LSB steganalysis.
Nonparametric steganalysis of QIM data hiding using approximate entropy
This paper proposes a nonparametric steganalysis method for quantization index modulation (QIM) based steganography. The proposed steganalysis method uses irregularity (or randomness) in the test-image to distinguish between the cover- and the stego-image. We have shown that plain-quantization (quantization without message embedding) induces regularity in the resulting quantized-image; whereas message embedding using QIM increases irregularity in the resulting QIM-stego image. Approximate entropy, an algorithmic entropy measure, is used to quantify irregularity in the test-image. Simulation results presented in this paper show that the proposed
steganalysis technique can distinguish between the cover- and the stego-image with low false rates (i.e. Pfp < 0.1
& Pfn < 0.07 for dither modulation stego and Pfp < 0.12 & Pfn < 0.002 for QIM-stego).
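Approximate entropy itself is easy to state: it measures how often length-m templates that match within a tolerance r still match when extended to length m+1. A straightforward, unoptimized sketch:

```python
import numpy as np

def approximate_entropy(x, m=2, r=None):
    """Approximate entropy ApEn(m, r) of a 1-D sequence; lower values
    indicate more regularity, higher values more irregularity."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()  # common default tolerance
    def phi(m):
        n = len(x) - m + 1
        templates = np.array([x[i:i + m] for i in range(n)])
        # Chebyshev distance between all template pairs
        d = np.max(np.abs(templates[:, None, :] - templates[None, :, :]), axis=2)
        c = (d <= r).mean(axis=1)
        return np.log(c).mean()
    return phi(m) - phi(m + 1)
```

Applied to image data, a regular (plainly quantized) signal yields a lower ApEn than one whose quantization cells were perturbed by message embedding.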
Steganalysis-aware steganography: statistical indistinguishability despite high distortion
We consider the interplay between the steganographer and the steganalyzer, and develop a steganalysis-aware framework
for steganography. The problem of determining a stego image is posed as a feasibility problem subject to constraints of data communication, imperceptibility, and statistical indistinguishability with respect to the steganalyzer's features. A stego image is then determined using set-theoretic feasible point estimation methods. The proposed framework is applied effectively to a state-of-the-art steganalysis method based on higher-order statistics (HOS). We first show that the steganographer can significantly reduce the classification performance of the steganalyzer by employing a statistical constraint during embedding, even though the image is highly distorted. We then show that the steganalyzer can develop a counter-strategy against the steganographer's action, gaining back some classification performance. This interchange represents an empirical iteration in the game between the steganographer and the steganalyzer. Finally, we consider mixture strategies to find the Nash equilibrium of the interplay.
Steganographic capacity estimation for the statistical restoration framework
In this paper we attempt to quantify the "active" steganographic capacity - the maximum rate at which data can be hidden, and correctly decoded, in a multimedia cover subject to noise/attack (hence - active), perceptual distortion criteria, and statistical steganalysis. Though work has been done in studying the capacity of data hiding as well as the rate of perfectly secure data hiding in noiseless channels, only very recently have all the constraints been considered together. In this work, we seek to provide practical estimates of steganographic capacity in natural images, undergoing realistic attacks, and using data hiding methods available today. We focus here on the capacity of an image data hiding channel characterized by the use of statistical restoration to satisfy the constraint of perfect security (under an i.i.d. assumption), as well as JPEG and JPEG-2000 attacks. Specifically we provide experimental results of the statistically secure hiding capacity on a set of several hundred images for hiding in a pre-selected band of frequencies, using the discrete cosine and wavelet transforms, where a perturbation of the quantized transform domain terms by ±1 using the quantization index modulation scheme, is considered to be perceptually transparent. Statistical security is with respect to the matching of marginal statistics of the quantized transform domain terms.
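The ±1 quantization index modulation perturbation referred to above can be sketched as follows (dither-free binary QIM; the step size stands in for the perceptual constraint):

```python
import numpy as np

def qim_embed(coeff, bit, step=2.0):
    """Quantization index modulation: snap the coefficient to the lattice
    selected by the bit (even multiples of step/2 for 0, odd for 1)."""
    offset = 0.0 if bit == 0 else step / 2.0
    return np.round((coeff - offset) / step) * step + offset

def qim_decode(coeff, step=2.0):
    """Decode by choosing the nearer of the two lattices."""
    d0 = np.abs(coeff - qim_embed(coeff, 0, step))
    d1 = np.abs(coeff - qim_embed(coeff, 1, step))
    return int(d1 < d0)
```

Decoding stays correct as long as the attack perturbs each coefficient by less than step/4, which is what makes the hiding "active" against bounded noise.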
Further study on YASS: steganography based on randomized embedding to resist blind steganalysis
We present further extensions of yet another steganographic scheme (YASS), a method based on embedding data in randomized locations so as to resist blind steganalysis. YASS is a JPEG steganographic technique that hides data in the discrete cosine transform (DCT) coefficients of randomly chosen image blocks. Continuing to focus on JPEG image steganography, we present, in this paper, a further study on YASS with the goal of improving the rate of embedding. The two main improvements presented in this paper are: (i) a method that randomizes the quantization matrix used on the transform domain coefficients, and (ii) an iterative hiding method that utilizes the fact that the JPEG "attack" that causes errors in the hidden bits is actually known to the encoder. We show that using both these approaches, the embedding rate can be increased while maintaining the same level of undetectability (as the original YASS scheme). Moreover, for the same embedding rate, the proposed steganographic schemes are more undetectable than the popular matrix embedding based F5 scheme, using features proposed by Pevny and Fridrich for blind steganalysis.
Embedding
Nested object watermarking: transparency and capacity evaluation
Annotation watermarking (also called caption or illustration watermarking) is a specific application of image watermarking in which supplementary information is embedded directly in the media, linking it to the media content so that it does not get separated from the media by non-malicious processing steps like image cropping or lossless compression. Nested object annotation watermarking (NOAWM) was recently introduced as a specialization within annotation watermarking for embedding hierarchical object relations in photographic images. In earlier work, several techniques for NOAWM have been suggested and have shown some domain-specific problems with respect to transparency (i.e. preciseness of annotation regions) and robustness (i.e. synchronization problems due to high-density, multiple watermarking), which are addressed in this paper. The first contribution of this paper is therefore a theoretical framework to characterize requirements and properties of previous art and to suggest a classification of known NOAWM schemes. The second aspect is the study of one specific transparency aspect, the preciseness of the spatial annotations preserved by NOAWM schemes, based on a new area-based quality measurement. Finally, the synchronization problems reported in earlier works are addressed. One possible solution is to use content-specific features of the image to support synchronization. We discuss various theoretical approaches based on, for example, visual hashes and image contouring, and present experimental results.
Reduced embedding complexity using BP message passing for LDGM codes
The application of low-density generator matrix (LDGM) codes combined with survey propagation (SP) in steganography seems advantageous, since it is possible to approximate the coset leader directly. Thus, large codeword lengths can be used, resulting in an embedding efficiency close to the theoretical upper bound. Since this approach is still quite complex, this paper deals with the application of belief propagation (BP) to LDGM codes in order to reduce the complexity of embedding while keeping the embedding efficiency constant.
A joint asymmetric watermarking and image encryption scheme
Here we introduce a novel watermarking paradigm designed to be both asymmetric, i.e., involving a private key
for embedding and a public key for detection, and commutative with a suitable encryption scheme, allowing
both to cipher watermarked data and to mark encrypted data without interfering with the detection process.
In order to demonstrate the effectiveness of the above principles, we present an explicit example where the
watermarking part, based on elementary linear algebra, and the encryption part, exploiting a secret random
permutation, are integrated in a commutative scheme.
Robust digital image watermarking in curvelet domain
A robust image watermarking scheme in the curvelet domain is proposed. The curvelet transform directly takes edges as the
basic representation element; it provides optimally sparse representations of objects along edges. The image is
partitioned into blocks, and the curvelet transform is applied to those blocks with strong edges. The watermark, consisting of a
pseudorandom sequence, is added to the significant curvelet coefficients. The embedding strength of the watermark is constrained by a just noticeable distortion (JND) model based on Barten's contrast sensitivity function. The developed JND model enables the highest possible amount of information hiding without compromising the quality of the data to be protected. The watermarks are blindly detected using a correlation detector. A scheme for detecting and recovering from geometric attacks is applied before watermark detection. The proposed scheme provides an accurate estimation of single and/or combined geometric distortions and relies on edge detection and the Radon transform. The threshold for watermark detection is determined by statistical analysis of the host signals and embedding schemes. Experiments show that the fidelity of the protected image is well maintained. The watermark embedded into curvelet coefficients provides high tolerance to severe image quality degradation and robustness against geometric distortions as well.
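The additive embedding and blind correlation detection at the core of such schemes can be sketched as follows, ignoring the curvelet transform and the JND model and using an illustrative fixed strength:

```python
import numpy as np

def embed(coeffs, wm, alpha=2.0):
    """Additive spread-spectrum embedding; alpha stands in for the
    JND-derived strength bound."""
    return coeffs + alpha * wm

def detect(coeffs, wm):
    """Blind correlation detector: mean correlation with the
    pseudorandom sequence, to be compared against a threshold."""
    return float(coeffs @ wm) / len(wm)
```

For a ±1 pseudorandom sequence, the statistic concentrates near alpha for a marked image and near zero otherwise, which is how a detection threshold can be set from the host statistics.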
A joint digital watermarking and encryption method
In this paper a joint watermarking and ciphering scheme for digital images is presented. Both operations are
performed in a key-dependent transform domain. The commutative property of the proposed method makes it possible to
cipher a watermarked image without interfering with the embedded signal, or to watermark an encrypted image
still allowing a perfect deciphering. Furthermore, the key dependence of the transform domain increases the
security of the overall system. Experimental results show the effectiveness of the proposed scheme.
Embedding considering dependencies between pixels
This paper introduces a steganographic algorithm that generates a stego image based on a model describing plausible values for the pixels of that image. The model is established by analyzing a number of realizations of the cover image. The main contribution of the approach is that the model describes dependencies between adjacent pixels. Consequently, embedding according to the suggested approach considers image structures. The general approach includes parameters that influence the achievable security of embedding. Extensive practical tests using selected steganalytical methods illustrate the improvements in security compared to additive steganography and point out reasonable settings for the parameters.
A reversible data hiding method for encrypted images
For several years, the protection of multimedia data has been growing in importance. This protection can be achieved with encryption or data hiding algorithms. To decrease the transmission time, data compression is necessary. In recent years, a new problem has emerged: combining compression, encryption and data hiding in a single step. So far, few solutions have been proposed to combine image encryption and compression, for example. Nowadays, a new challenge consists of embedding data in encrypted images. Since the entropy of an encrypted image is maximal, the embedding step, which behaves like noise, is not possible using standard data hiding algorithms. A new idea is to apply reversible data hiding algorithms to encrypted images, so that the embedded data can be removed before the image decryption. Recent reversible data hiding methods have been proposed with high capacity, but these methods are not applicable to encrypted images. In this paper we propose an analysis of the local standard deviation of the marked encrypted images in order to remove the embedded data during the decryption step. We have applied our method to various images, and we show and analyze the obtained results.
Improved embedding efficiency and AWGN robustness for SS watermarks via pre-coding
Spread spectrum (SS) modulation is utilized in many watermarking applications because it offers exceptional
robustness against several attacks. The embedding rate-distortion performance of SS embedding however, is
relatively weak compared to quantization index modulation (QIM). This limits the relative embedding rate of
SS watermarks. In this paper, we illustrate that both the embedding efficiency, i.e. bits embedded per unit
distortion, and the robustness against additive white Gaussian noise (AWGN) can be improved by pre-coding the
message, followed by constellation adjustment on the SS detector to minimize the distortion introduced on the cover
image by the coded data. Our pre-coding method encodes p bits as a 2^p x 1 binary vector with a single nonzero
entry whose index indicates the value of the embedded bits. Our analysis shows that the method improves the
embedding rate by approximately p/4 without increasing embedding distortion or sacrificing robustness to AWGN
attacks. Experimental evaluation of the method using a set theoretic embedding framework for the watermark
insertion validates our analysis.
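The pre-coding step itself is simple to illustrate: p message bits select the index of the single nonzero entry in a 2^p-length vector.

```python
import numpy as np

def precode(bits):
    """Encode p bits as a one-hot vector of length 2^p whose nonzero
    entry's index is the value of the bits."""
    idx = int("".join(str(b) for b in bits), 2)
    v = np.zeros(2 ** len(bits), dtype=int)
    v[idx] = 1
    return v

def predecode(v):
    """Recover the p bits from the index of the nonzero entry."""
    p = int(np.log2(len(v)))
    return [int(c) for c in format(int(np.argmax(v)), f"0{p}b")]
```

Because only one of the 2^p coded symbols is nonzero, most of the embedding distortion budget is spent on a single position, which is the source of the efficiency gain described above.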
Perceptual hash based blind geometric synchronization of images for watermarking
In this work, we consider the problem of blind geometric synchronization of images for watermarking. Existing
solutions involve insertion of periodic templates, geometrically-invariant domains, and feature-point-based
techniques. However, security leakage and poor watermark detection performance under lossy geometric
attacks are some known disadvantages of these methods. In contrast, a perceptual-hash-based
secure and robust image synchronization scheme has recently been proposed. Although it has shown promising results, it
requires a series of heavy computations which prevents it from being employed in real time (or near real time)
applications and from being extended to a wider range of geometric attack models. In this paper, we focus on the
computational efficiency of this scheme and introduce a novel randomized algorithm which conducts geometric
synchronization much faster.