Proceedings Volume 8665

Media Watermarking, Security, and Forensics 2013

View the digital version of this volume at the SPIE Digital Library.

Volume Details

Date Published: 26 March 2013
Contents: 10 Sessions, 26 Papers, 0 Presentations
Conference: IS&T/SPIE Electronic Imaging 2013
Volume Number: 8665

Table of Contents

All links to SPIE Proceedings will open in the SPIE Digital Library.
  • Front Matter: Volume 8665
  • Watermark
  • Security and Biometric
  • Camera Identification
  • General Forensics
  • Steganography
  • Steganalysis I
  • Steganalysis II
  • Miscellaneous
  • Interactive Paper Session
Front Matter: Volume 8665
Front Matter: Volume 8665
This PDF file contains the front matter associated with SPIE Proceedings Volume 8665 including the Title Page, Copyright Information, Table of Contents, Introduction, and Conference Committee listing.
Watermark
Insertion, deletion robust audio watermarking: a set theoretic, dynamic programming approach
Andrew Nadeau, Gaurav Sharma
Desynchronization vulnerabilities have limited audio watermarking’s success in applications such as digital rights management (DRM). Our work extends (blind-detection) spread spectrum (SS) watermarking to withstand time-scale desynchronization (insertions/deletions) by applying dynamic programming (DP). Detection uses short SS watermark blocks with a novel O(N log N) correlation algorithm. These calculations provide robustness to time shifts and the resulting offsets to the watermarking domain transform. To withstand insertions/deletions, DP techniques then search for sequences of blocks rather than detecting SS watermarks individually. This allows DP techniques to govern the tradeoff between long/short SS blocks for non-desynchronization/desynchronization robustness. However, high dimensional searches and short SS blocks both increase false detection rates. Consequently, we verify detections between multiple, simultaneously embedded watermarks. Embedding multiple watermarks while considering host interference, compression robustness, and perceptual degradation to the host audio is a complex problem, solved using a set theoretic embedding framework. Proposed techniques improve performance by multiple orders of magnitude compared with naive SS schemes. Results also demonstrate the tradeoff between non-desynchronization/desynchronization robustness.
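The O(N log N) correlation referred to here is, in general, obtained by computing the sliding correlation in the frequency domain. The sketch below is a minimal, hypothetical illustration of FFT-based block correlation; the block length, embedding strength, and signal names are placeholder assumptions, not the paper's parameters.

```python
# Minimal sketch: FFT-based circular correlation of a short spread-spectrum
# (SS) block against a received signal, giving all time shifts in O(N log N).
# Block length and embedding strength are illustrative assumptions.
import numpy as np

def block_correlation(received, ss_block):
    """Correlation of ss_block with every circular shift of received."""
    n = len(received)
    pattern = np.zeros(n)
    pattern[:len(ss_block)] = ss_block
    return np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(pattern))).real

rng = np.random.default_rng(0)
block = rng.choice([-1.0, 1.0], size=256)      # pseudo-random SS block
host = rng.normal(0.0, 1.0, 8192)              # host "audio" stand-in
host[1000:1256] += 0.5 * block                 # block embedded at an unknown offset
print(int(np.argmax(block_correlation(host, block))))   # expected: 1000
```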
Impeding forgers at photo inception
Matthias Kirchner, Peter Winkler, Hany Farid
We describe a new concept for making photo tampering more difficult and time consuming, and for a given amount of time and effort, more amenable to detection. We record the camera preview and camera motion in the moments just prior to image capture. This information is packaged along with the full resolution image. To avoid detection, any subsequent manipulation of the image would have to be propagated to be consistent with this data - a decidedly difficult undertaking.
Watermark embedding in optimal color direction
Robert Lyons, Alastair Reed, John Stach
To watermark spot color packaging images one modulates available spot color inks to create a watermark signal. By perturbing different combinations of inks one can change the color direction of the watermark signal. In this paper we describe how to calculate the optimal color direction that embeds the maximum signal while keeping the visibility below some specified acceptable value. The optimal color direction depends on the starting color for the image region, the ink density constraints and the definition of the watermark signal. After a description of the general problem of N spot color inks we shall describe two-ink embedding methods and try to find the optimal direction that will maximize robustness at a given visibility. The optimal color direction is usually in a chrominance direction and the resulting ink perturbations change the luminosity very little. We compare the optimal color embedder to a single-color embedder.
Video game watermarking
Waldemar Berchtold, Marcel Schäfer, Huajian Liu, et al.
The publishers of video games suffer from illegal piracy and information leakage caused by end-consumers, "release groups" or insiders shortly after or even before the official release of a new video game. Mechanisms to prevent or at least postpone this illegal redistribution are DRM or copy protection mechanisms. However, these mechanisms are very unpopular, because they restrict how customers can play the game and demand a high administration effort from the developers and/or distributors. Even worse, most copy protection mechanisms have proven to be insecure, as "patches" for circumvention are usually available quickly and are easy to obtain. To satisfy the challenges of security and usability, this work introduces the idea of using digital watermarking to protect all available and suitable media types and software binaries contained in a video game. A three-layered watermarking deployment approach along the production chain is proposed to detect leakage in the release phase as well as during the development process of a boxed video game. The proposed approach features both copyright watermarking and collusion secure fingerprints embedded as transaction watermark messages in components of video games. We discuss the corresponding new challenges and opportunities. In addition, a prototype watermarking algorithm is presented to demonstrate the adaptations that classical image watermarking requires when applied to video games in order to satisfy the requirements for transparency, security, and performance. The watermark capacity is significantly increased while inter-media and inter-file embedding is enabled, and the associated synchronization challenge is solved by robust hashes.
Security and Biometric
Banknote authentication with mobile devices
Volker Lohweg, Jan Leif Hoffmann, Helene Dörksen, et al.
Maintaining confidence in security documents, especially banknotes, is and remains a major concern for central banks in order to maintain the stability of economies around the world. In this paper we describe an image processing and pattern recognition approach based on the Sound-of-Intaglio principle for use on smart devices such as smartphones. Smartphones are now in use in many regions of the world. These devices are increasingly capable computing units, equipped with resource-limited but effective CPUs, cameras with illumination, and flexible operating systems. Hence, it is natural to apply smartphones to banknote authentication, especially for visually impaired persons. Our approach shows that these devices are capable of processing the data under the constraints of image quality and processing power. Strictly speaking, a mobile device is not an industrial product designed for harsh environments, but it can nevertheless be used for banknote authentication. The concept is based on a new strategy for constructing adaptive wavelets for the analysis of different print patterns on a banknote. Furthermore, a banknote-specific feature vector is generated which describes an authentic banknote effectively under various illumination conditions. A multi-stage linear discriminant analysis classifier generates stable and reliable output.
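As a rough illustration of the final classification stage only, a linear discriminant analysis (LDA) classifier could be prototyped as follows; the adaptive-wavelet feature extraction is specific to the paper and is replaced here by hypothetical placeholder statistics and synthetic data.

```python
# Sketch of an LDA decision stage on banknote patches; features are simple
# placeholder statistics, not the paper's adaptive-wavelet features.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def patch_features(patch):
    # Placeholder statistics standing in for wavelet-based intaglio features.
    return np.array([patch.mean(), patch.std(), np.abs(np.diff(patch, axis=0)).mean()])

rng = np.random.default_rng(1)
genuine = [patch_features(rng.normal(0.6, 0.20, (64, 64))) for _ in range(50)]
counterfeit = [patch_features(rng.normal(0.6, 0.05, (64, 64))) for _ in range(50)]
X = np.vstack(genuine + counterfeit)
y = np.array([1] * 50 + [0] * 50)              # 1 = authentic, 0 = counterfeit

clf = LinearDiscriminantAnalysis().fit(X, y)
test = patch_features(rng.normal(0.6, 0.20, (64, 64))).reshape(1, -1)
print(clf.predict(test))                       # class decision for a new patch
```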
Visibility enhancement and validation of segmented latent fingerprints in crime scene forensics
Andrey Makrushin, Tobias Kiertscher, Mario Hildebrandt, et al.
Forensic investigators are permanently looking for novel technologies for fast and effective recovery of latent fingerprints at a crime scene. Traditionally, this work is done manually and is therefore considered very time consuming. Highly skilled experts apply chemical reagents to improve the visibility of traces and use digital cameras or adhesive tape to lift prints. Through an automation of the surface examination, larger areas can be investigated faster. This work extends the experimental study of the capabilities of a chromatic white-light (CWL) sensor for the contact-less lifting of latent fingerprints from substrates of varying difficulty. The crucial advantage of a CWL sensor compared to taking digital photographs is the simultaneous acquisition of luminance and topography of the surface, extending standard two-dimensional image processing to the analysis of three-dimensional data. The paper focuses on the automatic validation of localized fingerprint regions. In contrast to statistical features from luminance data, previously used for localization, we propose the streakiness of a pattern as the basic feature indicating the presence of a fingerprint. Regions are analyzed for streakiness using both luminance and topography data. As a result, human experts save significant time by dealing only with a limited number of approved fingerprints. The experiments show that the validation equal error rate does not exceed 6% for high-quality fingerprints, even on very challenging substrates.
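One plausible way to quantify the streakiness of a candidate region, sketched below, is the coherence of the local structure tensor, which is high for oriented ridge/valley patterns and low for isotropic texture; the paper's exact feature definition may differ, so treat this purely as an illustrative assumption.

```python
# Structure-tensor coherence as a stand-in "streakiness" measure; applicable
# to either luminance or topography data. Parameters are illustrative.
import numpy as np
from scipy.ndimage import gaussian_filter

def streakiness(region, sigma=2.0):
    gy, gx = np.gradient(region.astype(float))
    jxx = gaussian_filter(gx * gx, sigma)
    jyy = gaussian_filter(gy * gy, sigma)
    jxy = gaussian_filter(gx * gy, sigma)
    coherence = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2) / (jxx + jyy + 1e-12)
    return float(coherence.mean())

y, x = np.mgrid[0:128, 0:128]
ridges = np.sin(2 * np.pi * x / 9)                       # fingerprint-like stripes
noise = np.random.default_rng(10).normal(0, 1, (128, 128))
print(round(streakiness(ridges), 2), round(streakiness(noise), 2))  # stripes score higher
```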
Printed fingerprints at crime scenes: a faster detection of malicious traces using scans of confocal microscopes
Fingerprint traces are an important part of forensic investigations to identify potential perpetrators. With the possibility of printing traces for quality assurance purposes, it is also possible to place malicious traces at crime scenes. In forensics, examiners are already aware of multiple identical traces, e.g. produced by stamping fingerprints. The technique of printing fingerprints using artificial sweat allows the creation of different versions of the same fingerprint, similar to the residue from a finger, which is almost never 100 percent identical to another latent fingerprint. Hence, Kiltz et al. [1] introduce a first framework for the detection of such malicious traces in subjective evaluations based on dot patterns of amino acid. Hildebrandt et al. [2] introduce a first automated approach for the detection of printed fingerprints using high-resolution scans from a chromatic white-light sensor. However, the reported recognition accuracy is insufficient for forensic investigations.

In this paper we propose an improved feature extraction for scans from a confocal microscope to reduce the overall analysis time and to increase the recognition accuracy. Our evaluation is based on 3000 printed and 3000 real fingerprints on three surfaces (hard disk platter, overhead foil, and compact disk), advancing the research from Hildebrandt et al. [2]. Our goal is to benchmark the feature extraction and recognition of printed fingerprints for the three substrates as well as for their combination. The results indicate a significant reduction of the necessary analysis time to less than one minute, as well as an improved recognition rate of up to 99.7 percent for all samples on the three surfaces, compared with the 91.48 percent previously achieved on two surfaces as reported in Hildebrandt et al. [2].
Camera Identification
Sensor fingerprint digests for fast camera identification from geometrically distorted images
In camera identification using a sensor fingerprint, it is absolutely essential that the fingerprint and the noise residual from a given test image be synchronized. If the signals are desynchronized due to a geometrical transformation, fingerprint detection becomes significantly more complicated. Besides constructing the detector in an invariant transform domain (which limits the type of geometrical transformation), a more general approach is to maximize the generalized likelihood ratio with respect to the transform parameters, which requires a potentially expensive search and numerous resamplings of the entire image (or fingerprint). In this paper, we propose a measure that significantly reduces the search complexity by limiting the resampling from the entire image to a much smaller subset of the signal called the fingerprint digest. The technique can be applied to an arbitrary geometrical distortion that does not involve spatial shifts, such as digital zoom and non-linear lens-distortion correction.
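The digest idea can be illustrated with a hedged toy sketch: keep only the k largest fingerprint samples together with their coordinates, and evaluate candidate transform parameters by looking up (rather than resampling) the test residual at the mapped digest positions. The digest size, scaling grid, and correlation score below are illustrative assumptions, not the paper's detector.

```python
import numpy as np

def make_digest(fingerprint, k=5000):
    """Keep the k largest-magnitude fingerprint samples and their positions."""
    idx = np.argsort(np.abs(fingerprint).ravel())[-k:]
    rows, cols = np.unravel_index(idx, fingerprint.shape)
    return rows, cols, fingerprint[rows, cols]

def digest_score(residual, digest, scale):
    rows, cols, values = digest
    h, w = residual.shape
    # Map only the digest coordinates through the candidate scaling; the full
    # image is never resampled.
    r = np.clip(np.round(rows * scale).astype(int), 0, h - 1)
    c = np.clip(np.round(cols * scale).astype(int), 0, w - 1)
    return np.corrcoef(residual[r, c], values)[0, 1]

rng = np.random.default_rng(2)
K = rng.normal(0, 1, (512, 512))                    # estimated camera fingerprint
residual = 0.1 * K + rng.normal(0, 1, (512, 512))   # test residual (no zoom in this toy)
digest = make_digest(K)
scores = {s: digest_score(residual, digest, s) for s in (0.95, 1.00, 1.05)}
print(max(scores, key=scores.get))                  # expected: 1.0
```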
Case studies and further improvements on source camera identification
Kenji Kurosawa, Kenro Kuroki, Ken’ichi Tsuchiya, et al.
Actual case examples and further improvements on source camera identification are shown. There are three specific topics in this paper: (a) in order to improve the performance of source camera identification, a hybrid identification scheme using both dark current non-uniformity (DCNU) and photo-response non-uniformity (PRNU) is proposed; the experimental results indicated that identification performance can be improved by properly taking advantage of their respective features; (b) source camera identification using the non-uniform nature of the CCD charge transfer circuit is proposed; experimental results with twenty CCD modules of the same model showed that individual camera identification for dark images was possible with the proposed method, and that the proposed method had higher discrimination capability than the method using pixel non-uniformity when the number of recorded images was small; (c) the authors have performed source camera identification in five actual criminal cases, including a homicide case. The analytical procedure was a sequential examination of hot pixel coordinate validation followed by similarity evaluation of the sensor noise pattern. The authors could clearly prove that the questioned criminal scenes had been recorded by the questioned cameras in four of the five cases.
Forensic analysis of interdependencies between vignetting and radial lens distortion
Optical aberrations are an inherent part of all images captured with a digital camera and might form another valuable characteristic on which to build reliable image forensic methods. Within this paper we focus on vignetting and radial lens distortion and investigate interdependencies between both optical aberrations. Using checkerboards and a homogeneously illuminated white wall as ideal test patterns, we investigate the influence of lens settings and camera orientation. More precisely, we use images from the 'Dresden Image Database' and a specifically created data set to investigate relations between the appearance of optical aberrations and lens settings: focal length, focus, and aperture. Our experiments also use images of natural scenes to point to general difficulties that have to be considered when vignetting and radial lens distortion are analyzed in realistic scenarios.
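For orientation, two standard parametric models frequently used for these aberrations are a polynomial radial distortion model and a radially symmetric vignetting falloff; the exact parameterizations used in the paper may differ.

```latex
% Polynomial radial lens distortion (undistorted radius r_u mapped to r_d)
% and a polynomial vignetting model V(r) scaling the ideal scene intensity.
r_d = r_u \left( 1 + k_1 r_u^2 + k_2 r_u^4 \right), \qquad
I_{\mathrm{obs}}(r) = V(r)\, I_{\mathrm{scene}}(r), \quad
V(r) = 1 + \alpha_1 r^2 + \alpha_2 r^4 + \alpha_3 r^6 .
```

Both are functions of the distance r from the optical center, which is one reason the two aberrations are naturally studied together.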
A sneak peek into the camcorder path
Cherif Ben Zid, Séverine Baudry, Bertrand Chupeau, et al.
A number of technologies claim to be robust against content re-acquisition with a camera recorder, e.g. watermarking and content fingerprinting. However, the benchmarking campaigns required to evaluate the impact of the camcorder path are tedious and such evaluation is routinely overlooked in practice. Due to the interaction between numerous devices, camcording displayed content modifies the video essence in various ways, including geometric distortions, temporal transforms, non-uniform and varying luminance transformations, saturation, color alteration, etc. It is necessary to clearly understand the different phenomena at stake in order to design efficient countermeasures or to build accurate simulators which mimic these effects. As a first step in this direction, we focus in this study solely on luminance transforms. In particular, we investigate three different alterations, namely: (i) the spatial non-uniformity, (ii) the steady-state luminance response, and (iii) the transient luminance response.
General Forensics
Ballistic examinations based on 3D data: a comparative study of probabilistic Hough Transform and geometrical shape determination for circle-detection on cartridge bottoms
The application of contact-less optical 3D sensing techniques, yielding digital data for the acquisition of toolmarks on forensic ballistic specimens found at crime scenes, as well as the development of computer-aided, semi-automated firearm identification systems using 3D information, are currently emerging fields of research with rising importance. Traditionally, the examination of forensic ballistic specimens is done manually by highly skilled forensic experts using comparison microscopes. A partial automation of the comparison task promises examination results that are less dependent on subjective expertise and, furthermore, a reduction of the manual work needed. While some partly automated systems are available, to our current knowledge they are all proprietary. One necessary requirement for the examination of forensic ballistic specimens is a reliable circle detection and segmentation of cartridge bottoms. This information is later used, for example, for alignment and registration tasks, determination of regions of interest, and locally restricted application of complex feature-extraction algorithms. In this work we use a Keyence VK-X 105 laser-scanning confocal microscope to simultaneously acquire a very high-detail topography image, a laser-intensity image, and a color image of the assessed cartridge bottoms. The work is focused on a comparison of the Hough Transform (21HT) and geometric shape determination for circle detection on cartridge bottoms using 3D as well as 2D information. We compare the pre-processing complexity, the required processing time, and the ability to reliably detect all desired circles. We assume that the use of geometric shape detection can reduce the required processing time due to less complex processing. For the application of shape determination as well as for the Hough Transform we expect more reliable circle detection when using additional 3D information. Our first experimental evaluation, using 100 9mm center-fire cartridges shot from 3 different firearms, shows a positive tendency toward verifying these suppositions.
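As a point of reference for the 2D baseline, circle detection on a luminance view of a cartridge bottom could be prototyped with OpenCV's Hough-gradient implementation; the file name, radii, and thresholds below are placeholders, and the 3D/topography-based variant discussed in the paper is not reproduced.

```python
import cv2
import numpy as np

# Hypothetical input: a grayscale luminance image of a cartridge bottom.
img = cv2.imread("cartridge_bottom_luminance.png", cv2.IMREAD_GRAYSCALE)
assert img is not None, "test image not found"

blurred = cv2.medianBlur(img, 5)                      # suppress speckle before voting
circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5, minDist=50,
                           param1=120, param2=60, minRadius=20, maxRadius=400)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        print(f"circle at ({x}, {y}) with radius {r}")  # e.g. primer and case-head rims
```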
Photocopier forensics based on arbitrary text characters
Changyou Wang, Xiangwei Kong, Shize Shang, et al.
A photocopied document can expose photocopier characteristics that identify the source photocopier, so how to extract the optimal intrinsic features is critical for photocopier forensics. In this paper, a photocopier forensics method based on texture feature analysis of arbitrary characters is proposed, and these features are treated as the intrinsic features. First, an image preprocessing step is applied to obtain individual character images. Second, three sets of features are extracted from each individual character image: the gray level features, the gradient differential matrix (GDM) features, and the gray level gradient co-occurrence matrix (GLGCM) features. Finally, each individual character in a document is classified using a Fisher classifier, and a majority vote is performed on the character classification results to identify the source photocopier. Experimental results on seven photocopiers prove the effectiveness of the proposed method; an average character classification accuracy of 88.47% can be achieved.
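The decision stage (per-character Fisher classification followed by a document-level majority vote) can be sketched as below; the actual gray level, GDM, and GLGCM features are replaced by synthetic vectors, so the example only illustrates the voting mechanism.

```python
import numpy as np
from collections import Counter
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(3)
n_copiers, chars_per_copier = 7, 100
# Synthetic per-character feature vectors for training (placeholder features).
X_train = rng.normal(0, 1, (n_copiers * chars_per_copier, 4)) \
          + 0.5 * np.repeat(np.arange(n_copiers), chars_per_copier)[:, None]
y_train = np.repeat(np.arange(n_copiers), chars_per_copier)
fisher = LinearDiscriminantAnalysis().fit(X_train, y_train)   # Fisher discriminant

# A questioned document: 200 characters, all produced by copier 2 in this toy.
doc_chars = rng.normal(0, 1, (200, 4)) + 0.5 * 2
votes = fisher.predict(doc_chars)
print(Counter(votes).most_common(1)[0][0])   # majority vote should recover copier 2
```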
Accelerating video carving from unallocated space
Hari Kalva, Anish Parikh, Avinash Srinivasan
Video carving has become an essential tool in digital forensics. Video carving enables recovery of deleted video files from hard disks. Processing data to extract videos is a computationally intensive task. In this paper we present two methods to accelerate video carving: a method to accelerate fragment extraction, and a method to accelerate combining of these fragments into video segments. Simulation results show that complexity of video fragment extraction can be reduced by as much as 75% with minimal impact on the videos recovered.
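A heavily simplified sketch of the first stage (locating candidate fragment starts in a raw dump of unallocated space by byte signatures of ISO-BMFF/MP4 boxes) is shown below; the acceleration techniques the paper proposes for fragment extraction and recombination are not modeled, and the file path is hypothetical.

```python
# Scan a raw image of unallocated space for MP4/ISO-BMFF box signatures that
# mark candidate video fragments; offsets can then be passed to a joiner.
SIGNATURES = [b"ftyp", b"moov", b"mdat"]

def find_fragment_offsets(path, chunk_size=1 << 20):
    offsets, overlap = [], max(len(s) for s in SIGNATURES) - 1
    with open(path, "rb") as f:
        base, tail = 0, b""
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            data = tail + chunk
            for sig in SIGNATURES:
                pos = data.find(sig)
                while pos != -1:
                    offsets.append((base - len(tail) + pos, sig.decode()))
                    pos = data.find(sig, pos + 1)
            tail = data[-overlap:]
            base += len(chunk)
    return sorted(offsets)

# Example (hypothetical image file):
# print(find_fragment_offsets("unallocated_space.img")[:10])
```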
Steganography
On the role of side information in steganography in empirical covers
In an attempt to alleviate the negative impact of the unavailable cover model, some steganographic schemes utilize knowledge of the so-called “precover” when embedding secret data. The precover is typically a higher-resolution (unquantized) representation of the cover, such as the raw sensor output before it is converted to an 8-bit per channel color image. The precover object is available only to the sender and not to the Warden, which seems to give a fundamental advantage to the sender. In this paper, we provide theoretical insight into why side-informed embedding schemes for empirical covers might provide a high level of security. By adopting a piece-wise polynomial model corrupted by AWGN for the content, we prove that when the cover is sufficiently non-stationary, embedding by minimizing distortion w.r.t. the precover is more secure than preserving a model estimated from the cover (the so-called model-based steganography). Moreover, the side-informed embedding enjoys four times lower steganographic Fisher information than LSB matching.
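The core mechanism of side-informed embedding can be illustrated with a hedged toy example: when an embedding change is needed, its sign is chosen so that the stego value stays as close as possible to the unquantized precover. This mimics the general principle only, not the paper's scheme or its Fisher-information analysis.

```python
import numpy as np

def side_informed_embed(precover, message_bits):
    cover = np.round(precover).astype(int)       # what a non-informed sender would have
    stego = cover.copy()
    for i, bit in enumerate(message_bits):
        if stego[i] % 2 != bit:                  # the LSB must be changed
            # +1 or -1, whichever stays closer to the real-valued precover
            stego[i] += 1 if precover[i] > cover[i] else -1
    return cover, stego

rng = np.random.default_rng(4)
pre = rng.normal(128, 20, 16)                    # unquantized (precover) values
bits = rng.integers(0, 2, 16)
cover, stego = side_informed_embed(pre, bits)
print("|stego - precover| mean:", round(float(np.abs(stego - pre).mean()), 3))
print("|cover - precover| mean:", round(float(np.abs(cover - pre).mean()), 3))
```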
A study of embedding operations and locations for steganography in H.264 video
Andreas Neufeld, Andrew D. Ker
This work studies the fundamental building blocks for steganography in H.264 compressed video: the embedding operation and the choice of embedding locations. Our aim is to inform the design of better video steganography, a topic on which there has been relatively little publication so far. We determine the best embedding option, from a small menu of embedding operations and locations, as benchmarked by an empirical estimate of Maximum Mean Discrepancy (MMD) for first- and second-order features extracted from a video corpus. A highly-stable estimate of MMD can be formed because of the large sample size. The best embedding operation (so-called F5) is identical to that found by a recent study of still compressed image steganography, but in video the options for embedding location are richer: we show that the least detectable option, of those studied, is to spread payload unequally between the Luma and the two Chroma channels.
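The benchmarking statistic itself is standard: an unbiased estimate of squared Maximum Mean Discrepancy between cover-feature and stego-feature samples. The sketch below uses a Gaussian kernel and synthetic feature matrices; the H.264 feature extraction is not shown, and the kernel bandwidth is an arbitrary assumption.

```python
import numpy as np

def mmd2_unbiased(X, Y, gamma=0.125):
    """Unbiased estimate of squared MMD with a Gaussian kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    np.fill_diagonal(Kxx, 0.0)
    np.fill_diagonal(Kyy, 0.0)
    return Kxx.sum() / (m * (m - 1)) + Kyy.sum() / (n * (n - 1)) - 2 * Kxy.mean()

rng = np.random.default_rng(5)
cover_feats = rng.normal(0.0, 1, (500, 8))
stego_feats = rng.normal(0.2, 1, (500, 8))            # shifted by embedding (toy data)
print(mmd2_unbiased(cover_feats[:250], cover_feats[250:]))  # close to 0: same source
print(mmd2_unbiased(cover_feats, stego_feats))              # clearly larger: detectable
```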
Video steganography with multi-path motion estimation
Y. Cao, X. Zhao, F. Li, et al.
This paper proposes a novel video steganography scheme that operates during motion estimation (ME). Compared with existing schemes using the motion vector (MV) as the information carrier, the new approach is enhanced in two respects to improve steganographic security. First, to reduce the distortion of a single change, a technique called multi-path ME is introduced to generate optimized alternatives for MV replacement. Second, to improve both the embedding and computational efficiencies, a flexible embedding structure is designed to perform matrix embedding.
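Matrix embedding itself is a standard coding trick; the hedged sketch below uses the parity-check matrix of the [7,4] Hamming code to hide 3 bits in the LSBs of 7 carrier values (e.g. motion-vector components) with at most one modification. The multi-path motion estimation that supplies low-distortion replacement vectors is not modeled.

```python
import numpy as np

H = np.array([[0, 0, 0, 1, 1, 1, 1],
              [0, 1, 1, 0, 0, 1, 1],
              [1, 0, 1, 0, 1, 0, 1]])        # column j is the binary code of j (1..7)

def matrix_embed(lsbs, msg3):
    """Embed 3 bits into 7 LSBs, changing at most one position."""
    diff = ((H @ lsbs) % 2) ^ msg3
    out = lsbs.copy()
    if diff.any():
        pos = int("".join(map(str, diff)), 2) - 1   # column of H equal to diff
        out[pos] ^= 1
    return out

def matrix_extract(lsbs):
    return (H @ lsbs) % 2

carrier = np.array([1, 0, 0, 1, 1, 0, 1])    # LSBs of 7 MV components (toy values)
message = np.array([1, 0, 1])
stego = matrix_embed(carrier, message)
print(matrix_extract(stego), "changed positions:", int((stego != carrier).sum()))
```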
Steganalysis I
Random projections of residuals as an alternative to co-occurrences in steganalysis
Today, the most reliable detectors of steganography in empirical cover sources, such as digital images coming from a known source, are built using machine learning by representing images with joint distributions (co-occurrences) of neighboring noise residual samples computed using local pixel predictors. In this paper, we propose an alternative statistical description of residuals by binning their random projections on local neighborhoods. The size and shape of the neighborhoods allow the steganalyst to further diversify the statistical description and thus improve detection accuracy, especially for highly adaptive steganography. Other key advantages of this approach include the possibility to model long-range dependencies among pixels and to make use of information that was previously underutilized in the marginals of co-occurrences. Moreover, the proposed approach is much more flexible than the previously proposed spatial rich model, allowing the steganalyst to obtain a significantly better trade-off between detection accuracy and feature dimensionality. We call the new image representation the Projection Spatial Rich Model (PSRM) and demonstrate its effectiveness on HUGO and WOW – two current state-of-the-art spatial-domain embedding schemes.
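The general construction can be sketched as follows: compute a noise residual with a simple local predictor, project small neighborhoods of the residual onto random vectors, then quantize, truncate, and bin the projections into histograms that form the feature vector. Neighborhood size, number of projections, and quantization below are illustrative choices, not the PSRM parameters.

```python
import numpy as np

def residual(img):
    # First-order horizontal residual: pixel minus its right neighbor.
    return img[:, :-1].astype(float) - img[:, 1:].astype(float)

def projection_features(res, n_proj=10, size=3, q=4.0, T=6, seed=0):
    rng = np.random.default_rng(seed)
    h, w = res.shape
    feats = []
    for _ in range(n_proj):
        v = rng.normal(0, 1, (size, size))
        v /= np.linalg.norm(v)
        # Project every size x size residual neighborhood onto v.
        proj = sum(v[i, j] * res[i:h - size + 1 + i, j:w - size + 1 + j]
                   for i in range(size) for j in range(size))
        qp = np.clip(np.round(proj / q), -T, T).astype(int) + T   # quantize + truncate
        hist = np.bincount(qp.ravel(), minlength=2 * T + 1)
        feats.append(hist / hist.sum())
    return np.concatenate(feats)

img = np.random.default_rng(6).integers(0, 256, (256, 256))
print(projection_features(residual(img)).shape)   # (n_proj * (2T+1),) feature vector
```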
The challenges of rich features in universal steganalysis
Contemporary steganalysis is driven by new steganographic rich feature sets, which consist of large numbers of weak features. Although extremely powerful when applied to supervised classification problems, they are not compatible with unsupervised universal steganalysis, because the unsupervised method cannot separate the signal (evidence of steganographic embedding) from the noise (cover content). This work tries to alleviate the problem by means of feature extraction algorithms. We focus on linear projections informed by embedding methods, and propose a new method, which we call calibrated least squares, with the specific aim of making the projections sensitive to stego content yet insensitive to cover variation. Different projections are evaluated by their application to the anomaly detector from Ref. 1, and we are able to retain both the universality and the robustness of the method while increasing its performance substantially.
Exploring multitask learning for steganalysis
Julie Makelberge, Andrew D. Ker
This paper introduces a new technique for multi-actor steganalysis. In conventional settings, it is unusual for one actor to generate enough data to train a personalized classifier. On the other hand, in a network there will be many actors, between them generating large amounts of data. Prior work has pooled the training data and then tried to deal with its heterogeneity. In this work, we use multitask learning to account for differences between actors' image sources, while still sharing domain (globally applicable) information. We tackle the problem by learning separate feature weights for each actor and sharing information between the actors through the regularization. This way, the domain information obtained by considering all actors at the same time is not disregarded, but the weights are nevertheless personalized. This paper explores whether multitask learning improves detection accuracy, by benchmarking the new multitask learners against previous work.
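A common way to formalize this kind of multitask learner, written here only as an illustrative objective (the paper's exact loss and regularizer may differ), is to decompose each actor's weight vector into a shared component plus a personal deviation and to penalize both:

```latex
% w_0: shared (domain) weights; v_a: personal deviation of actor a;
% D_a: training images attributed to actor a; \ell: classification loss.
\min_{w_0,\{v_a\}} \;
\sum_{a=1}^{A} \sum_{i \in \mathcal{D}_a}
  \ell\!\left(y_i,\; (w_0 + v_a)^{\top} x_i\right)
\; + \; \lambda_0 \lVert w_0 \rVert^2
\; + \; \lambda_1 \sum_{a=1}^{A} \lVert v_a \rVert^2
```

A large lambda_1 pushes all actors toward the shared weights, while a small lambda_1 lets each actor's classifier specialize.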
Steganalysis II
Quantitative steganalysis using rich models
Jan Kodovský, Jessica Fridrich
In this paper, we propose a regression framework for steganalysis of digital images that utilizes the recently proposed rich models – high-dimensional statistical image descriptors that have been shown to substantially improve classical (binary) steganalysis. Our proposed system is based on gradient boosting and utilizes a steganalysis-specific variant of regression trees as base learners. The conducted experiments confirm that the proposed system outperforms prior quantitative steganalysis (both structural and feature-based) across a wide range of steganographic schemes: HUGO, LSB replacement, nsF5, BCHopt, and MME3.
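An off-the-shelf analogue of the regression stage can be sketched with a generic gradient-boosted regressor mapping rich-model features to the relative payload; the paper's steganalysis-specific regression trees are not reproduced, and the features below are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n_images, n_features = 2000, 200
payload = rng.uniform(0, 1, n_images)            # relative embedded payload per image
X = rng.normal(0, 1, (n_images, n_features))
X[:, :10] += 2.0 * payload[:, None]              # a few features respond to the payload

model = GradientBoostingRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X[:1500], payload[:1500])
pred = model.predict(X[1500:])
print("mean absolute payload error:", round(float(np.abs(pred - payload[1500:]).mean()), 3))
```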
A cost-effective decision tree based approach to steganalysis
An important issue concerning real-world deployment of steganalysis systems is the computational cost of acquiring the features used in building steganalyzers. The conventional approach to steganalyzer design crucially assumes that all features required for steganalysis have to be computed in advance. However, as the number of features used by typical steganalyzers grows into the thousands and timing constraints are imposed on how fast a decision has to be made, this approach becomes impractical. To address this problem, we focus on the machine learning aspect of steganalyzer design and introduce a decision tree based approach to steganalysis. The proposed steganalyzer system can minimize the average computational cost of making a steganalysis decision while still maintaining detection accuracy. To demonstrate the potential of this approach, a series of experiments is performed on well-known steganography and steganalysis techniques.
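The cost/accuracy idea can be illustrated, under loose assumptions, with a simple two-stage cascade in which expensive features are computed only when a cheap first stage is not confident; note that this is not the paper's decision-tree construction, merely a sketch of the same trade-off with synthetic features.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(8)
n = 4000
y = rng.integers(0, 2, n)                                  # 0 = cover, 1 = stego
cheap = 0.8 * y[:, None] + rng.normal(0, 1, (n, 5))        # fast-to-compute features
costly = 1.5 * y[:, None] + rng.normal(0, 1, (n, 50))      # slow, more informative

stage1 = DecisionTreeClassifier(max_depth=4).fit(cheap[:3000], y[:3000])
stage2 = DecisionTreeClassifier(max_depth=8).fit(
    np.hstack([cheap[:3000], costly[:3000]]), y[:3000])

def classify(i, confidence=0.9):
    p = stage1.predict_proba(cheap[i:i + 1])[0]
    if p.max() >= confidence:                              # cheap stage is confident
        return int(p.argmax()), "cheap"
    full = np.hstack([cheap[i:i + 1], costly[i:i + 1]])
    return int(stage2.predict(full)[0]), "costly"

results = [classify(i) for i in range(3000, 4000)]
frac = sum(1 for _, s in results if s == "costly") / len(results)
print("fraction of test images needing the costly features:", frac)
```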
Miscellaneous
Stegatone performance characterization
Embedding data in hard copy is in widespread use for applications that include pointing the reader to on-line content by means of a URL, tracing the source of a document, labeling, and packaging. Most solutions involve placing overt marks on the page; the most common are 1D, 2D, and 3D (color) barcodes. However, while barcodes are a popular means for encoding information on printed matter, they add unsightly overt content. Stegatones avoid such overt content: they are clustered-dot halftones that encode a data payload through single-pixel shifts of selected dot clusters. In a Stegatone, we can embed information in images or graphics – not in the image file, as is done in traditional watermarking, but in the halftone on the printed page. However, the recovery performance of Stegatones is not well understood across a wide variety of printing technologies, models, and resolutions, along with variations of scanning resolution. It would thus be very useful to have a tool to quantify Stegatone performance under these variables; the results could then be used to better calibrate the encoding system. We develop and conduct a test procedure to characterize Stegatone performance. The experimental results characterize Stegatone performance for a number of printers, scanners, and resolutions.
Image tampering localization via estimating the non-aligned double JPEG compression
Lanying Wu, Xiangwei Kong, Bo Wang, et al.
In this paper, we present an efficient method to locate the forged parts of a tampered JPEG image. In JPEG image forgeries, the forged region usually undergoes a different JPEG compression than the background region. When a region cropped from one JPEG image is pasted into a host JPEG image and the result is resaved in JPEG format, the JPEG block grid of the tampered region often mismatches the block grid of the host image by a certain shift. This phenomenon is called non-aligned double JPEG compression (NA-DJPEG). In this paper, we identify different JPEG compression forms by estimating the shift of the NA-DJPEG compression. Our shift estimation approach is based on the percentage of nonzero JPEG coefficients in the different situations. Compared to previous work, our tampering localization method (i) performs better when dealing with small image sizes, (ii) is robust to common tampering processing such as resizing, rotating, and blurring, and (iii) does not need an image dataset to train a machine-learning-based classifier or to determine a proper threshold.
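The grid-shift estimation can be sketched as follows: for each of the 64 candidate (dx, dy) offsets, compute 8x8 block DCTs of the decompressed luminance and measure the fraction of non-near-zero AC coefficients; an offset aligned with a previous JPEG compression produces noticeably more (near-)zero coefficients. The threshold and decision rule below are illustrative, not the paper's exact statistic, and the input file is hypothetical.

```python
import numpy as np
from scipy.fft import dctn

def nonzero_fraction(lum, dx, dy, eps=0.5):
    h, w = lum.shape
    crop = lum[dy:dy + ((h - dy) // 8) * 8, dx:dx + ((w - dx) // 8) * 8]
    blocks = crop.reshape(crop.shape[0] // 8, 8, crop.shape[1] // 8, 8)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, 8, 8) - 128.0
    coeffs = dctn(blocks, axes=(1, 2), norm="ortho")
    ac = coeffs.reshape(len(coeffs), -1)[:, 1:]          # drop the DC coefficient
    return float(np.mean(np.abs(ac) > eps))

def estimate_grid_shift(lum):
    scores = {(dx, dy): nonzero_fraction(lum, dx, dy)
              for dx in range(8) for dy in range(8)}
    return min(scores, key=scores.get)                   # offset with fewest nonzeros

# Example (hypothetical file):
# from PIL import Image
# lum = np.asarray(Image.open("questioned.jpg").convert("L"), dtype=float)
# print(estimate_grid_shift(lum))
```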
Interactive Paper Session
A histogram shifting based RDH scheme for H.264/AVC with controllable drift
This paper presents an efficient method for high-payload reversible data hiding in H.264/AVC intra bitstreams with minimal drift, which is controllable and proportional to the payload. In contrast to previously presented open-loop reversible data hiding techniques for H.264/AVC bitstreams, which mainly perform drift compensation, we propose a new reversible data hiding technique for H.264/AVC intra bitstreams which avoids drift. The major design goals of this novel H.264/AVC reversible data hiding algorithm have been runtime efficiency and high perceptual quality with a minimal effect on bit-rate. The data is efficiently embedded in the compressed domain in an open-loop fashion, i.e., all prediction results are reused. Nevertheless, intra-drift is avoided, as only specific solution patterns are added, which are solutions of a system of linear equations that guarantee the preservation of the block’s edge pixel values.
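As background, the histogram-shifting primitive the title refers to can be illustrated on a plain integer array; the sketch below follows the classic peak/zero-bin recipe and does not model the H.264/AVC coefficient domain or the drift control described in the abstract.

```python
import numpy as np

def hs_embed(values, bits):
    """Classic histogram-shifting embed using the peak bin and an (ideally)
    empty bin to its right; returns the marked array and the side info."""
    hist = np.bincount(values, minlength=values.max() + 2)
    peak = int(hist.argmax())
    zero = peak + 1 + int(np.argmin(hist[peak + 1:]))   # emptiest bin above the peak
    out = values.copy()
    out[(values > peak) & (values < zero)] += 1         # shift to free bin peak+1
    it = iter(bits)
    for i in np.where(values == peak)[0]:
        out[i] = peak + next(it, 0)                     # bit 0 -> peak, bit 1 -> peak+1
    return out, peak, zero

rng = np.random.default_rng(9)
vals = rng.integers(100, 140, 1000)
bits = list(rng.integers(0, 2, 20))
marked, peak, zero = hs_embed(vals, bits)
print(peak, zero, int(np.abs(marked - vals).max()))     # every change is at most 1
```

Extraction reads bits back from the peak and peak+1 bins and shifts the intermediate bins down again, which restores the original array exactly when the chosen zero bin is truly empty; that is what makes the scheme reversible.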