
SPIE Press Book

HDR Scene Capture and Appearance
Author(s): John J. McCann; Vassilios Vonikakis; Alessandro Rizzi

Book Description

High dynamic range (HDR) capture and display systems have proven capable of dramatically improving images. The story of HDR spans Renaissance paintings and early silver-halide photography as well as modern sensor research, camera design, image processing, display technology, and human-vision research. This Spotlight provides a gateway to understanding HDR imaging. Topics include how painters and photographers succeeded before electronic imaging; how optical glare transforms scene radiances; and how sensors, signal processing, and human spatial image processing generate sensations.

Book Details

Date Published: 31 December 2017
Pages: 92
ISBN: 9781510618541
Volume: SL35

Table of Contents

1 Introduction
1.1 Goals
     1.1.1 Paint an image using visual feedback
     1.1.2 Calculate appearances from camera data: Write sensations on LDR media
     1.1.3 Reproduce radiances
1.2 Different ground truths for different goals
     1.2.1 Paint an image using Visual Inspection
     1.2.2 Calculate appearances from camera data: Write sensations on LDR media
     1.2.3 Capturing and accurately reproducing scene radiances
1.3 Summary

Part I: The Physics of Scene Capture

2 Multiple Exposures
2.1 Camera response
2.2 Thought experiment: Multiple Exposures in an idealized world
2.3 HDR photography: Multiple Exposures
2.4 Summary

3 Camera Reciprocity and Linearity
3.1 Reciprocity: the foundation of Multiple-Exposure photography
3.2 Linearity: the foundation of computational imaging
3.3 Summary

4 Optical Veiling Glare
4.1 Measurements of range limits from camera optics
4.2 Properties of optical veiling glare
4.3 Summary

5 Faith in a Camera's Digital Values
5.1 Standard photograph versus LibRAW data extraction
5.2 Minimal range target
5.3 Multiple exposures of minimal range target
5.4 Calculate reflectances from linear RAW digits
5.5 Plots of the CRF
5.6 Summary

6 Multiple Exposures: A New Application for an Old Trick
6.1 CRF from RAW data
6.2 Two CRF techniques: single versus Multiple Exposures
6.3 Algorithm to calculate the inverse CRF
6.4 Summary

7 Camera Limits in LDR Single Exposures
7.1 Beach scene
7.2 Uniform illumination in the laboratory
7.3 Summary

8 Measure the Effects of Glare in Dark Image Segments
8.1 Adding glare to the grayscale
8.2 Adding glare to colors
     8.2.1 Chroma enhancement
     8.2.2 Chromaticity and glare
8.3 Summary

9 Need for a Paradigm: "The Path Not Taken"
9.1 Explaining the success of HDR
9.2 Computational HDR using Vision-based Models
9.3 Summary

Part II: The Psychophysics of Appearance

10 Vision-Based Models: The General Solution is Spatial Image Processing
10.1 Black-and-White Mondrian experiment
10.2 Color Mondrians and color constancy
10.3 Summary

11 LDR and HDR Color Constancy
11.1 Flat Color Mondrians in uniform illumination
11.2 3-D Color Mondrians
11.3 Departures from constancy
11.4 Summary

12 Surrounds, Averages, and Histograms
12.1 Edges, gradients, and illusions
12.2 Spatial relationships versus image statistics (histograms)
12.3 Summary

13 Appearance and Scene Maxima
13.1 HDR appearances
13.2 Luminance and scene content affect appearance

14 Retinal Contrast
14.1 Vos and van den Berg's glare spread function
14.2 Appearance versus retinal contrast
14.3 Efficient multi-resolution spatial comparisons
14.4 Summary

15 Review
15.1 Common thread of Vision-based Models


High-dynamic-range (HDR) imaging is a partnership of physics-based photography and physiological human vision. Both partners play essential but different image-processing roles.

Physics-based photography uses cameras that combine light sensing and optical imaging. Traditionally, photography used silver halide (AgX) grains in film as the sensor. Electronic silicon sensors have superseded film. Most cameras use a lens to image the field of radiances coming from a scene onto the sensor. In film, the AgX grains are developed, and in digital photography, the quanta catch in each silicon well undergoes electronic signal processing. In conventional photography, all pixels have the same response to light. Thus, two different image segments that generate identical quanta catches must have identical output responses.
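To make the distinction concrete, here is a minimal sketch (in Python, not from the book) of a conventional camera's fixed, global response: every pixel passes through the same tone curve, so two pixels with identical quanta catches always receive identical digital values. The function name, gamma, and full-well count are illustrative assumptions.

```python
# Minimal sketch of a conventional camera's global response (illustrative
# values): every pixel is mapped through the same tone curve, so identical
# quanta catches always yield identical digital values.
import numpy as np

def camera_response(quanta, gamma=1/2.2, full_well=10000.0):
    """Hypothetical per-pixel response: normalize, clip, apply a tone curve."""
    normalized = np.clip(quanta / full_well, 0.0, 1.0)
    return np.round(255 * normalized ** gamma).astype(np.uint8)

quanta = np.array([[500.0, 500.0],       # two image segments with identical
                   [8000.0, 8000.0]])    # quanta catches in each row...
print(camera_response(quanta))           # ...always get identical outputs
```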

Human vision has a fundamentally different response to light. It has three stages: eye optics that form an image on the retina, light receptors that transform quanta catches into neural responses, and post-quanta-catch neural processing. A unique set of quanta catches at a point can appear as any color: white, black, red, green, blue, etc. Human vision uses spatial image processing. The appearance of a point is a joint function of the quanta catch at that point and the quanta catches from the rest of the scene. Appearance is the result of comparisons of quanta catches across the scene, rather than the response to a single point.
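A toy spatial comparison illustrates the contrast: the same quanta catch is assigned a different value depending on the rest of the scene. Scaling each pixel by the scene maximum is only one of many possible spatial operations, used here purely for illustration; the book's vision-based models are far more elaborate.

```python
# Toy spatial image processing: a pixel's "appearance" depends on its quanta
# catch relative to the rest of the scene (here, its ratio to the scene
# maximum), unlike the fixed per-pixel camera response sketched above.
import numpy as np

def relative_lightness(quanta):
    """Toy spatial comparison: each pixel scaled by the scene maximum."""
    return quanta / quanta.max()

dim_scene    = np.array([500.0, 400.0, 300.0])     # 500 is the brightest patch
bright_scene = np.array([500.0, 4000.0, 8000.0])   # 500 is the darkest patch
print(relative_lightness(dim_scene)[0])     # 1.0    -> appears near white
print(relative_lightness(bright_scene)[0])  # 0.0625 -> appears dark
```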

In practice, we find that HDR imaging favors one or the other of these approaches. Many HDR implementations use physics-based reproduction that attempts to capture accurate scene-radiance information in order to reproduce it. Its ground truth requires a physical metric, namely, light-meter readings. Can the physics-based-HDR approach capture radiance accurately?

Other implementations use spatial image processing designed to mimic human vision. It calculates appearance for display on standard media. The ground truth for spatial image processing uses psychophysical metrics. Do the scene and the display have the same appearance rather than the same radiance? Can a computer-algorithm approach replace the fine-art painters who rendered HDR scenes using low-dynamic-range media?

The choice between these approaches brings to mind Robert Frost's 1916 poem "The Road Not Taken":

"TWO roads diverged in a yellow wood,
And sorry I could not travel both
And be one traveler, long I stood
And looked down one as far as I could
To where it bent in the undergrowth;

Then took the other, as just as fair,
And having perhaps the better claim,
Because it was grassy and wanted wear;
Though as for that the passing there
Had worn them really about the same,

And both that morning equally lay
In leaves no step had trodden black.
Oh, I kept the first for another day!
Yet knowing how way leads on to way,
I doubted if I should ever come back.

I shall be telling this with a sigh
Somewhere ages and ages hence:
Two roads diverged in a wood, and I -
I took the one less traveled by,
And that has made all the difference."

Which path should HDR imaging follow?

The HDR imaging chain has the following steps, along with their primary functions (a simple simulation of the capture steps follows the list):

  • Physics-based photography - capture of scene information:
         Radiance field - interaction of light and matter,
         Camera optics - optical veiling glare,
         Sensor response - quanta catches,
         Signal amplification - development in AgX and signal processing in digital, and
         Image storage and display.
  • Human vision:
         Radiance field - light from reproduction media,
         Eye optics - intraocular veiling glare,
         Sensors' responses - quanta catches,
         Neural image processing - visual pathway,
         Appearance, and
         Image-quality evaluation.
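The sketch below (assumed parameters, not the book's measurements) simulates the capture side of this chain: scene radiances pass through optics that scatter a small fraction of the total light onto every pixel (veiling glare) before reaching a sensor with a finite full well. In this toy model, a 1% glare fraction collapses a 100,000:1 scene range to roughly 300:1 at the sensor.

```python
# Toy capture chain: radiance field -> optical veiling glare -> sensor clipping.
# The glare fraction and scene values are assumptions chosen for illustration.
import numpy as np

def capture(radiance, glare_fraction=0.01, full_well=1.0):
    veiling = glare_fraction * radiance.mean()         # uniform glare veil
    at_sensor = (1 - glare_fraction) * radiance + veiling
    return np.clip(at_sensor, 0.0, full_well)          # sensor saturation

scene = np.array([1.0, 1e-3, 1e-5])                    # 100,000:1 scene range
captured = capture(scene)
print(scene.max() / scene.min())                       # 100000.0
print(captured.max() / captured.min())                 # a few hundred to one
```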

To understand HDR imaging, we have to understand all of the steps in its imaging pipeline and their individual limits. We have to look for hidden assumptions about cameras and about human vision. Each step has its own response to the information from the previous step. When we do this, we find that cameras have limits, such as optical glare, reciprocity failure, and nonlinearity, that control scene-capture accuracy. Although visual optics introduces more veiling glare than camera lenses do, human vision delivers far superior HDR performance. How can that be?
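Reciprocity and linearity are examples of such hidden assumptions: they underlie Multiple-Exposure merging, in which an unclipped digital value divided by its exposure time is taken as a relative radiance. The sketch below assumes an idealized linear sensor with made-up radiances and exposure times; real cameras depart from it through glare, reciprocity failure, and nonlinearity.

```python
# Multiple-Exposure merge under the reciprocity + linearity assumptions.
# Radiances and exposure times are illustrative, not measured data.
import numpy as np

def expose(radiance, t, full_well=1.0):
    return np.clip(radiance * t, 0.0, full_well)       # idealized linear sensor

scene = np.array([2.0, 0.05, 0.001])                   # relative scene radiances
times = [0.25, 4.0, 64.0]                              # bracketed exposure times

estimates = []
for t in times:
    digit = expose(scene, t)
    valid = digit < 1.0                                # discard saturated pixels
    estimates.append(np.where(valid, digit / t, np.nan))  # reciprocity: L = d/t

hdr = np.nanmean(np.vstack(estimates), axis=0)         # merge the exposures
print(hdr)                                             # ~[2.0, 0.05, 0.001]
```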

This Spotlight book travels down both paths: the physics-based scene capture and display, and the psychophysical calculation of appearance. It attempts to review the parts of the HDR imaging chain and provide useful techniques for measuring the accuracy limits along the chain. These limits help us to understand how HDR works.

The best answer is to use both paths. Camera capture is the first stage of HDR. The best practices from physics-based capture are needed as input to a second stage that mimics vision. Psychophysics-based HDR uses human vision's strategy to compensate for the limitations caused by glare in the first stage.

The considerable success of HDR photography is based on the fact that cameras capture inaccurate scene radiances that are manipulated to make better spatial renditions of scene information. The most successful renditions anticipate the human visual system. That rendition on a display is subsequently transformed by human vision. Both physics-based and human neural processing have major roles in making beautiful HDR reproductions.

The book contains the following components:

  • An introduction with the history and definitions of HDR imaging.
  • Part I: The Physics of Scene Capture describes a series of camera calibration measurements that determine the limits of a camera's dynamic range using multiple and single exposures, i.e., the physics path.
  • Part II: The Psychophysics of Appearance describes a series of matching experiments that demonstrate the relationship of scene radiance and appearance, i.e., the psychophysics path.
  • A review that argues that both paths are needed for a general computational solution of HDR imaging.
  • An appendix that includes references, recommended reading, and the definition of key terms.

This Spotlight uses a number of technical definitions that are easily confused with common usage. We use Multiple Exposures to mean a specific set of digital image-processing algorithms. In common usage, "multiple exposure" means the practice of making slight exposure adjustments to produce the best-looking picture. The glossary at the end of the book defines terms in radiometry, psychophysics, optical imaging, image processing, and digital imaging.

John McCann
Vassilios G. Vonikakis
Alessandro Rizzi
December 2017
