Create a high dynamic-range image from multiple exposures of a static scene. The input files may be JPEG or TIFF, but must be 24-bit RGB (trichromatic) images. The output may be a Radiance HDR picture, a 32-bit LogLuv TIFF, an OpenEXR image, or a JPEG-HDR file, selected by the suffix of the output file name (see the -o option). The syntax of the hdrgen command is:
hdrgen -o out_file [-q quality] [-r cam.rsp] [-m cachesiz] [-a] [-e] [-f] [-s stonits1] image1 [-s stonits2] image2 ...
As many exposures may be given as needed, and they should ideally be spaced within two f-stops of each other. The brightest exposure should have no black pixels, and the darkest exposure should have no white pixels, but there is little point in extending beyond these limits, and doing so may cause problems in determining the camera response function. The order of options and input files is unimportant, with the exception of the -s option, which must precede the corresponding exposure. Following is an explanation of the options and their meanings:
-o out_file
Write the high dynamic-range image to the given file. If the file has a '.tif' suffix, it will be written out as a LogLuv TIFF image. If it has an '.exr' suffix, it will be written out as an ILM OpenEXR image. If it has a '.jpg' suffix, it will be written out in JPEG-HDR format. If it has any other suffix, or none at all, it will be written out as an RLE RGBE Radiance picture.
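For example, the following hypothetical command writes an OpenEXR image because of the '.exr' suffix:
hdrgen -o scene.exr exp1.jpg exp2.jpg exp3.jpg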
-q quality
Set the output quality to quality (0-100). This affects the JPEG output compression, and potentially the details of the other formats written as well. (For example, writing out a TIFF with -q 100 results in a 96-bit/pixel IEEE floating-point file rather than a LogLuv encoding.)
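As a sketch with hypothetical file names, the following writes a 96-bit/pixel floating-point TIFF rather than a LogLuv-encoded one:
hdrgen -q 100 -o scene_float.tif exp1.jpg exp2.jpg exp3.jpg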
-r cam.rsp
Use the given file for the camera's response curves. If this file exists, it must contain the coefficients of three polynomials, one for each color primary. If the file does not exist, hdrgen will use its principal algorithm to derive these coefficients and write them out to this file for later use. If a scene contains no low-frequency content or gradations of intensity, it may be impossible to derive the response curve from the exposure sequence. It is therefore better to create this information once for a given camera and reuse it for other sequences.
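For example, one might derive the response curves once from a well-graded sequence and then reuse them for a more difficult one (file names are hypothetical):
hdrgen -r mycamera.rsp -o calib.hdr calib1.jpg calib2.jpg calib3.jpg
hdrgen -r mycamera.rsp -o scene.hdr dark.jpg medium.jpg bright.jpg
The first run creates mycamera.rsp if it does not already exist; the second run simply reads it.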
-m cachesiz
Specify the cache size to use in megabytes. No more than this much memory will be allocated to hold image data during processing. The default value is 100. Using a smaller value may require longer processing if many input images are used, since some will need to be read in twice rather than once, but specifying a larger value than there is memory available will definitely be worse, due to virtual memory swapping.
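For example, to limit the image cache to 50 megabytes on a memory-constrained machine (hypothetical file names):
hdrgen -m 50 -o scene.hdr exp1.jpg exp2.jpg exp3.jpg exp4.jpg exp5.jpg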
-a Toggle automatic exposure alignment. The default is "on," so giving this option once switches it off. The alignment algorithm examines neighboring exposures and finds the pixel offset in x and y that minimizes the difference between the two images. It may be necessary to switch this option off when dealing with very dark or very bright exposures taken in a tripod-stabilized sequence.
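For example, to disable alignment for a tripod-stabilized sequence that includes very dark frames (hypothetical file names):
hdrgen -a -o night.hdr night1.jpg night2.jpg night3.jpg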
-e Toggle exposure adjustment. Normally "on," exposure adjustment fine-tunes the scale difference between adjacent images to account for slight inaccuracies in the aperture or speed settings of the camera.
-f Toggle lens flare removal. Normally "off," this option is designed to reduce the scattered light from a camera's lens and aperture, which results in a slightly fogged appearance in high dynamic-range images.
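Since flare removal is off by default, giving -f once enables it, e.g. (hypothetical file names):
hdrgen -f -o interior.hdr exp1.jpg exp2.jpg exp3.jpg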
-s stonits
Set the sample-to-nits (cd/m2) conversion factor for the following image to the floating-point value stonits. This is normally determined automatically by the program from camera information stored in the Exif image header. If the image did not come directly from a digital camera, it will be necessary to use this option for each image. If the absolute conversion is unknown, simply pick a value for the brightest image and increase it for each subsequent exposure in the sequence. One f-stop requires doubling this conversion factor, and two f-stops require quadrupling it.
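For example, if the absolute calibration is unknown and the exposures are one f-stop apart, one might assign (hypothetical file names and values):
hdrgen -o scene.hdr -s 1 brightest.jpg -s 2 middle.jpg -s 4 darkest.jpg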
The primary failure mode for this algorithm is the one mentioned in the description of the -r option: the exposures contain too little information to solve for the camera response function. The best solution to this problem is to leave out the exposures that are very light and very dark, or to use a different sequence of images to generate a response file. This file may then be used to combine the entire set of images, since the program no longer needs to solve for the response functions.
Most of the other diagnostics you will encounter are "warnings," which means that the final image will be written, but may have problems. In particular, when the alignment algorithm fails on a hand-held sequence, some ghosting may be visible on high-contrast edges in the output. Using the -a option to turn off automatic alignment will eliminate the warning, but unless the sequence was taken on a very stable tripod, the results will usually be worse rather than better.
To combine all JPEG images matching a given wildcard and write the result to a LogLuv TIFF:
hdrgen P13351?.JPG -o testimg.tif
This software was written by Greg Ward of Exponent Corporation. Send comments or questions to gward@exponent.com or gward@lmi.net.
Tomoo Mitsunaga and Shree Nayar, "Radiometric Self-Calibration," Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, June 1999.
Greg Ward, "LogLuv Encoding for Full-Gamut, High-Dynamic-Range Images," Journal of Graphics Tools, 3(1):15-31, 1998.
Greg Ward, High Dynamic Range Images, web page.
Paul Debevec, web page.