

April 19, 2007

Vancouver HDR


Aquabus, originally uploaded by Stuck in Customs.

 

My friend Macon sent me a link to a remarkable Flickr portfolio.

The photographer, Stuck In Customs, is doing really interesting things with high dynamic range (HDR) imaging. HDR software combines multiple exposures of the same scene to create images that have more gradations in tone between light and dark.
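For the curious, the merge step is simple enough to sketch in a few lines of Python with OpenCV; the filenames and exposure times below are hypothetical stand-ins for three bracketed shots, and real HDR tools add image alignment and fancier tone mapping.

    # A minimal HDR-merge sketch: three bracketed exposures are combined
    # into one floating-point radiance map, then tone-mapped back to a
    # displayable 8-bit image. Filenames and exposure times are made up.
    import cv2
    import numpy as np

    files = ["underexposed.jpg", "normal.jpg", "overexposed.jpg"]
    times = np.array([1/1000, 1/250, 1/60], dtype=np.float32)
    images = [cv2.imread(f) for f in files]

    merge = cv2.createMergeDebevec()
    hdr = merge.process(images, times=times)  # radiance map, float32

    tonemap = cv2.createTonemapReinhard(gamma=2.2)
    ldr = np.clip(tonemap.process(hdr) * 255, 0, 255).astype(np.uint8)
    cv2.imwrite("merged_hdr.jpg", ldr)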

Today's FlickrFind is a great example of what HDR can do aesthetically.

Digital pictures tend to have a restricted dynamic range compared to film. HDR is one way to compensate for the limited dynamic range of digital sensors.


Comments

I don't know. It just looks to me like an illustration from a cyberpunk novel.

Ask "Blogging Heads TV" to invite Lindsay Beyerstein to be interviewed in an upcoming episode:
feedback@bloggingheads.tv

Even film has a limited DR, though negative film approaches its limits more subtly than does positive film. The problem I have with many HDR photos is that they look so plastic and fake. Impressive, yes, but unreal and highly processed. The goal should be not just to squeeze in more info between pure white and pure black, but to somehow capture difficult-to-expose scenes in a way that overcomes a sensor's limited DR and manages to convey the image as humans see it. In other words, a great HDR image should look _more_ real to us, not less.

Can it show you what makes John Edwards' $400 haircut so special? So special that he had two of them?

http://www.slate.com/id/2164520?nav=ais

"HDR is one way to compensate for the limited dynamic range of digital sensors."

It's true, but it's not like HDR makes things more lifelike. It's very pretty stuff, but it's stylized.

Thanks, Eric.

Cakesniffer, I think this image does capture details that are difficult to expose. What makes the picture interesting to me is the detail in the clouds, the reflections on the water, and the texture of the gray buildings.

So many landscape shots of Vancouver look flat because all the detail is in subtle variants of gray and green. Here's an example. I actually like the flattening effect in this image; it makes the scene look like a Japanese print--but it's a less-than-faithful representation of the scene as it appeared to me at the time.

The HDR picture doesn't look naturalistic, but it captures something about that particular view that doesn't necessarily come across in conventional photos.

There are days when the Vancouver skyline has that kind of unreal vibrancy--usually when the sun is breaking through the clouds after a storm.

"Digital pictures tend to have a restricted dynamic range compared to film."

This would be solved if the camera makers would upgrade to 16 bit color instead of 8 bit color, but they are being very slow to upgrade. Photoshop has had 16 bit support since CS 2. (There was some limited 16 bit support in CS 1, but it's been full-featured since CS 2.) Most good scanners offer at least 16 bit color; some even offer 48 bit color. But the cameras are way behind. As far as I know, they all offer 8 bit color, even the high-end professional ones.

I imagine at some point in the next 5 years 16 bit cameras will become common. At least, I hope so. At 16 bits, cameras would come much closer to matching the tonal range of film.
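(The arithmetic behind those bit depths is easy to check; these are ideal per-channel level counts, not real-world sensor performance.)

    # Tonal levels per channel at various bit depths. Ideal counts only;
    # a real sensor's usable range is limited by noise, not just
    # quantization.
    for bits in (8, 12, 14, 16):
        print(f"{bits}-bit: {2 ** bits:,} levels per channel")
    # 8-bit: 256 levels per channel
    # 12-bit: 4,096 levels per channel
    # 14-bit: 16,384 levels per channel
    # 16-bit: 65,536 levels per channel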

I agree with commentators who regard these images as unnatural, plastic, unreal. Over-saturated colors contribute to the "plastic" look. I have also seen HDR photos with a much more natural appearance, i.e., rich highlight and shadow details that rival an Ansel Adams print. One reason HDR prints look artificial is that they are overprocessed in Photoshop during the final steps. HDR works best, not necessarily with wide-gamut subjects, but with a more restricted range of tonalities. That may seem contradictory at first, but consider Adams' technique for expanding the range of middle tones to the extremes in the Zone System. Of course, you may not necessarily need HDR for that; the careful designation of black and white point settings will suffice.
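(That black point / white point designation is just a linear remap. A minimal NumPy sketch, with 1st/99th-percentile defaults chosen purely for illustration:)

    # Set black and white points on an 8-bit image: values at or below
    # the black point go to 0, values at or above the white point go to
    # 255, with a linear stretch in between.
    import numpy as np

    def set_points(img, black=None, white=None):
        img = img.astype(np.float32)
        black = np.percentile(img, 1) if black is None else black
        white = np.percentile(img, 99) if white is None else white
        out = (img - black) / max(white - black, 1e-6) * 255.0
        return np.clip(out, 0, 255).astype(np.uint8)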

I am not inclined to complain that it does not seem as "real" as what I have grown accustomed to in digital photography... it's not exactly a photograph, but it is art.

swampcracker writes:
"...that rival an Ansel Adams print."

Doyle:
Ansel Adams stood in the way of color photography in his time, and his technical literature, which I've read and am familiar with, is not very insightful about pictures outside of his Zone System.

HDR strikes me as an interesting technique that needs application to political subjects. It's expressive in ways ordinary photography is not. And therefore lends itself to creating a vocabulary of expression that might have powerful portrait possibilities. A lot of the work in HDR looks fussy to me right now. Too tied to looks to have depth of content yet. And I think the concept of trying to fill in areas of relatively narrow contrast 'zones' implies we go further into the meaning of contrast as well.

What I know is that dark areas are seen somewhat differently than light areas by human eyes in most conventional images. So one needs to be sensitive to image contrast in different ways down in the shadows. A lot of the work I see that looks like HDR has clouds in it, or some surface with a lot of contrast variation. People seem to pick out old cars for some reason as a favorite. However, the few knowledgeable portraits I've seen (ones that aren't naive about what photography does) have vastly expanded the commentary about the face and hair. There I saw great potential for expression.

I wouldn't say ignore Adams, but he was a mediocre portraitist of people, so I feel he was burdened by cliche in his technology. People demand a lot to really see human beings in a non-dull way.
Doyle

All media have limits to range (though things like oil paint are so under the control of the artist that those limits don't really intrude; watercolor, however, is more obviously limited... pure black is almost impossible to get, and the white value is determined by the paper, which affects the rest of the palette, but I digress).

Film stocks vary in their limitations.

Black and white films have more range than color negative films, which have more range than color positive films.

They are all further limited by the narrower range of the printing medium, and the dynamic ranges of papers vary as well.

Where film really shines, when compared to digital, is gradation: the shift of values inside the available ranges (in both the film stock and the paper stock). Digital tends to have much sharper contrast breaks, which can make things look strange.

By way of an example of this, see Hummingbird (http://pics.livejournal.com/pecunium/pic/0009pk31), which I don't care for as much as I might because the bird looks almost as if it were pasted in; the contrast between the background and the breast is so sharply defined.

Compare it to Kelp Bubble (http://pics.livejournal.com/pecunium/pic/00038c7p/g5), shot on slide film, which (if you look at the left side of the flotation sack) has the same issue of a sharp edge in focus against an out-of-focus background, yet doesn't suffer so much from the edge effect. [This and the next are larger images; if you click on them you will get them full sized. At that size they are too large to appreciate (I was less aware of sizing on the computer then), but they are useful for examining detail.]

There are some quirks of experience. Shooters like myself, who grew into photography when slide film (with a range of about five stops) was the professional medium of choice, and when lots of pros underexposed by 1/3 to 1/2 stop to increase saturation, then come to digital, which has somewhat more range (about 7 stops), and the range looks wrong, because detail is being held in places it doesn't really belong.

B&W has a range of 9 stops (look at this self-portrait: http://pics.livejournal.com/pecunium/pic/0003yhk5/g12, which has a little bit of blow-out in the bookcase to the back left, and starts to block up on the table, but retains detail without going black, even with brown on brown where the hat sits on the table).

I like the images using HDR, but they don't, to my mind, look more "real" than a film image. I happen to think the range/saturation of slide film is the better approximation, because the human eye can't apprehend all that range at once, while a two-dimensional image presents it all at once, which my brain knows is false.

Such images also have a higher saturation than the eye will take in; what is in shadow is more pastel than what is in light, and what is in hard light will be washed out.

Then again, all images are manipulated. It's up to the artist to decide what level of obvious manipulation is going to be apparent to the audience.

I think digital now matches the DR of most kinds of film. Many cameras have 12-bit sensors and can reach 8 stops of DR.

Danny Yee: The real limit to most films is the paper.

Dynamic Range of Films (http://photo.net/bboard/q-and-a-fetch-msg?msg_id=009gas) points out that modern films have ranges of as much as 20 stops, but the means of transferring that to an image (other than the actual negative/positive itself) is limited to about 9 stops for low/moderate-contrast papers.

Higher contrast films have fewer stops, but even the published data from Kodak is 13 stops of effective range for the most restricted films they have.

So it's going to be a while before the sensors can duplicate that, and the edge effects/transitional behavior are still going to be different (just as calotype and daguerreotype are different from ortho and panchromatic films, and the acutance from modern chemistry/bases differs in response curves and diffusion effects).

I use a 12 bit sensor camera for my digital work, and even with 16 bit image manipulators (such as LightZone) the range is problematic, mostly because the shadows have so little information (which is why it's better to overexpose a digital image, risking the highlights, where the ideal with film is to hold the shadows. With a good editing program a lot of the "lost" data can be recovered, because the highlights have 1,024 levels to work with, whereas a blocked-up shadow has only 64. With less room to hold information it's harder to interpolate it out).
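(Those level counts follow from the linear encoding of a 12-bit raw file; assuming an ideal linear sensor, each stop down from clipping halves the values available.)

    # In a linear 12-bit raw file (4,096 values) the top stop holds half
    # of all values, and each stop below holds half again; that is why
    # deep shadows carry so little information.
    levels = 4096
    for stop in range(1, 7):
        levels //= 2
        print(f"{stop} stop(s) below clipping: {levels} levels")
    # 1 stop(s) below clipping: 2048 levels
    # ...
    # 6 stop(s) below clipping: 64 levels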

"Many cameras have 12-bit sensors and can reach 8 stops of DR."

Which cameras have 12 bit sensors?

It's not actually the sensors (whose photosites are actually single-channel receptors, in R, G, or B, unless you are using a Sigma with a Foveon chip, which has layers, so that each pixel is really three pixels, sort of like color film stocks, which are actually differentially sensitised B&W films that replace the silver salts with dyes in processing, but I digress).

What you have are working spaces, in which the analysis takes place. Higher-end cameras use a 12-bit workspace to do the mosaic/demosaic math which renders the single-channel pixels into the working colors.

Pretty much any camera working in a .RAW format has a 12 bit workspace.

One of the things digital cameras have done is take the generic function of the body (a box to keep light out until the shutter lets it in, via the lens) and make it the film stock as well.

If I am using my FE2, my F3, my Hasselblad, or my Sinar 4x5, I can choose the color space/contrast values I want to work with. I do that by switching film stocks: for really saturated greens, Fuji Velvia 50. If I want less green, and less grain, I can use the 100 (which is, oddly enough, smaller-grained than the 50, even though it's twice as fast). If I want higher contrast, with emphasis on reds and oranges, Kodachrome 200. For European skin tones, Portra; for Asian, Provia.

But all that's out the window with digital. The sensor, its low-pass and color filters, and the math controlling the result are what determine my baselines. Everything else I have to do myself. Where I spend money to have my printer make adjustments to my color film (I can do it myself for black and white), I have to spend time to do it with digital images.

And converting digital to decent B&W is a lot of time (because the humor of it is that B&W film is a digital medium, while B&W digital is an analog medium; and the analogs, as they come, are less than satisfactory and need some tweaking).
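(A taste of that tweaking: the usual starting move is a weighted channel mix, approximating a colored filter over B&W film. The weights below are illustrative, a rough red-filter look, not a calibrated emulation.)

    # Convert an 8-bit RGB image to B&W with a weighted channel mix;
    # heavy red weighting mimics shooting B&W film through a red filter
    # (darker skies, lighter skin tones).
    import numpy as np

    def to_bw(rgb, weights=(0.7, 0.2, 0.1)):
        w = np.asarray(weights, dtype=np.float32)
        gray = rgb[..., :3].astype(np.float32) @ (w / w.sum())
        return np.clip(gray, 0, 255).astype(np.uint8)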

Question: do these HDR pictures actually look "more unreal" than "normal" pictures, or is it just that we're used to the unreality of "normal" pictures, and so overlook it? And how would you tell the difference?

pecunium writes:
"The real limit to most films is the paper."

Doyle:
I would say it would be the work you want out of the image. Printing to paper puts a burden on an image that is going to erase other uses. A journalist like Lindsay isn't going to wander around looking for well-defined, fill-the-bucket dynamic range in a news setting. Sloppy ole Dorothea Lange could shoot great portraits on the road while Ansel Adams couldn't.

The advantage of digital is not about printing on paper. It has to do with the mobility of the medium, its use out in reality. In many ways low-quality imagery from a cell phone gets the job done. No one aesthetically inclined can push their creativity by studying Kodak data sheets. However smart Kodak's film balances were at giving a lifelike look to images, they are very far away from what the eye really does. And that close attachment to printing imposes an ideal of one-to-many content. Computed imagery can do that too, but computing does better if the content is treated as interactive. Networked information carries content that film grain cannot capture, and printing a good picture ignores that limitation.
Doyle

O Vancouver... where did ye go?
Re: "There are days when the Vancouver skyline has that kind of unreal vibrancy--usually when the sun is breaking through the clouds after a storm."

Right- Seattle's like that, too... with a fat, gray background, and late afternoon sun angle, it has the presence of an old stereopticon slide- and the air has just been washed clean- AND you get the "bounce" of light off the water to brighten the foreground perceptibly (which really shows up better on movie film than still media, because of the "movement" in the "mirror"). Sidelight is sweet on the Wet Coast... ^..^

"It's not actually sensors

What matters, in the end, is what the physical sensor is doing. The sensors in high-end scanners can scan up to 48 bits of color. That allows those scans to capture nearly the whole dynamic range of a negative (negatives have much larger dynamic ranges than positives). Some day cameras will also have sensors that physically capture 48 bits' worth of range. But that day seems to be slow in coming.

Lawrence Krubner: I have to disagree. The sensors are never going to be 48-bit devices; the means of doing it (even with massive advances in the technology) aren't going to make turning a camera sensor into a scanner a practical method.

Each pixel (again, with the exception of the Foveon design) captures one wavelength. It also registers the intensity/brightness/saturation of that color, and only that color.

For good reason, the only colors the pixels are sensitized to record are Red, Green and Blue (the additive primaries).

Then the math takes over. That's where the 8/12/16/24/48-bit question comes into play. The camera's computer takes patches of the pixels (typically 5x5, for a total of 25 apiece) and assigns a color value. Then it shifts the start point, does a new overlay, and assigns a color value. It does this some number of times, and then interprets all the color values to give each pixel's final output value.

This is why, at one level, the number of megapixels a camera has isn't relevant. Each of those pixels needs a lens, each lens needs a coating, the coatings have reflective values, each lens has to be tuned to the wavelength it is supposed to focus, and the lenses scatter, which makes noise. The noise has to be dealt with, etc., etc. (once again, I'm digressing).

The camera, even now, with a 12-bit processor, can match the color range of the scanner. But we are (to date) used to film, and digital uses a different means of controlling color values: instead of being built around the mid-point, it's hinged on the white value, which affects its contrasts, the relative colors, the hue, etc.
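(To make the mosaic/demosaic step above concrete, here is a toy bilinear demosaic of an RGGB Bayer mosaic in NumPy. It only averages the known neighbors of each missing sample; real camera pipelines, as pecunium describes, use larger patches and edge-aware weighting.)

    # Toy demosaic: each photosite records one channel (RGGB layout);
    # the two missing channels at each site are filled by averaging
    # whichever of the 8 surrounding photosites recorded them.
    import numpy as np

    def demosaic_bilinear(raw):
        h, w = raw.shape
        rgb = np.zeros((h, w, 3), dtype=np.float32)
        known = np.zeros((h, w, 3), dtype=np.float32)
        rgb[0::2, 0::2, 0] = raw[0::2, 0::2]; known[0::2, 0::2, 0] = 1  # red
        rgb[0::2, 1::2, 1] = raw[0::2, 1::2]; known[0::2, 1::2, 1] = 1  # green
        rgb[1::2, 0::2, 1] = raw[1::2, 0::2]; known[1::2, 0::2, 1] = 1  # green
        rgb[1::2, 1::2, 2] = raw[1::2, 1::2]; known[1::2, 1::2, 2] = 1  # blue

        def neighbor_sum(a):
            # Sum of the 8 surrounding values, zero-padded at the edges.
            p = np.pad(a, 1)
            return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:] +
                    p[:-2, :-2] + p[:-2, 2:] + p[2:, :-2] + p[2:, 2:])

        for c in range(3):
            avg = neighbor_sum(rgb[:, :, c]) / np.maximum(neighbor_sum(known[:, :, c]), 1)
            rgb[:, :, c] = np.where(known[:, :, c] > 0, rgb[:, :, c], avg)
        return rgb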

The comments to this entry are closed.