I’d wanted to implement my own over-the-top HDR ever since.
What is HDR?
So for the uninitiated, what is HDR? In the real world, we humans experience a vast range of contrast. We can see very bright objects, such as the sun, as well as very dark ones, such as a moonlit scene. In computers and modern graphics, however, we're limited to a very, very small range. The brightest colour that can be expressed in a 32-bit ARGB texture is (255, 255, 255, 255), or pure white. And it isn't very bright. So we use a technique called HDR to alleviate this. When rendering the scene, we render it to a format with a very high dynamic range, for example a 64-bit texture. Most commonly we use a floating-point format such as A16B16G16R16F, which means a 16-bit (half-precision) float for each component. Each component can now store values up to about 65,000... much better than the 255 we were limited to with 32-bit ARGB.
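To make that concrete, here's a minimal sketch (not the engine's actual shader) of a pixel shader writing into such a target. The names and the intensity value are just placeholders.

```hlsl
// When the bound render target is a float format such as A16B16G16R16F,
// the pixel shader can output values well above 1.0 and they survive
// for later tonemapping. Names and values here are illustrative only.
sampler2D SceneTexture : register(s0);
float EmissiveIntensity;    // e.g. 50.0 for the sun, 0.2 for a moonlit cloud

float4 PS_WriteHDR(float2 uv : TEXCOORD0) : COLOR0
{
    float4 albedo = tex2D(SceneTexture, uv);
    // With an 8-bit target this would clamp to pure white;
    // with a half-float target the full brightness is preserved.
    return float4(albedo.rgb * EmissiveIntensity, albedo.a);
}
```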
That means that in our scene, very very bright objects will have colours greater than 255, while dark objects will be able to have precise values near 0 (thanks to floating-point). But now we've got a problem. Regular monitors and displays can't display such bright values: the hardware itself is still limited to 8-bit component RGB, with its annoying 255 maximum. So an operation called tonemapping is performed. Tonemapping is the process of converting the HDR image into a Low Dynamic Range image (LDR), so that the monitor can actually display it. This means shifting and squeezing the brightness of a scene so that it best fits within the display limits of the monitor. This is why, in HDR-rendered scenes, the brightness changes as you look at bright/dark objects.
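As a rough illustration of the idea, here's a hedged sketch of a simple linear tonemapping pass. It assumes the scene's average luminance has already been measured (for example by repeatedly downsampling a luminance texture); the variable names and the exact exposure formula are assumptions, not the shaders used in Clouds.

```hlsl
// Simple linear tonemapping sketch: scale the HDR scene by an exposure
// derived from its average luminance, then clamp into the displayable range.
sampler2D HDRScene : register(s0);
float AvgLuminance;    // average luminance of the current frame (assumed precomputed)
float MiddleGrey;      // the "key" the scene is exposed towards, e.g. 0.18

float4 PS_Tonemap(float2 uv : TEXCOORD0) : COLOR0
{
    float3 hdr = tex2D(HDRScene, uv).rgb;

    // Scale the scene so its average luminance lands on MiddleGrey,
    // then clamp into the 0..1 range an 8-bit back buffer can hold.
    float exposure = MiddleGrey / max(AvgLuminance, 0.0001);
    float3 ldr = saturate(hdr * exposure);
    return float4(ldr, 1.0);
}
```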
After tonemapping, you've got an LDR image which you can now display on the monitor. That's really all that HDR needs. But if you've ever used a camera, you may notice some things are missing from our HDR scene. In a camera (and in the human eye), we see that bright things tend to get lens flares, as well as blooming. So you'll often see these things implemented along with HDR (and, indeed, they seem to have become synonymous with the term). Bloom, star streaks and lens flares are all post-processing effects; they simulate the effect of light passing through a lens and can greatly improve the perceived quality of a scene. More information can be found in the resources I'll list later on.
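These effects usually start with a bright-pass filter that isolates the over-bright parts of the HDR scene before they're blurred and added back. Something along these lines, with the threshold and names purely illustrative:

```hlsl
// Rough bright-pass sketch: keep only the parts of the HDR scene above a
// threshold; the result is then blurred and recombined to produce bloom.
sampler2D HDRScene : register(s0);
float BrightThreshold;    // e.g. 1.0: anything brighter than pure white blooms

float4 PS_BrightPass(float2 uv : TEXCOORD0) : COLOR0
{
    float3 hdr = tex2D(HDRScene, uv).rgb;
    float3 bright = max(hdr - BrightThreshold, 0.0);
    return float4(bright, 1.0);
}
```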
HDR in Clouds
Clouds doesn’t use any dynamic lighting. All shading done on the clouds is static – cheap shader approximations because I didn’t want to implement anything overly complicated. Nevertheless, I made the decision to implement HDR because it provided an excellent real-world test of SolSector Engine’s shader system.
I started by reading up on the relevant material. As HDR had gained popularity in games over the past few years, there was no shortage of information. The first thing I took a look at was the famous implementation by Masaki Kawase, in the form of his demo dubbed “Rthdribl” (Real-Time High Dynamic Range Image Based Lighting). It’s available here. I used his demo as a reference for what HDR should look like, and set out to write my own shaders in HLSL from scratch.
The full HDR pipeline is pretty complex. I drew up an initial diagram for the pipeline: (warning: large)
But this was unoptimised and took up a horrific amount of memory:
After some reorganisation, I was able to significantly reduce the number of buffers to just a few, but it really convoluted the pipeline: (warning, very large)
Problems
One of the main problems was simply getting it to look right. Initially, I used the Reinhard operator to tonemap the HDR scene, but I found that it was ill-suited for this purpose. It would overdarken bright scenes and overbrighten dark scenes. In other words, the dynamic range was a little too high. After many, many hours of tweaking, I resorted to simply removing the Reinhard operator in favour of a simple linear one. The Reinhard operator (as well as the improved Reinhard operator) would simply reduce contrast over the entire scene to fit the HDR range into LDR - a byproduct of its hyperbolic curve, even if it did guarantee that the HDR scene mapped fully into LDR. It may have been theoretically ideal, but it just didn't look good. The linear operator, combined with a simple luminance clamp, is certainly not "technically" correct. But it doesn't matter: it looks nice, and the tonemapping effect is now quite subtle (as it should be).
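For comparison, here's a rough sketch of the two operators applied to an exposure-scaled colour. This is illustrative only and isn't the actual shader code from Clouds:

```hlsl
// Illustrative comparison of the two tonemapping operators discussed above.
float3 TonemapReinhard(float3 colour)
{
    // L / (1 + L): maps any input into 0..1, but flattens contrast
    // across the whole range - the reduced contrast described above.
    return colour / (1.0 + colour);
}

float3 TonemapLinearClamped(float3 colour, float maxLuminance)
{
    // Simple linear scale plus a clamp: not "correct" (anything above
    // maxLuminance just clips), but the result is subtle and preserves
    // mid-range contrast.
    return saturate(colour / maxLuminance);
}
```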
HDR is just one of those things that needs a lot of tweaking to get right. Especially the tonemapping, and various parameters for bloom/glare/etc.
For anyone looking to implement HDR for themselves, here are a few of the things that helped me along the way:
- Masaki Kawase's GDC presentations, available here and here. Go through them both!
- For those interested in some theory, be sure to check out Erik Reinhard's whitepaper here.
- The DirectX SDK samples. They provide full source code and explanations, and a heck of a lot of good information. See: HDR Lighting (Direct3D 9), HDRLighting Sample (with executable and source code available in the SDK), HDRDemo Sample. The HDRLighting Sample is particularly useful: it implements the entire HDR pipeline, and the HLSL shader code provided was a great reference point.
9 comments:
I'm pretty sure lens flares don't occur in the human eye... at least I've never seen one. ;)
Also, woot, I got vaguely mentioned on the blogosphere. Celebrity is only a few blocks away. Or something.
Hello Sc4Freak,
Nice optimisation.
Could you give me the right link to the pipeline after reorganisation?
It seems to be broken.
Thanks,
Sascha
Hi, I've reuploaded the image and fixed the link. Sorry about that.
Hey,
thanks for the fast answer.
At the moment I have a pipeline somewhere between your default and reorganised scenarios, but it helps a lot for further optimisation.
I think in one or two weeks I'll have deeper questions regarding the tonemap operator, because I have the same problem at the moment: too bright and too dark.
At the moment I'm working on the ghosts, and then I have to add the whole tonemapping, bloom, glare and ghost pipeline to this VSM scenario.
HDR physicalized
http://picasaweb.google.com/lh/photo/MkfYBq7wcv3GNrfLBG-5iw?feat=directlink
HDR physicalized
http://picasaweb.google.com/lh/photo/83p0LN3JzNmpsOU1-PP81w?feat=directlink
HDR physicalized, VSM only
http://picasaweb.google.com/lh/photo/X6ebIyboN_dk5HRIt2MkeA?feat=directlink
HDR physicalized, VSM only
http://picasaweb.google.com/lh/photo/3UUNHRP-WNDjOG2t7Z2lsw?feat=directlink
HDR physicalized, VSM detail only
http://picasaweb.google.com/lh/photo/CC2mVzOSXpr56K45gWLYLg?feat=directlink
And since the VSM shadows already use a lot of video RAM, every MB is helpful.
PS: I enjoyed your laptop alarm clock, and great blog.
Test.
Hi, I answered twice but it got lost in space. :-)
Hi,
could you give me some tips on the ghost rendering? It's very hard to work out how it's done from Kawase's 2003 GDC slides.
I think I'm misunderstanding something.
I have a sharper and a blurred bright pass, and I combine them: the sharper one scaled in, the blurred one scaled out and mirrored about the screen centre.
In the second pass I loop a translation, scaling this result out from the screen centre. You can see it here - the middle RT view of the rendering screen.
But something is completely wrong with the translation.
And I can't find a way to colour it.
http://picasaweb.google.com/lh/photo/zmiBrGL3WgZ0b9bB2wfWtQ?feat=directlink
http://picasaweb.google.com/lh/photo/cV6WSS78lkLIJc7r7lTwlA?feat=directlink
Thanks,
Sascha
That's more or less the gist of it. Kawase's GDC2003 slides should be enough to implement it. There isn't much to it: after applying a bright-pass and blurring the image, it's just a matter of scaling the image around the centre of the screen and recombining it. Is there any specific problem you're having?
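For illustration, the sampling step looks something like this (the scales, tints and names are arbitrary placeholders, not my actual shader):

```hlsl
// Each "ghost" is the blurred bright-pass scaled about the screen centre
// (a negative scale mirrors it through the centre), tinted, and summed.
sampler2D BlurredBright : register(s0);

float4 PS_Ghosts(float2 uv : TEXCOORD0) : COLOR0
{
    float2 centre = float2(0.5, 0.5);
    float2 fromCentre = uv - centre;

    float  scales[3] = { -0.5, 0.7, 1.8 };
    float3 tints[3]  = { float3(0.6, 0.3, 1.0),
                         float3(0.3, 0.8, 0.6),
                         float3(1.0, 0.6, 0.3) };

    float3 result = 0;
    for (int i = 0; i < 3; i++)
    {
        float2 ghostUV = centre + fromCentre * scales[i];
        result += tex2D(BlurredBright, ghostUV).rgb * tints[i];
    }
    return float4(result, 1.0);
}
```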