
Original article: http://mynameismjp.wordpress.com/2010/04/30/a-closer-look-at-tone-mapping/

A CLOSER LOOK AT TONE MAPPING

A few months ago my coworker showed me some slides from a presentation by tri-Ace regarding their game "Star Ocean 4". The slides that really caught my eye were pages 90 to 96, where they discussed their approach to tone mapping. Instead of using the standard Reinhard tone mapping operator that everybody is so fond of, they decided to use curves based on actual specifications from different film types and CMOS sensors. This not only produced some really nice results (the screenshots in the slides speak for themselves), but it also fit very nicely into their "virtual camera" approach to post-processing. While I was intrigued by their approach, it wasn't until I read through John Hable's recent presentation on gamma and HDR lighting that I decided to start doing my own research. His presentation gave an overview of Uncharted 2's approach to tone mapping, which (like Star Ocean 4) eschews Reinhard's operator in favor of mimicking a filmic response curve. Once again the images in the slides speak for themselves, and they intrigued me enough to make me dig deeper.

Like always, I started off by making a test application that would let me try out different approaches and observe their results. Initially my app started out with the approach taken by pretty much every sample out there: render a model and a skybox to a floating-point texture, calculate the log luminance of the scene and repeatedly downsample it to determine a single log-average luminance value, and then use that value in Reinhard's tone mapping equations to scale pixel values down to the visible range (if you're not familiar, this "standard" approach is outlined in detail in Reinhard's paper). At this point I thought I would just copy over Hable's equations and I would have something nice... but after some ugly results I realized I needed to take a step back and rethink the process a bit. After some experimentation and a bit of reading, I started to think of the whole process in terms of a more generalized approach:

1. Run a (simplified) light transport simulation, and determine the amount of incoming light energy for each pixel. This is done by rendering all objects in the scene, and determining the energy reflected off an object's surface towards the camera/eye. Ideally for this step we would use radiometric units (radiance/irradiance) to represent light intensity, and we would also maintain the distribution of that energy across the entire visible spectrum; but to make this feasible on graphics hardware, we run the simulation for 3 discrete wavelengths (red, green, and blue). In my app, this step is performed by rendering a single mesh and sampling an HDR environment map to determine the amount of light reflected off the surface. For the background, the environment map is sampled directly by a skybox (a minimal sketch follows).
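To make this step concrete, here is a minimal HLSL sketch of what the shading boils down to. The resource names and the mirror-only reflection model are illustrative assumptions, not taken from the original sample:

```hlsl
TextureCube<float4> EnvironmentMap : register(t0); // HDR cubemap, linear-space
SamplerState        LinearSampler  : register(s0);

// Light reflected off the mesh toward the eye: a simple mirror lookup here,
// where a real material would weight the environment by a BRDF.
float3 ShadeMesh(float3 normal, float3 viewDir) // viewDir points surface -> eye
{
    float3 r = reflect(-viewDir, normalize(normal));
    return EnvironmentMap.Sample(LinearSampler, r).rgb;
}

// The background samples the environment directly along the view ray.
float3 ShadeSkybox(float3 viewDir) // viewDir points eye -> sky
{
    return EnvironmentMap.Sample(LinearSampler, viewDir).rgb;
}
```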

2. Scale the incoming light to determine the amount that would hit the film/sensor/retina. This step is referred to as "calibration." One possible way to implement it is to model a camera, where the total amount of light that hits the film is affected by the focal length of the lens, the aperture size (f-number), and the shutter speed. Together these can be manipulated to scale the range of incoming light intensities such that the important parts of the scene are neither under-exposed nor over-exposed. In my app I kept things simple, and exposed three different methods for calibration (sketched in code after this list):

  • Manual exposure: a slider lets you choose values between -10 and 10. The HDR pixel value is then scaled by 2^exposure.
  • Geometric mean of luminance: this is pretty much the exact approach outlined in Reinhard’s paper, where the geometric mean (log average) of scene luminance is calculated and used to scale the luminance of each pixel. With this approach a “key value” is user-controlled, and is meant to be chosen based on whether the scene is “high-key” (bright, low contrast) or “low-key” (dark, high contrast).
  • Geometric mean, auto key value: Same as above, except that the key value is automatically chosen using Equation 10 from Krawczyk et al.'s paper on perceptual effects in real-time tone mapping.
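In HLSL terms, a hedged sketch of the three modes; avgLuminance is the geometric mean described in the next paragraph, and all of the names here are mine rather than the sample's:

```hlsl
// Manual exposure: slider value in [-10, 10], applied as a power of two.
float3 CalibrateManual(float3 color, float exposure)
{
    return color * exp2(exposure);
}

// Geometric mean: a linear scale that maps the scene's log-average luminance
// to the chosen key value (Reinhard's initial luminance scaling).
float3 CalibrateGeometricMean(float3 color, float avgLuminance, float keyValue)
{
    return color * (keyValue / max(avgLuminance, 0.0001f));
}

// Auto key value: grows with the log-average luminance, so bright scenes get
// treated as high-key and dark scenes as low-key.
float AutoKeyValue(float avgLuminance)
{
    return 1.03f - 2.0f / (2.0f + log10(avgLuminance + 1.0f));
}
```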

To calculate the geometric mean, I simply calculate the log of luminance and write the results to a 1024×1024 texture. I then call GenerateMips to automatically generate the full mip chain. At that point I can apply exp() to the lowest mip level to get a log-average of the full scene. One extra trick I added to my app is a slider that lets you choose the mip level that gets sampled when scaling the pixel intensities. Doing this lets you use local averages rather than a global average, which gives different exposure values for different parts of the image. In my app, there's a display below the tone curve that shows the average luminance value being used for each part of the image.
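In shader terms the measurement pass might look like the following sketch (the C++ side just calls GenerateMips on the 1024×1024 log-luminance target between the two passes); the names are illustrative:

```hlsl
Texture2D<float4> HDRTexture    : register(t0); // scene render target
Texture2D<float>  LogLumTexture : register(t1); // 1024x1024, full mip chain
SamplerState      LinearSampler : register(s0);

// Pass 1: write log(luminance); the epsilon keeps log() finite at black.
float PS_LogLuminance(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 color = HDRTexture.Sample(LinearSampler, uv).rgb;
    float lum = dot(color, float3(0.2126f, 0.7152f, 0.0722f));
    return log(lum + 0.0001f);
}

// Pass 2 (after GenerateMips): undo the log. Mip 10 of a 1024x1024 chain is
// 1x1 and gives the global geometric mean; lower mip indices give the local
// averages used for the per-region exposure trick described above.
float GetAvgLuminance(float2 uv, float mipLevel)
{
    return exp(LogLumTexture.SampleLevel(LinearSampler, uv, mipLevel));
}
```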

3. Map calibrated light intensities to display values by applying a tone curve to either RGB values or luminance values. This curve can have a significant impact not only on which details are visible in the final image, but also on the overall visual characteristics. Because of this I find it difficult to select the right curve for a particular scene... in some cases you can pretty objectively determine that one curve is better than another at making details visible, but at the same time some curves will subjectively look better to my eyes due to their resulting levels of contrast and saturation. My app offers a variety of curves to choose from (a few are sketched in code after this list), including:

  • Linear
  • Logarithmic
  • Exponential
  • Reinhard (Equation 3)
  • Drago (Equation 4)
  • Filmic (Haarm-Pieter Duiker’s curve, using the ALU-only version from Hable’s presentation)
  • Uncharted 2 (customizable filmic curve)
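Here are HLSL sketches of a few of these. The filmic ALU fit and the Uncharted 2 curve (with its published default constants) are from Hable's presentation; the Reinhard version shown here tone maps luminance and then rescales RGB, which matches Equation 3 but is otherwise my own framing:

```hlsl
// Reinhard, Equation 3: Ld = L / (1 + L), applied to luminance.
float3 ToneMapReinhard(float3 color)
{
    float lum = dot(color, float3(0.2126f, 0.7152f, 0.0722f));
    return color * ((lum / (1.0f + lum)) / max(lum, 0.0001f));
}

// Filmic ALU fit (Hejl/Burgess-Dawson) of Duiker's curve, from Hable's
// presentation. Note: the fit bakes in the sRGB gamma, so don't apply a
// linear-to-gamma conversion afterwards.
float3 ToneMapFilmicALU(float3 color)
{
    color = max(0.0f, color - 0.004f);
    return (color * (6.2f * color + 0.5f)) /
           (color * (6.2f * color + 1.7f) + 0.06f);
}

// Uncharted 2 curve: A..F are the shoulder/toe parameters the app exposes.
float3 U2Curve(float3 x)
{
    const float A = 0.15f, B = 0.50f, C = 0.10f,
                D = 0.20f, E = 0.02f, F = 0.30f;
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
}

float3 ToneMapUncharted2(float3 color)
{
    const float W = 11.2f;           // linear white point
    const float exposureBias = 2.0f; // from Hable's example code
    return U2Curve(exposureBias * color) / U2Curve(float3(W, W, W));
}
```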

Now for the exciting part: pictures! For this first set of shots, I used an HDR environment map taken from the Ennis House. I liked this map because it makes a great test case for detail preservation: a mostly-dark room with an extremely bright window, through which a landscape is visible. For reference, this is what the shot looks like with no exposure or tone mapping applied:

Here’s what the shot looks like for each tone mapping curve, with “auto-exposure” applied using a global geometric mean:


Both Drago and Reinhard look pretty decent in this case, while with filmic you pretty much lose everything in the darks and in the brights. The Uncharted 2 curve doesn't have such a strong toe, so the blacks aren't crushed, and the contrast is a bit better than in Reinhard. But you do lose the coloring in the sky with both filmic curves, since those curves are applied to the RGB channels, which means color ratios aren't preserved the way they are when you tone map luminance. However, I think the sky looks rather unnatural in Drago and Reinhard, despite the colors being preserved.
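That difference is easy to see side by side in code; a minimal sketch, using simple Reinhard as the stand-in curve:

```hlsl
// Per-channel: each channel saturates toward 1.0 independently, so bright
// pixels drift toward white and the sky loses its coloring.
float3 CurveOnRGB(float3 color)
{
    return color / (1.0f + color);
}

// On luminance: the curve decides the brightness, while the RGB ratios (and
// therefore the hue) of the original pixel are preserved.
float3 CurveOnLuminance(float3 color)
{
    float lum = dot(color, float3(0.2126f, 0.7152f, 0.0722f));
    return color * ((lum / (1.0f + lum)) / max(lum, 0.0001f));
}
```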

For this next set, I sampled the 9th mip level, which gives you a 2×2 grid of local luminance averages. In effect this applies a higher exposure to the left portion of the image, and a lower exposure to the right portion.


Using local averages works pretty well for the filmic techniques. Areas that used to be underexposed or overexposed now clearly show more detail, and overall the image has a nice level of contrast and saturation. Reinhard and Drago, on the other hand, look more washed-out than they did previously.

Here’s some other assorted screenshots I took using other environment maps, and with bloom enabled:


Overall I like the look of the filmic curves. It might just be that I watch too many movies and I’m used to that kind of look, but I just think the image looks more natural. I’m sure plenty of people would disagree with me though, especially since Reinhard and Drago are much better at preserving details across a wide range of intensities.

If you’d like to play around with the app itself, I’ve uploaded the code, content, binaries, and VS2010 project here:



Sorry about it being in 3 parts... together they total 174MB and SkyDrive has a 50MB limit per file. If you're wondering why the app is so big, it's because I ran the HDR environment maps through ATI's CubeMapGen to generate some really nice mipmaps (it does proper angular extent filtering so that there are no seams in the lower mip levels), and that app can only save HDR cube maps in uncompressed floating-point formats. But on the upside they have really nice mips... in fact, I use a low mip level for faking diffuse lighting on the mesh (roughly the trick sketched below).
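A hedged sketch of that diffuse trick, reusing the cubemap declarations from the earlier sketch; the mip index is illustrative and depends on the chain length and how blurry the filtered mips are:

```hlsl
TextureCube<float4> EnvironmentMap : register(t0);
SamplerState        LinearSampler  : register(s0);

// With well-filtered cubemap mips, a low (heavily blurred) mip sampled along
// the surface normal is a cheap stand-in for diffuse environment lighting.
float3 FakeDiffuse(float3 normal)
{
    const float diffuseMip = 8.0f; // illustrative
    return EnvironmentMap.SampleLevel(LinearSampler, normalize(normal),
                                      diffuseMip).rgb;
}
```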
