This year SIGGRAPH took place in LA between the 5th and the 9th of August, and as in previous years I attended; it was pretty cool. I took some notes during the conference that I thought might be helpful for people who couldn't attend, although I had planned to post them months ago. Hope you enjoy! =)
Talks: Pointed Illumination http://s2012.siggraph.org/attendees/sessions/100-119
Progressive Lightcuts for GPU (PhD students, Universität des Saarlandes / Intel):
[Progressive lightcuts] http://dl.acm.org/citation.cfm?id=2343047
- Lightcuts adaptively approximate complex global illumination through clusters of virtual point lights (VPLs): area lights, sky lights, indirect illumination, etc. Steps: convert the illumination to VPLs, build a light tree, and for each eye ray choose a cut that best approximates the illumination. (optimization to path tracing?)
- This work tries to "group" the contributions of lights that don't add much to the final scene, e.g. ones that are far away. (progressively averaging the lightcut images?)
- There are issues with VPLs too near the surface, which cause spikes in the reflected intensity (very bright aliasing); they change the attenuation formula to clamp the intensity near the surface.
SGRT: Scalable Mobile GPU Architecture Based on Ray Tracing (Samsung)
- Presented an architecture for real-time raytracing (based on recent papers from Samsung).
- Fixed-function hardware for traversal and intersection (T&I engine [Nah et al. 2011]) handles bounding volume hierarchy (split bounding volume, surface area heuristic, etc), ray-AABB intersection, and supports dynamic scenes.
- Samsung Reconfigurable Processor for shading [Lee et al. 2011], supports shaders written in C-like language with branches, allowing recursive ray-tracing.
- Uses Samsung multiplatform kernel and adaptive restart trail.
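For reference, the kind of ray-AABB slab test such a traversal unit evaluates per BVH node looks roughly like this (a generic textbook version with a precomputed reciprocal direction, not Samsung's actual hardware pipeline):

```python
def ray_aabb(origin, inv_dir, box_min, box_max):
    """Slab-based ray/AABB intersection: returns (hit, t_near).
    inv_dir is 1/direction per component, precomputed once per ray
    as fixed-function traversal units typically do."""
    t_near, t_far = 0.0, float("inf")
    for o, inv, lo, hi in zip(origin, inv_dir, box_min, box_max):
        t0 = (lo - o) * inv
        t1 = (hi - o) * inv
        if t0 > t1:               # handle negative direction components
            t0, t1 = t1, t0
        t_near = max(t_near, t0)  # entry is the latest slab entry
        t_far = min(t_far, t1)    # exit is the earliest slab exit
        if t_near > t_far:
            return False, 0.0
    return True, t_near

# Ray along +z from (0, 0, -5) against the unit box:
hit, t = ray_aabb((0.0, 0.0, -5.0),
                  (float("inf"), float("inf"), 1.0),
                  (-1.0, -1.0, -1.0), (1.0, 1.0, 1.0))
```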
Point-Based Global Illumination Directional Importance Mapping (Dreamworks)
- Ray tracing was not used at all in Madagascar 3; their solution was to use many point lights (convert area lights to points?) and bake them (all of them?) into environment maps (or HDR env maps).
- They presented an importance sampling scheme to increase quality without paying the cost of handling all lights. The result video showed nice improvements on shadows, they mentioned quality similar to 4 hours of rendering in 1.5 hours with importance mapping.
- Showed some videos of things that would be difficult to render without ray-tracing (character walking on energy cables with energy rays, motorbike shadows).
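To illustrate the general idea of importance sampling over lots of baked lights, here is a generic weighted-pick sketch (my own made-up interface, not Dreamworks' directional importance mapping): draw lights proportionally to an importance weight and keep the pdf so the estimate stays unbiased.

```python
import bisect

def build_cdf(weights):
    """Cumulative distribution over per-light importance weights."""
    total = 0.0
    cdf = []
    for w in weights:
        total += w
        cdf.append(total)
    return cdf, total

def sample_light(cdf, total, u):
    """Pick a light index with probability weight/total for a uniform
    random u in [0, 1), returning the index and its pdf."""
    i = bisect.bisect_left(cdf, u * total)
    prev = cdf[i - 1] if i > 0 else 0.0
    pdf = (cdf[i] - prev) / total
    return i, pdf

# Three lights; the middle one carries most of the energy:
cdf, total = build_cdf([1.0, 8.0, 1.0])
idx, pdf = sample_light(cdf, total, 0.5)
```

Most samples land on the bright light, which is why quality per sample goes up without touching every light.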
Ill-Loom-inating Handmade Fabric in “Brave” (Pixar)
- Started by presenting some early work they did to handle fabric in Brave, as previous techniques were not good enough at handling "frayed edges", "holes", etc.
- The early work looked a lot like Relief Mapping; he showed demos of animated fabric with small deformations, stretching, etc., and it looked pretty good. They dropped this technique because it was difficult to apply to arbitrary surfaces due to silhouette issues (the same flaws as Relief Mapping, which were only addressed for quadric surfaces).
- The final work was based on RenderMan shaders that generate/tessellate the surface on-the-fly.
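For context, the core of Relief-Mapping-style techniques is a linear search along the eye ray through a height field, followed by a binary refinement of the bracketed hit. A toy 1D version (my own simplification, with a hypothetical `height` callback):

```python
def heightfield_intersect(height, x0, dx, dz,
                          linear_steps=32, binary_steps=8):
    """Relief-mapping style ray/heightfield intersection in texture
    space. `height(x)` returns surface height in [0, 1]; the ray
    starts above the surface at z = 1 and marches downward (dz < 0).
    Coarse linear search brackets the first hit, binary search
    refines it."""
    t, prev = 0.0, 0.0
    step = (1.0 / abs(dz)) / linear_steps  # reach z = 0 in linear_steps
    for _ in range(linear_steps):
        x, z = x0 + t * dx, 1.0 + t * dz
        if z <= height(x):                 # first sample below surface
            break
        prev, t = t, t + step
    lo, hi = prev, t                       # hit lies in [prev, t]
    for _ in range(binary_steps):
        mid = 0.5 * (lo + hi)
        x, z = x0 + mid * dx, 1.0 + mid * dz
        if z <= height(x):
            hi = mid
        else:
            lo = mid
    return x0 + hi * dx, 1.0 + hi * dz

# Vertical ray against a flat height field at 0.5:
hit_x, hit_z = heightfield_intersect(lambda x: 0.5, 0.0, 0.0, -1.0)
```

The cone-stepping variants (like the one in the SSX talk below) mainly accelerate the linear phase; the silhouette problem both talks mention is about rays that should exit the surface at grazing angles.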
Technical papers: Image Processing http://s2012.siggraph.org/attendees/sessions/100-49
Decoupling Algorithms From Schedules for Easy Optimization of Image-Processing Pipelines (MIT, Adobe, Stanford)
- Decouples algorithms from schedules, allowing you to easily code and try many different scheduling approaches. New language called Halide which provides a compiler to generate code for many different architectures.
- The scheduling is done by mixing the available "modes": 1 to 1 (fusion), 1 to N, or N to 1. Also, the output must be a grid; other data structures, such as lists, trees, etc., are not supported yet.
- It's also difficult to write custom functions, such as a Fourier transform, in the Halide language. To handle that they would enable "importing" functions from external libraries (apparently not currently possible).
- Results were very good: they improved the running time of a few algorithms (e.g. the local Laplacian filter) and did better than Intel tools (Intel TBB?).
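A rough Python analogy of what "decoupling algorithms from schedules" means (this is not Halide code, just an illustration): the blur definition stays fixed, while two different schedules, compute-into-a-buffer vs. fully inlined (fusion), produce identical output.

```python
def make_blur(img):
    """The 'algorithm': a 3-tap horizontal then vertical box blur,
    defined purely as functions of pixel coordinates."""
    h, w = len(img), len(img[0])
    def bx(x, y):
        return (img[y][max(x - 1, 0)] + img[y][x]
                + img[y][min(x + 1, w - 1)]) / 3.0
    def by(sample_bx, x, y):
        return (sample_bx(x, max(y - 1, 0)) + sample_bx(x, y)
                + sample_bx(x, min(y + 1, h - 1))) / 3.0
    return bx, by, w, h

def schedule_root(img):
    """Schedule 1: compute blur_x into a full buffer, then blur_y.
    No redundant work, but extra memory traffic."""
    bx, by, w, h = make_blur(img)
    buf = [[bx(x, y) for x in range(w)] for y in range(h)]
    return [[by(lambda x, y: buf[y][x], x, y)
             for x in range(w)] for y in range(h)]

def schedule_inline(img):
    """Schedule 2: recompute blur_x at every use (fusion).
    No buffer, redundant work. Same algorithm, different schedule."""
    bx, by, w, h = make_blur(img)
    return [[by(bx, x, y) for x in range(w)] for y in range(h)]

img = [[0, 0, 9], [0, 9, 0], [9, 0, 0]]
```

Halide's contribution is letting you state these trade-offs (plus tiling, vectorization, parallelism) declaratively and have the compiler generate the loops.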
Adaptive Manifolds for Real-Time High-Dimensional Filtering (UFRGS)
- They apply high-dimensional manifold filtering over many rendered images, for example, scenes rendered with path tracing.
- They talk about their algorithm being "aware of edges" and "temporally coherent", so it might be possible to use it for AA as well. The only issue is that for some of the results (in a video, for example) they also use future frames.
- Linear cost: 70ms for a 10MP image, and 7ms for a 1MP image (still too much for real time).
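As a rough illustration of how a linear-cost, edge-aware recursive filter can work (this is closer in spirit to the authors' earlier domain-transform work than to adaptive manifolds themselves, and all parameters are made up): each output sample blends with its neighbor through a feedback weight that collapses across strong edges, so smoothing doesn't leak over them, and the whole thing is two passes over the data.

```python
import math

def edge_aware_filter_1d(signal, sigma_s=8.0, sigma_r=0.2):
    """Minimal 1D edge-aware recursive filter, cost linear in the
    number of samples: one forward and one backward pass."""
    a = math.exp(-math.sqrt(2.0) / sigma_s)  # base feedback coefficient
    out = list(signal)
    for i in range(1, len(out)):             # forward pass
        # 'Distance' grows with the local gradient, shrinking the
        # feedback weight across edges.
        d = 1.0 + (sigma_s / sigma_r) * abs(signal[i] - signal[i - 1])
        w = a ** d
        out[i] = (1.0 - w) * out[i] + w * out[i - 1]
    for i in range(len(out) - 2, -1, -1):    # backward pass
        d = 1.0 + (sigma_s / sigma_r) * abs(signal[i + 1] - signal[i])
        w = a ** d
        out[i] = (1.0 - w) * out[i] + w * out[i + 1]
    return out

# A noisy step edge: the noise is smoothed, the step survives.
noisy_step = [0.0, 0.1, 0.0, 1.0, 0.9, 1.0]
smooth = edge_aware_filter_1d(noisy_step)
```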
High-Quality Image Deblurring With Panchromatic Pixels (Kodak)
- They talked about generating blur kernels, gathering them from images, and handling invalid kernels. My understanding is that their algorithm uses two images of the same scene captured with different levels of exposure and uses them to calculate the blur kernel.
- They also said something about using infrared sensors, though I'm not sure what. They also showed that their algorithm fails when there's too much blur.
Practical Temporal Consistency for Image-Based Graphics Applications (Disney Zurich)
- Talked about optical flow, temporal anti-aliasing, and motion-aware filtering. They compared their result with MPEG and showed that it is better for their scenes.
Talks: Surf & Turf http://s2012.siggraph.org/attendees/sessions/100-121
From a Calm Puddle to a Stormy Ocean: Rendering Water in Uncharted (Naughty Dog)
- They showed the water simulation system created for Uncharted, which appears to be divided into a few parts to handle waves, the ocean, etc.
- From what I gathered, they can use Gerstner waves or point particles to add displacement/deformation and simulate ripples and waves, and they can control how much stretching is going on.
- In the in-game indoor cinematics there are so many particle effects on top of the water that they look even more impressive than the water itself.
- Many objects in the ocean are not attached to anything and move according to the water; for that they integrate all the forces applied to each object (pretty cool).
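The standard Gerstner formulation (the GPU Gems style version, not necessarily Naughty Dog's exact code; all parameter values here are made up) displaces grid points horizontally as well as vertically, which is where the sharpened crests and the controllable stretching come from:

```python
import math

def gerstner_wave(x, z, t, amplitude=0.5, steepness=0.6,
                  wavelength=8.0, speed=2.0, direction=(1.0, 0.0)):
    """Displacement of one Gerstner wave at grid point (x, z) at
    time t. Unlike a plain sine, points also move horizontally
    toward the crests; `steepness` controls how much (0 = pure
    sine). Real systems sum several of these waves."""
    k = 2.0 * math.pi / wavelength          # wave number
    dx, dz = direction
    phase = k * (dx * x + dz * z) - k * speed * t
    horizontal = steepness * amplitude * math.cos(phase)
    return (x + dx * horizontal,            # stretched toward crest
            amplitude * math.sin(phase),    # height
            z + dz * horizontal)

crest = gerstner_wave(0.0, 0.0, 0.0)                 # displaced in x
plain = gerstner_wave(0.0, 0.0, 0.0, steepness=0.0)  # pure sine, no stretch
```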
What if the Earth Was Flat: The Globe UI System in SSX (EA)
- This talk covered the development of the UI in the SSX game. I wasn't really impressed by it, and the problems they solved looked quite old to me.
- Relaxed Cone Relief Mapping was used to render all the details in their globe. As I remember it, this accelerates the per-pixel ray tracing by using a safe cone (which never intersects the geometry) and a relaxed cone (which intersects the geometry at most once). The silhouettes in the zoomed-in image looked better than I expected.
- They showed how they fixed issues by using perspective-correct interpolation instead of just linear interpolation on the sphere.
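The perspective-correct fix boils down to interpolating attribute/w and 1/w linearly in screen space and dividing at the end; a tiny sketch (the values are made up):

```python
def perspective_interpolate(a0, a1, w0, w1, t):
    """Perspective-correct interpolation of a vertex attribute
    between two projected vertices with clip-space w values w0, w1.
    Interpolate attribute/w and 1/w linearly in screen space, then
    divide; plain linear interpolation of the attribute is what
    causes the warping artifacts."""
    num = (1.0 - t) * (a0 / w0) + t * (a1 / w1)
    den = (1.0 - t) * (1.0 / w0) + t * (1.0 / w1)
    return num / den

# Attribute 0 -> 1 across an edge whose far vertex is 4x deeper:
mid_linear = 0.5 * (0.0 + 1.0)                             # naive: 0.5
mid_correct = perspective_interpolate(0.0, 1.0, 1.0, 4.0, 0.5)
```

The correct midpoint is biased toward the nearer vertex, which is exactly the difference you see on a sphere viewed at an angle.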
Adaptive Level-of-Detail System for End of Nations (Petroglyph Games)
- They presented a LOD system that handles LOD in many different subsystems and has a nice level of scalability.
- Their system handles LOD on meshes, materials, effects, screen resolution (need to confirm?), etc. They have an in-game menu where they can force a specific render time and the game will scale details to hit that time again.
- The results were good: they would remove most of the particles (in a smart way), remove effects, and even do things like rendering a different subset of all objects each frame. Those changes were also fast enough that they wouldn't introduce problems when "teleporting" to other parts of the map, which kept running smoothly.
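A minimal sketch of the feedback idea (my own toy controller, not Petroglyph's actual heuristics): drop a detail level when the frame is over budget, and climb back only when there's comfortable headroom so the system doesn't oscillate.

```python
def lod_step(current_lod, frame_ms, budget_ms,
             min_lod=0, max_lod=4, headroom=0.8):
    """One step of a simple LOD feedback loop. Higher LOD number
    means coarser detail; `headroom` is the fraction of the budget
    below which we dare to add detail back."""
    if frame_ms > budget_ms and current_lod < max_lod:
        return current_lod + 1            # over budget: go coarser
    if frame_ms < budget_ms * headroom and current_lod > min_lod:
        return current_lod - 1            # lots of slack: go finer
    return current_lod

# Two slow frames push detail down, fast frames bring it back:
lod = 0
for ms in (40.0, 40.0, 20.0, 10.0):
    lod = lod_step(lod, ms, budget_ms=33.0)
```

Each subsystem (particles, effects, per-frame object subsets) would then map its own detail knobs onto the shared LOD number.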
Screen Space Decals in Warhammer 40,000: Space Marine (Relic)
- They presented an algorithm to render decals by projecting a 3D box on top of their G-Buffer and rendering the decals in screen space.
- There were some issues when the projected box intersects the near clipping plane of the camera, or when the normal of the surface changes too sharply (corners) under the projected box.
- The final results were pretty impressive where this system was used to add a lot of detail to many parts of the game.
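The core projection step can be sketched like this (an axis-aligned simplification with hypothetical parameters; the real system transforms into the box's local space with a full inverse matrix and still has to deal with the near-plane and corner issues mentioned in the talk):

```python
def decal_uv(world_pos, box_center, box_half_extents):
    """Map a world-space position (reconstructed per pixel from the
    depth buffer) into the unit space of an axis-aligned decal box.
    Returns decal texture coordinates, or None when the pixel lies
    outside the box and must be discarded."""
    local = [(p - c) / h for p, c, h in
             zip(world_pos, box_center, box_half_extents)]
    if any(abs(v) > 1.0 for v in local):
        return None
    # Project along the box's z axis: xy in [-1, 1] -> uv in [0, 1].
    return (0.5 * (local[0] + 1.0), 0.5 * (local[1] + 1.0))

# A pixel at the box center maps to the middle of the decal texture:
uv = decal_uv((1.0, 2.0, 0.5), (1.0, 2.0, 0.0), (2.0, 2.0, 1.0))
outside = decal_uv((9.0, 2.0, 0.5), (1.0, 2.0, 0.0), (2.0, 2.0, 1.0))
```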