Wednesday afternoon began with a look at Horizon, the level editor used to create Tomb Raider. It was really nice. It was written in C# and WPF on top of a couple of C++ layers, and exposed the underlying content-based architecture in a really intuitive way for the artists to iterate with. It was also interesting to hear about the agile processes they used, which featured a request wall that artists could constantly add jobs to on post-it notes. Component-based architecture (or dynamic aggregation, to give it its more technical, clever-sounding name) cropped up again here too.

Next came a very, very technical presentation on rendering characters for the next generation of consoles. The BRDF (bidirectional reflectance distribution function) lighting models used by existing solutions like Black Ops 2 were covered briefly, featuring two lobes of specular reflection (a sharp reflection and a smooth reflection), then layers of complexity were added on: roughness, gloss, sub-surface scattering, translucency and cavity occlusion. To give you some idea of the complexity of the model, cavity occlusion concerns preventing a specular reflection from being produced from the inside of a single pore on a model's skin.
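The two-lobe idea is simple enough to sketch: blend a tight specular highlight with a broad one. This is a toy Blinn-Phong-style stand-in, not the actual model from the talk; the exponents and blend weight here are entirely made up.

```python
def two_lobe_specular(n_dot_h, sharp_power=256.0, soft_power=16.0, blend=0.5):
    """Blend a tight highlight with a broad one: a toy two-lobe specular
    term in the Blinn-Phong style. Exponents and blend weight are invented.
    n_dot_h is the usual dot product of surface normal and half vector."""
    sharp = max(0.0, n_dot_h) ** sharp_power  # narrow, mirror-like lobe
    soft = max(0.0, n_dot_h) ** soft_power    # wide, rough-surface lobe
    return blend * sharp + (1.0 - blend) * soft

print(two_lobe_specular(0.99))  # near the reflection direction: both lobes
print(two_lobe_specular(0.80))  # further out: only the soft lobe remains
```

The appeal of two lobes is that a single exponent can't give you both a crisp hotspot and a soft sheen at once; blending two cheap lobes can.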

The session went on to talk about rendering eyes. This model included reflections, reflection occlusion, wetness, redness and vein maps, and featured some pretty grim Clockwork Orange-style experiments in which participants' eyes were held open and dried out with hair dryers in order to find out just what a really, really dry eye looks like. Wetness values were used to modify the amount of specular reflectance from a bump-mapped iris, and tears were added by including extra geometry at the edges of wet eyes using soft particles. The results were fantastic, and all this effort was done in the name of avoiding a thing called Mori's uncanny valley. Mori is a robot designer, and he noticed that the more lifelike a robot (or any character) becomes, the greater the emotional response of the people who encounter it, up to the point where the character becomes almost lifelike, but not quite.
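As a rough illustration of how a wetness value might scale the iris's specular response: the linear ramp and the dry "floor" value below are my invention, not the presenter's model.

```python
def eye_specular(base_specular, wetness, dry_floor=0.05):
    """Scale specular reflectance by a wetness value: drier eyes reflect
    less, down to a small floor. The linear ramp and the floor value are
    invented for illustration, not taken from the talk."""
    wetness = min(max(wetness, 0.0), 1.0)
    return base_specular * (dry_floor + (1.0 - dry_floor) * wetness)

print(eye_specular(1.0, 1.0))  # fully wet eye: full reflectance
print(eye_specular(1.0, 0.0))  # hair-dryer dry: only the floor remains
```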

Mori's Uncanny Valley


This creates a valley in the graph of realism vs. emotional response where people are just freaked out. Looking at the level of detail involved, the big question for me was: where do the models come from? Where are the artists who can create all these massively complex textures? Personally, for a long time I've thought that crossing the uncanny valley is just too expensive. There are great games (Borderlands 2, BioShock Infinite etc.) that remain firmly and proudly on the left side of the valley. The results from this presentation were impressive, but I still feel that crossing the valley is an unnecessary expense. I'm sure that one day we will do it, but even if the technology becomes cheap enough (in terms of memory and processing power) we still need the assets to render. Where do they come from? Perhaps from scanning real people: taking another step closer to the movie industry, casting an actor (as happens already) and then scanning them at a microscopic level.

After such a technical talk I was glad to get into the expo for a while, but my break was short-lived, as next came another session on Tomb Raider. This one was on the different types of lighting used in the game, and began by comparing the overall model used in Tomb Raider: Underworld against the model used in the latest Tomb Raider. Underworld used a more traditional forward lighting model in which light maps were built for a lot of the levels, essentially burning static lights into textures for a particular set of geometry. This has some drawbacks: more content means more maps, and it makes it difficult to destroy things. The decision was made to jump to deferred lighting.

Deferred lighting seems to be more popular these days. It involves rendering all geometry, together with anything you might need later (like colour information, normal maps and depth buffers), into a G (for geometry) buffer. All this information is stored until later in the pipeline, when the lighting calculations are done. The advantage of this approach is that expensive lighting calculations are "deferred" to the last moment, so you know that all the calculations are going to contribute to the final scene. This is opposed to forward lighting, in which you might do all the complicated maths to render a particular pixel on a building, only to find that that part of the building is behind a tree and all that effort was wasted. Deferred lighting is not without its drawbacks, though: transparent objects, for example, are harder to handle.
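The two-pass split can be sketched as a toy software pipeline. This is a minimal sketch in which a Python dictionary stands in for the G-buffer and a single directional Lambert light does the shading; the function names and fragment format are my own, and a real implementation runs on the GPU with packed render targets.

```python
def geometry_pass(fragments):
    """fragments: list of (x, y, depth, albedo, normal) tuples.
    Keep only the nearest fragment per pixel: the depth test happens
    while filling the G-buffer, before any lighting maths is done."""
    gbuffer = {}
    for x, y, depth, albedo, normal in fragments:
        if (x, y) not in gbuffer or depth < gbuffer[(x, y)][0]:
            gbuffer[(x, y)] = (depth, albedo, normal)
    return gbuffer

def lighting_pass(gbuffer, light_dir, light_intensity):
    """Expensive shading runs once per visible pixel, not per fragment."""
    image = {}
    for (x, y), (depth, albedo, (nx, ny, nz)) in gbuffer.items():
        lx, ly, lz = light_dir
        n_dot_l = max(0.0, nx * lx + ny * ly + nz * lz)  # Lambert term
        image[(x, y)] = albedo * light_intensity * n_dot_l
    return image

# Two fragments land on the same pixel; only the nearer one gets shaded.
frags = [(0, 0, 5.0, 0.8, (0.0, 0.0, 1.0)),   # building, behind...
         (0, 0, 2.0, 0.3, (0.0, 0.0, 1.0))]   # ...a tree in front
gbuf = geometry_pass(frags)
print(lighting_pass(gbuf, (0.0, 0.0, 1.0), 1.0))
```

The building fragment never reaches the lighting maths at all, which is exactly the saving over forward lighting described above.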

Once the overview of the lighting model was complete, a look at the different types of light revealed interesting models for torches that modulate over time, and non-standard attenuation that meant a single light could be varied in very funky ways with distance, with light intensity dropping off AND coming back up again.
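One way to picture that kind of attenuation is to sample a hand-authored curve rather than evaluate a physical inverse-square falloff. The curve values here are invented; the point is only that intensity can dip and then recover with distance.

```python
def funky_attenuation(distance, falloff=None):
    """Sample a hand-authored attenuation curve instead of 1/d^2.
    The default curve dips and then comes back up with distance: the
    kind of artist-driven control described in the talk (values made up).
    curve[i] is the intensity at distance i; samples are lerped."""
    curve = falloff if falloff is not None else [1.0, 0.4, 0.1, 0.3, 0.6, 0.0]
    d = min(max(distance, 0.0), len(curve) - 1)  # clamp to the curve's range
    i = int(d)
    frac = d - i
    if i >= len(curve) - 1:
        return curve[-1]
    return curve[i] * (1.0 - frac) + curve[i + 1] * frac  # linear interp

print(funky_attenuation(2.5))  # midway between the dip and the recovery
```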

Shadow-casting lights had their shadows rendered to a shadow map texture, and dark lights were used to help artists better control the level. These lights sucked light back out of the screen (in fact, I've seen some students with similar bugs in their graphics coursework recently).
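A dark light is, in effect, just a light with a negative contribution. A minimal sketch (clamping the result at zero is my assumption; without the clamp you get exactly the kind of bug I mentioned):

```python
def accumulate_lights(base, lights):
    """Sum light contributions into a brightness value. A 'dark light'
    is simply a negative entry that pulls brightness back out.
    The clamp at zero is assumed; omitting it produces negative
    brightness, the classic coursework bug."""
    return max(0.0, base + sum(lights))

print(accumulate_lights(0.2, [0.5, -0.4]))  # the dark light dims the result
```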

New shapes of light were added: capsule, wedge-shaped and box-shaped lights were all included. Screen Space Ambient Occlusion was added too. (SSAO is another popular effect these days. Ambient light, which comes from everywhere, is a bit of a hack, and the idea is that where geometry is close together less of the hack should be applied, because the ambient light is occluded by the geometry. The calculation is done in screen space, hence SSAO.)
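The idea can be sketched as a toy depth-buffer test: for each pixel, count how many neighbouring pixels are closer to the camera and scale the ambient term down accordingly. Real SSAO samples offsets in a hemisphere around the surface normal; this flat grid version is purely illustrative.

```python
def ssao_factor(depth_buffer, x, y, radius=1):
    """Toy screen-space ambient occlusion: sample the 8 neighbouring
    depths and count how many are in front of this pixel. More nearby
    occluders means less ambient light. depth_buffer is rows of depths,
    indexed [y][x]; smaller depth = closer to the camera."""
    d = depth_buffer[y][x]
    occluded = samples = 0
    for dy in (-radius, 0, radius):
        for dx in (-radius, 0, radius):
            if dx == 0 and dy == 0:
                continue
            ny, nx = y + dy, x + dx
            if 0 <= ny < len(depth_buffer) and 0 <= nx < len(depth_buffer[0]):
                samples += 1
                if depth_buffer[ny][nx] < d:  # neighbour occludes us
                    occluded += 1
    return 1.0 - occluded / samples if samples else 1.0

# A pixel at the bottom of a 'crease', surrounded by closer geometry:
depths = [[1.0, 1.0, 1.0],
          [1.0, 2.0, 1.0],
          [1.0, 1.0, 1.0]]
print(ssao_factor(depths, 1, 1))  # 0.0 -- ambient fully occluded
```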

Next came sections on special effects for water and fire. For water, caustics were added using animated bump and normal maps, and wet things were made to look darker using dark lights. For the fire sections of the game, in which Lara must make her way through burning buildings, an elaborate effect was created by storing values for flame scale, amount of charring, burn speed and a fire mask in the RGBA channels of a texture. An effect to mimic heat distortion was added, together with a bunch of other post-processing steps for motion blur, double vision and so on. The overall effect was very cool!
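The channel-packing trick is easy to sketch. This assumes one byte per channel and a channel order I've guessed at; the talk only listed the four values, not their layout.

```python
def pack_fire_params(flame_scale, char_amount, burn_speed, fire_mask):
    """Pack four fire-effect parameters into one RGBA texel, one byte per
    channel. Channel order is my assumption, not the game's actual layout.
    All inputs are expected in the 0..1 range and are clamped."""
    to_byte = lambda v: int(round(min(max(v, 0.0), 1.0) * 255))
    return (to_byte(flame_scale), to_byte(char_amount),
            to_byte(burn_speed), to_byte(fire_mask))

def unpack_fire_params(texel):
    """Shader-side read: bytes back to 0..1 floats."""
    return tuple(c / 255.0 for c in texel)

texel = pack_fire_params(1.0, 0.5, 0.25, 0.0)
print(texel)                      # (255, 128, 64, 0)
print(unpack_fire_params(texel))
```

The win is that one ordinary texture fetch delivers all four control values to the shader at once, at the cost of 8-bit precision per parameter.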

This is definitely one session I will be revisiting in the GDC Vault. Although the talk was awesome, the cherry on top was that I won a goody bag of Tomb Raider swag by tweeting "I am Lara Croft" faster than anyone else.

If anyone saw that tweet and was concerned for my well-being, all is clear now. Getting freebies is nice, but this session was really special, partly because the presenter was so passionate about his work. I think you can really tell, and it helps you to engage with the content. I've given lectures on things I'm very passionate about and on things I'm less passionate about, and I can definitely tell the difference. This talk makes me hope that my audiences can tell as well, and that I'm able to deliver a similar effect.