Well, it seems that the usual work commitments have distracted me from the important task of blogging once again, so in this post I will talk about the best of the rest of GDC so that I can move on to something else. I’ve picked two sessions from Thursday and two from Friday (which was pretty much all of Friday, because we had to get on the plane back home). I was very disappointed to miss Friday’s experimental games workshop, although now that the GDC Vault is up I’ll try to blog about that session soon.

So, an interesting session on Thursday was the one on rigid body simulation on the GPU. There was a lot of compute stuff at GDC this year, and I think writing and understanding massively parallel code is going to be a really important skill going forward. It’s also a very, very difficult one to get your head round. There’s a very famous quote (although I don’t know who said it) that parallel computing is only hard if you care about performance. It’s true, too.

When we learn programming we learn in a very procedural, you might even say logical, way: this happens, then that happens, and so on. With parallel programming it all happens in one go, except when it doesn’t. Some things will turn your program back into a serial program; it will still work just fine, but it will be slow. Sometimes in a sequential algorithm the next result depends on the previous result. This is a hard problem to solve, and initially it might seem impossible, until you find some aspect of the problem that can help: some indirect route that involves extra work setting up, but pays off in the long run. Another issue is that parallel processors still have limits. They can only run so many threads at the same time and only have so much memory available to them, and that introduces constraints on the problem.

Anyway, I’m going off on a tangent. I think anyone who wants a future in programming anything of any significance should at least have an awareness of compute, so look at CUDA, DirectCompute or OpenCL; even if you just scratch the surface and understand a little about why it’s so hard, you’ll be on your way. This session on physics on the GPU hinted towards a world where we have to worry about parallel programming a little less. The speaker had a library of more than fifty kernels (small programs that run in a single-program, multiple-data architecture) to perform certain tasks in parallel. That seems like a good thing, because it suggests a future where, whilst your code may not be optimal, you may still be able to take advantage of multiple processors without really understanding the underlying parallel code. I still think it’s worth looking into, though.
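To make that “the next result depends on the previous result” problem concrete, here’s a little C++ sketch of my own (not from the talk): a prefix sum. The serial version has a loop-carried dependency, while the Hillis-Steele restructuring does more work in total but makes every pass trivially parallel, one thread per element.

```cpp
#include <vector>
#include <cstdio>

// Serial prefix sum: each result depends on the previous one,
// so this loop cannot be split naively across threads.
void scan_serial(std::vector<int>& a) {
    for (size_t i = 1; i < a.size(); ++i)
        a[i] += a[i - 1];
}

// Hillis-Steele scan: restructures the dependency so that every
// element within a pass can be computed independently (extra work
// overall, but each pass could run one GPU thread per element).
void scan_parallel_friendly(std::vector<int>& a) {
    std::vector<int> tmp(a.size());
    for (size_t stride = 1; stride < a.size(); stride *= 2) {
        for (size_t i = 0; i < a.size(); ++i)   // this pass is parallel-safe
            tmp[i] = (i >= stride) ? a[i] + a[i - stride] : a[i];
        a.swap(tmp);
    }
}

int main() {
    std::vector<int> v{1, 2, 3, 4};
    scan_parallel_friendly(v);
    for (int x : v) std::printf("%d ", x);  // prints: 1 3 6 10
}
```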

The next two sessions that I enjoyed were about the AI in Hitman: Absolution; one session was on Thursday and the other on Friday. The first was more about the decision-making systems. This included the usual sensors that you might expect, but some of the more interesting parts were how several AI agents could work together and act as a group. When an AI character senses something that they want to react to (a dead body, say) they check to see if a “situation” exists for that event (i.e. if another character is already there, looking at the body). If a situation already exists then the new agent joins the situation as a supporter, and will support the actions of the lead agent with dialogue and so on. If no situation exists, that agent creates one and becomes the lead in it (there’s a sketch of this below). Situations are remembered, which enables things to escalate in a reasonable way. For example, you can trespass once and be warned to leave by an AI; if you trespass again the AI will be more hostile, as the previous similar event is remembered.

The second part of this session, and the session on the Friday, were closely related. On Thursday the session talked about the difference between the AI and the AI animation, and how the two were separated in a way that works. One example of the problem that the presenters were trying to solve was how to make the animation decisions made by the AI play out smoothly, or how old orders could be overridden. The solution that was presented was a software stack which added an interface between AI and animation, with a set of control parameters and transition triggers, and an animation system with a set of simple parameterised operations that could dynamically be attached together.

On Friday the animation and locomotion systems were covered in more detail, including how the game code micromanages the animation graphs to blend different animations together. One big issue (which wasn’t entirely eliminated) was foot sliding. Character animations are rooted at the pelvis, but that means that there is nothing to keep the feet in the same place if they are misaligned when moving from one animation to the next. The solution was something called “blend pivots”, which basically add a constraint (this joint can’t move) and blend the rest of the animation around it. Again, parametrics (from Monday’s session on curve design) came up. This time they were being mapped onto a 2D space with dimensions of linear and angular velocity. Animations sit somewhere in the space based upon their linear velocity (walking vs running) and their angular velocity (turning at speed vs turning slowly). The character’s velocity and turn rate place them somewhere in this 2D space, and the animation is blended between the three closest animations that form the triangle they sit in. Finally, some controller strategies were covered to explain how different animations were chosen from a variety of options. When the issue of how to do this with a crowd of up to 1200 NPCs came up, the answer was yet again heavy parallelization!
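Going back to the situation system from the first session, here’s a rough illustration of the lead/supporter logic. This is my own sketch with hypothetical names; the talk described the behaviour, not the code.

```cpp
#include <map>
#include <string>
#include <vector>
#include <cstdio>

// One situation per stimulus (e.g. one specific dead body).
struct Situation {
    int leadAgent;
    std::vector<int> supporters;
};

std::map<std::string, Situation> g_situations;

// An agent reacting to a stimulus either starts a situation as
// the lead, or joins an existing one as a supporter.
void reactTo(int agent, const std::string& stimulusId) {
    auto it = g_situations.find(stimulusId);
    if (it == g_situations.end()) {
        g_situations[stimulusId] = Situation{agent, {}};
        std::printf("Agent %d leads the response to %s\n",
                    agent, stimulusId.c_str());
    } else {
        it->second.supporters.push_back(agent);
        std::printf("Agent %d supports agent %d\n",
                    agent, it->second.leadAgent);
    }
}

int main() {
    reactTo(1, "dead_body_07");  // becomes the lead
    reactTo(2, "dead_body_07");  // joins as a supporter
}
```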

The last session on the Friday was about rendering Assassin’s Creed III. This went into a massive amount of detail, so I’ll only skim the surface. Weather effects were all performed in cylinders attached to the camera, which allowed for effects such as height-based, sun-tinted fog, volumetric mist and rain. It meant that particles within the cylinders could be vertex lit, and to give a feeling of density the cylinder included a scrolling texture at 20m from the camera.

Next came the impact of rain on other surfaces, such as the streets. If the streets were wet, the solution was to reduce the shader parameter concerned with albedo (diffuse reflectivity), which makes a wet surface darker, and to increase the gloss parameter, making wet surfaces shinier. A similar method was used when the streets were covered in snow, for which albedo was increased towards white and a snow sparkle effect was faded in for that extra Christmassy feel!
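In pseudo-shader terms, those wet and snowy tweaks might look something like the C++ sketch below. The constants are my own guesses, not values from the talk.

```cpp
#include <algorithm>
#include <cstdio>

struct Material {
    float albedo[3];  // diffuse reflectivity, 0..1 per channel
    float gloss;      // 0 = matte, 1 = mirror-like
};

// Wet surfaces: darker (lower albedo) and shinier (higher gloss).
void applyWetness(Material& m, float wetness) {
    for (float& c : m.albedo) c *= 1.0f - 0.5f * wetness;  // darken (0.5 assumed)
    m.gloss = std::min(1.0f, m.gloss + 0.4f * wetness);    // shine up (0.4 assumed)
}

// Snow cover: push albedo towards white and fade in the sparkle effect.
void applySnow(Material& m, float cover, float& sparkleWeight) {
    for (float& c : m.albedo) c += (1.0f - c) * cover;     // whiten
    sparkleWeight = cover;                                 // drive the sparkle pass
}

int main() {
    Material street{{0.4f, 0.35f, 0.3f}, 0.1f};
    applyWetness(street, 1.0f);
    std::printf("wet gloss = %.2f\n", street.gloss);  // 0.50
}
```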

The next issue that was covered was deformable snow, which shifts as characters walk through it. The snow mesh was simply the underlying geometry displaced along the normal. When a character steps on that mesh the triangle is removed from the mesh and replaced with a tessellated, displaced version. This was done on the GPU and the result rendered into the vertex buffer directly. This render to the vertex buffer is something I’d like to try myself if I get the time!
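Conceptually, a CPU-side version of that stamping step might look like the following sketch. This is my own, very simplified illustration; the real system did the tessellation and displacement on the GPU and wrote straight into the vertex buffer. Here a stepped-on triangle is subdivided once and the new vertices pressed down.

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

Vec3 midpoint(const Vec3& a, const Vec3& b) {
    return {(a.x + b.x) * 0.5f, (a.y + b.y) * 0.5f, (a.z + b.z) * 0.5f};
}

struct Triangle { Vec3 v[3]; };

// Replace a stepped-on snow triangle with four subdivided triangles
// and displace the new interior vertices downwards (up axis assumed y,
// and a single subdivision level assumed for brevity).
void stampFootprint(std::vector<Triangle>& mesh, size_t hit, float depth) {
    Triangle t = mesh[hit];
    Vec3 m01 = midpoint(t.v[0], t.v[1]);
    Vec3 m12 = midpoint(t.v[1], t.v[2]);
    Vec3 m20 = midpoint(t.v[2], t.v[0]);
    for (Vec3* v : {&m01, &m12, &m20}) v->y -= depth;  // press the snow down
    mesh[hit] = {t.v[0], m01, m20};                    // replace the original
    mesh.push_back({m01, t.v[1], m12});
    mesh.push_back({m20, m12, t.v[2]});
    mesh.push_back({m01, m12, m20});
}
```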

Next came some details on world lighting using a world light map, which basically stores a light colour value per pixel in the z-x plane. This is then adjusted based upon the height of the point being lit. It’s very fast, but seems like it would use a lot of memory, and of course there’s no specular light. World ambient occlusion was added to complement the prolific screen-space ambient occlusion effects that seem to be used everywhere these days. The world ambient occlusion was a simple top-down projected texture, which was applied during the sun lighting and modulated with the height. The texture itself was a simple depth render of the scene from above, blurred a bit.
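A minimal sketch of how such a world light map might be sampled. The height-based fade here is an assumption on my part; the talk didn’t spell out how the height adjustment worked.

```cpp
#include <algorithm>

struct Colour { float r, g, b; };

// Sample the world light map: one light colour per texel in the x-z
// plane, attenuated by the height of the shaded point (assumed falloff).
Colour sampleWorldLight(const Colour* lightMap, int mapW, int mapH,
                        float worldX, float worldZ, float worldY,
                        float metresPerTexel, float maxHeight) {
    int u = std::clamp(static_cast<int>(worldX / metresPerTexel), 0, mapW - 1);
    int v = std::clamp(static_cast<int>(worldZ / metresPerTexel), 0, mapH - 1);
    Colour c = lightMap[v * mapW + u];
    float fade = 1.0f - std::clamp(worldY / maxHeight, 0.0f, 1.0f);
    return {c.r * fade, c.g * fade, c.b * fade};
}
```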

Next came some details of the massive crowd scenes, and how to deal with model instancing and texturing on such a large scale. Finally, the session finished up with some very cool ocean rendering using Fourier transforms, which I remember disliking back when I did Physics. Displacements are rendered to a texture, and foam is accumulated every frame and rendered to a separate wave crest texture. The water rendering included diffuse, specular and normal map layers, reflection, depth tint (so deep water looks black and shallow water doesn’t), refraction, sub-surface scattering and finally the addition of soft particles for foam in coastal areas. It looked very, very cool!
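The foam accumulation could be as simple as a deposit-and-decay loop over the crest texture. This sketch is an assumption on my part, not the Assassin’s Creed III implementation.

```cpp
#include <vector>
#include <algorithm>

// Per-frame foam accumulation into a wave crest texture: wave crests
// deposit foam, and existing foam decays over time. The crest measure
// and the rate constants are assumptions.
void accumulateFoam(std::vector<float>& foam,             // crest texture, 0..1
                    const std::vector<float>& crestiness, // per-texel crest measure
                    float dt) {
    const float depositRate = 2.0f;  // assumed
    const float decayRate   = 0.5f;  // assumed
    for (size_t i = 0; i < foam.size(); ++i) {
        foam[i] += crestiness[i] * depositRate * dt;  // deposit on crests
        foam[i] -= foam[i] * decayRate * dt;          // exponential decay
        foam[i] = std::clamp(foam[i], 0.0f, 1.0f);
    }
}
```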

So, I guess that’s my round-up of GDC, although there is a series of sessions that I still want to check out in the vault, especially the experimental games workshop, which always includes a load of cool stuff. Watch this space!

GDC Wednesday Afternoon

Wednesday afternoon began with a look at Horizon, which is the level editor used to create Tomb Raider. It was really nice. It was written using C# and WPF on top of a couple of C++ layers, and exposed the underlying content-based architecture in a really intuitive way for the artists to iterate with. It was also interesting to hear about the agile processes they used, which featured a request wall that artists could constantly add jobs to on post-it notes, and to see component-based architecture (or dynamic aggregation, to give it its more technical, clever-sounding name) crop up again.

Next came a very, very technical presentation on rendering characters for the next generation of consoles. The BRDF (bidirectional reflectance distribution function) lighting models for existing solutions like Black Ops 2 were covered briefly, which feature two lobes of specular reflection (a sharp reflection and a smooth reflection), then layers of complexity were added on: roughness, gloss, sub-surface scattering, translucency and cavity occlusion. To give you some idea of the complexity of the model, cavity occlusion concerns preventing a specular reflection from being produced from the inside of a single pore on a model’s skin.
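Just to picture the two-lobe idea: combining a tight highlight with a broad sheen might look like the sketch below. I’ve used plain Blinn-Phong lobes for simplicity; the real BRDF was considerably more sophisticated.

```cpp
#include <cmath>

// Two specular lobes: a sharp reflection plus a smooth one, blended.
// NdotH is the usual dot product of the surface normal and the half
// vector; the powers and the mix factor are assumed, illustrative values.
float twoLobeSpecular(float NdotH, float sharpPower, float broadPower, float mix) {
    float sharp = std::pow(NdotH, sharpPower);  // tight highlight
    float broad = std::pow(NdotH, broadPower);  // soft sheen
    return sharp * (1.0f - mix) + broad * mix;
}
```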

The session went on to talk about rendering eyes. This model included reflections, reflection occlusion, wetness, redness and vein maps, and featured some pretty grim Clockwork Orange-style experiments where participants’ eyes were held open and dried out with hair driers in order to find out just what a really, really dry eye looks like. Wetness values were used to modify the amount of specular reflectance from a bump-mapped iris, and tears were added by including extra geometry at the edges of wet eyes using soft particles. The results were fantastic, and this amount of effort was all done in the name of avoiding a thing called Mori’s uncanny valley. Mori is a robot designer, and he noticed that the more lifelike a robot (or any character) becomes, the greater the emotional response of the people who encounter it, up to a point where the character becomes almost lifelike, but not quite.

Mori’s Uncanny Valley

This creates a valley on the graph of realism vs. emotional response where people are just freaked out. Looking at the level of detail involved, the big question for me was: where do the models come from? Where do the artists come from who create all these massively complex textures? Personally, for a long time I’ve thought that crossing the uncanny valley is just too expensive. There are great games (Borderlands 2, BioShock Infinite etc.) that remain firmly and proudly on the left side of the valley. The results from this presentation were impressive, but I still feel that crossing the valley is an unnecessary expense. I’m sure that one day we will do it, but even if the technology becomes cheap enough (in terms of memory and processing power) we still need the assets to render. Where do they come from? Perhaps from scanning real people: taking another step closer to the movie industry, casting an actor (as happens already) and then scanning them on a microscopic level.

After such a technical talk I was glad to get into the expo for a while, but my break was short-lived, as next came another session on Tomb Raider. This one was on the different types of lighting used in the game, and began with a comparison of the overall model used in Tomb Raider: Underworld against the model used in the latest Tomb Raider. Underworld used a more traditional forward lighting model in which, for a lot of the levels, light maps were built, basically burning static lights into textures for a particular set of geometry. This has some drawbacks: more content means more maps, and because the lighting is baked in, it makes it difficult to destroy things. So the decision was made to jump to deferred lighting.

Deferred lighting seems to be more popular these days. It involves rendering all geometry, together with anything you might need later, like colour information, normal maps and depth buffers, into a G (for geometry) buffer. All this information is stored until later in the pipeline, when the lighting calculations are done. The advantage of this approach is that expensive lighting calculations are “deferred” to the last moment. That way you know that all the calculations are going to contribute to the final scene. This is opposed to forward lighting, in which you might do all the complicated maths to render a particular pixel on a building, and then find that that part of the building is behind a tree, and all that effort was wasted. Deferred lighting is not without its drawbacks, though; for example, transparent objects are harder to handle.
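A minimal sketch of the idea, with a deliberately naive G-buffer layout (real engines pack this far more tightly across several render targets):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

// One G-buffer entry per screen pixel: everything the lighting pass
// will need later, stored instead of shaded immediately.
struct GBufferTexel {
    Vec3  albedo;  // surface colour
    Vec3  normal;  // world- or view-space normal
    float depth;   // for reconstructing position
};

// Placeholder lighting: a single directional light from above.
Vec3 lightPixel(const GBufferTexel& g) {
    float ndotl = g.normal.y > 0.0f ? g.normal.y : 0.0f;
    return {g.albedo.x * ndotl, g.albedo.y * ndotl, g.albedo.z * ndotl};
}

// Pass 1 (not shown) rasterises all geometry into the G-buffer with no
// lighting. Pass 2 lights only what survived the depth test, so no
// shading work is wasted on occluded surfaces.
void deferredLightingPass(const std::vector<GBufferTexel>& gbuffer,
                          std::vector<Vec3>& frame) {
    for (size_t i = 0; i < gbuffer.size(); ++i)
        frame[i] = lightPixel(gbuffer[i]);
}
```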

Once the overview of the lighting model was complete, a look at the different types of light revealed interesting models for torches that modulate over time, and non-standard attenuation that meant a single light could be varied in very funky ways with distance, with light intensity dropping off AND coming back up again.
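Something like the curve below would do it. This function is entirely my own invention, just to show intensity dropping off with distance and then rebounding before fading out.

```cpp
#include <cmath>

// Non-standard attenuation: an ordinary linear drop-off plus an
// assumed Gaussian "rebound" bump at 70% of the light's range.
float funkyAttenuation(float distance, float range) {
    float t = distance / range;               // 0 at the light, 1 at max range
    if (t >= 1.0f) return 0.0f;
    float falloff = 1.0f - t;                 // standard linear drop-off
    float rebound = std::exp(-std::pow((t - 0.7f) / 0.1f, 2.0f));
    return falloff + 0.5f * rebound;          // dips, then comes back up
}
```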

Shadow-casting lights had their shadows rendered to a shadow map texture, and “dark lights” were used to help artists better control the level. These lights sucked light back out of the screen (in fact, I’ve seen some students with similar bugs in their graphics coursework recently).

New shapes of light were added: capsule, wedge-shaped and box-shaped lights were all included. Screen-space ambient occlusion (SSAO) was added too; this is another popular effect these days. Ambient light (the light that comes from everywhere) is a bit of a hack, and the idea is that where geometry is close together, less of the hack should be applied, because the ambient light is occluded by the geometry. The calculation is done in screen space, hence the name.
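The gist of SSAO in a dozen lines, as a naive depth-only sketch of my own; real implementations sample a hemisphere around the surface normal and blur the result.

```cpp
#include <cstdlib>

// Sample depths around a pixel: the more neighbours that sit in front
// of it, the more occluded (and darker) its ambient term becomes.
float ambientOcclusion(const float* depth, int w, int h, int x, int y) {
    const int samples = 16, radius = 4;  // assumed values
    float centre = depth[y * w + x];
    int occluded = 0;
    for (int i = 0; i < samples; ++i) {
        int sx = x + std::rand() % (2 * radius + 1) - radius;
        int sy = y + std::rand() % (2 * radius + 1) - radius;
        if (sx < 0 || sy < 0 || sx >= w || sy >= h) continue;
        if (depth[sy * w + sx] < centre - 0.01f) ++occluded;  // neighbour is nearer
    }
    return 1.0f - static_cast<float>(occluded) / samples;  // 1 = fully lit
}
```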

Next came sections about special effects for water and fire. For water, caustics were added using animated bump and normal maps, and wet things were made to look darker using dark lights. For the fire sections of the game, in which Lara must make her way through burning buildings, an elaborate effect was created that stored values for flame scale, amount of charring, burn speed and a fire mask in the RGBA channels of a texture. An effect to mimic heat distortion was added, together with a bunch of other post-processing steps to add motion blur, double vision and so on. The overall effect was very cool!
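The channel assignments below follow the talk, but the 8-bit packing itself is an assumption of mine.

```cpp
#include <cstdint>

// Four fire parameters packed into one RGBA texel.
struct FireTexel {
    uint8_t r;  // flame scale
    uint8_t g;  // amount of charring
    uint8_t b;  // burn speed
    uint8_t a;  // fire mask
};

struct FireParams { float flameScale, charring, burnSpeed, mask; };

FireParams unpack(FireTexel t) {
    return {t.r / 255.0f, t.g / 255.0f, t.b / 255.0f, t.a / 255.0f};
}

FireTexel pack(const FireParams& p) {
    auto q = [](float v) { return static_cast<uint8_t>(v * 255.0f + 0.5f); };
    return {q(p.flameScale), q(p.charring), q(p.burnSpeed), q(p.mask)};
}
```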

This is definitely one session I will be revisiting in the vault. Although the talk was awesome, the cherry on top was that I won a goody bag of Tomb Raider swag by tweeting “I am Lara Croft” faster than anyone else.

If anyone saw that tweet and was concerned for my well-being, all is clear now. Getting freebies is nice, but this session was really special, partly because the presenter was so passionate about his work. I think you can really tell when that’s the case, and it helps you to engage with the content. I’ve had to give lectures about things I’m very passionate about, and things I’m less passionate about, and I can definitely tell the difference. This talk makes me hope that my audience can tell as well, and that I’m able to deliver a similar effect.

I landed in San Francisco yesterday. I’m here because I’m lucky enough to be going to the Game Developers Conference. The conference starts tomorrow, so this morning I took the opportunity to take a walk to Fisherman’s Wharf to see if there was anything worth having at the NFL shop on Pier 39. There wasn’t, but I still enjoyed wandering around the city.

On my way to Fisherman’s Wharf I stopped by Lombard Street. If there’s one thing San Francisco has that Hull doesn’t, it’s hills! And on this particular block of Lombard Street the hill is so steep that they’ve made it all wiggly (technical term).

The crooked street. The Google self-driving car has driven down it too!

On to the piers, where you can get a great view of the Golden Gate Bridge and Alcatraz – sometimes. Today I was a bit early and the morning fog hadn’t quite lifted. I took a picture anyway, and had a chat with a Mancunian farmer from Australia. He was a very nice chap.

The Golden Gate Bridge – somewhere

There are lots of touristy things on Fisherman’s Wharf, but this chap was outside the waxworks. Perhaps he has found his calling in the leisure and tourism industry.

The pope. I don’t think he looks anything like Jim Bowen :S

This evening I have some Skyping to do, a few emails to answer (mostly about the simulation and graphics course) and my schedule to sort for the next few days. There are so many interesting sessions to go to. I had a quick look this morning and I think I’m going to have some very tough choices to make!