Archive for May, 2013


Minecraft-Pi at GLSIG

At the end of last week I went to a live meeting of the Games and Learning Special Interest Group. It was my first one, and I’d also agreed to take some Raspberry Pis along with an exercise in programming minecraft-pi using Python. I was made to feel very welcome from the outset by a group of people who seemed to value playing as much as they value teaching, which I can relate to. If you’re not having fun then you’re doing it wrong!

Programming minecraft-pi seemed a bit off topic, but the exercise was well received – we hastily covered loops, conditionals and functions in around two hours (although I suspect there may have been a fair amount of programming experience in the room already) and we created lava walls, wood-and-melon picnic blankets and melon balls inside anti-melon balls. I learnt that it would probably be useful to introduce the API call that sets the player position, so you can quickly move closer to whatever you’ve been building.
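For anyone who wants to try something like this at home, here’s a rough sketch in the spirit of the exercise. The gold/diamond checkerboard pattern and the coordinates are my own invention; `mc.setBlock` and `mc.player.setPos` are the standard minecraft-pi (mcpi) calls, which of course only work on a Pi with the game running.

```python
# Block IDs in minecraft-pi: 41 = gold block, 57 = diamond block.
def wall_blocks(x, y, z, width, height, gold=41, diamond=57):
    """Return (position, block_id) pairs for a gold wall studded
    with diamond blocks in a checkerboard pattern (illustrative)."""
    blocks = []
    for dx in range(width):
        for dy in range(height):
            block = diamond if (dx + dy) % 2 == 0 else gold
            blocks.append(((x + dx, y + dy, z), block))
    return blocks

# On the Pi itself (not runnable without the game):
# from mcpi.minecraft import Minecraft
# mc = Minecraft.create()
# for (bx, by, bz), block in wall_blocks(0, 0, 0, 10, 5):
#     mc.setBlock(bx, by, bz, block)
# mc.player.setPos(0, 5, -5)  # teleport next to the build
```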

Behold my diamond encrusted solid gold wall!

During the rest of the event we playtested games, designed games, discussed existing theories and funding opportunities for new research, and generally had a lot of fun! For me, this was a great opportunity to network with a group of people with similar interests and more experience. Thanks to everyone who made it such an enjoyable event.

Oh, and there was cake – and it was good!

Well, it seems that the usual work commitments have distracted me from the important task of blogging once again, so in this post I will talk about the best of the rest of GDC so that I can move on to something else. I’ve picked two sessions from Thursday and two from Friday, which was pretty much all of Friday because we had to go and get on the plane back home. I was very disappointed to miss Friday’s experimental games workshop, although now that the GDC Vault is up I’ll try to blog about that session soon.

So, an interesting session on Thursday was the one on rigid body simulation on the GPU. There was a lot of compute content at GDC this year, and I think writing and understanding massively parallel code is going to be a really important skill going forward. It’s also a very, very difficult one to get your head round. There’s a famous quote (although I don’t know who said it) that parallel computing is only hard if you care about performance. It’s true, too.

When we learn programming we learn in a very procedural, you might even say logical, way: this happens, then that happens, and so on. With parallel programming it all happens in one go – except sometimes it doesn’t. Some things will turn your program back into a serial program. It will still work just fine, but it will be slow. Sometimes in a sequential algorithm the next result depends on the previous result. This is a hard problem to solve, and initially it might seem impossible, until you find some aspect of the problem that can help – some indirect route that involves extra work setting up, but pays off in the long run. Another issue is that parallel processors still have limits: they can only run so many threads at the same time and only use so much memory, and that introduces constraints on the problem. Anyway, I’m going off on a tangent. I think anyone who wants a future in programming anything of any significance should at least have an awareness of compute, so look at CUDA, DirectCompute or OpenCL – even if you just scratch the surface and understand a little about why it’s so hard, you’ll be on your way. This session on physics on the GPU hinted towards a world where we have to worry about parallel programming a little less. The speaker had a library of more than fifty kernels (small programs that run in a single-program, multiple-data architecture) to perform certain tasks in parallel. That seems like a good thing, because it suggests a future where, whilst your code may not be optimal, you may still be able to take advantage of multiple processors without really understanding the underlying parallel code. I still think it’s worth looking into though.
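To make that "next result depends on the previous result" point concrete, here’s a toy example: a running sum looks inherently sequential, but a scan can be restructured (Hillis–Steele style) so that each pass does independent work that a GPU could run across many threads at once. This is just an illustration of the idea in plain Python, not anything from the session:

```python
def sequential_scan(xs):
    # Each output depends on the previous one - serial by nature.
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out

def parallel_style_scan(xs):
    # log2(n) passes; within each pass every element update is
    # independent of the others, so they could all run in parallel.
    # More total work than the serial loop, but far fewer steps.
    out = list(xs)
    step = 1
    while step < len(out):
        out = [out[i] + (out[i - step] if i >= step else 0)
               for i in range(len(out))]
        step *= 2
    return out

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert parallel_style_scan(data) == sequential_scan(data)
```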

The next two sessions that I enjoyed were about the AI in Hitman: Absolution, one on Thursday and the other on Friday. The first was more about the decision-making systems. This included the usual sensors that you might expect, but some of the more interesting parts were how several AI agents could work together and act as a group. When an AI character senses something that they want to react to (a dead body, say) they check to see whether a “situation” already exists for that event (i.e. whether another character is already there, looking at the body). If a situation already exists then the new agent joins it as a supporter, and will support the actions of the lead agent with dialogue and so on. If no situation exists, the agent creates one and becomes the lead in it. Situations are remembered, which enables things to escalate in a reasonable way. For example, you can trespass once and be warned to leave by an AI; if you trespass again the AI will be more hostile, as the previous similar event is remembered.

The second part of this session and the session on Friday were closely related. On Thursday the session talked about the difference between the AI and the AI animation, and how the two were separated in a way that works. One example of the problem the presenters were trying to solve was how to make the animation choices driven by AI decisions play out smoothly, or how old orders could be overridden. The solution was a software stack that added an interface between AI and animation, with a set of control parameters and transition triggers, and an animation system with a set of simple parameterised operations that could be dynamically attached together. On Friday the animation and locomotion systems were covered in more detail, including how the game code micromanages the animation graphs to blend different animations together. One big issue (which wasn’t entirely eliminated) was foot sliding.
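Before getting into the animation details, here’s how I imagine the situation grouping described above might look in code. All the names and the escalation rule are my guesses at the general shape, not IO Interactive’s actual implementation:

```python
class Situation:
    def __init__(self, event, lead):
        self.event = event      # e.g. "dead_body_in_courtyard"
        self.lead = lead        # first agent to react
        self.supporters = []    # later arrivals back the lead up

class SituationManager:
    def __init__(self):
        self.active = {}        # event -> Situation
        self.history = []       # remembered, so events can escalate

    def report(self, agent, event):
        """An agent senses an event: join an existing situation as a
        supporter, or create one and become the lead."""
        if event in self.active:
            self.active[event].supporters.append(agent)
            return "support"
        self.active[event] = Situation(event, agent)
        self.history.append(event)
        return "lead"

    def resolve(self, event):
        self.active.pop(event, None)

    def escalation_level(self, event):
        # A repeat of a similar event draws a more hostile response.
        return self.history.count(event)
```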
Character animations are rooted at the pelvis, but that means there is nothing to keep the feet in the same place if they are misaligned when moving from one animation to the next. The solution was something called “blend pivots”, which basically add a constraint (this joint can’t move) and blend the rest of the animation around it. Again, parametrics (from Monday’s session on curve design) came up. This time they were being mapped onto a 2D space with dimensions of linear and angular velocity. Animations sit somewhere in the space based upon their linear velocity (walking vs running) and their angular velocity (turning at speed vs turning slowly). The character’s velocity and turn rate place them somewhere in this 2D space, and the animation is blended between the three closest animations that form the triangle they sit in. Finally, some controller strategies were covered to explain how different animations were chosen from a variety of options. When the issue of how to do this with a crowd of up to 1200 NPCs came up, the answer was yet again heavy parallelization!
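Blending between the three animations at the corners of the containing triangle is a natural fit for barycentric coordinates, which is how I’d sketch it; the animation sample points below are made up, and the talk didn’t spell out the exact maths:

```python
def barycentric_weights(p, a, b, c):
    """Weights (wa, wb, wc) such that wa*a + wb*b + wc*c == p,
    for 2D points given as (x, y) tuples."""
    (px, py), (ax, ay), (bx, by), (cx, cy) = p, a, b, c
    denom = (by - cy) * (ax - cx) + (cx - bx) * (ay - cy)
    wa = ((by - cy) * (px - cx) + (cx - bx) * (py - cy)) / denom
    wb = ((cy - ay) * (px - cx) + (ax - cx) * (py - cy)) / denom
    return wa, wb, 1.0 - wa - wb

# Made-up animation points in (linear velocity, angular velocity) space:
walk, run, turn = (1.5, 0.0), (6.0, 0.0), (0.0, 2.0)

# Character moving at 2.0 m/s while turning at 0.5 rad/s -
# each animation contributes in proportion to its weight.
weights = barycentric_weights((2.0, 0.5), walk, run, turn)
```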

The last session on the Friday was about rendering Assassin’s Creed III. This went into a massive amount of detail, so I’ll only skim the surface. Weather effects were all performed in cylinders attached to the camera, which allowed for effects such as height-based, sun-tinted fog, volumetric mist and rain. It meant that particles within the cylinders could be vertex lit, and to give a feeling of density the cylinder included a scrolling texture at 20m from the camera.

Next came the impact of rain on other surfaces, such as the streets. If the streets were wet, the solution was to reduce the shader parameter concerned with albedo (diffuse reflectivity), which makes a wet surface darker, and increase the gloss parameter, making wet surfaces shinier. A similar method was used when the streets were covered in snow, for which albedo was increased towards white and a snow sparkle effect was faded in for that extra Christmassy feel!
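The general shape of that trick is just a lerp on two material parameters. A minimal sketch, with ranges and factors that are my own guesses rather than the game’s actual values:

```python
def lerp(a, b, t):
    return a + (b - a) * t

def apply_wetness(albedo, gloss, wetness):
    """wetness in [0, 1]; 0 = dry, 1 = soaked.
    Wet surfaces get darker (lower albedo) and shinier (higher gloss)."""
    wet_albedo = lerp(albedo, albedo * 0.4, wetness)  # darken
    wet_gloss = lerp(gloss, 1.0, wetness)             # shine up
    return wet_albedo, wet_gloss

def apply_snow(albedo, snow_amount):
    # Snow goes the other way: push albedo towards white.
    return lerp(albedo, 1.0, snow_amount)
```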

The next issue that was covered was deformable snow, which shifts as characters walk through it. The snow mesh was simply the underlying geometry displaced along the normal. When a character steps on the mesh, the affected triangle is removed and replaced with a tessellated, displaced version. This was done on the GPU and the result rendered into the vertex buffer directly. Rendering to the vertex buffer is something I’d like to try myself if I get the time!
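The displacement part of that is simple enough to sketch on the CPU. The data layout and the "trampled snow keeps 20% depth" rule are mine for illustration; the real version tessellates and writes the result straight into the vertex buffer on the GPU:

```python
def displace(vertex, normal, depth):
    """Move a vertex along its (unit) normal by the snow depth."""
    return tuple(v + n * depth for v, n in zip(vertex, normal))

def snow_mesh(vertices, normals, depth, trampled=frozenset()):
    """Displace every vertex along its normal to form the snow
    surface; trampled vertex indices keep only 20% of the depth."""
    out = []
    for i, (v, n) in enumerate(zip(vertices, normals)):
        d = depth * (0.2 if i in trampled else 1.0)
        out.append(displace(v, n, d))
    return out
```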

Next came some details on world lighting using a world light map, which basically stores a light colour value per pixel in the z-x plane; this is then adjusted based upon height. It’s very fast, but seems like it would use a lot of memory, and of course there’s no specular light. World ambient occlusion was added to complement the prolific screen-space ambient occlusion effects that seem to be used everywhere these days. The world ambient occlusion was a simple top-down projected texture, applied during the sun lighting and modulated with the height. The texture itself was a simple depth render of the scene from above, blurred a bit.
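As I understand it, the lookup boils down to something like the following. The dict-as-texture, the linear height falloff and the AO multiply are my guesses at the general shape, not the actual shader:

```python
def sample_world_light(light_map, x, z, y, height_falloff=0.02):
    """light_map maps (x, z) texel coords to an (r, g, b) colour;
    the sample is attenuated as height y increases (illustrative)."""
    r, g, b = light_map.get((int(x), int(z)), (0.0, 0.0, 0.0))
    scale = max(0.0, 1.0 - y * height_falloff)
    return (r * scale, g * scale, b * scale)

def apply_world_ao(colour, ao_map, x, z):
    """Modulate sun lighting by a top-down projected AO texture
    (a blurred depth render of the scene from above)."""
    ao = ao_map.get((int(x), int(z)), 1.0)
    return tuple(c * ao for c in colour)
```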

Next came some details of the massive crowd scenes, and how to deal with model instancing and texturing on such a large scale. Finally, the session finished up with some very cool ocean rendering using Fourier transforms, which I remember disliking back when I did Physics. Displacements are rendered to a texture, and foam is accumulated every frame and rendered to a separate wave-crest texture. The water rendering includes diffuse, specular and normal map layers, reflection, depth tint (so deep water looks black and shallow water doesn’t), refraction, subsurface scattering and finally soft particles for foam in coastal areas. It looked very, very cool!
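The FFT approach synthesises the heightfield from a whole spectrum of waves at once; as a much simpler stand-in for the same idea, here is a heightfield built as a sum of a few sinusoids. The wave parameters are made up and this skips the FFT entirely, but it shows the "spectrum of travelling waves" principle:

```python
import math

WAVES = [  # (amplitude, wavelength, speed, direction as (dx, dz))
    (0.5, 12.0, 2.0, (1.0, 0.0)),
    (0.2, 5.0, 3.0, (0.7, 0.7)),
    (0.05, 1.5, 4.0, (0.0, 1.0)),
]

def ocean_height(x, z, t):
    """Height of the water surface at world position (x, z), time t,
    as a sum of travelling sine waves (a toy stand-in for the FFT)."""
    h = 0.0
    for amp, wavelength, speed, (dx, dz) in WAVES:
        k = 2 * math.pi / wavelength           # wavenumber
        phase = k * (x * dx + z * dz) - speed * k * t
        h += amp * math.sin(phase)
    return h
```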

So, I guess that’s my round-up of GDC, although there is a series of sessions that I still want to check out in the vault – especially the experimental games workshop, which always includes a load of cool stuff. Watch this space!