Category: Media


I’ve said before how I like to keep in touch with alumni. Often they are a pleasure to know when they are studying at university, but watching them go into the games industry and start making a name for themselves makes you really proud of what they are able to achieve, and perhaps even a little bit jealous!

Yacine Salmi did our masters course a while ago. It might even be more than 10 years ago now, which is scary. I’ve been vaguely aware of some of the projects that Yacine has worked on, but his latest offering – a mobile game called Ellipsis – is doing really well. Ellipsis has won Intel Level Up Game Developer Contest awards for “Game of the Year” and “Best Action Game”, and judging by social media Yacine is now busy flying all over the world promoting his game.

Here he is promoting his game at the Tokyo Game Show! He’s on from about an hour in.

Yacine explains to the host that Ellipsis is a minimalist action puzzle game. It also has no text at all, which is clever because it means the market is not restricted by language. You can watch the trailer for Ellipsis here, and you should definitely check it out.


Apart from HMDs, the other main focus was changes to the way graphics are programmed. The last major release of DirectX, DirectX 11, was five years ago, and a lot has changed since then. Most notably, multicore CPUs are far more common. The changes from DirectX 11 to DirectX 12 seem to be focused on enabling programmers to get more out of the CPU rather than the GPU, as well as reducing the number and the size of the calls that need to be sent to the graphics card.

Below is my summary of the big changes. Bear in mind, though, that DirectX 12 isn’t scheduled to be released until around Christmas 2015 and I’ve not played with it, so I have no first-hand experience. Take everything with a pinch of salt. First, though, here is a recap of graphics hardware.

So much of the process of moving model vertices in 3D space, squashing triangles onto the screen and shading pixels is completely independent of other vertices and pixels. That means that the results don’t depend on each other, and so they could be computed in any order. This type of problem is sometimes referred to as an “embarrassingly parallel” problem. The vast majority of the computation can be done at the same time. That’s what your graphics card is for. It’s a hugely powerful parallel computer, capable of running the same program on different bits of data at the same time. It’s kind of like having a fleet of Ferraris sitting in your PC waiting to execute your code.

This supercomputer is controlled using instructions from the CPU running the application, and changing that state each frame is what takes up most of the time. These instructions are sent over the data bus – visualised here as an actual bus. Each frame these buses carry instructions and data to the fleet of Ferraris, telling them what they should do. This bus is the bottleneck, so the less we have to use it the better. Another constraint of this model is that much of the state has to be sent from a main render thread. In DX11 something called Deferred Contexts tried to overcome part of this problem, allowing any thread to send data to the graphics card, but because of the relationships between data sent from different threads there still needed to be a lot of communication and synchronisation with the main thread. This synchronisation means that threads and CPU cores often become idle whilst they wait on results from other cores. Additionally, a lot of data associated with the Deferred Contexts had to be sent on the bus. These are the problems that DirectX 12 is trying to overcome.


In DirectX11 draw calls include a lot of data and state changes, so you have to take the bus!

The strategy is to remove dependencies between calls sent from different cores, and to reduce the amount of data that has to be sent on a frame by frame basis. It’s a little bit like trying to replace the data bus with a smaller, more agile data Ducati. Here’s how (I think) it works:

New data structures stored on the graphics card are better aligned with the hardware, removing the need to build up parts of the pipeline during the draw call and enabling more draw calls per frame. These objects, called Pipeline State Objects (PSOs), can still be swapped in and out at run time, but they are created and saved on the graphics card ahead of time.

Command lists are similar to what DX11 tried to achieve with Deferred Contexts. Commands can be compiled and sent to the graphics card from any thread, but with the introduction of PSOs these command lists are no longer so large, and because they can share PSOs with other draw calls they are less dependent on one another. They just store the information about which PSO to use and send the calls off to the graphics card.

Bundles offer similar functionality, but allow some state to be inherited from other calls, and some state to be changed. This means that the instructions for a bundle are computed once and replayed with different variables. Whilst the intention appears to be that command lists are constructed every frame and then discarded, bundles seem to be a way of computing commands and saving them between frames to render with different data (both in the same frame, and in different frames).

DirectX 12 allows more state to be stored in graphics memory, meaning smaller, faster draw calls.


Finally, Descriptor Heaps give the programmer the power to build their own heap and table of resources in graphics memory. This means that state concerning the resources currently in use no longer has to be set by the CPU. Instead, the GPU can fetch resources from a list held in graphics memory without the need for a call from the CPU to bind each resource.

All of these improvements mean that draw calls are smaller, and can be executed more quickly, which means that there can be more draw calls in any frame. It also means that there is less need for synchronisation between CPU threads, which means less time wasted waiting and frees the CPU to spend more time doing useful processing.

Some of the best news is that unlike the change to DirectX11, which required many people to buy new hardware, DirectX12 will work on many existing graphics chips, including the chips in the Xbox One! Exciting times ahead!

HMDs at GDC

Well, while at GDC I was lucky enough to try out Sony’s new head-mounted display, and I was impressed. It was very light, and all the weight sat on the top of the head, so there were no uncomfortable twisting forces. It was very immersive and a whole lot of fun.

Oculus Rift and Project Morpheus looking good!


I was treated to two demos. In the first I found myself hoisted up in a shark cage and dropped into a tropical ocean near a coral reef. Predictably, not everything went to plan, and whilst the team on the boat were trying to fix the winch to pull me up, a huge shark appeared from the depths and started taking my cage apart.

In the second demo I was in a medieval castle where I could attack a training dummy with swords and a crossbow. You could even grab the dummy’s arm, hack it off and then swing it at him, which was a lot of fun. All too soon the demo ended when a huge dragon statue came to life and ate me!

Although I heard rumours that a few of the units were failing, the product as a whole seemed very close to release. The big question for me now is how much it will cost, especially when you consider that you need a PS4, a camera, Move controllers and the head-mounted display to use it.

My other questions are about health and safety, and about the social aspect. On the health and safety side, Sony seemed keen to highlight that they could track the back of your head, implying that you could turn around in games. However, whenever you do that you end up wrapping yourself up in your own cable. I can’t see these headsets being used in any way other than sitting down, unless you invest in some sort of rig like the Omni.

Sony seemed to be promoting the headset as a social experience in a couch co-op setting, but the headset was so immersive that I don’t see how it can be. If I were playing with me, I certainly wouldn’t trust me when I couldn’t see or hear what I was doing.

In other news, there was a big focus on the Oculus Rift at GDC. They seemed to be everywhere, not just in OculusVR’s substantial area in the trade show. Since GDC, Facebook has bought OculusVR for $2bn. I don’t see the two headsets really competing, as they’re focused on different markets, but it seems the future is bright for head-mounted displays.