Little iCloud Hint

Here is a little but important “feature” of iCloud on iOS that I stumbled upon. If you’re currently working on iCloud support for an iOS game, you _need_ this.

It turns out that all iCloud files that you create on your iOS devices in your own testing apps are automagically synchronized to your development machine, into the ~/Library/Mobile Documents folder. This is a huge time saver for debugging. You can also go in and delete files, which removes them from iCloud as well.

I couldn’t find any documentation on this. If there is, please post it in the comments!

Coding Conventions

Hey iDevBlogADay,

today's post shifts focus a little bit away from graphics to a more fundamental issue when programming in a team. At Limbic, we've recently started a new project, and I love that: starting with a clean slate, looking back at previous projects (what was good, what was not so good), and trying to do better. This made me reflect a little bit on coding conventions. Let me present what I've come up with.

Coding Conventions

Coding conventions are a very difficult topic. Many religious wars have been fought over them. Everyone who writes code has an opinion on this, and many think their solution is the best. But I think they are completely missing the point. The goal of coding conventions is not to win a beauty contest (which would be very subjective anyway), but to make it easy for others to understand your code and to add more code that fits in well.

A classic example is indent style. A good programmer will have found the specific indent style at which he can work effectively. But a great programmer can work with any given style and make his code match the surrounding code. As such, it doesn't really matter which style is chosen. It is more important that everyone sticks to it.

The necessity for a shared coding convention is most obvious when you look at a commit log and see that every single line of a file you've edited has been re-indented, brackets moved around, variables and functions renamed by someone else. And when you open it in your editor, half of the text is indented differently than the rest. Because of this, you can't figure out what your own code was originally intended to do, even though you wrote it. There is a good chance this has happened to you. And the other programmer didn't even mean to offend you; he just wanted to “improve the code” and didn't know any better.

So far, so good, but which style should be used?

Ask 100 programmers, and you'll get 100 different answers. And each will tell you that his is the one and only style. And that's the problem: the one and only style simply does not exist, and everyone learned programming in a different way, hence habits differ. Obviously, when you start a new codebase, you want to choose one style, because otherwise everyone will write whatever he or she likes, and it will be chaos.

You'll probably have that programmer on the team who says “oh, it's easy, just use my style!”. The problem is usually that he has never documented that style, and he would probably describe a lot of rules with words like “obviously” and “trivial” instead of properly defining them.

And that's why I think the best solution is to just go out, pick one of the existing popular coding conventions, and stick to it precisely. At Limbic, we decided to use the Google C++ Style Guide (it comes with cpplint, a great script that checks for a number of convention violations). Even though a lot of its aspects were unintuitive to me at first, I accepted the style and worked with it. And now I love it; it definitely helped me become a better programmer. I also learned a thing or two about C++ along the way.

Using a popular convention also helps when bringing in new coders. It's easier to tell a new programmer “check out the Google conventions” than “just look at the rest of the code and do it the same way”, especially if the existing code is somewhat esoteric (I'm pretty sure everyone has seen their share of esoteric code). In case of doubt about a certain rule, there is always a place to go back to and look up how it was meant to be used (and why).

Finally, keep in mind that the conventions are obviously not set in stone. If a programmer feels that a certain rule is odd, I'd suggest trying it anyway. Usually, it's not as bad as it may sound initially, and more often than not it has advantages you didn't think of at first. If that doesn't work, you can still change the rules by writing up some extensions.

Why are you telling me all this?

That's a fair question. And I guess the point to take away is as simple as this: I learned that being tolerant when it comes to coding conventions is a huge gain, and fighting for your own style is a huge pain. The art of programming (in a team) is not to write fancy code, but rather to build solid code that is easy to maintain and extend. And that's a lot easier if everyone pulls in the same direction.

HDR Rendering on iOS (iPad2/iPhone 4S)

Hey iDevBlogADay,

I'm excited about today's post, because this has been sitting on my desk for a while. With the public release of iOS 5.0, I can finally release this stuff. I'm talking about a little HDR tech demo that I wrote during the iOS 5.0 beta to figure out how, and how well, HDR rendering works with the new A5-only OpenGL ES 2.0 extensions, specifically GL_OES_texture_float, GL_OES_texture_half_float, GL_OES_texture_half_float_linear, and GL_EXT_color_buffer_half_float. I've made the tech demo public on GitHub. I won't go into all the details (you can find those on GitHub), but rather focus on the issues I had creating the demo and working with those extensions. I'm not going to talk about what HDR is or what benefits it brings, but focus on the particular details of implementing an HDR renderer on iOS.
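To give an idea of what those extensions are used for, here is a minimal sketch (not taken from the demo itself) of setting up a half-float color attachment. The function name and error handling are my own; the GL calls and extension strings are the standard OpenGL ES 2.0 ones on iOS.

#include <OpenGLES/ES2/gl.h>
#include <OpenGLES/ES2/glext.h>
#include <string.h>

// Creates a half-float color texture and attaches it to a new FBO.
// Returns false on devices that lack the required extensions (pre-A5).
bool CreateHalfFloatFramebuffer(GLsizei width, GLsizei height,
                                GLuint *out_fbo, GLuint *out_texture) {
  const char *extensions = (const char *)glGetString(GL_EXTENSIONS);
  if (strstr(extensions, "GL_OES_texture_half_float") == NULL ||
      strstr(extensions, "GL_EXT_color_buffer_half_float") == NULL) {
    return false;
  }
  glGenTextures(1, out_texture);
  glBindTexture(GL_TEXTURE_2D, *out_texture);
  // Nearest filtering is always safe; linear filtering of half-float textures
  // additionally requires GL_OES_texture_half_float_linear.
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
  glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
  glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
               GL_RGBA, GL_HALF_FLOAT_OES, NULL);
  glGenFramebuffers(1, out_fbo);
  glBindFramebuffer(GL_FRAMEBUFFER, *out_fbo);
  glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                         GL_TEXTURE_2D, *out_texture, 0);
  return glCheckFramebufferStatus(GL_FRAMEBUFFER) == GL_FRAMEBUFFER_COMPLETE;
}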


Writing Maya Plugins

Hey iDevBlogADay,

A while ago, we at Limbic decided that we needed tighter integration between the engine, the content creation tools, and the artists. Before, we had stand-alone scripts and programs that would convert common formats like .obj into an engine format. However, this doesn't work very well once the rendering gets more complex, with skinning, forward kinematic animations, multiple texture sets, and so on. The choices were Maya or 3ds Max, but since 3ds Max is only available on Windows, we decided to go with Maya.

So we started to learn Maya, build up knowledge about the APIs, and set up a working environment. This turned out to take way longer than expected, as Maya is a quite powerful but somewhat chaotic application. To ease the pain for others, I'd like to post some of my findings now and then. Today's post is about the different ways to write a plugin.
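As a taste of the C++ API route, here is a bare-bones plugin skeleton (this is not our actual exporter; the command name and vendor string are made up). It registers a single command that just prints a message:

#include <maya/MArgList.h>
#include <maya/MFnPlugin.h>
#include <maya/MGlobal.h>
#include <maya/MPxCommand.h>

// A trivial command; a real exporter would walk the DAG and write engine data.
class HelloCmd : public MPxCommand {
 public:
  virtual MStatus doIt(const MArgList &) {
    MGlobal::displayInfo("Hello from the plugin!");
    return MS::kSuccess;
  }
  static void *creator() { return new HelloCmd(); }
};

MStatus initializePlugin(MObject obj) {
  MFnPlugin plugin(obj, "Limbic Software", "1.0", "Any");
  return plugin.registerCommand("helloLimbic", HelloCmd::creator);
}

MStatus uninitializePlugin(MObject obj) {
  MFnPlugin plugin(obj);
  return plugin.deregisterCommand("helloLimbic");
}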


Found some old stuff

Here are some screenshots of an engine I wrote 7 years ago for a practical course at my old lab as an undergrad. It featured standard shadow mapping with 1-16 sample PCF, parallax mapping, normal mapping, Doom 3 models, the Doom 3 level format, and it was playable via TCP/IP. You could run around and beat the other players up with a wrench. And there was a ball bouncing around that you could hit to fire it off in one direction. Too bad I can't find the source code anymore.

Also that year, I entered the NVIDIA demo competition held at our institute, which I won. The prize was a shiny GeForce 6800, which is probably slower than the iPad 2's A5 these days. There is also an IOTD on flipcode (may it rest in peace, it was a great site!) about this. 😀

Good times of fooling around. Now I’m fooling around for a living. Still good times 🙂

Debugging Cocoa

Hey iDevBlogADay,

we recently had an issue in Cocoa where, for some reason, two view controllers were active at the same time (with two OpenGL ES views), leading to a bunch of problems. I then realized that there seem to be no good Cocoa debugging tools. From my Win32 days, I remember the little tool Spy++, which could inspect the window hierarchy. That was a great help in making sure everything was sane. I think with a tool like this, many bugs in our Cocoa apps could have been avoided or spotted more easily.

Fullscreen Motion Blur on iDevices with OpenGL ES 1+

Hey iDevBlogADay,
This, sadly, is my last post for this cycle, but I promise I’ll be back. It’s been a lot of fun being on the rotation, and it helped me a lot to share my findings.
However, for this final post, I've picked something special. A lot of people have asked me how we do the motion blur in our latest game Nuts! and our very-soon-to-be-released game Zombie Gunship. The technique is by no means new, but its simplicity and the fact that it works so beautifully on the iDevices really seal the deal for me.
Showcase
First of all, let me give you a few reasons why the motion blur is so cool.
In the case of Nuts!, it is actually pretty well hidden. The only place where you can see it is when you pick up a fireball nut. But as you can see in the screenshots (and even more so when you play the actual game), the motion blur adds a lot of “speed” feeling to those nuts. The whole fireball effect is a lot more convincing with it. Interestingly, the motion blur is only used in those situations and runs at half the resolution of the rest of the game. But that is not noticeable, because of the temporal blurring. Even when the resolution switches back to the full 640×960 once the effect has worn off, there is no noticeable popping.

In the case of Zombie Gunship, the visuals of the whole game are essentially built around this effect. It gives the game that 80s warplane-targeting-computer look and an artificial “imperfection”. Also, as you can see in the screenshots, we're actually running at quite a low resolution (480×320), and the models are quite low-res as well. But with the motion blur, the game looks a lot smoother, and it's harder to make out individual pixels.
Since it is a temporal blur by nature, it is actually harder to see in screenshots 🙂
How it’s done
The best thing about this technique is that it's super simple. It even works in OpenGL ES 1, and like many post-processing effects, it can be dropped into a game very easily.
In a traditional rendering setup on iOS, we would bind the final framebuffer, then draw the solid geometry, the blended geometry, and then the UI on top. Finally, we would present the renderbuffer, and the frame is done.
With motion blur, instead of rendering into the final framebuffer, we render into an intermediate framebuffer that is backed by a color texture. For us, this buffer is usually half the size of the final framebuffer. Once we've rendered the solid and blended geometry into this buffer, we enable alpha blending and render the intermediate texture into a so-called accumulation buffer with an alpha value smaller than one. This accumulation buffer is only cleared when the blur begins. Finally, the accumulation buffer is rendered into the final framebuffer.
In pseudocode, it looks something like this:
Traditional Rendering:

ActivateFinalFramebuffer();
Clear();
RenderScene();
RenderUI();
Present();

With Motion Blur:

ActivateIntermediateFramebuffer();
Clear();
RenderScene();
ActivateAccumulationFramebuffer();
// No clear here!
RenderIntermediateTextureWithAlpha(alpha);
ActivateFinalFramebuffer();
RenderAccumulationTexture();
RenderUI();
Present();

As you can see, you “just” need to add a few calls to your -(void)draw method in order to add the motion blur, and you can turn it on and off on the fly.
The smaller the alpha, the longer the blur, because less of each pixel is “overwritten” every frame. In the frame a pixel is rendered, its contribution to the final pixel value is alpha; one frame later it is alpha*(1-alpha), then alpha*(1-alpha)^2, and so on, so it slowly fades out over time.
Of course, alpha can be varied every frame. We use that in Nuts! to slowly fade out the fireball effect at the end.
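For concreteness, here is a rough OpenGL ES 1.x sketch of the accumulation step. This is not the actual Limbic code; the function and the DrawFullScreenTexturedQuad() helper are placeholders of my own.

#include <OpenGLES/ES1/gl.h>
#include <OpenGLES/ES1/glext.h>

// Hypothetical helper that draws a viewport-filling textured quad.
extern void DrawFullScreenTexturedQuad();

void AccumulateFrame(GLuint accumulation_fbo, GLuint intermediate_texture,
                     float alpha) {
  glBindFramebufferOES(GL_FRAMEBUFFER_OES, accumulation_fbo);
  // No glClear() here: the previous frames must stay in the buffer.
  glEnable(GL_TEXTURE_2D);
  glBindTexture(GL_TEXTURE_2D, intermediate_texture);
  glEnable(GL_BLEND);
  glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);  // dst = a*src + (1-a)*dst
  glColor4f(1.0f, 1.0f, 1.0f, alpha);  // a smaller alpha means a longer trail
  DrawFullScreenTexturedQuad();
  glDisable(GL_BLEND);
}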
Two small remarks
One simple optimization idea would be to use the final framebuffer as the accumulation buffer. This would save one full-screen quad rendering operation. However, the framebuffer on iOS is at least double buffered. That means every second frame has a different render target, which leads to a very choppy and mind-twisting blur effect. Also, if you want to display non-blurred components, such as UI and text, they should be rendered into the final framebuffer after the accumulation buffer has been rendered.
Another thing to note is that the first frame of the effect needs to use alpha=1, e.g. when the fireball nut is picked up in Nuts!. This makes sure the accumulation buffer is properly initialized and doesn't contain stale data.
Conclusion
If you like what you read, consider following the official Limbic Software twitter account and of course buying our great game Nuts! 🙂
Cheers, see you next time!

Multithreaded Rendering in iOS Pt. 2

Hey #iDevBlogADay,
This is an update to my previous post about multi-threaded rendering, and some thoughts about leveraging the A5 processor in the iPad 2.
The Update
I've finally managed to track down the issues that broke the multi-threaded rendering. It turned out that, for reasons unknown to me, the [context renderbufferStorage:fromDrawable:] call has to be performed on the main thread. If it isn't, it simply won't work.
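One way to deal with this (a sketch of my own, not necessarily how LimbicGL does it) is to funnel just that one call onto the main thread using the plain C GCD API. The AllocateRenderbufferStorage() helper below is a hypothetical wrapper around the Objective-C renderbufferStorage:fromDrawable: call:

#include <dispatch/dispatch.h>
#include <pthread.h>

// Hypothetical wrapper around the Objective-C
// [context renderbufferStorage:fromDrawable:] call.
extern void AllocateRenderbufferStorage(void *context);

static void AllocateOnMainThread(void *context) {
  AllocateRenderbufferStorage(context);
}

void SetUpRenderbufferStorage(void *gl_context) {
  if (pthread_main_np()) {
    AllocateOnMainThread(gl_context);
  } else {
    // Block until the main thread has performed the allocation.
    dispatch_sync_f(dispatch_get_main_queue(), gl_context, AllocateOnMainThread);
  }
}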
After I found this out, I was able to get all my methods to work, and I added a new method. Here is a little summary:
  • The single-threaded method does everything on the main thread and is just for reference.
  • The GCD method uses a display link on the main thread to kick off the rendering on a serial GCD queue that runs on another thread. Display link events may get dropped if the main thread is busy.
  • The threaded method uses a display link on a separate thread that kicks off the rendering on the same thread. Display link events may get dropped when the rendering takes too long.
  • The threaded GCD method combines the GCD and threaded methods. It runs a display link on a separate thread and kicks off the rendering into a serial GCD queue that runs on yet another thread. It is completely decoupled from the main thread, and the rendering doesn’t block the display link either. Hence, the display link should be very reliable.
I didn't conduct any real performance measurements to see which method is better. However, I personally like the last approach. It should minimize blocking, and one nice benefit is that it is very easy to count frame drops (the GCD queue is still busy when the display link fires again).
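In broad strokes, the dispatch part of that last method could look like the following sketch. The display-link setup itself is Objective-C and not shown; OnDisplayLink() stands in for its callback, and RenderFrame() is a hypothetical function that issues the GL commands.

#include <dispatch/dispatch.h>

// Hypothetical function that issues the GL commands for one frame.
extern void RenderFrame();

static dispatch_queue_t render_queue;
static volatile bool frame_in_flight = false;  // a real app would use an atomic
static int dropped_frames = 0;

static void RenderTask(void *) {
  RenderFrame();
  frame_in_flight = false;
}

void CreateRenderQueue() {
  render_queue = dispatch_queue_create("com.example.render", DISPATCH_QUEUE_SERIAL);
}

// Called by the display link on its own thread, once per vsync.
void OnDisplayLink() {
  if (frame_in_flight) {
    ++dropped_frames;  // the render queue is still busy: a dropped frame
    return;
  }
  frame_in_flight = true;
  dispatch_async_f(render_queue, NULL, RenderTask);
}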
In addition to getting it to work, I’ve also added a very simple asynchronous .pvr texture loader.
The code is available at https://github.com/Volcore/LimbicGL .
The Thoughts
Based on the above results, I’ve been thinking about how to write a renderer that properly utilizes the A5 chip.
Before the A5, we had to balance three principal systems: the CPU, the tiler (which transforms the geometry and feeds it to the rendering tiles), and the renderer (which renders the pixels for each tile).
Balancing between tiler and renderer is app-dependent and somewhat straightforward: if tiler usage is low, we can use higher-poly models “for free”. And if renderer usage is low, we can do more pixel shader magic. If both are low and the game still runs slowly, it's probably CPU-bound.
Now, with the A5, there is an additional component in the mix: a second CPU core. The golden question is: how can we use this effectively in a game?
Here are some of my ideas:
  • Run the game update and the rendering in parallel. This requires double buffering of the game data, either by flip-flopping or by copying the data before every frame. Interestingly, this works well with the threaded GCD approach from above: we can kick off a game update task for the next frame on a separate serial GCD queue at the same time we render the current frame, and both run in parallel (see the sketch after this list).
  • After the game update is done (this should only take a fraction of a frame unless you do some fancy physics), we can pre-compute some rendering data:
  1. View frustum culling, occlusion culling, etc.
  2. Precompute skinning matrices, transformations
  3. CPU skinning. Instead of handling the forward kinematics skinning in the tiler on the GPU, we could run it on the CPU. This is more flexible, since we're not bound by the limits of the vertex shaders (the limit on the number of matrices comes to mind). I'm uncertain about the performance benefits here; it's a trade-off between CPU and DMA memory bandwidth versus tiler usage. I think this may pay off very well in situations where one mesh is rendered several times (shadow mapping, deferred shading without multiple render targets, multi-pass algorithms in general). One of the biggest drawbacks is that the memory usage is (#instances of mesh * size of mesh) versus just one instance.
  4. Precompute lighting with methods such as spherical harmonic lighting, where the results can be baked into the vertex colors. This could even run over several frames and then only be updated at a certain rate (e.g. every 10 frames).
  5. Procedural meshes and textures. This is interesting, and mostly depends on a fast memory bandwidth, which the A5 should provide.
  • Asynchronous loading of data (textures, meshes). This is mostly limited by I/O, though some interesting applications (such as re-encoding or compression) come to mind.
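Here is a small sketch of the first idea: parallel update and rendering with flip-flopped game state on two serial GCD queues. GameState, UpdateGame(), and RenderGame() are placeholders of my own, and the snippet assumes clang's blocks extension (the default on iOS):

#include <dispatch/dispatch.h>

struct GameState { /* positions, skinning matrices, ... */ };

extern void UpdateGame(GameState *state);        // game logic, placeholder
extern void RenderGame(const GameState *state);  // GL rendering, placeholder

static GameState states[2];  // double-buffered game data
static int write_index = 0;  // the buffer being updated this frame

// Both queues must be serial queues.
void RunFrame(dispatch_queue_t update_queue, dispatch_queue_t render_queue) {
  const int read_index = 1 - write_index;
  // Update the next frame's state while the current one is being rendered.
  dispatch_async(update_queue, ^{ UpdateGame(&states[write_index]); });
  dispatch_async(render_queue, ^{ RenderGame(&states[read_index]); });
  // Join: an empty dispatch_sync on a serial queue waits for the work above.
  dispatch_sync(update_queue, ^{});
  dispatch_sync(render_queue, ^{});
  write_index = read_index;  // flip-flop the buffers
}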
I'm going to try a few of these over the next month; I hope I'll have some nice results and insights to share 🙂
As my closing words: We live in exciting times for mobile GPU programming! <3