If you’ve been reading the latest development updates, the name myconid should be familiar to you. If not: he’s the one who has been adding lots of new graphics capabilities to the game, which have made it look a lot better (take a look at the latest development updates or the last release announcement, if you haven’t already, for a glimpse of what he has made possible). Because of that, it was not a difficult decision to make him the featured contributor this time. Without further ado, here’s a quick interview with him:
Tell us a bit about yourself
My first encounter with programming was on a Mac Classic II, where I discovered HyperCard and BASIC – I feel ancient just thinking that was over 20 years ago! A bit later I got a PC demo of Macromedia Director 4 (the Flash of its time), which was in many ways similar to HyperCard, but its Lingo scripting language was much more flexible… and it had colour too.
Then I found Borland Delphi 1 Freeware on a PC magazine’s cover CD, and that’s when I really started getting into programming. Maybe it won’t surprise anyone that my first big programming project (sometime circa 1996) was a collaboration with a friend to make an adventure game engine in the style of SCUMM (think Monkey Island, etc.). We actually got pretty far, and had our own scripting language, pathfinding, high-resolution graphics…
Anyway, after a while I moved to Borland C++, discovered the internet and then the Open Source community. A couple of Computer Science degrees plus a “few” lines of code later, programming is my full-time career. Since in my day job I don’t get to make games, here I am indulging my 27-year-old inner man-child!
What have you found most interesting about contributing to 0 A.D.?
Well, two things.
Firstly, I’m here because I enjoy the challenge! I love hacking (in the R. Stallman sense), and graphics programming definitely fits the bill, being both challenging and intellectually stimulating.
Secondly, I’ve always been a huge fan of open source and libre culture, and I think it’s an absolute triumph that a team of highly talented people has come together from all over the world to create a game of such quality. I’m both amazed and humbled by what the free, worldwide internet has enabled, and I’m proud to be part of it all.
Anything you want to add?
Now that the postproc manager is over and done with, I’m wondering what my next task will be. Have you ever noticed that Pyrogenesis can’t really do lighting? As crazy as this sounds, maybe it’s time we replaced our forward renderer with a deferred lighting solution*, as used by modern AAA titles like StarCraft II… though you can forget about keeping the fixed renderpath in that case.
* I’ve been asked to explain WTF those things mean, but the explanation came out a bit longer than I had intended:
Forward rendering calculates the textures, surface effects, lighting, etc., as each object is drawn to the screen. In a scene with n lights, every point on each object’s entire surface (usually regardless of whether it’s visible) needs to be shaded n times, and in a game like 0 A.D., where you can theoretically have dozens of lights on the screen at the same time, things will get really slow really fast. Moreover, since there is a limit to how many parameters you can pass to the GPU at once, in practice only a handful of lights can be rendered in one go on each object, so objects need to be rendered in multiple passes (and each pass repeats a lot of work).
With deferred lighting, we don’t render the final scene directly. Instead, we first render various layers of information into several buffers, then recombine those buffers in a second stage to create the finished image. The advantage is that scene information is only calculated once and can then be reused. Further, we can avoid shading the surfaces of objects that are hidden behind other objects, since by the second stage we already know which parts are obscured. Finally, we can have an essentially unlimited number of lights: lighting becomes a postprocessing effect applied to screen pixels instead of to each and every object surface.
Deferred lighting requires very precise control over what is rendered, is architecturally very different from forward rendering, and in practice is only feasible if you can render multiple layers of information simultaneously. It’s impossible to implement on the old fixed renderpath, which is very basic and inflexible, can’t run user code on the GPU, and can’t render to multiple buffers at once.