Advertising professionals, wake up! Sink or swim, but bring your smartphone.

Games as a medium for advertising, social interaction, and branding have been around for over a decade now, yet they haven't had much consistent impact so far. I'd dare say they're still considered a novelty or a nice campaign add-on by too many advertising professionals.

However, the disruptive events in the online world over the last four years have drastically altered the landscape. Suddenly (but not unexpectedly) gaming has exploded, and can no longer be seen as a secondary field of interest by innovative advertisers and digital agencies.

Consumers now spend more time on mobile apps than they do in the browser, even when desktop and mobile browser usage is combined. Furthermore, the largest chunk of app use is spent on game apps. Consumers spend 32% of their device time playing games, 17% on Facebook, and only 14% using their mobile browser*. As Forbes stated earlier this year, "The mobile browser is dead, long live the app."

The trend suggests it won't be long before consumers spend more time in mobile games than they do in the browser on any platform. Period. In an app marketplace where games drive 79% of Apple's App Store revenue**, as a game developer I would add: "the browser is dead, all your consumers are belong to us".

We’re not talking about some distant future either… right now games are the biggest piece of the mobile cake.

The advertising industry is taking note, and in-game advertising is on the rise.  However, with the projected global ad spending on “mobile internet” reaching only a meager 6.2% in 2016 ***, we’re still talking baby steps.

It's astounding that so few digital agencies and innovative advertisers have caught on. Mobile is not just happily adding new market share, it's also cannibalising the desktop. This should be evident from your own online campaign analytics, compared to those of a few years ago.

Anyone considering a transition to mobile should also understand that Apple and Google have changed the playing field. Yesterday’s free and untamed internet is slowly being replaced with manicured and carefully maintained walled gardens. Apple, for example, controls the hardware, the software, the user’s permissions and most importantly the payment system, and through this combination of factors: access to the users.

This garden will soon contain the most cherished and valuable consumers, if not the majority of the digital consumer market. Apple, Google, Amazon, and all those others who are scrambling to create their own walled gardens, have no interest in breaking down their walls. If you want to enter, you play by their rules, and pay 30% of your revenues.

Harvesting from the garden therefore, will come at a tremendous price. Not only in adapting to the new paradigm, but also in physical investment, and the toll payable to the platform holders.

Please realise that every important buzzword of the last few years – internet of things, location-based services, mobile wallets, health tracking, iBeacons and many more – is locked behind the wall. Even if they still have a desktop use, they cannot function outside the garden.

That means if you wish to participate in the future, you need to learn to play the mobile game. Abandon web and browser as a primary medium for advertising and engagement. Prepare to labour for those valuable permission screens. Learn to create app real estate that will reach out and touch consumers, getting their feet on the pavement and your brand on their minds.

In a digital world where users critically manage their home screens, gaming is currently the ultimate, if not the only, Trojan horse.

The mobile gaming industry has proven to be the most successful at engaging consumers within the walled gardens, generating over 23 billion USD of revenue from mobile consumers ****.

It does this chiefly by triggering in-app purchases. Players' loyalty is staggering, driving not only engagement of incredible longevity but also a tremendous number of purchases. And yet the advertising industry seems stuck on banners and lead-in videos, which teach the consumer nothing other than to click away.

The psychological hooks in gaming allow the gaming industry to create audiences that will stick to games month after month, year after year, and to retain spending and other behaviours*****. King, the company behind Candy Crush Saga and others, is only 600 people strong and spends less than 6% of its revenue on actual development. Yet it still manages to attract a whopping 124 million players a day, with nearly 408 million people returning monthly to play and pay.

What can we learn from this? One way we're starting to use these same hooks for brands is to combine social/brand loyalty with gamification. Rather than focusing on campaign-based interactions with consumers, we are creating long-lasting digital showcases of consumer loyalty and the rewards that go with it. However, the trend of gamification is also a dangerous one, as it risks obscuring the most important thing about games: that they should be fun!

Gaming is about play. It’s not a medium or a channel, it’s not a spreadsheet filled with loyalty versus reward ratios. It’s a basic human activity that is natural to every single human on the planet (and their pets). Gaming is fun because it is our natural method for learning, and learning is rewarding because it makes us feel better about ourselves. It’s not just about the dopamine winning gives you. The ultimate reward is personal growth, learning, achievement, triumph, and the knowledge that you have had to earn it.

Human beings can learn nearly anything and feel that joy of playing. Music, sports, news and other thematic ties to a brand have to be carefully nurtured to work, but playing will work with nearly any brand or story.

It’s something we as game designers have built our industry upon. We’re not providing a means to an end; when we succeed, we provide instant happiness, instant satisfaction, and instant personal growth. Even if it’s just you getting better at navigating a flappy bird around a level.

If we were to draw one conclusion from the current market, it is this: it's baffling that in the huge success story of mobile games, branded content and advertising have played only a bit part.

Now that the initial gold rush of mobile gaming has come and gone, gaming has taken its seat at the head of the table that is digital content. The gaming industry has accepted change and disruption like no other, and has made a point of being first at the party. The advertising industry can no longer afford to be a latecomer.

Consider this carefully when discussing loyalty and digital engagement. The advertising industry doesn’t need to reinvent the wheel here… it exists already and you’re probably already hooked.






About the author:

Tomas Sala is Creative Director and co-founder of Little Chicken Game Company. Through his Amsterdam-based studio, he has been designing and creating award-winning applied games since 2001.

After 15 years in the industry, Tomas has come to realise that in fact, he doesn’t have all the answers but that’s okay, because neither does anyone else. The trick is to find them together.

Tomas believes the two best things in the world are his dog, and telling stories through great games. He regularly sets out on independent design quests, leading to the creation of the popular Skyrim mod series "Moonpath to Elsweyr" and the (hopefully) upcoming indie game "Oberon's Court".



Delegation in game code structures

written by: Joris van Leeuwen


We're currently working on a game with a production team of 3 artists and 3 programmers. During development of the game's prototype we heavily used delegation in the main code structure. This resulted in such an easy-to-read codebase that we've decided to use delegation in our main code structure for production as well. I decided to write this blog post to share some of our experiences using this technique!

Knowing this technique can help you build codebases that can rapidly respond to game design changes.

This article is meant for programmers that are interested in pros and cons of delegation. You can expect code examples in C# and solutions for hierarchies in code. Please note that this is not a “you should do it this way” article. Just my thoughts on the subject that I’d like to share!

Coding for redesigns

During the making of a game you always learn how to make it better; this is why you should iterate. Try something out, learn from it and make improvements accordingly. I think game programmers should therefore always prepare for redesigns in game mechanics. In my opinion, good game code embraces redesigns by making it very easy to change or rewrite parts of the code structure later on.

Hierarchy and encapsulation

To easily change or rewrite parts of the game, you make distinctions between functionality. This is done by creating classes and putting them in a hierarchy. Each class should know as little as possible about other classes; concealing functionality makes code easier to read because you don't have to travel around to other classes for it to make sense. This is called encapsulation.

By creating a hierarchy you effectively distribute the responsibility of functionality to separate classes. Parents manage their children, who serve as parents for their children and so on. Parents should only be able to access what they need from their children (I don’t know how you do it, just get it done!). Children shouldn’t know anything about their parent. This way children can easily be attached to another parent without breaking anything. If there is a redesign in game mechanics, the programmers can decide from what point in the hierarchy the code should be altered or rewritten and the rest of the code can stay as it is.

Communicating upward in a hierarchy

How is it possible to communicate upwards in a hierarchy? Or more explicitly: how can a child say something to its parent if it isn't allowed to know about the parent's existence? Let's say the parent needs to know when the child is hungry so it can enable a pizza alarm. How does the parent find out?

Well, the parent could, for example, ask the child if it's hungry every second of the day. If the child were hungry, the parent could enable the pizza alarm. But this sounds kind of strange… it feels inefficient for the parent to ask its child whether it's hungry or not every second of the day.

Another option would be for the child to tell its parent to enable the pizza alarm when it feels hungry. But that would break encapsulation. The child would know that the parent is able to enable the pizza alarm. And in addition to that, the child would even know that a pizza alarm exists! What if later in the project the game designers want to change the pizza alarm to a mother who actually cooks dinner? Multiple classes would have to be changed for a single change in game mechanics.

What would be much more efficient is for the parent to tell its child to notify it when it's hungry, so the parent can react accordingly. The parent only has to say this once, and the child doesn't have to know what its parent is going to do after being notified. This is called delegation.

Delegation is a way to communicate upwards in the hierarchy tree, without losing the ability to split context and functionality. It enables parents to “listen” to their children and perform actions when they are triggered.


To show what a delegate system looks like in C#, let's start with how the callbacks are used from the Parent's perspective.

class Parent {
  PizzaAlarm pizzaAlarm;
  Child child;

  public Parent() {
     pizzaAlarm = new PizzaAlarm();
     child = new Child();

     //Make the child trigger the
     //OnChildHungryHandler method when it gets hungry
     child.OnHungry += OnChildHungryHandler;
  }

  void OnChildHungryHandler() {
     //Respond to the child being hungry
     pizzaAlarm.Enable();
  }
}

The child has a callback named OnHungry. In this example the method OnChildHungryHandler is assigned to the callback. When the child internally decides to become hungry, it calls the OnHungry callback, triggering the OnChildHungryHandler callback-handler in the parent. Notice the +=? Yes, it is possible to assign multiple handlers to one callback!

The beauty of this is that the parent doesn't need to know how the child gets hungry. It only needs to know when it does, so it can respond. The child also doesn't have to explicitly tell the parent to enable the alarm, enabling us to encapsulate that functionality within the parent class. This way nobody has to search around other classes for the parent class to make sense, making the parent class much easier to read.

So what does this system look like in the child class?

class Child {

  //Define the delegate type
  public delegate void HungryDelegate();

  //Declare the callback
  public HungryDelegate OnHungry;

  bool isHungry;

  void Update(){
     if (isHungry){ //insert your own hunger logic here

        //Check if the callback has a handler
        if (OnHungry != null){
           //Perform the callback
           OnHungry();
           isHungry = false;
        }
     }
  }
}
First a delegate type is defined. This is done to force the callback-handler to have a certain set of parameters and return type. In this case only methods with a void return type and no parameters can serve as a callback-handler.

After defining the delegate type, it can be used to declare the callback OnHungry. This is a public field so it can be accessed by the Parent. From within the Child class the OnHungry callback can be treated as if it were a method. A callback without a handler is null, though, so when it's uncertain whether the Parent has assigned a callback-handler, the callback must always be null-checked before it's called.

The child has no idea who is using its callback, which is great! If there were a redesign in the game mechanics, it would now be easy to detach the child from its original parent and attach it to another without rewriting anything in the child!


We had some issues with the naming conventions of our delegates and renamed everything a few times. In essence, there are three types that will need a name. These are the naming conventions that we’re using:

  • Delegate Type: HungryDelegate
  • Callback: OnHungry
  • Handler: OnHungryHandler

The reason we put "Delegate" as a postfix on our delegate types is that just "Hungry" misses context. The handler still feels a bit long, but removing "Handler" would give it the same name as the callback, creating issues when defining a handler within the class that has the callback itself.

Don't name a callback something like "OnAlarmPizzaNow". To take full advantage of delegation, it is essential that the name of a callback does not describe what happens as a result; this keeps the context valid after redesigns. OnHungry doesn't say anything about the actions that follow and is thus more reusable after design changes.

Our issues with delegation

There are some downsides to the way we're using delegation. One downside is that assigning all the callbacks in a class with a lot of different children can turn into a wall of text. We use newlines between different instances when assigning the handlers, but it's still a big list.

Another issue arises when the parent of a parent of a parent needs to be told that something is happening. This requires a whole stack of callbacks before the action that should follow is actually triggered. This feels awkward, but we haven't found a solution that maintains the distributed responsibility of the parents in the hierarchy. You could suggest using an event messenger system that skips a few layers of the hierarchy, but using event messages for actual game mechanics always results in spaghetti code in my experience.
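To illustrate, here's a minimal sketch of what that callback stack can look like: an extended version of the Parent class from above re-exposes its child's callback one level up. The GrandParent class and the OnChildHungry callback are made up for this example.

class GrandParent {
  Parent parent;

  public GrandParent() {
     parent = new Parent();

     //The grandparent never touches the child directly;
     //it only listens to the callback the parent re-exposes
     parent.OnChildHungry += OnChildHungryHandler;
  }

  void OnChildHungryHandler() {
     //React at the top of the hierarchy
  }
}

class Parent {
  //Re-expose the child's callback one level up
  public delegate void ChildHungryDelegate();
  public ChildHungryDelegate OnChildHungry;

  Child child;

  public Parent() {
     child = new Child();
     child.OnHungry += OnChildHungryHandler;
  }

  void OnChildHungryHandler() {
     //Forward the callback to whoever is listening above us
     if (OnChildHungry != null){
        OnChildHungry();
     }
  }
}

Every extra layer needs this same forwarding boilerplate, which is exactly why deep hierarchies make this pattern feel awkward.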

Recursive loops are always a threat, but with delegation they are sometimes harder to detect. This happens when a parent listens to a child and, in its handler, performs an action on the child which makes the child perform the callback again, and so on.

There's also no clear stack-trace of what is happening. When trying to debug the handler of a callback, the entry point in the stack-trace is the handler itself, making it hard to trace where a problem is coming from.

The last big issue we've experienced is that you can easily forget to remove callback-handlers without noticing. If they aren't removed, the system can never wipe the child's memory because the parent is still referencing it, resulting in errors that are pretty hard to trace back. This is the scary part of delegation and should be handled with care. All delegate callback-handlers that are attached to another object should be removed at some point. This responsibility can be given either to the parent, which removes the handlers it has given to the child by using -=, or to the child itself, which sets its callbacks to null when it gets destroyed.
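For example, the parent-side cleanup could look like this variation of the Parent class (the DetachChild method is a made-up name for wherever your teardown happens):

class Parent {
  Child child;

  public Parent() {
     child = new Child();
     child.OnHungry += OnChildHungryHandler;
  }

  //Call this before discarding the child, otherwise the
  //handler reference keeps the child from being cleaned up
  public void DetachChild() {
     child.OnHungry -= OnChildHungryHandler;
     child = null;
  }

  void OnChildHungryHandler() {
     //Respond to the child being hungry
  }
}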


Delegation proved to be a really cool technique that makes implementing redesigns of game mechanics easy, without functionality having to be thrown away because it is too tightly coupled.

In the end it always depends on what kind of project you're working on. How much of the game design is already set in stone, the deadline and the size of the team should all have a big impact on your coding strategy. In our case we're making a game in a team with programmers coming and going, and a project whose game mechanics can be redesigned at any time. We like using delegation, and I'd very much recommend experimenting with it when setting up a coding strategy for a comparable project!


Something that is not discussed here but is very useful when working with delegation is C#'s built-in Action type. Find more about actions here!


Making textureless 3D work

Creating a textureless "pure3D" look


(This is a post about Oberon's Court, a fantasy RTS/RPG game being developed by Tomas Sala (@littlechicken01), who is also one of the co-founders of Little Chicken Game Company. However, the game is not an official Little Chicken production. You can keep up to date on the game at

One of the ways I decided to challenge myself when starting Oberon's Court was to create a visual style that does not use any textures. There are two reasons for this.

First of all, it looks beautiful. Growing up with pixel games and the advent of 3D instilled in me a deep appreciation for 3D graphics. But pixel art has shown me that you can take any visual tech and distill it to its purest form. Creating 3D art without the use of textures does exactly this: it distills your modelling, animation and visual skills. Henceforth I shall call this style "pure3D" (feel free to add that to your technobabble jargon).

Secondly, it's very efficient: taking away the need to unwrap and texture a model removes a significant chunk from the development process. However, you will need to adjust your models and shaders to compensate for the lack of definition, seeing as you're also losing a tool from your palette.

This post is a how-to on approaching this style. It is by no means a "one-and-only" guide, as many artists and indie devs use wildly different approaches to creating a stylized and purist 3D aesthetic, but it is how I did it for Oberon's Court.

In this first post I'll describe the general setup and go through the process step by step.

Regarding shader code:

I'll do an in-depth look into each shader and how to create them yourself in Shaderforge or Strumpy in part two of this post. For those eager to experiment, here's a copy of the entire shader library I used for Oberon's Court:

In the meantime, please do download/purchase these two awesome shader tools for Unity3D. I highly recommend them, as they are essential tools in my development process.

Download Strumpy for unity3d
Purchase Shaderforge for unity3d 

Let's get started!

Evolving the aesthetic

When I started creating the visuals of Oberon’s Court I was very much into compensating for the lack of textures, adding lights and accents to maximize the impact of the style. However, I quickly found that increasing contrast, when you have nothing but gradients, edges and solid colors to work with, does not improve the clarity of a game. Even though visually striking, it was hard to discern units and foreground items from the background.

First true textureless Unity3D test. Very striking, but sadly not very readable from the in-game perspective.

As development progressed I found myself removing and subtracting visual effects to create a visual style where the player could easily discern the "shadow" units from their environment. Additionally, I found it a very pleasing experience to subtract effects instead of adding more and more.
If you compare the earlier screens with the latest screens, you can see that some of the initial tests were visually more striking, but they weren't very suitable for gameplay. I'm not saying you can't make a game using a more high-contrast approach, but it was not the game I was designing.

Cluttered and hard to read.

The final style.

The ingredients

To create the pure3D style I used a couple of recurring ingredients and themes. I'll go through them here, explaining which techniques are involved and how I achieved the in-game results.

In essence the style is very, very simple. There is nothing new here for most game artists. But combining these ingredients can lead to striking results when you remove texturing from the process.

  • using smoothing groups as shape definition
  • using additional dark/light vertex color data, such as radiosity solutions
  • using UV coordinates to create color gradients and color mixing
  • using shaders to add shape-enhancing effects (fresnel etc.) to the model
  • using unlit shaders (without lights), but with realtime shadows
  • using select post-fx (bloom) to enhance/soften the look

1. Using smoothing groups as shape definition

When working without textures, you lose an important tool for enhancing the perceived depth of your 3D model. This is especially true in mobile development, where textures are often the main way to enhance the visual complexity and shape of an otherwise very low-poly model.

In Oberon's Court I used smoothing groups to enhance the definition of my 3D models. Usually smoothing groups are discarded, as normal maps and other techniques are used to describe the angles of faces. Nowadays we're more used to creating high-poly models and distilling that angular (normal) data into a texture. When you have the full dataset of a hi-res texture, smoothing groups seem quaint and only of passing interest. But used effectively, they are a beautiful tool for enhancing the shape of your model, both as it's being lit and for use in shaders.

Some notes:

  • Unity does accept multiple normals per vertex, so your smoothing groups will transfer to Unity3D's lighting models intact. The same does not hold for vertex colors (more on that later)
  • Edge triangulation: when creating smooth surfaces you'll occasionally need to re-triangulate to get the smoothest look. Just make sure you're modelling in basic shaded preview mode
  • Create smoothing groups that enhance the edges of your model
  • Break up over-convex shapes: if the angle between two faces is too big (too sharp), do not try to smooth over it; break it into two smoothing groups
  • Create enough faces so you can create interesting edges and protrusions, which the smoothing groups can enhance.
  • Use a limited number of smoothing groups, just for practicality. You can reuse smoothing group IDs, as long as they don't touch another set of faces with the same smoothing group ID
Creating an interesting hill shape by extruding faces.

Selecting faces to create smoothing groups, and the end result in preview shading.

2. Using additional dark / light vertex color data, such as radiosity solutions

When working with shaders you want to be able to add as much additional information as possible into a 3D model. Most shader models work with data stored in textures (normal maps, diffuse maps etc.), but also with data stored in the geometry and vertices (points). The simplest form is the position: the normals and basic data required to display the model. But you can keep adding more data to each vertex. This can be physics data for physical materials, but also additional lighting data. In Oberon's Court I used 3D Studio Max's ability to render a radiosity solution at vertex level and save it to vertex colors, basically creating dark-light shading for an object based on a complex lighting solution. This allowed me to darken and lighten the model according to a pre-determined lighting scheme.

Some notes:

  • Using skylights is a quick way of creating a soft outdoor look with radiosity
  • Object and vertex colors will influence the color of other faces, as light literally gets bounced off the geometry, and thus radiates color
  • Don't worry if the radiosity is not very precise; you can add extra tessellation and vertices to improve quality. We only need a hint of dark-light to give depth to a model. It doesn't need to be realistic, nor perfect
  • Only add radiosity after you’ve done the smoothing groups, as smoothing is taken into account during the calculations
  • Radiosity is found in the scanline renderer of 3D Studio Max and is an advanced lighting method. (Similar features are available in Maya, I believe)
  • You can make the radiosity permanent by using the vertex colors modifier and the "assign vertex colors" function in the roll-out
  • Detach each smoothing group! Unity does not retain multiple vertex colors per vertex over the same channel, so if you want to keep both the radiosity solution and the sharp smoothing edges, you must detach each smoothing group into a separate element.
First detach all the smoothing groups, so they're separated, then perform the radiosity calculations.


First results of the radiosity solution, then assigned to the vertex colors and graded. Notice how the smoothing groups really pop out the shape of the models.

3. Using UV coordinates to create color gradients and color mixing

In order to differentiate height in the environment, I used color gradients. The easiest way to implement these would be to create a texture, but seeing as I had committed to not using textures, that wasn't an option. Besides, gradients are something shaders can do quickly, without much fuss. To create a gradient we need information on both direction and length.
Initially I used world position data for this, to create a true height-map type effect. However, this approach is calculation-heavy, as the shader needs to retrieve the world position of each individual vertex, so it is not very suitable for mobile use. After coming to this conclusion, I decided to use UV coordinates to achieve the same results.

Simple shader that turns UV coordinates into a gradient.

You can even use UV coordinates to create not just gradients but entire dynamic gauges and bars, using shaders.

Creating the shader (using the Strumpy shader editor):

  1. Create a model with a simple UV set (in 3dsmax, simple planar mapping)
  2. Create a gradient by lerping between two colors based on either U or V component
  3. Floor or ceil the gradient value (makes it a hard line, either zero or one)
  4. Add a sine deform based on time, to the gradient value (sinetime)
  5. Add an offset to make the gradient rise or fall (add float value)

You end up with this:

This stuff will stay sharp at any resolution and zoom level. All the way until your floating point units go BLERGH!
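Expressed as plain code rather than shader nodes, the logic of those five steps boils down to something like this. This is just a C# sketch of the math, not an actual shader, and the names are made up:

using UnityEngine;

public static class GradientSketch {
    //a = bottom color, b = top color, v = the V coordinate of the vertex (0..1)
    public static Color Evaluate(Color a, Color b, float v, float time, float offset) {
        float t = v + Mathf.Sin(time) + offset;  //steps 4 and 5: animate and shift the gradient
        t = Mathf.Clamp01(Mathf.Floor(t));       //step 3: hard cut, either zero or one
        return Color.Lerp(a, b, t);              //step 2: pick between the two colors
    }
}

Skip the Floor step and you get a soft gradient instead of a hard, animated line.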








4. Using shaders to add shape enhancing effects (fresnel etc.) to the model

When writing a shader you can combine gradients, vertex color data, and finally some shader-specific effects to create a nice unlit shading that is quick to render and accentuates the geometry, instead of hiding it.

The shader algorithm I used is really simple and can be summed up as follows:

  • Create a top down gradient for basic colors
  • Add a fresnel effect (a shiny rim effect based on the normals of your model)
  • Mask the fresnel effect with another gradient (so the shiny rim only applies on the top of the model, not the flat floors)
  • Multiply all with the radiosity vertex colors (adding a dark / light shading to everything)
  • Add a realtime shadow layer to the unlit shader
  • Tweak until right

Here's how this looks in the Strumpy shader editor, and here's a link to the Strumpy shader file

Creating two gradients from UV coords: one to make a color gradient, and one to mask the fresnel effect.

Mixing the fresnel, color gradient and vertex colors, and outputting to emissive (unlit).

At one point I even multiplied the gradient with the Y component of the vertex normal, in order to have one color on flat horizontal surfaces and another on vertical surfaces, creating a true height-map look. However, I discarded this as too cluttering in the scene.

Here’s a link to a decent description of what “fresnel” means:

No fresnel effect on the left, and a red fresnel applied on the right, masked off towards the bottom.

5. Using unlit shaders, without lights, but with real-time shadows

Nowadays, in Unity3D 4+, you can use real-time directional shadows on mobile platforms, which is great. However, shadows in ShaderLab (the in-between system Unity uses for cross-platform compatibility) are part of the lighting calculations. This means that if you make an unlit shader, you have to add shadows in a separate pass, as making shaders unlit or emissive excludes the effect of lamps or lights on the model, and thus the ability to cast or receive shadows.

Luckily I've already done an entire post on this topic, which you can read here:

6. Using select post-fx (bloom) to enhance / soften the look.

Post-fx are effects that take the entire final rendered image of your game and apply different effects to it. Unity Pro ships with a few post-fx shaders that are optimized for mobile platforms. For example, the depth of field shader uses a black and white depth image to blur the 3D world based on depth. This makes everything in the foreground sharper while blurring the background.

One post-fx I'd like to point out is Mobile Bloom / Fast Bloom. The bloom shader is simple: it makes a copy of your final render view, blurs it and multiplies the blurred result back onto the original, creating a soft glow that is most pronounced in very light areas such as the sky.

Without and with the standard FastBloom shader. The effect can be subtle, but be aware of high-contrast visuals, where it can really go overboard.

Without (left) and with (right) the bloom shader.

End of Part 1

I hope this post provided some insight into making a pure3D or textureless 3D game. It’s not all that difficult, but it does require you to use your tools differently. That being said, much of what I wrote here also applies to more “normal” 3D art assets.

Please let me know on twitter or facebook what you think, or if there's anything you'd like to see explained or shared. Just hit me up and I'll do my best. I'll try to get into specific shaders in the near future.

Tomas Sala



A response to the article by Ferry Haan

Today the following opinion piece by Ferry Haan appeared in de Volkskrant

It struck a chord with me, so I sent a response to de Volkskrant in the hope that it will be published. For those interested, here is my response:


Game addiction, the positive addiction.

As a teacher and economist, Ferry Haan worries about pupils who game a lot. Too much attention and time is lost to it, he argues. As a game developer, I too realise that games exert an irresistible attraction on teenagers and children, but really on all age groups. Addiction does occur, but should we therefore restrict or ban gaming? Do we then also curb Facebook, Snapchat, mobile phone calls and internet use? Play is of all ages, and the computer is now the platform for playing alone or together, as tennis and football were in Ferry Haan's day. That also has benefits: the enormous popularity of games has produced a successful and thriving industry in the Netherlands as well. Recently the Dutch game Ridiculous Fishing was named Apple's best app of 2013, and we also see more and more games being used in education, healthcare and business; gamification is a clear trend in every sector. Unlike Ferry Haan, I therefore see gaming not as a problem, but as a new field in which we can reach and stimulate young people. He rightly observes that young people spend an enormous amount of time on games and that this comes at the expense of other activities, but he fails to appreciate that gaming also suits the multitasking and digital flexibility that modern times demand. Interacting with digital media is a basic requirement for tomorrow's society, unlike learning to play tennis. Play is a natural part of a child's development, needed for developing one's own identity and future role in society. Where schools are indeed designed to transfer skills and knowledge in a particular way, often masking the individuality of the pupil, games offer a different approach.

What play in the broadest sense of the word offers is a chance to step into the proverbial magic circle (Huizinga, Homo Ludens) and take on a new identity: that of player. Within this safe environment the player can experiment, improve and make mistakes free of real consequences, something that is becoming ever harder in a real world of diminishing privacy. The games industry has coupled this to unprecedented immersion, allowing players to lose themselves completely in a virtual identity and acquire knowledge and skills through it. The player must not only understand the rules of the game, but is bombarded with logical challenges that have to be solved, picking up social skills along the way. Just like sport: not only physical skills, but also teamwork, how to deal with rules and referees, how to deal with loss and success. Through play we learn to hold our own in a changing world full of social conventions, rules and routes to success. That excess is harmful here is clear, but that applies to much else besides: sport, television, the internet.

In relation to the school behaviour of these so-called addicted pupils and their failure to conform to the norms of the system, the success of gaming can also be seen differently. Our education system apparently fails to appeal to the capacity for self-development the way games do. As Ken Robinson argued in a now famous TED talk, our education system is based on the industrial revolution and its need for standardised skills and knowledge. "Schools kill creativity," he states. Our schools simply cannot compete with the unprecedented growth and personal adventures that gaming can offer. Why learn about the Zilvervloot and Admiral de Ruyter when I can chase the entire English fleet to the bottom of the sea that same afternoon (Assassin's Creed: Black Flag)? Why learn about ancient Greece when tonight I can slip into the skin of the Greek god of war and storm Olympus (God of War)? Why would I want to learn about society and economics when I can decide the future of thousands in my own city (SimCity)?

The concern Ferry Haan expresses will not be resolved with bans and rules, by trying to put the genie back in the bottle, but by trying to harness that power, the stimulation of the imagination that gaming offers, for something other than entertainment. A new field is currently taking shape in which educators, academics and game developers come together to capture this playful learning in tangible and responsible products.

Personally, as a game developer, I believe that we as a games industry are not out to build a virtual amusement park in education, but are looking for responsible use of games and technology. Dear Ferry Haan, you should see with your own eyes how a vocational (VMBO) student with a language deficit absorbs the course material at his own pace through a game, or how that same VMBO class does not storm out of the classroom at the break bell but simply keeps playing with the learning material. Then you will immediately understand why the members of the Dutch games industry are so driven in defending their field, and why we find it extremely important that this subject is written about correctly and knowledgeably.

In a cultural landscape in which games are already impossible to ignore in the battle for the attention of the younger generation, it is also the media's task to present a nuanced picture of games. The Dutch games industry is currently fighting its way up (without any appreciable government support!), precisely because we want to offer an alternative to the flood of foreign games and products, alternatives tailored to Dutch education, for example. An opinion that equates games with alcohol and tobacco addiction does not contribute to a balanced discussion about games.

For those parents who worry about the games their children play, a piece of advice: sit down next to them, pick up the controller, iPad or joystick, and try to experience the attraction together. That gives you a view of what occupies your child, and you can even decide together which games are and are not suitable. So do get involved in your child's play behaviour; not every game is suitable for every child. There are real dangers in a world in which the child's virtual identity is so easily projected into real life. That has little to do with the joy of play itself, but everything to do with how digital technology increasingly shapes our daily lives.

Tomas Sala


Programmer’s Weekly: So you like them shadows?

Written by: Tomas Sala

So you like them shadows?

As you might know, Unity3D now supports realtime shadows on mobile devices, woot! For those interested, here's the official announcement of 4.2:

But, like us, your first response is probably: "That's never going to fly in my mobile game, it'll suck down frame-rate like nobody's business!" And you would be partly right. But it's not the whole story. Used correctly and carefully, shadows can add a whole new layer of visual depth to your game.

First off, we need to talk about what type of shadows Unity3D allows on mobile devices. This blog post is not about shadow-maps, but about real-time shadows: shadows projected from a directional light onto your scene, rendered every frame and thus dynamically responding to your scene and light.

Some basic rules

  • Only hard shadows are supported (we can soften them up in the shader pass I'll discuss later, but that's not advisable processing-wise)
  • Only 1 directional light is allowed in the scene.
  • Basically, only low-resolution shadows are practical on most devices (even the Nexus 7 (2nd gen, 2013) will take a substantial hit from medium-resolution shadows)

Now a couple of main problems pop up, specific to most mobile games.

  • Most mobile games don't use lights, due to the added rendering cost, and are instead based on unlit atlased textures, vertex colors or shaders optimized to work without light sources.
  • Draw calls and batching: these related issues are a pain in mobile development, and they just got a whole lot nastier with real-time shadows.

So in this week’s blog post we’ll be looking at solutions for each of these two problems.

Solution 1: Differentiate between what needs to cast a shadow and what needs to receive one.

First of all, the most basic rule is: don't use shadows where they're not required, and know the difference between a receiver and a caster.

A shadow caster is an object that casts a shadow. You need to minimize the number of these: the fewer casters, the fewer shadows, and the faster the calculations. This works down to the detail level. If you have a building, make sure only the base structure casts a shadow. Don't do the chimneys, windows, doors and other details! They have no need to cast a shadow. Make sure shadow casting is turned off on their Mesh Renderer component in Unity3D.
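If you'd rather not untick every renderer by hand, a small setup script can do it at startup. This is just a sketch against the Unity 4.x API, and the "Detail" tag for marking chimneys, windows and the like is an assumption of this example:

using UnityEngine;

public class DisableDetailShadows : MonoBehaviour {
    void Start() {
        //Turn off shadow casting and receiving on every
        //renderer in the hierarchy that is tagged as a detail
        foreach (MeshRenderer r in GetComponentsInChildren<MeshRenderer>()) {
            if (r.CompareTag("Detail")) {
                r.castShadows = false;
                r.receiveShadows = false;
            }
        }
    }
}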



A shadow receiver is an object that receives shadows. First of all, make sure that an object that receives shadows does NOT cast them. So separate ceilings from floors, for instance, and have the floors be shadow receivers but not the ceilings. (Maybe not even the walls. Remember, because you have only 1 directional light, it's practical to make it a top-lit scene.)

Now, splitting up your objects might result in additional draw calls, but we'll deal with those later.

Solution 2: Adding real-time shadows to an unlit or custom-lit scene.

A basic trick transferred from the days of yore to modern mobile game development is the use of vertex colors. Vertex colors allow you to not just paint a model, but actually light it (with, for instance, a radiosity solution) and then save or bake that into the vertex data of the model.

It will look something like this: a basic vertex-colored lighting solution, merged with a custom shader.


This gives the illusion of lights without using any. Add to this lightmaps or pre-lit textures and you get a static object that looks like it's lit. Additionally, you can shape the visual style to be unrealistic, cartoony or anything else.


Now that your game artists have gone through all the effort of making something look nice without lights just to save performance, it makes no sense to simply add a light to get shadows. That's a double whammy: the shadows need to be rendered, and the light needs to be calculated.

So why not just turn on shadows? 

Here at Little Chicken Game Company our artists write their own shaders in tools like the Strumpy Shader Editor. The disadvantage of this is that you export surface shaders. To turn on shadows, you need to create a shader that has an output to diffuse. This causes the shader to become lit, lighting your object and at the same time creating shadows. The moment you connect an input to emissive to create an unlit shader, your shadows disappear. And we don't want to light the object; we only want the shadows.

Solution: Add a render pass to your unlit surface shader.

If you really want shadows, you'll need a way to add them to your emissive surface shader. (Fragment shaders might be faster and simpler, but when you're stuck with Strumpy and don't know how to code fragment shaders, this will do the trick.)

So we're going to add a shadow pass to an emissive shader. I'll be quick and just give the shader code for the second pass.


Pass {
    Blend DstColor Zero
    Fog { Mode Off }
    Tags { "LightMode" = "ForwardBase" }

    CGPROGRAM
    #pragma vertex vert
    #pragma fragment frag
    #pragma multi_compile_fwdbase
    #pragma fragmentoption ARB_precision_hint_fastest
    #include "UnityCG.cginc"
    #include "AutoLight.cginc"

    struct appdata {
        fixed4 vertex : POSITION;
        fixed4 color : COLOR;
    };

    struct v2f {
        fixed4 pos : SV_POSITION;
        fixed4 color : TEXCOORD0;
        LIGHTING_COORDS(1,2) // required for LIGHT_ATTENUATION in the fragment shader
    };

    v2f vert (appdata v) {
        v2f o;
        o.pos = mul(UNITY_MATRIX_MVP, v.vertex);
        o.color = 1;
        TRANSFER_VERTEX_TO_FRAGMENT(o); // pass the shadow coordinates along
        return o;
    }

    fixed4 frag (v2f i) : COLOR {
        // darken the already-rendered pixels by the shadow attenuation
        fixed atten = LIGHT_ATTENUATION(i);
        fixed4 c = i.color;
        c.rgb *= atten;
        return c;
    }
    ENDCG
}
—————code ends——————————————————————————————-

Just paste this after the ENDCG of your regular shader code, in the un-compiled Unity3D shader. Make sure you use FallBack "VertexLit" at the end of your shader. (FallBack "VertexLit" will also enable shadows in your fragment shaders; it's way easier.)

As an example, here I've used this additional shading pass on the ground surface only, on nothing else. All the shading and colors you see are not created by the directional light but by the shader, combining fresnel effects, ramps and vertex colors, with the shadow pass on top.



Solution 3: Batching and drawcalls.

So we now have two things: shadows in our unlit scene, and a selection of objects that cast shadows plus an even more limited number of objects that receive them.

Doing this separation tightly will already decrease your drawcalls substantially compared to blindly turning on your directional light with shadows.

But there is an additional problem: materials that are shadow-capable are not batched. The documentation states that objects that cast or receive shadows are not batched, but the problem goes deeper: any material capable of casting shadows (or possibly, one that is also used for shadows) is not batched! In Oberon's Court I made the mistake of having one solid black material that could cast shadows.

This shader:


Shader "Oberonscourt/Color" {
    Properties {
        _Color ("Color", Color) = (1,1,1)
    }
    SubShader {
        Color [_Color]
        Pass {}
    }
    Fallback "VertexLit", 1
}

—————code ends——————————————————————————————-

Now, I stupidly assumed that using this shader on an object that did NOT cast or receive shadows would allow the object to be batched. This is not true! Even objects that do not cast or receive shadows, but use a shader with shadows enabled, will not batch. (So it seems; I could be wrong, but I've got the drawcalls to back it up.)

So the final trick was to take every object that casts or receives no shadows and make sure it had a material with no shadow capabilities. In this case, I removed the Fallback "VertexLit" line from the shader.

To optimize even further, I switched the material on parts that belonged to a shadow caster but did not need to cast a shadow themselves to a non-shadowed version of the same shader. Practically this means the eyes, the mouth and other small props that were part of the skinned mesh. These are part of a shadow-casting mesh, but now no longer cast any shadows, which reduces the drawcalls. This is especially worthwhile for characters with many small un-skinned sub-parts.

It's probably more accurate to say that any material with a shadow-capable shader that is shared between shadowed and non-shadowed objects causes the non-casting instances to not be batched (a hunch). The solution is the same whatever the cause: do not reuse a material you've used on a shadow caster or receiver on an object that casts or receives no shadows, otherwise that object will not be batched.



Having done these steps, and making sure that only a few objects actually cast shadows, I was able to create the environment for the game and have it run on most Android 4.1+ devices. An added advantage: by skipping the lighting and keeping the shadow-reception shader unlit, I can turn off the directional light and shadows and it will look exactly the same (just without shadows), which is great for an in-game settings menu, for instance.
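Such a settings toggle can be as simple as this sketch (the component and method names are made up for illustration):

using UnityEngine;

public class ShadowToggle : MonoBehaviour {
    public Light directionalLight; //the single shadow-casting directional light

    //Because every material is unlit, disabling the light only
    //removes the shadows; everything else looks exactly the same
    public void SetShadowsEnabled(bool enabled) {
        directionalLight.enabled = enabled;
    }
}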

Do remember that shadows are now possible and performance can be maintained, but if you expect anything less than a doubling of your drawcalls, you will be disappointed.

To finish off, here's a side-by-side with and without real-time shadows. I hope the above solutions and workflow will help you implement shadows on mobile devices.




Programmer’s Weekly: Debugging with ducks

Written by: Mark Bouwman

Eh, what?

Yes, you read that right: debugging with ducks. Today I want to talk about debugging a little. Of course I don't have to; programmers never create bugs, right? All works as intended! It's not a bug, it's a feature! It worked fine just yesterday!

Yeah, sure.

But for that one special case where you just can't find the bug, there's "rubber duck debugging". It's quite simple, really. All you do is explain your code. Line by line. Out loud. To a duck. This might sound silly, but it's actually a pretty good way to debug your code. There is reason and logic to this method!


Rubber duck

It’s a rubber duck. 

Well then, how does it work?

You start off by explaining to the duck what your code is supposed to do. With a clear goal of what you want to explain set out for you, you then explain the code to the duck, line by line, saying each line out loud and telling the duck exactly what the code does. More often than not, you'll find out what was wrong somewhere halfway through the code.

You might be wondering: "Why does this work?" The magic lies in explaining. There are so many advantages to doing so.

When you take the time to sit down and really look at your code, you'll find bugs and errors in your logic quicker. You can stare at code all you want in order to find your bug, but you won't actually see anything. When you have to explain the code, however, you re-read all of it carefully. You actually read it to gather terms and sentences for your explanation, to get mental 'checkpoints' for the story you're about to tell. In doing so, you'll find flaws in the flow of your code quicker.

Another reason rubber duck debugging works is that you speak out loud. Saying things out loud helps because you take more time for each separate line of code. Also, when you want to say something out loud, you first have to clarify your thoughts. You have to turn all the information you know (and assume you know) into something you can put into words.
Thinking out loud also triggers different parts of your brain. Not only do you think about the problem, you also hear your story and talk about it. These are different inputs for your brain, allowing it to process the data in different ways.

Another great thing about the rubber ducking method is that you can do it by yourself. You don't need to bother anyone, and no one will really bother you (I mean, c'mon… you're talking to a rubber duck. People should be afraid to bother you). It often helps you find solutions to the problems you're facing without having to feel bad about resorting to someone else. And when it doesn't work out, you'll have a clear view of the problems and the goals, which helps when you have to explain it to a fellow programmer.

All in all, debugging with a rubber duck is a great way to think about the code you wrote. It helps to clarify your thoughts, it helps to think about the flow of your code, it helps to see things you’ve missed. It’s a great way to debug!

And let’s be honest here: who doesn’t like to have a rubber duck on his desk?


Programmer’s Weekly: Shadow variables against hackers

Written by: Mark Bouwman

Hacking. It happens. A lot.

I don't think I have ever played a game that people did not want to hack. Even for the smallest Flash games you can find some way to hack them. In (offline) single-player games it's not all that bad; there's not really that much for a developer to worry about. However, when there are high scores or even actual rewards linked to a player's performance, hacking suddenly becomes a pretty important issue.


It sucks.

There is little you can do to completely protect your game against hackers. It's a pity, but you can at least try to demotivate them by making it harder for people without a lot of knowledge to hack your game. This week, I wanted to take some time for the very first line of defence: something I like to call shadow variables.


So we use shadow variables?

Indeed. And in case you're wondering what a shadow variable really is: a shadow variable is a variable that lies parallel to the actual variable you use for things like keeping track of scores. It might sound vague, but allow me to explain.

Let's say you have a variable called 'score', which represents the player's score. Every time the player picks up an item: score++. When someone uses something as simple as Cheat Engine, they can easily track a score that's visually represented in the game's interface. They know the value of their score, so they scan the game's memory for that value. Within a few scans, they have access to the memory location and are free to change that variable to something way off the charts.

This is where shadow variables come in.

Besides the variable 'score', we also have a variable named 'sv1'. The name doesn't mean anything; it's good practice not to give the variable a score-related name, just in case the hacker can see variable names. When score gets increased, we also increase the shadow variable. But not by one; no, we increase it by a set random number, which we generated at the start.

Every time you change score, check it against sv1. If score isn't sv1 divided by the random number, it means someone tinkered with the score! Then you can either kick them, nullify their score, or just let them continue with their actual score.

This way of protecting important variables (lives, score, time, bullets… anything gameplay-changing really) isn't all that complicated, but it definitely works as an efficient first barrier!
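Put into code, the idea looks roughly like this minimal Unity C# sketch; the class, the method names and the range of the random factor are all just assumptions for illustration:

using UnityEngine;

public class ScoreKeeper : MonoBehaviour {
    int score;      //the "real" score a hacker will go looking for
    int sv1;        //shadow variable: always score * multiplier
    int multiplier; //random factor, generated once at startup

    void Start() {
        multiplier = Random.Range(3, 97);
    }

    public void AddScore(int amount) {
        score += amount;
        sv1 += amount * multiplier;

        //If the two values no longer match, the plain score
        //was changed in memory from outside the game
        if (score != sv1 / multiplier) {
            Debug.LogWarning("Score tampering detected!");
            score = sv1 / multiplier; //or kick the player / nullify the score
        }
    }
}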


Programmer’s Weekly: Making difficulty of levels less difficult

Written by: Mark Bouwman

Implementing difficulty in levels isn’t all that hard! Use random levels and change cut-off values!

Yeah, right. Be like that. "I don't care about difficulty; I just change cut-off values for spawning things when using math.random!" Even though that suffices for some situations, you'll often find you want just a tad more control over how tough your game really is. Sometimes you need that extra bit of control over the situation and need things to be a bit more exact. Other times, you just want to build a level by hand to keep the best control over the difficulty.

But Mark, how do you decide when to use what?!

Each situation requires a different approach; it really all depends on what you need. During the past few projects I worked on, I have had to implement two different kinds of difficulty management: randomly generated levels and manually designed levels. Most of the time you'll find that if you need infinite gameplay, you need either a randomly spawned level or a level that confines the player to a small place. When you want short bursts of gameplay, it might be better to manually design levels in order to pace the game to your own liking. It's a pretty important decision to make at the start of development: going for either random levels or pre-set levels makes a huge impact on how you develop the game.


Randomly generated (“procedural”) levels

I'll start with randomly generated levels. These are used in a lot of games, most often the 'infinite' gameplay types. The pro of using randomly generated levels is that you don't have to put hours and hours into manually placing assets to create a level, which also makes for quick prototyping. The con is that you have less control over the game's flow. It's tougher to manage the difficulty of a level, small tweaks are harder to make, and you have to test over and over with every adjustment to make sure you didn't just get that 'lucky round that played perfectly'.

I’ll explain how we used randomly generated levels.

The customer wanted a Doodle Jump sort of game to show children they should eat healthier and exercise more. The game had to appeal to kids and get increasingly harder as the player progressed. In theory, the game should offer infinite gameplay in a single session. Because of this 'infinite jumping gameplay' restriction, we knew we had to create a randomly generated level.

There were two challenges in creating this type of level. Challenge number one: the level had to get progressively harder, but randomly enough that you don’t really notice it. Challenge number two: the system had to be easily tweakable, so that we could playtest often and quickly adjust to the results of those tests. Our target group was children, after all, so we didn’t know how hard the game should be; our own perception of difficulty is completely unlike theirs.

Screenshot of Na-Aapje, a game with randomly generated levels

The system we used was based on percentages. For every object that could spawn in the world, we had a minimum and maximum spawn chance, and a minimum and maximum distance between two spawned objects of the same type. Every action the player performed (jumping, picking up items, completing challenges) rewarded him with points. These points were then used to measure progress.

The number of ‘progress-points’ the player had earned over his session determined the difficulty of the game. We compared the current number of points with the number of points needed for the maximum difficulty, and the resulting percentage was then used to spawn all of the objects. If the player was halfway through the ‘progress-points’, the level would be halfway to its maximum difficulty.
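A minimal sketch of what such a percentage-based system could look like in Unity C#. The field names and numbers are illustrative, not our actual values:

```csharp
// A minimal sketch of percentage-based difficulty, assuming Unity C#.
// 'SpawnSettings', 'DifficultyManager' and all values are illustrative.
using UnityEngine;

[System.Serializable]
public class SpawnSettings
{
    public float minChance;   // spawn chance at 0% progress
    public float maxChance;   // spawn chance at 100% progress
    public float minDistance; // spacing between two spawns at 100% progress
    public float maxDistance; // spacing between two spawns at 0% progress
}

public class DifficultyManager : MonoBehaviour
{
    // Changing session length is a single variable tweak.
    public float pointsForMaxDifficulty = 5000f;

    private float progressPoints;

    public void AddProgress(float points) => progressPoints += points;

    // 0..1: how far the player is toward maximum difficulty.
    public float Progress => Mathf.Clamp01(progressPoints / pointsForMaxDifficulty);

    public float CurrentChance(SpawnSettings s) =>
        Mathf.Lerp(s.minChance, s.maxChance, Progress);

    public float CurrentDistance(SpawnSettings s) =>
        Mathf.Lerp(s.maxDistance, s.minDistance, Progress);
}
```

The spawner then rolls against CurrentChance() for each object type, so every object drifts from its easy settings to its hard settings as the percentage climbs.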

Because of this system, we could easily adjust the difficulty of the level. Changing the duration of a single session took just a single variable; changing how difficult the game is at the end took only a few more. It saved us so much time in tweaking the game, which allowed us to tweak often, resulting in the best gameplay possible. We eventually tuned the game to last around 10 minutes for the average player, though it could take up to 20 minutes until the maximum difficulty was reached. The best part: difficulty increased along with the player’s progress, not the time spent in a session. Because of this, every child could play at his or her own pace.


Manually created levels

I know: it’s something you’d prefer to avoid if possible. It can take really long, you have to think hard about positioning everything, and there’s a lot to keep in mind when creating your level. The truth is, though, that manually creating levels offers the best way of controlling the difficulty and learning curve for a player. Pre-set levels allow for the best tutorials and are great for easing in new parts of the gameplay. The downside is that all the manual labour you put into creating each of the levels can feel repetitive and can take a lot of time to do well.

However, creating levels does not HAVE to be a tedious task. Creating and using tools is the key to surviving here. Of course, it all depends on what kind of game you have, but in most cases you can create a tool that massively improves your workflow. Even though creating tools takes time, it’s almost always a great thing to do.

People often don’t create tools for themselves, thinking it costs more time than it gives back. The opposite is often true. When you can click a level together far more easily, when adjusting existing level designs gets easier, when tools let you chunk parts together to cut down the time of creating a level altogether: all of these situations allow the game designer to focus on what really matters, the actual gameplay, rather than the implementation.

Every situation requires different tools. The only thing the programmer can do is listen to what the designer needs and create the tools based on that.
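As an illustration, here’s a minimal sketch of the kind of small editor tool that can pay for itself quickly. Everything about it (the menu path, the grid size) is an assumption for the example, not an actual Little Chicken tool:

```csharp
// A minimal sketch of a level-design helper tool, assuming Unity C#.
// It adds a menu entry that snaps the selected objects to a grid, so a
// designer can click levels together faster. Place it in an Editor folder.
using UnityEditor;
using UnityEngine;

public static class SnapToGridTool
{
    private const float GridSize = 1f; // assumed tile size

    [MenuItem("Tools/Snap Selection To Grid")]
    private static void SnapSelection()
    {
        foreach (GameObject go in Selection.gameObjects)
        {
            Undo.RecordObject(go.transform, "Snap To Grid"); // keep it undoable
            Vector3 p = go.transform.position;
            go.transform.position = new Vector3(
                Mathf.Round(p.x / GridSize) * GridSize,
                Mathf.Round(p.y / GridSize) * GridSize,
                Mathf.Round(p.z / GridSize) * GridSize);
        }
    }
}
```

Twenty minutes of programmer time, and the designer never has to nudge a misaligned platform by hand again.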


A bit of both levels?

Another thing you can do is mix. You can manually create set chunks and then use these randomly throughout a world, or use a pre-set level with random assets in it. You’re not tied to either one. Like I said before: it’s an important decision for your development process, so think about what you need before implementing any code.

Don’t rush it. 


Programmer’s Weekly: Steering AI in the right direction

Written by: Mark Bouwman

AI? Artificial Intelligence?

Indeed. Artificial Intelligence. You might be wondering, what exactly IS Artificial Intelligence? (Of course you’re not, you’re reading a programmer’s blog on the website of a game company. I’m guessing you got here for game-related stuff, and that AI isn’t all that new to you.)

Artificial Intelligence covers a wide range of topics, from simulating flocks of birds to computers beating us at a game of chess. When we talk about it in the gaming industry, however, we usually mean just one thing: the illusion of intelligence in a non-player character (NPC). The illusion that the NPC is smart enough to act on his own, the illusion that an actual human being controls him. It’s getting the player to believe the opponents aim the same way he does, that the opponents don’t drive the best laps possible, that his opponents are as bad at playing the game as he is.

But Mark, when do you need a NPC to be intelligent?!

Well, I’ll give an example. In one of Little Chicken’s current projects, the player can drive around and has to chase enemies, shooting them down in order to continue to the next level. But to create a challenge for the player we don’t want the enemies to feel like they’re just driving randomly (and through buildings). We want the enemies to be smart. We want them to dodge buildings, chase the player and group up with other enemies. This brings us to the topic of this week’s post: Getting the AI to steer.

The challenge: Getting the enemies to NOT crash into a wall
Let’s start by actually getting the AI to drive. I mean, how can we chase the AI if they’re just standing still? Getting them to drive isn’t hard: all you need to do is move the AI forward and rotate them so that they steer. Nothing fancy, nothing special. The tough part is getting them NOT to crash into walls (read: not making them look like idiots).

There are tons of ways to implement collision detection and collision prevention. During the development of this game we thought of, and implemented, several different ways to get the AI to detect walls around them, until we finally got to something we agreed to use in the game.

The first method we came up with was traversing the AI over a grid, using A* to find the shortest path to their goal. Buildings were represented by non-traversable tiles in the A* grid. This ‘perfect route’ was then used to create a curved route for the AI to follow. We switched to a different method for several reasons, balancing quality against performance being the main one. A* can be performance-heavy, calculating a lot per frame: if you have three enemies each calculating a path to a point about 200 tiles away, it can take a LOT of time to find the best possible path when there are many obstacles in the way. The quality also wasn’t as high as we wanted: the AI cut corners due to the curved lines they followed, and having no other form of collision detection, they carelessly drove through walls. Because of this, the grid had to become smaller and more precise. A goal that used to be 200 tiles away was suddenly 400 tiles away, more than doubling the amount of calculations!

The most efficient way we found to handle collision prevention was something I like to call the three-point-raycast method. The method is fairly simple, but shows that the simplest solutions sometimes have the best outcome. It’s light on performance and memory, works fast, and creates human-like movement.


A visual representation of the three-point-raycast method

How it works: every NPC has a point at the front of his bike from which a ray is cast. This initial ray detects a possible collision directly in front of the AI. When a possible collision has been detected, two new rays are cast. These new rays are slightly angled (about five degrees) to the left and the right of the initial ray, and they tell us which side has the closest object. The side with the closest hit should be avoided; there is more space on the other side. The AI steers towards that open space, based on how close the obstacle in front of him is: the closer the object, the tighter he steers. Because of this, the AI moves in nice curves, avoiding all collisions.
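For illustration, a minimal Unity C# sketch of the idea. The distances, angle, and steering speed are placeholder values, not the ones from our game:

```csharp
// A minimal sketch of the three-point-raycast method, assuming Unity C#.
// 'lookAhead', 'rayAngle' and 'steerSpeed' are illustrative values.
using UnityEngine;

public class RaySteering : MonoBehaviour
{
    public float lookAhead = 10f;  // how far the initial ray probes
    public float rayAngle = 5f;    // degrees for the left/right rays
    public float steerSpeed = 90f; // max degrees per second

    private void Update()
    {
        Vector3 forward = transform.forward;

        // Initial ray: is there anything directly ahead?
        if (Physics.Raycast(transform.position, forward, out RaycastHit hit, lookAhead))
        {
            // Two slightly angled rays tell us which side is more open.
            Vector3 left  = Quaternion.Euler(0f, -rayAngle, 0f) * forward;
            Vector3 right = Quaternion.Euler(0f,  rayAngle, 0f) * forward;

            float leftDist  = Physics.Raycast(transform.position, left,
                out RaycastHit lh, lookAhead) ? lh.distance : lookAhead;
            float rightDist = Physics.Raycast(transform.position, right,
                out RaycastHit rh, lookAhead) ? rh.distance : lookAhead;

            // Steer toward the side with more room; steer harder
            // the closer the obstacle ahead is.
            float urgency   = 1f - hit.distance / lookAhead;
            float direction = leftDist > rightDist ? -1f : 1f;
            transform.Rotate(0f, direction * urgency * steerSpeed * Time.deltaTime, 0f);
        }
    }
}
```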

Voilà, done. The AI now steers like a human being.

All we had to do next was get the AI to actually steer towards a goal, which is what we use for chasing the player. With collision prevention out of the way, this was actually quite easy. All we had to do was create one simple rule: when no collision has been detected for a second or so, steer towards the goal. With this one rule in place, we were able to have the AI circle around the player, scatter and race away from him, and much, much more. Such a simple rule made the AI a lot more fun to play against.
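That rule could look something like this minimal sketch. Again, the names and the one-second window are illustrative, and it would sit alongside the avoidance code above:

```csharp
// A minimal sketch of "steer towards the goal when the path has been
// clear for a while", assuming Unity C#. All names and values illustrative.
using UnityEngine;

public class GoalSteering : MonoBehaviour
{
    public Transform goal;         // e.g. the player being chased
    public float steerSpeed = 90f; // degrees per second
    public float clearTime = 1f;   // how long the path must stay clear

    private float lastObstacleTime; // updated by the avoidance code on every hit

    public void ReportObstacle() => lastObstacleTime = Time.time;

    private void Update()
    {
        // Only steer towards the goal when no collision was detected recently.
        if (Time.time - lastObstacleTime < clearTime) return;

        Vector3 toGoal = goal.position - transform.position;
        toGoal.y = 0f; // steer in the horizontal plane only

        // Rotate gradually towards the goal, the way a vehicle would.
        Quaternion target = Quaternion.LookRotation(toGoal);
        transform.rotation = Quaternion.RotateTowards(
            transform.rotation, target, steerSpeed * Time.deltaTime);
    }
}
```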

So, what did you learn?

Getting the AI to steer ended up not being all that hard once we found a good way to implement the collision prevention. It did, however, get me to realize something important: keep it simple. Keeping things simple helps a lot. The three-point-raycast method was much quicker to implement and required less thinking (math) than the pathfinding method, yet gave much better results. Sometimes the easiest solution is the best solution.

Keep It Simple, Stupid.


Programmer’s Weekly: Performance in Minecart Madness

Minecart Madness Logo

Written by: Mark Bouwman

Minecart Madness? What’s that?!

One of the newest projects of Little Chicken Game Company: Minecart Madness.
Minecart Madness is an entertainment game for iOS, aimed at casual gamers who play games on their phones and tablets. It’s a 2D racing game viewed from the side.

The entire game is developed in Unity3D, a free program that allows developers to create games for multiple platforms. We, however, are using the licensed Unity3D Pro and iOS Pro features in order to get the best gameplay results possible.

This week I would like to talk about performance on iOS. Good performance is one of the most valuable things to have: you can easily notice the difference between 10 frames per second and 60. Our goal is to keep the game at a steady 60 frames per second.

Minecart Madness, ingame screenshot

But Mark, how do you do that?!

Well, there are a lot of ways to increase the performance of a game. There are all sorts of things you can do to keep the graphics cost down, but there are also some really nice things you can do in code. I’ll list a few, just to get you started.

Tip #1: Profiler
This is where getting your performance to the best possible level starts: the profiler. It tells you exactly what you need to know: what is killing your game’s performance? Unity allows you to dig deep into your scripts, telling you exactly which calls cost performance and how much each of them hurts. You’ll quickly find out that those nasty Debug.Logs you’ve been calling take up a lot of performance compared to the rest!

The profiler has two modes: normal and deep profiling. With normal profiling you get an overview of the actual performance; with deep profiling you get more information, at the cost of some performance. It’s really nice when you need just that bit of extra information.

Whenever the framerate drops, it shows up as a big spike in the profiler. You want as few spikes as possible, with as little difference from the average as possible. Achieve that, and your game will run as smoothly as possible!

Unity3D profiler, found under Window –> Profiler

Tip #2: Pools
That’s right. Pools. Not the swimming pool type, no: the ones where you instantiate objects at the start and reuse them. Instantiating and destroying objects at runtime is a killer for any program. Instantiating the objects up front and then using them over and over keeps the instantiation overhead low. You can do this for a lot of objects: bullets, audio sources, small graphical effects, or even entire randomly spawned worlds!
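A minimal sketch of such a pool, assuming Unity C# and the audio setup from the math below. The class name and the round-robin reuse are illustrative choices, not our exact implementation:

```csharp
// A minimal sketch of an object pool, assuming Unity C#.
// 'AudioSourcePool' and the pool size are illustrative.
using System.Collections.Generic;
using UnityEngine;

public class AudioSourcePool : MonoBehaviour
{
    public GameObject sourcePrefab; // prefab with an AudioSource component
    public int poolSize = 10;       // created once, up front

    private readonly Queue<GameObject> pool = new Queue<GameObject>();

    private void Awake()
    {
        // Pay the instantiation cost once, at load time.
        for (int i = 0; i < poolSize; i++)
        {
            GameObject go = Instantiate(sourcePrefab, transform);
            go.SetActive(false);
            pool.Enqueue(go);
        }
    }

    public GameObject Get()
    {
        // Round-robin reuse: hand out the least-recently-used object.
        // This works because we never need more than poolSize sounds at once.
        GameObject go = pool.Dequeue();
        go.SetActive(true);
        pool.Enqueue(go);
        return go;
    }
}
```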

The math: I’ll explain the math using the audio system we use in Minecart Madness. In MM, we have an average of five sound effects playing every second. Since each sound lasts a second and a half tops, we can have at most ten audio files playing at the same time. Knowing this number, we create ten objects at the start of the game. It increases the loading time by around 3ms, which is hardly noticeable. Creating a sound object at runtime takes about 0.3ms, removing it around 0.5ms. That might not sound like a lot, but let’s calculate.

We want to reach 60 frames per second. A whole second is 1000ms, which means we have around 16ms per frame to work with. Out of this, we need 10ms for the actual rendering of the game. This leaves us with 6ms per frame for code, or 360ms per second. Creating and destroying ten sound objects per second, at 0.8ms each, takes 8ms: roughly 2% of the calculations we’re allowed to make, just for creating an empty gameobject with two scripts (and I’m not even talking about playing the actual audio file!). If you instantiate and destroy a lot of objects (or complicated objects, like geometry with scripts on them), consider this neat trick!

Tip #3: Object Culling
Culling happens when an object is outside of your view. By default, every object in your world is still rendered and updated, always, even when you don’t need it because it’s outside your view. This is where object culling helps.

A big part of the culling can be handled by Unity3D itself. There’s a thing called occlusion culling, which handles the culling for the camera in the game. Unity’s website has some pretty nice documentation about it; give it a quick read.

Another part of the culling happens in code. Do you need to check for precise collisions with a specific part of the player when they’re miles away? No. Imagine you’re casting a ray to check for a certain collision, but this ray has a high overhead (let’s say 2ms each frame). If you first check the distance to the player, which shouldn’t take more than 0.02ms, you save your program from casting that calculation-heavy ray!
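A minimal sketch of that cheap-check-first pattern, assuming Unity C#. The field names and the range value are illustrative:

```csharp
// A minimal sketch of distance-gating an expensive check, assuming Unity C#.
// 'player' and 'checkRange' are illustrative.
using UnityEngine;

public class CulledCollisionCheck : MonoBehaviour
{
    public Transform player;
    public float checkRange = 20f; // beyond this, skip the expensive ray entirely

    private void Update()
    {
        // Cheap test first: squared distance avoids a square root.
        Vector3 toPlayer = player.position - transform.position;
        if (toPlayer.sqrMagnitude > checkRange * checkRange)
            return;

        // Only now pay for the expensive raycast.
        if (Physics.Raycast(transform.position, toPlayer.normalized,
                out RaycastHit hit, checkRange))
        {
            // ...handle the precise collision here...
        }
    }
}
```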

Occlusion Culling in Unity3D, found under Window –> Occlusion Culling

Tip #4: Using the cache (save object references for later use)
Wow, this one can save so much time. Imagine you have 20 enemies, all checking whether the player is close to them. They take their own position and compare the distance to the player. Nothing special, right? Except it can get quite heavy on your program: if you don’t save the player’s transform in your script, you have to look it up every single frame. GameObject.Find might seem harmless, but you should really take care using it!

The math: let’s take this simple calculation: the distance between the player and the current object. If we use GameObject.Find for this, it takes around 0.5ms every frame, for each object that calculates the distance. However, if we store a reference to the player’s object at the beginning of the game and just use that reference each frame, it takes around… 0.002ms. That’s 250 times better!
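A minimal sketch of the cached version, assuming Unity C#. The “Player” object name and the distance threshold are just examples:

```csharp
// A minimal sketch of caching a reference instead of calling GameObject.Find
// every frame, assuming Unity C#. The "Player" name is illustrative.
using UnityEngine;

public class EnemyProximity : MonoBehaviour
{
    public float triggerDistance = 5f;
    private Transform player; // cached once, reused every frame

    private void Start()
    {
        // Expensive lookup, done a single time.
        player = GameObject.Find("Player").transform;
    }

    private void Update()
    {
        // Cheap every-frame check against the cached reference.
        if (Vector3.Distance(transform.position, player.position) < triggerDistance)
        {
            // ...player is close: react here...
        }
    }
}
```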

Alright, I believe you.

There are a lot of small tricks like these to increase the performance of your game. Feel like sharing yours? Just reply to this post!
