Have you ever thought about how levels are ordered in a game? How you get (or don't get) the feeling of an increasing yet "wall-free" challenge?
During Zink's development we found that this is a non-trivial issue that can make the difference between the success and failure of a game.
I am sure there are tons of information on the web; however, we weren't able to find a suitable approach at the time, so we built our own.
The obvious approach
At the risk of stating the obvious, the first idea that came to mind was to order levels by "human experience". In other words, one of the developers decides how difficult a level is based on their own experience. Some correction from external feedback may be applied too.
This approach has two main problems:
- The error introduced by the human, since the difficulty of a mental challenge isn't perceived equally by different people.
- The amount of time required, since each challenge needs its own study plus some testing and correction.
In the Zink world another problem appears: the content is dynamic and self-maintained by the community without developer intervention. It therefore becomes impossible to chase the community's creations in order to evaluate them at "mass-creation speed".
The observer approach
The best estimator of difficulty will always be the one obtained by observing real user behavior. This approach assumes you have a way to store the outcomes of each challenge so you can analyze the data.
The server collects, for example, how many times a level has been completed versus how many tries it took to do so. This ratio is relative to the level, so every try made by every user counts. Let's call this ratio the "O-difficulty", as in Observed Difficulty.
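As a sketch, the server-side ratio could be computed like this in Python. The log format here (one `(level_id, completed)` tuple per try) is an assumption invented for illustration:

```python
# Hypothetical sketch: computing "O-difficulty" from raw try logs.
# Assumes each log entry is a (level_id, completed) tuple for one try.

from collections import defaultdict

def o_difficulty(tries):
    """Return the completions-per-try ratio for each level (lower = harder)."""
    attempts = defaultdict(int)
    completions = defaultdict(int)
    for level_id, completed in tries:
        attempts[level_id] += 1
        if completed:
            completions[level_id] += 1
    # Ratio of successful tries to total tries, per level.
    return {lvl: completions[lvl] / attempts[lvl] for lvl in attempts}

log = [("lvl1", True), ("lvl1", False), ("lvl1", False), ("lvl2", True)]
print(o_difficulty(log))
```

Here "lvl1" was completed once in three tries, so its ratio is lower (harder) than "lvl2", completed on the first try.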
Once again, this approach takes a lot of time or resources (assuming you can pay someone to get the data faster): some time to gather the data and some to analyze it. In Zink, the dynamic nature of the content adds the extra handicap of having to test newly created levels, so this isn't suitable for us either.
The statistical analysis approach
Don't worry, that's it; I'm not going to bother you with more scenarios…
Let's take the data obtained with the "observer approach" at a certain stationary instant and focus on the tested level set. Let's focus, specifically, on the "static" or countable properties of this set of levels.
As an example, take this Zink level.
In our game, the board contains elements such as walls, color drops, targets, keys, doors and traps. The board as well as these elements can be treated as "difficulty variables", as in "a level with N traps is easier than a level with M traps".
Our mission is to extract as many of these static "difficulty variables" as we can. Don't be afraid to go non-linear! Maybe wall density is a better estimator than the number of walls…
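To make this concrete, here is a hypothetical sketch in Python of extracting such variables from a level. The level representation (a dict of element positions plus board dimensions) is invented for illustration and does not reflect Zink's actual data model:

```python
# Hypothetical sketch: extracting static "difficulty variables" from a level.
# The level format below is made up for this example.

def difficulty_variables(level):
    """Turn a level's countable properties into regression inputs."""
    width, height = level["width"], level["height"]
    area = width * height
    variables = {
        "size": area,
        "numWalls": len(level["walls"]),
        "numTargets": len(level["targets"]),
        "numTraps": len(level["traps"]),
    }
    # Non-linear features can beat raw counts: wall density
    # (walls per board cell) instead of the absolute number of walls.
    variables["wallDensity"] = variables["numWalls"] / area
    return variables

level = {"width": 8, "height": 6, "walls": [(0, 1), (2, 3)],
         "targets": [(5, 5)], "traps": []}
print(difficulty_variables(level))
```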
Collect all the data and arrange it somehow (at first we used Excel).
Then the magic… Get Minitab or any other software that lets you perform a regression analysis (learn about regression here).
The statistical process is not trivial at all; you should read up on it to learn how to handle the results. In a few words, though, a simple regression analysis will provide an equation that estimates an output value (difficulty) as a linear combination of variables (game properties). Something like this:
difficulty = -15.5 + 1.57 par + 0.40 size + 22.4 numTargets + 2.1 wallDensity + ...
This is a powerful tool that lets us evaluate a level's difficulty by entering its elements, right after creation! And this is something a computer can do really fast.
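As a rough sketch of the same idea without Minitab, a linear fit can be done in Python with NumPy's least-squares solver. The observations and variable names below are invented for illustration; real coefficients come out of your own data, not the equation above:

```python
# Sketch: fitting difficulty = b0 + b1*par + b2*size + ... by least squares.
# The five observations below are made-up example data.
import numpy as np

# Each row: [par, size, numTargets, wallDensity]; y: observed difficulty.
X = np.array([[3, 40, 1, 0.10],
              [5, 60, 2, 0.20],
              [4, 48, 1, 0.15],
              [7, 80, 3, 0.25],
              [6, 64, 2, 0.30]], dtype=float)
y = np.array([2.1, 5.0, 3.2, 8.4, 6.1])

# Add an intercept column, then solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
coefs, *_ = np.linalg.lstsq(A, y, rcond=None)

def estimate(par, size, num_targets, wall_density):
    """Estimate a new level's difficulty from its static variables."""
    return coefs @ np.array([1.0, par, size, num_targets, wall_density])

print(estimate(5, 50, 2, 0.18))
```

Once the coefficients are fitted, `estimate` is just a dot product, which is why a computer can score freshly created levels instantly.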
Obviously, the more observations you get, the more accurate the estimation will be. However, the beauty of this method lies in the fact that you can get an estimate from a few testers and iterate from there as you gain more users.