
Under Pressure

The issue tracker is overflowing and the deadline is inexorably drawing near: the milestone 4 build has to be reached! Feature-set B15 has to be fully implemented and QA-approved, but bugs keep appearing and some features haven’t even been started. Everything needs to be crunched in somehow, since bug fixing is not bound by the announced feature freeze… and so it happens that you go into overtime!

Because of recent events over the last weeks and months, this topic keeps popping up for me: crunch and overtime! Nowadays these are accepted as “normal” not only in games but in IT and development in general. I know of few other industries and departments that take crunch-time for granted… especially towards the end of a project.

As soon as overtime happens, it is already too late. Since no project manager should plan for crunch-time, something has gone wrong if it occurs anyway. In some cases this is not necessarily bad. Most people do what they do and work on what they work on because they like the challenge, they like the environment… they just like what they do.
But no matter how much you love your work, after 12, 18, 24 hours day after day after day, neither Red Bull nor a single good night’s sleep can keep you truly focussed and up to the task at hand.

I do not want to go into detail about why overtime happens, but there are some things, professional and social, that I have observed over the last years and especially months, and I want to share them in case you have to crunch on a feature that others depend on!

Documentation vs. Communication (or “State the obvious”)

Pressure pushing down on me
Pressing down on you no man ask for

It could be so easy: you get your game design, your technical design, interfaces, standards etc. defined and start developing from top to bottom. In the end everything works out, interconnected, and your task is finished. Great!
This perfect world is pretty rare and in most cases does not reflect real life. Usually many things have to be reworked or clarified, which makes communication, social and professional, one of the most important factors in development in larger teams.
Nevertheless, especially after 12 or 14 hours of work, or during a night shift, your receptivity starts to lack the focus needed for intense communication and dialogue. People start staring at their displays trying to get around that one oddness, or gaze at the coffee or energy drink creeping across the floor. People who normally question everything start developing “till the end” instead of “to finish a task successfully”: they crunch everything that is left into their current objective and finish it up as quickly as possible, top to bottom, based on the docs. And since clarification takes time, if the design can also be interpreted in a particular implementation-friendly way: it will be!
So, at the end of a project, after many hours of work, during nights etc., try to be pro-active: if you crunch with others, state the obvious! If you do overtime yourself, start questioning the most simple things! This may sound annoying, but it is crucial, as even the most well-formed process is worth nothing after four-plus weeks of crunching. Normal questions like “Did you add the graphics for that item?” or “Have you added the i18n key?” are the first things that get lost as soon as a narrowed mind is focussing on fixing a bug or finishing up a feature.

Crunch in Overtime (or “The right Task at the right Time”)

Insanity laughs under pressure we’re cracking
Can’t we give ourselves one more chance

Don’t get me wrong: sometimes overtime can be very healthy for a project and team, e.g. when a small group of people focuses on one small feature-set together and tries to reach a goal in a given time frame. Tasks get crunched, time just passes by and everybody is happy (with some pizza and beer, of course, this can be a wonderful achievement).
Nonetheless, very often overtime is used, or has to be used, to finish tasks that are unfinished or even untouched. This leads to cramming in all the different tasks that just have to be done before a milestone or deadline is reached. So the overtime is used to clear out the issue tracker, not to finish what the main goal was.
If overtime happens, use it wisely and plan what to do! You are not in your right state of mind after hours and hours of coding, drawing, layout work, … and sleep deprivation can lead to effects similar to alcohol, e.g. headache or dizziness. Efficiency may seem increased after some energy drinks but, based on experience and code review… it is not! You cannot put a number on it, but if efficiency and focus are decreased, plan in some laborious work: monotonous tasks, clean-up, working off method sets etc. Completely new structures, concept art (depending on how crazy the creativity gets ^^), calculations or templates, especially ones interfacing with others (see above), are detrimental. Crunching has to be planned and should not just happen!

Social Competence (or “To Develop is Human”)

Watching some good friends
Screaming ‘Let me out’

During daytime everybody is calm, touched by the sun, always with a smile on their faces. But after 15 hours, from dawn till dusk, the smiles start to vanish as the sun sets.
No matter how “nice” somebody is during the day: during overtime and crunching, every mood starts to swing. Put under pressure for weeks, sleepless for days and crunching code into a machine, people get nervous and tetchy.
Now it is important to be sensitive. Not only developers, artists and everyone in-between, but also a managing director has to apply his best soft skills and exert pressure in a focussed but appreciative way. Even ironic jokes that would cheer up anybody during the day can unleash hell when people have spent 20 hours working on one bug! This emotional intelligence is a major issue when it comes to delegated work. Nobody intentionally tries not to finish a task, so do not make it sound that way.
Loosen up a little and see crunch-time as a task for the whole team. Do not take it too seriously… it is sometimes more important to just take a walk and have a little water-cooler talk. I am a non-smoker, but when it is getting dark it can be helpful to just go outside with the crowd and stick together. Share thoughts, introduce pair programming (if not already in place; 200% more effective during overtime, in my opinion) and try to help each other, as together the longest nights can become the best stories for the next day.

Stay Focussed (or “Utopia is nowhere near”)

It’s the terror of knowing
What this world is about

In 90% of cases, overtime and crunch-time happen because a goal has to be reached in time: a milestone, a release build, … whatever. Unfortunately, during crunch-time some people see this time as “additional” hours to use (see above). They try to achieve 200% instead of reaching a bug-free 100%. Such ideas come from management and directors, but also from developers who tend to pressure themselves: if they do not get to see their bed for days, at least this time has to pay off.
Always be realistic about what the goal is and try not to lose focus on what can be achieved during this overtime. As stated above, crunching should be planned, so plan around the reason for that specific overtime. If people are under pressure, it is all the more important to eliminate all the mush and narrow down what you want to achieve. Overtime pays off in work, and even for the person themselves, if something has been achieved. A developer, for example, who codes all through the night without achieving what he wanted by morning is only half a developer for the coming hours and days. But if you clearly achieve your realistic goal, you are happy and produce endorphins. Your body is powered up and you can shed the sorrows of work. This is the best sleep you will have for months!

Keep the Balance (or “The equilibrium of Life and Work”)

And love dares you to care for
The people on the edge of the night

Overtime happens, and crunching some work, too. This can be manageable to some degree. But if your whole purpose in life is work and you are crunching every day, hour after hour, seeing sunlight only as a reflection on your display, you will “dry up”.
As much as overtime has to be planned (see above), the balance of overtime, crunching and regeneration has to be maintained, too. Otherwise the productivity and benefit of the additional time decreases down to a (negative) point of no return… yes, negative. In many cases I have seen, people actually fixed things and produced productive resources and code up to a specific point, after which the positives fell below the negatives. And that happened within a single day. As the days went by, the amount of time producing good quality decreased and became inferior to the amount of time producing crap. And the most important issue is: those errors have to be cleaned up, too!
This may sound general and corny, but keeping a good work/life balance is most important, and overtime is no contradiction to it. Crunched overtime, however, needs extra compensation to recover from. As mentioned, a good night’s sleep might not be enough after an 80-hour week. Fresh air, sunlight, healthy drinks and food are a necessity to “survive” not only the crunch-time but also the time after (the comedown).

Post Mortem (or “The Lessons we Learned”)

This is our last dance
This is ourselves
Under pressure

This is not necessarily something to keep in mind during a crunch-phase but afterwards. Always recap what happened! Always try to learn from the lessons! A retrospective or post mortem should help to pinpoint problems, miscommunication, bad planning etc. for the coming tasks, and has to be used for positive and directed criticism.
A review of every process, not only meta or technical processes but also social ones, can help to prevent future errors. Critique in particular is hard to deal with and often taken personally. But what directed criticism should provide (a moderator who guides the review is most important: reviewing, not discussing, when it comes to focussed critique) is what we require to grow, to evolve. Because that is what we all want: to become better! It may sound unfortunate, but people outside ourselves often have a better view of us than we ever can.
Therefore, always have a retrospective, a review, a post mortem, a lessons learned meeting, … call it whatever you (or your project management philosophy) like, but do it!

So, if we have a look at the lessons we learned:

  • State the obvious
  • Plan your overtime
  • Be social
  • Be realistic
  • Keep a Work/Life balance

and always recap your work!

All this may sound general and soooo obvious, but after weeks of overtime, pressure from management and the deadline coming near, it gets lost pretty easily.
Overtime happens, and sometimes it can even be fun to see “this one feature being finished” or “this one bug being fixed”, especially in a nice social environment. Nevertheless, if you have to crunch, keep in mind that not everybody is in the right state of mind, and always remember some general work rules… maybe even pin them on the wall in front of you!

Written for #AltDevBlogADay

(Don’t Fear) The (C)Reaper

I have to be honest: my C and C++ skills are bad! Besides some personal attempts in my “early years” and lessons at university, I never had a good connection to the world of C. Nevertheless, I got my degree, became a developer, have been working for nearly a decade now in my young life, and I call myself a successful software engineer, even developing games… but in Java, JavaScript and C#. So my dream of getting into what I love most (gaming) became real, and I do what I like all day!
But I still feel inferior to the “real” developers because of my bad C expertise and especially my personal ignorance in never really focusing on it.

All my history…

I got in touch with computers and gaming early in life through my brother. I started with a C16, C64 and Amiga 500, besides my GameBoy, until I got my first (nowadays classic) PC. I was always intrigued by what was possible, the magic: playing Pong, Maniac Mansion and Zork, and watching scene demos and cracktros from legends such as The Black Lotus or the Animators. I wanted to do the same stuff; I wanted to (text-)wander through my own forests, wanted colorful spinning balls on the screen… so I began learning how.
I started off on the Amiga with assembler, got into QBasic later, then Pascal, Delphi (loved its structure), Visual Basic (quick results) and very early PHP (the Internet) through early web-development experiments and HTML/CSS. About ten years ago I got into Java at version 1.1 and am still on it. At every job I had before and during my studies I was able (or forced) to use Java, and it has stayed that way until today. Besides Java, I have looked at and used Python, Scala (which gives me what I like about Java plus functional programming), ActionScript, … and even Perl (in just one project), out of personal interest or for personal projects.
Given this history, my expertise developed early around object-oriented programming, which appealed to me the most, so I stayed. Very early in my “personal development”, my outlook was conquered by Delphi and Java, which formed my view on OOP and general application development, their originators themselves influenced by C++ (for good or bad). But there were still the games I loved the most. So I had to learn C and C++: teach myself the language of my favourite entertainment.

Try, Fail, Ignore

I bought books about C, about C++, about game development, about DirectX, about OpenGL, got into forums, searched the net for every tutorial I could find, tried everything and even got some minor things to work, so that something moved on my screen… and it was programmed in C++. But something clicked in my head, spreading bad thoughts such as:

  • This could be easier!
  • Linkage and IDE is clumsy, Eclipse is way superior!
  • These design patterns are native in Java!

I read more and more, tried more and more, and unfortunately failed more often. The initial fun and ambition faded away with every single compilation that turned out not to work as expected, crashed, or ended up in memory leaks.
Even with all my interest and devotion to learning, to me it was “just” another syntax complicating things. Pretty much everything I learned and did, I was able to reproduce in Java in less time, with more comfort and fewer errors. I got lazy!

Coding Personality

So, even though I tried to seriously learn C and C++, it just did not reach me, did not touch me. From my history and my experience with other languages, IDEs and projects, I knew that there were different ways to achieve nearly the same things. And it was not only laziness bred by very elegant development environments and library usage; the code itself appeared cryptic to my eyes.
No matter whether I read Java, Python or PHP code nowadays: beyond the fact that any code can be beautiful or ugly, I understand Java code instantly; I recognize the Python functionality; I get what the PHP developer meant to do! Even in recent years, as I was checking examples and help sites for iOS and Android NDK coding out of interest, I could not get rid of the thought: I can achieve the same thing with the Android SDK! (PS: Objective-C is pretty ugly ^^)
And it is not that I do not like other languages any more: I was “forced” to use Haskell and dismissed it; tried Scala and loved it! Fooled around with Ruby and had fun; Prolog and Lisp… nah; Eiffel and C#, olé!
C# especially appealed to me instantly: the syntax, the structures, the functionality and the ideas filled the holes that Java had left over the years. It may be a coincidence that Anders Hejlsberg, a main man behind “my” Delphi, is the lead designer of C#, but maybe we think alike. And with the advent of XNA I even had a connection to game development again… and it started with a C! The commonality, of course, was a similar syntax, similar principles and the idea of a virtual machine executing and “managing” my code. No changes for specific operating systems (at least in the perfect sales world ^^), just develop and it would work… now with easy native Windows “ways”!
But the thing that always struck me again was games. Even XNA seemed “unreal” for real game development.

Games are developed in C

If I had gotten one cent for every time I read this exact line on a forum, in a tutorial or in an e-mail… you know what I would be, as you are probably thinking the same right now. And I believed it! It was like this; it stays like this!
But over time I got more experienced in developing and engineering applications and solutions, and I realized that in most cases the programming language is just the tool to fulfil the requirements: and my requirement was still to make the things I have in my head!
I started to look around and found games such as Spiral Knights, Puzzle Pirates, Jake2 (a Quake2 Java port) and Chrome using Java for scripting, and even EVE Online from CCP: a server and client nearly 100% developed in Stackless Python, a dynamic programming language in a multi-micro-threaded environment. Easy to read and learn, hard to master.
But probably the biggest counterexample today is Minecraft. The biggest indie sensation of last year was developed in Java, and even though I never really got into the game, I admire Notch for what he did and achieved… and everything in Java. And Minecraft was not the first: Wurm Online had already shown where Notch could go… in Java.

With these great examples of games not developed in C/C++, I felt more confident in following my own way, which I have successfully walked for years now.

To be or TioBe

I do not intend to disparage C or C++, but if I am not required to use C for the games I want to create, and other segments and industries can be conquered by languages such as Java, too (as shown in the TIOBE Index), why should I?
Especially in enterprise environments, Java is a strong candidate for projects: from a manager’s perspective, the Java salesman argues with operating-system independence, an easily extended library architecture, basic native database frameworks and UI support… sold! Enterprise Java is still a keyword for international research projects today. And with JME and Android, even the mobile sector has been invaded by Java for years now.
And with Android supporting Java, and Microsoft supporting C#, I can be everywhere: on PCs, on consoles, on mobile phones and in browsers. With languages I know, am experienced with, and that appeal to me.
So, do I still have to put all my energy into re-learning, in other languages, what I already know? Where I already have intensive practical knowledge? Where I can craft my dreams?

Ignorance is bliss

Even with my underwhelming C skills I get along very well. TIOBE proves me right, and until now I have always solved the problems given to me, or achieved and created what I wanted. I work in the games industry, have worked on large and international projects for big companies, have written some publications, and most results were accepted just fine. I even remember some projects and programs of mine that I am still proud of, and that does not happen very often, as every developer I know normally wants to change the code he or she wrote the second the last line is finished ^^.
I am aware that for the last performance tweak, for the most awesome graphics engine, I would have to use C (or assembler), and I am aware that the foundations of everything I use, such as the Java or .NET VM, require that explicit knowledge. Nevertheless, I do not claim that nobody should use C or C++. I just want to raise awareness among people who complain about others not knowing C and label them non-programmers. These guys are capable, too. And if they want to Write Games, Not Engines, they might even be better suited for game logic and not “just” tools. These guys are also able to know what really happens underneath, as that, and not the knowledge of a syntax, is the mandatory prerequisite.
Therefore, despite all my years trying to get into “the game” of learning C and C++, I turned out pretty well, with experience in large projects, systems and now games. I call myself a game developer. And if many dismiss my languages, I decide for myself that (C+)Ignorance Is Bliss…

Part of the Challenge: Show your ignorance! for #AltDevBlogADay

3…2…1… planned!

No matter what we do, whether we are agile or fall down the waterfall… whether we are senior or junior… whether it is big or small… for nearly everything we do, we have to define tasks and estimations to plan the days, weeks and months to come. And no matter what, this (especially the first) planning is in most cases (and from personal experience “most” means 90%) pretty far from what is really required in the end. The other 10% splits into 1) those who planned well but not 100% correctly, maybe using “proven” methodologies such as PERT or just estimating +30%, and 2) those whose planning perfectly fit the development (again, in my experience, normally 1%–3%).
So, you could say: just “do it” like the 1%–3% did. That would normally be the way to go, if their way actually worked. The thing is, from everything I have seen in project planning over the years: it worked purely out of luck!

I think it is fair to say that I learned project planning mostly from the practical side, always failing at what I had learned theoretically. No matter how much time I spent planning big projects, setting up tasks, goals, milestones, reviews, reworks, … it never got into that 1–3% frame.
Even with a more agile-driven approach, small sprints, good daily tasks, weekly reviews and time-consuming remodelling of the plan: if I sum up what had to be reworked every single week, I was as far off as with the initial waterfall plan. All goals were achieved and “somehow” it worked out, but it is disappointing for the one who planned to see his estimations be more of a guideline than a work plan.
Based on that experience I started thinking: what are the reasons for such divergence? What am I planning wrong? What do I have to change to fit the developers’ needs? And that is when it struck me: the developer!

…to be busy!

In all the IT projects I have worked on, most of the time is consumed by the developers, the engineers, the architects of the (mostly) software projects. Of course game design, art etc. have to be taken into account, but they often run in parallel to what goes wrong more often: the actual development or implementation! (No question, thinking lean, everybody should care about downtimes caused by unfinished output/input.)
As a developer myself who has to plan for others, estimate work and think about production, milestones etc., none of the “theoretical” methodologies ever really worked for me; they just took my time. And in most cases this time is very limited. Estimations have to be given instantly to evaluate feasibility; plans have to be set up initially to have a higher-level model to work and further estimate on. So time is of the essence, not only within the plan itself but also for the time it takes to create it. And if I have to rework it all the time (real life), I do not want to spend too much time in that phase (no time for building charts with optimistic, pessimistic and realistic plans…).

…should be enough!

Coincidentally, Jake Simpson recently gave a pretty good impression of this wonderful land where everything works out. It is known as Should-Be Land. This is normally the land the estimations come from, too: from developers who should estimate their tasks, should give an idea of how long each could take, to make a plan that also has to tie in with other departments (lean everywhere). If such an estimation fails because “36 hours should be enough!”, more often than not others who depend on you are delayed, too.
Inexperienced developers, juniors and fresh “hackers” from the backyard especially tend to underestimate the requirements of working with others: planning interfaces, building adapters to dock onto other systems and so on. Nevertheless, seniors are not better in general. People who “program” stuff normally plan only the programming time… and they do not want to plan too much of it, as a developer is often assessed on his Cph (code per hour) output and not on his quality of code, re-usability, extensibility or tests. The result, in many cases, is optimistic estimations with little or no time to even plan what you are going to develop.

…am no developer!

Another often misleading planning element is that (many) project managers, scrum masters, gantt-junkies, … do not have the best development background. Therefore the estimations given are taken as fixed. Experienced managers add 30% and plan that in. This is unfortunate, as even the best estimation cannot be rescued by simply adding time if essential prerequisites for good development are missing.

One of Two of Three

Instead of complicated methodologies or simply adding 30%–50% to an initial estimation, I split it up into the three tasks I want to see as output from a developer: the implementation (or coding, hacking, programming, refactoring, …), the planning and the tests!

  • The development is the actual implementation of the task. It may be the creation of a user system, achievements, a tool, crafting, … whatever comes to mind
  • The planning is the structuring of the work, the evaluation of patterns, architecture and interfaces to follow during development, and precedes it accordingly
  • The testing is not a QA process but the personal testing of code, the writing of (unit) tests, maybe even playing what was created, and succeeds the development

Now, instead of adding a fixed amount to a given estimation, I add tasks to the estimation. My input is the implementation estimate from a developer. Based on that, I add two thirds of it as planning and one third of that as testing, resulting in the three tasks of implementation, planning and testing with weights of 1/3 of 2/3 of 3/3. For example, if an estimate is 9 hours, I add a task for planning with 6 hours and a task for testing with 2 hours.
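As a minimal sketch of this split in Java (the class and method names are my own, purely illustrative, not from any real tool):

```java
// Sketch of the "1/3 of 2/3 of 3/3" estimation split described above:
// planning = 2/3 of the implementation estimate,
// testing  = 1/3 of the planning estimate.
public class EstimationSplit {

    public static double planningHours(double implementationHours) {
        return implementationHours * 2.0 / 3.0;
    }

    public static double testingHours(double implementationHours) {
        return planningHours(implementationHours) / 3.0;
    }

    public static void main(String[] args) {
        double impl = 9.0; // the developer's raw estimate from the example
        System.out.println("Implementation: " + impl);                    // 9.0
        System.out.println("Planning:       " + planningHours(impl));     // 6.0
        System.out.println("Testing:        " + testingHours(impl));      // 2.0
    }
}
```

With the 9-hour example this reproduces the numbers above: 9 → 6 → 2.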

Yes, the result is a very keen estimation, but the important part for me is that it covers mandatory tasks that are often forgotten and can also compensate for possible misjudgement, unforeseen circumstances, … as the package is given as one. Creating these tasks reminds the developer what he “should” do, and the derived estimations compensate for possible problems as well as fitting the real needs of the other tasks (at least in my experience).
The tasks are important, as normally you do not start hacking instantly. Evaluating existing code and interfaces, and elaborating which architecture or pattern to use, is often more practical, and a necessity in general, before starting to implement (think something through before you start programming). Already knowing what the result should be helps the implementation. And the testing part may be the coder’s worst nightmare, but again, it is a requirement.

The most important point for me is: it’s easy! I can derive it in my head, I have a most-likely accurate estimation (the future may prove me wrong ^^), and I won’t forget the importance of planning and testing.
If you follow different approaches, the weighting can also be adapted, either by mixing tasks or by changing the base weight. For example, if you are following a test-first approach, you can switch the planning and testing tasks, as the testing in TDD also partly covers planning. Or you can change the base to 4 and plan 1/4 of 3/4 of 4/4, meaning for our example: implement for 8 hours, test-first for 6 hours and plan for 2 hours (bear with me; I selected easy-to-calculate estimates).
Which base to use depends on personal experience, the project and, most importantly, gut feeling. For me, a third for general estimations and a fifth (1/5 of 2/5 of 5/5) for more specific tasks has paid off. But in general, split up into my three main tasks, I instantly have an estimation ready that at least fits my real world.
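The base variants can be folded into one parameterized helper. A sketch (again, the naming is mine), reading each fraction as a share of the previous task, which reproduces the base-3 example exactly; for other bases the smallest task may come out fractional and need rounding:

```java
// Generalised sketch of the three-task split: each fraction is applied to
// the previous task, e.g. base 3 uses (2/3, 1/3), base 5 uses (2/5, 1/5).
public class TaskSplit {

    // Returns {implementation, second task, third task} in hours.
    public static double[] split(double implHours,
                                 double secondFraction,
                                 double thirdFraction) {
        double second = implHours * secondFraction;
        double third = second * thirdFraction;
        return new double[] { implHours, second, third };
    }

    public static void main(String[] args) {
        // Base 3 (general estimations): 9h implementation -> 6h planning, 2h testing
        double[] base3 = split(9.0, 2.0 / 3.0, 1.0 / 3.0);
        System.out.printf("impl=%.1f plan=%.1f test=%.1f%n", base3[0], base3[1], base3[2]);

        // Base 5 (more specific tasks): 2/5 of the estimate, then 1/5 of that
        double[] base5 = split(10.0, 2.0 / 5.0, 1.0 / 5.0);
        System.out.printf("impl=%.1f plan=%.1f test=%.1f%n", base5[0], base5[1], base5[2]);
    }
}
```

The fractions are plain parameters, so mixing or switching tasks (e.g. the TDD variant above) only changes the arguments, not the code.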

…should work!

Please keep in mind this has no theoretically proven background, just my experience over years of experimenting with different approaches and the methodologies given in the literature. Everything depends on your environment and personal likes and dislikes. It “should” work for other teams, too. I have used it in several personal standalone and ongoing project estimations and, at least for now, it has fit best.
In my environment, with the time given and the amount of work to do, this approach works. It is never really off track, it reminds people about planning and testing besides the actual hacking, and it helps me easily keep track of developments without spending too much time on overblown concepts that fit neither my personal habits nor the “real” developer.
Of course there are also drawbacks, such as too little or too much planning. If you split up e.g. user stories into tasks such as “build divideByZero() function”, “create class object”, “write SQL statement for querying all users”, … you will end up with unnecessary tasks because of their simplicity. In such cases the user story “should be” the thing to estimate and divide onto the tasks, or you reduce the base and introduce a zero/x task.
So this may not be the 100% 1–3% approach, but it fits me best and therefore leads me into that frame more often, as the important thing is the flexibility that fuels this approach… and that can make it work for you, too!

Written for #AltDevBlogADay

What happened to the middle class of gaming?

As time was short this week, just a small post, but one thing in the last days really intrigued me: just recently at GDC, Epic‘s own (in)famous Cliff Bleszinski stated: “The middle class game is dead!“ Now, “Cliffy B.” is known for polemic statements, but if you look at the current games in the Top 10, being released or being covered in magazines and online portals, he unfortunately has a point. But as I do not agree with Mr. Bleszinski, I thought I had to do a little rant on it.

State of the “Art”

To quote what was said:

It needs to either be an event movie – day one, company field trip, [Battle: Los Angeles], we’re there. Avatar – we’re there. The Other Guys starring Will Ferrell and Marky Mark? Nah, I’ll f****** rent that, I don’t really care, right?
Or it has to be an indie film. Black Swan – I’ll go and see that. I’ll go to The Rialto or I’ll go to the triple-A Imax movie. The middle one is just gone, and I think the same thing has happened to games.

What he is making is a very logical comparison to the movie industry and the people watching movies. And I think we have all had that kind of thinking when going to the cinema, at least once.
In general, the movie and gaming industries have undergone a big change over the last years through the advent of fast internet connections, a wider offering of “different” games delivering the same thing, and of course piracy. The movie industry had many problems with the “new media audience” and tried to force a new way of thinking onto an old structure… and in most cases failed (not counting things like the iTunes Movie Store, Netflix, … as these are 3rd parties)!

Gaming did so, too! By building restrictive copy protection, enforcing keys for online play etc., the industry of course tries to protect its property and investment, but it also has to deal with a not-so-new audience, in fact its only audience: the media audience that knows the possibilities and knows how to spread, or work around, what it does not intend to deal with (again, some “kind of” 3rd parties succeeded, such as Steam, selling other games more successfully).
But from there, two new branches (re-)opened for gaming: 1) widen the audience with titles like Just Dance and platforms like the NDS or Wii, and 2) work with the “independent idea”, doing something special, even odd (being “Sundance“). These also broadened the art of games and the craft of developers. But if Triple-A and indie developments really are the only things succeeding, the whole Wii line-up probably has to be canned.

If we just take a look at sales, Top10s and media coverage right now, everything pretty much underlines what Bleszinski stated:

  • The Top10 is ruled by Killzones, Call of Duties, Fight Night Champions, Bulletstorms, …
  • Call of Duty: Black Ops sold over 20 million units
  • Arkham City, Bioshock: Infinite, Guild Wars 2, Prey 2, … dominate the gaming news sites

Not much variance in what is the main source of information and “opinion” making for 90% of all gamers. I may be a developer but I am foremost a Gamer! And having grown up in a time when every game was Indie or middle class and could get the attention it wanted, this somehow makes me sad. So why is it that there is no middle class in gaming any more?

Today’s Ratings…

You could think that gaming and game reviews follow the rule of the Highlander: There can be only one! Games seem to need a 90+ rating to sell and they have to be the “definitive” thing. Every game has to duel against the genre highlight and nothing is accepted besides it. Every MMO has to compete against World of Warcraft, every Action-RPG against Mass Effect.
Now, with this attitude in reviewing games, always coming back to: This was a good game, but Call of Duty has more players online!, how can middle class games even get the attention they deserve? How can a middle class FPS compete against billions of dollars in revenue? Only “Indie Games” seem to be given that little bonus, so that even a game like Magicka (great game) often just got a 7 out of 10 but is a success (a death sentence to other games). But wait: Magicka was already known and selling well before any reviews with great ratings!?

…and the perception?

Reviewers and PR often argue that they have to force the 90+ and advertise the hell out of a game just to create a “deceiving perception” that it is the next great game and needs to be bought. Now, if you just read the comments below reviews and under in-game videos that may show a little lag, you could think those arguments are correct. And when sales like those of Call of Duty: Black Ops pop up, everything seems to confirm that “their way” is right.
Now, everyone wants to fire that “one bullet”, wants to be the next Call of Duty or WoW (not even Blizzard can do another WoW!). But besides those, what really sells are a Pokemon, a Mario, a Just Dance, Kinect and its Adventures, and the many other (not only Wii) titles that everyone watches but somehow nobody really sees. These are not necessarily games that are labelled triple-A or covered on the gaming sites all day long (up to the release I didn’t even know about a new Pokemon, but I like Black&White ^^), but they are nevertheless sold, are in Top10 charts and are even fun to play. I will skip the fact that in some purely economically driven stock corporations these numbers seem not to count.

Media Coverage

How games are covered in magazines and on online sites is very important nowadays. 100 Action-Adventures against 100 First-Person-Shooters against 100 whatever games compete for the money of the gamer. It is not enough any more to release some footage near release. You have to be in the gamer's mind for months before the release. You need to be watched to be recognized, you need to be waited for to be a “seller”. But did people really wait for the Top10 game Just Dance 2?

The recognition long before the release is produced with “unreal” information. Costly render trailers provide visual media entertainment in place of the real gaming. Produced in parallel to the games, sometimes by external studios, they try to establish a name in the gamer's mind. Games like Deus Ex: Human Revolution were noted for nearly a whole year just because of an impressive render trailer. Only now are some people starting to report from real playing sessions. If these were anywhere near mediocre, the game would be criticized prior to release because of the faked expectations. In this particular case even two expectations: because of its predecessor and the really incredible trailer. If I think about the Video Game Awards last year and the trailers there, I could start thinking that gaming is not important any more, as I haven’t seen much gameplay… the important part in the end!
A different game with a different kind of media coverage is Limbo. Videos, previews, screenshots etc. always showed gameplay and people got intrigued by the game, by the art, by the style, not by a “produced emotion” but by what they felt themselves watching the real game. Now, I am always thinking: if they only show a render trailer, the game probably isn’t that intriguing to watch… or is it?

…and the perception?

Basically you could say: WYSIWYG! We can only go out and buy what we know of, what we see. That is why PR and Marketing, the coverage in the media and the outcome as a rating are so important to be recognized and to get a claim on the buyer's money. That money is limited and more and more players try to get a share of it. Once the music industry complained that piracy was destroying its revenues. No question, piracy is bad, but an important point is missing: the ever-growing entertainment offering battles for the same share! New technologies, more games, more movies, … broaden what has to be consumed and therefore fight for the right to be noticed.
I remember a time when I would pick up a gaming magazine at my local store (yes, a printed one!) and would have an overview of every single game to be released in that particular month and nearly everything in development. The market was growing but manageable and you were able to keep an overview of everything interesting. Nevertheless, we still remember the time and those “glory days” of gaming. Games such as Outcast or Might and Magic VII keep me at my machine still today (just bought them at GOG).
Nowadays, such games would hardly be noticed if not covered exclusively, or they would not stand comparison against “that one” genre defining game.

The hidden Middle Class

A problem, as always, is the definition of “middle class” and especially “indie”. Middle class is not necessarily A- or AA-Games, and Indie does not mean: one guy sitting in his room developing the next extraordinary gaming evolution.
Of course, games such as Braid or World of Goo with their extremely small teams are top-notch productions and extremely great games. These are also often used as the definition of Indie games. Besides providing an interesting game design and gaming twist, both games are extremely polished, with sometimes incredible graphics and beautiful music: a level of art usually only associated with so-called Triple-A games.
But besides these two examples, what about others? Twisted Pixel, the developer of the fantastic ‘Splosion Man, is no “Two and a half Men” team but a team of about ten doing an extraordinary, high production value game with good design and a special twist. The new developer Adhesive Games just showed off their premiere title Hawken, which was so impressive that Kotaku labelled it the most beautiful indie game. But a giant-city mech action game with an impressive graphics style and city view, an indie game? Introversion, coming from Indie “heritage”, is middle class nowadays. This is also a good example of what indie development becomes after that “one hit”: middle class! You cannot stay “indie” if you are noticed and people follow what you are doing and especially start building up expectations.

All these and even more, such as contract developers (e.g. Shin’en), are the middle class no one notices. This “hidden” middle class is what provides the foundation for our gaming, for our everyday entertainment. Just as the movie industry does not consist only of productions like Avatar, gaming cannot and will not consist of only triple-A high budget productions. Many try to achieve just that: a triple-A 90+ international seller every year! But from a real economic viewpoint this is nonsense. You neither bet on just one horse nor build on one pillar. Sometimes productions have to balance each other out because success cannot be planned, especially in entertainment. If you have bad luck and your extremely awesome military shooter comes out right in the week after a catastrophe, your game is doomed. The movie industry has known this for years and paid for it. These errors should not be repeated.
And from my own experience, another player, the browser games, are not developed by some “PHP script kids” any more but are productions with larger teams trying to raise the production value to the level of standalone games. Right now they are probably the widest middle class: games everyone plays, but nobody knows.

No question, I love the so-called AAA-Titles such as Uncharted, Gears of War or Killzone. But to me the middle class definitely is not dead. It again depends on how we look at it, how we rate it (without prejudices) and how we classify “The Other Guys” in gaming, such as downloadable games, “indies” and online games (as well as my infamous Casual Games). Because we have to remember: even a company such as Epic started off as a middle class developer with Indie developers. “Cliffy B.” may be right for the games he intends to create, but the middle class game is not dead in general! Or would you label EVE Online as middle class? Just based on money or CCU you would have to, if you think like Cliff!

PS: A polemic assumption by me: if the average gamer has $100 each month, he would buy more games if the prices were lower, and therefore a wider range of productions would be more effective and less risky!

Written for #AltDevBlogADay

There are no Casual Games…

…but Casual Gaming (and Casual Gamers)! Wait, wait… Before you stop reading and ask yourself why I would make such an offensive proposition, please hear me out.

My Origin
I have a very tense relationship with the terms Casual and Core Games that is founded on three things: 1) I am getting older and my best “Gamer” times are over ^^’, 2) I develop Browser Games and 3) I develop in Java! A very bad combination to go into a Game Developer discussion with… trust me!
I have been a Gamer for over two decades now, started with Pong and played a high percentage of every mentionable game ever made. Studying Computer Science and developing Games was a reasonable step and I like what I am doing now, I am good at what I do (yes, I am ^^), developing Games, and I am proud of what I have achieved until today. But nowadays if I mention that I develop Games, I get asked:

  • “Oh, very cool. What Games?”
  • “Browser MMOs”
  • “Ahhh…*pause*… Casual Games!”

Even while swallowing the bitter taste of that sentence I somehow feel undervalued for being part of one of the largest Browser Games around, over four years old, still growing and established way before Facebook. A massively simultaneous multiplayer game, relying heavily on PvP, time intensive and based on a very technical story. All features that are normally associated with so-called “Core Games”. But if a Browser Game features such elements, why is it that the term “Browser Game” is instantly equated with “Casual Game”? And why in general does “Casual Game” lead to the idea of “not a real game”?
In my specific case Browser and Casual are not the only evil terms. The dialog above often continues as follows:

  • “Why do you think I develop a Casual Game?”
  • “It’s in the Browser… probably Flash.”
  • “I am Java Developer.”
  • “I think you said you develop Games. How can you develop Games in Java?”

But that is another story I will cover in another post ^^’.

My Problem
Another thing that led me to my “new” thinking was the evolution of my own gaming habits. As mentioned, I am a Gamer, a Core Gamer you would say; I played games from Wolfenstein 3D to Call of Duty: Black Ops, from Half-Life via CS to TS2, from Zork to The Whispered World. I owned and own Handhelds, Consoles and PCs. I spent ages playing through games each day after school, alone or with friends. But over time my gaming habits changed through my studies and now a whole lot of work. I am still a Gamer, just tested the newest Crysis 2 and Bulletstorm demos… but the actual gaming sessions changed!

I am part of the working community now. Most of the day I am sitting at my work desk and when I get home I have some commitments to keep or just want to get some peace. Nevertheless, my Civilization is teasing me to conquer the world, while Snake is asking me to finally complete the Mission, besides Kane & Lynch still arguing. Everything becomes a Quest that Puzzles me and I get Angry like the Birds outside my Windows (couldn’t find a transition to my iPad here ^^).
So, every evening I really have to decide what I do and IF I play. And even if I play, the time a playing session takes has shrunk a lot nowadays. For example, I played Plants vs. Zombies as well as Dead Space. I played Mirror’s Edge as well as Angry Birds. All four games would be categorized into Casual and Core Games, but the way I played them somehow did not fit the definition. I played Plants vs. Zombies for hours straight but Dead Space actually in 15-20 minute chunks until the end (not only because it was scary). Mirror’s Edge I played through in one session but Angry Birds just 15 minutes on some evenings.
Now, with the advent of all this classification that somehow does not fit my overall love for games of every type that “entertains” me, I asked myself: Am I doing something wrong? Or is the classification not practicable?

The Definition
With that many inconsistencies in my general understanding of Browser and furthermore Casual Games I tried to find a conclusive definition. During that search I noticed that I had never read so many different ways of defining something, especially as most definitions come down to the attitudes of the writer. Because of that, let’s start with a “not so ideal” example from the Urban Dictionary:

Casual games are any kind of game that is over hyped and over rated or just the exactly same thing as a previous version that was over hyped and over rated, these games are known by gamers as “crap” because even with all the perfect scores the games still have mediocre graphics and shitty plots that casual gamers think are good. Usually the only thing that makes a casual game not-total shit is the multiplayer; otherwise these games would get ratings lower than dirt.
With shitty graphics and a generally horrible campaign mode, the halo series is the indisputable king of casual games.

But all jokes aside, for a more serious definition from the IGDA Casual Games SIG from 2005/2006:

The term “casual games” is used to describe games that are easy to learn, utilize simple controls and aspire to forgiving gameplay. Without a doubt, the term “casual games” is sometimes an awkward and ill-fitting term – perhaps best described as games for everyone. Additionally, the term “casual” doesn’t accurately depict that these games can be quite addictive, often delivering hours of entertainment similar to that provided by more traditional console games. To be sure, there is nothing “casual” about the level of loyalty, commitment and enjoyment displayed by many avid casual game players – just as there is nothing “casual” about the market opportunity and market demand for these games.

That is an interesting definition. Let’s have a look at some more. Wikipedia describes:

Most casual games have similar basic features:

  • Extremely simple gameplay, like a puzzle game that can be played entirely using a one-button mouse or cellphone keypad
  • Allowing gameplay in short bursts, during work breaks or, in the case of portable and cell phone games, on public transportation
  • The ability to quickly reach a final stage, or continuous play with no need to save the game
  • Some variant on a “try before you buy” business model or an advertising-based model

The CasualGameWiki extends the definition with specifics about the price point and the platforms:

  • Style Of Play: Casual Games are now considered “games for everyone” – with a special emphasis on whether your mom can play it.
  • Distribution: Casual Games are frequently distributed with a “Try Before You Buy” model. Where a person can play for an hour for free and then decide whether to purchase or not. This model of play grew out of the Shareware distribution model.
  • Casual Games are usually sold for $19.95.
  • Platforms: Casual Games can now be found on Cell Phones and Consoles such as XBox 360 via the Xbox Live system.


Casual games are most often played via a Flash or Java based platform on a PC, but are now appearing in larger quantities on video game consoles and mobile phones.

The definitions often come with a timeframe of around the millennium or 2001.

An Interlude
Let’s move away a little from the term “Casual Games” and the definitions given and have a look at that last sentence: the year! If we take a look at what happened and was released around the time that is somehow “defined” as the origin of the term, we will find things like the Playstation 2 (2000) and the Xbox (2001). While the Playstation 1 was still a child of the “old” console generation, the Playstation 2 as well as the Xbox introduced the “new” generation of consoles, away from the old Entertainment Systems we adored. More important is that with the new generation the games from the “old world”, the Personal Computer, and the consoles started to converge. Complexity from the PC moved to the consoles and the simplicity of the consoles moved to the PC.
On the PC, Flash was released in version 4.0 in 1999 and one year later in version 5.0. These introduced and extended Flash’s own programming language, ActionScript. From here on Flash was not only a way of playing frames off a timeline but allowed conditional actions on top of them. More and more Flash Games started popping up. Around the same time Java 1.3 was released, introducing the HotSpot VM and building the foundation for JavaME (J2ME at that time) which brought gaming very heavily to normal phones.
This interlude is important to understand how Games opened up to a larger community (yes, long before the Wii), away from the nerdy PC hardware geeks that “pimped” their autoexec.bat to play games. As of today, these geeks make up a large majority of the people defining and mostly complaining about “Casual Games” (no offense).

The Ambiguity
If we sum up the definitions the following list could be seen as a general understanding of Casual Games:

  • Easy to learn/simple gameplay
  • Simple controls
  • Forgiving Gameplay/quickly reach a final stage
  • Gameplay in Short Bursts
  • Games for Everyone
  • Up to $20
  • Try before you Buy
  • Flash and Java Games on the PC side/DLGames on XBoxLive, PSN, etc.
  • Since 2000/2001

This list looks pretty decent, doesn’t it? As you can guess from the headline, the list is not as decent as I hoped it to be. Let me refer to a handful of games that somehow should fit these rules and are named casual but do not really allow a distinct identification of what a casual game should be.

Let’s start off with Plants vs. Zombies (one of my favourites of the last years). This modified Tower Defense game is a success on every platform. First released in 2009, it sold and sold and people rated it effusively. It’s a great game that just brings a ton of fun. If we look at our list, it looks pretty good: it costs under $20, the controls are simple and it is downloadable on PC, iOS and XBoxLive. Regarding the gameplay, it is simple and easy to learn… thanks to the many tutorials, and can be hard to master. This makes the game attractive to play for just some minutes or for hours fiddling with new strategies, therefore attracting Casual Gamers and Core Gamers as well as hybrids like me. It provides Casual Gaming and more, for Casual and Core Gamers. So, is this a Casual Game?
My second example would be Super Meat Boy (and N/N+ in parallel). This 2010 hit platform game has gone through different stages of the list. It was a Flash Game first, then ported, tuned and extended for the PC and consoles. Over 300 mostly short levels (short bursts) with a very gory portion of simple gameplay. It is also cheaper than $20 and has some very simple controls. But it is extremely hard to master, forgiving not the slightest error, with every mistake ending in a pure gore fest. And actually (as PETA already noticed ^^’) this is no “game for everyone” any more because of its scenario and its quickly increasing difficulty level. Hardly a Casual Game, isn’t it?
My third example is Prince of Persia (the one before the crappy Movie Game). Not a typical Casual Game, and more expensive than $20. But if we look at some definitions it fits as well as the previous two examples: it has forgiving gameplay (yes, I mean you, Elika), had a Demo and was easy to learn, though not with the easiest controls. The save points were pretty frequent and it had a scenario that even my casual sister was able to relate to. Still Core, or did it become Casual?
My fourth example is Lara Croft and the Guardian of Light. A franchise that may have brought many women to gaming, featuring intense 3D platform gaming and 3rd Person Shooting gameplay. With GoL it became a DLG with a strict isometric perspective. It’s on PC and consoles, downloadable, costs under $20 and has (in my opinion) simple controls to master the finely placed action and puzzles. Now, are Tomb Raider and Lara Croft becoming casual? Is it just that game? Or does Lara Croft not count?
My fifth example (to use the full hand) would be every Wii Game. Nearly every gaming site and every “Core Gamer” defines a Wii Game as a Casual Game. Why? Because your Family got into “your hemisphere”?

In general, if we just take some of the bullet points, the definitions describe things that nearly every game, no matter whether Casual or Core, wants to achieve nowadays or that are a general gaming tradition:

  • Try before you Buy

Demos, Shareware, … Nothing new to the experienced Gamer and Games in general.

  • Gameplay in Short Bursts

Actually, this is something that has popped up more and more since the advent of consoles. PC users are used to saving games, being able to use up space on their hard disc. For console gamers saving was no natural thing, so developers very often used stages with manual and automatic save points that were not spaced too far apart, so as not to enrage the player on death. I mentioned Dead Space and my very tight gaming sessions playing through it. This was only possible because of the very “controlled” stages and their save points that I could reach in the given time frame.

  • Forgiving Gameplay/quickly reach a final stage

This as well is something that especially First-Person-Shooters nowadays provide to the user. “Old” Gamers remember a time when it was a necessity to know where the next HealthPack was. Today, we rely on a regenerative system, often presented with the argument of being more accessible to more gamers (“games for everyone”). Becoming casual? And regarding the second part, I could get heretical now, but games such as Modern Warfare do not really provide that much gaming time to the user any more. 5-6 hours is sometimes normal.
The problem is that gameplay matching the given definitions is way older than the term itself. This is why gameplay elements can hardly be used to define the games themselves. What is left are technical definitions, prices as well as hardware to describe the so-called “Casual Games”, and these distinctions blur more and more.
So, with all this ambiguity coming from the point of defining the Game, wouldn’t it be better to define the interaction?

Classic Classification
We tend to define things based on their surroundings and the “object” using the “subject” (“People Playing a Game” in our case) because that is what we visually perceive. And as it is easy for us to define unknown things from what we know, we map the Browser onto our experience of Casual Games, as the Browser was never a dedicated environment for games but for the many things that so many people do, not only gamers. Therefore, it is very easy for “Core Gamers” to define games such as Plants vs. Zombies as Casual Games, as their Moms or Dads are playing them.
The problem with the classification and the according definitions of Casual Games is that they try to define hard constraints that these games should fit into. In a time where it becomes harder and harder to “just” define the genre of a game (e.g. Puzzle-Survival-Horror Adventure-Games), it is even harder to define an umbrella term for games in general. But my personally strongest point regarding the definition of “Casual Games” is that most of the people that play “Casual Games” do not even know that these are “Casual Games” (or did your Mother or Sister ever talk about Casual Games when playing Wii or DS?).
The classification normally is given by “Core Gamers”, Developers or Game Editors that (in many cases) want to separate themselves from these “unappreciated” games. But what we were able to see from the definitions normally used to describe Casual Games is that they do not fit the real world any more. Especially as these games evolved over the last years, away from the most simplistic Flash Games to some of the best gaming experiences of the last decade (e.g. Darwinia, Braid, Limbo and more).
What is required is to divide not only the Games but the interaction, the gaming. For gameplay we have genres. Now we need a new graduation for Facebook, Flash, Indie and everything else that evolved our gaming experience (and will in the future). As to what this New Classification could be, I can give you no answer. This needs a long discussion and a broad overview of everything gaming has to offer nowadays.
But what all Gamers need to do is to be open minded to new possibilities and not argue with the term “Casual Game” anymore, especially those that call themselves Core Gamer. I think we all do not want to hear another: “Epic Mickey is a Casual Game. It’s on the Wii!“

My Conclusion
My intention was to make a polemic assertion, present it with my experience and many questions, and conclude with my own way of thinking. If you were looking for THE definition of Casual Gaming, this post does not deliver it. It just brings up some things that do not work out in our current scheme of games classification. With the ever growing amount of releases that qualify under our current definition of Casual Games, we should quickly start thinking about a new way of filtering, one fitting all modern characteristics such as Facebook, iOS and Android, Unity/Shiva/…, Steam and all the other new ways of developing, presenting and distributing games that challenge the “old way” of games development.
I started off arguing that there are no “Casual Games” but only “Casual Gaming”, and I tend to stand by this even if I give no new definition, because such a broad definition of games cannot be made if the gamers that count are so broad and different themselves. I agree that I only presented arguments for my theory, but as long as it is possible to oppugn the current definition that easily, it is in our hands to discuss and find better definitions for our most beloved games… which are changing pretty quickly right now!

Written for #AltDevBlogADay