Build Valuable Systems, Better and Faster

FooPets: Two Decades of Systems and Architectures

This is a series describing various architectures I have worked with in the last two decades and some of their benefits and issues. The table of contents of the series is here.

FooPets had several different products, but all were related to photorealistic virtual pets.

Major System Aspects

The core FooPets site was originally framed within Facebook: a small amount of HTML driving a Flash-based interactive game. The server behind the Facebook content was a Ruby/Rails application with a few standard extensions, initially running on a bare Linux box. There was no automated testing or any other code-quality tooling.

For most kinds of software, the logic and user interface are the hardest things to build. But because FooPets was effectively a 3-D movie / game, the content was actually the hardest thing to build: creating photorealistic 3-D pets took far more people-power than making those pets behave.

Later there were to be a couple more architectures:

  • An iPhone version of the pets
    • This was purely client-side initially, and later semi-integrated with the server until the game was killed
  • HeartPark, a truly 3-D interactive game (vs. the canned, limited videos).
    • The game ran in Unity and only talked to the server for leaderboards

AA-1 : Lack of promotions and regression testing is a very bad thing

FooPets was a very unstable code base for many reasons, but the lack of any ‘make sure this works’ step was definitely a huge part of it. Some team members had the ability to modify production directly, and they could do it even just before getting on a plane and disappearing for the holidays. A promotion model looks like this:

  • Developer tests things on their own machine, and when happy, ‘promote’ into the development team server
  • The development team server tests the new code check-in and sees whether it passes the baseline rules for the server
    • If not, it rejects it or starts yelling ‘foul’!
  • When the team is happy that the development server’s code base is good, it goes to the next level (QA, Integration, etc.)
  • Finally the last QA level is basically the same as production, and the final promotion should be trivial

Without this kind of staging model, you are basically just praying that production will behave about the same as whatever machine you last tested on… potentially a developer laptop very different from production.
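The staged model above can be sketched as a simple gate: code only moves to the next level when every check for the current level passes. This is a minimal sketch with hypothetical stage and check names, not FooPets code; real checks would run test suites and smoke tests against each stage's server.

```ruby
# Hypothetical staged promotion gate: dev -> qa -> production.
STAGES = %w[dev qa production].freeze

# Each stage maps to a list of checks; a check is a lambda over the build.
CHECKS = {
  "dev"        => [->(build) { build[:unit_tests_pass] }],
  "qa"         => [->(build) { build[:unit_tests_pass] },
                   ->(build) { build[:integration_tests_pass] }],
  "production" => [->(build) { build[:qa_signoff] }]
}.freeze

def promote(build)
  STAGES.each do |stage|
    unless CHECKS[stage].all? { |check| check.call(build) }
      return "rejected at #{stage}" # the stage yells 'foul' and stops promotion
    end
  end
  "promoted to production"
end
```

The key property is that a failure at any level stops the promotion, so production only ever sees code that has passed every earlier gate.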

AA-2 : Sanity testing and automation are very good things, in all kinds of situations

When I joined FooMojo, I expected to work on the main code base and the iPhone application as the ‘most important’ code bases. These two code bases needed some work, but ultimately the long pole for the company was content. And creating content required a lot of computational power: rendering graphics, connecting images into movies, and similar activities that were only semi-automated. The biggest productivity improvement came when a few of us focused on making the whole pipeline almost completely automated, with failures identified and easily re-run.

At thirty frames per second, a 10-minute set of clips has 18,000 frames that have to be rendered and composed. The frames might have different kinds of issues, but the most likely issue is that they didn’t get rendered at all, which is pretty easy to identify.
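The "didn't get rendered at all" check is just set arithmetic over frame numbers. A minimal sketch, assuming a hypothetical `frame_NNNNN.png` naming scheme (the actual FooPets layout isn't described here):

```ruby
require "set"

# At 30 fps, a 10-minute clip set needs 30 * 60 * 10 = 18,000 frames.
FPS = 30
MINUTES = 10
EXPECTED_FRAMES = FPS * 60 * MINUTES # => 18_000

# Given the frame file names that actually exist on disk, return the
# frame numbers that still need to be (re-)rendered.
def missing_frames(rendered_names, expected = EXPECTED_FRAMES)
  rendered = rendered_names.map { |name| name[/\d+/].to_i }.to_set
  (1..expected).reject { |n| rendered.include?(n) }
end
```

Feeding the result back into the render queue is what turns a semi-automated pipeline into one where failures are identified and easily re-run.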

So, similar to DevOps, where you apply software to computer operations, DevGraphics applies software and development techniques to the graphics pipelines of movies and high-content games.

AA-3 : Using advanced technology can make things very easy and impressive

The concept for HeartPark started during the post-Christmas “break”, and in under two months a fun game was in all users’ hands. By combining the skills of a very small team with a very advanced technology (Unity 3D), we went from idea through prototype to live in a very short time.

Sometimes the technologies used could be considered ‘prototypical’: you could never support 1MM users with them (due to, say, hardware compatibility issues). But figuring out whether you have a viable product for a viable market, and pivoting as you do or don’t, is far more valuable than getting the technology ‘perfect’ the first time. And even more fun: sometimes the nonviable technology becomes mainstream enough to use by the time you have the right product for the right market.

AA-4 : Make sure clients “always work” or Apple is smart about devices

A critical part of getting through the Apple AppStore process was making sure your application obeyed a bunch of rules. Some of these were human-factors rules to make sure the product was pleasant and conformant with iOS. But one of the core rules was that applications must work even when the device was not in a suitable environment. The application could be thoroughly hobbled by the environment, but it must launch, interact with the user pleasantly, perhaps explain and resolve the situation, and shut down cleanly no matter what. An application / client can’t fail without explanation, and really can’t fail at all; it can only behave in a restricted but proper way.

This “always work” requirement then leads you to a decision tree around how well a client works in a degraded environment. It definitely must work, but how well?
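That decision tree can be made concrete as a startup-mode selector: the client never crashes on a bad environment, it just picks the best mode the environment allows. This is a minimal sketch with hypothetical environment checks and mode names, not Apple's actual rules or any real FooPets code:

```ruby
# Hypothetical "always work" startup decision for a degraded environment.
# The client always launches; the environment only decides how much it can do.
def startup_mode(network_up:, storage_ok:)
  if network_up && storage_ok
    :full             # everything available: normal gameplay
  elsif storage_ok
    :offline          # no network: play locally, sync leaderboards later
  else
    :explain_and_exit # can't run at all: tell the user why, then shut down cleanly
  end
end
```

Even the worst branch is a deliberate, pleasant behavior (explain and shut down) rather than an unexplained failure, which is the heart of the rule.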