Build Valuable Systems, Better and Faster

Polyglot DevOps

During the last decade, I did quite a bit of launching IT and product operations as a ‘sidelight’ of my main product development work: build the product, launch it operationally, and grow/maintain it going forward. This pretty well matches the responsibility of a DevOps role as people now use the term, so I seem to be among the earlier ‘DevOps’ people. At first I had a data center (racks at Hurricane Electric), later VMware ESX in that data center, later servers running in other people’s data centers (e.g. Rackspace), and then servers running in true clouds (e.g. AWS, GCE, etc.)

A number of tools have appeared to help automate operations, and each seems to have a fair number of acolytes. One of the issues with software is that there is a lot of it, and it takes quite a bit of time to become familiar with all of (a) software design, (b) software languages and (c) the problem domain itself.

Although it was not really my ‘direct intent’, over a decade I put thousands of hours into DevOps. We needed a running product, and I became more and more concerned that operations itself needed to be capable and software deployment seamless. Part of this may have kicked in when I lost an entire Thanksgiving vacation in a data center fixing someone’s very bad IT.


You really shouldn’t talk about something you don’t know, and ideally you know quite a few different things so you can compare them to each other. Or you can try to leverage someone else’s work and evaluation to help yourself go in the right direction.

The ‘Taste Of’ series is meant to help people compare different programming languages solving a similar problem. The ‘Architecture’ series is meant to help people see similarities and differences of architectures solving various problems.

This comparison describes a number of plausible ways to provision and deploy software systems. Depending on your goals and background, they are all viable. Some are clearly better for most teams because they deal with critical needs very effectively, while others handle needs some teams might have and others never will.

A list of approaches is below.

  • Copy-Paste Provision – Have a script designed to provision a machine, and copy-paste into the console to make it happen
  • Machine Image – Have a virtual machine image that you can clone and launch
  • SSH-Provision – Have a script that executes on any number of machines
  • Jenkins (Pipeline) Deploy – Have the automated pipeline automatically deploy/provision
  • Bash-Up – Have a script that a machine can automatically execute when baseline provisioned
  • Git-Deploy – Have one or more repositories with changing contents and machines watching (PushMePullYou)
  • Provisioning DSLs:
    • Chef
    • Puppet
    • Ansible
    • Salt
  • BOSH
  • Docker
  • RightScale
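As an illustration of the SSH-Provision approach above, here is a minimal Python sketch (host names, user, and script path are hypothetical, and the ssh commands are only built, not executed):

```python
import shlex

def build_provision_commands(hosts, script_path, user="admin"):
    """Build the ssh invocations that would run a provisioning
    script on each host (dry-run: commands are returned, not run)."""
    commands = []
    for host in hosts:
        # 'bash -s' reads the script from stdin, so the script file
        # never has to be copied onto the target machine first.
        cmd = f"ssh {user}@{host} bash -s < {shlex.quote(script_path)}"
        commands.append(cmd)
    return commands

cmds = build_provision_commands(["web1", "web2"], "provision.sh")
for c in cmds:
    print(c)
```

In a real setup you would hand each command to a shell (or use a library), but the shape of the approach is just this: one script, any number of machines.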


Software operations is software running. Development operations are within the development team, and production operations are for the benefit of users outside the development team. DevOps goals can be for either development or production. Some of them include:

  • DOG-1: Running the software successfully with the intended amount of usage
  • DOG-2: Handling spikes of usage successfully/gracefully
  • DOG-3: Being easy to diagnose and repair issues
  • DOG-4: Knowing what version (of each/all components) is running in a given environment
  • DOG-5: Easily deploying new versions
  • DOG-6: Automatically deploying new versions
  • DOG-7: Being able to tune system components to behave optimally
  • DOG-8: Being able to understand what the working environment looks like and how it works
  • DOG-9: Not having a single-point-of-failure (SPOF)
  • DOG-10: Having development and production mirror each other as much as possible
  • DOG-11: Being able to deploy to a new data center easily


In the last few years, a trendy new term has come up: “microservice” and a microservice architecture. I have been building software for a long time, and have used a service-oriented-architecture (SOA) and component-oriented-architecture (COA) for many of the applications / systems my teams have built. Sometimes an industry is missing a shared term for things people have done for decades. The microservice term was new to me, so after having some discussions with people from ThoughtWorks and other companies, I wanted to get a more definitive definition.

So far, this article seems like a good intro: Microservices by Lewis and Fowler

If you read through the core issues that a microservice architecture is trying to address, they seem to distill to this:

  • Have an architecture where you produce working ‘product’ continuously
  • Your ‘product’ will be deployed and you need to be responsible for that being successful
  • Don’t make the architectural components too big for a team to understand and successfully build and maintain

These seem very practical and straightforward architectural guidelines. In both startups and enterprises, you are delivering products that the company needs or sells, so you should always have your focus be on achieving that.
I agree that some teams seem to lose this, so keeping them reminded is a good idea.

Good Idea, Horrible Name

Given the above reasonableness of the concept, this would be a good thing to have a good name for. Maybe something like:

  • Service-Component Architecture
  • Business-Service Architecture
  • Right-Sized Service Architecture
  • Deployed Service Architecture
  • Composed-Service Architecture

All of the above pull at least some interesting aspect of the solution or problem into the term.

Unfortunately, pairing ‘micro’ with ‘service’ doesn’t do anything other than imply ‘small’ or ‘very small’. A ‘microscopic’ entity is too small to see. A ‘microprocessor’ is a processor that is powerful in abilities but physically small.
I am not aware of any word containing ‘micro’ that implies what the ‘micro’ of ‘microservice’ is trying to imply.

Bad names lead to bad architecture

You might think this pedantic, but words are powerful. And the first uses of ‘microservice’ in the wild I heard were for services so tiny in functionality that if you scaled up to build a system out of them, you would have a nightmare of managing components. It is fine to do something small as a test run, but services need to be cohesive and as comprehensive as possible to avoid spiralling out of control with dozens of service components wired together haphazardly.

Trendy/Transitional vs. Foundational

My guess is microservice is somewhat trendy and will die because it is a poor term. A more appropriate word is again ‘service’ or ‘service component’ or ‘right-sized’. You want an architecture that has a reasonable number of services organized/composed in a reasonably understandable way. One way you might organize things is by business function, another is by core infrastructure capability (e.g. caching), another is by ‘rate-of-change’, and another is by ‘risk-of-change’.

Monoliths are incredibly easy to understand, so they are actually a good starting point. Your goal is to deliver a valuable product, so “start with” doing that.

As long as you are good at splitting monoliths (dealing with inter-service calls vs. in-memory calls, providing a facade that hides the change, etc.) and leverage third-party service components when suitable, you should have a pretty effective architecture. You can split early (before production) or later, but keep your eye on delivering valuable product to your company.

Basically review ‘Design Patterns’ and think at the service-component level.

PeerCase: Two Decades of Systems and Architectures

This is a series describing various architectures I have worked with in the last two decades and some of their benefits and issues. The table of contents of the series is here.

PeerCase was a mobile-first medical communication application that enabled community doctors to get expert advice from specialists.

Epocrates: Two Decades of Systems and Architectures

Epocrates was building a mobile-capable EMR.

Vive: Two Decades of Systems and Architectures

Vive had a single product: a wellness coaching system designed for both individuals and groups. I was interested, and partially ‘won’ the interview process, because I could help game-ify the product.

Major System Aspects

When I joined Vive, the system was implemented in Flash and Microsoft .Net on proprietary servers. This had a couple big issues:

  • It was meant to be mobile, and especially iPhone
  • The server code was not really functional

So the first architectural change was to move to an iPhone compatible front-end and a Linux based backend. If the backend code had been better, the second part wouldn’t have been done, but given it was a restart, staying in the Linux stack was a significant benefit, especially on cloud computing.

After doing a Rails project with FooMojo and a couple other ones on my own, I was pretty comfortable with Rails and liked the productivity improvements it provided from being so opinionated. At this point the Grails technology was mature enough to be stable, so I leveraged that to make Vive. After evaluating a number of front-end technologies, the team and I picked YUI as being sophisticated enough for the kind of user interfaces we needed (or thought we needed).

Although the product was meant to support iPhones, it was also meant to be ‘SMS compatible’ and most of the messaging was expected to be through SMS vs. through email and web capabilities.

A couple more system aspects that ‘came later’ or were proposed included:

  • A Unity or similar interactive game environment
  • Connecting to devices like FitBits

AA-1 : Unifying SMS and a Web UI

AA-2 : The curse of the UI, especially for mobile

The UI on small-screen form factors is very constrained and with a platform like iOS, quite opinionated. Unless your design team is experienced and skilled at this, it can cause a lot of issues and thrash.

Vive was at the very beginning of WebApps for mobile devices. Technically, everything was correct:

  • The payload was manifest-based, so updates happened correctly and behind the scenes
  • There was some amount of client detection to make sure features would work (including on a BlackBerry)
  • The layout was simple and widgets were large enough to interact with
  • And a number of other mobile-oriented aspects

Unfortunately, the requirements for the mobile UI and its relation to the desktop UI suffered from a core problem: they were ‘fancy’, ‘complex’, ‘changing’, and ‘confused’. Week upon week, a slightly different approach was used that might be ‘better’ and needed to be tested out. The functionality was always the same: it wasn’t much more than the SMS interface provided, except you could see what was happening better. This continuous thrash was a significant overhead to development, slowing work in other functional areas.

AA-3 : Gamification

The game-enabling of Vive was important to me: I wanted a race-like environment, a treasure hunt, or something similar.
The technologies behind FooMojo (Unity and similar) made this possible, but the prototypes were never taken forward. The one gamification that did make it in was that distance competitions (say, first team to 100 miles combined) were plotted out on a map that was a winding course, potentially with realistic terrain (there was a collection of maps, some terrain-based and others simply geometric).

AA-4 : Connected devices

Vive never decided to productize it, but architectural work was done on using connected devices for input. The market was still early but Garmin and others had some devices that you could pull data from.

AA-5 : Grails

Although not fully capable at the time, Grails was a great productivity booster and made it easy to transition the code to someone else by simply saying “It uses Grails, augmented by this”.

FooPets: Two Decades of Systems and Architectures

FooPets had several different products, but all were related to photorealistic virtual pets.

Major System Aspects

The core FooPets site was originally framed by Facebook, with a small amount of HTML driving a Flash-based interactive game. The server behind the Facebook content was a Ruby/Rails application with a few standard extensions, initially running on a bare Linux box. There were no automated tests or other code-quality metrics.

For most kinds of software, the logic and user-interface are the hardest things to build. But because FooPets was effectively a 3-D movie / game, the content was actually the hardest thing to build: creating photorealistic 3-D pets took a lot more people-power than making these photorealistic pets behave.

Later there were to be a couple more architectures:

  • An iPhone version of the pets
    • This was purely client initially and then later semi-integrated with the server until the game was killed
  • The HeartPark truly 3-D interactive game (vs. canned limited videos).
    • The game ran in Unity and only talked to the server for leaderboards

AA-1 : Lack of promotions and regression testing is a very bad thing

FooPets was a very unstable code base for many reasons, but definitely the lack of ‘make sure this works’ was a huge part of it. Some team members had the ability to modify production directly, and could do so even just before getting on a plane and disappearing for the holidays. A promotion model looks like this:

  • Developer tests things on their own machine, and when happy, ‘promote’ into the development team server
  • The development team server tests the new code checkin and sees whether it passes the base line rules for the server
    • If not, it rejects it or starts yelling ‘foul’!
  • When the development team is happy the development team server’s code base is good, it goes to the next level (QA, Integration, etc.)
  • Finally the last QA level is basically the same as production, and the final promotion should be trivial

Without this kind of staging model, you are basically just praying production will be about the same as whatever machine you last tested on… potentially a developer laptop very different from production.
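The staging model above can be sketched as a simple gate (the stage names and the single pass/fail signal are hypothetical simplifications of real checks):

```python
STAGES = ["dev", "team", "qa", "production"]

def promote(current_stage, tests_passed):
    """Move a build one stage forward only if its checks pass;
    otherwise it is rejected (the server 'starts yelling foul')."""
    if not tests_passed:
        return current_stage, "rejected"
    i = STAGES.index(current_stage)
    if i == len(STAGES) - 1:
        return current_stage, "already live"
    return STAGES[i + 1], "promoted"

print(promote("dev", True))    # a passing build moves to the team server
print(promote("team", False))  # a failing build stays where it is
```

The point is that nothing reaches production except by passing through every earlier gate, so production always resembles the last stage you actually tested.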

AA-2 : Sanity testing and automation are very good things, in all kinds of situations

When I joined FooMojo, I expected to work on the main code base and the iPhone application as the ‘most important’ code bases. These two code bases needed some work, but ultimately the long pole for the company was content. And creating content required a lot of computational power: rendering graphics, connecting images into movies, and similar activities that were only semi-automated. The biggest productivity improvement came when a few of us focused on making the whole pipeline almost completely automated (with failures identified and easily re-run).

At thirty frames per second, a 10-minute set of clips has 18,000 frames that have to be rendered and composed. The frames might have different kinds of issues, but the most likely issue is they didn’t get rendered at all, which is pretty easy to identify.
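Detecting never-rendered frames is a small set-difference problem. A sketch, assuming frames are identified by number:

```python
def missing_frames(rendered, fps=30, minutes=10):
    """Find frames that never got rendered out of the full set.
    'rendered' is the collection of frame numbers that exist on disk."""
    expected = fps * 60 * minutes        # 30 fps * 600 s = 18,000 frames
    return sorted(set(range(expected)) - set(rendered))

# Pretend frames 100 and 101 failed to render:
done = set(range(18000)) - {100, 101}
print(missing_frames(done))  # [100, 101]
```

A check like this, run automatically after each render pass, turns "stare at 18,000 frames" into "re-run these two".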

So similar to DevOps where you apply software to computer operations, DevGraphics applies software and development techniques to the graphics pipelines of movies and high-content games.

AA-3 : Using advanced technology can make things very easy and impressive

The concept for HeartPark started during the post-Christmas “break”, and in less than two months a fun game was in all users’ hands. By combining the skills of a very small team with a very advanced technology (Unity 3D), we went from idea through prototype to live in a very short time.

Sometimes the technologies used could be considered ‘prototypical’: you could never support 1MM users with them (due to, say, hardware compatibility issues). But figuring out whether you have a viable product for a viable market, and pivoting as you do or don’t, is far more valuable than getting the technology ‘perfect’ the first time. And even more fun: sometimes the nonviable technology becomes mainstream enough to use by the time you have the right product for the right market.

AA-4 : Make sure clients “always work” or Apple is smart about devices

A critical rule for getting through the Apple AppStore process was making sure your application obeyed a bunch of rules. Some of these were human-factors rules to make sure the product was pleasant and conformant with iOS. But one of the core rules was that applications must work even when the device was not in a suitable environment. The application could be thoroughly hobbled by the environment, but it must launch, interact with the user pleasantly, maybe explain and resolve the situation, and shut down, no matter what. An application / client can’t fail without explanation, or really at all; it can only behave in a restricted but proper way.

This “always work” can then lead you to a decision tree around how well does a client in a degraded environment work. It definitely must work, but how well?
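That decision tree can be sketched as a tiny mode selector (the environment checks and mode names here are invented for illustration):

```python
def client_mode(network_ok, gpu_ok):
    """Pick a behavior for the client rather than ever failing
    outright: full play, degraded play, or a polite explanation."""
    if network_ok and gpu_ok:
        return "full"
    if network_ok:
        return "reduced-graphics"
    # Worst case: explain the situation to the user and shut down
    # cleanly, never crash without explanation.
    return "offline-explainer"

print(client_mode(True, True))
print(client_mode(False, False))
```

The key design choice is that every branch returns *some* working behavior; there is no path where the client simply dies.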

Winster: Two Decades of Systems and Architectures

Winster was a cooperative social gaming web site that enabled players to win real-world prizes. It predated Zynga and Facebook, but both of those companies ‘rose’ during the time I worked for Winster.

Major System Aspects

Winster had a fairly standard Java backend that dealt with managing player information, talking with PayPal, and keeping track of prizes, advertisements, and promotions. This is pretty independent of what Winster was: almost any commercial consumer site might have these capabilities.

What made Winster interesting is that the client was in Adobe Flash/Flex, it was realtime multi-player, and the rules of the games were all stored on the server. This created a pretty compelling environment for players to interact: players could swap pieces and both be in better shape vs. “the house”. And this interaction supported a real-time chat system. So very much like a card-game table without any competitiveness between players.

The client-server interaction was a combination of HTTP calls and socket-based bidirectional updates.

AA-1 : No important logic on the client

At Winster players could ‘win prizes’ based on playing the games. A lot of basic games out there put the actual functionality into the game client (Flash, JavaScript, and even compiled desktop clients). This is fine if there is nothing at stake. Someone hacks the client and they get to play a ‘different’ game. Many games even have available ‘cheat modes’ that make a different game easy to enable.

But if the client can actually impact the business, it has to implement business rules securely and correctly. To enable this, you can try to make sure the client is an unhacked/unaltered version of the correct client. Or, more simply, you can treat the client as untrusted: it makes requests, and the server decides whether they are reasonable.

For Winster, we chose not to trust the client, so every client action that affected the state of the game went through a ‘game server’ that knew the rules of the game. There are a lot of wins to this:

  • Servers tend to be easier to verify
    • You control the hardware completely, and at least at that time, there were significantly more testing frameworks
  • Server failure is ‘unlikely’ and should be totally visible if you have a problem
  • You are already writing the server in a particular language, so may more easily be able to augment its capabilities (although some clients have very nice game/event-oriented languages)

There are some losses:

  • Latency is guaranteed to be higher, and potentially has to be masked
    • For games like first-person shooters, you need to see the bullet fly even though the server determines the hit
    • For things like field-validation, you commonly have to repeat yourself on both the client and the server
  • Clients sometimes have really nice game language
  • If there are delays in answers, you somehow have to get them to the client asynchronously
  • As clients scale, ‘game servers’ scale too

The last loss can badly affect your operational profits, especially ‘pre-cloud’, which was Winster’s timeframe. We had to have servers big enough to deal with our peak loads and smart enough not to overload themselves.

The server-side game can be much simpler than the visual appearance on the client side (e.g. think the rules of chess vs. a pretty chess board), but the server-side game has to protect the business rules of the game so people can’t game the game.
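The untrusted-client rule can be sketched as a server-side check: the client only *requests* a move, and the game server applies the rules (the piece-swap game here is a made-up simplification of any real rule set):

```python
def apply_swap(state, player_a, player_b, piece):
    """Server-side rule check for a piece swap: the client requests
    the swap; the server decides whether it is legal and applies it."""
    if piece not in state.get(player_a, []):
        return state, "rejected: player does not hold that piece"
    # Copy before mutating, so a rejected request never corrupts state.
    new_state = {p: list(pieces) for p, pieces in state.items()}
    new_state[player_a].remove(piece)
    new_state[player_b].append(piece)
    return new_state, "ok"

state = {"alice": ["cherry"], "bob": []}
state, result = apply_swap(state, "alice", "bob", "cherry")
print(result)          # ok
print(state["bob"])    # ['cherry']
```

A hacked client can send any request it likes; the worst it can achieve is a stream of rejections, because the state only changes through the server's rules.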

AA-2 : Socket based client-server connection

Winster existed before Websocket, Comet, and other specifications and approaches. To communicate what other players did within your room / table, the server sent updates through a direct socket. Making sure customers could connect with a straight socket was painful for customer support (if a customer was behind a very restrictive firewall) and required augmentations to deal with ‘Flash Policies’ and other aspects. The advantage of the Game Server approach was that the socket notifications were just that: notifications that the world was in a new state. If clients missed them, they could get updated on a subsequent notification. Or catch up if initially stalled for some reason.

AA-3 : Protocol versioning

On top of the socket communication was a ‘V1’ and ‘V2’ version of a custom communication protocol. A great rule to any protocol:

  • Version it!

You may not think it will change, but by simply versioning the protocol with a ‘V1’ or ‘{ "version": "v1", … }’ or similar, you have enabled easy forward migration with backward compatibility. In many cases you can never be sure when or if a client will be updated, so you need to keep supporting old clients until they are commercially non-viable.
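A sketch of version-tagged dispatch, assuming JSON messages with a top-level version field (the ‘v1’/‘v2’ message shapes are invented):

```python
import json

HANDLERS = {
    "v1": lambda msg: ("v1", msg["data"]),
    "v2": lambda msg: ("v2", msg["payload"]["data"]),  # v2 nests its body
}

def dispatch(raw):
    """Route a message to the right handler based on its version tag;
    unknown versions are rejected rather than misread."""
    msg = json.loads(raw)
    handler = HANDLERS.get(msg.get("version"))
    if handler is None:
        return ("unsupported", None)
    return handler(msg)

print(dispatch('{"version": "v1", "data": "hello"}'))
print(dispatch('{"version": "v9", "data": "hello"}'))
```

Because the version is the first thing inspected, old clients keep working against their old handler while new message shapes roll out beside them.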

AA-4 : AJAX or Send-Data vs. rendering

Because Flash/Flex is a very high-level UI language, the Java server had absolutely no ability to ‘render’ for the client, so there was a very strong client/UI vs. Server/Data & Rules separation. You make a request of the server and you get data back via HTTP / XML or via the socket connection. This enables the client to swap out and enables the server to have easier automated testing.

AA-5 : Logging and Telemetry

Logging has a number of different purposes. Three very different ones include:

  • To see if the software has issues / defects
  • To have a record if a customer asks for ‘proof’
  • To see what a customer and the systems are doing compared to the business benefit

Winster had a lot of logging and telemetry because it (a) needed to work, (b) needed to deal with grouchy customers, and (c) needed to be very optimized to be profitable.

Logging frameworks and infrastructure improve every year, and it is important to put in the best structure you can for the purposes you have.

Velidom: Two Decades of Systems and Architectures

Velidom came out of the technologies that helped increase the velocity, agility, and scale of the Evant development team. This was in the mid-2000s timeframe, and Evant had created technology that did mass regression testing on every checkin, enabled continuous inter-team communication (including to India), and various other major features. Velidom was an attempt to productize this whole concept: the Velocity to Dominate with an Advanced Software Development Factory.

Major System Aspects

The Velidom Factory was built primarily out of Java-based technologies and VMWare ESX capabilities. The goal was to integrate into common tools at the time (Eclipse / Subversion / etc.), automatically launch testing and deployment servers on any given checkin, verify whether a commit was clean, and either push it through to the main development line or roll it out based on that verification.

Another side of the factory was an agile development tool that tracked features, their values, the tasks needed to complete them, and the status of everything. This was for planning, agility, and measuring. The automation was to increase velocity and ‘reality’ (if it didn’t successfully go in, it wasn’t in).

A final side of the factory was a set of communication tools for both real-time communication and knowledge accumulation, both hooked into the other sides of the factory so that everything was visible and memorable.

If you look at the Advanced Development and Delivery Environment, you will see pretty much all of this vision manifested through other companies’ solutions. Ultimately Velidom’s vision was too big to succeed with the runway the startup had and the events that occurred during its lifetime.

AA-1 : Virtualization and Virtualized Desktops

Of all the architectural aspects that Velidom got right, virtualizing the infrastructure was almost certainly the biggest ‘Yes!’. VMWare ESX was expensive, but having that infrastructure in place made it possible to think about computation in a way very different from the raw hardware. Ultimately from Amazon EC2 through to Vagrant, this separation has come to pass as a higher-level-language of computational hardware. Virtualized computers can be built in minutes and discarded after being used.

Velidom provided virtualized desktops and servers as a service, and we spent the money and time building out the hardware and the virtualized server side, and creating a custom desktop client. The results were impressive, leading edge, unreliable, and expensive. Things were unreliable due to the client-server desktop communication paradigm, which needed good networks and a good protocol for the remote desktop. And given these were development machines, they needed to be secure, so each had its own private network. Again, this was too big a vision to succeed at that time. Focusing on just servers may have been viable, but it wasn’t clear what the value would have been without a large base of customers committed to good automated test suites (which is certainly plausible).

Expensive is relative, and the virtualization Velidom provided may have been viable except for an event that occurred in 2006. Amazon announced EC2, and suddenly the price point of virtualized computers dove to a number no one else could compete with. Even 10 years later, there are very few viable cloud providers, and no small ones.

AA-2 : Tool Integration and Improvement

One of the core Velidom concepts was integration between tools (e.g. Eclipse) and the functionality we provided. We had Eclipse plugins to do things within the factory, including chat, logging, automation, and other activities. Integrating directly with a tool is nice, but definitely expensive in development time. And the tools you are connecting with (Eclipse and Subversion) may ‘go away’. In 2005 Eclipse was a good choice, but if we had had customers on a Microsoft development stack, they might have been unsatisfiable.

The core problem wasn’t picking the right tool, it was picking any tool before having a Minimal Viable Product. Tool integration is not part of Minimal Viable: if your product is good at group chat, people will start using it. A full suite of products (the Factory) is also not part of Minimal Viable: whether chat or automated regression testing, just get a product done and in the hands of customers to get feedback. If they like your product, they will drive integration with their favorite tools or your other products as an important improvement.

AA-3 : Reactor

The last architecture from Velidom I will mention is the ‘reactor’ or ‘queue’ pattern. Doing a mass regression test with servers created on the fly takes time. Separating the ‘request’ from the ‘task’ that completes the request is very effective both for scaling and for avoiding needless scaling. If the automation is fast enough, you don’t need to scale. If you don’t have the extra money to buy the bonus performance, you also don’t need to scale. You can choose whether or not to pay for time.
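A minimal sketch of the reactor/queue pattern using Python’s standard library (the ‘tasks’ here are just callables standing in for regression runs):

```python
import queue
import threading

def reactor(task_queue, results):
    """Pull requests off a queue and run them; separating the request
    from the task lets you add workers only when you need the speed."""
    while True:
        task = task_queue.get()
        if task is None:          # sentinel: shut the worker down
            break
        results.append(task())

q, results = queue.Queue(), []
worker = threading.Thread(target=reactor, args=(q, results))
worker.start()
for n in (1, 2, 3):
    q.put(lambda n=n: n * n)      # each 'request' is just a callable here
q.put(None)
worker.join()
print(results)  # [1, 4, 9]
```

Scaling is then a choice: start more worker threads (or machines) when you want results sooner, or let one worker drain the queue when you don't.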

One remaining aspect, for a codebase or similar ‘team-progressive’ activity, is whether people wait for things to finish. Ideally, you are ‘unblocked’ while you wait: you can go on to something else. But in a team environment, many people are trying to get their work into the main codebase. With Subversion, we had to do some annoyingly fancy tricks to extract a bad build. With Git and faster testing tools (in-memory databases, better functional test declaration languages), this is far less of a problem.

Evant: Two Decades of Systems and Architectures

Evant was originally named Retail Aspect and provided a Retail-as-a-service suite to companies that were joining the Web retail boom (e.g. Disney Online).

Major System Aspects

The technical foundations of the company were from a Java / Smalltalk background, so the server technologies were pretty mainstream Java enterprise technologies. The client was the ‘leading edge’ piece in the implementation technology, using a lot of JavaScript back in a very early time for the language (early 2000s). The whole system was notable in the number of automated regression tests it contained (see below). The database was initially Oracle but later moved to DB2.

Evant had a suite of products that did not succeed as a suite, potentially due to 9-11 causing a shutdown of online retail activity. But the Advanced Planning system was of interest to several retailers, including Staples.

AA-1 (Architectural Aspect): Strong Client, Server-UI, and Server-Domain separation

In terms of making the Evant Advanced Planning product capable, performant, and testable, there was a very strong separation between “Interface” (UI or Test) and “Domain”. The Domain included all the business functionality within the planning engine, exposed by a Java-based API. It could be driven by either tests or the User Interface. The API was identical, so if the tests were successful, the engine was doing the ‘right thing’. And the UI just needed to:

  • Interact with the interface similarly to the tests
    • Or expand interface and tests for new needs
  • Present the information pleasantly and effectively

The UI could do all kinds of amazing things to transform the results or make actions easier for a user. Since this was a JavaScript application, lots of things could happen on the client without server interaction or asynchronously with the server. The important part was having a single contract that the two clients (one verifying, one using) could run against.

AA-2 : Mass Automated Testing

The original Evant team was very committed to a full XP (Extreme Programming) approach and used TDD, Paired Programming, and other aspects of XP as part of their development process. I arrived after this development period, but there was a fairly extensive collection of automated tests among the development artifacts. However they were created, they were incredibly useful for regression testing as we transformed the Domain to be far faster, more scalable, and flexible.

Initially the tests were in XML, intended as a very flexible system of automated testing that (in theory) could have tests written by subject-matter experts or general end users. In practice that flexibility made it a poor Domain-Specific Language, and users could not write tests themselves. The tests were also very repetitive (‘wet’, the opposite of DRY) because they had to describe many states, inputs, and outputs within a matrix-like space. Ultimately the solution was to move to a matrix-oriented tool, a spreadsheet, and simply organize states, inputs, and outputs within it. Automation turned the spreadsheets into executable test specifications, and the integration server ran this vast collection of tests pretty much all the time to make sure nothing regressed (or at least that any regression was identified).
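The matrix-to-test idea can be sketched as follows: each spreadsheet row carries a state, an input, and an expected output, and a harness turns rows into checks. The column layout and the `suggestedOrder` rule are invented for illustration; the real Evant matrices were far richer.

```java
import java.util.List;

public class MatrixTests {
    // Each row (as exported from a spreadsheet): sku, startingStock, demand, expectedOrder
    static final List<String[]> ROWS = List.of(
        new String[]{"SKU-1", "10", "25", "15"},
        new String[]{"SKU-2", "40", "25", "0"}
    );

    /** The domain rule under test: order just enough to cover demand. */
    static int suggestedOrder(int stock, int demand) {
        return Math.max(0, demand - stock);
    }

    public static void main(String[] args) {
        for (String[] row : ROWS) {
            int stock = Integer.parseInt(row[1]);
            int demand = Integer.parseInt(row[2]);
            int expected = Integer.parseInt(row[3]);
            int actual = suggestedOrder(stock, demand);
            System.out.printf("%s: expected %d, got %d -> %s%n",
                row[0], expected, actual, expected == actual ? "PASS" : "FAIL");
        }
    }
}
```

Because the rows are pure data, subject-matter experts can add cases in the spreadsheet without touching the harness.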

The automated testing was a continuous benefit as long as we could keep the performance of the testing servers in line with developer demand.

AA-3 : Hidden Storage Model

An important part of the Domain’s API was its separation of ‘transactions’ from ‘storage’. The system had transactional statements (‘update’ and ‘save’), but how those operations were accomplished was not visible at the interface. This separation kept callers from depending on, or fiddling with, how things were communicated to the persistent storage.

Not all systems need this kind of separation: What is the chance you will swap out your database? With a very different database? But the Evant storage model was a hybrid-relational system with the bulk of the data stored in a semi-opaque compressed format. So the Domain acted transactionally, but under the covers it did a lot of data transformations to organize and compress facts. These transformations evolved over time (different versions had better formats) and evolved with the size of the data space and the performance tuning around it.
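A sketch of hiding the storage model behind a transactional surface: callers see only `update`/`save`/`load`, while the representation is opaque. The compression scheme here (GZIP of newline-delimited facts) is purely illustrative; Evant’s format was a proprietary hybrid-relational one.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

class FactStore {
    private byte[] stored;                          // opaque to callers
    private final StringBuilder pending = new StringBuilder();

    /** Transactional verbs are the whole public surface. */
    public void update(String fact) { pending.append(fact).append('\n'); }

    public void save() throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(out)) {
            gz.write(pending.toString().getBytes(StandardCharsets.UTF_8));
        }
        stored = out.toByteArray();                 // callers never see this form
    }

    public String load() throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(new ByteArrayInputStream(stored))) {
            return new String(gz.readAllBytes(), StandardCharsets.UTF_8);
        }
    }
}

public class StorageDemo {
    public static void main(String[] args) throws IOException {
        FactStore store = new FactStore();
        store.update("SKU-1,P1,100");
        store.update("SKU-1,P2,150");
        store.save();
        // The internal format can change between versions without touching
        // any caller, since only the transactional verbs are public.
        System.out.print(store.load());
    }
}
```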

AA-4 : Canned to Generic

Another common and useful architectural progression is going from ‘canned’ (fully specified) to ‘generic’ (very flexible) capabilities. You should generally start at ‘canned’ so you have super-control over what you are doing and what you expect its results to be. This is great for both modeling and testing the system. As the canned capabilities grow, they can become unwieldy and need to be more parameterized or even genericized (e.g. an Excel formula built out of operations).

As you go from canned to generic, you will likely encounter both behavioral anomalies and performance anomalies. But if you start with generics that do the same as canned, you can focus on performance. And then switch to generics that are more broadly capable and focus on whether they behave correctly. And then return to performance of these more broadly capable generics.
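The progression above can be sketched as a fixed (“canned”) calculation next to a generic operation pipeline that reproduces it exactly. The names and the markup rule are illustrative, not from the Evant codebase; the point is that equivalence lets you isolate performance work from behavioral work.

```java
import java.util.List;
import java.util.function.DoubleUnaryOperator;

public class CannedToGeneric {
    /** Canned: one fully specified calculation, easy to test and tune. */
    static double cannedRetailPrice(double cost) {
        return (cost * 1.40) + 0.99;    // 40% markup plus a fixed fee
    }

    /** Generic: the same result built from composable operations. */
    static double applyAll(double value, List<DoubleUnaryOperator> ops) {
        for (DoubleUnaryOperator op : ops) value = op.applyAsDouble(value);
        return value;
    }

    public static void main(String[] args) {
        List<DoubleUnaryOperator> markupThenFee =
            List.of(v -> v * 1.40, v -> v + 0.99);
        // First verify the generic pipeline matches the canned version;
        // only then extend it with new operations and re-verify behavior.
        System.out.println(cannedRetailPrice(10.0));
        System.out.println(applyAll(10.0, markupThenFee));
    }
}
```

Once the generic form is trusted, new operations (an Excel-like formula language, say) can be added without a canned counterpart, returning attention to performance last.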


Velidom Factory

Two Decades of Systems and Architectures

It is now the end of 2015 and for decades I have been reading and writing software in both small and large companies, in startups and established enterprises, and in multiple industries. My background includes several early languages (C, Basic, Pascal, Fortran, and specialty database systems), but I truly became a serious engineer in Smalltalk. After doing several systems including IRIS-2 / CargoSmart, BidLink, and others, I switched to Java as my primary language. Since then, I have also worked in Objective-C, Ruby/Rails, JavaScript (client and server), Python, and various other languages.

By building so many systems over the years, I have seen many choices and their impacts. Sometimes the choice was made before me, commonly it was mine and my team’s, and sometimes people made choices after I ‘passed the system on’. This series is meant to document as many of these systems as possible. Previously I spoke at conferences and disseminated some of our insights there. I may return to that venue, but I wanted to make more than a decade of work visible.

The systems, applications, and architectures documented here will eventually include:

  • Evant Advanced Planning: A multidimensional planning system
    • Java, JavaScript
  • Velidom Factory: A highly virtualized and automated software development environment / factory
  • Java, VMware ESX, Subversion (as part of the infrastructure), Flash/Flex, and Eclipse plugins
  • Winster: A cooperative online gaming and social site
    • Java, Flash/Flex, MySQL
  • FooPets: A virtual pet entertainment site
    • Main Site:
      • Ruby, Rails, Maya, Flash, iOS, etc.
    • HeartPark: A 3-D world / game
      • Unity 3D, C#
  • Vive: A mobile health and wellness application
    • Grails, Java, YUI
  • Epocrates EMR: An electronic medical records application
    • Ruby, Rails, iOS
  • PeerCase: A mobile-first medical communication application
    • Grails, Sencha
  • Rumble: A platform to support free-to-play and high-quality games
    • Grails, Kafka, Redis, eJabberd, and a host of other technologies
  • SnapArch: An architecture to build out a collection of services and applications on top of them
    • Spring, Angular, etc.
  • ABC: An analytics platform for massive scale machine learning
    • Weka, Python, Grails, Amazon AWS services
  • ADD: A recommended development & delivery environment and platform for The Gap, Shaklee, and others
  • IRIS-2 / CargoSmart: An enterprise container-shipping logistics system
    • Smalltalk, GemStone, C++, Java, etc.