Polyglot

Build Valuable Systems, Better and Faster

Advanced Development and Delivery (ADD) [Part-4]

This is the fourth installment in a series describing a radically more productive development and delivery environment.

The first part is here: Intro. In the previous parts I described the big picture and the Vagrant and EC2 bootstrap.

Node initialization

The previous parts described getting Vagrant and EC2 to have an operational node. For Vagrant it leverages ‘host’ virtual disk access to configure and bootstrap itself. For EC2, it leverages CloudFormation to configure and bootstrap itself. In both cases the very last thing the node does in the bootstrap is:

# cd into the configuration repo named during the bootstrap 'shaping' step
cd /root/gitrepo/`cat /root/nodeinfo/initgitrepo.txt`
# source a file if it exists; otherwise just note that it was skipped
include () { if [[ -f "$1" ]]; then source "$1"; else echo "Skipped missing: $1"; fi }
include it/nodeinit/common/init.sh
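
For context, here is a minimal sketch of the ‘shaping’ the bootstrap does before reaching this point. Only /root/nodeinfo/initgitrepo.txt, /root/gitrepo, and the repo2 name come from this series; the remote URL is purely illustrative:

# Hypothetical 'shaping' done earlier in the bootstrap (remote URL is illustrative)
mkdir -p /root/nodeinfo /root/gitrepo
echo "repo2" > /root/nodeinfo/initgitrepo.txt
git clone git@example-githost:ourteam/repo2.git /root/gitrepo/repo2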

Advanced Development and Delivery (ADD) [Part-3]

This is the third installment in a series describing a radically more productive development and delivery environment.

The first part is here: Intro. In the previous parts I described the big picture and the first part of the Vagrant bootstrap.

EC2

The Vagrant bootstrap occurred through ‘bash’ files that did the ‘shaping’ (putting shape information into files) and then the ‘init’ itself, which gets access to the repo (repo2) that contains the true configuration. For EC2 the same thing happens within a CloudFormation template. The code of the ‘init’ is almost identical, but because it is embedded in a JSON file there is a lot of noise as the strings get concatenated together.
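
As a rough illustration (not the actual template), the same init lines end up looking something like this once they are embedded in a CloudFormation UserData Fn::Join, which is where the noise comes from:

"UserData": { "Fn::Base64": { "Fn::Join": [ "", [
  "#!/bin/bash\n",
  "cd /root/gitrepo/`cat /root/nodeinfo/initgitrepo.txt`\n",
  "include () { if [[ -f \"$1\" ]]; then source \"$1\"; else echo \"Skipped missing: $1\"; fi }\n",
  "include it/nodeinit/common/init.sh\n"
] ] } }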

Advanced Development and Delivery (ADD) [Part-2]

This is the second installment in a series describing a radically more productive development and delivery environment.

The first part is here: Intro, but to summarize: since the 1970s, the most productive way to develop and deliver software was present in Smalltalk, Lisp, and other languages (Mesa/Cedar at Xerox) by using a very simple and powerful model. You take a computer with a fully running environment, you tweak it, and then you clone that. This way you: (a) minimize what could go wrong, and (b) maximize what will continue to work. It is very tangible and very instructive (you have full source for everything that is running). You tweak other people’s masterpieces until they do what you want, and you learn from their masterpieces to create your own.

ADD: How Better?

As described before, ADD has four ingredients:

And these are hooked together to enable ‘Changers’, ‘Watchers’, and ‘Machines’ to be super-productive. How is ADD more productive than the tweak-and-clone model? It is because it solves the core problems of the clone model:

  • How do we clone to different environments? Different hardware or configuration changes?
  • How do we reduce the amount of information we have to clone?
  • How do we reduce the time it takes to transport the clone?
  • How do we know what version of the clone is on any machine?
  • How do we create thousands of clones?
  • How do we know what is different about the different clones?

Advanced Development and Delivery (ADD)

I have been paid to develop and deliver software since about 1980. That is 35 years of professional experience. When I started in 1980 there were a lot of ‘old timers’ who had been around since 1965 or so. They were 15 years ahead of me, and even after finishing college, I had less than ten years to their twenty or so. This was both intimidating and very helpful: after college my main programming language was Smalltalk (ParcPlace, Digitalk, etc.), which included full source to everything. So the ‘masters’ would write masterpieces of code, and I would read them. And then try to write my own beautiful things leveraging the masterpieces. I was late to the party, but could learn quickly.

I also have one other unusual advantage: I do startups. Lots of startups (http://SlumsOfPaloAlto.com/). A total of ten software startups over a period of a bit more than a decade. Each of these startups failed for one reason or another, but each one enormously progressed in how well my development team ran. Eight, nine, and ten were crazy productive: I would run production servers for the whole company at the same time that I built out the product. Alone. And generally way faster than the product management team could keep up. At PeerCase the product team actually asked me to slow down delivery so they could ponder what they wanted for longer. I literally went to Disney World during ASH (a medical conference) to prevent myself from releasing new features I knew they wanted. I was paid to not work (well, I was contracting at the time, so I stopped the hourly billing clock, but my project bonus was the same).

10x Productivity

Besides doing startups, I also consult for companies. I try to help them improve their development methods, usually by at least 4x if not 10x. A lot of times, people don’t believe you can reach ‘10x’ the productivity of the current team using new development and delivery techniques. At one company, the CIO and a number of other executives believed me, but I had to convince a lot more stakeholders. So two amigos and I agreed to send me into the trenches. I started taking projects estimated as two-developers, six-months, and doing them in one month. Part time. That is more than 12x productivity. Realistically it was likely about 20x the productivity because the teams tend to miss their estimates (they go over).

Being a Git About Everything (IT Automation [2])

This is the fifth in a series on using git as part of interesting solutions to problems.

The first is here: Intro, and the previous part of this topic is here: IT Automation

PushMePullYou or Leveraging git to enable mass-automated IT

The previous post dealt with the groundwork of having Git be a central part of IT automation. That showed the core idea but was a bit too simple to fully express the power of the approach. This post will be dealing with all the things that were left off, especially support for:

  • Many different types of servers with both their own and shared ‘recipes’ (a small sketch of such a layout follows this list)
  • More complicated install/upgrade actions
  • More sophisticated install behavior
  • Multiple versions of ‘recipes’ and an ability to promote whole IT from development to production
  • Getting information from other active repositories
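
To make the first bullet concrete, here is a minimal sketch of how a repo might separate shared and per-server-type recipes. The directory names and the servertype.txt file are mine, not the actual layout:

# Illustrative layout inside the config repo (names are mine):
#   it/recipes/common/*.sh       recipes every server runs
#   it/recipes/webserver/*.sh    recipes only 'webserver' nodes run
#   it/recipes/dbserver/*.sh     recipes only 'dbserver' nodes run
TYPE=`cat /root/nodeinfo/servertype.txt`    # hypothetical file written during bootstrap
for recipe in it/recipes/common/*.sh it/recipes/"$TYPE"/*.sh; do
  [ -f "$recipe" ] && source "$recipe"
done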

Being a Git About Everything (IT Automation)

This is the fourth in a series on using git as part of interesting solutions to problems.

The first is here: Intro

Leveraging git to help enable automated IT

Doing IT for computers involves installing software, configuring things, doing backups, updates, etc. The ultimate IT is one that ‘simply works’ and involves almost no human interaction even in failure situations. Ideally IT should be equivalent to a macro-level program that does everything that does not require touching a physical machine.

This IT-as-program has become easier and easier over the last many years with better and more standardized operating systems, free software that does not require annoying human interaction during installation, and virtualization on top of physical hardware that makes provisioning and reprovisioning easier. With cloud computing, IT-as-program becomes almost a necessity as hundreds of virtual computers are created, updated, failed, migrated, and decommissioned.

Git alone doesn’t enable IT-as-program but it can be a core component in many areas. Among these are:

  • Easy ‘Live IT’ servers
  • A Push-Me-Pull-You model for continual deployment
  • Server presence

Having git as a core piece of IT infrastructure enables thousands of machines to very rapidly react (within a minute or two) without needing a heavy infrastructure. You simply need one or two (for redundancy) git servers, of which one can be GitHub or a similar free or inexpensive service. Other technologies in this space have significantly more complicated servers, are more likely to be SPOFs (single points of failure) or bottlenecks, and are much more expensive as a service.
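
A minimal sketch of the pull side, assuming the configuration repo is the repo2 mentioned earlier and a script name I made up: each node polls the git server once a minute and only does work when something changed.

# Illustrative /etc/cron.d entry: every node polls once a minute
#   * * * * * root /root/gitrepo/repo2/it/pull-and-apply.sh
# and pull-and-apply.sh might look roughly like this:
cd /root/gitrepo/repo2
git fetch origin
if [ "`git rev-parse HEAD`" != "`git rev-parse origin/master`" ]; then
  git merge --ff-only origin/master
  source it/nodeinit/common/init.sh   # re-apply the recipes only when something changed
fi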

Being a Git About Everything (IT)

This is the third in a series on using git as part of interesting solutions to problems.

The first is here: Intro

Leveraging git to help with IT tasks

Doing IT for computers involves installing software, configuring things, doing backups, updates, etc. Git can help out with a lot of this, with several big benefits. Some examples:

  • IT changes should always be version controlled in case of mistakes. Git and a couple of patterns/tools make that easy (a small sketch follows below)
  • Installing software can frequently be done with ‘yum’ and similar commands, but some software needs a package, and sometimes you want more direct ‘this version’ control. An annexed git repository helps with that
  • Doing backups or other kinds of off-lining with an annexed repo is very simple and flexible
  • Git enables a PushMePullYou model that is very flexible and capable of driving hundreds of machines to do either (or both) very light and very heavy-weight operations

Some of these things can be done with other tools, but I have found an annexed git repository makes doing these things easier and better.
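
As a tiny illustration of the first bullet, putting a machine’s configuration directory under git takes only a few commands. The /etc example and the commit messages are mine, not from any particular recipe:

cd /etc
git init
git add -A
git commit -m "Baseline /etc before any changes"
# ...make an IT change: edit a config file, install a package...
git add -A
git commit -m "Describe the change that was just made"
git diff HEAD~1        # shows exactly what the change touched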

Being a Git About Everything (Annexing)

This is the second in a series on using git as part of interesting solutions to problems.

The first is here: Intro

Dealing with Binary Files

As mentioned in the first posting, git and similar DVCS have issues with binary files. Adding hundreds of 10MB files, or hundreds of versions of a 10MB file, will produce gigabytes of data that must be cloned by everyone using the repository. How do we avoid this?

There are a number of solutions in this space with differing characteristics, but the core approach is usually similar: “Don’t store it in git”. Instead, we record enough information to retrieve the binary files from somewhere else.
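
One well-known tool in this space is git-annex. A minimal sketch of the idea (the repository name, description, and file name are illustrative): the large file’s content is replaced by a small pointer, and the real bytes can be fetched later from any remote that has them.

git init bigassets && cd bigassets
git annex init "my-workstation"          # illustrative repository description
cp /path/to/build-image.iso .            # illustrative large binary
git annex add build-image.iso            # content goes under .git/annex; a pointer is committed
git commit -m "Add build image via the annex"
# In another clone only the small pointer arrives; the bytes are fetched on demand:
git annex get build-image.iso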

Being a Git About Everything (Intro)

There are times when a new technology comes along that at first appears to be pretty similar to existing technology, but whose characteristics make for radically different, or just nicely new, solutions. A recent example of this is ‘git’ and similar distributed version control systems (DVCS). They may at first appear to be an interesting variation on centralized version/content management systems, but they are really much more… a core piece of technology useful for many things.

This is a series about how to use git to solve many different problems, some obvious and some more unusual. I hope a few of them are interesting to readers.

Scratch on Flash

The original version of this posting is on the MIT Scratch Forum and the current location of the repository is https://github.com/markfussell/scratchonflash

In the spirit of release early and often…

I have made a full bottom-to-top pass at implementing Scratch in Flash. It can:

  • Read a Scratch project file
  • Create the project model (Stage, Sprites, Blocks, etc.)
  • Run that model (the core variable assignments, loops, etc.)
  • And do a couple of the other kinds of Sprite blocks in the ‘motion’ and ‘looks’ categories