Polyglot

Build Valuable Systems, Better and Faster

ADD: Estimation

This is the fourth series describing the ADD: a radically more productive software development and delivery environment. The first series starts here: Advanced Development and Delivery and describes the four core ingredients of the ADD. The second series starts here: ADD Stack and describes the application stack and system components associated with the ADD. Some of these were Java / Grails specific, and some were more general components and capabilities. Together these covered a huge amount of functionality, flexibility, reliability, and scalability, both for the IT infrastructure (whether on EC2 or on bare hardware) and for the applications on top of that infrastructure. The third series starts here: Advanced Development and Delivery Summary and gives an executive summary of the ADD.

This series is about project estimation, and it is basically orthogonal to the ADD itself. You can have good estimates or bad estimates; the ADD delivers exceptional productivity, and how much you need to estimate is quite independent of productivity. But by using estimation and feedback, you can figure out (a) whether to do a project and (b) whether the people you have are up to snuff. If you know you have to do a project and you have certain people: don’t worry about the estimate. It is irrelevant. Just “wing it”. But if you have choices about (a) whether to do a project, (b) what people to use, or even (c) figuring out how good your team is after you do a project, then estimates are the core benchmark that gives you feedback.

Estimation is crazy-hard

The problem with estimating is that you can’t be good at it unless you have done something very similar before. But most workers in an industry are relatively new to it: less than ten years of experience. And if each project takes a couple of years to complete, you may have only a couple of experiences in your first five years of work. Change technologies, change roles, or change organizations, and your estimates lose even that limited basis.

When something is too hard for someone to do easily, you need to decompose it to make it (a) easier and (b) separable.

Doing two or more pieces sequentially is commonly easier than doing the whole task at one time. By easier, I don’t mean faster. It may take more time, especially if there is some interrelationship between the pieces. But there is a limit to how much a human brain can do at one time without training. Over time, training in the pieces enables them to be done simultaneously. Master chefs and similar professionals tend to be trained in all pieces of their craft until they can do it all so easily that they can do it simultaneously.

By making the pieces separable, you can have different people work on each of the pieces, and each can develop the skills needed for that piece alone. Note that this isn’t “slicing” a large piece of work into smaller pieces that require all the same skills. That would be like a chef doing part of the dinner service. Instead it is separating the skills needed for the task: one chef plates and just plates, another cooks meats, another cooks vegetables, another makes sauces, and so on.

The critical question, then, is how to decompose. First we need to define and decompose what we mean by estimation, and to do that, we can place estimation within the macro solution-delivery process of software.

A Framework: READ-OR

I have been reading and writing software for about 35,000 hours. Mostly reading, which I believe is core to being a great programmer: read great programmers’ code and you will learn from it. It may be hard to find great code to read, but you can always start with Smalltalk or find some epic system close to the kinds of systems you build.

Working through possible acronyms for the bigger software development process, I stumbled upon my favorite word: “READ”. All software is developed through this simple approach and acronym:

  • Requirements of Solution – Given the problem, what is your solution to that problem? What does that solution require to solve the problem?
  • Estimation of Size – What is the estimate of the size of the solution? Note this is independent of the architecture and technology. It is the functional size estimate, not the delivery estimate.
  • Architecture – What is the architecture and technology you want to use to deliver this solution?
  • Delivery – Now deliver or consider delivering.
    • Estimate the delivery timeline and costs, and decide if the solution is worth doing. If not, change architecture or change solution (see the sketch below).
    • If this is a waterfall, deliver once. If incremental, deliver in increments.

And the bonus pieces that make it READ-OR:

  • Over – Now the project is either done, or the process starts over until the solution solves the real problem.
    • This requires “delta” size estimation: how much is being added to or modified in the system?
  • Review – Review the size of what was built (not time, but solution functional size).
    • Determine what you would likely want to improve in the solution, architecture, or delivery.
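To make the split concrete, here is a minimal sketch in Python of the READ-OR separation between size and delivery. All names and numbers (the joint count, velocity, and costs) are invented for illustration; the point is only that the size estimate is a property of the solution alone, while the timeline and cost fall out of the architecture and team you apply to it.

```python
from dataclasses import dataclass

@dataclass
class SizeEstimate:
    joints: int  # functional size of the solution, independent of architecture and team

@dataclass
class DeliveryPlan:
    architecture: str
    velocity_joints_per_week: float  # measured for this team on this architecture
    cost_per_week: float

    def weeks(self, size: SizeEstimate) -> float:
        return size.joints / self.velocity_joints_per_week

    def cost(self, size: SizeEstimate) -> float:
        return self.weeks(size) * self.cost_per_week

# Estimation of Size: depends only on the solution
solution = SizeEstimate(joints=1000)

# Architecture + Delivery: the estimate that depends on technology and team
plan = DeliveryPlan("grails-monolith", velocity_joints_per_week=25.0, cost_per_week=20_000.0)

print(f"{plan.weeks(solution):.0f} weeks, ${plan.cost(solution):,.0f}")
# -> 40 weeks, $800,000. If that exceeds the solution's value,
#    change architecture or change solution.
```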

So now that we have placed Estimation within the overall process, let us look at it more closely.

Estimation of Solution Size

The hard part is doing Estimation of Solution Size without getting pulled into the details of architecture and delivery. We are not yet good at this as an industry. We need a body of knowledge and a system for estimating the size of a solution independent of the architecture chosen and the team that delivers it. If we have this, we can finally measure something useful. Actually, we can measure a lot of useful things (a sketch of these measurements follows the list below).

  • What is the ROI of the solution? If the solution is really big, there are very few architectures and delivery teams that are going to dramatically reduce the cost.
  • How well do certain architectures do against a particular solution? For a given delivery team, you can see what they are good at or not. Going across delivery teams, you may be able to distinguish genuinely better architectures from ones that are merely more familiar to a team.
    • What are the differences between the architectures that cause these effects?
    • Should we change architectures or train more in an architecture?
  • How well do the delivery teams deliver? Especially if the solutions are about the same size and the architecture is the same, you should start seeing better and worse results.
    • What are the differences between the teams that cause this effect?
    • Can we improve the numbers?
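As a hedged sketch of what these measurements could look like, assuming we already have completed projects sized in joints: compare teams and architectures by joints delivered per person-week. The team names, architectures, and numbers below are all made up for illustration.

```python
from collections import defaultdict

# (team, architecture, joints delivered, person-weeks spent) -- all invented
projects = [
    ("team-a", "grails",  800, 40),
    ("team-a", "node",    600, 40),
    ("team-b", "grails", 1000, 80),
    ("team-b", "node",    500, 25),
]

totals = defaultdict(lambda: [0, 0])  # (team, arch) -> [joints, person-weeks]
for team, arch, joints, weeks in projects:
    totals[(team, arch)][0] += joints
    totals[(team, arch)][1] += weeks

# With size held constant-ish, differences down a column suggest team
# effects; differences across a row suggest architecture (or familiarity).
for (team, arch), (joints, weeks) in sorted(totals.items()):
    print(f"{team}  {arch:7s} {joints / weeks:5.1f} joints per person-week")
```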

Estimation of solution size has been attempted with Function Points and more recently with COSMIC Function Points. I think using the term “point” is not quite right, so I am going to recommend something more creative, although it is just a simple twist on the word: I propose ‘joint’ to capture complexity. Things with more joints are usually more complex, and complexity grows at least linearly, and sometimes much faster, as you increase the joint count.

Joints / Moves / COSMIC

So what should constitute a ‘joint’? Although I have put very little effort into evaluating it so far (that will be starting soon), I believe COSMIC is a valid starting methodology. So a COSMIC Point is a Joint. Not much difference, but at least it splits the two issues: How big is it (1,000 joints)? How long is it expected to take to build (10,000 points)? Given that COSMIC measures data movements, a joint could be a synonym for a ‘move’, but if we augment the system with sources of size beyond ‘moves’, the term would become a misnomer.

More information about COSMIC is here: http://cosmic-sizing.org/ and elsewhere online.
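For the flavor of it, here is a minimal sketch of COSMIC-style sizing in the ‘joint’ vocabulary. COSMIC counts four kinds of data movements (Entry, Exit, Read, Write), each worth one CFP; here one data movement is one joint. The “create customer” functional process below is a made-up example.

```python
from enum import Enum

class Move(Enum):
    ENTRY = "E"  # data crosses into the software from a user, device, or other system
    EXIT  = "X"  # data crosses out of the software
    READ  = "R"  # data is read from persistent storage
    WRITE = "W"  # data is written to persistent storage

# Hypothetical functional process: "create customer"
create_customer = [
    Move.ENTRY,  # customer details arrive from the UI
    Move.READ,   # check for an existing customer record
    Move.WRITE,  # persist the new customer
    Move.EXIT,   # confirmation (or error) goes back to the UI
]

print(f"create customer: {len(create_customer)} joints")  # 4 joints (4 CFP)
```

Summing every functional process of the solution this way gives the total functional size in joints, with nothing yet said about architecture or delivery.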

Analysis and Design

When working through COSMIC terminology and examples, it flashed me back a couple of decades to a time when we built software very differently:

  • Software was built from requirements through analysis through design and then into implementation

This was called a ‘Waterfall’ model because each stage tended to block doing anything for the next stage until the stage was ‘complete’. Modern Agile proponents, especially the extremely agile, have basically dropped the stages. But the stages are useful going both forward (analyse what your system is supposed to do before coding it) and backward (does the analysis seem to plausibly match the requirements?). The analysis and design work was actually not the problem; the problems were the sheer amount of work (if you are an analyst, you want to analyze) and the ‘blocking’ nature of the work.

A great book on this analysis and design work was Object-Oriented Analysis and Design (1st edition), which worked through several different programming languages and dealt with realtime, server application, and other types of software systems. The second edition can be found here: http://www.sphoorthyengg.com/MCAupload/upload/object-oriented-analysis-and-design-with-applications-2nd-edition.pdf These methods became parts of UML, which is again commonly used for heavy analysis and heavy design, preventing the customer from seeing the actual product early on.

The solution isn’t to drop the steps, but to scale and time them properly.
