This is the fourth series describing the ADD: a radically more productive software development and delivery environment. This series is on estimation, and the first part is here: ADD: Estimation.
As mentioned at the end of the first part, the extreme reaction to waterfall was to toss Analysis and Design away, and go from very loose requirements straight into implementation. I believe this was a complete mistake, and it has caused these “extreme” agile projects to be much slower and less stable than the alternative.
The alternative is “Scaled Agile”, and by this I mean both:
- It is better designed for larger projects
- It is achieved by scaling the activities of software development projects to be appropriate for the scale and needs of the project:
  - Analysis & Design efforts, deliverables, and precision can be smaller or larger
  - Increments of delivery can be smaller or larger
  - Allowance for iteration (repeating something to refine it) can be smaller or larger
  - Overlap of activities can be smaller or larger
The second item was tossed out with the “Waterfall”, at least by many in the industry. A Waterfall is one extreme version of Agile (basically no during-build agility), but going directly from verbal or spreadsheet-based requirements into implementation is the opposite extreme, and is unlikely to be as effective as properly scaled versions of Agile.
Retail Aspect / Evant / XP / Scaled Agile
In 2003, while at Evant, we took an XP-based product (an advanced retail planning system) and worked on scaling the velocity and agility of that project:
- To a bigger and non-XP team
- Through multiple time-zones, including to India
- Outside the development team into Product Management, Technical Sales, Marketing, and Support
- Into a bigger portfolio of products that had inter-dependent deliverables
- Across the portfolio to other product lines and teams
Evant was eventually acquired and folded into a larger company, so much of the memory of what worked well and what didn’t was lost. Except that the person who held the ‘APT’ charter for Evant is still alive and cognizant. And maybe a bit opinionated, but will try to be objective.
Ultimately, I think the scaling “worked in principle”, but asking a pure XP team to shift to Scaled Agile is asking them to give up an almost religious desire to “Not Analyze”, “Not Design”, and “Not be Objectively Measured!”. If the software developers really think they can program without being measured on their productivity in delivering value to the customer, you really have to let them go. Maybe you have a wizard or two who make crazy amazing things happen and you don’t bother putting them into the normal project teams, but beyond that, you need to know if your products, projects, and programs are delivering a good ROI and how you could make them deliver a better ROI.
To do that, you need to measure. And returning to COSMIC, it is a way to measure the scale of software functionality. That isn’t the value of the system to the customer, but it does reasonably express the complexity of providing that value.
COSMIC Measurement: Analysis and Design
The COSMIC process is clearly doing Analysis and Design, but in a relatively light and tunable way. You can control:
- How much you decompose a system
- The detail level and decomposition of the functional users of the system
- How detailed you get with “Objects of Interest” and their corresponding “Data Groups”
- What scope of the system you are going to measure
- The granularity and formality of the functional user requirements (FURs)
- Details around the non-FURs (indirect FURs, non-functional requirements, and other constraints)
Doing the COSMIC measurement process is basically doing a minimal Analysis and Design on a system, so you have a good basis for measuring its size and correlating that to development costs. But it doesn’t just support the measurement: it starts building a model of the user concepts (Objects of Interest), their events, the system actions, and the system components (based on the decomposition level).
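To sketch how light this “analysis” can be: a COSMIC measurement is essentially a tally of data movements (Entry, eXit, Read, Write) across the functional processes in scope, where each movement counts as 1 CFP (COSMIC Function Point). Here is a minimal, hypothetical model in Python; the class names and the retail-flavored example scope are illustrative, not part of the COSMIC standard:

```python
from dataclasses import dataclass
from enum import Enum

class Movement(Enum):
    ENTRY = "E"   # data crosses the boundary into the functional process
    EXIT = "X"    # data crosses the boundary out to a functional user
    READ = "R"    # data is read from persistent storage
    WRITE = "W"   # data is written to persistent storage

@dataclass
class FunctionalProcess:
    name: str
    movements: list[Movement]

    def cfp(self) -> int:
        # COSMIC rule: each identified data movement contributes 1 CFP
        return len(self.movements)

def total_cfp(processes: list[FunctionalProcess]) -> int:
    return sum(p.cfp() for p in processes)

# A tiny, hypothetical retail-planning slice at one chosen granularity
processes = [
    FunctionalProcess("Create Order",
                      [Movement.ENTRY, Movement.READ, Movement.WRITE, Movement.EXIT]),
    FunctionalProcess("List Orders",
                      [Movement.ENTRY, Movement.READ, Movement.EXIT]),
]
print(total_cfp(processes))  # 7 CFP at this granularity
```

Note that the decomposition level and the chosen Objects of Interest drive which processes and movements appear in the list, which is exactly the “tunable” part described above.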
The Hub: Additional (or Not) Analysis and Design
The COSMIC analysis can be a basis for additional investment in Analysis and Design, but that is optional. That is a choice of delivery. You could have extreme delivery that ignored the COSMIC scope model completely. Or you could have extreme delivery that went into Waterfall after the COSMIC model was produced. Or, more sensibly, you could start working outward from that hub:
- Do the minimal user concepts start describing a good logical data model?
- Does the decomposition work with the implementation frameworks? Should these be more fully designed, or just implemented with notes on how they differ?
- If there is a lot of data-reading by different functional processes, do these reads come from a single database, multiple databases, or a cache?
Note that these ‘expansions’ are not meant to alter the Hub: the measurement was done, and you can leave it alone while leveraging it for subsequent activities.
The Hub: Drilling down with COSMIC
Alternatively, as you do analysis and design, you can produce a new granularity and scope that you want to measure again. This could be done only when things seem to have become very different from original expectations, or done consistently to understand the growth of ‘hidden requirements’ (really ‘derived requirements’) during development. The measurement should still ignore NFRs, but the deeper granularity, and the conversion of some NFRs into FURs, would cause a different estimate of ‘Joints’ because we have a different (more detailed) view of the system.
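The drill-down comparison can be reduced to a simple calculation: re-measure at the deeper granularity, then express the difference against the baseline as derived-requirements growth. A sketch, using CFP as the unit (the article’s ‘Joints’ would behave the same way; the numbers below are hypothetical):

```python
def requirements_growth(baseline_cfp: int, refined_cfp: int) -> float:
    """Fractional growth in measured size after drilling down:
    how much 'hidden'/'derived' requirement scope surfaced when the
    system was decomposed further and some NFRs became explicit FURs."""
    return (refined_cfp - baseline_cfp) / baseline_cfp

# e.g. a baseline measurement of 120 CFP re-measures at 150 CFP
# after deeper decomposition and NFR-to-FUR conversion
print(f"{requirements_growth(120, 150):.0%}")  # 25%
```

Tracking this ratio consistently across projects gives an empirical correction factor for early, coarse-grained estimates.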
Scaled Agile Manifesto
I believe the Agile Manifesto was reactionary and extreme. It had good ideas in it, but was missing core premises that customers and development teams should expect from a good development project:
- The Customer is always right about what they want. The delivery team should always be honest and informative about what they can deliver and how they are doing it.
- The Customer can change what they want and change which delivery team they use, but they can’t interfere within the delivery team
- The delivery team should always deliver a functional-requirements-based estimate of a project independent of the technology and other delivery choices, so the customer knows what they are asking for (compared to other projects) and getting (given the delivery approach)
  - Unless the customer clearly says “I don’t care”, in which case the delivery team can choose whether to do the work for their own internal benefit
- Except where it helps a customer figure out what they want, requirement-fulfilling working systems are the only true measure of progress on a project
- A customer can desire to go forward to later stages (analysis, design, and implementation) before they are sure what they want
- The cost of responding to change should be minimized while still supporting maximal ROI and velocity if there are no changes
- Processes and tools are there to support the Customer’s and delivery team’s needs, and are not ends in themselves
  - A default or customary usage of processes and tools is appropriate, but should be re-evaluated if there appears to be no need for it, or even a clear counter-benefit
- Our highest priority is to satisfy our current customers
  - Through techniques that enable the customer to make good decisions, see verifiable progress toward their goals, and determine if they are getting a good ROI
- Our second highest priority is to satisfy future customers even better
  - Through evaluation of what went well and poorly, and looking for ways to give even higher satisfaction (whether in ROI, predictability, or other ways) to customers
- Both sides should care about each other:
  - A happy development team that cares about the customer is usually better at satisfying the customer
  - A customer that truly cares about the development team is usually better at inspiring the team to deliver its best