Do Story Points = Value? (Pt 2)

2nd in a series of posts related to story points and value delivery

The TechFAR Handbook suggests that the Government should be intimately involved in the Agile process:

As part of its responsibility, the Government is involved, at a minimum, at critical decision points in each sprint cycle – at the requirements development phase and sprint cycle review, but it is preferable to have daily involvement from the Government Product Owner, and frequent involvement from end-user representatives.

Essentially, the Government should be involved in prioritizing the backlogs, in the release and sprint planning events, and in the sprint and release reviews.  This aligns with the Agile Manifesto, which states that we value “working software over comprehensive documentation,” and with the related principle that “working software is the primary measure of progress.”

I have yet to find a reference suggesting that value is measured in story points.  My previous post introduced the trend of story point quotas being established on several Federal projects as a measure of progress or value delivery.  Unfortunately, story points are a terrible measure of either.

Why is that?  As mentioned in the previous post, story points are a lightweight estimation technique that helps facilitate collaboration and, over time, provides a reasonable basis for forecasting.  By themselves, story points are nothing more than a way to relatively size the items in a backlog.  Velocity is simply the rolling average of the number of points a team typically completes per sprint.  Value, however, is another beast altogether.
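As a rough illustration of that mechanic (a minimal sketch with hypothetical sprint totals, not anything prescribed by the TechFAR or any Scrum guide), velocity can be computed as a rolling average of the points completed in recent sprints:

```python
# Minimal sketch: velocity as a rolling average of story points completed per sprint.
# The sprint totals below are hypothetical and purely illustrative.

def rolling_velocity(points_per_sprint, window=3):
    """Average of the points completed over the last `window` sprints."""
    recent = points_per_sprint[-window:]
    return sum(recent) / len(recent)

completed = [21, 18, 25, 23, 19]    # points completed in each past sprint (hypothetical)
print(rolling_velocity(completed))  # ~22.3 -- a forecast of next sprint's capacity, not its value
```

Note that nothing in this calculation says anything about whether the completed items were actually valuable.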

The TechFAR indicates that the way to judge ‘progress’ is:

The agency tracks progress by tracking completed work. Velocity is also useful for predicting future software deliveries.[27] With Agile software development, project status is evaluated based on software demonstrations. If new system requirements are discovered, they are queued for possible inclusion in later iterations.

Fundamentally, this sounds right, but it can be interpreted as story points completed per sprint or release.  After all, at the demonstration a certain number of stories will have been completed, and perhaps a few will not.  Those stories typically have points assigned, so for each sprint we begin to correlate progress with the number of points completed.  Thus the slippery slope is engaged.  Now we are evaluating whether the number of points completed meets our expectations, rather than whether the demonstrated software represents something valuable; the completed story points should remain a data point for the team and the system as a whole, not a question of estimate accuracy.

But aren’t they essentially the same thing: software value per sprint versus points completed per sprint?

A challenge here is that ‘value’ seems subjective while ‘points’ appear objective.  The reality is that they are both subjective.  In addition, it is very hard to interpret the significance of an increase in value or points on a two-week sprint basis, so our reading of these two subjective measures every 10 business days is hardly something on which to base progress.  Ten business days is a very small window in which to change the underlying software, and that is by design: each change should be small.  A collection of these small changes should add up to a modest change at the release level (five to six two-week sprints).

Donald Reinertsen’s book, “The Principles of Product Development Flow,” is chock-full of principles that reflect the underlying realities of our software development processes and products.  The following principle regarding variability is relevant because each sprint we conduct is essentially a small experiment.  We keep sprints small to reduce variability, which in turn helps to reduce risk and improve flow (a small simulation sketch follows the quoted principle below).

Variability principle V7: The Principle of Small Experiments: Many small experiments produce less variation than one big one.
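To make the principle concrete, here is a minimal simulation sketch under an assumed, deliberately simple model: each experiment succeeds with probability 0.5 and pays off in proportion to its size. The model and numbers are my own illustration, not Reinertsen’s.

```python
# Illustrative sketch of the small-experiments principle, using an assumed model:
# each experiment succeeds with probability p and pays off in proportion to its size.
import random
import statistics

def total_payoff(num_experiments, size_each, p=0.5):
    """Total payoff when the same amount of work is split into equal-sized experiments."""
    return sum(size_each for _ in range(num_experiments) if random.random() < p)

def payoff_spread(num_experiments, size_each, trials=10_000):
    """Standard deviation of total payoff across many simulated trials."""
    return statistics.stdev(total_payoff(num_experiments, size_each) for _ in range(trials))

print(payoff_spread(1, 16))   # one big experiment:        spread is roughly 8
print(payoff_spread(16, 1))   # sixteen small experiments: spread is roughly 2
```

The exact numbers are not the point; the point is that many small, independent increments give a far steadier signal than one large bet, which is also why a release-level view is easier to interpret than any single sprint.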

The question remains: is the sum of the parts greater than or equal to the whole?

Brian Wernham, in his book “Agile Project Management for Government,” says:

Agile project controls turns conventional planning on its head…  Thus in agile the team’s efficiency is calibrated rather than the accuracy of the estimates.

So, story points really help us understand how to calibrate and ultimately stabilize a team’s velocity within the ‘system’ in which it operates as it works to create value.  They are not value itself.  This idea is in line with optimizing the whole (the system) rather than sub-optimizing to meet a story point quota per sprint.  A larger program will be much better served by ensuring

“that the many activities in a process stay in a sustainable balance and at an even flow”.

Where does this leave us?  I think it leads us back to the foundational principles and values of Lean and Agile: story points help support an empirical model for forecasting the software-building system’s ability to deliver value over time, while value itself is evaluated by observing what is actually built.  Ken Rubin over at Innolution has a good, related post called Outputs vs Outcomes-Measuring Business Success with Agile.  When we measure story points, we are measuring outputs.  Outputs are relatively easy to measure, but focusing on them loses sight of the outcomes, the value that actually matters.

I know, but how do you measure outcomes?  The customer (e.g., the Government) wants to know whether they are receiving the ‘value’ they are paying for.  I’ll expand on this in the next post, but value and outcomes are probably best understood through a consistent survey mechanism with your stakeholders; oh, and don’t forget trust.

Question: In the meantime, do you have a similar situation on your project, where the customer is trying to judge whether they are getting value for their investment? Leave a comment below.