16 September 2008

You've gotta be doing something.


An earlier post mentioned that my "go-in" position assumes any successful company must already be doing something in many of the process areas; otherwise they wouldn't be successful in general, let alone able to use CMMI to improve their processes.

It might be helpful to explain myself a little, in a way that could be illuminating for anyone else.

CMMI is written in the language of process-improvement-speak. What I learned early in my career is that process-improvement-speak rarely speaks to decision-makers or others in positions of authority over "real" work. It seems that this holds true for software developers as well, regardless of where they are in the corporate food chain.

What I learned that *does* speak very loudly and clearly to those who hold sway over what gets attention -- as well as those who are doing the heavy lifting -- is the discussion of risk.

When a risk becomes an issue, it usually involves loss of resources, time, and money. It's the money piece that grabs the attention of most "gold owners" (a.k.a. "decision-makers"). Well, whether it's time or resources, it usually translates into money anyway at the top. At the "bottom", time and resources are the focus. Regardless, if you really want to grab attention, talk about your project failing.

In fact, every goal and practice in the CMMI avoids some risk. Not every risk CMMI avoids will lead to imminent danger or failure, but left alone, many risks avoided by CMMI could eventually lead to failure or loss. From another perspective, many CMMI practices are "early warning signs" of risk.

This is why I believe that most organizations are likely doing many of the things needed to implement CMMI practices. If they weren't avoiding the same risks CMMI is concerned about, their projects would fail more often, and they wouldn't be in business long enough to be thinking about CMMI.

Whether an organization is using agile or traditional methods, they are likely working against risk. With CMMI working to avoid the same risks as agile and traditional developers, it stands to reason that we ought to be able to find how those risks are avoided in each organization's view of reality. We can then see how/where CMMI practices can be incorporated to improve the performance of that risk-avoidance reality.

What I normally find is that the basic efforts to avoid the risk are in place. What's often missing are some of the up-front activities that cause these risk-avoiding behaviors to be consistently applied from project to project, and some of the follow-through activities that close the loop on actually improving these risk-avoidance activities. Usually, once exposed to the few practices that can help them improve upon their already decent risk-avoidance efforts, organizations welcome the reasonable suggestions.

What's more often missing is the recognition that the previous successes experienced by the team (or organization) relied heavily on certain people and their experience and their natural or learned know-how to ensure risks are avoided. This is fine in CMMI -- if that's all that is expected. Accordingly, this is called "Capability Level 1". However, most organizations want more than just achievement of the Specific Goals through the performance of Specific Practices. They want to institutionalize that performance; meaning, they want it to be consciously managed, available to be distributed throughout their group, applied with the purpose of producing consistently positive outcomes, and improved via collective experience.

The first step in that journey is to manage the risk-avoidance-and-improvement practices. That's called "Capability Level 2". After that, we get into really taking a definitional and introspective approach towards institutionalization and improvement in "Capability Level 3".

Usually, once agile teams see that many practices are accomplished as a natural outcome of risk-avoidance, and that the practices not already inherent in what they're doing aren't unreasonable, staying agile while also improving their practices using CMMI is quite a non-event.

As projects get bigger and/or more complex, and, as teams get bigger, the mechanisms to manage risk are necessarily more comprehensive. If you're a small, agile, yet disciplined organization, most risk-avoiding activities are natural. The question is whether they are scalable. If you're such a company, you might look at the additional activities as building-out the infrastructure to be bigger. Not less agile.

Bottom line: translate CMMI practices into risk-avoidance and you will find the value in them as well as the means to accomplish them in a lean and agile way.

(Thanks to http://wilderdom.com/Risk.html for the great images!)


08 September 2008

NUTs, GUTs, Hours and Days. They're all AUTs and should be treated as such!

NUTs:: Nebulous Units of Time

GUTs:: General Units of Time

AUTs:: Arbitrary Units of Time

So much emphasis is placed on time and estimates in planning development work. In reality, even hours, days, and weeks are nothing more than arbitrary measures.

To be clear, time as we typically account for it is an important component of planning and estimating. But, like many other aspects of development (and life in general?), it is open to several interpretations and, when taken literally, can result in undesired consequences.

Let's face it, in the world of development, estimates have lost their original dictionary meaning. I, for one (and I doubt I'm alone), believe this to be unfortunate. Estimates were never meant to be locked-in forever. They were meant to be a guide to making the next set of plans, then moved forward to make the set of plans after that, and so forth. But when estimates started to be used as the basis for long-term budgets, they lost their original definition and took on the expectation of being accurate.

(You can see how this led to the idea that, to improve estimates, requirements needed more, more, more of everything about them.)

Everyone knows that estimates are frequently little more than fiction. That's true of even the most capable and mature organizations. (In fairness, that comment applies when experienced organizations are trying something completely new. In those cases, estimates for the new aspects of the effort carry low confidence, while estimates for the familiar aspects carry higher confidence.)

However, even when projects aren't attempting to estimate "the whole thing" and are only trying to estimate a single user story, organizations get very skittish about committing to estimates. This is often a symptom of placing a high premium on the perceived value of the time. In other words, a "day" = a "day" and a "week" = a "week", just as we'd count the time between now and our next vacation.

This reluctance to ascribe estimates stems partly from the automatic psychology attached to the significance and meaning of the number, and partly from the concern of being held to them. Out of it came the notion of "Story Points" (look here and here), which offer a less concrete, more relativistic, and seemingly more natural way of estimating.

Story points allow estimation based on the relative perceived effort required to complete one user story as compared to other stories. This story takes longer than that story, and so on. This eliminates the natural tendency to put undue emphasis on the number and provides a means of filling a time-box with a number that can later be compared to how much work was actually completed in that time-box.

This approach easily lends itself to tracking story points per iteration, which is one way to measure "velocity". With the velocity value, the total story points of the project, and the length of the iterations, one can project how many iterations, and thus how much time, remain before a project is likely to be completed.
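To make the arithmetic concrete, here's a minimal sketch in Python (my own illustration, not from any particular agile tool; the function name and the sample numbers are made up):

import math

def project_completion(completed_per_iteration, remaining_points, iteration_weeks):
    # Velocity is simply the average story points completed per past iteration.
    velocity = sum(completed_per_iteration) / len(completed_per_iteration)
    # Round up: a partly-used iteration still costs a whole time-box.
    iterations_left = math.ceil(remaining_points / velocity)
    return velocity, iterations_left, iterations_left * iteration_weeks

# Example: 18, 22, and 20 points finished in the last three iterations,
# 120 points still in the backlog, two-week iterations.
velocity, iterations, weeks = project_completion([18, 22, 20], 120, 2)
print(f"velocity ~{velocity:.0f} pts/iteration, ~{iterations} iterations (~{weeks} weeks) to go")

Nothing fancy going on there; the projection falls out of the points-per-iteration count regardless of whether the points were ever tied to clock time.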

There's one challenge with story points (maybe more, but only one that we'll look at here): as beneficial as they are at eliminating the many "psychological" connections we make when estimates are expressed in time, they're also not very natural to humans. Even those experienced with using story points have learned that the estimates aren't consistent, story point estimates from iteration to iteration aren't stable or predictable, and, when required to plan for new, or large, or long-term, or complex situations, points from previous projects aren't much help and don't satisfy many organizations' needs for budgeting.

Nonetheless, I still like the idea of story points as a teaching tool, and here's why: they help introduce the idea that estimates, whether in points or hours or days or whatever, are meant for tracking progress, not for setting expectations written in stone.

This is especially true when taking a time-boxed (and hopefully incremental and iterative) approach to development. The purpose of the estimate is to see how much can get done in the allotted time, to check throughout that time how much progress is being made and adjust expectations, and then at the end of that box of time to see how much was actually done.

In other words, the estimate of a task's effort should be used as a measure of productivity, not as a measure of accuracy.

Throughout and at the end of the time-box, it is valuable to take note of the differences between estimated and achieved productivity so that future estimates can be somewhat more accurate -- but only so that productivity can be more predictable. Clearly, as productivity predictions become more accurate, budgeting becomes more accurate.
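As a rough sketch of closing that loop (again, the names and numbers here are purely illustrative, not a prescribed method), the end-of-time-box comparison boils down to a simple calibration:

def calibration_factor(history):
    # history: list of (estimated_points, completed_points) pairs, one per iteration.
    # Returns the average fraction of the estimate that actually got done.
    ratios = [done / est for est, done in history if est > 0]
    return sum(ratios) / len(ratios)

def calibrated_plan(raw_estimate, history):
    # Scale the next raw estimate by observed productivity so plans become more predictable.
    return raw_estimate * calibration_factor(history)

# Example: three iterations planned at 25, 30, and 28 points,
# with 20, 27, and 21 points actually completed.
history = [(25, 20), (30, 27), (28, 21)]
print(f"Plan on roughly {calibrated_plan(30, history):.1f} points next iteration")

The value isn't in the accuracy of any single estimate; it's in the productivity number becoming steadier and more predictable with each time-box.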

When estimates are used as measures of productivity, it doesn't matter how much time, in physical clock terms, is being ascribed to the tasks. The number becomes as arbitrary as the notion of time itself. Time is merely the "distance between events". We're conditioned to treat time as discrete and concrete. So, when we use "time" to describe estimates, we're drawn to compete against the clock to achieve the task within the allotted time. An alternative is to complete as much of the task as we can within the time-box without killing ourselves, then to look back at how much "time" it took to get as far as we got and use that lesson to better understand our productivity.

Another way to look at it is that "time" allocated to a task isn't really physical "clock" time. It's just a rough guess at how much work can get done in the boxed time. As it is, it's highly unusual for physical clock time to align itself nicely with how much work is actually done in that span. But when it is understood that the estimate of time is nothing more than a guess used to fill up a time-box with tasks, time as applied to estimates might just as well be an arbitrary number.

The only way to get "estimated time" to align with "clock time" is, well, time and experience. That time and experience at the project level can take a while to attain. One way to short-cut the wait is with SEI's Personal Software Process (PSP), which works surprisingly well for developing other kinds of work, not just software. At the team level, there's the TSP, which builds on the PSP so that personal productivity can be combined with the overall productivity of the team.

However, this post isn't a commercial for TSP/PSP. The point of this post is that time associated with the clock comes more naturally to humans, and that using time as a measure of productivity makes estimates more valuable. When estimating a task, worry less about estimating correctly. When working on the task, worry less about getting it done within the estimate. Worry more about checking your progress against the estimate and making adjustments accordingly, throughout and at the end of the iteration.

Keep in mind, whether estimates are used for productivity or budgeting, organizations that expect the estimate to stay precise throughout the project are deceiving themselves. Organizations that use the first estimate as the basis for the entire project are even more deluded. Organizations given the freedom and flexibility to pursue incremental and iterative development often also have the luxury of not being held to imaginary long-term budgets, and so using estimates iteration by iteration is acceptable to their leadership.

Used as a measure of progress and productivity, estimates have much more power and add much more value than when used for long-term budgeting. Within a few iterations, estimation, budgeting, productivity and predictability will converge so that clock time and estimated time will be more meaningful to everyone involved.

For what it's worth: there's nothing about the above that runs contrary to CMMI. Not.One.Thing.
