Archive for the ‘Generic Practices’ Category

You’ve got processes, but . . .

Friday, September 23rd, 2011

A friend who consults in program, project, and risk management (typically to parachute in and save wayward technology projects) is working with a client whose project is dreadfully behind schedule and over budget, and, not surprisingly, has yet to deliver anything the end-client or their users can put their hands on.  It doesn’t help that his client isn’t actually known for being technology heroes.  In fact, this is not the first time his client has tried to get this project off the ground.

Looking everywhere but in the mirror, my buddy’s client decided to have the developer put under a microscope.  After all, reasoned the client, they’d hired the developer based on, among other attributes, the developer’s touting a rating of CMMI Maturity Level 3!  So, they had the developer and the product undergo a series of evaluations (read: witch hunts), including a SCAMPI (CMMI) appraisal.  Sadly, this tactic isn’t unusual.

Afterwards, trying to help his client make sense of the results, my pal asked me to review the appraisal report.  The appraisal itself was performed fairly and (more or less) accurately by someone else (not us).  It was quite detailed and revealed something very interesting.

Lo-and-behold, the company had processes!

However, the development organization nonetheless failed to demonstrate the necessary performance of the Maturity Level 3 (ML3) practices they claimed to operate with!  In other words, they had processes, but they were still not ML3!  In fact, they weren’t even Maturity Level 2 (ML2)!

How could this be?

While the details revealed some very acute issues, what was more interesting were the general observations easily discernible from far away and with little additional digging.  The appraisal company created a colorful chart depicting the performance of each of the practices in all of ML3.  And, as I noted, there were important practices in particular areas with issues that would have precluded the achievement of ML2 or ML3.  But what was more interesting were the practices that were consistently poor in all areas, as well as the practices that were consistently strong in all areas.

One thing was very obvious: the organization did, in fact, have many processes.  Most of the processes one would expect to see from a CMMI ML3 operation.  And, according to the report, they even had tangible examples of planning and using their practices.

What could possibly be going on here?

Seems awfully much like the development group had and used processes.  How could they not rate better than Maturity Level 1 (ML1)?!  Let’s set aside the specific gaps in some practices that would have sunk their ability to demonstrate anything higher than ML1.  That’s not where the interesting stuff shows up, and besides, even were those practices performed, they still would have rated under ML2.  What the report’s colorful depiction communicated was something far harder to address than specific gaps: the developers’ organization was using CMMI incorrectly.  A topic I cover at least in the following posts: here and here.

In particular, they were using CMMI to “comply” with their processes but not to improve their processes.  And, *that* is what caused them to fall far short of their acclaimed CMMI ML3.

How could I tell?

Because of where the practices were consistently good and where they were consistently gap-worthy.

I was reviewing the report with my friend on the phone.  As I was doing so he commented, “Wow!  You’re reading that table like a radiologist reads an X-ray!  That’s very cool!”  The story the chart told me was that despite having processes, and policies, and managing requirements and so on, the company habitually failed to:

  • track and measure the execution of their processes to ensure that the processes actually were followed through as expected from a time and resource perspective,
  • objectively evaluate that the processes were being followed, were working, and were producing the expected results, and
  • perform retrospectives on what they could learn from the measurements (they weren’t taking) and evaluations (they weren’t doing) of the processes they used.

It was quite clear.

So, here’s the point of today’s post… it’s a crystal-clear example of why CMMI is not about process compliance, and of how that shows up.  There are practices in CMMI that definitely help an organization perform better.  But, if the practices that are there to ensure that the processes are working and the lessons are being learned aren’t performed, then the entire point of following a process has been lost.  In Shewhart’s cycle, this would be akin to doing P & D without C & A.
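As a toy illustration (my own sketch with made-up numbers, not anything from the CMMI), here is what doing P & D without C & A looks like in code: the same overrun recurs cycle after cycle, while closing the loop lets each plan learn from the last one.

```python
# Toy sketch (hypothetical numbers): Shewhart's Plan-Do-Check-Act.
# Doing only P & D can comply with a plan; it can never improve one,
# because nothing measured ever feeds back into the next Plan.

def plan(target_hours):
    return {"target_hours": target_hours}

def do(the_plan):
    # The work really costs 125 hours; nobody planning knows that yet.
    return {"actual_hours": 125}

def check(the_plan, result):
    # The missing practice: measure actuals against the plan.
    return result["actual_hours"] - the_plan["target_hours"]

def act(the_plan, variance):
    # The other missing practice: fold the lesson into the next plan.
    return plan(the_plan["target_hours"] + variance)

# P & D only: the same 25-hour overrun, cycle after cycle.
p = plan(100)
overruns_pd = [do(p)["actual_hours"] - p["target_hours"] for _ in range(3)]

# Full PDCA: the first overrun corrects the second plan; estimates settle.
p = plan(100)
overruns_pdca = []
for _ in range(3):
    variance = check(p, do(p))
    overruns_pdca.append(variance)
    p = act(p, variance)
```

Without the C and A steps the overrun list never changes; with them, the estimate is right from the second cycle on.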

The only thing that approach can achieve is compliance.  There’s no chance for improvement that way except by accident.

CMMI is not about “improvement by accident”.  (Neither is Agile for that matter.)

Interestingly enough, while there were clearly issues with the developer’s commitment to improvement, there may not necessarily have been any clear issues with either the product or the results of their processes.  While the experience may not have been pleasant for the developer, I don’t know that my buddy’s client can claim to have found a smoking gun in their supplier’s hands.  Maybe what the client needs is a dose of improving how they buy technology services, which they might find in CMMI for Acquisition.

What’s in a Policy?

Tuesday, February 12th, 2008

This post is part gripe part informative.

Let’s start with the gripe (it’s also informative).

Two of my kids go to a pre-school with the following inclement weather policy:
(Only the names of the places have been edited. Otherwise, this is verbatim.)

  1. Our policy is based on [B] County School Announcements. We do not make specific [OurSchool] Announcements.
  2. If [B] County Schools are closed due to inclement weather, our school will follow the same procedure.
  3. If public schools are opening one hour late, we will open on time. There will be no before school care.
  4. If public schools open 2 hours late, we will open at 10:00 AM. There will be no before-school care. The morning session will end at 12 Noon.
  5. If public schools open more than 2 hours late, we will open at 12 Noon for extended day children.
  6. If [B] County closes early, we will also close.
  7. If inclement weather develops while the children are in school, it may be advisable to close school early. Listen to your radio for [B] County closing announcements. Parents concerned about inclement weather should pick up their children as soon as possible. Working parents are responsible for making arrangements to have their children picked up on time.
  8. Our facility is air-conditioned in cases of extreme heat.

Inclement Weather Procedure
Please read carefully.
There have been changes to our policy.

Notice anything strange?

How about they don’t know the difference between a Policy and a Procedure. (Don’t even get me started on where “process” fits into this!)

How about the problem that what their “policy” boils down to is:
a) we don’t have a policy,
b) we follow what the County does,
c) except when we don’t.

In reality, how this “policy” plays out is this: If/when (and it *has* happened) that our kids’ school decides to not follow the County’s lead, they have no reliable way of notifying parents. They expect us to either call them, monitor email, or just show up. The issue with calling them is that they’ve got about 100 families of kids at their school. Not like 20 in a daycare.

Here’s another problem. Today, public schools were closed for Primary Elections (our County uses the public schools as voting locations), and, there was inclement weather. How exactly was the policy going to work today?!?!
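To see the gap concretely, here’s a toy sketch (entirely hypothetical, in Python) of rules 2 through 6 as a lookup table.  The election-day case never maps to anything, because the rules key off *what* the County does and never *why*.

```python
# Toy encoding (hypothetical) of the school's rules 2-6: the "policy"
# is really a lookup keyed on what the County does -- and only on that.
COUNTY_TO_SCHOOL = {
    "closed":           "closed",            # rule 2
    "open 1 hr late":   "open on time",      # rule 3
    "open 2 hrs late":  "open at 10:00 AM",  # rule 4
    "open 3+ hrs late": "open at 12 Noon",   # rule 5
    "closing early":    "closing early",     # rule 6
}

def school_status(county_status):
    # On election day, County schools are "closed" -- but as polling
    # places, not for weather, and the pre-school is actually open.
    # Since the rules never distinguish *why* the County closed, the
    # lookup either gives a wrong answer or no answer at all.
    return COUNTY_TO_SCHOOL.get(county_status, "undefined -- call the school?")
```

Feed it the literal County status on election day and the "policy" says the school is closed when it isn't; feed it anything the table doesn't anticipate and it says nothing at all.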

Don’t think I haven’t brought this to their attention, from the very first time the first of my kids attended that school.  Of course, I was ignored by the school administration.  To say that my concerns were “dismissed” as insignificant details would be giving them too much credit for even comprehending why my observation was an issue at all.

So, here’s the “informative” part of the post.
And, the “agile” connection.

A policy is little more than a charter for doing something.  When it comes to processes and process improvement, the policy gives “teeth” from upper leadership to the performance of the processes by projects.  The existence of policies for doing a process is the first indication that processes are being worked into the fabric of an organization.  In CMMI, being worked into the fabric of an organization is called “institutionalization” and is carried out by performing the Generic Practices.  Since “being worked into the fabric of an organization” is about making an impact on the organization’s culture, I like to call this aspect of process implementation acculturation.

Here are some other hints and tips for policies:

  • They don’t include processes or procedures, though they may reference them for clarification, edification, or example.
  • They could easily be replaced with a “charter” if that works better for your organization.
  • They shouldn’t be so weak as to be rendered obsolete or useless under the strain of simple integrity (read: “smell”) tests.
  • Someone should care whether the policy is followed because having the policy ought to be essential in helping the organization and its stakeholders make decisions and be predictable.
  • Policies should be clear about what’s expected of people’s conduct and performance (which makes them *not* the same as vision or mission statements — if it reads like one of those, it’s not a policy).
  • They can actually be carried out, and you know when they aren’t; and when they aren’t, it’s either very unusual, or someone’s not gonna be happy.
  • They suggest a certain priority of activities; when in doubt or when conflict arises, the policy should help sort through it, or at least let those operating under the policy know when they’re in uncharted territory so they can seek leadership input.
    And, most importantly,
  • Policies set expectations, not steps, not work-flows, not checklists.

In agile organizations, look to policy to ensure minimal ‘prescriptiveness’; in “traditional” organizations looking to become “agile”, look to policy as a likely source of unnecessary limitations, restrictions, and undue ‘proceduralizing’.

In our experience, our kids’ school isn’t alone in having intractable policies. Only, when our kids’ school ran into me, they were up against someone who actually thought having a “policy” meant something.

Silly me.

As seen elsewhere…

Friday, July 6th, 2007

A recent thread over on the extremeprogramming Yahoo! group delved into whether or not CMMI sucks.  One sub-thread orbited around the topic of Generic Practices.

As some folks know, the Generic Practices are what lead to the “institutionalization” of process improvement. In discussing this concept, the following lengthy (in text) but concise (relative to studying CMMI) explanation was given as a way to understand “institutionalization” by understanding the “Capability Levels” in CMMI.

The full post is here. The relevant text follows with small edits indicated by []s:

“Institutionalization”, besides being a ridiculously long word, refers to the depth to which you have knowledge of your process. Institutionalization also often implies the extent to which your processes are ingrained into your organization, but really, when you look at what institutionalization involves it’s more about how well you know your processes, not how widespread any given process may be throughout your organization.

At the lowest level at which anyone gets any ‘credit’, “institutionalization” is hardly the appropriate term for the state of the process.  This is “level 1”, where the process gets done, but by no means would you say any forethought, planning, or commitment was put into getting the process done.

The next level (level 2) is where we start to see some concerted effort towards treating the process as though it’s something we cared about.

We see that the process is something someone in charge wants to be done (a.k.a. “policy”), we see that we know what tasks are involved in executing the process (a.k.a. “plan”), we see that resources have been allocated towards doing these tasks and that the tasks have been assigned as someone’s responsibility.

If training for the project’s execution of the process is needed, that’s done at this level as well. We’d also expect that we’d see the outputs of the process as something we cared about so we’d control the outputs so that we could appropriately version, find, and update those outputs over time.

Given how much we’ve already invested in this process, it makes sense then to involve those folks who hold a stake in the outcome and to monitor the process’ progress and activities, making changes to the plans, scheduling, or resources as needed to keep the process rolling.

We’d also want to keep tabs on whether the process is meeting the objectives of why we wanted the process done in the first place. And, finally, we’d review all of these process-oriented activities with people who can make decisions about the cost/ benefit/ value/ funding/ resources associated with the process fairly regularly over the life of the project.

These activities comprise what CMMI calls a “managed” process. An organization needs to know what process it’s going to follow and what makes up that process if it’s going to manage it. Thus comes the notion that the process is “institutionalized” as a “managed” process. We know enough about the process to manage it.
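As a reading aid only (this is my sketch, not anything defined by the CMMI), the level-2 elements just described can be collected into a simple checklist structure:

```python
from dataclasses import dataclass, fields

# Hypothetical sketch of the capability level 2 ("managed") elements
# described above -- a process is "managed" only when all are in place.
@dataclass
class ProcessInstitutionalization:
    policy: bool = False                 # leadership wants it done
    plan: bool = False                   # the tasks involved are known
    resources_assigned: bool = False     # people/time allocated, responsibility assigned
    training_done: bool = False          # performers trained, if needed
    outputs_controlled: bool = False     # work products versioned, findable, updatable
    stakeholders_involved: bool = False  # those with a stake in the outcome engaged
    monitored: bool = False              # progress tracked; plans/resources adjusted
    objectives_checked: bool = False     # meeting the reasons we wanted the process
    reviewed_with_management: bool = False  # regular cost/benefit/funding reviews

    def is_managed(self) -> bool:
        # "Managed" is all-or-nothing here, which matches the spirit of
        # level 2: knowing the process well enough to manage it.
        return all(getattr(self, f.name) for f in fields(self))
```

A process with none of these flags set is the level-1 “performed” state described earlier: it gets done, but nothing about it is managed.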

Beyond this level are 3, 4, and 5. Sometimes it’s easier to understand “why” level 3 by looking at levels 4 & 5 first. At level 5 you know enough about your process that you can optimize it by eliminating the “noise” in the process.

A noisy engine can often be quieted by simply tuning it. Adjusting fuel, air, timing. But there’s nothing outside the engine that’s causing it to be noisy, it’s just the engine itself. A noisy engine usually means inefficiency. The noise is just a symptom of that inefficiency. The same is true for processes. But in processes, true noise elimination is something that can realistically only be done mathematically. So, at level 5, the noise is found and reduced using models and statistics. Noise usually isn’t spread all over the process, it’s usually limited to some particular subset of the process. Usually, it’s just some sub-process of the process on which statistics are applied.

Before you can get to this point, however, you must first be able to eliminate (or at least control) external factors that unnecessarily influence your process.  This isn’t “noise” because noise comes from the process, just like in an engine.  And, just like in an engine, this is more like a rattle or a knocking sound, or even blunt-force damage.  Something is either very broken or badly impacted by something related to, but not under the control of, the engine.  [In other words, the engine/process is not fully in control.]  But, unless we know what the engine is expected to look like and how it’s expected to operate, we don’t really know where to look to eliminate the issue.  We need (with engines) the engine’s shop manual, which includes its diagrams and models.  With processes, it’s the same.
We need to be able to model them before we can determine what’s supposed to be there and what’s not. [I.e., we need to know what an "in control" process looks like and what it's capable of doing.] The engine shop manual has performance specifications, and so do the processes at level 4. Capability Level 4 produces the models and performance expectations for each process as well as for the project itself. Without these we can’t get to level 5 because, while there’s certainly noise in the system at level 4, there are also too many other special causes of variation [let alone whether or not the process is in control] that must be eliminated before we can start to optimize in level 5.

Together, levels 4 & 5 are very much parallel to what many people know today as “Six Sigma”.
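As a toy illustration of the level-4 idea (simple 3-sigma limits on made-up numbers; real high-maturity work uses proper statistical process control, not this sketch), here’s how a performance baseline for a sub-process can flag a “knock” that ordinary noise wouldn’t explain:

```python
import statistics

# Hypothetical sketch: establish a performance baseline for a
# (sub)process, then flag special-cause variation -- points outside
# the 3-sigma "voice of the process".
def control_limits(baseline):
    mean = statistics.mean(baseline)
    sigma = statistics.pstdev(baseline)
    return mean - 3 * sigma, mean + 3 * sigma

def special_causes(samples, baseline):
    low, high = control_limits(baseline)
    return [x for x in samples if x < low or x > high]

# Review cycle times (days) from past projects form the baseline...
baseline = [5.1, 4.9, 5.0, 5.2, 4.8, 5.0, 5.1, 4.9]
# ...and the current project has one "knock" from outside the process
# (say, a server outage), not ordinary engine noise.
current = [5.0, 5.2, 9.5, 4.9]
outliers = special_causes(current, baseline)
```

Points inside the limits are the process’s own noise, the level-5 target; the flagged point is the rattle that level 4 has to find and remove first.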

So, now there’s level 3. What’s in there? If levels 4 & 5 are about getting to the point where we know so much about our processes that we can use statistics to eliminate process variation and noise, then capability level 3 must be where we eliminate chronic waste. How do we discern the chronic waste from the “necessary” activities? Well, we must first define the process so that we can then improve it.

There’s no point in trying to improve a process that’s not defined, and, there’s no point in trying to define a process that’s not even managed, and no point in trying to manage a process that no one does, wants, or needs.

This is what the generic practices of CMMI do. They create an infrastructure to better understand the process toward the ability to optimize it. Starting with doing the process, then managing it, then defining and improving it, then getting into statistics to model and predict performance which ultimately opens the door to optimization.

Believe it or not, organizations at (true) levels 4 & 5 are highly agile.  They can pretty much roll with anything that’s thrown at them.  True level 4 & 5 organizations are NOT straitjacketed by their processes; they’re actually freed by them.  If anyone is in (or has been in) a so-called “level” 4 or 5 organization and felt stifled, I’d wager the organization was not really “in it” for the improvements.