15 November 2009

My Caloric Rise to High Maturity Health

Today I put myself into a program of health and fitness with the express purpose of "putting my body where my mouth is".  For the next 6+ months I plan to track specific health & fitness measures as part of an overall performance objective of increasing my endurance, losing body fat, and gaining better health.  Using the values, principles and practices of high-maturity CMMI, I will demonstrate statistical, quantifiable results.

Making this effort public and committing to report the results by SEPG-Europe 2010 is part of the effort to personally motivate myself to stay on track.

I plan to track normal effort for about a month, then to begin looking for patterns, correlations, and perhaps even causality.  In particular, I plan to seek processes, baselines, and models that I can begin to experiment with to achieve higher performance and better/faster/long-lasting results.  I would like to be able to have specific patterns and models which I can use and manipulate for specific conditions (such as travel, availability of exercise equipment, lack of planning/control over food choices, and other variations).

I would like to be able to further determine the critical sub-factors that I can focus on when I don't have all the ideal conditions for weight and exercise management.  For example, what's more important: total calories or calories from some specific source?  What's more influential: what I eat or whether I exercise?  What should I try to control more: meal frequency or meal size?

If I had to pick a few things that I could easily manage over time, which would they be?
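A month of tracked data should make these questions answerable with even simple statistics. As a hypothetical sketch (the factor names and numbers are illustrative, not part of the measurement plan), a Pearson correlation between each candidate factor and weekly weight change could rank the sub-factors:

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def rank_factors(factors, weight_change):
    """factors: {name: weekly series}; returns names sorted by |correlation|
    with weekly weight change, strongest influence first."""
    scores = {name: pearson(series, weight_change)
              for name, series in factors.items()}
    return sorted(scores, key=lambda name: abs(scores[name]), reverse=True)
```

With real data, the factor that tops the ranking is the one worth controlling when not everything can be controlled.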

I would like this to result in a long-term sustainable program that works for me no matter what my circumstances, and, if/when I can't control all the variables, to know what *specifically* I can do to get specific results and how long it will take to get back to where I want to be.

Using practices from Measurement and Analysis (MA), Project Planning (PP), Project Monitoring and Control (PMC), Process & Product Quality Assurance (PPQA), High Maturity, and others, I will work towards specific process performance objectives in personal health.

Business objectives (Within 6 months from 15 November 2009):

  • Reduce body fat by at least 40 lbs.
  • Increase endurance/intensity at least 20%.
  • Reduce waistline to no greater than US size 38.
  • Maintain or increase total muscle mass.
  • Understand the influence/impact of processes, patterns and tools on health.
  • Establish a manageable, defined sustainable process for my personal health including:
    • how much I need to eat and of what
    • how much I should exercise and what types of exercise
  • Create a long-term strategy for well-being.

The information I need is:

  • Nutrition data (Calories IN)
    • What I eat
    • Calories from what I eat
    • Distribution of calories in terms of fat, carbs, protein and fiber.
    • When I eat
  • Exercise data (Calories OUT)
    • Type of exercise
    • When I exercise
    • Intensity (specific to exercise)
    • Calories burned
    • How long I've exercised
    • How I feel afterwards
  • Weight data
    • Weight
    • Date and time of day
    • Have I eaten before weighing?
    • Have I exercised before weighing?
    • Have I relieved b/m before weighing?
    • Was I wearing clothes?
  • Clothes size data
    • Waist
    • Chest
    • Thighs
    • Hips/Butt
    • Neck

I plan to eat no more than 2400 calories/day, up to 6 "meals" or snacks per day.
I plan to exercise a minimum of 5 days/week.
I plan to weigh myself once/week.
I plan to measure my clothes size measurements once/month.
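The "Calories IN" plan above can be sketched as a simple daily log. This is a minimal, hypothetical structure of my own devising; the field names mirror the nutrition data listed earlier, and the 2400-calorie budget and 6-meal limit come straight from the plan:

```python
from dataclasses import dataclass, field
from datetime import datetime

CALORIE_BUDGET = 2400   # planned daily maximum
MAX_MEALS = 6           # up to 6 "meals" or snacks per day

@dataclass
class Meal:
    eaten_at: datetime      # when I eat
    description: str        # what I eat
    calories: int           # calories from what I eat
    fat_g: float            # distribution of calories:
    carbs_g: float          #   fat, carbs,
    protein_g: float        #   protein,
    fiber_g: float          #   and fiber

@dataclass
class DayLog:
    meals: list = field(default_factory=list)

    def add(self, meal: Meal) -> None:
        if len(self.meals) >= MAX_MEALS:
            raise ValueError("over the planned meal/snack count")
        self.meals.append(meal)

    @property
    def total_calories(self) -> int:
        return sum(m.calories for m in self.meals)

    def within_budget(self) -> bool:
        return self.total_calories <= CALORIE_BUDGET
```

A month of these logs is the raw material for the baselines and models described above.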

For years I've been using the image of a fit man as an example of a "model" for health, and I've been saying that despite the fact that he doesn't represent all men in all situations, he can still be an example of what "fitness" can be.  I usually joke about how, despite the fact that the man-in-the-picture's waist is probably smaller than my own thigh, I can still pursue a level of fitness that works for me and would appear as fit as the man despite our differences.

The time has come for me to make good on that joke and to pursue fitness in a way that I have never done before, and, I believe, in a way I must pursue to finally settle the question for myself of "what does a 'fit' me look like?"  It's a question I've been after for nearly 40 years.  For about the last 10 years I've suspected the answer will be found in a profound exploration of my own personal process performance.

I hope to reach my initial objectives in time to:
1. Reach a steady state condition such that I can report on both the initial drop as well as some aspects of a "maintenance" state.
2. Have something to report by the time the presentation materials are due.

For years I've been using a health analogy to describe process improvement; to describe the differences between a prescription and a description of improvement.  With this fitness project, I will demonstrate how a few simple values and concepts can be leveraged into an entire approach using high maturity practices that convert these descriptive concepts into very specific execution of practices that work for me, and can possibly demonstrate both process improvement and high maturity for others.

I have avoided this inevitable and dreaded project for years.


06 July 2007

As seen elsewhere...

A recent thread over on the extremeprogramming Yahoo! group delved into whether or not CMMI sucks. One sub-thread was orbiting on the topic of Generic Practices.

As some folks know, the Generic Practices are what lead to the "institutionalization" of process improvement. In discussing this concept, the following lengthy (in text) but concise (relative to studying CMMI) explanation was given as a way to understand "institutionalization" by understanding the "Capability Levels" in CMMI.

The full post is here. The relevant text follows with small edits indicated by []s:

=-=-=-=-=-=
"Institutionalization", besides being a ridiculously long word, refers to the depth to which you have knowledge of your process. Institutionalization also often implies the extent to which your processes are ingrained into your organization, but really, when you look at what institutionalization involves it's more about how well you know your processes, not how widespread any given process may be throughout your organization.

At the lowest level at which anyone gets any 'credit', "institutionalization" is hardly the appropriate term for the state of the process. This is "level 1", where the process gets done, but by no means would you say any forethought, planning, or commitment was put into getting it done.

The next level (level 2) is where we start to see some concerted effort towards treating the process as though it's something we cared about.

We see that the process is something someone in charge wants to be done (a.k.a. "policy"), we see that we know what tasks are involved in executing the process (a.k.a. "plan"), we see that resources have been allocated towards doing these tasks and that the tasks have been assigned as someone's responsibility.

If training for the project's execution of the process is needed, that's done at this level as well. We'd also expect that we'd see the outputs of the process as something we cared about so we'd control the outputs so that we could appropriately version, find, and update those outputs over time.

Given how much we've already invested in this process, it makes sense then to involve those folks who hold a stake in the outcome and to monitor the process' progress and activities, making changes to the plans, scheduling, or resources as needed to keep the process rolling.

We'd also want to keep tabs on whether the process is meeting the objectives of why we wanted the process done in the first place. And, finally, we'd review all of these process-oriented activities with people who can make decisions about the cost/ benefit/ value/ funding/ resources associated with the process fairly regularly over the life of the project.

These activities comprise what CMMI calls a "managed" process. An organization needs to know what process it's going to follow and what makes up that process if it's going to manage it. Thus comes the notion that the process is "institutionalized" as a "managed" process. We know enough about the process to manage it.

Beyond this level are 3, 4, and 5. Sometimes it's easier to understand "why" level 3 by looking at levels 4 & 5 first. At level 5 you know enough about your process that you can optimize it by eliminating the "noise" in the process.

A noisy engine can often be quieted by simply tuning it. Adjusting fuel, air, timing. But there's nothing outside the engine that's causing it to be noisy, it's just the engine itself. A noisy engine usually means inefficiency. The noise is just a symptom of that inefficiency. The same is true for processes. But in processes, true noise elimination is something that can realistically only be done mathematically. So, at level 5, the noise is found and reduced using models and statistics. Noise usually isn't spread all over the process, it's usually limited to some particular subset of the process. Usually, it's just some sub-process of the process on which statistics are applied.

Before you can get to this point, however, you must first be able to eliminate (or at least control) external factors that unnecessarily influence your process. This isn't "noise" because noise comes from the process, just like in an engine. And, just like in an engine, this is more like a rattle or a knocking sound, or even blunt-force damage. Something is either very broken or badly impacted by something related to, but not in control of, the engine. [In other words, the engine/process is not fully in control.] But, unless we know what the engine is expected to look like and how it's supposed to operate, we don't really know where to look to eliminate the issue. We need (with engines) the engine's shop manual, which includes its diagrams and models. With processes, it's the same.
[]
We need to be able to model them before we can determine what's supposed to be there and what's not. [I.e., we need to know what an "in control" process looks like and what it's capable of doing.] The engine shop manual has performance specifications, and so do the processes at level 4. Capability Level 4 produces the models and performance expectations for each process as well as for the project itself. Without these we can't get to level 5 because, while there's certainly noise in the system at level 4, there are also too many other special causes of variation [let alone whether or not the process is in control] that must be eliminated before we can start to optimize in level 5.

Together, levels 4 & 5 are very much parallel to what many people know today as "Six Sigma".
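To make the "noise vs. special cause" distinction concrete, here is a minimal sketch (mine, not from the thread) of an XmR "individuals" chart, one of the simplest statistical tools for telling the two apart; the 2.66 constant is the standard XmR scaling factor applied to the average moving range:

```python
def xmr_limits(samples):
    """Natural process limits (lower, center, upper) for an XmR chart."""
    center = sum(samples) / len(samples)
    # average of the absolute point-to-point differences
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    spread = 2.66 * mr_bar
    return center - spread, center, center + spread

def special_causes(samples):
    """Indices of points outside the natural process limits --
    candidates for special-cause (not noise) investigation."""
    low, _, high = xmr_limits(samples)
    return [i for i, x in enumerate(samples) if x < low or x > high]
```

Points inside the limits are the routine "noise" of the process (a level 5 optimization problem); points outside are the rattles and knocks that level 4 work has to find and eliminate first.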

So, now there's level 3. What's in there? If levels 4 & 5 are about getting to the point where we know so much about our processes that we can use statistics to eliminate process variation and noise, then capability level 3 must be where we eliminate chronic waste. How do we discern the chronic waste from the "necessary" activities? Well, we must first define the process so that we can then improve it.

There's no point in trying to improve a process that's not defined, and, there's no point in trying to define a process that's not even managed, and no point in trying to manage a process that no one does, wants, or needs.

This is what the generic practices of CMMI do. They create an infrastructure to better understand the process toward the ability to optimize it. Starting with doing the process, then managing it, then defining and improving it, then getting into statistics to model and predict performance which ultimately opens the door to optimization.

Believe it or not, organizations at (true) levels 4 & 5 are highly agile. They can pretty much roll with anything that's thrown at them. True level 4 & 5 organizations are NOT straitjacketed by their processes, they're actually freed by them. If anyone is in (or has been in) a so-called "level" 4 or 5 organization and felt stifled, I'd wager the organization was not really "in it" for the improvements.


24 May 2007

More on Models

EE GADS!

It's been over a month since my last post. In fact, possibly the longest stretch ever in the life of this blog. Sorry about that! Where *does* the time go?

Well... I hope everyone's been busy and successful with their time!

So... In a recent presentation, I displayed several pictures of models: a model airplane in a wind-tunnel, a model of an entire airport terminal, a model of an office building, the cover of a fitness magazine with a buff pretty-boy pictured, and last, but not least, a computer-rendered model of a LEGO® model of the NCC-1701B, "USS Enterprise" -- which so far NOT ONE PERSON seeing the picture has failed to recognize as (at least) the "USS Enterprise" of Star Trek fame.

There on the screen were five models. They all share one very basic characteristic:
not one of them is real in any way. Not even the photo of the fitness model. At the very least it's a picture, not a real person. Yet, in every case, each model could be used to do any number of things such as learning by example, taking relative measurements, communicating ideas, and putting pictures in our minds.

The importance of understanding the concept of a "model" is critical to understanding and effectively implementing process improvement with CMMI.

The distinction to make is how "models" differ from (1) "standards", (2) "specifications", and foremost, from (3) reality.

Without clarity of these distinctions, implementation of CMMI will be challenging, tedious, and frustrating, and implementing CMMI in agile settings (or vice-versa) will be effectively impossible.

There's a certain skill (not sure how to describe it) in having the ability to take a model and make something real out of it. The more abstract the model, the more developed this skill must be. Because models are abstractions (some more-so than others), it's often helpful to be as detailed as possible when describing examples of what the product of the model might look like. This is what CMMI does. There are many examples throughout of what might be produced when the model is used.

Some of these examples are called "typical work products", and to some, even the "Specific Practices" can be more readily applicable when thought of as "sample" or "suggested" practices.

But here's the point to this post: in the picture, the USS Enterprise (NCC-1701B) is recognized by nearly everyone in every presentation I've ever shown it. And yet, it's not just a model. It's a LEGO® model and still, it's recognized. More than that, it's a computer-rendering of a LEGO® model and still, it's recognized. BUT WAIT! THERE'S MORE! It's a computer-rendering of a LEGO® model of a FICTIONAL space vessel that WON'T BE BUILT for *another* 400 years!

And yet, everyone recognizes it. No one denies that it is an instance of a model.

Here are two things that make this possible, and it is the confluence of these two things that must be present to enable a successful implementation (or appraisal) of CMMI (or of whether or not it is being applied) in an agile (or any) setting without having to create or see the precise examples of practices or work products described in the text:
(1) People must understand what the model is and how to discern whether what they're doing or seeing represents the model. This is a skill; not everyone has it. And,
(2) People must be thoroughly exposed to, if not immersed in, the context in which the model is being applied.

There are people every day trying to use or appraise CMMI who don't have one or the other or both. It's no wonder they don't recognize the Enterprise when they see it.

Earlier, I mentioned how "models" differ from (1) "standards", (2) "specifications", and (3) reality. I'll get to that shortly.
(I hope the wait was worth it.)
