Archive for the ‘Data’ Category

Happy 2011!! Don’t let mediocrity be a “goal”!

Sunday, January 2nd, 2011

With many people and business executives making New Year’s resolutions, today’s topic is goals, and how setting the wrong goals can undermine becoming a high-performance organization.

For example, a business *goal* of +/-10% on budget/schedule? What’s wrong with this picture?  What does it say about an organization that makes a business *goal* out of being within 10% of its budget and schedule?

Does it give customers a warm fuzzy that a business knows what it’s doing when *their* *GOAL* is to come within 10% of what they said they’d do?  *THAT’S* supposed to make you feel good?

Shouldn’t goals be something to aspire to?  A challenge?  And, if getting within 10% of the budget or schedule is an aspiration or a challenge, that’s supposed to be *goodness*?

Such goals are nothing more than an aspiration to be mediocre: an admission that the organization actually has little confidence in its ability to deliver on commitments, to hit targets.

That’s one way to look at it.

Another (and probably more accurate) way is to say that their estimates are a joke, and that when the “estimate” becomes the allocated budget, what they’re saying is that they’re praying the estimate won’t screw them.  Furthermore, it’s likely a reflection that they really don’t know their organization’s true capability in a “show me the data” kind of way.  They don’t have data on lead time, cycle time/takt time, touch time, productivity, throughput, defects/muda, or other performance-revealing measures.
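For illustration, here’s a minimal sketch of what computing a couple of these measures might look like.  Everything in it — the work items, the dates, the field layout — is made up for the example; the point is only that these numbers come straight out of data an organization already has (or should have):

```python
from datetime import datetime

# Hypothetical work items: (requested, started, finished) dates.
# All names and numbers here are illustrative, not from any real system.
items = [
    ("2010-11-01", "2010-11-03", "2010-11-08"),
    ("2010-11-02", "2010-11-09", "2010-11-12"),
    ("2010-11-05", "2010-11-10", "2010-11-16"),
]

def days(a, b):
    """Calendar days between two ISO dates."""
    return (datetime.strptime(b, "%Y-%m-%d") - datetime.strptime(a, "%Y-%m-%d")).days

lead_times  = [days(req, done) for req, _, done in items]      # request -> delivery
cycle_times = [days(start, done) for _, start, done in items]  # work started -> delivery

# Throughput: items delivered per calendar day over the observed span.
span = days(min(i[0] for i in items), max(i[2] for i in items))
throughput = len(items) / span

print("avg lead time (days): ", sum(lead_times) / len(lead_times))
print("avg cycle time (days):", sum(cycle_times) / len(cycle_times))
print("throughput (items/day):", round(throughput, 3))
```

Nothing fancy — and that’s the point.  An organization that can’t produce even this much from its own records has no basis for confidence in any target it sets.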

And so, without real data to instill confidence in capabilities, setting lame goals to hit targets is like many other things such organizations do: they go about business without a clear understanding of what they need to do or what it’s going to take to get the job done.  That way, when they don’t hit their targets they can just blame the innocent or find some other excuse for remaining mediocre.  After all, how exactly would such an organization expect or plan to hit their targets?  Come on!  Let’s be real.  They have no idea! 

Either way, making it a *goal* to do something we *expect* them to do is rather lame!

This year, don’t make lame resolutions.  Instead, come up with a strategy and a plan to attain *confidence* in being able to hit specific SMART targets.  Then, grow that confidence and narrow the spread of the targets.
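What would “growing confidence and narrowing the spread” look like with data?  Here’s one hedged sketch: take a (hypothetical) history of estimated vs. actual durations, measure the bias and the spread of the estimation error, and quote targets corrected for that bias.  The numbers and the correction function are illustrative assumptions, not a prescribed method:

```python
import statistics

# Hypothetical history: (estimated_days, actual_days) for past projects.
history = [(20, 26), (35, 38), (10, 14), (50, 47), (15, 19), (30, 36)]

# Signed relative error: +0.30 means the work took 30% longer than estimated.
errors = [actual / est - 1 for est, actual in history]

mean_err = statistics.mean(errors)   # systematic over/under-estimation (bias)
spread   = statistics.stdev(errors)  # how tight (or loose) the estimates really are

print(f"bias:   {mean_err:+.1%}")
print(f"spread: {spread:.1%}")

def calibrated(estimate):
    """A target corrected for the organization's measured bias."""
    return estimate * (1 + mean_err)
```

The *goal* then isn’t “be within 10%”; it’s to watch `spread` shrink release over release as the organization actually learns its capability.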

Doing Agile CMMI without “Doing” Agile or CMMI

Monday, November 8th, 2010

There’s an under-appreciated reality of what either agile or CMMI can accomplish for an organization.  In particular, it’s not as much about what either accomplishes for an organization as much as it is about what an organization does for itself that achieves agility and systemic improvement.

It seems to be a decades-old issue that many technology-oriented companies, and, it seems, especially software companies, struggle with organizing and managing operations towards excellence.  I can’t even begin to dig into any reasons why this is so, but there may be some truth to the stereotype about technology people not being good with business and/or people. ;-)

I’ve found something fascinating that is fairly consistent across many companies I’ve visited or discussed with colleagues.  What’s fascinating about it is not only the consistency across multiple fields, industries, verticals, and national boundaries, but that it reinforces a position I’ve taken since the beginning of my career.  That position is the aforementioned “under-appreciated reality”:

Aligning the organization with specific business goals and providing a supportive culture
leads to broad behaviors at all levels that result in high performance.

OK.  So, that may not seem earth-shattering.  But there’s a lot in this statement about agile and CMMI that too many organizations fail to “get”.  And this is where all the anecdotal evidence from the many companies comes into play:

Organizations with a culture of excellence generate behaviors (including setting and pursuing specific business goals) that achieve agility and systemic improvement without specifically setting out to achieve either “agile” or “CMMI”.

Throughout my earlier career, I was routinely frustrated by “training” that provided me with specific tools and techniques for dealing with “many common” situations – pretty much all of which were cultural, interpersonal, and otherwise based on human behavior.  The cases, examples, and solutions all felt very canned and contrived.  Why?  Because, in effect, they were.  They were very specific to the context and would only solve issues in that context.  What the examples lacked – and by extension, the entire course – were the fundamental tools with which to deal with situations that were not neatly boxed into the provided context.  In other words, these training courses provided practices.  These practices work in explicit situations, but they fail to provide the basis upon which those practices were built.  Without such a basis, I and other consumers of this “training” could not address real situations that didn’t match the training’s canned scenarios.

“Doing” agile or CMMI by “doing” their respective practices results in exactly the same limited benefits.

Making agile or CMMI “about agile” or “about CMMI” delivers little value and lots of frustration.  These are only practices.  Practices are devoid of context.  A culture of excellence and an explicit business case to pursue improvements provide the necessary context.

We see this all the time.  For example, for decades in the West mathematics was taught in a way that left many students wondering, “what will I do with this?”  (This may still be true in many places.)  It was/is taught without any context for how it can help them better analyze and understand their world.  As a result, Western students have historically been less interested in math, done less well on math tests, and been less inclined to study in fields heavily dependent on math.  All due to being taught math for math’s sake and not as a means to a beneficial end.

Medicine is also taught this way around the world, leading too many doctors to see patients as packages of “symptoms” and “illnesses” rather than as people who need help.  Scientific exploration often gets caught up in the same quandary.  In exploration, the exploration itself is the goal; if you’re looking for a specific answer, it’s research; when you’re trying to create a specific solution, it’s development.  Mixing up “exploration” with R&D will frequently result in missing interesting findings in pursuit of narrow objectives.

In agile practices, what’s more important: doing Scrum or delivering value?  Pair programming, or reducing defects?  Maximizing code coverage in unit tests or testing the right parts of the product?  “Doing” Scrum, pairing, and automating unit tests are intended to deliver more product of high value, sooner.  Focusing on the practices and not on what’s best for the customer misses the point of these practices.  Same with CMMI.

What are the economics of your core operation?  Not just what your group costs to operate on a monthly basis, but what unit of value is produced for any given unit of time?  How do you know?  Why do you believe your data is reliable?  The ability to make decisions relies on data, and when the data is unreliable, decisions, plans, and anything else that relies on that data are questionable and risky.
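The questions above boil down to arithmetic any group could do today — if it collected the inputs.  Here’s a back-of-the-envelope sketch with made-up figures (every number below is an illustrative assumption, not a benchmark):

```python
# Hypothetical monthly figures for one group -- illustrative numbers only.
monthly_cost = 180_000.0   # total cost to run the group for a month
features_delivered = 12    # units of value shipped in that month
defects_escaped = 9        # escaped defects that erode the value delivered

# Unit economics: what does one unit of value actually cost?
cost_per_feature = monthly_cost / features_delivered
defect_rate = defects_escaped / features_delivered

print(f"cost per unit of value: ${cost_per_feature:,.0f}")
print(f"escaped defects per feature: {defect_rate:.2f}")
```

If the inputs to this arithmetic are guesses, so are the outputs — which is exactly the “questionable and risky” situation described above.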

It turns out (not surprisingly) that when a group focuses on what’s important AND has the economic data to reliably understand the behavior of their operation, it aligns their actions with the very same goals set forth in both agile and CMMI.

Focusing on the right things in your operation will cause behaviors that achieve agility and “rate well” against CMMI.  Whether or not you’re even trying to “do” agile or CMMI.

Decisions without Data

Thursday, April 22nd, 2010

As many people know, for the six days ending this past Tuesday, the UK, along with much of Europe, was in a virtual “lock-down” of commercial turbine-engine air travel.

The instigator of this situation was last week’s eruption of the Eyjafjallajökull volcano in Iceland, whose plume of ash and debris was carried by the wind and jet stream straight over to Europe.

We humans are no match for the one-two punch of geology and meteorology, but how we respond to events such as these is entirely within our control.  It appears that the collective wisdom in Europe included no contingency planning for this sort of situation.  As a result, the air-space lock-down went on for days.  Many now argue it should not have lasted more than 48 hours, and should never have resulted in a nearly system-wide blanket closure of air-space in most of Europe under any circumstances.

But even the lack of a plan for what to do isn’t the underlying cause, but merely a symptom of the root cause:

No defined standards, and decision-making despite a lack of data.

As early as Saturday, 17 April, airlines were conducting their own flight tests with actual (but empty) passenger aircraft, and were returning information regarding the flight conditions in several places in Europe.  By Sunday, at least four major airlines had conducted their own flight tests and were beginning to compare data and report the same thing: There are many places where it is not safe to fly, but there are *more* places where it is safe to fly and we should work out a way to exploit these places.

What sort of data was prompting aviation and meteorological officials in Europe to keep the sky closed?

Weather RADAR data and satellite imagery.

Weather RADAR data and satellite imagery showed an ash cloud spreading over Europe.  On the face of things, this would prompt most rational thinkers to do what Europe did: progressively shut down the airspace as the cloud made its way across Europe.  (Ash-laden air doesn’t make for good compressible, combustible material in air-breathing engines, not to mention the damage it causes to the works.)  However, since when did “on the face of things” ever really prove to be enough information?

There were two problems with using weather RADAR and satellite imagery both as a basis of determining the impact of the ash, and, as a source of data for making decisions:

  1. It doesn’t give you an accurate sense of proportion or density, and
  2. It can mislead you into seeing a more serious situation than exists.

What does weather RADAR look for?
Moisture: rain, hail, snow, and clouds of water droplets.  Not dry mineral ash.

How deep into an ash cloud can weather RADAR see?
Not far past the outer boundary.

What does ash look like on a weather RADAR?
A solid block of lead.

How much ash density does it take for weather RADAR to freak out and “see” a massive block of solid ash?
Not much at all.

OK, so now that we’ve established that weather RADAR alone isn’t a sufficient basis for making decisions, let’s move our discussion to a simpler, but more pervasive, gap in Europe’s air-traffic planning:

They had no established standard for how much ash in the air is enough to shut down commercial aviation and bring businesses, commerce, and economies around the globe to a serious, sputtering stall (not to neglect the stranding of hundreds of thousands of people all over the planet, including myself), putting many plans, deals, and families into a tail-spin.

Even when European agencies did send aircraft into the air to test it, they didn’t know whether the data they brought back was telling them things were safe or unsafe.  They assumed “any ash is bad”.  It wasn’t until the airlines got together with engine and airframe manufacturers to look at the data collected by the airlines themselves, and to use the governments’ meteorological data to come up with “safe” standards for air-ash density, that the collective governments (the last of whom were the UK and Ireland) decided to lift the air ban in a dramatic change of position late on Tuesday evening (European time).
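To make the point concrete: once you have a defined standard *and* real measurements, the decision itself is trivially simple.  Here’s a tiny sketch of that logic.  The threshold and the sector measurements below are made up for illustration; they are not the actual limits the regulators and manufacturers eventually agreed on:

```python
# Hypothetical safe ash-density standard (mg of ash per cubic meter of air).
# Illustrative value only -- not the real agreed-upon limit.
ASH_LIMIT_MG_M3 = 2.0

# Hypothetical flight-test measurements by air-space sector.
measurements = {
    "sector A": 0.3,
    "sector B": 1.1,
    "sector C": 4.7,
    "sector D": 0.0,
}

# With a standard in hand, "open or closed" is a comparison, not a guess.
open_sectors = sorted(s for s, density in measurements.items()
                      if density < ASH_LIMIT_MG_M3)
closed_sectors = sorted(s for s, density in measurements.items()
                        if density >= ASH_LIMIT_MG_M3)

print("open:  ", open_sectors)
print("closed:", closed_sectors)
```

Many places unsafe, *more* places safe — exactly what the airlines’ own flight tests were reporting, and exactly the distinction a blanket closure throws away.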

Let me summarize:

  1. No standards.
  2. Data collected but not meaningful.
  3. Empirical data collected by equipment not intended for how it was being used and interpreted without true insight (literally).
  4. Decisions being made anyway.

What I’ve skipped in this post is all discussion of contingency planning, continuity planning, and the challenges of communicating across dozens of countries, laws, and decision-making structures, much of which was gummed up throughout this mess for lack of thinking things through before the ash hit the turbo-fans.

The focus here, however, is on one crucial point.  Forget planning, because none of it would have mattered:

Europe was making decisions without data.

Are you?