Archive for the ‘Discipline’ Category

SEPG North America 2013: Why You Want to Be There!

Thursday, August 22nd, 2013

Why Do You Want to Be There?
This year, the conference is significantly re-orienting itself towards END USERS. Previous SEPG conferences had a lot of useful information, especially for experienced change agents and consultants in the field.

This year, the focus is on up-and-coming disciplines, established success strategies, and most importantly, direct business performance benefit of using CMMI. In fact, what we’ve seen over the years is that CMMI is working extremely well with other forms of improvement as well as with existing defined service delivery and product development approaches — whether agile, lean, traditional, customer-focused, innovation-focused, or some combination.

CMMI provides a specific framework that is both a way to focus attention on specific needs while also benchmarking progress. Instead of flailing around trying to find where to put improvement energies, or waiting for a long-term traditional approach of process exploration and decomposition, CMMI takes a lot of the guesswork out by leveraging decades of experience and laying out very specific goals to seek to improve performance.

CMMI users have reported order-of-magnitude productivity increases, double-digit cost reductions, and the ability to cut through thick process jungles far more quickly than if they had been left to their own devices.

Yes, I’m speaking and presenting at SEPG 2013, but that’s the least relevant reason to attend. Come because you want to see what others are doing to marry CMMI with existing (or new to you) concepts; come because you want to hear from other end-users what they’re doing with CMMI to improve performance. And, most of all, come because you want to get and stay ahead of your competitors who aren’t using CMMI nearly as effectively as you will after attending.

SEPG North America: The CMMI Conference is coming soon, but there is still time to register.

This year’s conference program will include content perfect for you if you are:

  • Beginning to implement CMMI, or considering doing so
  • Seeking resources and best practices for integrating CMMI and Agile practices
  • Interested in taking your process improvement game up a level
  • A fan of rivers, boats, bridges, or baseball!

Check out the conference agenda, and when you register, enter the promotional code "Entinex" to save $100 on your fee. (Or just click this link and the discount will be applied for you.)

Book before September 1st to get a discount on your hotel room, as well.

Get the details on the website, and email with any questions.

Performance and Change

Sunday, August 7th, 2011

Over the past weeks I’ve come in touch with several companies with the exact same challenge.  Though, to be sure, it’s nothing new.  I encounter this challenge several times each year.  Perhaps, following Pareto’s principle, 80% (or more) of the companies coming to me for improved processes share a variant of a form of distress that accounts for no more than 20% (or less) of the possible modes of distress.

In particular, the challenges are variants of a very basic problem: they want things to change but don’t have an objective performance capability to aspire towards.  Put another way, they can’t articulate what it is that their operation cannot currently accomplish that they’d like their operation to be able to do once the changes are put in place.

I’ve mentioned “SMART” objectives before.  Here’s another application of those same objectives, only now, they show up at a higher level within the organization. 

Choose the right objectives.

Executives of the organization often confuse “SMART” objectives with “fuzzy” objectives.  By “fuzzy” I mean objectives that appear to be “SMART” but aren’t; the fuzziness obscures the situation so as to render truly uninspiring objectives as substantial accomplishments.  In fact, fuzzy objectives:

  • are not actually objective (they lack a solid way to measure accomplishment),
  • are easily “gamed” (data or circumstances can be manipulated),
  • sit deep within the comfort zone, or the opposite, are ridiculously unreachable (achievement is too easily attained, or too easily excused when not attained),
  • are indicators of task completion rather than of actual outcome changes (they don’t actually achieve anything but give the appearance of making progress),
  • aren’t tied to actual increased capabilities or performance (they don’t cause anything to change that anyone cares about), or
  • are dubious achievements that can be reached by simply “rowing faster” (working harder by working longer hours, or assuming too much risk or technical/managerial debt).

These same “fuzzy” objectives are frequently couched in deceptively goal-esque achievements such as achieving a CMMI rating, or “getting more agile”, or getting ISO 9000 registered.  What I noticed among the recent crop of companies with these issues is that they shared a particular set of attributes.  They were after “improvements” but didn’t know what they wanted these “improvements” to enable them to do in the future that their current operation was preventing them from accomplishing.  Sure, as in the case of CMMI, achieving the “objective” of a rating would enable the company to bid on work they currently can’t bid on, but that’s a problem addressed in two separate posts (here and here) from a while ago.

Digging a little further, I uncovered a more deeply-seated challenge for these same companies.  In each case, they could not articulate what they actually wanted to be when they “grow up”.  Closely related to being unable to explain the performance their current operation precluded, they also couldn’t say whether they wanted their company to be leaders in:

  • product innovation,
  • operational excellence, or
  • customer intimacy.

According to Michael Treacy and Fred Wiersema, in The Discipline of Market Leaders, every company must decide the ordering of the above three values and how to organize and run the company to pursue the value they’ve chosen as first, followed by the second, etc.  Furthermore, and most tellingly, leaders in the companies I visited were having serious issues, sometimes in more than one area: delivery, quality, scaling, proposal wins, proposal volume, cost pressure, and so on.  In none of the distressed companies were they looking at the performance capabilities of their operation.  And, in none of the companies did they have metrics that gave them insight into the performance of their operation or helped them make decisions about what to change or how.  In other words, they weren’t connecting their challenges to their lack of performance, or to the role their operational system of processes plays in that performance.
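To make the missing piece concrete: the sketch below (Python; the metric names and figures are entirely hypothetical, not from any of the companies described) shows the minimal sort of operational metrics these companies lacked — numbers that connect operational performance to a decision about what to change.

```python
# Hypothetical monthly records for a delivery organization.  None of these
# figures come from the companies discussed; they only illustrate what
# "insight into the performance of the operation" might look like.
records = [
    {"month": "Jan", "proposals": 12, "wins": 3, "deliveries": 5, "late": 2},
    {"month": "Feb", "proposals": 10, "wins": 2, "deliveries": 6, "late": 3},
    {"month": "Mar", "proposals": 15, "wins": 3, "deliveries": 4, "late": 3},
]

def win_rate(rs):
    """Proposal win rate across the whole period."""
    return sum(r["wins"] for r in rs) / sum(r["proposals"] for r in rs)

def on_time_rate(rs):
    """Fraction of deliveries that were not late."""
    total = sum(r["deliveries"] for r in rs)
    return (total - sum(r["late"] for r in rs)) / total

print(f"win rate: {win_rate(records):.0%}")
print(f"on-time:  {on_time_rate(records):.0%}")
```

Even two numbers like these point at which lever to pull (proposal process versus delivery process) — which is exactly the connection the distressed companies couldn’t make.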

One thing that could help these companies climb out of the mud they’re in would be to simply and clearly define how they’d like to be able to perform in ways their current operations don’t facilitate, and to define this capability in terms that represent an actual shift in how the operation functions.  Change for improved performance is not about adding more work, adding more bureaucracy, or making people work harder.  Often, “working smarter” is easy to say but lacks substance.  “Working smarter” actually shows up as changes in the operational and managerial systems that carry out the performance of the operation.  A company that wants to perform better doesn’t need to add more work, or crack the whip louder; it needs to change how its operation runs.

More about this is in my upcoming book, High Performance Operations, available now for pre-order and due out at the start of October 2011 or earlier.

Counting Change

Monday, May 2nd, 2011

A more appropriate title would have been "counting changes" but it would have hardly been as interesting.  :-)

Change happens.  And often.

In particular, when a product is in its operation and maintenance ("O&M") phase, changes are constant.  (Note: O&M is frequently called "production", and this simple choice of words may also be part of the issue.)  But, too often, changes to products are handled as afterthoughts.  When handled as "afterthoughts", product features and functions receive far less discipline and attention than the magnitude of the change warrants: far less than the same feature or functionality would have received had it been introduced during the original product development phase.

In other words, treating real development as one would treat a simple update, just because the development is happening while the product is in production, is a mistake.  However, it’s a mistake that can be easily defused and reversed.

O&M work involves technical effort.  Just because you’re "only" making changes to existing products that have already been designed does not mean there aren’t design-related tasks necessary to make the changes in the O&M requests.  The fact that you didn’t do the original design, or that the original (lion’s share of) design, integration, and verification work was done a while back, doesn’t mean you don’t have engineering tasks ahead of you; ignoring the engineering perspective of the changes doesn’t make that work go away.

In O&M, analysis is still needed to ensure there really aren’t more serious changes or impacts resulting from the changes.  In O&M, technical information needs to be updated so that it stays current with the product.  In business process software, much of the O&M has to do with forms and reports.  Even when creating or modifying forms, while there may not be any technical work per se, there is design work in the UI: the form or report itself.  And even if you didn’t do that UI design work, you still need to ensure that the new form can accept the data being rendered to it (or vice versa: that the data can fit into the report).
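That last check can be surprisingly mechanical.  As a hedged illustration (Python; all field names are invented for the example, not drawn from any real system), the sketch below compares the fields a changed form expects against the fields the data source actually supplies:

```python
# Hypothetical sanity check that a changed form or report can still accept
# the data rendered to it.  Field names are illustrative only.
form_fields = {"customer", "invoice_no", "amount", "due_date"}
record_fields = {"customer", "invoice_no", "amount"}  # data source changed

missing = form_fields - record_fields  # form slots no data will fill
extra = record_fields - form_fields    # data the form silently drops

print("unfillable form fields:", sorted(missing))
print("dropped data fields:", sorted(extra))
```

A mismatch in either direction is exactly the kind of design-level impact that "it's only a form change" thinking lets slip through.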

It’s frightening, when you think about it, how much of the products we use every day — and many more products that we don’t know about that are used by government and industry 24/7 — are actually "developed" while in the "O&M" phase of the product life cycle when the disciplines of new product development are often tossed out the door with the packing material the original product came in.  Get that?  Many products are developed while in the "official" O&M phase, but when that happens they’re not created with the same technical acumen as when the product is initially developed.

(I have more on this topic, and how to deal with business operations for products in the O&M phase, in this Advisor article from the Cutter Consortium.)

In a sadly high number of operations I’ve encountered, once a product is put into production, i.e., is in O&M, the developers assigned to work on it aren’t top-notch.  Even in those organizations where such deleterious decision-paths aren’t chosen, the common experience is that the developers are relied upon even more for their intimate knowledge of the product and its functionality — knowledge that would otherwise have been captured in designs, specifications, tests, and similar work artifacts of new product development.  In these organizations, the only way to know the current state of the product is to know the product.  And, the only way to fix things when they go wrong is to pull together enough people who retain knowledge of the product and sift through their collective memories.  The common work artifacts of new product development are frequently left to rot once the product is in O&M, and what’s worse is that the people working on the new/changed features and functionality don’t do the same level of review or analysis that would have been done had the functionality or other changes been in work when the product was originally developed.  Of course, it’s rather challenging to conduct reviews or analysis when the product definition only exists as distributed among people’s heads.  Can you begin to see the compounding technical debt this is causing?

I’ve actually heard developers working on legacy products question the benefits of technical analysis and reviews for their product!  As though their work is any more immune to defect-inducing mistakes than the work of the new product developers.  What’s worse is that without the reviews and analyses, defect data to support such a rose-colored view seldom exists!  It’s entirely likely, instead, that were such data about in-process defects (e.g., mistakes in logic, design, failing to account for other consequences) to be collected and analyzed, it would uncover a strong concentration of defects resulting from insufficient analysis that should have happened before the O&M changes were being made.
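As a hedged sketch of what collecting that defect data might reveal (Python; the records, phase names, and counts are all hypothetical, invented purely for illustration), tallying in-process defects by the phase that introduced them and by where they were found is enough to expose a concentration of analysis-phase escapes:

```python
from collections import Counter

# Hypothetical defect records: (defect_id, phase_introduced, phase_found).
# In a real organization these would come from the review and analysis
# records that this post argues O&M work should be producing.
defects = [
    (1, "analysis", "field"),
    (2, "design", "verification"),
    (3, "analysis", "field"),
    (4, "coding", "review"),
    (5, "analysis", "verification"),
]

# Where are defects being introduced?
introduced = Counter(phase for _, phase, _ in defects)

# Which defects escaped all the way to the field?
escapes = Counter(phase for _, phase, found in defects if found == "field")

print(introduced.most_common())  # "analysis" dominates in this made-up data
print(escapes)
```

If the real data looked anything like this invented sample, it would argue directly against the rose-colored view: the defects trace back to analysis that didn’t happen before the O&M changes were made.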

Except in cases where O&M activities are fundamentally not making any changes to form, fit, feature, appearance, organization, integrity, performance, complexity, usability  or function of the product, there should be engineering analysis.  For that matter, what the heck are people doing in O&M if they’re not making any changes to form, fit, feature, appearance, organization, integrity, performance, complexity, usability or function of the product?!

If anyone still believes O&M work doesn’t involve engineering, they might need to check their definition of O&M.  Changes to the product are happening, and they’d better be counted; otherwise, organizations fool themselves into believing their field failures aren’t related to those changes.  Changes count as technical work and should be treated as such.

(I have more on this topic, including how to help treat O&M and development with more consistent technical acumen, in this Advisor article from the Cutter Consortium.)