So much of software development in particular (and product development in general) as a business has less to do with technology than with keeping customers happy. What do customers really care about? While they say they want their product on time, on budget, and doing what they asked it to do, most of the time managing their expectations has little to do with time, cost, features, functions, or quality. What they experience is more about how the developer treats them as a customer. In other words, customers react to what they perceive as the developer’s business as a service.
Of course, customers aren’t typically happy when the product is late, doesn’t do what they need it to do, and/or costs more than they were expecting to pay – scope creep notwithstanding. Be that as it may, agile development and management practices recognize the importance of customer involvement (and all stakeholders, in general). In fact, while the “traditional” development and management world has long espoused the importance of an integrated team for product and process development, it’s the agile development and management movement that actually made it work more smoothly with more regularity.
(Before anyone from the “traditional” development camp jumps down my throat, keep in mind: I came from the traditional camp first. I saw attempts at Integrated Product and Process Development (IPPD), and I saw how difficult it was to get it going, keep it working, and eliminate the competition and other organizational stress that IPPD continues to experience in the traditional market. And I’m not saying it doesn’t work in traditional settings, just that it worked much better, much faster, and with much more regularity in agile settings.)
From the beginning, agile practices understood the importance of the customer and of being a service to the customer. Kanban (more recently) even refers to different types of work as “classes of service”. In fact, if we look at the most common pains in development work (e.g., staffing, time, agreement on priorities and expectations), we see that it’s seldom technology or engineering issues. They’re issues more aligned with the developers’ abilities to provide their services.
[NOTE: For the remainder of this post, I’m going to assume the development operation actually knows its technology and knows what real engineering development looks like. This is a big assumption, because we all know that there are development operations a-plenty whose technical and engineering acumen leave much to be desired.]
Let’s now look at another important facet of all development, agile or otherwise. Much of it happens after the initial product is released! Once the product is released, there is precious little actual development going on. The ongoing support of the product includes enhancements and other updates, but very little of that work requires any engineering! Furthermore, what gets worked on comes in through a flow of requests, fixes, and other (very often unrelated) tasks.
After a product has been released, the operation of a development shop resembles a high-end restaurant far more closely than a production floor. Once the menu has been “developed”, from that point forward, patrons merely ask for items from the menu and for modifications to items on the menu. Even were there to be a “special order” of something not at all on the menu, the amount of “development” necessary to “serve” it is minimal. And when something truly off-the-wall is requested, the chef knows enough to respond with an appropriately apologetic, “Sorry, we can’t make that for you right now. Please let us know in advance and we’d be happy to work something up for you.” At which point, they would set about developing the new product.
Meanwhile, the vast majority of the work is actually just plugging away at the service. In the service context, development is often not the majority of the work. In that context, engineering plays an important role much less often than the ability to deliver services, manage transition of services, ensure continuity of service, handle incidents, manage resources, and so on.
What does this mean for agile teams, and, what does this have to do with CMMI?
Well, maybe much of the perceived incompatibility between CMMI (for Development) and agile practices is due not to incompatibilities between CMMI and agile, but to incompatibilities between the business of agile and the improvement of development. In other words, maybe the perceived incompatibilities between CMMI and agile arise because CMMI for Development (CMMI-DEV) is meant to improve development, and many agile teams aren’t doing as much development as they are providing a service. Perhaps the business models presumed by the two approaches simply aren’t aimed at making progress in the same way.
When agile teams are doing actual development, CMMI-DEV should work well and can help improve their development activities. But, agile teams are often not doing development as much as they are providing a service. They establish themselves and operate as service providers. Most of the agile approaches to development are far more aptly modeled as services.
CMMI for Services defines services as follows*:
*Glossary CMMI® for Services, Version 1.3, CMMI-SVC, V1.3, CMU/SEI-2010-TR-034
Many requests made of agile teams have more to do with supporting a product than developing one. While the product is still under development, then, by all means, CMMI for Development is apropos. But after the initial development (where more product-oriented money is spent), the development is hard to see and harder to pin down.
Maybe, improving development is not the right thing to develop. Perhaps agile teams could look at how they handle “development as a service” for their improvement targets. Maybe CMMI for Services is a much better fit for agile teams.
Could a switch from CMMI-DEV to CMMI-SVC benefit agile teams? Could a switch from CMMI-DEV to CMMI-SVC make achieving CMMI ratings easier and more meaningful?
I believe the answer to both is a resounding: ABSOLUTELY!
ATTENTION AGILE TEAMS: You need a CMMI rating? Look at CMMI for Services. It might just make your lives easier and actually deliver more value right now!
[NOTE: I have an essay, Are Services Agile?, in this book on this topic. Since you can “look inside” you might be able to read it without buying it. Furthermore, the essay has been published online in some places. You might be able to find it out there.]
Why I wrote High Performance Operations…
This entry is a near-duplicate of the entry on my new site, hillelglazer.com. That site and its blog take a broader perspective, of which this blog is a subset. The launch of that site and blog coincides with the publication of my new book, High Performance Operations: Leverage Compliance to Lower Costs, Raise Profits, and Gain Competitive Advantage.
I figured I’d explain how and why I ended up writing the book.
What you see behind me is my office. This is where I “shoot” most of my blog entries, but I also shoot plenty of video from the field: at conferences, at client sites, even from my car, especially when the ideas are fresh in my mind and timely. My entries are rarely polished, seldom rehearsed, and typically WYSIWYG. Me.
I try to keep my content to topics that impact what we’re trying to do, and with this site, blog, and book, we’re trying to effect change. We’re trying to help operations avoid having compliance issues drag them down in directions contrary to their desired pursuits. We do this through several aspects of lean and excellence.
All things being equal, competing operations with similar market standings will end up fighting over a few percentage points in the noise. We want to help operations stand apart from the crowd by operating at a level of performance their competition won’t try to reach. One way to do this is to minimize, if not eliminate, the burden of compliance on the operation. It just so happens that what it takes to do this also has an immediate and long-lasting positive impact on the entire operation, compliance or otherwise.
I hope you join in the discussion over there and find value in your participation.
A friend who consults in program, project and risk management (typically to parachute-in and save wayward technology projects) is working with a client whose project is dreadfully behind schedule and over budget, and, not surprisingly, has yet to deliver anything the end-client or their users can put their hands on. It doesn’t help that his client isn’t actually known for being technology heroes. In fact, this is not the first time his client has tried to get this project off the ground.
Looking everywhere but in the mirror, my buddy’s client decided to put the developer under a microscope. After all, reasoned the client, they had hired the developer based on, among other attributes, its touting a rating of CMMI Maturity Level 3! So they had the developer and the product undergo a series of evaluations (read: witch hunts), including a SCAMPI (CMMI) appraisal. Sadly, this tactic isn’t unusual.
Afterwards, trying to help his client make sense of the results, my pal asked me to review the report of the appraisal which was fairly and (more or less) accurately performed by someone else (not us). The appraisal was quite detailed and revealed something very interesting.
The development organization nonetheless failed to demonstrate the necessary performance of the Maturity Level 3 (ML3) practices they claimed they operated with! In other words, they had processes, but they were still not ML3! In fact, they weren’t even Maturity Level 2 (ML2)!
How could this be?
While the details bore some very acute issues, more interesting were the general observations easily discernible from far away with little additional digging. The appraisal company created a colorful chart depicting the performance of each of the practices in all of ML3. And, as I noted, there were important practices in particular areas with issues that would have precluded achieving ML2 or ML3; but what was more interesting were the practices that were consistently poor in all areas, as well as the practices that were consistently strong in all areas.
One thing was very obvious: the organization did, in fact, have many processes, most of the processes one would expect to see from a CMMI ML3 operation. And, according to the report, they even had tangible examples of planning and using their practices.
What could possibly be going on here?
It certainly seems like the development group had and used processes. How could they not rate better than Maturity Level 1 (ML1)?! Setting aside the specific gaps in some practices that would have sunk their ability to demonstrate anything higher than ML1 (because this isn’t where the interesting stuff shows up, and because even had those practices been performed, they still would have rated under ML2), what the report’s colorful depiction communicated was something far harder to address than specific gaps: the developers’ organization was using CMMI incorrectly. A topic I cover at least in the following posts: here and here.
In particular, they were using CMMI to “comply” with their processes but not to improve their processes. And, *that* is what caused them to fall far short of their acclaimed CMMI ML3.
How could I tell?
Because of where the practices were consistently good and where they were consistently weak.
I was reviewing the report with my friend on the phone. As I was doing so he commented, “Wow! You’re reading that table like a radiologist reads an X-ray! That’s very cool!” The story the chart told me was that despite having processes, and policies, and managing requirements and so on, the company habitually failed to:
track and measure the execution of their processes to ensure that the processes actually were followed through as expected from a time and resource perspective,
objectively evaluate that the processes were being followed, were working, and were producing the expected results, and
perform retrospectives on what they could learn from the measurements (they weren’t taking) and evaluations (they weren’t doing) of the processes they used.
It was quite clear.
So, here’s the point of today’s post… it’s a crystal-clear example of why CMMI is not about process compliance, and how that shows up. There are practices in CMMI that definitely help an organization perform better. But if the practices that are there to ensure that the processes are working and the lessons are being learned aren’t performed, then the entire point of following a process has been lost. In Shewhart’s cycle, this would be akin to doing Plan and Do without Check and Act.
Following a process that way gives you, at best, compliance. There’s no chance for improvement that way except by accident.
CMMI is not about “improvement by accident”. (Neither is Agile for that matter.)
Interestingly enough, while there were clearly issues with the developer’s commitment to improvement, there may not necessarily have been any clear issues with either the product or the results of their processes. While the experience may not have been pleasant for the developer, I don’t know that my buddy’s client can claim to have found a smoking gun in their supplier’s hands. Maybe what the client needs is a dose of improving how they buy technology services – which they might find in CMMI for Acquisition.