Archive for the ‘Agile’ Category

Verification, Validation, & the iPhone 4

Wednesday, July 7th, 2010

Apple, Inc. learned the hard way what happens when engineering isn't complete, in particular when verification and/or validation aren't performed thoroughly.

Verification is ensuring that what you're up to meets requirements.  "ON PAPER."  BEFORE you commit to making the product.  It's the part where you do some analysis to figure out whether what you think will work will actually do what you expect it to do.  Such as walking through an algorithm or an equation by hand to make sure the logic or the math is right.  Or stepping through some code to see what's going on before you assume it's behaving.  Just because something you built passes tests doesn't mean it is verified.  All passing tests means is just that: you passed tests.  Passing tests assumes the tests are correct.  If you're going to rely on tests instead of verifying the requirements or the design, then the tests themselves need to be verified.  Another problem with tests is that too many organizations only test at the end.  Verification looks a lot more like incremental testing.  Hey wait!  Where've we seen that sort of stuff before?
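
To make that concrete, here's a deliberately contrived sketch in Python.  Nothing in it is Apple's actual code; the requirement, thresholds, and names are all made up for illustration.  The test passes, yet the code doesn't meet the (hypothetical) requirement, because the test was written from the same flawed assumption as the code.  Only walking the thresholds against the written requirement, on paper, would catch it.

```python
# Hypothetical illustration only -- not Apple's actual algorithm.
# Assumed requirement for this sketch: show 5 bars only when the signal
# is stronger than -75 dBm, 4 bars down to -85 dBm, and so on.

def bars_for_signal(dbm):
    """Map received signal strength (dBm) to a 1..5 bar display."""
    # Bug: these thresholds are far more generous than the written
    # requirement above, so weak signals still show 5 bars.
    if dbm >= -91:
        return 5
    if dbm >= -101:
        return 4
    if dbm >= -103:
        return 3
    if dbm >= -107:
        return 2
    return 1

def test_bars_for_signal():
    # This test "passes" because it was written from the same flawed
    # threshold table as the code, not from the requirement.
    assert bars_for_signal(-89) == 5
    assert bars_for_signal(-100) == 4

test_bars_for_signal()  # passes, yet the code does not meet the requirement
```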

Had Apple's verification efforts been more robust, they would have caught the algorithm error that incorrectly displays the signal strength (a.k.a. "number of bars") on the iPhone 4.  This is why peer review is so central to most verification steps.  The purpose of peer review, and of verification, is to catch defective thinking.  OK, that's a bit crude and rude… it's not that people's thinking is defective, per se, but that one person's thinking alone doesn't catch everything, which is why we like to have other people look at our thinking.  Even Albert Einstein submitted his work for peer review.

Validation is ensuring the product will work as intended when placed in the users' environments.  In other words, it's as simple as asking, "when real users use our product, how will they use it, and will it work like we (and they) expect it to work?"  Sometimes this isn't something that can be done on paper and you need some sort of "real" product, so you build a prototype.  Just as often it's not something that can be done "for real," because you don't get the opportunity to, say, take your product into orbit before it has to work in orbit.  Sometimes you only get one shot, so you do what you can to best approximate the real working environment.  But neither of these extremes excuses Apple from validating whether the phone will work as expected while being held by a user making calls.

Had Apple's validation been operating on all bars, they likely would have caught this in the lab.  While the phone sat in its sterile, padded vise, in some small anechoic chamber, after great care was taken to ensure there were no unintended signals and nothing metallic touching the case, someone might've noticed, "gee, do you think our users might actually make calls this way?"  And instead of responding, "that's not what we're testing here," someone might've stepped up and said, "hey, does our test plan have anything in it where we run this test while someone's actually using the phone?"

Again, testing isn't enough.  Why not!?  After all, isn't putting it in a lab, with or without someone holding the phone, a test?  True…  However, I go back to the same issue we saw when using testing as the primary means of performing verification: testing is too often at the end.  Validating at the end is too late.  You need to validate along the way.  In fact, it's entirely possible that Apple *did* do validation "tests" of the case separately from the complete system, and in *those* tests, where the case/antenna were mere components being tested in the lab, everything performed fine; only when the unit was assembled and tested as a complete system would the issue have been found.  In such a scenario we learn that component testing (elsewhere known as "unit testing") is not enough.  We also need system testing (in the lab) and user testing (in real life).  Back we go to iterative and incremental…
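
Here's an equally contrived sketch of that scenario, again with invented numbers and names rather than anything from Apple's labs: the component check passes, the system check in the sterile lab posture passes, and only the system check that accounts for a user's hand exposes the problem.

```python
# Hypothetical sketch with made-up numbers -- illustrative only.

ANTENNA_GAIN_DB = 2.0          # component spec: antenna gain in isolation
HAND_BRIDGE_LOSS_DB = 20.0     # assumed extra loss when a hand bridges the gap
MIN_USABLE_SIGNAL_DBM = -107   # assumed weakest signal a call can survive

def component_test():
    # "Unit test": the antenna alone meets its spec in the lab fixture.
    return ANTENNA_GAIN_DB >= 2.0

def system_test(ambient_dbm, held_in_hand):
    # System-level check: assembled phone, realistic usage posture.
    received = ambient_dbm + ANTENNA_GAIN_DB
    if held_in_hand:
        received -= HAND_BRIDGE_LOSS_DB
    return received >= MIN_USABLE_SIGNAL_DBM

print(component_test())                      # True  -- component passes
print(system_test(-95, held_in_hand=False))  # True  -- lab posture passes
print(system_test(-95, held_in_hand=True))   # False -- real usage fails
```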

So you see… we have a lot we can apply from ordinary engineering, from agile, and from performance improvement.  Not only does this… uh… validate(?) that "agile" and "CMMI" can work together, but also that in many situations there's something to be learned from applying both.

In full disclosure, as a new owner of an iPhone 4, I am very pleased with the device.  I can really see why people love it and become devotees of Apple’s products.  Honestly, it kicks the snot out of my prior “smart” phone in every measurable and qualitative way.  And, just so I’m not leaving anything out, the two devices are pretty much equally balanced in functionality (web, email, social, wifi, etc.)  – even with the strange behaviors that are promised to be fixed.  For a few years, this iPhone will rule the market and I’ll be happy to use it.

Besides being embarrassing, these will be expensive engineering oversights for Apple to fix.  And they were entirely avoidable with an up-front investment in engineering at an infinitesimal fraction of the cost and time it will take to fix them now.  For less than one day of their engineering and deployment team's salary, AgileCMMI can make sure this never happens again.

Apple, look me up.  I’m easy to find.

Truly Agile CMMI

Thursday, May 27th, 2010

The team room of a truly lean/agile company doing CMMI in a way that is natural and authentic to them.  They do CMMI in an agile way; they know no other way to do it.  They went from "what is CMMI?" to Maturity Level 2 in 14 weeks.  Their commitment to lean gave them an edge many companies wish they had: a culture of value and excellence.

What does "truly agile CMMI" look like?

Well, it looks like a commitment to adding value, for one.  It looks like delivering incrementally and using each incremental deliverable to iterate, learn, reflect, and continuously integrate into the whole.

It looks like questioning everything you don't understand until you do, and then basing decisions on what will provide the most benefit without adding unnecessary features, functions, or work.  It also looks like being true to your collaborative nature, to your culture of learning, to your behaviors of communication and transparency.  It looks like using measures to know where you are and how well you're doing.  It looks like a commitment to never doing anything just for the sake of doing it: either it has a benefit you can reap, or it's not done.  It looks like building practices into what you do in a way that eliminates the need for waste-riddled, ceremonial audits later.

When every effort has a purpose that you can tie to a business benefit; when every task delivers something someone needs or wants; when you create a system that people want and use, that you don't have to pull teeth to get people to adopt and give you feedback on; a system that not only flows with your natural ways of working but promotes new ideas and new ways of changing your work regularly, and distributes those ideas to everyone who wants to know… when not a single result of some effort exists whose only reason to exist is to provide evidence for an appraisal…

*THAT’S* what truly agile CMMI looks like.

It’s not just in the processes that result from using CMMI, but also in the manner in which those processes were created.

You don't "do CMMI" in an agile way when you're a stodgy, traditionally-oriented organization, and you don't achieve an agile CMMI when your implementation approach is traditional.  If you're an agile organization, incorporate CMMI in an agile way.  Don't abandon agile values and principles to implement CMMI.  Exploit your agile values and principles to implement CMMI in a kick-ass way.

CMMI in an agile way, an agile approach to CMMI, and a seamless blending of CMMI with agile approaches don't happen (easily) if your approach to AgileCMMI isn't itself lean and agile.

Proper and Improper Use of CMMI

Tuesday, February 2nd, 2010

Just a few thoughts on some questions to pose as a sort of "guide" to whether or not you might expect benefits and value from using CMMI.  These also have the benefit of helping CMMI be implemented in a more lean/agile way.

When implementing CMMI, are you seeking . . .

  • Improvement or Compliance?
  • Empowerment or Definition?
  • Clarity & Awareness or Constraints & Rigidity?
  • Bottom-up input or Top-down direction?
  • To understand whether what you're doing is working, or whether you're doing what the process says?

In this case, we also value the things on the left more.

:-)

The things on the right are a longer road, with questionable benefits and many risks.  The things on the left get you to benefits and value sooner with less carnage and baggage.

Take your pick.