Author Topic: Mitigating risks with Quality Check Points

admin

  • Administrator
  • Full Member
  • Posts: 124
Mitigating risks with Quality Check Points
« on: November 04, 2015, 11:50:36 am »
Has quality assurance fallen to the bottom of your priority list? Learn why it should be top of mind at every stage of development.

By Jeffery Gainer

It's the sordid little secret of many IT development organizations: with tight schedules, tighter budgets, and projects growing in both scope and complexity, the deadline too often becomes the all-important goal. From the boardroom executives on down to the line managers in the development cubicles, little if any forethought is given to quality. The mission is clear: Get to market first and fast. Fix it later. Catch the bugs in testing after the coding is mostly done.

"We're too busy for process improvement" is a frequent refrain. Occasionally, well-meaning management might resort to a slogan like "Quality Is Job One." The reality, though, is that quality becomes Job 1.1.

Not only does management lack insight into the process; in the trenches, at the developer level, there is no clear focus on quality assurance either. Testing may certainly be carried out, usually at the end of the development work, but testing by itself does nothing to ensure the quality of the software product.

Disturbing study results and statistics supporting these observations abound. The central problem is apparent: all too often, according to Carnegie Mellon University's Software Engineering Institute, software gets delivered on time only by way of overtime and individual heroics. Defined processes may well exist, but when a team faces a tight deadline, a crisis, or even simple ambiguity, those processes are often forgotten or forgone under the overwhelming pressure to deliver on that all-important deadline.

When teaching and mentoring clients on how to improve their development process, I shift much of the focus from the development processes to the quality processes. Whether a shop uses a traditional waterfall process or an "extreme" or "agile" approach is largely irrelevant, both in theory and in hands-on practice. A core practice of quality assurance is examining and analyzing the entire process by which a product is conceived, defined, built, and delivered to the customer; the development life cycle in use is not a key factor. Quality assurance practices can be built into any development approach, but they must be matched carefully to your organization's way of working, regardless of any formal framework such as CMMI, Six Sigma, or best-practice analysis. The approach I teach and mentor clients in is what I call identifying and exploiting "Quality Checkpoints."

Rather than focusing on whatever development framework your organization may already have in place, the idea is to identify quality checkpoints at key deliverables. Look for points where a deliverable (requirements, use cases, designs and prototypes, even mid-point customer demos) can be evaluated objectively. A quality checkpoint is any opportunity in the development process, as broadly defined above, for the identification and removal of defects.
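To make the idea concrete, here is a minimal sketch in Python of one way a quality checkpoint might be modeled: a deliverable paired with objective, pass/fail criteria. The class and the two sample criteria are hypothetical illustrations of the concept, not part of the article or of any formal framework.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class QualityCheckpoint:
    # A checkpoint ties a named deliverable to objective checks that pass or fail.
    deliverable: str
    criteria: List[Callable[[str], bool]] = field(default_factory=list)

    def evaluate(self, artifact: str) -> List[bool]:
        # Run every objective check against the artifact; any False is a defect
        # to be identified and removed before the deliverable moves on.
        return [check(artifact) for check in self.criteria]

# Example: a checkpoint on written requirements (sample criteria only).
requirements_checkpoint = QualityCheckpoint(
    deliverable="requirements",
    criteria=[
        lambda text: len(text.strip()) > 0,   # not empty
        lambda text: "TBD" not in text,       # no unresolved placeholders
    ],
)

print(requirements_checkpoint.evaluate("The system shall export reports as PDF."))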

It is important to understand that a defect is not simply a "bug" in code. A defect can be an incomplete, vague, or incorrect requirement. A defect can be a failed test, but a test that is itself vague or poorly defined is also a defect. A defect can even reside in how your sales or business teams communicate with customers or subject-matter experts. Identifying quality checkpoints is therefore not only a concern for subject-matter experts, business analysts, or technical teams; it is also an invaluable opportunity for senior management to gain greater visibility into the organization's development and quality processes.

Some of these quality checkpoints may be apparent already (functional testing and acceptance testing, for example), but there are far more opportunities earlier in the development life cycle, when defects are less difficult and less costly to correct. Cost-benefit studies have demonstrated that the cost of fixing a faulty requirement late in the development process, as opposed to early in the process, can vary by a factor of 200 to 1. For example, if a defect can be identified and corrected during the requirements phase for a cost of, say, $500, then the cost of correcting the same defect after it has metastasized into the deployment phase rises to $100,000, and that is the cost of a single bug!
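The arithmetic behind that figure is straightforward. The short sketch below uses only the numbers cited above, a $500 requirements-phase fix and the 200-to-1 escalation factor; the function name is chosen purely for illustration.

def fix_cost(base_cost: float, phase_multiplier: float) -> float:
    """Cost of correcting a defect once it has escaped to a later phase."""
    return base_cost * phase_multiplier

base = 500.0                  # $500 fix during the requirements phase
print(fix_cost(base, 1))      # 500.0    -- caught at the requirements checkpoint
print(fix_cost(base, 200))    # 100000.0 -- the same defect found in deployment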

In this scenario, correcting the faulty requirement might be a matter of convening an ad-hoc meeting with a few subject-matter experts and members of the teams who write the business scenarios (also known as use cases or user stories) to examine the requirement and correct or clarify it. The choice, then, is between dedicating a few person-hours in a meeting to fix a written requirement, or deploying the same personnel plus a small army of developers and testers, another team to package and distribute a fix, and still more staff to apologize to customers, and, in a truly worst-case scenario, calling a press conference to explain the entire mess to the media and stockholders!

Let's look at a simple example of how defining and implementing a quality checkpoint can prevent the expensive scenario above. Typically, at the end of the requirements definition phase, the written requirements are passed to another business unit for approval. The members of the business unit review the requirements, deem them satisfactory, and sign off on them. The requirements are then passed on to the next stage of development. The criteria for approval may be fairly well defined, but even the typical approval criteria do not carry the full rigor of a quality checkpoint. A true quality checkpoint, as I define it (and have worked with clients to define and implement), tests the requirements.

Testing a requirement is a relatively straightforward task. Is the requirement complete? Is it correct? And is it testable; that is, when translated into software, will it reduce to a binary (true/false) test? This brief example illustrates how injecting another level of rigor, in this case testing, early in the project life cycle helps prevent a defect from being passed up the line, where it inevitably becomes more complex and costly to remediate.
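As a rough sketch of what such a requirement test might look like if partially automated, the Python snippet below screens a written requirement against two of the three questions. The vague-word list and the checks themselves are illustrative assumptions, not a standard, and correctness still requires a subject-matter expert's judgment.

import re

# Illustrative assumption: a few words that usually keep a requirement from
# being reducible to a binary (true/false) test.
VAGUE_TERMS = {"fast", "user-friendly", "appropriate", "robust"}

def test_requirement(text: str) -> dict:
    """Screen a written requirement: completeness and testability can be
    checked mechanically here; correctness is flagged for human review."""
    words = set(re.findall(r"[a-z][a-z-]*", text.lower()))
    return {
        "complete": bool(text.strip()) and "TBD" not in text,
        "testable": words.isdisjoint(VAGUE_TERMS),   # no vague, unmeasurable wording
        "needs_expert_review": True,                 # correctness is a human judgment
    }

print(test_requirement("The system shall respond fast."))
# {'complete': True, 'testable': False, 'needs_expert_review': True}

A screen like this only raises the flag; the checkpoint itself is the act of having people examine and correct the requirement before it moves on.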
