CASE STUDY – Part 4 – Proven, Practical Tactics For Agile IT Release Management (FINAL QUALITY CONTROL)

OVERVIEW:

This article is the fourth in a series of five that explain how an IT organization delivered a release management process that exceeded its management’s expectations and provided a foundation for continued success. The series includes:

  1. How did we get here – THE CONTEXT
  2. First solution steps – DEFINITIONS AND TRIAGE
  3. Intake and Release Planning – THE CORE SOLUTION
  4. Production Change Control – FINAL QUALITY CONTROL
  5. Metrics and Insights – LESSONS LEARNED

SUMMARY:

Many Information Technology organizations flounder when tasked to understand, organize, and implement change to the system and application software serving their clients and end customers over a period of several years. This fourth article focuses on the final quality-control and implementation processes of the solution I developed during the Release Management consulting engagement.

Please refer to the first article – THE CONTEXT – for a full discussion of the problem domain and organization; to the second article – DEFINITIONS AND TRIAGE – for a discussion of the get-ready steps; and to the third – THE CORE SOLUTION – for details on planning releases.

It has been said that the plan is nothing, execution is everything. No Release Management process is complete without the correct steps to implement changes into IT production safely, securely, and with acceptance by the client community. The IT organization still had work to do in this area at the beginning of the consulting engagement. Here is how we strengthened, and ultimately succeeded in, the final quality-control and implementation steps of releases.

OBJECTIVE SETTING:

In the first weeks of the consulting engagement I was encouraged by the progress already made by the Configuration Management team in beginning to exert discipline over what changes got implemented in production and how the source and object code management tool (ClearCase) was being applied to support developers. What I didn’t find were any metrics or objectives for these processes. Were they being applied consistently or not? How could targets in this area be set? Even the best processes in the world are only as good as their execution. Based on interviews, it seemed that about 80% of the changes that went into production were going through the Configuration Management team, and perhaps 20% were still being done “programmer-direct”. A lot of reasons (excuses) were offered.

Given this backdrop, I prompted the IT management team to set a new goal: 95% of software deployments to production would go through disciplined configuration management. Setting this higher objective still allowed for the occasional middle-of-the-night direct patch for emergency fixes, but IT made a resolute commitment to preserve source and object code integrity through quality-driven deployments. The results were remarkable. The new objective itself caused behavioral changes in the programming staff and better collaboration with the configuration management team, and the outcome could actually be monitored and reported. Within 10 months, 98% of deployments were being done through the configuration management team. The Software Development Manager insisted that even “emergency” changes under his control should follow the better path to production.
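The compliance objective above is easy to monitor once each deployment records whether it went through the Configuration Management team. The following is a minimal, hypothetical sketch of that reporting calculation; the data structure and names are my assumptions, not the engagement's actual tooling.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    change_request: str
    via_config_mgmt: bool  # True if staged and deployed by the CM team

def cm_compliance(deployments, target=0.95):
    """Return (actual_ratio, met_target) for a reporting period."""
    if not deployments:
        return 0.0, False
    managed = sum(1 for d in deployments if d.via_config_mgmt)
    ratio = managed / len(deployments)
    return ratio, ratio >= target

# Illustrative period: one emergency patch bypassed configuration management.
period = [
    Deployment("CR-101", True),
    Deployment("CR-102", True),
    Deployment("CR-103", True),
    Deployment("CR-104", False),  # middle-of-the-night emergency fix
]
ratio, met = cm_compliance(period)
```

Publishing a number like this every release cycle is what made the 95% objective enforceable rather than aspirational.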

QUALITY GATES:

There were three principal quality gates that improved during the engagement. They were:

  • Passed QC Testing

The Quality Control group in IT had a strong leader and talented, experienced testers. They had a firm grip on their processes and knew what they were doing. Viewed as the end of the chain, they often got the short end of the stick on the time allotted to do their job. Their work was supported and strengthened by three key things:

  1. The QC group was able to insist that any code they were to test had to flow through the configuration management team first. This was great discipline to apply.
  2. The QA Manager attended all the Release Planning meetings and added a key piece of information to all CRs slated for the next release. She would place a green dot on any CR that had passed the QC steps, denoting her team’s signoff. She would also place an orange dot on any CR that was late in getting its code into QA, based on project or CR-specific target dates. “Dot-placing” was done immediately prior to the Tuesday 10:30AM RPG meetings.
  3. During the RPG, the IT managers held focused discussions about the quality state of CRs and often made decisions to defer items to the next release or apply more resources to get a “Green-Dot” status. The QA Manager wielded a lot of power in a short amount of time because the conditions were right for these discussions.
  • Passed Change Control Board (Checklist)

As a reminder, in the second article, we stated about the CCB:

“This group was chaired by the Configuration Management leader, and had the responsibility to review and approve or defer the completed Change Requests for implementation in production. The Operations Manager and QA Manager played strong roles within this forum. The SPOCs for each Change Request were questioned for preparedness items, including the advance notification of the client communities. The CCB made a consensus decision on each Change Request and the outcome of these decisions allowed the Configuration Management Team to prepare the scripts and code packages for production upgrades.”

At the outset of the engagement the Change Control Board met infrequently (mostly to review the implementation of major projects). With a firm commitment to release production changes every 2 weeks, this role and the execution of its duties were firmly reinforced. A standard agenda was prepared and meetings were facilitated by the Release Manager. As Release Manager, I also proposed that the SPOC answer a checklist of items for every CR and bring the checklist(s) to the CCB to aid in its decision. The checklist approach was useful and helped keep the key discussions focused. If only 2 CRs were being proposed, these meetings were mercifully short. If we had 15-20 CRs in a release, the meeting was suitably extended, but no single meeting ever exceeded 90 minutes.

As a practical matter, the CCB meetings were conducted on alternate Wednesday afternoons, just prior to the Releases done Thursday night. That left the right amount of time for the Configuration Management team to do its staging job effectively. On rare occasions, the CCB agreed to grant a SPOC (and the developers) another 8 hours to “get ready” for CCB approval, but these cases were true exceptions to the rule.
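The checklist-driven decision can be sketched as a simple all-or-nothing gate. The specific items below are my assumptions modeled on the article's description (QC signoff, client notification, staging, rollback); the engagement's actual checklist contents were not published here.

```python
# Hypothetical CCB readiness checklist for a single Change Request.
CHECKLIST_ITEMS = [
    "passed_qc_testing",
    "clients_notified",
    "rollback_plan_documented",
    "package_staged_by_config_mgmt",
]

def ccb_decision(answers):
    """Consensus proxy: approve only when every checklist item is 'yes'."""
    missing = [item for item in CHECKLIST_ITEMS if not answers.get(item, False)]
    return "approve" if not missing else "defer: " + ", ".join(missing)

ready = ccb_decision({item: True for item in CHECKLIST_ITEMS})
held = ccb_decision({"passed_qc_testing": True, "clients_notified": False})
```

Listing the unmet items alongside a deferral is what kept the CCB discussions short: the SPOC left the meeting knowing exactly what "get ready" meant.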

  • Passed Deployment Testing

We mentioned previously that the QC group included 2 functions – software testing and also help desk/customer support. The 3 people assigned to the Help Desk were extremely knowledgeable in the client’s use of applications and were always assigned as the post-deployment testers for application changes. They would know if anything didn’t work or “looked funny” and could order a rollback to the previous production code. The Help Desk staff also covered the non-prime-time shifts of the Call Center to resolve problems. They didn’t want bad code going into production.

We enhanced the Deployment Testing process in a major way with the simple act of scheduling releases at 2-week intervals. The previous practice had been for developers or project managers to individually negotiate with the QA Manager and team for releases on any night of the week or on weekends. The Help Desk wanted to be helpful, but was forsaking any semblance of a real life, with production deployments and testing happening 2 to 3 times per week.

We also made a smart move by scheduling deployments on Thursday evenings. This required the Help Desk to work late on alternate Thursdays, but also let them “earn” Friday afternoons off for a long weekend. A simple way to build morale and improve execution! This scheduling pattern also benefited the client community, as the routine was familiar and consistent. In selected cases, clients would perform end-user testing on Thursday evenings as well.
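The alternate-Thursday cadence, with the CCB the Wednesday before, is deterministic enough to generate from any anchor date. A minimal sketch, assuming an illustrative anchor (not the engagement's actual calendar):

```python
from datetime import date, timedelta

def release_schedule(anchor_thursday, count):
    """Generate biweekly Thursday release dates plus the preceding CCB Wednesdays."""
    assert anchor_thursday.weekday() == 3, "releases anchor on a Thursday"
    releases = [anchor_thursday + timedelta(days=14 * i) for i in range(count)]
    ccb_meetings = [d - timedelta(days=1) for d in releases]  # Wednesday afternoons
    return releases, ccb_meetings

# Illustrative anchor: a Thursday.
releases, ccbs = release_schedule(date(2005, 6, 9), 3)
```

A fixed, publishable calendar like this is what replaced the ad hoc negotiation described above: everyone, including the client community, knew when change would land.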

CONCLUSION / TRANSITION

These improvements in the Final Quality Control steps built a lot of credibility for the IT organization. We saw a marked decrease in the number of Change Requests that failed the Deployment Testing gate and a marked upswing in customer confidence in implementation of change. We also found ourselves much more capable of backing out, recovering and then re-executing a configuration-managed deployment when necessary.

The rest, they say, is “agile” history. To summarize the improvements achieved, our story continues with LESSONS LEARNED.

(c) By David W. Larsen
