
In this article, we illustrate how the tools of the ZDLC framework are employed in an Agile software delivery mode to achieve precision and acceleration in delivering software artefacts. We consider Daikibo, Cognizant’s methodology for executing Agile.

Background on Daikibo℠

Daikibo℠ is a combination of the Scrum, Kanban, and XP frameworks, and supports both Agile and Lean principles. Daikibo proposes techniques that go beyond the classical Agile-Scrum approach so that the efficiency and productivity of the development life cycle are maximised. In the orthodox Agile-Scrum approach, there exists one self-organising, cross-functional team writing stories, designing solution models, developing code, testing, and producing functionality in each sprint. Daikibo makes this process work for distributed teams geographically dispersed around the globe. The “Hybrid” Daikibo℠ Agile approach separates the cross-functional teams into bifurcated responsibilities (a producer-consumer model) operating in an incremental-iterative pipeline, following the defined Agile principles. A simple three-tiered process model binds Daikibo together:

  1. A Concept Team manages story production – story generation
  2. The stories are consumed by the Delivery Team – story consumption
  3. Finally, the developed software is validated by the Validation Team – story validation

Daikibo Team Structure

The Daikibo Pipeline Approach

The concept teams produce stories in the leading sprint. Near the end of the leading sprint, the delivery teams evaluate the stories and provide effort estimates in points.  The concept teams use the effort estimates to tweak the priorities of the stories.

The delivery teams have a sprint planning session on the first day of the new sprint. They review the prioritized stories and commit to completing a number of them. The concept team starts to produce the next set of stories.
After the stories have been tested and accepted by the story owners, the new functionality is demonstrated. In the following sprint, the system integration testing group performs more tests and validates the integration with other systems.
Daikibo Pipeline Structure
Daikibo, in Japanese, means large scale, which aptly summarises Cognizant’s vision to lead the way in Agile development by building a scalable and distributed agile approach with strong Location Transparency. To achieve this vision, we must formulate a cohesive collaborative model of work, which is essential to achieve Flow, one of the Lean principles. The collaborative model is depicted in the following overview.
Daikibo Collaboration Model
The fundamental element to ensure flow within the conversational dynamics is to focus on the Critical Path. Consider the diagram above. The red line represents the critical path and the flow of information for the project, vis-à-vis pigs and chickens. The Governing Committees (at the top) provide oversight over the scope, the functional and technical aspects of the project, and the Agile/Scrum process. The Supporting Groups (at the bottom) identify existing content and manage the loading of copy into the new site; manage the integration with back-end and 3rd-party systems; oversee the architecture of the site; and plan and build the infrastructure to support the project.

Breaking the Threshold

What we have observed is that it is vital to have the correct governance model in place, supported by the people engaged in the development life cycle, so that quality is continuously achieved. Yet there is always a threshold, a limit to how much quality and acceleration can be yielded by people following a series of guidelines and best practices within a defined organisational framework.
In order to break the threshold and push back the point at which the law of diminishing returns kicks in, one needs to seek innovative and breakthrough solutions which, when coupled with an existing governance model and processes (e.g. Daikibo), will define a new normal. In our story we talk about the introduction of automation and formal validation, which we believe augment the capability of the process towards this new normal. We call this solution the Zero Deviation Life Cycle (ZDLC). This story is about Daikibo℠ and ZDLC. Follow us in our next article, entitled Daikibo, a Cognizant Agile Production, with ZDLC (2/2), where we shall tell this story and demonstrate how ZDLC together with Daikibo℠ changes the world of distributed Agile.

Is it possible to increase speed by reducing horsepower? Some brilliant minds believe the only way to increase speed, in this day and age, is to reduce horsepower. Big contradictions should not be compromised but dissolved in elegant solutions. Since we are in the domain of software engineering and IT, may we ask the following questions, each carrying a big contradiction:

Can we increase Agility by adding engineering discipline to an agile process? Can we find an equilibrium, the resonant frequency, between the many iterations of backlog grooming and “getting it right first time”? Can we apply small iterative driving forces yet produce a large yield of work done?

Let us demonstrate how we achieve these using ZDLC in Agile.

Background

In the previous article on ZDLC with Agile (see article here), we asked six key questions about the risks involved in agile executions and elaborated on the consequences of those risks, should they remain unchecked and untreated. The questions are:

  • How do we continuously Tie-Back User Stories (User requirements) to the original Business Vision?
  • Once we have the Vision in place, how does the Product Owner consistently do Validation and Verification of user stories?
  • Backlog Grooming – How do we continuously Prioritize User Stories?
  • Backlog Grooming – How do we dynamically quantify the dependency amongst User Stories?
  • How are we going to handle the volume of work and Manual Overheads associated with the creation and management of test cases for user stories?
  • How do we ensure Knowledge is managed consistently across a highly complex and distributed Program?

Now we have a choice. We can either address these risks and challenges in a conventional manual way or employ tools that can help to mitigate these risks by providing key methods and automation techniques to simplify and accelerate the process. The ZDLC Platform proposes tools and techniques to facilitate the process of risk mitigation. This article tells the story.

ZDLC in an Agile Execution

The ZDLC tools employed to address and mitigate the challenges and risks in a given agile execution are HoQ-e and RMS-e. Let us consider each question, one by one, and demonstrate how the ZDLC platform improves Agile. The objective of ZDLC in Agile is simple: to add rigour to agility without hurting agility, but augmenting it.

  • How do we continuously Tie-Back User Stories to the original Business Vision?

The HoQ-e is founded on the traceability matrices of the House of Quality method. The drill-down process of the HoQ-e allows one to break down high-level business needs and goals into detailed requirements. Each level of drill-down is linked to the others, and navigating the traceability matrices is an innate property of the HoQ method. By innate, we mean the traceability is not an added component that requires additional management activities to preserve, but a core property of the HoQ method that requires no additional effort to maintain. The ability to trace user requirements, known as user stories in an Agile execution, back to the business vision is a natural function of the process.

HoQ-e Traceability Flow Down

In an Agile execution, the HoQ-e is used to gather user stories and manage the information relating to the Themes (Business Requirements), User Stories (User Requirements) and Technical Stories (Technical Requirements). This is done at the Concept Phase, the upfront thinking.

The House of Quality provides a structured journey that guides the requirement elicitation process. Through the use of smart automation throughout the process, HoQ-e quickens the investigation activities in workshops and enables requirements or user stories to be traced at any point of the life cycle. It connects the business vision with the user stories and, as a result, facilitates the validation process of the user stories, which leads us to the next question.
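
To make the innate traceability concrete, here is a minimal Python sketch of the drill-down chain. The level names, identifiers and the parent-link data model are our illustrative assumptions; the article does not describe HoQ-e’s internal representation.

```python
# Illustrative sketch only: each HoQ level links child attributes to
# the parent attributes they realise, so tracing a user story back to
# the business vision is simply a walk up the chain. The identifiers
# and links below are invented for the example.

LINKS = {
    "US-12 Submit claim online": "SC-3 Self-service portal",   # level 5 -> 4
    "SC-3 Self-service portal":  "RC-1 Manual claim intake",   # level 4 -> 3
    "RC-1 Manual claim intake":  "P-2 Slow claim turnaround",  # level 3 -> 2
    "P-2 Slow claim turnaround": "V-1 Faster claims handling", # level 2 -> 1
}

def trace_back(attribute: str) -> list[str]:
    """Walk the parent links from a user story up to the vision."""
    path = [attribute]
    while attribute in LINKS:
        attribute = LINKS[attribute]
        path.append(attribute)
    return path

print(" -> ".join(trace_back("US-12 Submit claim online")))
```

Because the links are populated as a side effect of filling in the matrices, the trace-back costs nothing extra to maintain, which is the point made above about innateness.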

  • Once we have the Vision in place, how does the Product Owner consistently do Validation and Verification of user stories (user requirements)?

There are two parts to this question. The first part is about validating the user stories, i.e. asking the question “are we implementing the correct user stories?”  and the second part is about verification of the user stories, i.e. asking the question “are we implementing the user stories correctly?”. We start with the validation process.

The HoQ-e provides us with the capability to generate intuitive reporting and analytical data relating to the requirement gathering process. Heat maps can be automatically generated at any level of the House of Quality, but more importantly a heat map can also show relationships between attributes of different levels. By way of example, the HoQ-e can generate a heat map illustrating the relationships between the attributes of level 5, the user stories, and the attributes of level 1, the business vision.

HoQ-e Level HMap Transformation Initiative

The heat map provides a holistic view of all the user stories against the business vision. Those stories possessing strong intersecting cells with the vision (highlighted in red) are important and must be implemented in order to achieve the vision. The ease of comprehension facilitates the validation process, anytime and anywhere within the agile life cycle. The facility adds rigour to agile without hurting the speed at which agile should sprint.
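
Such a cross-level heat map can be understood as a composition of the relationship matrices of adjacent levels. The sketch below assumes the 9/3/1 weights for High/Medium/Low that are conventional in QFD (the article does not state the weights HoQ-e actually uses) and compresses the intermediate levels to a single hop for brevity:

```python
import numpy as np

# Sketch: a cross-level heat map obtained by composing the relationship
# matrices of adjacent HoQ levels. The 9/3/1 weights for High/Medium/Low
# follow common QFD practice, and the matrices here are invented.

# vision items (rows) against root causes (columns)
vision_to_cause = np.array([[9, 1, 0],
                            [0, 3, 9]])

# root causes (rows) against user stories (columns)
cause_to_story = np.array([[9, 0, 3, 0],
                           [0, 9, 0, 1],
                           [3, 0, 0, 9]])

# Composing the chain yields vision-against-user-story intensities;
# large cells flag the stories that must be built to realise the vision.
heat = vision_to_cause @ cause_to_story
print(heat)
```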

The second part of the question is about verification, and this is how we tackle it. Part of the verification activity is to ensure artefacts are built correctly using the right engineering principles and best practices. The ZDLC platform proposes the use of the RMS-e tool (Requirement Modelling Solution) to automate part of the verification process in order to ensure quality whilst accelerating. RMS-e enables business analysts to model the user stories into pictorial diagrams, like use case models or user story models, and to provision the models with additional rich data. Note that HoQ-e flows naturally into RMS-e, preserving the entire traceability as the journey continues from Requirement Elicitation (HoQ-e) to Requirement Modelling (RMS-e).

HoQ-e To RMS-e

RMS-e employs the techniques of Natural Language Processing (NLP) to compile the user stories against a predefined grammar, hence automating part of the verification process. The grammar is written based on best practices and software engineering principles. Typical examples of the principles are: 1) an actor should be a noun; 2) an action should have a verb; 3) an action having 2 or more verbs should be split into two actions; and so on. The grammar can be configured with any operational principles of the agile life cycle and employed by the NLP parser to automatically verify the user story model. Any aspect of the user story model that does not conform to the grammar is highlighted as a warning to the Business Analyst.
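
As an illustration of how such a grammar check might work, here is a Python sketch using the spaCy NLP library (an assumption on our part; the article does not name RMS-e’s parser). It encodes the three example principles quoted above:

```python
import spacy  # assumes: pip install spacy && python -m spacy download en_core_web_sm

nlp = spacy.load("en_core_web_sm")

def check_story_element(actor: str, action: str) -> list[str]:
    """Check one actor/action pair against the three example rules.

    An illustrative approximation of an NLP 'compiler' for user
    stories, not RMS-e's actual implementation.
    """
    warnings = []
    if not any(tok.pos_ in ("NOUN", "PROPN") for tok in nlp(actor)):
        warnings.append(f"actor '{actor}' should be a noun")
    verbs = [tok.text for tok in nlp(action) if tok.pos_ == "VERB"]
    if not verbs:
        warnings.append(f"action '{action}' should have a verb")
    elif len(verbs) >= 2:
        warnings.append(f"action '{action}' contains {len(verbs)} verbs "
                        f"({', '.join(verbs)}); consider splitting it")
    return warnings

# Typically flags the second action for containing two verbs.
print(check_story_element("Claims handler", "opens the claim file"))
print(check_story_element("Claims handler", "reviews the claim and approves it"))
```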

RMS-e NLP Compiler

The Natural Language Processing (NLP) Parser acts as a compiler for the user story models. The suggestions in the compilation report can be adopted or ignored by the business analyst, but the most important aspect of the compilation system is this: ZDLC offloads part of the tedious verification process from the human and gives it to the machine to take care of. Any discrepancy in the user story models can easily be identified and rectified, should it be a requirement defect. The earlier defects are found and fixed, the cheaper it is to reach quality. In automating the verification exercise, we accelerate the agile execution whilst augmenting quality.

  • Backlog Grooming – How do we continuously Prioritize User Stories?

In an agile execution there exists a backlog with items or artefacts that have to be treated and developed. As the backlog items are scheduled to be developed over parallel sprints, the Product Owner checks whether the items are built according to “design”. Should there be a problem (non-conformance), after retrospectively analysing it, the artefacts are put back into the backlog to be reprocessed and rebuilt. Then there is a need to re-prioritise the items; it is a continuous process of re-prioritising.


Source: The Importance of the Product Backlog on a Scrum Development Project, Jul 25, 2012, InformIT

The continuous prioritisation of user stories in agile grooming is tedious, time consuming and very often a bottleneck that prevents sprints from running smoothly. ZDLC proposes the use of HoQ-e to alleviate the pain of performing continuous prioritisation. When using the HoQ method, prioritising the requirements is not a manual exercise but an automatic one. The HoQ enables the processes of requirement elicitation and prioritisation to be merged into one. In the classical approach to developing a software solution, the Business Analyst prioritises the requirements after having elicited and gathered them; these are two distinct processes. However, as one employs the HoQ-e, both processes are clubbed together. The priority is calculated whilst the HoQ matrices are being populated with requirements and the relationships between the requirements are assessed. Consequently, as one inserts new items into or removes old items from the backlog, the priority of the items is automatically recalculated by HoQ-e.

HoQ-e as a Backlog in Agile

The priority index is calculated based on the number of High, Medium and Low intersecting cells. The x-axis represents the backlog containing the items or user stories to be developed. As one adds or removes items, the priority is recalculated automatically. This means that re-prioritising is not a manual effort when using the ZDLC HoQ-e tool. On the one hand, HoQ-e reduces the probability of error when prioritising (moving from manual work to calculation); on the other hand, HoQ-e accelerates the process of agile grooming.
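
A minimal sketch of such a priority index follows, assuming the 9/3/1 weighting that is conventional in QFD (the article does not disclose HoQ-e’s exact formula):

```python
# Sketch: priority index as a weighted count of the High/Medium/Low
# intersecting cells in one backlog item's column. The weights and the
# example backlog are illustrative assumptions.
WEIGHTS = {"H": 9, "M": 3, "L": 1}

def priority(cells: list[str]) -> int:
    """cells: the H/M/L marks in one user story's column."""
    return sum(WEIGHTS[c] for c in cells)

backlog = {
    "US-1 Submit claim online": ["H", "H", "M"],
    "US-2 Upload documents":    ["M", "L"],
    "US-3 Track claim status":  ["H", "M", "L"],
}

# Adding or removing an item just re-runs the calculation, which is
# why re-prioritising stops being a manual effort.
for story, cells in sorted(backlog.items(), key=lambda kv: -priority(kv[1])):
    print(f"{priority(cells):>3}  {story}")
```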

  • Backlog Grooming – How do we dynamically quantify the dependency amongst User Stories (user requirements)?

Should one use the priority of user stories alone to schedule the sprints, one may run into the problem of framing unbalanced sprints in the agile execution. Priority on its own is not enough; one also needs to identify the co-dependencies of the user stories. The rationale is that if highly prioritised items are put together in a sprint regardless of their dependents (how many other user stories depend on them), changes to those user stories may require changes to the dependents. This means that the first sprints may well require over 70% of the effort and several intricate changes. There will not be a balanced spread of sprints, and this may result in the collapse of agility. ZDLC proposes the use of the HoQ-e to automatically identify the co-dependencies of the user stories, which are calculated using the same basic principles as the priority calculation.

In HoQ-e, the roof provides the relationships between the x-axis attributes.

HoQ-e The Roof
The type of relationship between the x-axis attributes, and its intensity (High, Medium, Low), is provisioned by the Business Analyst whilst questioning the Business SMEs or other business stakeholders during the workshops of the concept phase in the agile life cycle. The links in the roof are used by the HoQ to internally calculate the co-dependency indices between the x-axis attributes; hence two dimensions of observation can be used to plan and schedule the sprints. HoQ-e generates a graph that positions the user stories on a priority-against-dependency model, as shown below.

HoQ-e Prioritization vs Co-dependency

The graph of priority against co-dependency can be used to schedule the sprints in a balanced fashion and is a powerful tool for the Programme Manager. The rule of thumb, typically, is to start with high priority and low dependency, then high priority and high dependency, then low priority and low dependency, and finally low priority and high dependency.

HoQ-e The Quadrant Pri vs Dep

With the priority and co-dependency indices being churned inside the engine of the HoQ-e, one is now empowered to mitigate the risks of planning unbalanced sprints in the Agile execution. The HoQ-e enables a balanced spread of Sprints.
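
The quadrant rule of thumb is easy to express in code. In this sketch the two indices are assumed to be normalised to [0, 1] and the 0.5 cut-offs are arbitrary illustrative choices; HoQ-e derives the real indices from the matrix body and the roof:

```python
# Sketch: the rule-of-thumb ordering of the priority-vs-co-dependency
# quadrants. Thresholds, indices and stories are invented for the example.

def quadrant_rank(priority: float, codependency: float,
                  p_cut: float = 0.5, d_cut: float = 0.5) -> int:
    """0 = schedule first ... 3 = schedule last."""
    high_p, high_d = priority >= p_cut, codependency >= d_cut
    order = {(True, False): 0,   # high priority, low dependency
             (True, True):  1,   # high priority, high dependency
             (False, False): 2,  # low priority, low dependency
             (False, True):  3}  # low priority, high dependency
    return order[(high_p, high_d)]

stories = [("US-1", 0.9, 0.2), ("US-2", 0.8, 0.7),
           ("US-3", 0.3, 0.1), ("US-4", 0.2, 0.9)]
for name, p, d in sorted(stories, key=lambda s: quadrant_rank(s[1], s[2])):
    print(name, f"priority={p}", f"co-dependency={d}")
```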

  • How are we going to handle the volume of work and Manual Overheads associated with the creation and management of test cases for user stories (user requirements)?

In any given agile life cycle, each user story formulated should have its corresponding test cases, and there may be more than one test case per user story. The exercise of building test cases is manual and tedious, allowing for human-injected defects. Furthermore, there is also a need to keep the test cases in sync with the user stories: any change in the user stories may result in changes to the test cases. All these activities require manual effort.

The ZDLC Platform proposes the use of the RMS-e tool to address the problem of the manual overhead associated with test case creation and management per user story. In RMS-e, the user stories from the HoQ-e are modelled and, for each user story, a process flow diagram is designed by the business analyst and the architect teams. The process flow diagrams depict the functional behaviour of how a given user story is expected to run or operate, contain key business rules, and are annotated with rich information where required.

HoQ-e To RMSe to Proc Flow Diag

RMS-e can generate all the possible scenarios of each process flow diagram for each user story automatically. The software runs through all the transitions of the process flow diagrams, creating the scenarios that define the test cases. By default, the test cases take the shape of sequence diagrams but can be formatted into any required syntax.
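
Conceptually, scenario generation is an enumeration of all paths through the process flow diagram. A minimal sketch, assuming an acyclic flow encoded as an adjacency list (the encoding and the example flow are ours, not RMS-e’s):

```python
# Sketch: generating test scenarios by enumerating every path through
# a process flow diagram, in the spirit of the scenario generation
# described above. Assumes an acyclic flow; the diagram is invented.

FLOW = {  # transition graph of one user story's process flow
    "start":           ["enter claim"],
    "enter claim":     ["validate claim"],
    "validate claim":  ["approve", "reject"],  # a business-rule branch
    "approve":         ["notify customer"],
    "reject":          ["notify customer"],
    "notify customer": [],                     # end node
}

def scenarios(node: str = "start", path: tuple = ()) -> list[tuple]:
    """Depth-first enumeration of all start-to-end paths."""
    path = path + (node,)
    if not FLOW[node]:  # reached an end node
        return [path]
    return [p for nxt in FLOW[node] for p in scenarios(nxt, path)]

for i, sc in enumerate(scenarios(), 1):
    print(f"Test case {i}: " + " -> ".join(sc))
# Two scenarios here: the approve path and the reject path.
```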

Scn Generator in RMS-e

By shifting the manual activity of formulating test cases to an automated process, maintaining the test cases in sync with the user stories is performed automatically by the system. RMS-e accelerates the agile cycles and minimises human-injected defects through the automatic generation and management of test cases for each user story.

  • How do we ensure Knowledge is managed consistently across a highly complex and distributed Program?

The answer to this question is a robust yet transparent communication model that enables the same version of the truth to be perceived and shared collaboratively by all the stakeholders. The House of Quality is predominantly a communication tool, which Dr Yoji Akao invented as part of the Quality Function Deployment (QFD) family. The HoQ-e enables different people in the supply chain or development life cycle to create, share, update and understand the substance of information in order to de-risk the process of decision making.

The management of knowledge across distributed teams is a complex undertaking, especially across geographies and cultures. The ZDLC platform proposes the use of the HoQ-e to provide a reliable communication platform across the diverse teams who are exercising an agile life cycle.

HoQ-e Share

The HoQ-e is a collaborative platform that enables all participants of a project to view or change the information structured over the traceability matrices. Since HoQ-e provides a systematic view of the refinement process, the information populated in the HoQ-e can be traversed either top-down, which is a refinement activity, or bottom-up, which is an abstraction activity. By way of example, a developer writing a piece of code for a given user story may want to know which business functions that user story realises; through an abstraction process, the developer moves up the levels of the HoQ-e.

HoQ-e provides a rich and intuitive collaboration platform to all the stakeholders of the agile value chain. It exposes the traceability matrix to the Business stakeholders, BAs, PMs, Developers and Testers so that they share a common understanding of the User Stories, whenever and wherever they are in the development journey.

Summary

With smart automation of the tedious and error-prone activities within the agile life cycle, one can add engineering and structural rigour whilst augmenting the agility of the process. We presented six questions that challenge the process of developing high quality software in an agile process. The tools of the ZDLC platform, namely HoQ-e and RMS-e, propose a unique solution to significantly improve the yield and quality of agile executions. ZDLC constantly attempts to find the equilibrium between the number of iterations in backlog grooming and “getting it right first time”.

Can we increase Agility by adding engineering discipline to an agile process? Yes, and we are doing it.

“Is that what you meant?” one asks the client.

Most probably the answer is no.

We do believe this question to be a fundamental one in any development life cycle and the earlier the question is answered, the cheaper it is to achieve quality and customer satisfaction.

In software engineering and IT, we have proposed a number of development models based on iterations and small increments, planning and ensuring several small deliveries to the client so that the latter may confirm “that is what I meant” as quickly as possible, allowing the production process to flow proficiently. Yet should there be too many of those small deliveries, it may become irritating to the client; above all else, getting time from the client or business SMEs to frequently check the work is not feasible, not feasible at all.

So an equilibrium has to be achieved: an equilibrium between the number of iterations and getting it right first time. ZDLC proposes techniques to achieve this equilibrium, wherein we balance the intensity, or scientific rigour, of getting it right first time against the number of iterations required. ZDLC succeeds in automating and hiding many parts of the scientific rigour and enables the question “is that what you meant?” to be asked as early as possible. Based on the answer, remediation and reinforcement activities take place.

ZDLC reaches the equilibrium by enforcing quality by design and this capability is embedded in the tools proposed, and these are as follows:

  • HoQ-e
  • RMS-e
  • TiA-e
  • CPN-e

We present four articles, wherein each article demonstrates how a distinct tool achieves the equilibrium between rigour and iterations.

This article talks about the first tool of the ZDLC, the HoQ-e.

“Ensuring requirements are aligned and consistent is never easy. Add prioritization and it gets scary.”

Abstract

The article explains how the HoQ-e is used to gather:

  • unambiguous requirements
  • requirements which can be justified against business goals and drivers and
  • requirements which can be traced at any point in the Software Development Lifecycle (SDLC)

The objective is to describe our process of engagement to ensure consensus on the approach and potential outcomes over a timeline.

The High Level Structure and Usage of the House of Quality enhanced (HoQ-e)

The HoQ-e is a tool of the ZDLC platform. We employ it to enable the following capabilities in the problem of requirement engineering:

  • rapid requirements elicitation,
  • intuitive validation,
  • structured analysis,
  • consensus-based decision-making, all based on objectively prioritised and dependency-aware metrics, and
  • dynamic traceability and change-impact assessment.

HoQ-e is based on a proven methodology and technique that enables problems and requirements to be addressed, validated, prioritized and used for transparent decision making, enabling higher quality and more rapid outcomes.

Why do we need it?

  • Use of the House of Quality has been shown to reduce the cost of quality by over 50% in the manufacturing industry
  • HoQ-e is a unique and faithful adaptation of HoQ that has been customized for the Software Development industry and enriched with the latest features of technology for enhanced design and improved usability.

What does it do?

  • It enforces structure in how information is captured and represented, and optimizes the effectiveness of the Business Analyst (or decision maker) and the engagement with the Business SME
  • It graphically represents findings for rapid and effective quality control and governance
  • It traceably aligns captured information to interpretation to decisions – more quickly and effectively

How does it work?

  • It permits conventional styles of working whilst imposing better structure and rigour
  • It enables the Analysis approach to be pre-planned and (if needed) iterated safely
  • It represents information in levels which permits easier identification of patterns (e.g. For Re-Use)
  • It objectively prioritises decisions, permitting incontestable conclusions
  • It objectively quantifies co-dependency allowing for safe Program and Test Planning

The HoQ-e Approach

To illustrate a typical HoQ-e approach, we use a transformation programme for a claims processing platform as an example. Prior to starting the requirement workshops, there were key questions to be answered, and in answering those questions we traced the journey we undertook to achieve an exhaustive yet unambiguous list of user requirements. The questions are as follows:

  • Who are the key stakeholders in claims and how do we classify them?
  • What are the key business functions of the stakeholders?
  • What are the high-level problems in the current claims systems relative to the stakeholders?
  • What are the root causes of the high-level problems for claims?
  • What solution characteristics need to be in play to address the root causes of the high-level problems of claims?
  • What user requirements do we need to implement in order to meet the solution characteristics?

HoQ-e Claims Transformation Output

The HoQ-e preserved a logical traceability from the level 1 questions to the level 5 questions, and this traceability routed the journey that the Business Analyst took to reinforce and enrich the requirement elicitation process without additional effort.

How do we identify the Requirements?

The HoQ-e facilitates the process of identifying and classifying the problem statements. The key activities of the method are listed as follows:

  • Mapping the stakeholders for the domain gives a balanced prioritization.
  • Elicitation of business functions provides a map to business imperatives.
  • Mapping high-level problems to business functions gives a balanced prioritization of the problems to solve.
  • Mapping the root causes to the problems enables engineers to address the root cause of a problem rather than its symptoms.
  • Mapping the Solution characteristics to the root causes of the problems defines an accurate expression of the user requirements, scoping and constraining the work to be done within the limits of the problems in hand, hence avoiding scope creep.

The ZDLC Approach to using HoQ-e in Software Requirement Engineering

We plan the requirement elicitation workshops with key stakeholders and/or their representatives as follows:
  • Workshop 1: Identify the Stakeholders and their  business functions within the problem domain under investigation. Level 1: Business Functions against Stakeholders.
  • Workshop 2: Identify the high level problems they are currently experiencing for each business function. Level 2: Business Functions against Problems
  • Workshop 3: Identify the root causes of the problems by asking why these problems exist for the business functions. Level 3: Problems against Root Causes
  • Workshop 4: Propose Solution Characteristics to resolve the root causes of the problems. Level 4: Root Causes against Solution Characteristics
  • Workshop 5: Formulate the user requirements that are to be implemented in order to realise the solution characteristics. Level 5: Solution Characteristics against User Requirements
  • Workshop 6: Derive the technical requirements from the user requirements. Level 6: User Requirements against Technical Requirements

HoQ-e Claims Transformation Level 1

HoQ-e enables a logical flow-down process of refining requirements. At each level of the HoQ-e, prior to traversing to the next level, the HoQ matrix is reviewed, justifying the requirements and asking the vital question “is that what you meant?” as early as possible during the planned time with the business SMEs or client. Such reviews lead to the concept of a micro sign-off of each level, ensuring that the next level of the HoQ-e starts from a validated and firm foundation.
We elicit the root causes of the problems and drill down the requirements over a period of 3 weeks, as depicted in the following diagram:
HoQ-e Claims Drill Down Process
The HoQ-e is designed in such a way that the information and annotations of the requirement attributes are provisioned at the appropriate location within the traceability matrices. With a structured and logical flow-down process, it accentuates the correct questioning techniques during the investigation or study. To reduce ambiguity and enrich requirement attributes, there are two fundamental capabilities of the HoQ-e to be considered:
  1. Placing the Right Information in the Right Place: In order to reduce ambiguity in the description of the requirements, HoQ-e provides a feature to annotate the requirements based on a predefined meta dictionary, which is grounded in best practices proposed by the IEEE. The additional information provided improves the comprehensiveness of the requirement attributes, and this is a core capability that makes the HoQ-e a proficient communication tool. The annotations for each of the requirement attributes are agreed by consensus at each level prior to a micro sign-off (a minimal sketch of this annotation check follows this list). In many classical approaches to requirement elicitation, it has been observed that missing information or a lack of requirement enrichment led to expensive change requests. The following diagram shows how the requirement attributes are annotated.
HoQ-e Claims Transformation Annotation Req
  2. Finding the Right Information at the Right Time: The ability to investigate a user requirement to justify its existence in the solution is essential; this is asking the question why, and tracing the user requirements of the lower levels back to the higher-level business goals. Unlike the conventional approach, this exercise is easy and intuitive in the HoQ-e. The latter is a traceability matrix which by default structures the requirements and their roots over a tree model. Traversing the tree nodes empowers one to walk through the requirement definitions and validate their origin against the business goals, at any point in time and by anyone collaborating on the HoQ-e. This capability ensures validation is done correctly and swiftly. So now it is not only about answering the question “is that what you meant?” but about urging the client to answer “do you actually mean this? or is it…”
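
Here is a minimal sketch of the first capability, checking requirement annotations against a meta dictionary before a micro sign-off. The attribute keys are typical of IEEE-style recommendations for requirement specifications and are our assumption; the dictionary HoQ-e actually ships with is not shown in the article:

```python
# Sketch: annotating requirement attributes against a predefined meta
# dictionary and flagging gaps before a micro sign-off. The keys and
# the example requirement are illustrative assumptions.

META_DICTIONARY = ("description", "rationale", "source", "acceptance_criteria")

requirements = {
    "US-12 Submit claim online": {
        "description": "Customer submits a claim via the portal",
        "rationale": "Removes manual intake delay (root cause RC-1)",
        "source": "Claims operations SME, workshop 5",
        # "acceptance_criteria" still missing
    },
}

for req_id, annotations in requirements.items():
    missing = [k for k in META_DICTIONARY if k not in annotations]
    if missing:
        print(f"{req_id}: incomplete annotation, missing {missing}")
```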

HoQ-e Claims Transformation Traceability

In 3 weeks, the quality and richness of the requirements gathered are much greater, and the requirements much less ambiguous, than with any classical approach. The yield of quality is significantly increased with less effort required. With the HoQ-e in hand, it is like having a drum beat whilst doing work: it sets the right rhythm to compel the process to flow efficiently.

In the next article we talk about the Requirement Modelling Solution (RMS-e) and how it is employed in the ZDLC Platform to answer the question “is that what you meant?” as we leave requirement elicitation to delve into Requirement Modelling.
From the ZDLC Team

“Quality cannot be tested, it should be embedded”

The Story

This story is about the motivation behind the Zero Deviation Life Cycle (ZDLC): a motivation driven by real business problems, where projects were plagued by cost and schedule overruns, requirements no longer resembled the business needs, or IT solutions failed to resolve the correct problems. Many methodologies emerged; most of them tackled the management issues of realising a rationalised development life cycle. Very few looked at the engineering parts: the parts that define the quality and reliability of the end result, the part that makes one proud of the product. Several topics and lines of thought have been written on the concepts of Application Lifecycle Management within new business dynamics. As a result, the motivation behind the ZDLC and its origin is to propose the concept of Application Lifecycle Engineering (ALE). This is because management constraints cannot dictate how engineering techniques are applied and, above all else, management constraints cannot sacrifice engineering methods for speed and time.

ZDLC is about ALE, or “smart ALM”. It complements all ALMs by focussing on the engineering aspects of a typical Software Development Life Cycle, advancing techniques that employ statistical and probabilistic models, formal methods, simulation and intelligent automation to speed up the process of developing software whilst augmenting the quality and productivity of the process. Yet all the scientific rigour is well hidden behind clever abstractions and the simplified user interface and experience of the tools.

Many organisations seek to use ZDLC in an agile execution mode, where ZDLC automates many of the tedious and time consuming software validation processes that may hinder agility. But this story is about an organisation which did not want to implement agile, but wanted the agility of their existing Waterfall model (SDLC) to be increased. This organisation shifted Agile from an execution model to a quality attribute of waterfall, and wanted the best of both worlds. ZDLC thrives in such an environment.

We start with the business drivers of the organisation which are as follows:

  • To capture requirements for projects/programmes (including new projects) more effectively, ensuring that the client receives the downstream benefits of higher quality deliverables and innovation.
  • To bring more agility to the current Waterfall approach to project definition and delivery, distinct from a purely agile approach.

We were asked to come back with our experience of helping clients address these challenges and how it might be applicable to the organisation in question. We have been working for some time with customers who have had similar challenges and concerns. The outcome and experience from these engagements has allowed us to develop a new platform, the Zero Deviation Life Cycle (ZDLC).

By using Cognizant’s ZDLC platform we have achieved the following measurable benefits:

  • 20-25% saving in the cost of Software Development Life Cycle (SDLC) delivery
  • 40-50% reduction in the cost of quality in support and maintenance cycles

The ZDLC platform also ensures a better decision-making process, leading to a higher degree of project success. This is achieved through a structured, yet unrestricted, requirement gathering process, establishing consistent communication across life cycle stages and controlled impact analysis.

ZDLC is a platform that comprises the following principal tools:

  • HoQ-e – House of Quality enhanced
  • TRIZ-e – Theory of Inventive Problem Solving
  • RMS-e – Requirement Modelling Solution
  • TiA-e – Testable Integration Architecture
  • CPN-e – Coloured Petri Nets
  • SDP-e – Systemic Defect Profiler

These are used by joint client and Cognizant teams across the project life cycle. The ZDLC platform is based on some key principles to propose an approach of development which is a scientific and quantitative means of managing and measuring ever-changing requirements. This provides the ability for each requirement change to be mathematically analysed and assessed against the impact of the change across business processes. ZDLC has allowed our clients to:

  • instigate a culture of sustainable and measurable innovation within the programme lifecycle
  • trace requirements through the SDLC, providing a consistent communication mechanism
  • prioritise requirements and bring consistency across the SDLC, injecting greater agility into the process
  • eliminate contradictions within a solution
  • minimise defects throughout the SDLC and thereby reduce cost

In these engagements ZDLC increases the power of modelling software applications and creating innovative solutions at pace. Whilst Cognizant has a dedicated agile team, we were guided by the organisation’s focus on bringing greater agility to the current waterfall way of working rather than introducing the agile methodology.

Detailed below are two examples of how ZDLC added agility to the standard Waterfall methodology allowing our clients to realise significant benefits.

  • To prove the benefit of the new approach a comparison exercise was undertaken. Two streams of work were started at the same time to solve the same problem. The objective was to gather sufficient requirements and produce technical specifications to meet a project need. The team using a classic Waterfall approach took 15 days to complete the task, whereas the team using ZDLC took only 3 days (due to intelligent automation).
  • The second example focuses on using ZDLC to bring innovation to a client’s on-line platform. The aims were to increase the number of functional software releases over a 12 month period from 1 to 4 and deliver a reduction in the cost of quality. ZDLC allowed the team to deliver a 42% reduction in the cost of quality, and in the first six months two functional releases were delivered, putting the client on-track to meet their business goals. In addition, ZDLC found 3 major design flaws that the classical Waterfall approach had failed to identify.

The ZDLC approach adds significant value, bringing agility to waterfall and rigour to agile, and the principal tools have been carefully crafted to achieve this.

The Tools

The House of Quality (HoQ-e), adapted to the problem domain of IT for requirements engineering and business requirement traceability to strengthen reliable communication amongst the stakeholders of the ZDLC.

  • Inputs: Structured questioning, Aligned Business and Architectural Analysis, Customer engaged decision making process.
  • Outputs: Prioritised and dependency aware work packages and consensus building across teams.
  • Benefits: Auditable alignment to goals, pattern-based solution definition, strategic alignment and powerful decision-support.

The Theory of Inventive Problem Solving (TRIZ-e), adapted to the problem domain of IT for focused innovative solution definition.

  • Inputs: HoQ analysis (re-used), prioritised list of contradictions to solve.
  • Outputs: Contextualised and measurable innovation options.
  • Benefits: Directed process of ideation; reliability that ideas generated meet needs and can be measured before building.

Requirement Modelling Solution (RMS-e) Used to model and compile user requirements, model process flow diagram for each user requirement and generate test scenarios for each process flow diagram.

  • Inputs: HoQ analysis (re-used), prioritised list of contradictions to solve.
  • Outputs: Generated Software Requirement Documents (SRDs), Process Flow diagrams and Test Cases.
  • Benefits: Accelerated requirement modelling; automated generation of SRDs and requirement verification.

Testable Integration Architecture (TiA-e) Used for low-level requirement consistency, verification of design against requirements and generation of validated artefacts to drive delivery and to assist in governance.

  • Inputs: RMS-e artefacts (re-used), 100% transparent design decisions, prioritised processes and entities for communication.
  • Outputs: Industry-standard models testable against requirements, and generated technical contracts.
  • Benefits: Auditable alignment to requirements, notionally formal, technical contracts to drive development and testing, earlier and more comprehensive defect detection, requirements consistency, lower cost of quality.

Coloured Petri Nets (CPN-e) adapted to modelling process and non-functional requirements and simulation of models against them.

  • Inputs: HoQ-e analysis (re-used), prioritised process-entities.
  • Outputs: Machine readable deployment model for solutions.
  • Benefits: Deployment model can be simulated against non-functional requirements, capacity planning, stress testing support, early defect detections, lower cost of quality.
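
As a flavour of what simulating a deployment model against a non-functional requirement can look like, here is a heavily simplified sketch using a plain (uncoloured) Petri-net-style token game. The net, the rates and the queue-depth requirement are all invented for illustration; CPN-e’s coloured nets are far richer:

```python
import random

# Sketch: simulating a request-processing deployment against a
# non-functional requirement (maximum queue depth). Arrival rate,
# service rate, capacity and the requirement are assumptions.

places = {"queued": 0, "processing": 0, "done": 0}
WORKERS = 2          # capacity of the 'processing' place
MAX_QUEUE_DEPTH = 5  # the non-functional requirement to check

random.seed(42)
violations = 0
for tick in range(1000):
    places["queued"] += random.random() < 0.4            # arrival transition
    if places["queued"] and places["processing"] < WORKERS:
        places["queued"] -= 1                            # start transition fires
        places["processing"] += 1
    if places["processing"] and random.random() < 0.5:
        places["processing"] -= 1                        # finish transition fires
        places["done"] += 1
    violations += places["queued"] > MAX_QUEUE_DEPTH

print(f"ticks violating max queue depth: {violations} / 1000")
```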

Systemic Defect Profiler (SDP-e) for automated root cause analysis.

  • Inputs: TiA models, log files from development work streams or network layer data.
  • Outputs: Formal analysis from design to run-time reconciliation, logging sanitation.
  • Benefits: True, enabled Governance, faster root cause analysis. Much lower cost of quality/defects.

The story begins…

Background

The principles of the Zero Deviation Life Cycle (ZDLC) complement the Agile methodology. By means of employing ZDLC, we are empowered with a unique set of tools that enables us to achieve the following:

  • Mitigate certain risks within an Agile execution,
  • Reduce the manual effort,
  • and accelerate the overall process through intelligent automation.

We observe that the aforementioned objectives lead to an enhancement of effective Agile Adoption.

Key Agile Artefacts

In order to understand the risks and challenges involved, we looked at the important artefacts and tasks in a given Agile execution and these are depicted in the following diagram.

KeyAgileArt

Each of the processes requires effort to avoid waste, and agility to speed up the end results. There are risks and challenges that need to be diagnosed and treated.

Risks facing any Agile Execution

There are key questions that need to be answered in order to model the solution to mitigate the risks. The questions are as follows:

  • How do we continuously Tie-Back User Stories to the original Business Vision?
  • Once we have the Vision in place, how does the Product Owner consistently do Validation and Verification of user stories (user requirements)?
  • Backlog Grooming – How do we continuously Prioritize User Stories?
  • Backlog Grooming – How do we dynamically quantify the dependency amongst User Stories (user requirements)?
  • How are we going to handle the volume of work and Manual Overheads associated with the creation and management of test cases for user stories (user requirements)?
  • How do we ensure Knowledge is managed consistently across this highly complex and distributed Program?

Consequences of the risks

The problem of “continuous Tie-Back of user requirements or User Stories to the original Business Vision” is a constant battle to ensure that what is delivered is what the business asked for (the Voice of the Customer). This process is tedious and time consuming and, if handled incorrectly, may very often result in developing capabilities that yield nothing to the business stakeholders. As a result, validation of the user stories is necessary, which leads to the next question.

Once the vision of the business is in place, how do we instigate a process of consistently validating and verifying the user stories against the vision? The consequence of failing this exercise is rework, as user stories will either 1) not reflect the needs of the business (validation) or 2) be incorrectly formulated against a predefined set of best practices (verification). This exercise of validation and verification (V&V) is time consuming, and may not be thorough, since V&V may be sacrificed for speed, leading to more rework and a growing product backlog in the Agile life cycle.

Product backlog grooming is a vital activity in an Agile environment, and getting this process right defines the success of delivery. There is a continuous need to treat the backlog as new or incorrect user stories stream into the backlog queue. Backlog grooming is a repetitive task of re-prioritising and re-mapping the inter-relationships of user stories so as to plan the next sprints efficiently. As the Product Backlog increases in size, the effort required to prioritise and re-prioritise increases, and so does the likelihood of human error. Incorrect priority leads to incorrect planning of sprints.

The next question addresses the problem of handling the large volume of work and manual overhead associated with the creation and treatment of test cases for each user story. This problem hinders the flow of activities and slows down the Agile process. The creation of test cases is tedious and time consuming and, in the classical state of affairs, these test cases are formulated manually. This allows for human-injected defects in the test cases, which require extra effort to 1) correct the test cases and 2) keep the user stories and test cases in sync.

The last question addresses the challenge of ensuring Knowledge is managed consistently across a complex and distributed Programme. Ideally, the perception of a given user story in the eyes of the Business Analyst should be the same as for the Tester or the Developer, and so on. A common understanding of the description of requirements is required. Yet the complexity of social dynamics and the geographical dispersion of the programme transform this activity into a very challenging and risky undertaking. If knowledge is not managed correctly, communication amongst peers of the development process is ambiguous and unclear, resulting in defects and waste in the agile process. Subsequently the product backlog grows.

The next diagram illustrates a typical Agile process.

Agile Process

The consequences of these risks in an Agile execution are waste, poor quality, low yield and growing cost to the client. Like any process, an Agile process is subject to entropy, and work has to be done to minimise the waste so that the value of agility and speed is not lost. Now the question is: the work that has to be done can either be done manually or with the help of some smart tools and techniques.

Are we comfortable handling these risks manually, or do we agree that these challenges warrant a tools-based mitigation approach? If the answer is yes, then follow part 2 of the blog, wherein we present the Zero Deviation Life Cycle Agile Enablement Product. It was designed to seamlessly blend process automation and formal mathematical rigour into the capability of Agile. It adds rigour to agility without hurting agility, but augmenting it.