Archive for the ‘Agile’ Category

In this article, we illustrate how the tools of the ZDLC framework are employed in an Agile software delivery mode to achieve precision and acceleration in delivering software artefacts. We consider Daikibo, Cognizant's methodology for executing Agile.

Background on Daikibo℠

Daikibo℠ is a combination of the Scrum, Kanban, and XP frameworks, and supports both Agile and Lean principles. Daikibo proposes techniques to go beyond the classical Agile-Scrum approach so that the efficiency and productivity of the development life cycle are maximised. In the orthodox Agile-Scrum approach, a single self-organizing, cross-functional team writes stories, designs solution models, develops code, tests, and produces functionality in each sprint. Daikibo makes this process work for distributed teams geographically dispersed around the globe. The "Hybrid" Daikibo℠ Agile approach separates the cross-functional teams and bifurcates responsibilities (a producer-consumer model) operating in an incremental-iterative pipeline, following the defined Agile principles. A simple three-tiered process model binds Daikibo together:

  1. A Concept Team manages story production – story generation
  2. The stories are consumed by the Delivery Team – story consumption
  3. Finally, the developed software is validated by the Validation Team – story validation

Daikibo Team Structure

The Daikibo Pipeline Approach

The concept teams produce stories in the leading sprint. Near the end of the leading sprint, the delivery teams evaluate the stories and provide effort estimates in points. The concept teams use the effort estimates to adjust the priorities of the stories.

The delivery teams have a sprint planning session on the first day of the new sprint. They review the prioritized stories and commit to completing a number of them. The concept team starts to produce the next set of stories.
After the stories have been tested and accepted by the story owners, the new functionality is demonstrated. In the following sprint, the system integration testing group performs more tests and validates the integration with other systems.
Daikibo Pipeline Structure
Daikibo, in Japanese, means "large scale", which aptly summarises Cognizant's vision to lead the way in Agile development towards building a scalable and distributed agile approach with strong Location Transparency. In order to achieve this vision, we are required to formulate a cohesive, collaborative model of work, which is essential to achieve Flow, one of the Lean principles. The collaborative model is depicted in the following interactive model overview.
Daikibo Collaboration Model
The fundamental element to ensure flow within the conversational dynamics is to focus on the Critical Path. Consider the diagram above. The red line represents the critical path and the flow of information for the project, vis-à-vis pigs and chickens. The Governing Committees (at the top) provide oversight over the scope, the functional and technical aspects of the project, and the Agile/Scrum process. The Supporting Groups (at the bottom) identify existing content and manage the loading of copy into the new site; manage the integration with back-end and third-party systems; oversee the architecture of the site; and plan and build the infrastructure to support the project.

Breaking the Threshold

What has been observed, however, is that it is vital to have the correct governance model in place, supported by the people engaged in the development life cycle, so that quality is continuously achieved. Yet there is always a threshold, a limit to how much quality and acceleration can be yielded by people following a series of guidelines and best practices within a defined organisational framework.
In order to break the threshold and push back the point at which the law of diminishing returns kicks in, one needs to seek an innovative, breakthrough solution which, when coupled with an existing governance model and processes (e.g. Daikibo), will define a new normal. In our story we talk about the introduction of automation and formal validation, which we believe augment the capability of the process towards this new normal. We call this solution the Zero Deviation Life Cycle (ZDLC). This story is about Daikibo℠ and ZDLC. Follow us in our next article, entitled Daikibo, a Cognizant Agile Production, with ZDLC (2/2), where we shall tell this story and demonstrate how ZDLC together with Daikibo℠ change the world of distributed Agile.

Is it possible to increase speed by reducing horsepower? Some brilliant minds believe the only way to increase speed, in this day and age, is to reduce horsepower. Big contradictions should not be compromised but dissolved in elegant solutions. Now that we are in the domain of software engineering and IT, may we ask the following questions with big contradictions:

Can we increase Agility by adding engineering discipline to an agile process? Can we find an equilibrium, the resonant frequency, between the many iterations of backlog grooming and "getting it right first time"? Can we achieve small iterative driving forces yet produce a large yield of work done?

Let us demonstrate how we achieve these using ZDLC in Agile.


In the previous article on ZDLC with Agile, see article here, we asked six key questions about the risks involved in agile executions and elaborated on the consequences of those risks, should they go unchecked and untreated. The questions are:

  • How do we continuously Tie-Back User Stories (User requirements) to the original Business Vision?
  • Once we have the Vision in place, how does the Product Owner consistently do Validation and Verification of user stories?
  • Backlog Grooming – How do we continuously Prioritize User Stories?
  • Backlog Grooming – How do we dynamically quantify the dependency amongst User Stories?
  • How are we going to handle the volume of work and Manual Overheads associated with the creation and management of test cases for user stories?
  • How do we ensure Knowledge is managed consistently across a highly complex and distributed Program?

Now we have a choice. We can either address these risks and challenges in a conventional manual way or employ tools that can help to mitigate these risks by providing key methods and automation techniques to simplify and accelerate the process. The ZDLC Platform proposes tools and techniques to facilitate the process of risk mitigation. This article tells the story.

ZDLC in an Agile Execution

The ZDLC tools employed to address and mitigate the challenges and risks in a given agile execution are HoQ-e and RMS-e. Let us consider each question, one by one and demonstrate how the ZDLC platform improves Agile. The objective of ZDLC in Agile is simple: to add rigour to agility without hurting agility but augmenting it.

  • How do we continuously Tie-Back User Stories to the original Business Vision?

The HoQ-e is founded on the traceability matrices of the House of Quality method. The drill-down process of the HoQ-e allows one to break down high-level business needs and goals into detailed requirements. Each level of drill-down is linked to the others, and navigating the traceability matrices is an innate property of the HoQ method. By innate, we mean the traceability is not an added component that requires additional management activities to preserve, but a core property of the HoQ method, requiring no additional effort to maintain. The ability to trace user requirements, known as user stories in an Agile execution, back to the business vision is a natural function of the process.

HoQ-e Traceability Flow Down

In an Agile execution, the HoQ-e is used to gather user stories and manage the information relating to the Themes (Business Requirements), User Stories (User Requirements) and Technical Stories (Technical Requirements). This is done at the Concept Phase, the upfront thinking.

The House of Quality provides a structured journey that guides the requirement elicitation process. Through the use of smart automation throughout the process, HoQ-e quickens the investigation activities in workshops and enables requirements or user stories to be traced at any point of the life cycle. It connects the business vision with the user stories and, as a result, facilitates the validation process of the user stories, which leads us to the next question.

  • Once we have the Vision in place, how does the Product Owner consistently do Validation and Verification of user stories (user requirements)?

There are two parts to this question. The first part is about validating the user stories, i.e. asking the question “are we implementing the correct user stories?”  and the second part is about verification of the user stories, i.e. asking the question “are we implementing the user stories correctly?”. We start with the validation process.

The HoQ-e provides us with the capability to generate intuitive reporting and analytical data from the requirement gathering process. Heat maps can be automatically generated at any level of the House of Quality but, more importantly, the heat map can also show relationships between attributes of different levels. By way of example, the HoQ-e can generate a heat map illustrating the relationships between the attributes of level 5, the user stories, and the attributes of level 1, the business vision.

HoQ-e Level HMap Transformation Initiative

The heat map provides a holistic view of all the user stories against the business vision. Those stories possessing strong intersecting cells with the vision (highlighted in red) are important and must be implemented in order to achieve the vision. The ease of comprehension facilitates the validation process, anytime and anywhere within the agile life cycle. The facility adds rigour to agile without hurting the speed at which agile should sprint.

The second part of the question is about verification, and this is how we tackle it. Part of the verification activity is to ensure artefacts are built correctly using the right engineering principles and best practices. The ZDLC platform proposes the use of the RMS-e tool (Requirement Modelling Solution) to automate part of the verification process in order to ensure quality whilst accelerating. RMS-e enables business analysts to model the user stories as pictorial diagrams, such as use case models or user story models, and to provision the models with additional rich data. Note that HoQ-e flows naturally into RMS-e, preserving the entire traceability as the journey continues from requirement elicitation (HoQ-e) to requirement modelling (RMS-e).

HoQ-e To RMS-e

RMS-e employs the techniques of Natural Language Processing (NLP) to compile the user stories against a predefined grammar, hence automating part of the verification process. The grammar is written based on best practices and software engineering principles. Typical examples of the principles are: 1) an actor should be a noun; 2) an action should have a verb; 3) an action having two or more verbs should be split into two actions; and so on. The grammar can be configured with any operational principles of the agile life cycle and employed by the NLP parser to automatically verify the user story model. Any aspect of the user story model that does not conform to the grammar is highlighted as a warning to the Business Analyst.

RMS-e NLP Compiler

The Natural Language Processing (NLP) Parser acts as a compiler for the user story models. The suggestions in the compilation report can be adopted or ignored by the business analyst, but the most important aspect of the compilation system is this: ZDLC offloads some parts of the tedious verification process from the human and gives them to the machine to take care of. Any discrepancy in the user story models can easily be identified and rectified should it be a requirement defect. The earlier defects are found and fixed, the cheaper it is to reach quality. In automating the verification exercise, we accelerate the agile execution whilst augmenting quality.
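To make the idea concrete, rule checks of this kind can be sketched in a few lines. This is an illustrative sketch only, not RMS-e's actual grammar engine: the verb list and the sample action phrases are invented, and a real implementation would use a proper NLP part-of-speech tagger rather than a word-list heuristic.

```python
# Illustrative sketch only -- not RMS-e's actual grammar engine.
# A crude word-list stands in for real part-of-speech tagging.
VERBS = {"create", "submit", "review", "approve", "reject", "send", "update"}

def check_action(action: str) -> list:
    """Return warnings for an action phrase, per the stated principles."""
    words = action.lower().split()
    verbs_found = [w for w in words if w in VERBS]
    warnings = []
    if not verbs_found:            # principle 2: an action should have a verb
        warnings.append(f"'{action}': no verb found")
    if len(verbs_found) >= 2:      # principle 3: split actions with 2+ verbs
        warnings.append(f"'{action}': {len(verbs_found)} verbs -- split into separate actions")
    return warnings

# Compile a small (invented) user story model and collect the report.
report = []
for action in ["submit claim", "review and approve claim", "claim form"]:
    report.extend(check_action(action))

for warning in report:
    print(warning)
```

As in RMS-e's compilation report, the warnings are suggestions: the analyst may act on them or ignore them, but the tedious scanning has been done by the machine.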

  • Backlog Grooming – How do we continuously Prioritize User Stories?

In an agile execution there exists a backlog with items or artefacts to be treated and developed. As the backlog items are scheduled to be developed over parallel sprints, the Product Owner checks whether the items are built according to "design". Should there be a problem (non-conformance), after retrospectively analysing the problem, the artefacts are put back into the backlog to be reprocessed and rebuilt. There is then a need to re-prioritise the items; re-prioritisation is a continuous process.


Source: The Importance of the Product Backlog on a Scrum Development Project, Jul 25, 2012, InformIT

The continuous prioritisation of user stories in agile grooming is tedious, time consuming and very often a bottleneck preventing sprints from running smoothly. ZDLC proposes the use of HoQ-e to alleviate the pain of performing continuous prioritisation. When using the HoQ method, prioritising the requirements is not a manual exercise but an automatic one. The HoQ enables the processes of requirement elicitation and prioritisation to be merged into one. In the classical approach to developing a software solution, the Business Analyst prioritises the requirements after having elicited and gathered them; these are two distinct processes. However, as one employs the HoQ-e, both processes are combined. The priority is calculated whilst the HoQ matrices are being populated with requirements and the relationships between the requirements are assessed. Consequently, as one inserts new items into, or removes old items from, the backlog, the priorities of the items are automatically re-calculated by HoQ-e.

HoQ-e as a Backlog in Agile

The priority index is calculated based on the number of High, Medium and Low intersecting cells. The x-axis represents the backlog containing the items or user stories to be developed. As one adds or removes items, the priority is re-calculated automatically. This means that re-prioritising is not a manual effort when using the ZDLC HoQ-e tool. On the one hand, HoQ-e reduces the probability of error when prioritising (moving from manual effort to calculation); on the other hand, HoQ-e accelerates the process of agile grooming.
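A priority index of this kind can be sketched as a weighted sum over the intersecting cells. The 9/3/1 weights below are the classic QFD convention, an assumption here rather than HoQ-e's documented weighting, and the matrix data is invented:

```python
# Minimal sketch of a HoQ-style priority index.
# 9/3/1 weights follow the classic QFD convention (an assumption here).
WEIGHTS = {"H": 9, "M": 3, "L": 1}

# Rows: higher-level attributes; columns: backlog items (user stories).
# Each cell holds the relationship strength, or None for no relationship.
matrix = {
    "US-1": ["H", "M", None],
    "US-2": ["L", None, "L"],
    "US-3": ["H", "H", "M"],
}

def priority(cells):
    """Sum the weights of the non-empty intersecting cells."""
    return sum(WEIGHTS[c] for c in cells if c is not None)

# Re-ranking happens automatically whenever cells or items change.
ranked = sorted(matrix, key=lambda s: priority(matrix[s]), reverse=True)
print(ranked)
```

Adding or removing a backlog item simply adds or removes a column; re-running the sort is the automatic re-prioritisation the text describes.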

  • Backlog Grooming – How do we dynamically quantify the dependency amongst User Stories (user requirements)?

Should one use the priority of user stories solely to schedule the sprints, one may run into the problem of framing unbalanced sprints in the agile execution. Priority on its own is not enough; one also needs to identify the co-dependencies of user stories. The rationale is that if highly prioritised items are put together in a sprint regardless of their dependents (how many other user stories depend on the items), changes to the user stories may require changes to the dependents. This means that the first sprints may well require over 70% of the effort and several intricate changes. There will not be a balanced spread of sprints, which may result in the collapse of agility. ZDLC proposes the use of the HoQ-e to automatically identify the co-dependencies of the user stories, calculated using the same basic principles as the priority calculation.

In HoQ-e, the roof provides the relationships between the x-axis attributes.

HoQ-e The Roof
The type of relationship between the x-axis attributes, and its intensity, depicted as High, Medium or Low, is provisioned by the Business Analyst whilst questioning the Business SMEs or other business stakeholders during the workshops of the concept phase in the agile life cycle. The links in the roof are used by the HoQ to internally calculate the co-dependency indices between the x-axis attributes; hence two dimensions of observation can be used to plan and schedule the sprints. HoQ-e generates a graph that positions the user stories on a priority-versus-dependency model as shown below.


The graph of priority against co-dependency can be used to schedule the sprints in a balanced fashion and is a powerful tool for the Programme Manager. The typical rule of thumb is to start with high priority and low dependency, then high priority and high dependency, then low priority and low dependency, and finally low priority and high dependency.
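The quadrant rule of thumb above can be sketched as a simple sort key. The story data and the threshold of 5 are illustrative assumptions, not values taken from HoQ-e:

```python
# Sketch of the quadrant scheduling rule of thumb:
# high-pri/low-dep first, then high/high, low/low, low/high.
stories = {                        # name: (priority index, co-dependency index)
    "US-A": (8, 2), "US-B": (9, 7), "US-C": (3, 1), "US-D": (2, 8),
}

PRI_CUT, DEP_CUT = 5, 5            # assumed quadrant thresholds

def quadrant(pri, dep):
    """Map a (priority, dependency) pair to its scheduling rank."""
    order = {(True, False): 0,     # high priority, low dependency
             (True, True): 1,      # high priority, high dependency
             (False, False): 2,    # low priority, low dependency
             (False, True): 3}     # low priority, high dependency
    return order[(pri >= PRI_CUT, dep >= DEP_CUT)]

schedule = sorted(stories, key=lambda s: quadrant(*stories[s]))
print(schedule)   # ['US-A', 'US-B', 'US-C', 'US-D']
```

With both indices maintained by the tool, the Programme Manager's scheduling decision reduces to reading the quadrants off the graph.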

HoQ-e The Quadrant Pri vs Dep

With the priority and co-dependency indices being churned inside the engine of the HoQ-e, one is now empowered to mitigate the risks of planning unbalanced sprints in the Agile execution. The HoQ-e enables a balanced spread of Sprints.

  • How are we going to handle the volume of work and Manual Overheads associated with creation and management of test cases for user stories (user requirements)?

In any given agile life cycle, each user story formulated should have corresponding test cases, often more than one test case per user story. The exercise of building test cases is manual and tedious, allowing for human-injected defects. Furthermore, there is also a need to keep the test cases in sync with the user stories; any change in the user stories may result in changes to the test cases. All these activities require manual effort.

The ZDLC Platform proposes the use of the RMS-e tool to address the problem of the manual overhead associated with test case creation and management per user story. In RMS-e, the user stories from the HoQ-e are modelled and, for each user story, a process flow diagram is designed by the business analyst and the architecture teams. The process flow diagram depicts the functional behaviour of how a given user story is expected to run or operate, contains key business rules, and is annotated with rich information where required.

HoQ-e To RMSe to Proc Flow Diag

RMS-e can generate all the possible scenarios of each process flow diagram for each user story automatically. The software runs through all the transitions of the process flow diagrams and creates the scenarios that define the test cases. By default, the test cases take the shape of sequence diagrams but can be formatted into any required syntax.
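Scenario generation of this kind can be sketched as path enumeration over the flow diagram: every distinct route from start to end state is one test scenario. The toy claim-submission flow below is invented, and the traversal is a simplification of what a tool like RMS-e would do:

```python
# Sketch: enumerate every path through a process flow diagram;
# each complete path defines one test scenario.
# The claim-submission flow below is an invented example.
flow = {
    "submit claim":    ["validate claim"],
    "validate claim":  ["approve claim", "reject claim"],
    "approve claim":   ["notify customer"],
    "reject claim":    ["notify customer"],
    "notify customer": [],             # terminal step
}

def scenarios(node, path=()):
    """Yield every start-to-end path through the flow as a tuple of steps."""
    path = path + (node,)
    if not flow[node]:                 # reached an end state: one scenario
        yield path
    for nxt in flow[node]:
        yield from scenarios(nxt, path)

for i, sc in enumerate(scenarios("submit claim"), 1):
    print(f"Test case {i}: " + " -> ".join(sc))
```

Because the scenarios are derived from the diagram, editing the diagram and regenerating keeps the test cases in sync with the user story, which is the synchronisation benefit the text describes.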

Scn Generator in RMS-e

By shifting the manual activity of formulating test cases to an automated process, keeping the test cases in sync with the user stories is performed automatically by the system. RMS-e accelerates the agile cycles and minimises defect injection by the human through automatic generation and management of test cases for each user story.

  • How do we ensure Knowledge is managed consistently across a highly complex and distributed Program?

The answer to this question is a robust yet transparent communication model that enables the same version of the truth to be perceived and shared collaboratively by all the stakeholders. The HoQ-e is predominantly a communication tool, based on the House of Quality that Dr Yoji Akao invented as part of the Quality Function Deployment (QFD) family. The HoQ-e enables the different people of the supply chain or development life cycle to create, share, update and understand the substance of information in order to de-risk the process of decision making.

The management of knowledge across distributed teams is a complex undertaking, especially across geographies and cultures. The ZDLC platform proposes the use of the HoQ-e to provide a reliable communication platform across the diverse teams who are exercising an agile life cycle.

HoQ-e Share

The HoQ-e is a collaborative platform that enables all participants of a project to view or change the information structured over the traceability matrices. Since HoQ-e provides a systematic view of the refinement process, the information populated in the HoQ-e can be traversed either top-down, which is a refinement activity, or bottom-up, which is an abstraction activity. By way of example, a developer writing code for a given user story may want to know which business functions that user story realises; through an abstraction process, the developer moves up the levels of the HoQ-e.
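The bottom-up abstraction walk can be sketched as following parent links between levels. The level data below is an invented claims example; HoQ-e's internal representation is not shown here:

```python
# Sketch of bottom-up traversal of HoQ-e levels: from a user story,
# walk the parent links upwards to recover the higher-level attributes
# it realises. All attribute names below are invented for illustration.
parents = {                        # child attribute -> higher-level attribute
    "US: upload claim photos": "Solution: self-service portal",
    "Solution: self-service portal": "Root cause: paper-based intake",
    "Root cause: paper-based intake": "Problem: slow claim turnaround",
    "Problem: slow claim turnaround": "Function: claims processing",
}

def trace_up(attribute):
    """Return the chain of attributes from the given one up to the top level."""
    chain = [attribute]
    while chain[-1] in parents:
        chain.append(parents[chain[-1]])
    return chain

for level in trace_up("US: upload claim photos"):
    print(level)
```

The same structure traversed in the opposite direction (top-down) is the refinement activity the text describes.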

HoQ-e provides a rich and intuitive collaboration platform to all the stakeholders of the agile value chain. It exposes the traceability matrix to the Business stakeholders, BAs, PMs, Developers and Testers so that they share a common understanding of the User Stories, whenever and wherever they are in the development journey.


With smart automation of the tedious and error-prone activities within the agile life cycle, one can add engineering and structural rigour whilst augmenting the agility of the process. We presented six questions that challenge the process of developing high quality software in an agile process. The tools of the ZDLC platform, namely HoQ-e and RMS-e, propose a unique solution to significantly improve the yield and quality of agile executions. ZDLC constantly attempts to find the equilibrium between the number of iterations in backlog grooming and "getting it right first time".

Can we increase Agility by adding engineering discipline to an agile process? Yes, and we are doing it.

"Is that what you meant?" one asks the client.

Most probably the answer is no.

We do believe this question to be a fundamental one in any development life cycle and the earlier the question is answered, the cheaper it is to achieve quality and customer satisfaction.

In software engineering and IT, we have proposed a number of development models based on iterations and small increments to plan and ensure several small deliveries to the client, so that the latter may confirm "that is what I meant" as quickly as possible and the production process can flow proficiently. Yet should there be too many of those small deliveries, it may become irritating to the client; above all else, getting time from the client or business SMEs to frequently check the work is not feasible, not feasible at all.

So an equilibrium has to be achieved: an equilibrium between the number of iterations and getting it right first time. ZDLC proposes techniques to achieve this equilibrium, wherein we balance the intensity or scientific rigour of getting it right first time against the number of iterations required. ZDLC succeeds in automating and hiding many parts of the scientific rigour and enables the question "is that what you meant?" to be asked as early as possible, based on which remediation and reinforcement activities take place.

ZDLC reaches the equilibrium by enforcing quality by design and this capability is embedded in the tools proposed, and these are as follows:

  • HoQ-e
  • RMS-e
  • TiA-e
  • CPN-e

May we present to you four articles, wherein each article demonstrates how each distinct tool achieves the equilibrium between rigour and iterations.

This article talks about the first tool of the ZDLC, the HoQ-e.

“Ensuring requirements are aligned and consistent is never easy. Add prioritization and it gets scary.”


The article explains how the HoQ-e is used to gather:

  • unambiguous requirements
  • requirements which can be justified against business goals and drivers and
  • requirements which can be traced at any point in the Software development Lifecycle (SDLC)

The objective is to describe our process of engagement to ensure consensus on the approach and potential outcomes over a time line.

The High Level Structure and Usage of the House of Quality enhanced (HoQ-e)

The HoQ-e is a tool of the ZDLC platform. We employ it to enable the following capabilities in the problem of requirement engineering:

  • rapid requirements elicitation,
  • intuitive validation,
  • structured analysis,
  • consensus-based decision-making, based on objectively prioritised and dependency-aware metrics, and
  • dynamic traceability and change-impact assessment.

HoQ-e is based on a proven methodology and technique that enables problems and requirements to be addressed, validated, prioritized and used for transparent decision making, enabling higher quality and more rapid outcomes.

Why do we need it?
  • Use of the House of Quality has been shown to reduce costs of quality by over 50% in the manufacturing Industry
  • HoQ-e is a unique and faithful adaptation of HoQ that has been customized for the Software Development industry and enriched with latest features of technology for enhanced design and improved usability.

What does it do?

  • It enforces structure in how information is captured and represented  and optimizes the effectiveness of the Business Analyst (or decision maker) and the Engagement with the Business SME
  • It graphically represents findings for rapid and effective quality control and governance
  • It traceably aligns captured information to interpretation to decisions – more quickly and effectively

How does it work?

  • It permits conventional styles of working whilst imposing better structure and rigour
  • It enables the Analysis approach to be pre-planned and (if needed) iterated safely
  • It represents information in levels which permits easier identification of patterns (e.g. For Re-Use)
  • It objectively prioritises decisions, permitting incontestable conclusions
  • It objectively quantifies co-dependency allowing for safe Program and Test Planning

The HoQ-e Approach

To illustrate a typical HoQ-e approach, we use a transformation programme for a claims processing platform as an example. Prior to starting the requirement workshop, there were key questions to be answered, and in answering those questions we traced the journey we undertook to achieve an exhaustive yet unambiguous list of user requirements. The questions are as follows:
  • Who are the key stakeholders in claims and how do we classify them?
  • What are the key business functions of the stakeholders?
  • What are the high-level problems in the current claims systems relative to the stakeholders?
  • What is the root cause of the high-level problems for claims?
  • What solution characteristics need to be in play to address the root causes of the high-level problems of claims?
  • What user requirements do we need to implement in order to meet the solution characteristics?

HoQ-e Claims Transformation Output

The HoQ-e preserved a logical traceability from the level 1 questions to the level 5 questions, and this traceability charted the journey that the Business Analyst took to reinforce and enrich the requirement elicitation process without additional effort.

How do we identify the Requirements?

The HoQ-e facilitates the process of identifying and classifying the problem statements. The key activities of the method are listed as follows:

  • Mapping the stakeholders for the domain gives a balanced prioritization.
  • Elicitation of business functions provides a map to business imperatives.
  • Mapping high-level problems to business functions gives a balanced prioritization of the problems to solve.
  • Mapping the root cause to the problems enables engineers to address the root cause of the problem rather than the symptoms of the problems.
  • Mapping the Solution characteristics to the root causes of the problems defines an accurate expression of the user requirements, scoping and constraining the work to be done within the limits of the problems in hand, hence avoiding scope creep.

The ZDLC Approach to using HoQ-e in Software Requirement Engineering

We plan the requirement elicitation workshops with key stakeholders and/or their representatives as follows:
  • Workshop 1: Identify the Stakeholders and their  business functions within the problem domain under investigation. Level 1: Business Functions against Stakeholders.
  • Workshop 2: Identify the high level problems they are currently experiencing for each business function. Level 2: Business Functions against Problems
  • Workshop 3: Identify the root causes of the problems by asking why these problems exist for the business functions. Level 3: Problems against Root Causes
  • Workshop 4: Propose Solution Characteristics to resolve the root causes of the problems. Level 4: Root Causes against Solution Characteristics
  • Workshop 5: Formulate the user requirements that are to be implemented in order to realise the solution characteristics. Level 5: Solution Characteristics against User Requirements
  • Workshop 6: Derive the technical requirements from the user requirements. Level 6: User Requirements against Technical Requirements

HoQ-e Claims Transformation Level 1

HoQ-e enables a logical flow-down process of refining requirements. At each level of the HoQ-e, prior to traversing to the next level, the HoQ matrix is reviewed, justifying the requirements and asking the vital question "is that what you meant?" as early as possible during the planned time with the business SMEs or client. Such reviews lead to the concept of a micro sign-off at each level, ensuring that the next level of the HoQ-e starts from a validated and firm foundation.
We elicit the root causes of the problems and drill down the requirements over a period of three weeks, as depicted in the following diagram:
HoQ-e Claims Drill Down Process
The HoQ-e is designed in such a way that the information and annotations of the requirement attributes are provisioned at the appropriate location within the traceability matrices. With a structured and logical flow-down process, it accentuates the correct questioning techniques during the investigation or study. To reduce ambiguity and enrich requirement attributes, there are two fundamental capabilities of the HoQ-e to be considered:
  1. Placing the Right Information in the Right Place: In order to reduce ambiguity in the description of the requirements, HoQ-e provides a feature to annotate the requirements based on a predefined meta dictionary, which is based on best practices proposed by the IEEE. The additional information provided improves the comprehensiveness of the requirement attributes, and this is a core capability that makes the HoQ-e a proficient communication tool. The annotations for each of the requirement attributes are agreed by consensus at each level prior to a micro sign-off. The following diagram shows how the requirement attributes are annotated. It has been observed in many classical approaches to requirement elicitation that missing information or a lack of requirement enrichment led to expensive change requests.

HoQ-e Claims Transformation Annotation Req
  2. Finding the Right Information at the Right Time: The ability to investigate a user requirement to justify its existence in the solution is essential; this means asking the question why and tracing the user requirements of the lower levels back to the higher-level business goals. Unlike the conventional approach, this exercise is easy and intuitive in the HoQ-e. The latter is a traceability matrix which by default structures the requirements and their roots over a tree model. Traversing the tree nodes empowers one to walk through the requirement definitions and validate their origin against the business goals, at any point in time and by anyone collaborating on the HoQ-e. This capability ensures validation is done correctly and swiftly. So now it is not only about answering the question "is that what you meant?" but urging the client to answer "do you actually mean this? Or is it…"

HoQ-e Claims Transformation Traceability

In three weeks, the quality and richness of the requirements gathered are much more accurate and much less ambiguous than with any classical approach. The yield of quality is significantly increased with less effort required. With the HoQ-e in hand, it is like having a drum beat whilst doing work; it sets the right rhythm to compel the process to flow efficiently.
In the next article we talk about the Requirement Modelling Solution (RMS-e) and how it is employed in the ZDLC Platform to answer the question “is that what you meant?” as we leave requirement elicitation to delve into requirement modelling.
From the ZDLC Team

“Quality cannot be tested, it should be embedded”

The Story

This story is about the motivation behind the Zero Deviation Life Cycle (ZDLC): a motivation driven by real business problems, where projects are plagued by cost and schedule overruns, requirements no longer resemble the business needs, or IT solutions fail to solve the correct problems. Many methodologies have emerged; most of them tackle the management issues of realising a rationalised development life cycle. Very few look at the engineering parts: the parts that define the quality and reliability of the end results, the parts that make one proud of the product. Several topics and lines of thought have been written on the concepts of Application Lifecycle Management within new business dynamics. As a result, the motivation behind the ZDLC and its origin is to propose the concept of Application Lifecycle Engineering (ALE). This is because management constraints cannot dictate how engineering techniques are applied and, above all else, management constraints cannot sacrifice engineering methods for speed and time.

ZDLC is about ALE, or “smart ALM”. It complements all ALMs by focussing on the engineering aspects of a typical Software Development Life Cycle, advancing techniques that employ statistical and probabilistic models, formal methods, simulation, and intelligent automation to speed up the process of developing software whilst augmenting the quality and productivity of the process. Yet all the scientific rigour is well hidden behind clever abstractions and the simplified user interface and experience of the tools.

Many organisations seek to use ZDLC in an agile execution mode, where ZDLC automates many of the tedious and time-consuming software validation processes that may hinder agility. But this story is about an organisation which did not want to implement agile, but wanted the agility of its existing Waterfall model (SDLC) to be increased. This organisation shifted Agile from an execution model to a quality attribute of waterfall, and wanted the best of both worlds. ZDLC thrives in such an environment.

We start with the business drivers of the organisation which are as follows:

  • To capture requirements for projects/programmes (including new projects) more effectively, ensuring that the client receives the downstream benefits of higher quality deliverables and innovation.
  • To bring more agility to the current Waterfall approach to project definition and delivery, distinct from a purely agile approach.

We were asked to come back with our experience of helping clients address these challenges and how it might apply to the organisation in question. We have been working for some time with customers who have had similar challenges and concerns. The outcome and experience from these engagements has allowed us to develop a new platform, the Zero Deviation Life Cycle (ZDLC).

By using Cognizant’s ZDLC platform we have achieved the following measurable benefits:

  • 20-25% saving in the cost of Software Development Life Cycle (SDLC) delivery
  • 40-50% reduction in the cost of quality in support and maintenance cycles

The ZDLC platform also ensures a better decision-making process, leading to a higher degree of project success. This is achieved through a structured, yet unrestricted, requirement gathering process, establishing consistent communication across life cycle stages and controlled impact analysis.

ZDLC is a platform that comprises the following principal tools:

  • HoQ-e – House of Quality enhanced
  • TRiZ-e – Theory of Inventive Problem Solving
  • RMS-e – Requirement Modelling Solution
  • TiA-e – Testable Integration Architecture
  • CPN-e – Coloured Petri Nets
  • SDP-e – Systemic Defect Profiler

These are used by joint client and Cognizant teams across the project life cycle. The ZDLC platform is based on key principles that propose a development approach offering a scientific and quantitative means of managing and measuring ever-changing requirements. This provides the ability for each requirement change to be mathematically analysed and assessed for its impact across business processes. ZDLC has allowed our clients to:

  • instigate a culture of sustainable and measurable innovation within the programme lifecycle
  • trace requirements through the SDLC, providing a consistent communication mechanism
  • prioritise requirements and bring consistency across the SDLC, injecting greater agility into the process
  • eliminate contradictions within a solution
  • minimise defects throughout the SDLC and thereby reduce cost

In these engagements ZDLC increases the power of modelling software applications and creating innovative solutions at pace. Whilst Cognizant has a dedicated agile team, we were guided by the organisation’s focus on bringing greater agility to the current waterfall way of working rather than introducing the agile methodology.

Detailed below are two examples of how ZDLC added agility to the standard Waterfall methodology allowing our clients to realise significant benefits.

  • To prove the benefit of the new approach a comparison exercise was undertaken. Two streams of work were started at the same time to solve the same problem. The objective was to gather sufficient requirements and produce technical specifications to meet a project need. The team using a classic Waterfall approach took 15 days to complete the task, whereas the team using ZDLC took only 3 days (due to intelligent automation).
  • The second example focuses on using ZDLC to bring innovation to a client’s on-line platform. The aims were to increase the number of functional software releases over a 12-month period from 1 to 4 and to deliver a reduction in the cost of quality. ZDLC allowed the team to deliver a 42% reduction in the cost of quality, and in the first six months two functional releases were delivered, putting the client on track to meet their business goals. In addition, ZDLC found 3 major design flaws that the classical Waterfall approach failed to identify.

The ZDLC approach adds significant value, bringing agility to waterfall and rigour to agile, and the principal tools have been carefully crafted to achieve this.

The Tools

The House of Quality (HoQ-e), adapted to the problem domain of IT for requirements engineering and business requirement traceability, strengthens reliable communication amongst the stakeholders of the ZDLC.

  • Inputs: Structured questioning, Aligned Business and Architectural Analysis, Customer engaged decision making process.
  • Outputs: Prioritised and dependency aware work packages and consensus building across teams.
  • Benefits: Auditable alignment to goals, pattern-based solution definition, strategic alignment and powerful decision-support.

The Theory of Inventive Problem Solving (TRIZ-e), adapted to the problem domain of IT for focused innovative solution definition.

  • Inputs: HoQ analysis (re-used), prioritised list of contradictions to solve.
  • Outputs: Contextualised and measurable innovation options.
  • Benefits: Directed process of ideation, reliability that ideas generated meet needs and can be measured before building.

Requirement Modelling Solution (RMS-e). Used to model and compile user requirements, model a process flow diagram for each user requirement, and generate test scenarios for each process flow diagram.

  • Inputs: HoQ analysis (re-used), prioritised list of contradictions to solve.
  • Outputs: Generated Software Requirement Documents (SRDs), Process Flow diagrams and Test Cases.
  • Benefits: Accelerates the process of requirement modelling and automates the generation of SRDs and requirement verification.

Testable Integration Architecture (TiA-e). Used for low-level requirement consistency, verification of design against requirements, and generation of validated artefacts to drive delivery and assist in governance.

  • Inputs: RMS-e artefacts (re-used), 100% transparent design decisions, prioritised processes and entities for communication.
  • Outputs: Industry-standard models testable against requirements, and generated technical contracts.
  • Benefits: Auditable alignment to requirements, notionally formal technical contracts to drive development and testing, earlier and more comprehensive defect detection, requirements consistency, lower cost of quality.

Coloured Petri Nets (CPN-e), adapted to modelling process and non-functional requirements and to simulating models against them.

  • Inputs: HoQ-e analysis (re-used), prioritised process-entities.
  • Outputs: Machine readable deployment model for solutions.
  • Benefits: Deployment model can be simulated against non-functional requirements, capacity planning, stress testing support, early defect detections, lower cost of quality.

Systemic Defect Profiler (SDP-e) for automated root cause analysis.

  • Inputs: TiA models, log files from development work streams or network layer data.
  • Outputs: Formal analysis from design to run-time reconciliation, logging sanitation.
  • Benefits: True, enabled Governance, faster root cause analysis. Much lower cost of quality/defects.

The story begins…


Production environments in large enterprises are expensive to maintain, support, and enhance. Organisations take extreme care to provision and configure their environments and are very wary of change, because a small change may result in a domino effect of failures and costly system downtime. Consequently, engineers are looking for reliable ways of testing environmental changes prior to deploying them to production. One way is to maintain an exact replica of the production environment, used to test and verify changes (both software and hardware). However, this is an expensive solution for testing small recurrent changes and doubles the support cost. Another approach is to simulate. This article is about simulation and how the principles of ZDLC have been employed to simulate the deployment of software changes to an enterprise production environment in the banking arena.


  • To create and deliver an executable simulation model that can be used to validate and test environmental changes, with the objective of reducing the risks associated with deploying a software version into production.
  • Use the simulation results to observe the value and propose recommendations.
  • Use the lessons learned to reinforce the next iterations of modelling, and continue iteratively until all the required components of the infrastructure are captured by the simulation models. This is, in effect, an agile method of simulation.

The Tool Adopted

The simulation tool employed is called CPN Tools, an implementation of the Coloured Petri Net method. CPN Tools is a tool for editing, simulating, and analysing Coloured Petri Nets. A Petri net is one of several mathematical modelling languages for the description of distributed systems; it is a directed bipartite graph in which the nodes represent transitions (i.e. events that may occur, signified by bars) and places (i.e. conditions, signified by circles).
Reminiscent of industry standards (UML activity diagrams, BPMN), Petri nets offer a graphical notation for step-wise processes that include choice, iteration, and concurrent execution. Yet, unlike these standards, Petri nets have an exact mathematical definition of their execution semantics, with a well-developed mathematical theory for process analysis. CPN Tools was originally developed by the CPN Group at Aarhus University, Denmark, and has been used for modelling everything from waste disposal plants to communication protocols. The tool is good for figuring out how something will behave on your network. It is completely graphical, which makes it easy to use, masking all of the underlying maths from the user; this is a key principle of the ZDLC platform. The tool uses ML, a functional programming language developed by Robin Milner at the University of Edinburgh.
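The execution semantics referred to above can be sketched in a few lines of Python. This is a minimal, illustrative model of an uncoloured Petri net (not the CPN Tools API): a transition is enabled when every input place holds enough tokens, and firing consumes the input tokens and produces the output tokens.

```python
# Minimal sketch of Petri net execution semantics (illustrative only):
# a transition is enabled when every input place holds enough tokens;
# firing consumes input tokens and produces output tokens.
def enabled(marking, transition):
    return all(marking.get(p, 0) >= n for p, n in transition["in"].items())

def fire(marking, transition):
    assert enabled(marking, transition)
    m = dict(marking)
    for p, n in transition["in"].items():
        m[p] -= n
    for p, n in transition["out"].items():
        m[p] = m.get(p, 0) + n
    return m

# A request moves from 'waiting' to 'served' when a 'server' token is free;
# the server token is returned, modelling a reusable resource.
serve = {"in": {"waiting": 1, "server": 1}, "out": {"served": 1, "server": 1}}
marking = {"waiting": 2, "server": 1}
marking = fire(marking, serve)
print(marking)  # {'waiting': 1, 'server': 1, 'served': 1}
```

Coloured Petri nets extend this idea by attaching typed data ("colours") and time stamps to tokens, which is what CPN Tools adds on top of the basic firing rule.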
The Network Architecture and Component
The production environment sits on a network architecture which has been defined and documented (though not always). There are a few fundamental behaviours of a network architecture that must be modelled in CPN Tools in order to simulate the behaviour of state machines or of software running over the network. In this study, we considered a web application system for an online banking platform. The simulation model was created for a web server application residing on an HTTP layer. The architecture was created in CPN Tools with the objectives to:
  • Calibrate the simulation model against the production environment statistics
  • Identify the margin of error (deviation) between the production system (real) and the simulation system (virtual)
  • Validate conformance based on 5 distinct scenarios.

The 4 network behaviours that were required to be studied, and hence modelled in CPN to emulate the behaviour of the production system, are explained as follows:

  • FIFO – First In First Out Queuing System – an abstraction for organising and manipulating data relative to time and prioritisation. The expression describes the principle of a queue-processing technique for servicing conflicting demands by ordering processes on a first-come, first-served (FCFS) basis: what comes in first is handled first, what comes in next waits until the first is finished, and so on. It is an extension of the traditional pipe concept on Unix and Unix-like systems, and one of the methods of inter-process communication (IPC).


  • Round Robin – Load Balancing Mechanism – Round-robin DNS is usually used for balancing the load of geographically distributed web servers. For example, a company has one domain name and three identical home pages residing on three servers with three different IP addresses. When one user accesses the home page, the request is sent to the first IP address; the second user who accesses the home page is sent to the next IP address, and the third user to the third. In each case, once an IP address is given out, it goes to the end of the list; the fourth user, therefore, is sent to the first IP address, and so forth.

Round Robin

  • Sticky IP Address – A sticky IP is one that is assigned dynamically (through DHCP) but has a habit of staying with your broadband modem as long as you do not reset it. The difference between a static IP and a sticky IP is that a static IP is essentially handed out by the SBC server with no authentication needed from the computer/router/modem; you simply configure the IPs statically in your TCP/IP or router settings. A sticky IP is the opposite in a sense: you still receive the same IP every time, but you have to authenticate, and the IP is issued through the RADIUS server once you authenticate, so you can simply set your equipment/OS to obtain the IP automatically.
  • Thread Pooling – The thread pool pattern is where a number of threads are created to perform a number of tasks, which are usually organised in a queue. Typically, there are many more tasks than threads. As soon as a thread completes its task, it requests the next task from the queue, until all tasks have been completed; the thread can then terminate, or sleep until new tasks are available. In the diagram, the task queue holds many waiting tasks (blue circles); when a thread opens up (green box with dotted circle), a task comes off the queue and the open thread executes it (red circles in green boxes). The completed task then “leaves” the thread pool and joins the completed-tasks list (yellow circles).

Thread Pooling
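The thread pool pattern described above can be sketched with Python's standard library; the squaring task is a stand-in for a real unit of work. Three threads service ten queued tasks, with each idle thread pulling the next task from the queue.

```python
# Thread pool sketch: more tasks than threads; each idle thread pulls
# the next task from the internal queue until all tasks are done.
from concurrent.futures import ThreadPoolExecutor

def task(n):
    return n * n  # stand-in for a unit of work

with ThreadPoolExecutor(max_workers=3) as pool:   # 3 threads...
    results = list(pool.map(task, range(10)))     # ...10 queued tasks

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Note that `pool.map` returns results in submission order even though the threads complete tasks concurrently, which mirrors the queue discipline in the diagram.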

The 4 network behaviours are modelled as CPN components inside the simulation model, and the following diagrams depict the design of the network behaviours or network elements in CPN Tools.
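Outside CPN Tools, two of these behaviours can be sketched in plain Python for intuition (the server IPs are hypothetical): a FIFO queue served first-come, first-served, and a round-robin balancer that cycles through a server list.

```python
# Illustrative sketch of two modelled behaviours (not the CPN components):
# a FIFO queue and round-robin dispatch over a list of servers.
from collections import deque
from itertools import cycle, islice

fifo = deque()
for request in ["r1", "r2", "r3"]:
    fifo.append(request)          # arrivals join the back of the queue
first = fifo.popleft()            # service is strictly first-come, first-served

servers = cycle(["10.0.0.1", "10.0.0.2", "10.0.0.3"])  # hypothetical IPs
assignments = list(islice(servers, 4))  # the 4th request wraps to the first server

print(first, assignments)
# r1 ['10.0.0.1', '10.0.0.2', '10.0.0.3', '10.0.0.1']
```

In the CPN model the same disciplines are expressed with places, transitions, and token ordering rather than Python data structures.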


Model & Solution

Performance and Reliability are often central issues in the design, development, and configuration of systems. It is not always enough to know that systems work properly, they must also work effectively. Simulation-based performance analysis of a model involves a statistical investigation of output data, the exploration of large data sets, the appropriate visualization of output data, and the verification and validation of simulation experiments. Examples of performance measures that can be calculated by extracting data from occurring binding elements include:
  • End-to-end delay, where one of the variables of the transition is bound to an object, e.g. a packet that has arrived at its final destination.
  • Network reliability, where packet loss can be represented as 1) a particular transition occurring and 2) a transition occurring with particular bindings.
In many cases, timed CPNs will be used for performance analysis. A timed CPN can model how much time certain activities require, and how much time passes between other activities. In most cases it is insufficient to model only the average amount of time that a certain activity takes; it is necessary to include a more precise representation of the timing of the system. Random distribution functions can be used to model time delays precisely. Such a function can then be used when modelling the arrival of new items to a system. A new item is created, i.e. it arrives in the system, when the transition Arrive occurs. The time stamp for the token on place Next determines when the Arrive transition will be enabled, and the expTime function is used to ensure that the inter-arrival times of new items are (approximately) exponentially distributed.
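The arrival process just described can be sketched in Python with exponentially distributed inter-arrival gaps; the rate of 2 arrivals per time unit is an assumed figure for illustration, playing the role of the expTime-style delay.

```python
# Sketch of an arrival process with exponential inter-arrival times
# (rate = 2 arrivals per time unit, chosen for illustration).
import random

random.seed(42)                       # reproducible run
rate = 2.0
arrival_times, t = [], 0.0
for _ in range(1000):
    t += random.expovariate(rate)     # the expTime-style random delay
    arrival_times.append(t)

mean_gap = arrival_times[-1] / len(arrival_times)
print(round(mean_gap, 2))             # close to the theoretical 1/rate = 0.5
```

In a timed CPN the same effect is achieved by stamping the token on place Next with the current time plus a sampled delay, so the Arrive transition fires at exponentially spaced instants.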
Use of Monitors in CPN

A monitor is a mechanism in CPN Tools that is used to observe, inspect, control, or modify a simulation of a CP-net. Monitors can inspect both the markings of places and the occurring binding elements during a simulation, and they can take appropriate actions based on the observations.
Purposes of Monitors
  • Stopping a simulation when a particular place is empty
  • Counting the number of times a transition occurs
  • Updating a file when a transition occurs with a variable bound to a specific value
  • Calculating the average number of tokens on a place

The Kinds of Monitors

The different kinds of monitors are explained as follows:
  • Break point monitors are used to stop a simulation.
  • Data collector monitors are used to extract numerical data from a net. The numerical data is then used to calculate statistics, and the data can be saved in log files. The log files can then be post-processed, e.g. by importing them into spreadsheet programs or plotting them in graphs.
  • Write-in-file monitors are used to update files during simulations.
  • User-defined monitors are generic monitors that can be used for any purposes that are not covered by the other kinds of monitors. For example, a user-defined monitor could be used to update a message sequence chart (MSC) or to check that a particular property holds during the simulation.
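Monitor-style instrumentation can be sketched as callbacks observing each simulation step. This is an illustrative analogy only (CPN Tools monitors are configured inside the tool, not written like this): one monitor counts occurrences of a named transition, the other signals a break point when a place runs empty.

```python
# Illustrative monitor sketch (not the CPN Tools API): callbacks that
# observe each step and can count occurrences or stop the run.
class CountMonitor:
    """Data-collector style: count how often a named transition occurs."""
    def __init__(self, name):
        self.name, self.count = name, 0

    def observe(self, transition):
        if transition == self.name:
            self.count += 1

class BreakpointMonitor:
    """Break-point style: stop the simulation when a place runs empty."""
    def __init__(self, place):
        self.place = place

    def should_stop(self, marking):
        return marking.get(self.place, 0) == 0

counter = CountMonitor("Arrive")
stopper = BreakpointMonitor("waiting")
for step in ["Arrive", "Serve", "Arrive"]:   # a tiny hypothetical trace
    counter.observe(step)

print(counter.count)                         # 2
print(stopper.should_stop({"waiting": 0}))   # True
```

A write-in-file monitor would follow the same shape, appending a line to a log file inside `observe` so the run can be post-processed in a spreadsheet.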

Statistical Observation

The main purpose of the simulation exercise is to emulate the behaviour of the production system so that the output of the exercise can be used to diagnose and predict potential problems of deploying changes to the system. After executing the simulation, a statistical report is produced that illustrates the performance and reliability of the simulation model. In order to emulate the behaviour of the production system, a probability distribution is often used to configure the workings of key network elements of the model (e.g. assigning a Poisson distribution to define the arrival rate at a server).
In order to find the probability distribution that best represents or mimics the behaviour of the system, we first take real production data from the production logs and profile the output of the logs to find out which distribution best fits the data. In this study it was observed that the distribution that best fits the behaviour of the production system was the Erlang distribution.
The Erlang distribution is a continuous probability distribution with wide applicability. It was developed to examine the number of telephone calls which might be made at the same time to the operators of switching stations, and was later expanded to consider waiting times in queueing systems in general. The distribution is now used widely in the field of stochastic processes.
The next activity carried out was to tune the model to best fit the data, so as to quantify and minimise errors and maximise the accuracy of the simulation model. The production data output was compared to the simulation data output to calibrate the simulation model.
Simulation vs. real
The diagram shows the results of calibrating the simulation responses to the production responses. There is an economic factor to be considered when calibrating the simulation model: at some point a decision has to be made, and we agree that the remaining deviation between the production system and the simulation model is an acceptable margin of error. The agreed margin of error is then taken into account when decisions are made based on the statistical output of the simulation exercise. It was observed that the calibration of the simulation model is a very critical exercise in this study.
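The calibration check can be sketched as a percentage deviation between production and simulated response times; the measurements below are made up for illustration, not taken from the study.

```python
# Sketch of the calibration check: mean absolute percentage deviation
# of the simulation from production (figures below are hypothetical).
production = [120, 135, 150, 142, 128]   # ms, hypothetical measurements
simulated  = [117, 139, 146, 145, 124]   # ms, hypothetical simulation output

def margin_of_error(real, virtual):
    """Mean absolute percentage deviation of the simulation from reality."""
    return sum(abs(r - v) / r for r, v in zip(real, virtual)) / len(real) * 100

print(round(margin_of_error(production, simulated), 1), "%")  # 2.7 %
```

A figure like this, computed per layer (HTTP server, WAS), is what the agreed margin of error is judged against.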
The Simulation Experiments and Observations
Experiment 1 was to compare the simulation results (response time and thread usage) with the production environment results, with a 5-second delay (latency) on the back end.
  • The articulated objectives and expectations of the experiment were: change the configuration parameters and check whether the results align with the production environment results. The changes were to 1) Max Connections and 2) the thread pool.
  • Brief description of method: change the simulation parameters to the production environment configuration (1 HTTP server and 6 WAS clones (presentation layer servers)); change the back-end delay to 5 seconds; run a 50-concurrent-user simulation and compare the thread pool usage and response times with the production environment.


Experiment 2 was to compare the simulation results (response time) with the production environment under a burst in throughput (transactions per second, TPS); the production environment can only sustain 10 TPS.
  • The articulated objectives and expectations of the experiment were: 1) step up the load with high back-end latency, run the simulation on a scaled-down version of the model, and then compare the results with the production system.


Between the production system and the simulation model, we have achieved a 3% margin of error for the HTTP Server and a 4% margin of error for the WAS (WebSphere Application Server).
Experiment 3 was to exercise the simulation model with different request types and identify the various distinct distributions of request type.
  • The articulated objectives and expectations of the experiment were: 1) simulate skewed inputs; 2) group requests based on their type and run the simulation for each type; and 3) as a next stage, develop the model further (separate the distributions seen in the real data and change the model to include two separate distributions).


As we analysed the outcome of experiment 3, we identified different pages on the web application that make up the overall shape of the curves seen in the data. We suspected there were approximately 4 page types contributing to the separate distributions.
In Experiment 4 we compared the simulation results (response time and thread usage) with the production environment results, with a 5-second delay on the back end.
  • The articulated objective and expectation of the experiment was: 1) a DDoS (Distributed Denial of Service) attack simulation – a normal mix of requests plus a large number of authentication requests.
  • Brief description of method: 1) the simulation was run with 4 HTTP servers and 24 WAS clones (cloned servers) with production latency for each layer; 2) 8,000 concurrent users, each user accessing one page; and 3) the simulation took 2 hours to finish.
  • Observations: the web server thread usage reached capacity, while the WAS clones’ thread usage remained under 200.


Experiment 5 was to compare the simulation results (response time and thread usage) with the production environment results, with a 5-second delay on the back end.
  • The articulated objective and expectation of the experiment was: 1) increase the timeout on one WAS cloned server to infinity while running and observe the effects.
  • Brief description of method: 1) number of concurrent users: 100; 2) number of HTTP servers: 1; 3) number of WAS clones: 6; and 4) run for 4 minutes under the production configuration, then increase the processing time to infinity for just one clone and run for another 4 minutes.


Simulation is always cheaper than maintaining a replica of the production environment, but there is a deviation between the virtual system and the real system. However, as we have demonstrated, there are techniques which the ZDLC platform adopts to mitigate the risk of deviation and produce a minimal margin of error: a level at which sound decisions about the behaviour of the production system can still be made. Between the production system and the simulation model, we achieved a 3% margin of error for the HTTP server and a 4% margin of error for the WAS (WebSphere Application Server).
The ZDLC platform employs CPNTools to propose the Enterprise Simulation Product with a clear objective of mitigating the risk of cascading system failures in the production environment whenever software changes are implemented. It does so by employing the techniques of simulation and statistical modelling to provide a cheap yet reliable solution to be proactive and predict the impact of changes to a production system.
A summary of the main activities of the simulation exercise is as follows:
  • Model the structural aspect of the production system in CPN Tools
  • Identify and design the main network behaviours or network elements (in our case a FIFO queue, the Round Robin load-balancing algorithm, Sticky IP, and Thread Pooling)
  • Gather statistics from the production environment and find the probabilistic distribution(s) that best fit(s) the statistics of the production environment
  • Configure the simulation model with data and place the probabilistic distribution in the simulation model to emulate behaviour of the production system
  • Place Monitor to gather statistics from the simulation model
  • Run the simulation model against some predefined scenarios and check for / fix errors
  • Calibrate the simulation model against the production environment results
  • Agree on a margin of error for the deviation
  • Carry out simulation scenarios to verify and validate software changes to the production environment, and predict the behaviour of the production environment under given conditions (i.e. running what-if scenarios).
  • Produce reliable decision support and analysis report.


The principles of the Zero Deviation Life Cycle (ZDLC) complement the Agile methodology. By means of employing ZDLC, we are empowered with a unique set of tools that enables us to achieve the following:

  • Mitigate certain risks within an Agile execution,
  • Reduce the manual effort,
  • and accelerate the overall process through intelligent automation.

We observe that the aforementioned objectives lead to an enhancement of effective Agile adoption.

Key Agile Artefacts

In order to understand the risks and challenges involved, we looked at the important artefacts and tasks in a given Agile execution and these are depicted in the following diagram.


Each of the processes requires effort to avoid waste and agility to speed up the end results. There are risks and challenges that must be diagnosed and treated.

Risks facing any Agile Execution

There are key questions that need to be answered in order to model the solution to mitigate the risks. The questions are as follows:

  • How do we continuously tie back user stories to the original business vision?
  • Once we have the vision in place, how does the Product Owner consistently perform validation and verification of user stories (user requirements)?
  • Backlog grooming – how do we continuously prioritise user stories?
  • Backlog grooming – how do we dynamically quantify the dependencies amongst user stories (user requirements)?
  • How are we going to handle the volume of work and manual overhead associated with the creation and management of test cases for user stories (user requirements)?
  • How do we ensure knowledge is managed consistently across this highly complex and distributed programme?

Consequences of the risks

The problem of “continuous tie-back of user requirements or user stories to the original business vision” is a constant battle to ensure that what is delivered is what has been asked for by the business (the Voice of the Customer). This process is tedious and time consuming, and if handled incorrectly it may result in developing capabilities that yield nothing for the business stakeholders. As a result, validation of the user stories is necessary, which leads to the next question.

Once the vision of the business is in place, how do we instigate a process of consistently validating and verifying the user stories against the vision? The consequence of failing this exercise is rework, as user stories will either 1) not reflect the needs of the business (validation) or 2) be incorrectly formulated against a predefined set of best practices (verification). This exercise of validation and verification (V&V) is time consuming, and may not be thorough, since V&V may be sacrificed for speed, leading to more rework and a growing product backlog in the Agile lifecycle.

Product backlog grooming is a vital activity in an Agile environment, and getting this process right defines the success of delivery. There is a continuous need to treat the backlog as new or incorrect user stories stream into the backlog queue. Backlog grooming is a repetitive task of re-prioritising and re-mapping the inter-relationships of user stories so as to plan the next sprints efficiently. As the product backlog increases in size, the effort required to prioritise and re-prioritise increases, and human error increases with it. Incorrect priorities lead to incorrect planning of sprints.

The next question addresses the risk of handling the large volume of work and manual overhead associated with the creation and treatment of test cases for each user story. This problem hinders the flow of activities and slows down the Agile process. The creation of test cases is tedious and time consuming, and in the classical state of affairs these test cases are formulated manually. This allows human-injected defects into the test cases, which require extra effort to 1) correct the test cases and 2) keep the user stories and test cases in sync.

The last question addresses the challenge of ensuring knowledge is managed consistently across a complex and distributed programme. Ideally, the perception of a given user story in the eyes of a Business Analyst should be the same as for the Tester or the Developer, and so on; a common understanding of the description of requirements is required. Yet the complexity of social dynamics and the geographical dispersion of the programme transform this activity into a very challenging and risky issue. If knowledge is not managed correctly, communication amongst the peers of the development process is ambiguous and unclear, resulting in defects and waste in the agile process. Subsequently, the product backlog grows.

The next diagram illustrates a typical Agile process.

Agile Process

The consequences of these risks in an Agile execution lead to waste, poor quality, low yield, and growing cost to the client. Like any process, an Agile process is subject to entropy, and work has to be done to minimise the waste so that the value of agility and speed is not lost. The question is whether that work should be done manually or with the help of some smart tools and techniques.

Are we comfortable handling these risks manually, or do we agree that these challenges warrant a tools-based mitigation approach? If the answer is yes, then follow part 2 of the blog, wherein we present the Zero Deviation Life Cycle Agile Enablement Product. The latter was designed to seamlessly blend process automation and formal mathematical rigour into the capability of Agile. It adds rigour to agility without hurting it; rather, it augments it.