
SCM Test Plans – Deploy

In this series we are exploring the importance of adopting a collaborative approach when testing enabling technologies in complex SCM environments. The nature of such deployments implies a hands-on mindset for both the solution provider and the business; success often requires beginning early in the project, with the solution provider owning test plan development, and with the business progressively taking more ownership in later project stages.

The initial test type, the Unit Functional Test (UFT), supports the Prepare project phase, introducing the business to software workflows, terminology, and key functional capabilities. The Integrated Functional Test (IFT) supports the Design phase and ensures solution design satisfies all business process and functional requirements. The System Performance Test (SPT) supports the Construct phase by ensuring the solution conforms to the approved design and satisfies functional and technical requirements. The final test type, the User Acceptance Test (UAT), supports the Validate phase by ensuring plan quality requirements are met and that the solution is deployable.

The UAT test phase:

  • Ensures consistent plan quality on multiple production-level data sets in an end-to-end, production-equivalent test environment.
  • Is the most important test phase and type, and must include tests covering the entire solution scope.
  • Must clearly document test plan priority and Pass/Fail criteria, consistent with business metrics, business process requirements and functional requirements.
  • Must provide a test period of sufficient duration to ensure consistently publishable plan quality in a dynamic, supply chain context.
  • Comprises a milestone gateway for the Deploy project phase.

UATs focus on plan quality, always include live, full-sized production data sets, and may be conducted in multiple phases. Each UAT is workflow-based, representing an actual user workflow on a current production data set as expected in the live production environment. UAT may begin with a tuning phase, where solution parameters are adjusted to improve plan quality, followed by a stabilization phase, where plan quality must remain consistent over a defined period. Successful completion of UAT is the milestone for the Deploy project phase, and is a key focus of the change management program during the Validate phase.

Of the four test types and phases, the business ultimately only cares about UAT, and this is certainly reasonable; users must be able to use the solution to consistently generate high quality plans, or the solution is unusable. The common objective of the solution provider and the business is to deploy a usable solution, implying that UAT is indeed the most important test type; it must therefore be given due diligence and priority. But this does not imply that the other types of tests are unimportant.

Observe that SPTs must pass before UATs can pass, and that well-designed, well-executed SPTs can be expected both to enable successful UAT design and to minimize and speed the resolution of UAT issues, saving valuable time on the project’s critical path, since plan quality concerns can be difficult to understand, measure and resolve.

Similarly, SPTs will only pass if related IFTs and UFTs pass, so these four test types define an intuitive testing hierarchy and dependency. This hierarchy enables one to break the overall testing requirement down into more manageable, progressive steps, and to execute the entire complex of testing activity in stages throughout the course of the implementation, rather than bunching up all the testing at the end during the Validate phase and significantly extending the critical path.

UATs involve repeating various structured planner workflows on current, dynamic production data sets and determining whether each workflow pattern results in a publishable plan/schedule in its business context. Defining sufficient Pass/Fail criteria up front, prior to test plan execution, is critical; it is perhaps the most important consideration in the entire testing process, and it can be the most difficult to get right. The root cause of this difficulty is that planners generally find it easy to spot unacceptable characteristics of a plan, yet identifying necessary and sufficient criteria for a good plan up front is much harder; this will likely require an iterative process and fine-tuning over time. Failure to do this well often results in significant subjectivity in evaluating plan quality during the Validate phase, and may tempt the business, under pressure to adhere to project budget and timeline, to deploy without fully validating the software solution. Successfully completing the three earlier test phases is likely the best way to prepare system users to develop and execute these UATs successfully and enable a more seamless Deploy phase.
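To make this concrete, plan-quality Pass/Fail criteria can often be codified so that evaluation is objective rather than subjective. Below is a minimal sketch in Python, assuming hypothetical plan KPIs (projected fill rate, late orders, capacity overload) and thresholds that would be agreed with the business before UAT execution:

```python
from dataclasses import dataclass

@dataclass
class PlanQualityCriterion:
    """One objective, pre-agreed Pass/Fail check on a generated plan."""
    name: str
    threshold: float
    higher_is_better: bool

    def passes(self, measured: float) -> bool:
        return measured >= self.threshold if self.higher_is_better else measured <= self.threshold

# Hypothetical criteria, agreed and documented prior to test plan execution.
CRITERIA = [
    PlanQualityCriterion("projected_fill_rate", threshold=0.97, higher_is_better=True),
    PlanQualityCriterion("late_order_fraction", threshold=0.02, higher_is_better=False),
    PlanQualityCriterion("capacity_overload_hours", threshold=0.0, higher_is_better=False),
]

def evaluate_plan(plan_kpis: dict) -> bool:
    """A UAT workflow passes only if every documented criterion is met."""
    results = {c.name: c.passes(plan_kpis[c.name]) for c in CRITERIA}
    for name, ok in results.items():
        print(f"{name}: {'PASS' if ok else 'FAIL'}")
    return all(results.values())
```

Criteria like these will still be refined iteratively, but codifying them up front removes much of the subjectivity from plan quality evaluation.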

In our final post in this series, we’ll explore the overall execution methodology: how to test to ensure success.

SCM Test Plans – Construct

In our last post, we introduced the concept of the Integrated Functional Test (IFT) as a milestone for completing the Design phase of an SCM implementation. These tests ensure that all functional requirements can be met with the software solution and that all data requirements are fully understood prior to the Construct project phase. They also ensure that the software optimally supports the business requirements, facilitate solution acceptance, adoption and change management, and enable root cause analysis of more complex scenarios.

The next type of test positioned in complex SCM planning scenarios is the SPT – the System Performance Test – which enables the Construct project phase, prior to Validate. The SPT:

  • Ensures a specific feature, or set of features is working as specified in a full-scope production data set, as it will exist in the live, end-to-end solution.
  • Involves scenario testing multiple variations of production-quality data sets with varying properties representing all relevant business scenarios.
  • Uses test data sets taken directly from production, or massaged to reflect various business scenarios.
  • Primarily focuses on ensuring that the overall solution and system is performing to specification given the data supplied to it; some basic plan quality testing is appropriate at this level.
  • Includes both Data Quality (DQT) and End-To-End (E2E) tests.
  • Is conducted throughout the Construct project phase as appropriate.
  • Comprises a test suite that serves as a milestone gateway for entering the Validate project phase.

Unlike UFTs and IFTs, SPTs are not stand-alone; they leverage full-scope, implementation-dependent, production-like data models covering all key workflows, user scenarios and business cases. The SPT test suite includes tests which focus on key subsets and combinations of functional requirements, as well as numerous workflow tests covering all expected data requirements, use cases and business scenarios, all using production data. These tests and data sets are developed and executed during the Construct phase, and ensure that the solution is working according to spec in all functional areas using real production data. Some tests focus on plan quality, but the primary focus is on basic solution behavior, data quality, end-to-end workflow mechanics and data flow validation. Successful execution of the SPT test suite defines a milestone for entering the Validate project phase, where UAT is conducted, and is a primary focus of change management initiatives during the Construct phase.

Various types of SPTs will likely be needed in the test suite to isolate different solution features in the context of multiple business requirements and scenarios. All business and functional requirements, including technical behavior and performance, must be represented within one or more SPTs to ensure a successful SPT test phase.

Further, well-designed SPTs ensure that the fundamental business and functional requirements are properly understood, correctly designed and modeled in the solution, that planning data is adequately cleansed, complete and accurate, and that the required solution features are working together in context as expected at some basic structural level with production data. Successful completion of an SPT phase during Construct can be viewed as a milestone prior to starting the Validate phase and engaging in UAT. Doing so greatly decreases the probability of encountering significant issues and project delays during the Validate phase itself.

SPTs are significantly different from UFTs and IFTs: SPT data sets tend to be larger, the correct test plan results are not known in advance, and entire workflows and dataflows are being tested, which may contain many different potential failure points. Test plan scope and Pass/Fail criteria must therefore be carefully thought through and clearly documented for each test plan and data set instance, in terms of how to interpret plan results and what constitutes success/failure in each context.

SPTs also introduce a unique kind of complexity related to test plan interdependency: certain types of solution features may require testing in a certain sequence or pattern, where failure at any given point in the test sequence implies that further testing of downstream steps is uninteresting, whether in the same test plan or in dependent test plans which presume a successful upstream test result. In these cases, the test plans must be executed in their dependency order, and this dependency must be clearly documented and followed in the testing and issue resolution process. Failure at any given point in such test precedence implies a need to resolve the issue before continuing with related dependent test plan steps.
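To make the dependency discipline concrete, here is a minimal sketch in Python (using the standard library’s graphlib and hypothetical test plan names) of executing test plans in topological order and skipping everything downstream of a failure:

```python
from graphlib import TopologicalSorter  # standard library, Python 3.9+

# Hypothetical test plans mapped to the upstream plans they depend on.
dependencies = {
    "load_master_data": set(),
    "generate_supply_plan": {"load_master_data"},
    "publish_schedule": {"generate_supply_plan"},
    "pegging_report": {"generate_supply_plan"},
}

def run_in_dependency_order(dependencies, execute):
    """Run test plans predecessors-first; skip plans downstream of any failure."""
    failed = set()
    for plan in TopologicalSorter(dependencies).static_order():
        if dependencies[plan] & failed:
            print(f"SKIP {plan}: resolve the upstream failure first")
            failed.add(plan)  # its own downstream plans are uninteresting too
        elif execute(plan):
            print(f"PASS {plan}")
        else:
            print(f"FAIL {plan}")
            failed.add(plan)
    return failed
```

The same ordering logic applies whether the dependencies are between steps within a single test plan or between separate, dependent test plans.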

Data Quality Tests (DQT) are a subset of SPT, but belong in a class of their own, requiring a detailed knowledge of solution data requirements and the actual business production data. Data quality is a significant factor in most complex supply chain planning improvement projects, and effective DQTs can be key in early detection and correction of data quality issues, thereby mitigating deployment delays and risk. Further, since most production data evolves over time, DQTs may be used to continuously monitor data quality after deployment, supporting a crucial Data Quality KPI in maintaining system performance and plan quality during the life span of the solution. DQTs can typically be automated: data integrity issues are often identified by the solution itself and merely need to be reported to IT and business owners, and DQT rules can easily be codified and included in routine batch processing of data interfaces.
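As one illustration of such codification, a routine batch job might run simple integrity rules over interface records and collect exceptions for IT and business owners. A minimal sketch, with hypothetical rules and a generic record feed:

```python
# Each rule returns an error message for a bad record, or None if the record is clean.
def check_lead_time(record):
    if record.get("lead_time_days") is None or record["lead_time_days"] < 0:
        return "missing or negative lead time"

def check_bom_reference(record, known_items):
    if record.get("component_id") not in known_items:
        return "BOM component references an unknown item"

def run_dqt(records, known_items):
    """Run all data quality rules over an interface feed; return exceptions to report."""
    exceptions = []
    for record in records:
        for msg in (check_lead_time(record), check_bom_reference(record, known_items)):
            if msg:
                exceptions.append((record.get("id"), msg))
    return exceptions  # e.g. routed to IT and business owners by the batch scheduler
```

Because the rules are pure functions of the data, the same checks can run during the Construct phase and then, unchanged, as a post-deployment data quality monitor.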

Clearly documenting the business and functional requirements represented in each SPT helps identify test plan priorities, understand test dependencies, rank SPTs in importance, and ensures that solution design and workflows address all business and functional requirements.

Ensuring that all UFTs, IFTs, and SPTs pass before embarking on UAT is a practical way of breaking down the overall testing process into more manageable components. These first three test types and phases also help to refine and correct solution design, guide issue resolution, provide for early detection of product defects, and more efficiently focus energy and time during the critical Validate phase and its UAT, which we will explore in our next post.

SCM Test Plans – Design

In our last post, we introduced the concept of the Unit Functional Test (UFT) as a milestone for completing the Prepare phase of an SCM implementation. Though these tests seldom fail for product reasons, they serve to align the user community on software UI design, workflows, terminology, functional behavior, data requirements, test plan templates, and test plan execution and issue resolution protocol. Additional ancillary benefits of the UFT include facilitation of solution acceptance and adoption, change management, and enabling root cause analysis of more complex scenarios.

The next type of test positioned in complex SCM planning scenarios is the Integrated Functional Test (IFT), which enables the Design project phase, prior to Construct and Validate. The IFT:

  • Tests a combination of functional requirements on a larger, static data set with a known result;
  • Generally uses production data with some dummy data filled in as needed;
  • Comprises a test suite covering all functional requirements;
  • Includes complete workflow tests representing typical planner use patterns, solution workflows and planning scenarios spanning all relevant business processes;
  • Is useful for explaining and illustrating functional solution behavior in robust, holistic combinations, as well as solution design, data and configuration requirements.

Like UFTs, IFTs are stand-alone, but unlike UFTs, IFTs are more implementation-dependent, configured to combine specific features required in the current SCM implementation. These are more comprehensive tests with known results, run on larger data sets, including full-scale, production-quality data sets, and focused on key business cases and scenarios. IFTs combine multiple solution features in key combinations according to business requirements, including full-scope test plans and data sets which span all functional and business requirements and represent realistic planner workflows. IFTs are developed and executed as part of the Design project phase, depend on the successful passing of the UFT test suite, and are essential for validating overall solution design.

Since building and validating the prototype model is a key requirement of the Design phase, required functionalities and solution configurations can, as they are determined, be conveniently instantiated into a series of IFT data sets by solution architects with minimal effort, and executed by business users as a milestone for entering the Construct phase. Building and executing IFTs should therefore not require significant development time unrelated to key project milestones, and may shorten deployment time by pinpointing design issues early in the project timeline.

Like the UFT, the IFT enables business users to become comfortable with both the software UI and solution behavior in the context of complete user workflows. IFTs are leveraged by change management program initiatives and activities to generate awareness, understanding, confidence and enthusiasm in the solution deployment during the Design phase. This activity equips business users with an understanding of how the software meets their business requirements, helps identify and resolve gaps in the solution design, and positions the business for solution adoption and acceptance. User engagement in developing and executing the IFT test suite enables them to understand how important the next test type, the System Performance Test, is for implementation success, and to drive and own this test phase.

SCM Test Plans – Prepare

In our last post introducing a world-class SCM testing strategy, we considered the importance of a collaborative approach when testing enabling technologies in complex SCM environments. A hands-off mindset in such contexts is naïve at best, generally leading to frustration and sub-optimal solution value, if not outright failure. Experienced SCM professionals recognize the nature of this problem and adopt a comprehensive, collaborative strategy for ensuring success.

Four distinct types of tests are relevant when addressing this challenge, corresponding to four distinct SCM project phases; each kind of test plays a necessary role in ensuring project success.

The first type of test recommended in complex SCM planning scenarios is the UFT – the Unit Functional Test – which enables the Prepare project phase, prior to Design and Construct. The UFT:

  • Tests a specific solution feature on a small, static data set with a known result;
  • Might use dummy data rather than realistic production data;
  • Is useful for explaining and illustrating functional solution behavior, both in isolation and in limited combinations, as well as solution design, data and configuration requirements.

UFTs can be stand-alone, independent of any given software implementation, so they are easily developed and executed early in the Prepare project phase. These tests are typically short and simple, based on small data sets which are relatively easily understood, have a very focused scope, and a known result. They each comprise a simple, clearly-documented workflow and Pass/Fail criterion, such that they can be executed and understood by those who do not yet know the new software.
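For illustration, each UFT can be captured as a small structured record, so that anyone can execute it without prior knowledge of the software. A hypothetical sketch (the feature, data set and expected value here are invented for the example):

```python
# Hypothetical documented UFT: one focused workflow plus a single Pass/Fail criterion.
uft_safety_stock = {
    "id": "UFT-017",
    "feature": "Safety stock calculation",
    "data_set": "uft_safety_stock.csv",  # small, static, with a known result
    "workflow": [
        "Load the UFT data set",
        "Run the safety stock calculation for item A100 at site DC1",
        "Open the inventory policy view for A100 / DC1",
    ],
    "pass_criterion": "Calculated safety stock equals the documented expected value of 120 units",
}
```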

Due to their simplicity and limited scope, the UFT can serve to familiarize users with the application UI, navigation and workflows, train users in solution-specific terminology and behavior, and clarify data requirements for solution architects and data integrators. Since each UFT scenario tends to focus on a single aspect of the solution using a simple, static data model, developing Pass/Fail criteria also tends to be straightforward.

Further, the inherent simplicity of the UFT implies that there is no reason to reinvent the wheel in creating them; the software solution provider should be able to deliver a relevant suite of UFTs out of the box, ready to be used as-is or adapted to current business needs with minimal effort, requiring an insignificant time commitment in the project timeline for positioning them.

Executing and passing a comprehensive set of UFTs relevant to the current software deployment, demonstrating key solution features and behavior in a small, informal test environment, can be conveniently leveraged for super-user training, again with no excessive strain on project timelines or resources, and can be positioned as a key milestone for completing the Prepare phase.

Since UFTs should seldom fail, executing a UFT test suite tends to build momentum and confidence in the new technology through hands-on experience, and provides valuable, practical, business-critical insights into the capability and value inherent in an SCM enhancement project early in the implementation. This avoids misinformation and confusion, alleviates fears, and thereby facilitates change management and encourages solution acceptance and adoption.

If a UFT does happen to fail, the cause might be a solution defect, but more often than not it is rooted in incorrect or inaccurate data or solution configuration. Either way, the earlier the problem is detected and resolved the better, which is all the more reason to execute this type of test early in the project timeline.

Knowing that all required solution features are represented in the UFT test suite, and that all of them have passed as a milestone to complete the Prepare project phase, simplifies resolution of the problems inevitably encountered in more complex scenarios involving combinations of features and complex interactions with less clearly understood data. It may, in fact, avoid such problems altogether through aligned user expectations, correct solution design, and thorough data configuration, cleansing and validation.

Finally, and significantly, well-designed UFTs also enable business users to understand the importance and protocol of the overall testing and issue resolution process, and familiarize them with the required test plan templates and issue reporting and resolution workflows and protocols.

The value of Unit Functional Testing during the Prepare phase is significant, but not readily apparent to those unfamiliar with the difficulty and complexity often inherent in SCM implementations. Engaging users in this activity early in the project with simple tests equips them to engage more fully and efficiently in creating and executing the remaining types of tests leveraged in SCM testing methodology in subsequent project phases, as they take on the next test type, the IFT – the Integrated Functional Test – during the Design phase, which we describe in detail in our next post.

SCM Test Plans – Intro

When deploying enabling technologies in complex supply chain management (SCM) environments, there is significant business benefit in a rigorous testing methodology for validating solution design and performance in each unique deployment. Since plan quality is mission critical in SCM, it is imperative that an appropriate, collaborative testing strategy be developed and executed, especially for highly configurable, data-sensitive applications, to ensure deployment success.

Robust SCM planning applications tend to be highly configurable and extremely data-intensive, such that few deployments are exactly alike. In such contexts it is unrealistic to expect the software solution provider to rigorously test every conceivable solution configuration and data combination within any given software release cycle, implying that each separate instance of software deployment may contain some untested configurations. The testing requirement in these business scenarios inevitably begins to resemble that of purely custom development, where the need for thorough testing is much more intuitive. Given that such systems are mission critical, where deployment failure is unacceptable, it is essential that a collaborative approach to testing be adopted by both the business and the software solution provider.

Designing an effective test strategy in a complex SCM environment requires an intimate knowledge of both the business and functional requirements, as well as the software solution design, features and capabilities. Since it is uncommon for the solution provider to have an intimate knowledge of the business context, and unlikely for business users to be adequately familiar with design and capability of the software application, it is difficult for either party to effectively construct and execute test plans in isolation, so a collaborative effort between the business and the software solution provider in test plan creation and execution is a must.

Unless SCM program managers have relevant prior experience implementing these types of complex systems, they may not realize the importance of their team’s participation in the testing and validation effort, expecting this to be the sole responsibility of the software solution provider. Such a hands-off approach is often a recipe for failure. It is critical that business managers understand this requirement and adopt a collaborative role in the testing process, even taking ownership and driving it, working closely with the software solution provider in joint partnership to ensure deployment success.

In the coming posts, we’ll consider four distinct types of tests, and how each plays a key role in an integrated, comprehensive, effective SCM testing strategy.

BPM vs CM: Designing for Success

Change is essential in facing today’s business challenges, and it will likely become even more so over time, so let’s take a fresh look at it. Perhaps the root cause of transformation failure isn’t in what many call Change Management, but rather in inadequate business process methodology.

Business Process Management (BPM) is the science of designing and improving business processes; Change Management (CM) is the science of helping people transition from Current State to Future State business processes with minimum disruption and uncertainty, and it’s where the blame is generally placed for the 70% failure rate of transformation initiatives.

The definitions of BPM and CM imply significant overlap between the two domains; the similarity is compelling enough to explore what the real differences are, if any. In other words, when we do BPM correctly, shouldn’t this address most, if not all of the CM requirements, and significantly improve the success rate of our transformation efforts?

World-class business process design focuses on fulfilling business requirements driven by business metrics, and defines business requirements as behaviors required by the business (rather than focusing on feature/function). So, presenting detailed Current State and Future State business processes in this context, as behaviors, should naturally convey the change that is expected of our people in the transition, taking all the mystery out of it and facilitating process adoption for everyone involved. If we do this well, engaging everyone in understanding Current State and Future State behaviors, explaining the differences between them at a detailed level and providing a value-based justification for the transition, what kind of change is left to manage?

Certainly, in addition to the actual business processes changing, there might be systems, applications and technical infrastructures that are changing; in fact, the need for system improvements often drives the need for process change. Yet documenting these tool and system related changes, and how user workflows leverage them in Future State processes, is inherent in any thoughtful process design. If we’re doing business process design as we ought, we’re covered here.

There’s also a need for proper user training in Future State processes and workflows, and for engaging subject matter experts and relevant business roles in shaping, defining, and ironing out Future State process details, but best practice BPM methodology takes this into account as well.

What else is left for CM when we’re following best practice BPM? Perhaps the concept of CM as a separate domain has arisen because we’ve not been thorough and thoughtful in our business process design. And perhaps it continues to be a problem because we aren’t thinking about change the way we should, as an inherent aspect of BPM. Addressing transformation failure as a process gap and employing world-class BPM methodology to resolve it is an insight that can drive success.

Value-Driven Business Reporting

Business Intelligence (BI) has held corporate center stage for a while now, but experience tells us that creating a new report for a business and getting it right can be immensely challenging, time consuming, and frustrating.

More often than not, those asking for a new report don’t fully understand what they want, or why they want it, or exactly what to ask for, and those providing the report don’t understand much more. The result can be a number of painful iterations, and significant wasted time and effort. In most IT departments, this type of waste is expensive, in both opportunity cost and monetary cost. What are some keys that can help us collaborate efficiently to get the outcome we need without all the hassle?

The first question to ask when designing a new report, or sprucing up an existing one, is: What action should this report trigger? When a user views the report, what decision(s) are they looking to make? This is another way of stating the business requirement enabled by the report. What does a user want to do as a result of looking at it? This is, by definition, the purpose of the report. Until this is clear, the need is suspect; answer this question before proceeding with any design.

And if we understand what a business requirement is, then we also realize that every well-defined business requirement drives a valuable business behavior, measured by a business metric. This gives the What of our report the needed Why, providing a sense of the business value potential inherent in the report.

Once the purpose and value of the report are clearly understood, the next question to ask is: What data is both necessary and sufficient to inform the user to take this action? The user needs information to act on, not too much and not too little, so determine what this specific information is, what the user needs to know to accomplish their purpose, and how the user is going to interpret this data.

Next, we must source the data, and synchronize it with the user’s purpose. Locate or envision the data store, determine who should be responsible to maintain this data and technical infrastructure, and specify when the data is needed and how often it must be refreshed, so that we understand supporting system requirements to keep the input data up-to-date and accurate. We may determine from this research that the data is not available in any of our systems, that it will be difficult to acquire and maintain, and/or that the user will need to interact with the report to supply additional tribal or business knowledge to get the desired outcome. We may also find that the data is sensitive or crucial to our business, such that special permissions are needed to see or edit it, which implies more difficulty and cost to develop and manage the report.

The next step is to determine how best to present the required information so that the report enables the user to understand what needs to be done with the least possible effort in the shortest amount of time. This typically involves minimizing the number of screens the user needs to view, the amount of scrolling needed, and the number of mouse clicks required to obtain and correctly interpret the relevant information. This usually means maximizing the use of screen real estate by compacting column headers, smart filtering, and eliminating unnecessary detail. Using colors, graphs and pictures to draw the eye of the user to essential details is often a must, and it’s smart to assume the user understands the report and how to use it, rather than designing for a first-time experience, since this will most often be the case and allows for much more efficiency and elegance in the report layout.

With a concept for the report layout and content in mind, it’s time to create a quick and dirty mock-up for the users so they can make tweaks and corrections before investing in the final product. There’s nothing like seeing to put things in perspective, especially for those of us with a more visually-oriented learning style.

Finally, now that we understand the data sourcing and reporting requirements, we begin to understand the cost of creating and maintaining the report, which ought to be justified by the report’s value. So here is where we get to ask and answer the key fiscal question: Is the value we expect from using this report worth the cost we expect in developing and maintaining it?

If the expected cost of the report is warranted by the expected value, if the payoff appears to be worth the investment in time and company resources, then it’s time to get to work and build it, working closely with the user community to ensure that the report provides optimal value for our business.

And, last but not least, be sure to document how to use the report and the justification for its design, why the report is built the way it is. This helps train new users, keeps the report’s purpose and value in view, and prevents recycling through bad designs as new people inherit and begin using the report. Provide this documentation online through a Help button built into the report, so that this information is always readily accessible to everyone with access to the report.

Good reporting design and infrastructure make life easier for everyone, and can bring immense, ongoing value to our business. So let’s be smart about it, and do it right.

Value-Driven Metric Maturity

In this series on value-driven metrics we have looked at the purpose and structure of business metrics, and have identified a number of characteristics of a world-class business metric design. How do we leverage this information to set our sights on actually making this a reality? How do we get started, and make it happen?

Knowing what we need in each metric through a world-class metric definition, and how each component of the metric fits with the others and completes the whole, allows us to identify inter-dependencies and precedence relationships between the metric elements. This insight allows us to sort the requirements into a series of milestones, forming a roadmap for developing a complete metric.

Once we understand the roadmap, we have an intuitive way to classify the maturity or completeness of any given metric, and by extension, the overall maturity of a business metric infrastructure.

Rating | Status     | Description
1      | Conceptual | identified as useful
2      | Manual     | calculated and reported manually
3      | Automated  | calculated and reported with no manual intervention
4      | Systematic | consistently monitored in a business process
5      | Bounded    | trigger points define acceptable behavior
6      | Actionable | corrective actions are defined and carried out
7      | Detailed   | business case details are fully documented and accessible
8      | Integrated | placed in the global metric hierarchy
9      | Tuned      | trend/tension analysis maintains globally optimized trigger point(s)
10     | Mature     | intrinsic to system governance, enables root cause analysis, and full business value is established

By classifying each metric as above and taking a weighted average across all of the metrics required for a business, based on a relative prioritization of their perceived importance, one may derive an overall maturity score for a metrics system and set goals for enhancing overall metrics capability.
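A minimal sketch of that computation in Python, using hypothetical metrics, maturity ratings (1–10 from the table above), and priority weights:

```python
# Hypothetical (metric, maturity rating 1-10, priority weight) entries.
metrics = [
    ("Customer Service", 7, 5.0),
    ("Forecast Accuracy", 5, 3.0),
    ("Inventory Turns", 3, 2.0),
]

def maturity_score(metrics):
    """Weighted-average maturity across all metrics, on the same 1-10 scale."""
    total_weight = sum(weight for _, _, weight in metrics)
    return sum(rating * weight for _, rating, weight in metrics) / total_weight

print(f"Overall metric maturity: {maturity_score(metrics):.1f} / 10")
# (7*5 + 5*3 + 3*2) / 10 = 5.6
```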

A well-defined metrics system is essential for the value-driven enterprise. Understanding world-class metrics methodology is the first step toward realizing it.

Value-Driven Metric Design

In our last post we considered the importance of defining corrective actions for each business metric: business processes with the specific goal of reversing sub-optimal performance trends, triggered by metric trigger points which identify an acceptable range of behaviors associated with the metric. These insights identify two key design components of a well-defined metric: trigger points and corrective actions. What are the other components of a world-class metric design?

To define the ideal design for a metric, we must keep its purpose in mind: to drive a business toward optimal performance. What else do we need in a metric definition to enable an enterprise to reach its full potential?

Firstly, a clear description of the metric, its purpose and scope, and its value to the business, should be documented. This should include any business scenarios where the metric is particularly relevant, how the metric is to be understood and interpreted, clear definitions of any special terms used in defining or describing the metric design or intent, why the current metric trigger points are understood to define acceptable business behavior and how these were determined, and how the metric is correlated with other metrics in the metric hierarchy.

Further, each metric requires a formula, or rule of some kind to translate the underlying system behavior the metric is designed to monitor into an objective measure of system performance that can be used to evaluate how well the business is achieving its goals. This formula is normally mathematical in nature, where the metric value and trigger points are numerical, but this need not be the case. Whatever the method for calculating the metric value, to be most effective it must be clearly documented, understood, and agreed to by all stakeholders. This includes the data sources for populating all of the variables in the formula, who is responsible for maintaining this data, and how frequently this data must be refreshed. When there are debates about the correct formula or data sources, these discussions and the resulting decisions should be captured for reference. Documenting this information removes ambiguity and uncertainty related to interpreting the metric and acting on it, which is especially essential when behaviors are evaluated and corrective actions are taken based on the metric.
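As a simple example of such a formula, a Forecast Accuracy metric is often derived from mean absolute percentage error. This is one common convention rather than the only one; the exact formula, lag, and granularity must be agreed and documented by the stakeholders. A minimal sketch:

```python
def forecast_accuracy(forecasts, actuals):
    """Forecast Accuracy = 1 - MAPE (one common convention, assumed here).

    The data sources (which forecast snapshot, which actuals, at what lag
    and granularity) must be documented and agreed by all stakeholders."""
    errors = [abs(f - a) / a for f, a in zip(forecasts, actuals) if a != 0]
    mape = sum(errors) / len(errors)
    return 1.0 - mape

# forecast_accuracy([95, 110, 100], [100, 100, 100]) -> 1 - 0.05 = 0.95
```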

The business roles and relationships tied to the metric should also be clearly documented and understood, including who is Responsible, Accountable, Consulted and Informed in the execution of corrective actions, and any business consequences for prolonged or significant non-compliance with related business processes. In addition, documenting the history of the evolution of the metric definition, past metric trends, any issues or problems and how they were addressed, and which enhancements were made, by whom and why, all serves to explain and justify current business practices and provide guidance for future deliberations in continuous improvement initiatives.

Finally, the technical details of how the metric is reported should be documented: what is needed in the metric report(s), who interprets the reports, what business decisions are made from them, and how/why these required features enable related business processes and add value to the business. Historical changes to report content and layout should be documented to capture enhancement motivations and prevent regressing back to less optimal states.

Clearly documenting all of this information for each metric, where all stakeholders can easily refer to it as needed, facilitates process adoption and compliance, and enables stakeholders to make knowledgeable, informed recommendations for continuous improvement. The completeness and accessibility of this documentation is a key enabler in driving the business to realize its full potential, and is particularly important in dynamic environments where continuous improvement initiatives require frequent modifications to hierarchical relationships, trigger points, formula(e), data sources, and / or the terminology related to each metric.

In our next post, we’ll consider a world-class metrics maturity model, to help us understand the current state of a metric system, which will help us take steps toward improving it.

Value-Driven Metrics: Corrective Action

In our last post we considered how well-designed metrics enable root cause analysis. Positioning metrics within a global hierarchy of interrelated metrics equips a manager to quickly traverse the hierarchy, drilling down into sub-metrics which are reporting sub-optimal behavior until root causes are identified. But discovering the root cause is just a diagnosis; the real goal is to restore optimal behavior within the system. This capability is provided by identifying corrective actions for each metric.

A metric provides measurable value when it both reports sub-optimal performance, and also triggers responsible business roles to carry out corrective actions: business processes specifically designed to reverse undesirable trends and restore optimum business behavior. When timely and appropriate corrective actions are taken in the context of problematic system dynamics, business goals are achieved more consistently and optimally.

For example, when a Customer Service metric begins trending downward, triggering a root cause analysis workflow might indicate that Forecast Accuracy has also been deteriorating, requiring an update to inventory segmentation and buffering policies, and an update or enhancement to demand forecasting workflows and enabling technologies. If Forecast Accuracy has been stable but inventory targets are still being violated, it might be due to increasing supply variability caused by lack of plan conformance, requiring an enhancement to supply planning capabilities, plan conformance metrics, and a review of execution-level business processes.

As in the above example, corrective actions are often not the same as routine business process execution, since in a well-defined system business processes are inherently designed to produce optimal business behavior when followed correctly. When sub-optimal behavior occurs under such conditions, this is often symptomatic of a systemic, underlying problem with the business processes themselves and/or their corresponding process metrics. This may be due to incomplete or inappropriate design, or to unexpected system dynamics which require an adjustment or enhancement to one or more business processes or metrics. This principle often positions corrective actions within relevant system governance business processes, where problematic situations are often unique and require innovative approaches to resolve.

Identifying effective corrective actions for a given metric requires a thorough analysis of the business behavior being measured, understanding which actions most directly contribute to and influence this behavior, what steps must be taken to most efficiently correct sub-optimal trends and restore and maintain optimal behavior in likely business scenarios, and which business roles are responsible to carry out these actions.

For higher-level KPIs (those with supporting sub-metrics) corrective action often focuses on root cause analysis: exploring the metric hierarchy to understand why a given behavior is occurring and who is responsible to correct it. For base-level metrics (those without sub-metrics), corrective action is often related to parametric changes in planning and/or execution systems, business process or workflow enhancements, training and/or change management. Corrective actions and their corresponding RACI matrices must be well-documented, easily accessible to relevant roles, and continually enhanced as the business environment continues to evolve.

Now that we have considered how metrics should enable root cause analysis and drive corrective actions, next we will look at remaining aspects of a world-class business metric architecture.