In this series we are exploring the importance of adopting a collaborative approach when testing enabling technologies in complex SCM environments. The nature of such deployments demands a hands-on mindset from both the solution provider and the business; success typically requires that testing begin early in the project, with the solution provider initially owning test plan development and the business progressively taking more ownership in later project stages.
The initial test type, the Unit Function Test (UFT), supports the Prepare project phase, introducing the business to software workflows, terminology, and key functional capabilities. The Integrated Functional Test (IFT) supports the Design phase, ensuring the solution design satisfies all business process and functional requirements. The System Performance Test (SPT) supports the Construct phase by ensuring the solution conforms to the approved design and satisfies functional and technical requirements. The final test type, the User Acceptance Test (UAT), supports the Validate phase by ensuring that plan quality requirements are met and that the solution is deployable.

The UAT test phase:
- Ensures consistent plan quality on multiple production-level data sets in an end-to-end, production-equivalent test environment.
- Is the most important test phase and type, and must include tests covering the entire solution scope.
- Must clearly document test plan priorities and Pass/Fail criteria that are consistent with business metrics and with business process and functional requirements.
- Must provide a test period of sufficient duration to ensure consistently publishable plan quality in a dynamic supply chain context.
- Serves as the milestone gateway for the Deploy project phase.
UATs focus on plan quality, always include live, full-sized production data sets, and may be conducted in multiple phases. Each UAT is workflow-based, representing an actual user workflow on a current production data set as it would run in the live production environment. UAT may begin with a tuning phase, in which solution parameters are adjusted to improve plan quality, followed by a stabilization phase, in which plan quality remains consistent over a defined period. Successful completion of UAT is the milestone for the Deploy project phase and a key focus of the change management program during the Validate phase.
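One way to make the stabilization phase concrete is to treat it as a mechanical check that agreed plan-quality KPIs stay within tolerance over a defined run of consecutive planning cycles. The sketch below assumes that framing; the KPI names, thresholds, and window length are hypothetical placeholders to be agreed with the business, not values from this post.

```python
# Hypothetical sketch: verifying the UAT stabilization phase.
# KPI names, thresholds, and the window length are illustrative placeholders.

from dataclasses import dataclass


@dataclass
class CycleResult:
    cycle: str                # planning date or run identifier
    kpis: dict[str, float]    # measured plan-quality KPIs for this cycle


# Minimum acceptable value per KPI (assumed; agree these with the business).
STABILIZATION_THRESHOLDS = {
    "on_time_fill_rate": 0.95,
    "plan_feasibility": 1.00,   # no capacity or material violations
}


def is_stabilized(history: list[CycleResult], window: int = 10) -> bool:
    """Plan quality counts as stable when every KPI meets its threshold
    in each of the last `window` consecutive planning cycles."""
    if len(history) < window:
        return False
    return all(
        result.kpis.get(kpi, 0.0) >= minimum
        for result in history[-window:]
        for kpi, minimum in STABILIZATION_THRESHOLDS.items()
    )
```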
Of the four test types and phases, the business ultimately cares only about UAT, and reasonably so; users must be able to use the solution to generate consistently high-quality plans, or the solution is unusable. The common objective of the solution provider and the business is to deploy a usable solution, implying that UAT is indeed the most important test type; it must therefore be given due diligence and priority. But this does not imply that the other types of tests are unimportant.
Observe that SPTs must pass before UATs can pass, and that well-designed, well-executed SPTs can be expected both to enable successful UAT design and to minimize UAT issues and facilitate their resolution, saving valuable time on the project’s critical path, since plan quality concerns can be difficult to understand, measure, and resolve.
Similarly, SPTs will pass only if the related IFTs and UFTs pass. These four test types therefore define an intuitive testing hierarchy and dependency, one that breaks the overall testing requirement into more manageable, progressive steps and allows the full body of testing activity to be executed in stages throughout the implementation, rather than bunched up at the end during the Validate phase, without significantly extending the critical path.
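This hierarchy can also be treated as an explicit gating rule when scheduling test activity: a later phase should not begin, let alone pass, until every earlier phase has passed. Below is a minimal sketch of that gating, using the four test types named above; the gating logic itself is illustrative, not a prescribed tool.

```python
# Minimal sketch of the test-phase gating implied by the testing hierarchy.
# Phase names follow the post; the gating check itself is illustrative.

TEST_HIERARCHY = ["UFT", "IFT", "SPT", "UAT"]  # earlier phases gate later ones


def ready_to_start(phase: str, passed: set[str]) -> bool:
    """A phase may start only when every earlier phase in the hierarchy has passed."""
    prerequisites = TEST_HIERARCHY[:TEST_HIERARCHY.index(phase)]
    return all(p in passed for p in prerequisites)


# Example: SPT can begin once UFT and IFT have passed, but UAT cannot yet.
passed_phases = {"UFT", "IFT"}
assert ready_to_start("SPT", passed_phases)
assert not ready_to_start("UAT", passed_phases)
```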
UATs involve repeating various structured planner workflows on current, dynamic production data sets and determining whether each workflow pattern results in a publishable plan/schedule in its business context. Defining sufficient Pass/Fail criteria up front, prior to test plan execution, is critical; it is perhaps the most important consideration in the entire testing process, and it can be the most difficult to get right. The difficulty stems from the fact that planners generally find it easy to spot unacceptable characteristics of a plan, yet identifying necessary and sufficient criteria for a good plan up front can be much harder; this will likely require an iterative process and fine-tuning over time. Failing to do this well often results in significant subjectivity in evaluating plan quality during the Validate phase, and may tempt the business, under pressure to adhere to project budget and timeline, to deploy without fully validating the software solution. Successfully completing the three earlier test phases is likely the best way to prepare system users to develop and execute these UATs successfully and to enable a smoother Deploy phase.
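One practical way to reduce that subjectivity is to encode each workflow’s Pass/Fail criteria as explicit, measurable limits agreed before execution, so that a test run either meets every limit or fails on named criteria. The sketch below assumes that approach; the workflow name, metrics, and limits are hypothetical placeholders rather than criteria from this post.

```python
# Hedged sketch: encoding UAT Pass/Fail criteria as explicit, measurable checks
# agreed before test execution. The workflow name, metrics, and limits are
# hypothetical placeholders.

PASS_FAIL_CRITERIA = {
    "weekly_master_plan": {          # maximum allowed value for each metric
        "late_demand_pct": 2.0,      # % of demand planned late
        "overloaded_resources": 0,   # hard capacity violations allowed
        "runtime_minutes": 120,      # plan must publish within the cycle window
    },
}


def evaluate_workflow(workflow: str, measured: dict[str, float]) -> list[str]:
    """Return the failed criteria for one UAT workflow run (empty list = Pass)."""
    limits = PASS_FAIL_CRITERIA[workflow]
    return [metric for metric, limit in limits.items()
            if measured.get(metric, float("inf")) > limit]


# Example run: 1.5% late demand, no overloads, 90-minute runtime -> Pass.
result = evaluate_workflow(
    "weekly_master_plan",
    {"late_demand_pct": 1.5, "overloaded_resources": 0, "runtime_minutes": 90},
)
print("PASS" if not result else f"FAIL on: {', '.join(result)}")
```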
In our final post in this series, we’ll explore the overall execution methodology: how to test to ensure success.