In our last post, we introduced the concept of the Integrated Functional Test (IFT) as a milestone for completing the Design phase of an SCM implementation. These tests ensure that all functional requirements can be met with the software solution and that all data requirements are fully understood prior to the Construct project phase. They also ensure that the software optimally supports the business requirements, facilitate solution acceptance, adoption, and change management, and enable root cause analysis of more complex scenarios.

The next type of test in complex SCM planning scenarios is the Solution Performance Test (SPT), which supports the Construct project phase, prior to Validate. The SPT:
- Ensures a specific feature, or set of features, is working as specified against a full-scope production data set, as it will exist in the live, end-to-end solution.
- Involves scenario testing of multiple variations of production-quality data sets with varying properties representing all relevant business scenarios; test data sets may be taken directly from production, or massaged to reflect various business scenarios.
- Primarily focuses on ensuring that the overall solution and system are performing to specification given the data supplied; some basic plan quality testing is appropriate at this level.
- Includes both Data Quality Tests (DQT) and End-to-End (E2E) tests.
- Is conducted throughout the Construct project phase as appropriate.
- Comprises, as a complete test suite, a milestone gateway for entering the Validate project phase.
Unlike UFTs and IFTs, SPTs are not stand-alone; they leverage full-scope, implementation-dependent, production-like data models covering all key workflows, user scenarios and business cases. The SPT test suite includes tests which focus on key subsets and combinations of functional requirements, as well as numerous workflow tests covering all expected data requirements, use cases and business scenarios, all using production data. These tests and data sets are developed and executed during the Construct phase, and ensure that the solution is working according to spec in all functional areas using real production data. Some tests focus on plan quality, but the primary focus is on basic solution behavior, data quality, end-to-end workflow mechanics and data flow validation. Successful execution of the SPT test suite defines a milestone for entering the Validate project phase, where UAT is conducted, and is a primary focus of change management initiatives during the Construct phase.
Various types of SPTs will likely be needed in the test suite to isolate different solution features in the context of multiple business requirements and scenarios. All business and functional requirements, including technical behavior and performance, must be represented within one or more SPTs to ensure a successful SPT test phase.
Similarly, well-designed SPTs further ensure that the fundamental business and functional requirements are properly understood, correctly designed and modeled in the solution, that planning data is adequately cleansed, complete and accurate, and that the required solution features are working together in context as expected at some basic structural level with production data. Successful completion of an SPT phase during Construct can be viewed as a milestone prior to starting the Validate phase and engaging in UAT. Doing so greatly decreases the probability of encountering significant issues and project delays during the Validate phase itself.
SPTs differ significantly from UFTs and IFTs: SPT data sets tend to be larger, the correct test plan results are not known in advance, and entire workflows and dataflows are being tested, which may contain many different potential failure points. Test plan scope and Pass/Fail criteria must therefore be carefully thought through and clearly documented for each test plan and data set instance, specifying how to interpret plan results and what constitutes success or failure in each context.
SPTs also introduce a unique kind of complexity related to test plan interdependency: certain types of solution features may require testing in a specific sequence or pattern, where failure at any given point implies that further testing of downstream steps is moot, whether in the same test plan or in dependent test plans which presume a successful upstream result. In these cases, test plans must be executed in their dependency order, and this dependency must be clearly documented and followed in the testing and issue resolution process. Failure at any point in such a test precedence implies a need to resolve the issue before continuing with dependent test plan steps.
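As a minimal sketch of this precedence rule (the plan names and dependency graph are hypothetical, not from any specific implementation), test plans can be ordered with a topological sort, and any failure propagated so that dependent plans are skipped rather than run against an invalid upstream result:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Hypothetical SPT plans mapped to the upstream plans they depend on.
dependencies = {
    "load_master_data": set(),
    "net_requirements": {"load_master_data"},
    "capacity_plan": {"net_requirements"},
    "deployment_plan": {"net_requirements"},
}

def run_in_dependency_order(dependencies, run_test):
    """Execute test plans in topological order; mark a plan SKIPPED when
    any upstream dependency failed, so the failure propagates downstream."""
    results, failed = {}, set()
    for plan in TopologicalSorter(dependencies).static_order():
        if dependencies[plan] & failed:
            results[plan] = "SKIPPED"
            failed.add(plan)  # downstream plans of a skipped plan also skip
        elif run_test(plan):
            results[plan] = "PASS"
        else:
            results[plan] = "FAIL"
            failed.add(plan)
    return results

# Example: simulate a failure in the net_requirements plan.
results = run_in_dependency_order(
    dependencies, run_test=lambda plan: plan != "net_requirements"
)
```

Here a failed `net_requirements` plan leaves `capacity_plan` and `deployment_plan` skipped until the upstream issue is resolved, mirroring the documented dependency order.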
Data Quality Tests (DQT) are a subset of SPT, but belong in a class of their own, requiring a detailed knowledge of solution data requirements and the actual business production data. Data quality is a significant factor in most complex supply chain planning improvement projects, and effective DQTs can be key in early detection and correction of data quality issues, thereby mitigating deployment delays and risk. Further, since most production data evolves over time, DQTs may be used to continuously monitor data quality after Deployment, and support a crucial Data Quality KPI in maintaining system performance and plan quality during the life span of the solution. Typically, DQTs can be automated since data integrity issues are often identified by the solution itself and merely need to be reported to IT and business owners, or DQTs can easily be codified and included in routine batch processing of data interfaces.
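As an illustration of how DQTs can be codified for routine batch processing (the field names and rules below are invented for the example, not drawn from any actual data model), each rule is a record-level predicate, and violations roll up into an overall pass rate that can serve as the Data Quality KPI:

```python
# Hypothetical DQT rules: each maps a rule name to a predicate on a record.
rules = {
    "lead_time_nonnegative": lambda r: r.get("lead_time", 0) >= 0,
    "has_source_location":   lambda r: bool(r.get("source_location")),
    "positive_lot_size":     lambda r: r.get("lot_size", 0) > 0,
}

def run_dqt(records, rules):
    """Apply every rule to every record; return per-rule violation
    counts and an overall pass rate (the Data Quality KPI)."""
    violations = {name: 0 for name in rules}
    checks = 0
    for record in records:
        for name, predicate in rules.items():
            checks += 1
            if not predicate(record):
                violations[name] += 1
    pass_rate = 1.0 - sum(violations.values()) / checks if checks else 1.0
    return violations, pass_rate

# Example batch: one bad lead time, one missing source location.
batch = [
    {"item": "A", "lead_time": 5,  "source_location": "DC1", "lot_size": 10},
    {"item": "B", "lead_time": -2, "source_location": "DC2", "lot_size": 50},
    {"item": "C", "lead_time": 7,  "source_location": "",    "lot_size": 25},
]
violations, kpi = run_dqt(batch, rules)
```

Run as part of each interface batch, the violation report goes to IT and business owners while the pass rate is tracked over time, supporting the continuous post-Deployment monitoring described above.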
Clearly documenting the business and functional requirements represented in each SPT helps identify test plan priorities, understand test dependencies, rank SPTs in importance, and ensures that solution design and workflows address all business and functional requirements.
Ensuring that all UFTs, IFTs, and SPTs pass before embarking on UAT is a practical way of breaking down the overall testing process into more manageable components. These first three test types and phases also help to refine and correct solution design, guide issue resolution, provide for early detection of product defects, and more efficiently focus energy and time during the critical UAT Validation phase, which we will explore in our next post.



Robust SCM planning applications tend to be highly configurable and extremely data-intensive, such that few deployments are exactly alike. In such contexts it is unrealistic to expect the software solution provider to rigorously test every conceivable solution configuration and data combination within any given software release cycle, implying that each separate instance of software deployment may contain some untested configurations. The testing requirement in these business scenarios inevitably begins to resemble that of purely custom development, where the need for thorough testing is much more intuitive. Given that such systems are mission critical, where deployment failure is unacceptable, it is essential that both the business and the software solution provider adopt a collaborative approach to testing.
The first question to ask when designing a new report, or sprucing up an existing one, is: What action should this report trigger? When a user views the report, what decision(s) are they looking to make? This is another way of stating the
To accomplish an objective we plan: we break our objective down into a series of steps we can follow to achieve it efficiently. We include sufficient detail to reasonably ensure feasibility, with more detail in earlier steps. We measure how well our plan meets our objective and how well we’re following our plan, and re-plan remaining steps as circumstances change. The better we plan, the more likely we’ll succeed.
The objective in each domain is unique, so the tools and logic employed in each should suit the relevant objective.
We plan/schedule to help us execute: to do the right things at the right times in the right ways to efficiently achieve an objective. But in a dynamic world we’re constantly adjusting plans/schedules to reflect reality, which raises a key question: What parts of a plan/schedule can we change without sacrificing productivity?
Planning/Scheduling workflows need an accurate picture of the conditions expected at the Execution boundary, at the end of the frozen period. Given this starting condition, re-planning/scheduling can align plans with Execution, closely reflecting reality without destabilizing our work environment. When Execution experiences a major disruption, requiring a relaxation of all or part of the frozen period to enable more complete re-planning, execution and planning/scheduling workflows and interfaces should be designed to provide this capability.
Scheduling looks at small subsets of tasks within our overall plan, within a much shorter horizon, considering much more detail and using different measures, to generate efficient task sequences. It requires a completely different skill set and supporting capability than Planning.
Every type of scheduling problem is different, but we know in general that they’re notoriously difficult to solve optimally, being in an altogether different space than planning. Their difficulty is often completely independent of planning complexity, and varies enormously by problem type and size. We need good answers fast, not perfect answers, using common-sense rules of thumb to simplify problems and very specialized optimization tools to do the heavy lifting.
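To make the "common-sense rules of thumb" idea concrete (the jobs and durations below are invented for illustration), a simple greedy dispatch rule such as shortest-processing-time-first produces a good single-machine sequence fast, without attempting optimality across the full combinatorial space:

```python
# Hypothetical jobs as (name, processing_time) pairs. On a single machine,
# running shorter jobs first reduces the average completion (flow) time.
jobs = [("pack", 4), ("cut", 2), ("paint", 6), ("drill", 1)]

def shortest_first_schedule(jobs):
    """Greedy dispatch rule: sequence jobs by ascending processing time;
    return the sequence and the average completion time it achieves."""
    sequence = sorted(jobs, key=lambda job: job[1])
    clock, total = 0, 0
    for _, duration in sequence:
        clock += duration   # this job finishes when the clock reaches here
        total += clock
    return [name for name, _ in sequence], total / len(jobs)

order, avg_completion = shortest_first_schedule(jobs)
```

This rule of thumb handles the easy cases in milliseconds; the genuinely hard variants (sequence-dependent setups, multiple resources, due-date trade-offs) are where the specialized optimization tools mentioned above earn their keep.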
The foundation of Planning, especially in complex scenarios, lies in our ability to break down an objective into a series of steps we can follow to achieve it. Insufficient detail may confuse us, where we don’t know what’s next or we forget something; too much detail wastes time on things that should be obvious and that may change by the time we get there. Knowing how much detail to include is as much an art as it is a science, it’s dependent on context, and often involves a bit of trial and error.
We need to plan for contingencies. We also need a way to measure how well our plan is achieving our objective, a way to monitor how well we’re doing as we execute, and an efficient way to re-plan when we’re getting off track.
And as we execute to a plan, life happens and we re-evaluate and adjust: we re-plan as needed and continue executing until we either meet our objective or give up. Whether we’re running errands, building a house, fighting a war, or managing a supply chain, the same basic principles apply.
Most every plan requires a set of tasks to be done in sequence. When the sequence is intuitive we may not even think about it, but if there’s some flexibility, and certain sequences appear to be much more efficient than others, we’ll likely do some scheduling. We initially plan these tasks at a high level, lumping them into the periods where we need them; then, as we get closer, we include more detail and apply additional skill and logic to determine a good sequence within the context of our plan. This is definitely a kind of planning, but a completely different, more intense kind.