Congratulations! You spent a small fortune acquiring the latest and greatest test tools—now what?
New-generation automated test tools for ERPs are often sold under marketing themes such as "Anyone can use them without technical knowledge" and "All you do is point, click, drag, and drop." The truth is that nothing is that simple: there is a learning curve for implementing and maintaining a test tool, no matter how easy a marketing or sales team makes it appear.
Enterprise Resource Planning (ERP) systems require extensive testing with both manual and automated test tools when they are heavily customized, configured, rebuilt to look and feel like old legacy systems, and interfaced with other applications running on the web or with other home-grown tools. Testing large, complex, highly configured ERPs with automated test tools often requires technical and coding expertise to develop automated test scripts for end-to-end ERP business scenarios such as order-to-cash, hire-to-retire, and purchase-to-pay. The typical ERP functional consultant or subject matter expert (SME) is not equipped to learn an automated test tool on the fly, at least not well enough to test and validate business rules and other test conditions for complex end-to-end scenarios with heavy customizations and integration points with non-ERP applications.
Below is a roadmap designed to help companies and government agencies navigate the landscape of deploying and effectively maintaining automated test tools within their ERP environments.
A test tool can provide your organization with a repeatable test script that can be replayed in multiple environments with different sets of data, but not every process or business scenario in your organization is suitable for automation. A business scenario is a good candidate for automation if it is mission critical to the production environment, time consuming to execute manually, requires verification of multiple GUI objects and calculations, or involves participation from more than one business analyst or SME. The project team, consisting of SMEs, business analysts (BAs), and system architects, will need to decide which processes should be automated. Over the medium and long run, test tools are supposed to save an organization money and pay for themselves.
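The suitability criteria above can be captured as a simple screening check. This is a hypothetical sketch: the `Scenario` fields mirror the factors named in the text, but the specific thresholds (eight hours, ten verifications) are illustrative assumptions, not a standard.

```python
# Hypothetical suitability screen; thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Scenario:
    name: str
    mission_critical: bool       # critical to the production environment
    manual_hours_per_run: float  # time consuming to execute manually
    gui_verifications: int       # GUI objects and calculations to verify
    analysts_involved: int       # SMEs/BAs who participate in a manual run

def is_automation_candidate(s: Scenario) -> bool:
    """A scenario qualifies if it meets any one of the criteria from the text."""
    return (s.mission_critical
            or s.manual_hours_per_run >= 8
            or s.gui_verifications >= 10
            or s.analysts_involved > 1)

order_to_cash = Scenario("order-to-cash", True, 24.0, 35, 3)
print(is_automation_candidate(order_to_cash))  # True
```

In practice the project team would tune the thresholds to its own environment; the value of the exercise is forcing the criteria to be written down at all.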
There are a few good heuristics for calculating whether automating a process is financially feasible. One is to estimate the number of times per quarter the process is executed manually, for instance as part of production support (regression testing). Others include the total number of man-hours per year it takes to run the process, how critical the process is to the business, and how much it currently costs to run the process manually: executing test steps, reporting test results, obtaining screen captures, and executing different sets of data.
For example, at a chemical company running an ERP solution, we estimated that a production-support functional consultant was spending at least 50 percent of his time (about 1,000 man-hours a year) conducting manual ERP testing. This testing was repetitive, time consuming, and error prone, and it lacked robust test-result documentation. It was also heavily reliant on this one employee: if he was out sick or on vacation, no one else in the organization had the knowledge to test the process manually. We automated this process and determined we could reduce the employee's overall manual testing effort by up to 90 percent. After the test scenarios were automated, all the manually executed test scripts could be run unattended, with automatically created test logs and notifications, in one hundred hours for the entire year. This significant time savings allowed the test tool to pay for itself.
The lesson: automate processes that are time consuming, highly repetitive, and span business-sensitive areas; doing so minimizes risk, time, and cost. A proof of concept (PoC) has been crucial for many of my clients to demonstrate the time savings that can be gained through test case automation. In this approach, we select a business process that meets the criteria for automation and is currently executed manually. We first collect metrics on how long the manual testing takes (number of man-hours) and how many times it is executed during the year; we then translate this into a financial cost based on labor hours. Next, we build with an automated test tool for a specific functional area, such as finance, logistics, or maintenance, by automating a process that is system stable (i.e., not subject to changes), business critical, and well documented, and that has valid test data and participation from SMEs and BAs. After building the PoC, we gather metrics to show the time and financial savings gained by automating the previously manual process. We then demonstrate the PoC to upper management and roll it out to other functional areas. The goal of the PoC is to show that test automation can succeed and can be quantified with measurable outcomes of time and cost, which helps secure buy-in from upper management for further automation efforts.
Test Tools Will Not Fix Broken Testing Processes
If your organization does not have documented testing standards, a test strategy, or an overall test plan, a test tool will not magically resolve broken or deficient internal testing processes. Just as buying a plane does not make you a pilot, buying a test tool does not mean all your testing woes are automatically resolved.
In addition to having sound testing practices, such as requirements management, entrance and exit criteria, categorization of defects, reported test metrics, and a documented test strategy, your organization will also need individuals who are technically proficient with the test tools.
In a typical ERP environment, previously automated business processes may change in the live production environment for any of several reasons: new logic, additional custom fields, the application of patches, or new business rules. Any of these can cause an automated test script to fail during playback, because the business process now behaves differently from what was originally recorded and captured with the test tool. Someone will have to update and maintain the automated test scripts to reflect the new production process, even if the test tool maker claims its tool picks up system changes automatically by scanning the screens. It will therefore be necessary, as part of your testing processes, to define the roles, responsibilities, and skill sets needed to maintain automated test scripts over the long run. After the outside contractors and consultants leave your organization, who is equipped in house to maintain and update the library of automated test scripts? In my experience, BAs and SMEs cannot fill this role because they are not sufficiently technical to update the programming code behind an automated test scenario. BAs and SMEs are, for the most part, functional business experts who understand the underlying business process; they are not programming experts who understand the coding techniques needed to maintain an automated test.
Test Cases Need Granular Detail
Most organizations are in denial about the current state of their test cases. When test cases are documented, details from the functional and technical specs fall through the cracks because of time constraints and deadlines that give development priority over testing. In regulated and audited environments, ERP testing adds a further level of sensitivity and risk, and these organizations must be able to show well-documented, well-executed test results that trace to requirements.
To get to that level, it is necessary to have version-controlled, well-documented, keystroke-level test cases that define the test conditions and business rules that must be validated to show the test case meets its requirements. This is especially true when in-house resources take documentation for granted. When outside consultants join the organization and are expected to turn incomplete or missing test cases into automated test scripts, the level of risk increases: the consultant may not be aware of the blueprint activities, baseline requirements, assumptions, or process-flow diagrams behind the project's existing test cases.
As a general rule, I advocate that a functional consultant, SME, or BA be able to step through the documented test case with valid test data at least once before automation is attempted. When I have applied this rule with previous clients, we found that their own SMEs and BAs could not follow the documentation of their own test cases, which forced rework of the test cases.
The second rule I advocate is that the consultant with expertise in the test tool be able to step through the test case manually, on his or her own, in the organization's test environment before automation is attempted. When I have done this at previous clients, I have found the existing test case documentation to be highly flawed: at a minimum, it does not trace to a requirement, is not ERP-role based, is missing test data, and lacks defined test conditions that must be validated. Stepping through the case first pushes team members to write detailed test cases that lend themselves to automation. The best test tool in the world will not know which test conditions or scenarios need to be validated for your environment.
Repeatability, Hand Off, and Storage
For organizations that rely heavily on outside consultants for ERP test case automation, I strongly recommend having the consultants demonstrate to the SME or BA that their automated test scripts can be executed with multiple sets of data, have verification points for the test conditions, capture status bar data, and run successfully on different machines. The SME or BA should approve the automated test script only if it in fact meets all the conditions of the documented test case. Sessions should be held between the automation engineer and the SME or BA to convincingly demonstrate that the automated test scripts reflect the documented test case. This is why detailed, keystroke-level documented test cases are so essential.
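The data-driven replay described above, one script executed against multiple data sets, each pass checked against verification points, can be sketched in miniature. This is a hedged illustration: `post_sales_order` is a hypothetical stand-in for a recorded ERP transaction, and the data-sheet rows are invented.

```python
# Minimal data-driven sketch: one automated script replayed with several data
# sets, each run checked against verification points. post_sales_order is a
# hypothetical stand-in for the recorded ERP transaction.

def post_sales_order(customer: str, quantity: int, unit_price: float) -> dict:
    """Stand-in for the recorded transaction; returns the values to verify."""
    return {"customer": customer, "net_value": quantity * unit_price,
            "status": "Posted"}

# External data sheet: each row is one replay of the same script.
data_sheet = [
    {"customer": "ACME", "quantity": 10, "unit_price": 25.0, "expected_net": 250.0},
    {"customer": "Globex", "quantity": 3, "unit_price": 99.9, "expected_net": 299.7},
]

def run_data_driven(rows) -> list:
    results = []
    for row in rows:
        doc = post_sales_order(row["customer"], row["quantity"], row["unit_price"])
        # Verification points: the conditions the SME or BA signs off on.
        passed = (doc["status"] == "Posted"
                  and abs(doc["net_value"] - row["expected_net"]) < 0.01)
        results.append((row["customer"], "PASS" if passed else "FAIL"))
    return results

print(run_data_driven(data_sheet))  # [('ACME', 'PASS'), ('Globex', 'PASS')]
```

The point of the demonstration session is exactly this structure: the SME can see that the verification points come from the documented test case, not from the tool.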
The automation test engineer needs to show the SME or BA where the automated scripts are stored, how they are executed, and what preconditions or steps are necessary to run them. For example, on one project, I successfully linked together sixteen automated test scripts in a batch job with built-in dependencies, so that a successor test script would not run unless its predecessor had executed successfully. Workflows were automatically triggered if a test script failed, capturing and annotating any errors produced. The SME was trained on how to run this batch job, how to resume execution in the event of a failure, and how to update the data by working with external data sheets linked to the automated test scripts. While the SME was not a test tool expert, he had sufficient knowledge to run a very large batch job independently and conduct basic troubleshooting as well.
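The dependency rule in that batch job can be sketched generically. This is not the actual tool's mechanism; the script names and the `notify` hook are illustrative assumptions standing in for the real automated scripts and workflow notifications.

```python
# Hedged sketch of a dependency-chained batch job: each script runs only if
# its predecessor succeeded; a failure halts the chain and triggers a
# notification. Script names and the notify hook are illustrative assumptions.

def notify(message: str) -> None:
    """Stand-in for the failure workflow the text describes (e.g., an email)."""
    print(f"NOTIFY: {message}")

def run_batch(scripts) -> list:
    """Run scripts in order; a successor runs only if its predecessor passed."""
    log = []
    for name, script in scripts:
        try:
            script()
            log.append((name, "PASS"))
        except Exception as exc:
            log.append((name, f"FAIL: {exc}"))
            notify(f"{name} failed; halting batch. Error: {exc}")
            break  # dependency rule: successors never run after a failure
    return log

def failing_script():
    raise RuntimeError("posting blocked")

batch = [
    ("create_order", lambda: None),
    ("post_goods_issue", lambda: None),
    ("create_invoice", failing_script),
    ("clear_payment", lambda: None),  # never reached: predecessor failed
]
log = run_batch(batch)
print(log)
```

The log doubles as the automatically created test record, which is what lets a trained SME resume the chain from the failed step rather than rerunning everything.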
Additionally, the automated test scripts must be stored in a dedicated test environment subject to audit trails, version controls, and mapping to the appropriate test requirement.
While no test tool is an elixir for bad testing practices, tools can help an organization streamline and add greater discipline to its testing practices. Test tools have been proven to save time and money at organizations across multiple industries, but doing so requires an initial investment of time, technical expertise, buy-in from management, maintenance of automated test scripts, and allocation of resources who can advise on which business processes stand to gain the most from test automation.