Shifting Testing Left Is a Team Effort

Summary:
There is a lot of talk in the testing world about shifting left. Basically, “shift left” refers to moving the test process to an earlier point in the development process, independent of the development approach. This article explores a case in which shift-left was applied, and the lesson is that shifting left cannot be achieved by testers alone: it must result from a team effort.

Shift-left is an approach to software development in which testing is performed earlier in the lifecycle, in effect moved left on the project timeline. Shifting left does not refer only to dynamic testing, or actually running the system; it also covers static testing, such as reviews and inspections, and it favors more unit and integration testing over late, GUI-based testing.

In an agile or DevOps environment, it often means testing small parts of the software as early as possible instead of testing at the end of the sprint. In that sense, shift-left and continuous testing share some of the same goals and practices, although continuous testing focuses more on automation. Test-driven development (TDD) and behavior-driven development (BDD) can also be seen as implementations of shifting left.
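
To make the idea concrete, here is a minimal, hypothetical sketch of the TDD pattern in Python; the net_amount function and its tax rule are invented for this illustration. The test is written and run first, and the production code is then written to make it pass, so testing happens before coding rather than after it.

```python
import unittest


# Step 2: the production code, written only after the test below existed
# and failed. (The function and the tax rule are invented purely to
# illustrate the TDD cycle.)
def net_amount(gross, tax_rate):
    """Return the gross amount minus tax, rounded to whole cents."""
    return round(gross * (1 - tax_rate), 2)


# Step 1: the test, written before the implementation. Running it first,
# and watching it fail, is what moves testing in front of coding.
class NetAmountTest(unittest.TestCase):
    def test_tax_is_deducted(self):
        self.assertEqual(net_amount(100.00, 0.21), 79.00)

    def test_zero_tax_leaves_amount_unchanged(self):
        self.assertEqual(net_amount(50.00, 0.00), 50.00)


if __name__ == "__main__":
    unittest.main()
```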

There are many reasons to shift left, including preventing project delays, reducing lead time, uncovering implicit requirements, and finding bugs as early as possible. This article will look at one case study of a large organization that shifted its testing left, along with the practical testing techniques its team members employed, the challenges they encountered, and how they found success.

Context for the Case Study

The implementation of shift-left on which this article is based took place in a large financial organization. This organization has to implement laws and regulations in its software, and these implementations have fixed production deadlines. The organization was transitioning from a waterfall development approach to agile software development using SAFe. Development took place in a multifunctional team of four analysts, two developers, and one tester, working on a component that calculates payments to individuals.

One of the issues the team often ran into was the availability of the testing environments shared between teams. The team was not the only user of the test environment, and the functionality to be tested was often in a different software branch or version than what was installed in the testing environment. Because the environment was shared, the team could not just install the version it needed for its tests; the other users might be running tests they needed to finish first. This meant idle time for the team while it waited for the other teams.

In order to meet the deadlines, the team had to be creative and see if it could speed up testing, or at least minimize the idle time spent waiting for the testing environment to become available.

Shifting Left in Practice

The team took its first steps in shifting left during regular sprints for a software release. The test design was made in parallel with development, and both were based on the requirements and specifications made by the analysts. (This is also done in a traditional waterfall project environment, and shift-left can be used there without issues as well.) The test design was elaborated into test scenarios, which were documented in the test management tool.

After this point, the team usually had to wait for the developer to commit his changes to the repository and for the software to be deployed in the testing environment. However, the team was confronted with a longer than expected wait, due both to the limited time the test environment was available to the team and to the changes another team had deployed in it.

But the testing environment was not the main cause of these issues. The main cause was that multiple teams were working in the same codebase without distributed version control, which made it challenging to test just the change you were working on rather than the combined changes of every developer represented in that software build.

To make the best use of the idle time, the developer and the tester decided to run the tests locally in the development environment before the software was committed to the repository. Normally this is not in line with the principles of independent testing; moreover, the local development environment differs from the testing environment. However, the risk of running on a different configuration was deemed minimal, because integration tests were also run in a test environment after each software delivery.

A testing environment should be as close to the production configuration as possible, to make sure configuration differences do not influence the functionality or behavior of the application. But the team decided to do the initial tests in the development environment and run regression tests in the test environment once it became available.

This way of working had some consequences. First, it meant we needed to ensure that the development environment was capable of running tests. The tests were executed by the test tool used in this project, an in-house proprietary tool developed by the tester himself. The tool needed to run locally on the developer's computer so the tester could upload and run the test cases.
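
The article's tool itself is not described, but to illustrate why portability mattered, here is a purely hypothetical sketch of a small data-driven runner; the net_amount stand-in and the CSV case format are assumptions made for this example. Because the runner has no hard dependency on the shared test environment, the same test cases can be executed on the developer's laptop or, later, against the test server.

```python
import csv
import sys


def net_amount(gross, tax_rate):
    # Stand-in for invoking the real system under test; in practice this
    # would call the locally running build of the payment component.
    return round(gross * (1 - tax_rate), 2)


def run_cases(path):
    """Execute every test case in a CSV file of the (hypothetical) form:
    case id, gross amount, tax rate, expected net amount."""
    failures = 0
    with open(path, newline="") as f:
        for case_id, gross, rate, expected in csv.reader(f):
            actual = net_amount(float(gross), float(rate))
            if actual == float(expected):
                print(f"PASS {case_id}")
            else:
                print(f"FAIL {case_id}: expected {expected}, got {actual}")
                failures += 1
    return failures


if __name__ == "__main__":
    # Exit code is the number of failures, so a build script can fail on it.
    sys.exit(run_cases(sys.argv[1]))
```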

Second, it meant the team needed to minimize potential double work. Running on a developer's laptop means running on a different configuration than the test server (locally versus an integrated test environment), so you would have to test twice to make sure configuration differences did not lead to different test results. To cope with this, we automated the tests directly after running them and made sure they were included in the regression test set that ran in the continuous integration build after each commit. This meant each test only needed to be executed twice: once manually and once automated.
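
As a sketch of what that second, automated execution could look like (pytest, the payments module, and the parametrized cases are assumptions here; the article does not name the tooling), a manually executed test captured in this form is rerun by the continuous integration build on every commit, at no extra manual cost:

```python
import pytest

from payments import net_amount  # hypothetical module under test


# Cases first executed by hand in the development environment, then
# captured here so the CI build replays them as regression tests on
# every commit to the shared codebase.
@pytest.mark.parametrize(
    "gross, tax_rate, expected",
    [
        (100.00, 0.21, 79.00),
        (50.00, 0.00, 50.00),
        (0.00, 0.21, 0.00),
    ],
)
def test_net_amount_regression(gross, tax_rate, expected):
    assert net_amount(gross, tax_rate) == expected
```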

It also meant that the way issues were processed was different. Normally, if tests were run in the test environment and a bug was found, it was recorded in the bug tracking system. Because the tests were now run in the development environment first, the developer could follow the path a test case took through the code. If something went wrong, the developer could see it happen in real time and make adjustments accordingly. It's a bit like test-driven development: you have a test case, and you make the software work before you move on to the next one. The only time we deviated from this was when a fix would take longer than thirty minutes, in which case we recorded the issue in the bug tracking system.

But most importantly, it meant that the developer and the tester were working very closely together during test execution. This is logical if you are working in an agile team, but it is nice to experience it in practice.

Pros and Cons of Shifting Left

We do not want to imply that shifting left is always perfect, or even always applicable; like every other method in software development, it has a number of advantages and disadvantages.

The first advantage is that the testers, and the team as a whole, had less overhead and less paperwork. (In fact, lightweight documentation is more a precondition for shifting left than a result of it.) In some projects and test methodologies the amount of documentation is overwhelming, and you end up with a lot of paperwork that is probably never looked at again. When working in a shift-left environment, we document only the necessary things, like bug reports for issues that cannot be resolved quickly.

The second advantage we experienced was better collaboration between disciplines. Because the tester started test preparation earlier, most test conditions were gathered through discussions with the analysts who designed the functionality. By being involved in the design activities, you can familiarize yourself with the functionality and with how the developer is planning to implement it. And by doing the testing together with the developer, you can discuss what you see and how the developer interprets the functionality to be built. The developer and tester can also discuss bugs directly, which is definitely more efficient than recording them in a tool.

The third advantage is that everybody on the team was thinking and working outside their comfort zone. Usually testers just focus on the specs to make the test design. But in a shift-left situation, you work together with the developer, so you both work from the same information about the functionality. This saves a lot of time discussing what the system is meant to do, and you get a better idea of what the other development disciplines are doing. The developer can also help you make your test design and prepare your tests, giving developer and tester better insight into each other's discipline.

The last advantage we saw was that shifting left enabled early automation. By testing earlier, the team created the possibility to automate test cases earlier and reuse them, even within the sprint. It is good practice to execute a test case manually before automating it, to make sure it will run smoothly once automated; by executing it earlier, you can automate it earlier.

However, there were also some pitfalls and downsides to shifting left. The first disadvantage was that testing was less independent. When working closely with the developer during testing, you can be tempted to test the code as the developer implemented it, but that is not always what you should be testing. The tester should be testing what the specifications say, not the developer's interpretation of them. This issue can be mitigated by running integration tests, but it remains something a tester should be aware of.

The second disadvantage has to do with metrics: they don't tell the whole story. This disadvantage is a bit subjective and depends mostly on how you handle issues. In this project, we decided to record only the issues the developer could not fix directly. However, one of the key performance indicators used in this organization was the number of bugs found during system and integration tests versus the bugs found during acceptance tests and in production. Because we recorded very few bugs, that number appeared artificially low, and management wondered how this was possible, because the metric was not in line with the issues being found in acceptance testing. Under this way of working, the metric no longer had value.

Becoming Shift-Left Believers

The main goal in this project was to meet the deadline for implementing the new business rule engine. By starting testing earlier than planned, in another environment, we managed to do so. But what contributed most to our success was that every member of the multidisciplinary team looked beyond their own responsibility. Once the team was used to this way of working, they all preferred it. The collaboration between testing and the other disciplines improved greatly.

One funny detail of this story is that nobody on the team had heard of shifting left before. We were just looking for practical solutions to the problems we encountered, and this was the logical way of working. Now that we know this is a proven approach that is gaining traction, we can say that we are supporters of shifting left.

User Comments

Tim Thompson

Any advice on shifting left when there are essentially no specifications or requirements? I often find myself in the position where all I get is "Build feature X." Sometimes I get a rough description of what that feature is supposed to accomplish, but it is rarely more than two or three bullet points.

Maybe that is an advantage? By being forced to do the business analysis myself, I might have a better way to include testing, but I am not sure how to go about it. Without knowing what a feature is supposed to do, it is hard to run tests that confirm its functions.

June 13, 2018 - 7:38am
Jan Jaap Cannegieter

We think there are two solutions here. The first one is what you describe: do the business analysis yourself and ask around about what the feature is supposed to do. By now I've accepted that some users think it is strange that a tester is the first one to ask them what a specific feature should do. I think that is strange too, but that is how some projects go. The other possibility is to just test it and, when you don't find big technical issues, report back what it actually does, with the question of whether that is what it is supposed to do. I sometimes make it more a presentation of the actual functionality than a classical test report.

If I have the time to do the business analysis, I would do that and test in a more classical way (confirmation testing: does it do what it is supposed to do); if I don't have that time, I would do exploratory testing and report what it actually does. I don't know whether this is called shift-left, but that is the least interesting question.

June 14, 2018 - 2:08am
Preety Sehrawat

Hi,

Thanks for explaining the concept so nicely and providing a case study for better understanding.

I still have some doubts, though. I am not sure how this will work in the practical world, and I feel it might lead to many nonexistent defects. Is this applicable only to features that are marked as "done" by the developers?

There could be situations where there are conflicts of interest between the developer and the tester.

For example, consider a scenario where a feature is ready for testing in the development environment and the tester has started testing it, while at the same time another feature under development uses the same piece of code the tester is testing.

Another example could be a change to the database in the development environment while the same table or database object is accessed by the code being tested.

How would the tester or developer proceed in such scenarios?

August 6, 2018 - 7:08am
Mark Franssen

Thanks for your comment and compliments! In a perfect world there shouldn't be a conflict of interest, because as a team you are working toward delivering working software, and untested code is not ready for delivery. So it is in the developers' interest that the code is tested. In reality this means that testing in the development environment can be an option. If you run into delays because the code is being changed, and the developer runs into delays because you are testing in his environment, then it isn't really an option, and you should probably test in the separate test environment.

August 13, 2018 - 3:41am
