Macadamian Blog

Creating a Holistic Strategy for Software Quality Testing

Bogdan Blaga & Sarah Savoy

A proper testing strategy can save you a ton of money in the long run and, at the end of the day, give you a competitive advantage in the marketplace. To illustrate how to develop an effective testing strategy, we consider quality in a modern large-scale web app.

Strategy for software quality testing

Any good software project should strive to deliver high-quality products and services to its users. Quality is, after all, one of the most important aspects that will define your product’s success. It is also an umbrella term that comes in many forms: to a developer it might mean modular, maintainable, and testable, while to a user it could mean correct, efficient, reliable, secure, and so on.

No matter your definition of a quality product, every team must establish a method of determining the level of quality achieved at any given time, and must do so in a cost-effective manner that reflects its goals and objectives. A proper testing strategy can save you a ton of money in the long run and, at the end of the day, give you a competitive advantage in the marketplace. To illustrate how to develop an effective testing strategy, let’s consider quality in a modern large-scale web app.

Quality Assessment

Software quality assessment is typically divided into two categories: static and dynamic analysis. Static analysis often comes in the form of code reviews, algorithm analysis, and the like. It is generally done by developers and is vital to a cohesive, sound architecture and code base. A little investment in this area goes a long way toward ensuring the system is healthy, maintainable and, above all else, feasible. Dynamic analysis, on the other hand, includes executing test programs, observing behaviour, and so on. It is typically done by several groups, including developers, quality assurance specialists, stakeholders and, most importantly, users. This type of analysis is by far the most expensive piece of the puzzle in ensuring our desired level of quality is met. The two types of analysis are complementary. In this article we will take a closer look at dynamic analysis and automation testing.


Dynamic Analysis

This type of analysis is what most people consider the true form of testing, because it simulates real-world scenarios in order to verify and validate the application’s behaviour. It is arguably more important to find defects in a typical user workflow than to flag a less-than-optimal algorithm that ultimately will not prevent the user’s desired action from being taken. Analyzing dynamic behaviour is a complex task because, as the name implies, it is dynamic. Problem sets can grow exponentially, and you will soon realize that an exhaustive testing strategy is doomed to fail on many levels. For this reason, it’s important to define a strategy that will be effective, reliable, repeatable and, as I’m sure the product owner will tell you, affordable.

To define this strategy, we need to rely on a few important heuristics. The International Software Testing Qualifications Board (ISTQB) defines the following seven principles:

Principle 1 – Testing shows the presence of defects

What it means: It sounds obvious, but what this really means is that testing cannot prove the absence of defects. Rather, it aims to increase confidence in the system and ensure requirements are met. It is therefore important to design test cases that find as many defects as possible.

Principle 2 – Exhaustive testing is impossible

What it means: You will never be able to test everything. You should instead focus your testing efforts on higher risk items and priorities.

Principle 3 – Early testing

What it means: Defects become more expensive to fix the later they are found in the development process. Begin testing in the early stages to minimize risks and costs.

Principle 4 – Defect clustering

What it means: A small number of modules contain most of the defects. Statistically, 80% of defects are generally found within 20% of the code, which highlights the importance of focusing your testing on the areas where defects are most likely to cluster.

Principle 5 – Pesticide paradox

What it means: If the same kind of tests are repeated, eventually they will not find new bugs. You should regularly review and update test cases to further optimize a test run and potentially find new bugs or cover important gaps.

Principle 6 – Testing is context dependent

What it means: There is no one-size-fits-all case when it comes to testing. Different types of applications have different needs and priorities. A life-critical system will require an extensive amount of testing, while your fitness tracking app may not. Furthermore, it’s important to be realistic. Your app may not need to be outstandingly performant and scalable on day one. Work those tests in when you begin to see a need for that type of coverage, otherwise, you risk wasting a lot of time on useless tests with diminishing returns.

Principle 7 – Absence-of-errors fallacy

What it means: The application must, above all else, meet the user’s needs and expectations. Failing to build the right system is a bigger concern than failing to build a good system.

Taken together, these seven principles show what a realistic, effective test strategy looks like. Principles 1, 2, 4 and 6 teach us to optimize our process efficiently; principle 3, to build a plan that produces quick and early feedback; principle 5, to review and maintain our test plan; and principle 7, to ensure the right system is built.

Testing Boxes

There are typically three types of testing: white-box, grey-box and black-box. White-box testing uses knowledge of the internal logic to analyze code statements, branches, paths, conditions and so on. Black-box testing doesn’t consider the internal system design, but instead evaluates the requirements and user flows. Both are useful and will often uncover different sets of problems. Grey-box testing is a combination of the two, where the tester has limited knowledge of the internal system.

Automation testing

Planning a good automation testing strategy can be a daunting endeavor, so where should you invest your time to get the best return on investment? As previously mentioned, there is no one-size-fits-all strategy, but if you have to start somewhere, I’d recommend a 70/20/10 split when forming your automated test strategy: 70% of the testing effort goes towards unit testing (white-box), 20% towards integration testing (grey-box) and 10% towards end-to-end UI automation testing (black-box). Let’s try to understand why.

Unit testing is a form of automated analysis that is fast, reliable and isolates failures – all features essential to a good testing strategy. Thousands upon thousands of tests can be executed and validated in a matter of seconds or minutes, giving you instant feedback on whether your code changes have impacted the correctness of the different modules. Because this testing evaluates small units of code, you can easily pinpoint where an error occurred and, if the naming conventions are defined properly, reduce the debug time needed to find the root cause of the problem.

This type of immediate feedback can reduce the total feedback loop time that would otherwise have taken days with manual testing. In an agile project where the requirements evolve over time, it is easy to lose sight of the big picture. Sometimes you look back at your code and are left wondering why something was done a certain way. Unit tests are great at answering these types of questions because they are likely part of an important scenario that you may have forgotten about. This helps prevent regressions in long-term projects and with growing team sizes that may not always have the necessary context to make sense of it all.
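To make this concrete, here is a minimal sketch of what such a unit test looks like; the `apply_discount` function and its test names are hypothetical, invented purely for illustration. Note how each test name describes one scenario, so a failure immediately isolates which behaviour broke:

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Apply a percentage discount to a price, rejecting invalid input."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTests(unittest.TestCase):
    # Descriptive test names double as documentation of the scenarios
    # the module is expected to handle.
    def test_full_price_when_discount_is_zero(self):
        self.assertEqual(apply_discount(50.0, 0), 50.0)

    def test_half_price_at_fifty_percent(self):
        self.assertEqual(apply_discount(50.0, 50), 25.0)

    def test_rejects_discount_above_one_hundred_percent(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 150)
```

A suite like this would typically run with `python -m unittest`, and thousands of tests in this style execute in seconds, which is what makes the fast feedback loop described above possible.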

Pro Tip: You should test for your users, not numbers, as achieving 100% test coverage is typically unrealistic. Unit tests are ultimately your first line of defense and should drive development.


Next up on the list are integration tests. These are similar to unit tests but require a smaller investment, as modules generally define “contracts” under which they can communicate, and these contracts should rarely change. Integration tests also act like a funnel: a single call to a module looks simple, but internally it may exercise hundreds of lines of code to produce the proper result.
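As a sketch of the idea, consider two hypothetical modules wired together: a service that depends only on the repository’s “contract” (its `save` and `find` methods). The integration test exercises both modules through a single call, rather than each in isolation:

```python
class InMemoryUserRepository:
    """A real (if simple) repository implementation, not a mock."""

    def __init__(self):
        self._users = {}

    def save(self, user_id, name):
        self._users[user_id] = {"id": user_id, "name": name}

    def find(self, user_id):
        return self._users.get(user_id)

class UserService:
    # Contract: the repository exposes save() and find().
    # As long as that contract holds, this test rarely needs to change.
    def __init__(self, repository):
        self._repository = repository

    def register(self, user_id, name):
        if self._repository.find(user_id) is not None:
            raise ValueError("user already exists")
        self._repository.save(user_id, name)
        return self._repository.find(user_id)

def test_register_persists_through_the_repository():
    # One simple call, but it funnels through both modules' logic.
    service = UserService(InMemoryUserRepository())
    assert service.register(1, "Ada") == {"id": 1, "name": "Ada"}
```

The names here are invented for illustration; the point is that the test pins down the contract between modules, not their internals.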

Lastly, we have end-to-end (E2E) tests. These are arguably the tests most sought after by stakeholders because they ensure functional requirements are met. In theory, developers love them because they offload testing to others, managers like them because they simulate real user scenarios, and testers like them because they verify real-world behaviour. In practice, however, scripted E2E testing can have diminishing returns:

Automated E2E tests take time to execute (hours to days).

Finding the root cause of a failure is painful and takes a long time.

Technical failures can ruin your test results over the span of multiple days.

Many smaller bugs can be hidden behind bigger bugs.

The tests are often flaky (visit a web page and it hangs, but on refresh it works).

Developers have to wait until the next day to know if their fix worked or not.

Tests will only find issues in the specific areas they are written to test.

Despite these problems, E2E tests remain important. They are very useful for testing typical user interactions and traversal paths through the application, as well as for ensuring that existing functionality is not affected by newly added code.
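One common way to blunt the flakiness noted above (the page that hangs on first visit but works on refresh) is to wrap transient steps in a bounded retry. This is a minimal sketch of the pattern, with a simulated flaky step standing in for a real page load:

```python
import time

def retry(action, attempts=3, delay=0.0):
    """Run `action`, retrying up to `attempts` times on failure.

    A bounded retry absorbs one-off hiccups without masking a genuinely
    broken feature, which will still fail on every attempt.
    """
    last_error = None
    for _ in range(attempts):
        try:
            return action()
        except Exception as error:  # real suites should catch a narrower type
            last_error = error
            time.sleep(delay)
    raise last_error

# Simulated flaky step: fails the first time, then succeeds -- like a
# page that hangs on first visit but loads on refresh.
calls = {"count": 0}

def flaky_page_load():
    calls["count"] += 1
    if calls["count"] < 2:
        raise TimeoutError("page hung")
    return "loaded"
```

Here `retry(flaky_page_load)` succeeds on the second attempt, so the test run survives the hiccup, while a step that always fails still surfaces after the final attempt.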

Whatever the split, a successful testing strategy requires that you not let your tests go stale: update them as the project moves forward and add new tests for cases not previously considered. It is also extremely important to supplement your automated test strategy with manual exploratory testing. Engage skilled quality assurance specialists who can strategically test areas and corner cases not covered by existing test suites, exposing less obvious issues.


Non-functional testing

When developing for a particular business problem, you have to understand the overall space and ask yourself: are there any particular challenges this business space needs to address? This becomes particularly important in healthcare, where earning user trust is one of the biggest and toughest problems to solve. In industries like healthcare, it is increasingly important to have a holistic test strategy that accounts for testing areas like security, performance, and accessibility. These specialized testing types are complex in their own right and will be covered in a future article.

To produce a high-quality product it is crucial to have a well thought-out and feasible testing strategy, one which encompasses a suitable range of applicable testing types. This will involve the types mentioned above – unit testing, integration testing and E2E testing – combined with exploratory testing and relevant non-functional testing types. At Macadamian, we consider this a quality-focused mindset, enabling us to deliver the quality standards required by the healthcare industry.


Author Overview

Bogdan Blaga

A tech geek at heart, Bogdan is passionate about the challenge of designing and developing web and mobile applications with an outstanding user experience. As Engineering Manager of Macadamian’s Romanian team, Bogdan brings an amazing depth and breadth of technical expertise in areas such as cloud, NoSQL, web, distributed systems, and large application scaling, in addition to a strong ability to see and articulate the business value of technology. Bogdan is completing his Master’s degree in Artificial Intelligence at the Technical University of Cluj-Napoca, Romania.