Semi-automated approach to select suitable test cases for automation

Chamila Ambahera
Apr 21, 2024


You might have seen hundreds of articles about selecting test cases for test automation. So why another article?

Stick with it to the end and you will see how this article differs from the rest of those articles and videos.

Today, I had a conversation with Sameera De Silva, a Medium author and the QA Lead of a reputed multinational company, about creating a fool-proof guide to selecting candidates for test automation.

As we all know, QA teams differ in their Quality Assurance experience, so we needed a fool-proof approach that novices could follow. We searched the internet for the best approach and found only a bunch of duplicated articles.

The result is this article.

As your experience grows, you learn which tests should be automated. However, can we guarantee that we have selected all the possible candidates for automation, especially when you have thousands of test cases?

Selection criteria for test automation

Before we dive deep into the approach, let’s understand the main test case selection criteria for test automation.

1. Frequency of Execution:

  • Identify test cases that are executed frequently during the software development lifecycle.
  • Prioritize test cases that are run repeatedly across different builds or releases.

2. Business Criticality:

  • Focus on test cases that validate critical functionalities of the application.
  • Prioritize test cases that directly impact the core business processes or user experience.

3. Complexity and Risk:

  • Select test cases that cover complex functionalities or features prone to errors.
  • Prioritize test cases that address high-risk areas of the application, such as security vulnerabilities or regulatory compliance.

4. Regression Testing:

  • Identify test cases that ensure the stability of previously implemented functionalities after code changes.
  • Prioritize test cases that verify the correctness of existing features and prevent regression defects.

5. Data Variations:

  • Choose test cases that encompass a wide range of input data and scenarios.
  • Prioritize test cases that validate different data types, boundary values, and edge cases.

6. Integration Points:

  • Include test cases that verify the interaction between different modules, components, or systems.
  • Prioritize test cases that validate end-to-end workflows and integration points with external systems.

7. Performance and Load Testing:

  • Identify test cases that focus on performance metrics, such as response time, scalability, and resource utilization.
  • Prioritize test cases that simulate various load conditions and stress test the application under different scenarios.

8. Cross-Platform and Browser Compatibility:

  • Select test cases that ensure compatibility across different operating systems, devices, and web browsers.
  • Prioritize test cases that validate the responsiveness and functionality of the application on various platforms.

9. Usability and Accessibility:

  • Include test cases that assess the usability and accessibility of the application for different user demographics.
  • Prioritize test cases that validate adherence to accessibility standards and guidelines.

10. Automatability:

  • Assess the feasibility of automating each test case based on factors such as stability, repeatability, and predictability.
  • Prioritize test cases that are well-suited for automation and offer significant return on investment.

11. Traceability and Coverage:

  • Ensure that selected test cases provide adequate coverage of requirements, user stories, and acceptance criteria.
  • Prioritize test cases that contribute to comprehensive test coverage and fulfil traceability requirements.

12. Maintenance Effort:

  • Consider the effort required to maintain and update automated test scripts over time.
  • Prioritize test cases that minimize maintenance efforts while maximizing test coverage and effectiveness.

Are they equally important?

No. So let’s assign a weight to each of them.

Assigning weights to the criteria

Please note that these weights reflect my own judgement. If you see it differently, please share your thoughts in the comments.

1. Frequency of Execution

  • Weight: 8

2. Business Criticality

  • Weight: 9

3. Complexity and Risk

  • Weight: 9

4. Regression Testing

  • Weight: 8

5. Data Variations

  • Weight: 7

6. Integration Points

  • Weight: 7

7. Performance and Load Testing

  • Weight: 8

8. Cross-platform and Browser Compatibility

  • Weight: 6

9. Usability and Accessibility

  • Weight: 6

10. Automatability

  • Weight: 9

11. Traceability and Coverage:

  • Weight: 8

12. Maintenance Effort:

  • Weight: 7

Ok. Now we have the selection criteria and have weighted them according to their importance.

How do we evaluate all the test cases that we have?

Simple approach. :) Use an Excel sheet.

Even if you don’t see a preview of the Google Sheet, please click the link; it will take you to the sheet.

For each test case, you only need to select the weight of each criterion in the appropriate column. Then, based on the cutoff percentage you have defined, the sheet automatically flags whether the test case is suitable for automation or not.
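For teams that prefer scripting over a spreadsheet, the same scoring logic can be sketched in Python. The weights below are the ones proposed in this article; the 0-to-1 applicability ratings and the example login test case are hypothetical.

```python
# Sketch of the spreadsheet logic. Weights come from the article;
# per-test-case ratings (0 = not applicable, 1 = fully applies)
# and the example test case below are hypothetical.
WEIGHTS = {
    "Frequency of Execution": 8,
    "Business Criticality": 9,
    "Complexity and Risk": 9,
    "Regression Testing": 8,
    "Data Variations": 7,
    "Integration Points": 7,
    "Performance and Load Testing": 8,
    "Cross-Platform and Browser Compatibility": 6,
    "Usability and Accessibility": 6,
    "Automatability": 9,
    "Traceability and Coverage": 8,
    "Maintenance Effort": 7,
}

def automation_score(ratings, cutoff=70.0):
    """Return (score %, automate?) for one test case.

    `ratings` maps criterion name -> 0..1 rating of how strongly
    the criterion applies to this test case.
    """
    total = sum(WEIGHTS.values())  # 92 with the weights above
    achieved = sum(WEIGHTS[c] * ratings.get(c, 0) for c in WEIGHTS)
    score = achieved / total * 100
    return score, score >= cutoff

# Hypothetical login smoke test: strong on most criteria, but with
# no performance or accessibility angle.
login_test = {c: 1 for c in WEIGHTS}
login_test["Performance and Load Testing"] = 0
login_test["Usability and Accessibility"] = 0

score, automate = automation_score(login_test)
print(f"{score:.1f}% -> {'automate' if automate else 'keep manual'}")
# prints: 84.8% -> automate
```

The cutoff default of 70% matches the starting threshold recommended later in this article and can be tuned per project.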

How to set the Cutoff percentage?

The threshold for determining a good automation candidate depends on the project objectives, the organization’s resources, and the complexity of the software under test. Nonetheless, the most widely practised convention is to set the cutoff between 60% and 80% of the total weight.

Project Goals: Consider the project’s objectives and the degree of test coverage you need. If the goal is to automate only the strongest candidates, the cutoff percentage can be set relatively high to emphasize the test cases best suited for automation.

Resource Availability: Evaluate the resources available for automation work, such as time, money, and skills. If resources are limited, you may decide to raise the cutoff value so that you concentrate only on the most crucial, highest-impact test cases.

Complexity of Tests: Judge the complexity of the tests against the capabilities of your automation tooling. Very complex systems may demand more sophisticated automation. In such situations, raise the cutoff percentage so that you select only the test cases with a relatively high chance of being automated successfully.

Return on Investment (ROI): Consider the expected ROI of automation. Cases that are costly to automate but will save significant time, improve quality, or reduce manual effort may warrant a higher cutoff percentage.

ROI Calculator. Thanks to dzmitry yashyn
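As a rough sketch of the idea behind such a calculator (not the linked tool itself), the standard formula ROI = (benefit − investment) / investment can be applied to automation effort. All the hour figures below are illustrative assumptions.

```python
# Minimal ROI sketch for a single automated test script, using the
# standard ROI formula. All numbers below are hypothetical.
def automation_roi(build_hours, maintain_hours_per_run,
                   manual_hours_per_run, runs):
    """ROI of automating one test over its expected lifetime."""
    investment = build_hours + maintain_hours_per_run * runs
    benefit = manual_hours_per_run * runs  # manual effort saved
    return (benefit - investment) / investment

# Hypothetical: 40 h to build, 0.5 h upkeep per run, replaces a 2 h
# manual pass, executed 100 times over the script's lifetime.
roi = automation_roi(40, 0.5, 2, 100)
print(f"ROI: {roi:.0%}")
# prints: ROI: 122%
```

A positive ROI means the automation pays for itself over the chosen horizon; a negative one suggests the case may be better left manual or revisited later.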

Iterative Approach: Begin with a conservative initial cutoff and adjust it after each automation effort based on its outcomes. Periodically examine the results your automation delivers and adapt the cutoff percentage accordingly to improve selection over time.

To begin with, select a threshold of about 70% of the total weight, then fine-tune it to match the exact situation of your organization and project. Remember that the cutoff percentage is not a definitive number; it is subject to change as the project progresses and requirements keep unfolding.

Is there a fully automated approach?

As far as I know, there is no fully automated approach to selecting the best candidates for automation, even with AI-integrated tools.

Join my AI in testing article series to learn more about AI-Integrated automation tools.

Final Notes

Choosing the right test cases is the most important step in automation testing and in extracting the maximum benefit from it. Applying criteria such as execution frequency, business criticality, complexity, integration points, cross-platform support, usability, automatability, traceability, and maintenance effort lets teams prioritize test cases strategically and helps ensure a high-quality software product. A structured test case selection approach not only gives a team the capacity to manage and expedite its testing processes but also helps the end product stay competitive on quality.

If you make changes to this initial spreadsheet, don’t forget to share it with me.


Chamila Ambahera

Principal Automation Engineer | Arctic Code Vault Contributor | Trained over 500 engineers