Best Practices for UI Test Automation Frameworks

Chamila Ambahera
7 min read · Sep 8, 2024


Before we jump into the best practices, let’s discuss why you need to follow best practices in your framework.

Why Follow Best Practices?

In summary:

Best practices are important because they ensure reliable, stable tests that catch bugs early and reduce flaky results. They improve maintainability, making updating and scaling the test suite easier as the application grows. Best practices also promote code readability and collaboration, allowing teams to work efficiently, and ensure that tests are reusable and modular. They enhance test performance, make execution faster, and help integrate tests smoothly into CI/CD pipelines. In the long run, they reduce costs, improve cross-browser compatibility, and lead to cleaner code that’s easier to maintain and debug.

Best Practices

1. Framework Design and Structure

  • Layered Architecture: Implement a layered design to separate concerns. Common layers include:
      • Test Layer: Contains test scenarios and business logic.
      • Service/Business Layer: Handles interactions between the test layer and the UI interaction layer.
      • UI Interaction Layer: Deals with page interactions, such as clicking buttons or entering text (often using a Page Object Model).
  • Page Object Model (POM): Use the POM to create an object repository for UI elements. This centralizes element locators, making the framework more maintainable and reusable when UI changes occur (see the sketch below).

In the latter part of this guide, you can find a sample Framework structure.
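As a concrete illustration of the Page Object Model, a page class in a Java/Playwright stack could look like the minimal sketch below. The page name, locators, and selectors are illustrative, not taken from a real application:

// Hypothetical Page Object for a login page, assuming Playwright for Java.
import com.microsoft.playwright.Locator;
import com.microsoft.playwright.Page;

public class LoginPage {

    private final Page page;

    // Locators are centralized here, so a UI change is fixed in one place.
    private final Locator usernameField;
    private final Locator passwordField;
    private final Locator loginButton;

    public LoginPage(Page page) {
        this.page = page;
        this.usernameField = page.locator("#username");
        this.passwordField = page.locator("#password");
        this.loginButton = page.locator("button[type='submit']");
    }

    // The test layer calls business-level actions instead of touching raw elements.
    public void login(String username, String password) {
        usernameField.fill(username);
        passwordField.fill(password);
        loginButton.click();
    }
}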

2. Choosing the Right Tools

  • Framework/Library Selection: Choose the right tools for your technology stack (e.g., Selenium, Playwright for web apps). Consider Appium for mobile automation or Winium for desktop applications.
  • Cross-Browser Testing: Ensure the framework supports multiple browsers (e.g., Chrome, Firefox, Edge, Safari). Integrate with services like BrowserStack or Sauce Labs to cover cross-browser testing.

3. Locator Strategy

  • Stable Locators: Use stable and unique locators (e.g., ID, name, CSS selectors, or XPath) to identify UI elements. Avoid brittle locators that depend on dynamic attributes.
  • Fallback Locators: Have backup locators for elements that might dynamically change, to reduce test flakiness.
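If you do need a fallback, one simple approach is to try a list of selectors in order of preference. The sketch below assumes Playwright for Java; the helper and selectors are illustrative:

import com.microsoft.playwright.Locator;
import com.microsoft.playwright.Page;

public class LocatorUtils {

    // Returns the first locator from the list that matches at least one element.
    public static Locator firstAvailable(Page page, String... selectors) {
        for (String selector : selectors) {
            Locator candidate = page.locator(selector);
            if (candidate.count() > 0) {
                return candidate;
            }
        }
        throw new IllegalStateException("No fallback locator matched: " + String.join(", ", selectors));
    }
}

// Usage: stable ID first, then CSS and XPath fallbacks.
// Locator submit = LocatorUtils.firstAvailable(page, "#submit", "button[data-test='submit']", "//button[text()='Submit']");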

4. Synchronization and Wait Strategies

  • Explicit Waits: Use explicit waits that poll for specific conditions (e.g., element visibility, element to be clickable) instead of fixed timeouts or blanket implicit waits (see the sketch below).
  • Avoid Thread Sleep: Minimize the use of static waits (Thread.sleep), as they can slow down tests and make them flaky due to timing issues.
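For example, with Playwright for Java an explicit wait for visibility replaces Thread.sleep entirely; the selector and timeout below are illustrative:

import com.microsoft.playwright.Locator;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.options.WaitForSelectorState;

public class WaitExamples {

    public static void clickWhenVisible(Page page, String selector) {
        Locator element = page.locator(selector);

        // Wait for a concrete condition (visibility) rather than a fixed sleep.
        element.waitFor(new Locator.WaitForOptions()
                .setState(WaitForSelectorState.VISIBLE)
                .setTimeout(10_000));

        // Playwright also auto-waits for actionability before performing the click.
        element.click();
    }
}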

5. Test Data Management

  • Externalize Test Data: Store test data separately from test scripts in external files (e.g., JSON, CSV, XML, or databases). This helps when modifying or expanding data-driven tests.
  • Data-Driven Testing: Implement data-driven tests where test scripts are executed with different sets of data to cover multiple scenarios with a single test script.
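A minimal data-driven sketch with TestNG is shown below; in a real framework the rows would typically be read from an external JSON or CSV file, and the page objects referenced in the comments are hypothetical:

import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Each row is one scenario: username, password, expected outcome.
    @DataProvider(name = "loginData")
    public Object[][] loginData() {
        return new Object[][] {
                {"standard_user", "correct-password", true},
                {"standard_user", "wrong-password", false},
                {"locked_user", "correct-password", false},
        };
    }

    @Test(dataProvider = "loginData")
    public void loginBehavesAsExpected(String username, String password, boolean shouldSucceed) {
        // In a real test this would drive the LoginPage object and assert on the outcome, e.g.:
        // loginPage.login(username, password);
        // Assert.assertEquals(homePage.isLoggedIn(), shouldSucceed);
    }
}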

6. Handle Dynamic Elements and UI Flakiness

  • Dynamic Elements: Handle dynamic elements, such as changing element IDs or AJAX-loaded content, by employing smart wait strategies or using dynamic locators.
  • Retry Logic: Implement retry mechanisms for dealing with intermittent failures, especially for complex web applications where elements may not load as expected.
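With TestNG, a retry analyzer is one common way to implement this; the retry limit below is illustrative:

import org.testng.IRetryAnalyzer;
import org.testng.ITestResult;

public class RetryAnalyzer implements IRetryAnalyzer {

    private static final int MAX_RETRIES = 2;
    private int attempts = 0;

    // TestNG calls this after each failure; returning true re-runs the test.
    @Override
    public boolean retry(ITestResult result) {
        if (attempts < MAX_RETRIES) {
            attempts++;
            return true;
        }
        return false;
    }
}

// Applied per test:
// @Test(retryAnalyzer = RetryAnalyzer.class)
// public void checkoutFlow() { ... }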

7. Robust Exception Handling

  • Error Handling: Implement proper error and exception handling. Capture screenshots and logs when tests fail to provide clear debugging information.
  • Graceful Failures: Ensure the framework fails gracefully and generates meaningful error reports without breaking the entire test suite.
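One way to capture evidence on failure is a TestNG listener that takes a Playwright screenshot. In the sketch below, TestBase.getCurrentPage() is a hypothetical accessor for the Page instance owned by the framework's base class:

import com.microsoft.playwright.Page;
import org.testng.ITestListener;
import org.testng.ITestResult;

import java.nio.file.Paths;

public class FailureListener implements ITestListener {

    @Override
    public void onTestFailure(ITestResult result) {
        Page page = TestBase.getCurrentPage(); // hypothetical accessor from the framework's base class
        if (page != null) {
            String fileName = "reports/screenshots/" + result.getName() + ".png";
            page.screenshot(new Page.ScreenshotOptions().setPath(Paths.get(fileName)));
        }
    }
}

// Register the listener with @Listeners(FailureListener.class) on test classes,
// or via the <listeners> element in testng.xml.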

8. Test Execution Strategies

  • Parallel Execution: Enable parallel execution of tests to improve test execution speed, especially for large test suites. Tools like TestNG and JUnit offer parallel execution capabilities (see the sketch below for thread-safe driver management).
  • Continuous Integration (CI): Integrate the framework with CI/CD pipelines (e.g., Jenkins, GitLab CI, CircleCI) to automatically trigger tests on each code push or deployment.
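Parallel runs only stay stable if each worker thread owns its browser session. Below is a minimal sketch of thread-safe driver management with Playwright for Java; the thread count itself would be configured in testng.xml (e.g., parallel="methods" with a thread-count value):

import com.microsoft.playwright.Browser;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.Playwright;

public class DriverManager {

    // Each thread gets its own Playwright, Browser, and Page instance.
    private static final ThreadLocal<Playwright> PLAYWRIGHT = ThreadLocal.withInitial(Playwright::create);
    private static final ThreadLocal<Browser> BROWSER =
            ThreadLocal.withInitial(() -> PLAYWRIGHT.get().chromium().launch());
    private static final ThreadLocal<Page> PAGE =
            ThreadLocal.withInitial(() -> BROWSER.get().newContext().newPage());

    public static Page getPage() {
        return PAGE.get();
    }

    // Call from an @AfterMethod or @AfterClass hook so each thread cleans up its own resources.
    public static void quit() {
        BROWSER.get().close();
        PLAYWRIGHT.get().close();
        PAGE.remove();
        BROWSER.remove();
        PLAYWRIGHT.remove();
    }
}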

9. Maintainable Test Cases

  • Atomic Test Cases: Write small, focused test cases that validate a specific functionality. Avoid long, complex tests that make debugging difficult.
  • Avoid Dependencies: Ensure test cases are independent of each other, meaning one test should not rely on the execution of another. This ensures better maintainability and easier debugging.
  • Clear Naming Conventions: Use descriptive and consistent naming conventions for test cases, functions, and locators to make the framework readable and maintainable.

10. Reporting and Logging

  • Comprehensive Reporting: Use detailed test reports to show passed, failed, and skipped tests, along with screenshots and logs for failures. Tools like Allure, Extent Reports, and Cucumber Reports can enhance report quality.
  • Detailed Logs: Capture detailed logs during test execution (e.g., actions performed, errors encountered) to aid in debugging.

11. Cross-Browser and Cross-Platform Testing

  • Cross-Browser Compatibility: Ensure the framework supports running tests across different browsers and operating systems. Use WebDriver or cloud-based testing platforms (e.g., BrowserStack, Sauce Labs) for coverage.
  • Mobile and Responsive Testing: Extend the framework to support mobile devices and test responsive designs if needed.

12. Version Control and Collaboration

  • Version Control: Store all test scripts and configurations in a version control system like Git to track changes and collaborate with team members.
  • Code Reviews: Perform regular code reviews to maintain high standards in test scripts and framework code.

13. Handling Pop-ups, Alerts, and IFrames

  • Pop-ups and Alerts: Ensure your framework handles browser alerts, confirmation dialogs, and other pop-ups using the appropriate methods (e.g., switchTo() in Selenium).
  • IFrames and Windows: Manage switching between iframes and browser windows using the appropriate switch commands in your automation tool.
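With Playwright for Java, dialogs are handled through an event listener and iframes through frame-scoped locators; the selectors below are illustrative:

import com.microsoft.playwright.FrameLocator;
import com.microsoft.playwright.Page;

public class DialogAndFrameExamples {

    public static void acceptConfirmation(Page page) {
        // Register the handler before triggering the action that opens the dialog.
        page.onDialog(dialog -> {
            System.out.println("Dialog message: " + dialog.message());
            dialog.accept();
        });
        page.locator("#delete-account").click();
    }

    public static void fillFieldInsideIframe(Page page) {
        // frameLocator() scopes locators to the iframe; no manual switching back is needed.
        FrameLocator paymentFrame = page.frameLocator("iframe#payment");
        paymentFrame.locator("#card-number").fill("4111111111111111");
    }
}

// In Selenium, the equivalent operations use driver.switchTo().alert() and driver.switchTo().frame(...).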

14. Maintainability and Scalability

  • Reusable Methods: Create reusable methods for common actions (e.g., clicking, entering text, handling dropdowns) to avoid duplicating code across multiple tests.
  • Refactor Regularly: Continuously refactor your test code to improve its readability, maintainability, and performance as the application under test evolves.
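A small set of shared UI helpers keeps waits, logging, and error handling in one place. The sketch below assumes Playwright for Java; the class and method names are illustrative:

import com.microsoft.playwright.Locator;
import com.microsoft.playwright.Page;
import com.microsoft.playwright.options.SelectOption;
import com.microsoft.playwright.options.WaitForSelectorState;

public class UiActions {

    private final Page page;

    public UiActions(Page page) {
        this.page = page;
    }

    // Changing click behaviour (waits, logging, retries) here changes it for every test.
    public void click(String selector) {
        visible(selector).click();
    }

    public void type(String selector, String text) {
        visible(selector).fill(text);
    }

    public void selectByLabel(String selector, String label) {
        visible(selector).selectOption(new SelectOption().setLabel(label));
    }

    private Locator visible(String selector) {
        Locator element = page.locator(selector);
        element.waitFor(new Locator.WaitForOptions().setState(WaitForSelectorState.VISIBLE));
        return element;
    }
}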

15. Performance Testing

  • Basic Performance Metrics: Collect basic performance metrics (e.g., page load times) during UI tests to catch slow-loading elements or potential bottlenecks.
  • Integration with Performance Tools: For deeper insights, integrate UI automation with performance testing tools like JMeter or Gatling.
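Even a rough load-time check can surface regressions early. The sketch below measures navigation time with Playwright for Java; the URL and threshold in the usage comment are illustrative:

import com.microsoft.playwright.Page;
import com.microsoft.playwright.options.LoadState;

public class PerformanceCheck {

    // Returns the elapsed time from navigation start until the page's load event fires.
    public static long measureLoadMillis(Page page, String url) {
        long start = System.currentTimeMillis();
        page.navigate(url);
        page.waitForLoadState(LoadState.LOAD);
        return System.currentTimeMillis() - start;
    }
}

// Usage in a test:
// long loadTime = PerformanceCheck.measureLoadMillis(page, "https://example.com");
// Assert.assertTrue(loadTime < 3000, "Page took too long to load: " + loadTime + " ms");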

16. Security and Access Management

  • Secure Credentials: Store sensitive information such as login credentials, API keys, and access tokens securely in environment variables or encrypted files, rather than hardcoding them into scripts.
  • Role-Based Testing: Test the UI with different user roles to ensure access control and permission checks are functioning properly.
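Reading credentials from environment variables is a simple way to keep them out of the repository. The variable names below are illustrative:

public class Credentials {

    public static String username() {
        return require("APP_USERNAME");
    }

    public static String password() {
        return require("APP_PASSWORD");
    }

    // Fail fast with a clear message if the environment is not configured.
    private static String require(String name) {
        String value = System.getenv(name);
        if (value == null || value.isBlank()) {
            throw new IllegalStateException("Missing required environment variable: " + name);
        }
        return value;
    }
}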

17. Test Coverage

  • Smoke and Sanity Testing: Implement smoke tests for essential functionality to ensure the system is working after each deployment. Use sanity tests to validate specific components after bug fixes.
  • Regression Testing: Include a robust suite of regression tests to ensure that new changes do not break existing functionality.

18. Handle Browser Differences

  • Cross-Browser Testing: Ensure tests cover multiple browsers (Chrome, Firefox, Safari, Edge) to handle browser-specific issues. You can leverage tools like Selenium Grid, BrowserStack, or Sauce Labs to run tests on multiple browsers.

19. Test Maintenance

  • Regular Updates: Regularly update tests to account for changes in the UI, business logic, or new functionality.
  • Monitor Flakiness: Identify and resolve flaky tests by monitoring test execution over time. Address root causes like timing issues, unstable locators, or environmental dependencies.

20. Documentation

  • Comprehensive Documentation: Maintain up-to-date documentation for the framework, covering test setup, running tests, adding new tests, and troubleshooting issues.
  • Test Case Documentation: Document individual test cases, including test steps, expected results, and test data to ensure that other team members can easily understand and maintain them.

Folder Structure

This folder structure can vary based on the tools that you are using.

This is a sample folder structure for Java and Playwright projects.

/ui-automation-framework
├── /src
│   ├── /main
│   │   ├── /java
│   │   │   ├── /config
│   │   │   ├── /drivers
│   │   │   ├── /pages
│   │   │   ├── /utils
│   │   │   └── /reporting
│   │   └── /resources
│   └── /test
│       ├── /java
│       │   ├── /tests
│       │   └── /data
│       └── /resources
├── /logs
├── /reports
└── /docs

Detailed Breakdown

/src/main/java/

This directory contains the core framework code, utilities, and configuration.

  • /config:
      • Contains configuration files and classes such as Playwright configuration, browser settings, and environment management.
      • Example: PlaywrightConfig.java, TestBase.java
  • /drivers:
      • Manages the browser drivers for different browsers (e.g., Chromium, Firefox, WebKit) if you need custom handling for Playwright’s drivers.
      • Example: BrowserFactory.java, DriverManager.java
  • /pages:
      • Houses the Page Object Model (POM) classes for each web page. Each class in this directory represents a webpage or component, encapsulating element locators and actions (methods) specific to that page.
      • Example: LoginPage.java, HomePage.java
  • /utils:
      • Contains utility classes for common tasks like reading from files (JSON, CSV), handling waits, logging, screenshots, or browser interactions.
      • Example: WaitUtils.java, FileUtils.java, ScreenshotUtils.java
  • /reporting:
      • Handles reporting logic such as integrating with Allure or Extent Reports to generate detailed execution reports.
      • Example: ReportManager.java, AllureReportHelper.java

/src/main/resources/

This directory contains static resources such as configurations and test environment settings.

  • /resources:
      • Configuration files like application.properties, log4j.properties, and Playwright-specific settings (browser context, headless options, etc.).
      • Example: application.properties, playwright.config.json

/src/test/java/

This directory contains your test classes and test data.

  • /tests:
      • Contains test cases written using JUnit or TestNG. These classes test the actual functionality of the application by interacting with the pages (POM classes).
      • Example: LoginTest.java, HomePageTest.java
  • /data:
      • Holds external test data files (JSON, CSV, Excel) used in data-driven testing.
      • Example: testData.json, userData.csv

/src/test/resources/

Similar to /main/resources, but specifically for test-related configurations (like test environment settings).

  • /resources:
      • Environment-specific test configuration files and resources related to test execution (such as log4j-test.properties).

/logs/

  • Stores log files generated during the execution of tests. Proper logging ensures that errors are traceable and easy to debug.
  • Example: test-log.log, error-log.log

/reports/

  • Stores test execution reports. Integrate with reporting libraries like Allure or Extent Reports to generate detailed HTML or XML reports.
  • Example: index.html, test-report.xml

/docs/

  • Documentation related to the framework, such as setup guides, test plans, or architectural diagrams.
  • Example: setup-guide.md, framework-design.pdf


Chamila Ambahera

Principal Automation Engineer | Arctic Code Vault Contributor | Trained Over 500 Engineers