The first open-source implementation of the paper that will change automatic test generation is now available! In February, Meta published a paper introducing a tool that automatically increases test coverage while guaranteeing improvements over an existing code base. This is a big deal, but Meta didn't release the code. Fortunately, we now have Cover-Agent, an open-source tool you can install that implements Meta's paper to generate unit tests automatically: https://lnkd.in/eCitDjin
I recorded a quick video showing Cover-Agent in action. There are two things I want to mention:
1. Automatically generating unit tests is not new, but doing it right is difficult. If you ask ChatGPT to do it, you'll get duplicate, non-working, and meaningless tests that don't improve your code. Meta's solution only generates unique tests that run and increase code coverage.
2. People who write tests before writing the code (TDD) will find this less helpful. That's okay. Not everyone does TDD, but we all need to improve test coverage.
There are many good and bad applications of AI, but this is one I'm looking forward to making part of my life.
Automated Testing Frameworks
Explore top LinkedIn content from expert professionals.
-
Creating an E2E Automation Framework for an e-commerce application like Amazon

🔰 Start Phase: Laying the Foundation
1) API Automation Layer (see the REST Assured sketch after this post)
-> User Management: Auto-create test accounts, assign roles & preferences.
-> Product Setup: Spin up catalogs, inventory, pricing & promos via POST APIs.
-> Order & Payment: Mock cart retrieval, order placement & payment flows.
2) Data-Driven Testing Layer
-> Test Data Sources: CSV for simple tables, JSON for nested objects, direct DB queries for live data.
-> Payment Scenarios: Valid cards, expired cards, low-fund edge cases.
-> User Personas: New vs. returning vs. premium shoppers, single vs. bulk orders, domestic & international journeys.
3) Framework Architecture
-> Tech Stack: Selenium + TestNG/JUnit for UI, REST Assured for APIs, Maven/Gradle for builds.
-> Design Patterns: Page Object Model, Factory, Strategy & Builder for clean code.
-> CI & Reporting: Dockerized Jenkins pipelines + Allure for polished test reports.

⚙️ Mid Phase: Building Out Core Scenarios
1) High-Priority E2E Scenarios
-> Registration & Login: Email format, password strength, "remember me," invalid creds.
-> Product Discovery: Search keywords, filters, sort; browse categories & recommendations.
-> Shopping Cart: Add/remove items, quantity changes, variant selections.
-> Checkout Flow: Address entry/validation; card validation & payment processing.
-> Order Management: Confirmation page, email notification, status history & refunds.
2) Test Execution Flow
-> Setup: Kick off API calls to prep users, products & promos.
-> Data Load: Push test scenarios & persona data into the system.
-> UI Tests: Execute journeys from registration through order tracking.
-> Validation: Cross-check UI elements, DB states & API responses.
-> Cleanup & Reporting: Tear down data, capture screenshots & generate metrics.

Maintenance/Improvement Phase: Scaling & Enhancing
1) UX & Performance Automation
-> Load Times: Measure navigation & DOM-ready metrics.
-> Visual Regression: Automated screenshot diffs across browsers.
-> Accessibility: WCAG checks, keyboard navigation & screen-reader tests.
2) Cross-Platform Coverage
-> Browsers: Chrome, Firefox, Safari, Edge across multiple versions.
-> Devices: Desktop breakpoints, tablet viewports & mobile touch interactions.
-> Mobile App (future): Extend the same patterns to native/hybrid apps.
3) Continuous Improvement
-> Shift-left performance checks in the CI pipeline.
-> Enrich data-driven scenarios with real customer insights.
-> Integrate AI-powered visual testing for faster UI comparisons.

-x-x-
Coding Interview Prep for Manual Testers & QA: https://lnkd.in/ggXcYU2s
Become a Playwright TypeScript pro and build an E2E framework: https://lnkd.in/gSDPSqeC
#japneetsachdeva
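To make the API Automation Layer above concrete, here is a minimal sketch of a test-data setup call using REST Assured and TestNG. The base URI, the /users endpoint, and the payload fields are illustrative assumptions rather than the real application's API.

import io.restassured.RestAssured;
import io.restassured.http.ContentType;
import io.restassured.response.Response;
import org.testng.annotations.BeforeClass;

public class TestDataSetup {

    @BeforeClass
    public void createTestUser() {
        // Assumed test-environment base URL; point this at your own API under test
        RestAssured.baseURI = "https://test-env.example.com/api";

        Response response = RestAssured
                .given()
                    .contentType(ContentType.JSON)
                    .body("{\"email\": \"qa.user@example.com\", \"role\": \"PREMIUM_SHOPPER\"}")
                .when()
                    .post("/users")              // hypothetical user-management endpoint
                .then()
                    .statusCode(201)             // expect the test account to be created
                    .extract().response();

        // Keep the generated id around so UI tests and cleanup can reference it
        String userId = response.jsonPath().getString("id");
        System.out.println("Created test user: " + userId);
    }
}

The same pattern extends to product setup and order/payment mocks: seed everything through the API first, then let the Selenium layer focus purely on the UI journey.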
-
#InterviewQuestion: Can you walk me through how you’ve designed an automation framework from scratch? What approach did you take and why? (applicable for experienced professionals)

My answer: Sure. I’ve designed several automation frameworks over the years, and one of the most practical and scalable ones I built was for a large e-commerce web application. I used a modular approach combining the Page Object Model (POM) with data-driven testing using TestNG, Java, and Maven. When I started, I focused on three main goals: reusability, maintainability, and ease of integration with CI/CD.

Here’s how I structured it (a minimal sketch follows after this post):
* I created a BaseTest class to handle WebDriver initialization and teardown logic.
* All page-related elements and actions went into separate Page Object classes, so if a UI element changed, I only had to update it in one place.
* I separated test data from test logic. For example, I pulled data from Excel files using Apache POI, and in another project I used JSON to make test data easier to version control.
* I wrote custom utility classes for actions like dropdown selection, waits, screenshots on failure, and logging using Log4j.
* To improve reporting, I integrated ExtentReports so the QA team and stakeholders could quickly see pass/fail status with screenshots and test logs.
* Finally, I set it up in Jenkins to run the test suite daily and also trigger it on pull requests.

For example, in this project our manual regression cycle used to take over 2 days. After the framework implementation, we cut that down to about 3 hours, and the suite became reliable enough to run unattended every night. So overall, I focused on a clean structure, loose coupling, and making the framework flexible enough to adapt to the new changes we get frequently in our project/sprint.

Do you want to add anything more to it? Please feel free to add. #QAAutomation #Automationtesting #TestAutomation #Testautomationframework
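A minimal sketch of the BaseTest and Page Object structure described in the answer (Selenium + TestNG). Browser-factory logic, waits, screenshots-on-failure, and Log4j wiring are left out for brevity, and the locators are illustrative.

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.AfterMethod;
import org.testng.annotations.BeforeMethod;

public class BaseTest {

    protected WebDriver driver;

    @BeforeMethod
    public void setUp() {
        // Start a fresh browser before every test; a factory class could switch browsers via config
        driver = new ChromeDriver();
        driver.manage().window().maximize();
    }

    @AfterMethod
    public void tearDown() {
        // Always close the browser, even if the test failed
        if (driver != null) {
            driver.quit();
        }
    }
}

// One Page Object per page: locators and actions live in a single place
class LoginPage {

    private final WebDriver driver;
    private final By email = By.id("email");         // illustrative locators
    private final By password = By.id("password");
    private final By loginButton = By.id("login");

    LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    void loginAs(String user, String pass) {
        driver.findElement(email).sendKeys(user);
        driver.findElement(password).sendKeys(pass);
        driver.findElement(loginButton).click();
    }
}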
-
Automation is more than just clicking a button.

While automation tools can simulate human actions, they don't possess human instincts to react to various situations. Understanding the limitations of automation is crucial to avoid blaming the tool for our own scripting shortcomings.

📌 Encountering Unexpected Errors: Automation tools cannot intuitively handle error messages or auto-resume test cases after a failure. Testers must investigate execution reports, refer to screenshots or logs, and provide precise instructions to handle unexpected errors effectively.
📌 Test Data Management: Automation testing relies heavily on test data. Ensuring the availability and accuracy of test data is vital for reliable testing. Testers must consider how the automation script interacts with the test data, whether it retrieves data from databases, files, or APIs. Additionally, generating test data dynamically can enhance test coverage and provide realistic scenarios.
📌 Dynamic Elements and Timing: Web applications often contain dynamic elements that change over time, such as advertisements or real-time data. Testers need to use techniques like dynamic locators or explicit waits to handle these elements effectively (see the sketch after this post). Timing issues, such as synchronization problems between application responses and script execution, can also impact test results and require careful consideration.
📌 Maintenance and Adaptability: Automation scripts need regular maintenance to stay up to date with application changes. As the application evolves, UI elements, workflows, or data structures might change, causing scripts to fail. Testers should establish a process for script maintenance and ensure scripts are adaptable to accommodate future changes.
📌 Test Coverage and Risk Assessment: Automation testing should not aim for 100% test coverage in all scenarios. Testers should perform risk assessments and prioritize critical functionalities or high-risk areas for automation. Balancing automation and manual testing is crucial for achieving comprehensive test coverage.
📌 Test Environment Replication: Replicating the test environment ensures that automation scripts run accurately and produce reliable results. Testers should pay attention to factors such as hardware, software versions, configurations, and network conditions to create a robust and representative test environment.
📌 Continuous Integration and Continuous Testing: Integrating automation testing into a continuous integration and continuous delivery (CI/CD) pipeline can accelerate the software development lifecycle. Automation scripts can be triggered automatically after each code commit, providing faster feedback on the application's stability and quality.

Let's go beyond just clicking a button and embrace automation testing as a strategic tool for software quality and efficiency. #automationtesting #automation #testautomation #softwaredevelopment #softwaretesting #softwareengineering #testing
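As a small illustration of the "dynamic locators or explicit waits" point, here is a hedged Selenium 4 (Java) sketch; the locator and timeout are assumptions for the example.

import java.time.Duration;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.WebElement;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.openqa.selenium.support.ui.WebDriverWait;

public class DynamicElementExample {

    public WebElement waitForPrice(WebDriver driver) {
        // Wait up to 10 seconds for a dynamically rendered element instead of a brittle Thread.sleep()
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        return wait.until(
                ExpectedConditions.visibilityOfElementLocated(By.cssSelector("[data-testid='price']")));
    }
}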
-
With new mobile devices constantly entering the market, ensuring compatibility is more challenging than ever. Compatibility issues can lead to poor user experiences, frustrating users with crashes and functionality problems. Staying ahead with comprehensive testing across a wide range of devices is crucial for maintaining user satisfaction and app reliability. I would like to share the strategy that I have used for compatibility testing of mobile applications.

1️⃣ Early Sprint Testing: Emulators
During the early stages of development within a sprint, leverage emulators. They are cost-effective and allow for rapid testing, ensuring you catch critical bugs early.
2️⃣ Stabilization Phase: Physical Devices
As your application begins to stabilize, transition to testing on physical devices. This shift helps identify real-world issues related to device-specific behaviors, network conditions, and more.
3️⃣ Hardening/Release Sprint: Cloud-Based Devices
In the final stages, particularly during the hardening or release sprint, use cloud-based device farms. This approach ensures your app is tested across a wide array of devices and configurations, catching any last-minute issues that could impact user experience.

Adopting this three-tiered approach ensures comprehensive test coverage, leading to a more reliable and user-friendly application. What strategy are you adopting for testing your mobile apps? Please share your views in the comments. #MobileTesting #SoftwareTesting #QualityAssurance #Testmetry
-
Keeping Your Tests Clean: Best Practices for Test Data Cleanup in Selenium (Java)

Ensuring a clean testing environment is crucial for reliable and repeatable Selenium tests. Test data clutter can lead to unexpected behaviour and mask actual bugs. Let's dive into best practices for test data cleanup using Selenium in Java, along with a code example to illustrate!

Best Practices:
Database Isolation: Use a separate database instance dedicated to testing. This allows for easy data manipulation without affecting the production environment. Consider tools like DBUnit for database backups and restoration before/after test runs.
Test Data Seeding: Pre-populate the test database with known data relevant to your test cases. Utilize tools like JPA or Hibernate for data manipulation within your tests.
Test Cleanup Methods: Implement methods to clean up test data after each test execution. These methods can perform actions like deleting test users, orders, or entries created during the test.
Utilize Testing Frameworks: Leverage annotations like @AfterMethod from TestNG or @After from JUnit to ensure the cleanup is executed regardless of the test outcome.

Code Example (TestNG):

@Test
public void testLogin() {
    // Login logic using Selenium
    // ...
}

@AfterMethod
public void cleanUp() {
    // Delete test user data from the database (see the fuller sketch after this post)
    // ...
}

#SeleniumTesting #JavaAutomation #TestAutomationFramework #DatabaseTesting #TestNg #JUnit #CleanCode
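One possible way to fill in the "delete test user data" step from the example above is plain JDBC inside a TestNG @AfterMethod hook. The JDBC URL, credentials, table and column names are assumptions for illustration; DBUnit or an ORM would work just as well.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import org.testng.annotations.AfterMethod;

public class CleanupExample {

    private static final String TEST_USER_EMAIL = "qa.user@example.com"; // hypothetical test account

    @AfterMethod(alwaysRun = true)   // run the cleanup even when the test itself failed
    public void cleanUpTestUser() throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/testdb", "test", "test"); // assumed test database
             PreparedStatement stmt = conn.prepareStatement("DELETE FROM users WHERE email = ?")) {
            stmt.setString(1, TEST_USER_EMAIL);
            stmt.executeUpdate();
        }
    }
}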
-
Here’s my step-by-step action plan whenever I work with a client to help them get a new automation project started. Maybe it’s useful to you, too.

0. Write a single, meaningful, efficient test. I don’t care if it’s a unit test, an integration test, an E2E test or whatever, as long as it is reliable, quick and produces information that is valuable. (A minimal example follows after this post.)
1. Run that test a few times locally so you can reasonably assume that the test is reliable and repeatable.
2. Bring the test under version control.
3. Add the test to an existing pipeline or build a pipeline specifically for the execution of the test. Have it run on every commit or PR, or (not preferred) every night, depending on your collaboration strategy.
4. Trigger the pipeline a few times to make sure your test runs as reliably on the build agent as it does locally.
5. Improve the test code if and where needed. Run the test locally AND through the pipeline after every change you make to get feedback on the impact of your code change. This feedback loop should still be VERY short, as we’re still working with a single test (or a very small group of tests, at the most).
6. Consider adding a linter for your test code. This is an optional step, but one I do recommend. At some point, you’ll probably want to enforce a common coding style anyway, and introducing a linter early on is way less painful. Consider being pretty strict. Warnings are nice and gentle, but easy to ignore. Errors, not so much.
7. Only after you’ve completed all the previous steps can you start adding more tests. All these new tests will now be linted, put under version control and run locally and on a build agent, because you made that part of the process early on, thereby setting yourself up for success in the long term.
8. Make refactoring and optimizing your test code part of the process. Practices like (A)TDD have this step built in for a reason.
9. Once you’ve added a few more tests, start running them in parallel. Again, you want to start doing this early on, because it’s much harder to introduce parallelisation after you’ve already written hundreds of tests.
10 - ∞. Rinse and repeat.

Forget about ‘building a test automation framework’. That ‘framework’ will emerge pretty much by itself as long as you stick to the process I outlined here and don’t skip the continuous refactoring.
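For step 0, a single, meaningful, efficient test could be as small as the following TestNG unit test. The PriceCalculator class and its discount rule are made up purely for illustration; the point is that the first test is fast, deterministic, and tells you something valuable.

import static org.testng.Assert.assertEquals;
import org.testng.annotations.Test;

public class PriceCalculatorTest {

    @Test
    public void appliesTenPercentDiscountAboveThreshold() {
        PriceCalculator calculator = new PriceCalculator();
        // 100.00 with a 10% discount should come out at 90.00
        assertEquals(calculator.discountedPrice(100.00), 90.00, 0.001);
    }
}

// Tiny production-code stand-in so the example is self-contained
class PriceCalculator {
    double discountedPrice(double price) {
        return price >= 50.0 ? price * 0.9 : price;
    }
}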
-
🚀 Hybrid Framework | Data-Driven Testing | Apache POI
📅 #MyDailyLearning in Test Automation – Selenium | Java | TestNG

Today, I focused on implementing data-driven testing within a Hybrid Framework, combining the power of TestNG, Page Object Model (POM), and Apache POI to handle dynamic test scenarios efficiently.

✅ What I Achieved:
📊 1. Prepared Test Data: Created test data in an Excel file and placed it under the testData folder for better project structure.
🛠 2. Excel Utility Class: Built a reusable ExcelUtility class inside the utilities package, using Apache POI to read data row by row, cell by cell.
📦 3. DataProvider Integration: Created a DataProviders class in the utilities package and integrated the @DataProvider annotation to feed data into the test methods dynamically.
🧱 4. Updated Page Object Class: Modified locators and methods to support the new test data, ensuring a clean separation of logic and UI elements.
🧪 5. Connected Everything Together: Linked the data provider to the test case using @Test(dataProvider = ""). Now the tests can run with multiple sets of data from Excel 🎯

🔍 Why This Matters:
✅ High reusability
✅ Easy to scale and maintain
✅ Separates test logic and test data
✅ Enables testing with various inputs in just one run

📌 Next, I’ll be enhancing the framework by integrating custom listeners, logging, and extent reporting for better tracking and analysis.

Let’s connect if you're working on automation frameworks or want to exchange ideas! #TestAutomation #SeleniumJava #ApachePOI #HybridFramework #DataDrivenTesting #TestNG #PageObjectModel #AutomationEngineer #QA #SDET #ExcelDrivenTesting #MyLearningJourney #Java

Below is the sample code which I have used to implement this.
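The author's own sample code isn't included above; as a stand-in, here is a minimal sketch of the pieces the post describes: an Apache POI reader feeding a TestNG @DataProvider. The file path, sheet name, and two-column (username/password) layout are assumptions, and the utility and provider are merged into one class for brevity instead of living in separate utilities classes.

import java.io.FileInputStream;
import java.io.IOException;
import org.apache.poi.ss.usermodel.DataFormatter;
import org.apache.poi.ss.usermodel.Row;
import org.apache.poi.ss.usermodel.Sheet;
import org.apache.poi.ss.usermodel.Workbook;
import org.apache.poi.xssf.usermodel.XSSFWorkbook;
import org.testng.annotations.DataProvider;
import org.testng.annotations.Test;

public class LoginDataDrivenTest {

    // Reads every data row (skipping the header row) from the Excel file into a 2D array
    @DataProvider(name = "loginData")
    public Object[][] loginData() throws IOException {
        try (FileInputStream fis = new FileInputStream("testData/LoginData.xlsx");
             Workbook workbook = new XSSFWorkbook(fis)) {
            Sheet sheet = workbook.getSheet("Sheet1");
            DataFormatter formatter = new DataFormatter();
            int lastRow = sheet.getLastRowNum();            // header is row 0, data starts at row 1
            Object[][] data = new Object[lastRow][2];
            for (int i = 1; i <= lastRow; i++) {
                Row row = sheet.getRow(i);
                data[i - 1][0] = formatter.formatCellValue(row.getCell(0)); // username
                data[i - 1][1] = formatter.formatCellValue(row.getCell(1)); // password
            }
            return data;
        }
    }

    @Test(dataProvider = "loginData")
    public void loginTest(String username, String password) {
        // The actual Selenium steps would call the Page Object here
        System.out.println("Running login test for: " + username);
    }
}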
-
Automation ALONE won't give you the coverage you're looking for. It needs to go hand in hand with manual testing.

✅ Automation won’t yield instant results
✅ Automation usually comes with a high upfront cost
✅ Your mindset is ready. What’s missing for successful adoption?
👉 A clear, step-by-step strategy.

Here’s what I've seen working for our customers:
🎯 Define why you're thinking about automation, what the ideal end-state would be and, based on that, you'll be able to define the metrics that will help you measure your ROI (hint: the end-state can't be to replace manual testing)
🔍 Evaluate your existing tests to determine which ones are good candidates for automation (hint: they need to be run frequently, be technically feasible, etc.)
🛠️ Choose tools that best match your team's skills and can scale across teams (hint: if your team can't write code, there are low-code/no-code automation tools. If they want to learn how to code, these tools offer an easy on-ramp towards coded automation)
👥 Ensure your team has the necessary skills and training for test automation (hint: don't underestimate the need for proper education around test automation strategy. If you start it wrong, it's hard to scale later)
🌱 Start small and scale gradually (hint: this is key to capture the value/ROI in small steps from the beginning)
📈 Continuously monitor automation performance and refine your strategy (hint: if you're not getting ROI, something is wrong with your automation strategy. Always monitor your metrics)
⚖️ Leverage the strengths of both manual and automated testing for a comprehensive testing approach (hint: all automated testing enables is speed in test execution. Combining your slower, but critically valuable, manual test executions with your super-fast automated test executions will be key to achieving your desired coverage)

By following these steps, I've seen our customers navigate the complexities of automation adoption and achieve a more efficient, reliable, and scalable testing process. 🚀 What other advice would you share? 🫵

#AutomationStrategy #SoftwareTesting #TestAutomation #QualityEngineering #SoftwareQuality
Derek E. Weeks | Mike Verinder | Lucio Daza | Mush Honda | Gokul Sridharan | Hanh Tran (Hannah), MSc. | Daisy Hoang, M.S. | Parker Reguero | Florence Trang Le | Ritwik Wadhwa | Mihai Grigorescu | Srihari Manoharan | Phuong Nguyen
-
Agentic Testing: a New Chapter in Software Quality

The last few weeks have shown something fascinating: the same AI coding agents we use every day in our IDEs are quietly becoming testing agents. Not theoretical research toys, but practical, reasoning-driven workers that can analyse codebases, explore APIs, drive UIs with MCP tools, and even generate and evolve entire test suites.

I’ve written a deep dive exploring this: https://lnkd.in/dQY4b7bz

In the article I cover:
- How coding agents already behave like general-purpose engineering agents
- White-box agentic testing: reading your code, analysing coverage, simulating execution
- Black-box agentic testing: exploratory API and UI testing via the terminal, Playwright MCP and Chrome DevTools MCP
- Real examples: Spring Boot backend + React frontend
- The benefits, limitations and the cost problem
- How agentic testing fits (and doesn’t try to replace) the classic test pyramid

If you’re curious how AI can transform the daily work of testers, SDETs and developers, give the article a read. I’d love to hear your experiences and whether you see these agents becoming part of your testing workflow.