I recently completed a project whose goal was to automate our functional testing infrastructure. Strictly speaking it is semi-automation, since a team of human engineers is still needed to build and fire test runs (which is not a downside; it was designed with that in mind). Below I will describe a rough plan for anyone who wants to do something similar. I won’t go into much detail, but I will give an overall workflow that is easy to follow.
The challenge here was mostly the integration of TestRail with the local QA environment, and this is what I am going to describe. If you don’t use TestRail, I am not sure this will be of much benefit to you.
I assume that if you searched for this article, you already have an idea of how to implement test automation and have installed some of the needed tools. In any case, here are some of the tools I used:
- TestRail: TestRail is described as a “Test Case Management & Test Management Software Tool”, which is actually a pretty good description. You can think of it as a CMS for organizing your tests. You can create different projects, categories, test suites and test cases, and then take some of these test suites as a whole, or specific test cases, and group them in what is called a “test run”. A test run is, let’s say, a runnable instance of the selected tests. The goal of the overall project was to automate the actual execution of each test run and report the results back to TestRail.
- Amazon Web Services (AWS): We needed one or more servers for our tests to run on, and some commodity EC2 instances served that need. You don’t have to buy something powerful to begin with; a small EC2 instance can be enough.
- Selenium: In particular, Selenium Server and WebDriver, to run each test properly, since they actually automate browser functionality and actions. Using a combination of those on the same machine you can run the tests either locally or remotely (perhaps on a cloud service like Sauce Labs or BrowserStack).
- Python with unittest to write the actual functional tests using the WebDriver API and py.test to execute the tests (perhaps in parallel) and get back their results.
- Headless Firefox, Chrome, Opera, PhantomJS and Xvfb to emulate a screen.
- Other standard languages/technologies like PHP, MySQL, git, cron, etc.
Bear in mind that this is not what was eventually implemented (it gets more complex down the road with multiple servers, parallel testing, etc.), but it was my first working prototype and it should serve as a good basis to start with.
So here is how the TestRail integration works:
1) First of all, we have to take the test run from TestRail, i.e. find out which tests are contained in the specific run. We can do that by adding a customization to TestRail’s UI. This simply means adding a button that makes an AJAX request to a special webpage, the so-called trigger page. The request will contain the run ID as a parameter, and this is what we will later use to get all the test cases contained in the specific test run through TestRail’s API. Here are two sample files to achieve that: https://github.com/gurock/testrail-custom/tree/master/automation/trigger-run-example
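To make the API side of this concrete, here is a minimal sketch in Python (the sample files linked above are PHP) of how the trigger page could ask TestRail for the tests in a run. The instance URL and credentials are placeholders; the `get_tests` endpoint itself is part of TestRail’s real API v2.

```python
import json
import urllib.request

BASE_URL = "https://example.testrail.io"  # hypothetical TestRail instance
AUTH = ("user@example.com", "api-key")    # hypothetical user / API key


def get_tests_url(run_id):
    """Build the TestRail API v2 endpoint that lists all tests in a run."""
    return f"{BASE_URL}/index.php?/api/v2/get_tests/{run_id}"


def fetch_tests(run_id, opener=urllib.request.urlopen):
    """Return the list of tests contained in the given test run."""
    req = urllib.request.Request(get_tests_url(run_id))
    req.add_header("Content-Type", "application/json")
    # The Basic-auth header (user/API key) is omitted here for brevity;
    # TestRail expects it on every API request.
    with opener(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

The `opener` parameter is injected only so the function can be exercised without a live TestRail instance.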
2) Secondly, it occurred to me that we needed a “middle-man” between TestRail and the testing environment (the server/s). In our case this role is served by a simple MySQL database. This way we can schedule test runs, keep an eye on the progress of each run, keep logs of all the tests we have run, etc., plus it can take care of some permissions-related difficulties. So, in the trigger page outlined above, we actually get the test cases contained in the test run and put them in a database table instead of executing them right away. Note that a test coming from TestRail can be identified by two different attributes: 1) the so-called test ID, or 2) a pair of run ID and case ID. Here is what I keep in the DB: test ID, run ID, case ID, title of the test, a unique hash to reference the test, the full path of the actual Python test file, and statistics like creation date-time, started date-time, completed date-time, reported date-time, status (result) and console output as text.
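The scheduling table could look like the following sketch. The table and column names are my own invention (mirroring the attributes listed above), and SQLite stands in for the MySQL database of the prototype:

```python
import sqlite3

# Columns mirror the attributes kept in the DB: TestRail identifiers,
# the unique hash, the test file path, and the lifecycle timestamps.
SCHEMA = """
CREATE TABLE IF NOT EXISTS scheduled_tests (
    id           INTEGER PRIMARY KEY AUTOINCREMENT,
    test_id      INTEGER NOT NULL,        -- TestRail test ID
    run_id       INTEGER NOT NULL,        -- TestRail run ID
    case_id      INTEGER NOT NULL,        -- TestRail case ID
    title        TEXT    NOT NULL,
    hash         TEXT    NOT NULL UNIQUE, -- unique reference for this scheduled test
    file_path    TEXT,                    -- full path of the Python test file
    created_at   TEXT    NOT NULL,
    started_at   TEXT,
    completed_at TEXT,
    reported_at  TEXT,
    status       INTEGER,                 -- TestRail result/status ID
    output       TEXT                     -- captured console output
)
"""


def init_db(path=":memory:"):
    """Open (or create) the scheduling database and ensure the table exists."""
    conn = sqlite3.connect(path)
    conn.execute(SCHEMA)
    conn.commit()
    return conn
```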
The tricky part was finding a way to uniquely identify each to-be-run test, as a tester could have added the same test case to two different runs and scheduled both for execution in the DB. To distinguish between the two you can simply create a hash for each test, for example by hashing the concatenation of the test’s attributes plus a random value (or perhaps the creation time, which will be different each time). Another tricky part was correlating each test case found in TestRail with the corresponding actual Python test file. For that, we decided with our testers to just use a prefix in each filename containing the case ID as found in TestRail; the trigger script can then scan our tests’ root directory on the server to find the correct file for each test.
3) Third, I set up two simple cronjobs: 1) to get all non-started tests and execute them, and 2) to get all previously completed tests and report the results back to TestRail. To achieve the first, a cron-powered script takes all the non-started tests from the DB and runs them using py.test. You can then identify and import the results into your DB by simply scanning the console output for successes/failures, OR you can use the XML output capabilities of py.test and parse that instead. In either case you should end up with an integer in your status column indicating the result of each test. Bear in mind that you can create your own TestRail result IDs to suit your needs; for example, I needed one for “error” and one more for “file not found” statuses. Something you should take care of is the event of cron firing your test execution script before a previous instance of the same script has finished. This can be handled by creating a simple lock file that gives permission to only one instance of the script to run.
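Here is a sketch of the two mechanisms mentioned above: a lock file guarding against overlapping cron invocations, and a parser for py.test's JUnit-style XML output (`--junitxml`) instead of scraping the console. The lock path is a placeholder; `STATUS_PASSED` and `STATUS_FAILED` use TestRail's default IDs (1 and 5), while `STATUS_ERROR` assumes a custom result ID like the ones described above:

```python
import os
import xml.etree.ElementTree as ET

LOCK_FILE = "/tmp/run_tests.lock"  # hypothetical lock-file path
STATUS_PASSED, STATUS_FAILED, STATUS_ERROR = 1, 5, 6


def acquire_lock(path=LOCK_FILE):
    """Return True if we got the lock, False if another instance is running.
    O_CREAT | O_EXCL makes the create-if-absent step atomic."""
    try:
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
        return True
    except FileExistsError:
        return False


def release_lock(path=LOCK_FILE):
    os.remove(path)


def parse_junit(xml_text):
    """Map each <testcase> in py.test's JUnit XML to a status integer."""
    results = {}
    root = ET.fromstring(xml_text)
    for case in root.iter("testcase"):
        if case.find("failure") is not None:
            results[case.get("name")] = STATUS_FAILED
        elif case.find("error") is not None:
            results[case.get("name")] = STATUS_ERROR
        else:
            results[case.get("name")] = STATUS_PASSED
    return results
```

The runner itself would wrap a `subprocess` call to `py.test --junitxml=...` between `acquire_lock()` and `release_lock()`, then write the parsed statuses into the DB.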
4) Lastly, to achieve the second goal, i.e. reporting these results back to TestRail, a second cron-powered script takes all tests marked as completed but not yet reported from the DB and makes a request back to TestRail using its API, specifying the result status of each test.
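The core of that reporting script could look like this sketch. The instance URL is a placeholder, but `add_result_for_case` is TestRail's real API v2 endpoint for posting a result against a (run ID, case ID) pair, which is exactly the pair we kept in the DB:

```python
import json
import urllib.request

BASE_URL = "https://example.testrail.io"  # hypothetical TestRail instance


def result_request(run_id, case_id, status_id, comment=""):
    """Build the URL and JSON payload for TestRail's add_result_for_case."""
    url = f"{BASE_URL}/index.php?/api/v2/add_result_for_case/{run_id}/{case_id}"
    payload = {"status_id": status_id, "comment": comment}
    return url, json.dumps(payload)


def report(run_id, case_id, status_id, comment="", opener=urllib.request.urlopen):
    """POST one test result back to TestRail and return its response."""
    url, body = result_request(run_id, case_id, status_id, comment)
    req = urllib.request.Request(
        url,
        data=body.encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    # Basic-auth header (user/API key) omitted for brevity, as before.
    with opener(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

After a successful response, the script would stamp the test's reported date-time in the DB so it is not reported twice.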
Just for the stats: the above prototype was implemented in around 1,000 lines of code in total (OOP PHP with a separate ORM library, RedBeanPHP, plus various small shell scripts).
If you need greater elaboration or a more technical workflow, let me know below. Have fun.