Comparison of Robot Framework and pytest for functional tests automation

Robot Framework and pytest are two popular test automation frameworks in the Python community. At first glance, their usage seems to be different. Robot Framework is advertised as a “generic test automation framework for acceptance testing” and the first example on its home page is a login/password test written in a DSL, which puts it in the black-box testing category. The pytest home page, on the other hand, shows a unit test of a Python method, hence is more white-box-testing oriented.

So is the situation clear? On one side Robot for not-so-technical QA doing automation on the final product, and on the other side pytest for developers writing unit tests? Not so sure, as pytest can be used for high-level functional test automation as well (the pytest home page mentions “complex functional testing for applications”). So how do both frameworks compare for functional test automation?

Let’s start by writing a minimal test in both frameworks. For our simple example, the SUT will be git, and we will check the output of the git --version command called from the command line.

The test in pytest:

import subprocess

def test_command_line():
    # run git and capture its standard output
    p = subprocess.Popen('git --version', shell=True, stdout=subprocess.PIPE)
    assert p.stdout.read() == 'git version 2.15.2 (Apple Git-101.1)\n'

And its execution:

======================================== test session starts ========================================
platform darwin -- Python 2.7.15, pytest-3.7.2, py-1.5.4, pluggy-0.7.1
rootdir: /Users/laurent/Development/tmp/pytest, inifile:
collected 1 item . [100%]
===================================== 1 passed in 0.02 seconds ======================================

The test in Robot Framework:

*** Settings ***
Library  Process

*** Test Cases ***
test_command_line
    ${result} =  Run Process  git  --version
    Should Be Equal  ${result.stdout}  git version 2.15.2 (Apple Git-101.1)

And its execution:

pybot test_with_robot.robot
Test With Robot
test_command_line | PASS |
Test With Robot | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
Output: /Users/laurent/Development/tmp/pytest/output.xml
Log: /Users/laurent/Development/tmp/pytest/log.html
Report: /Users/laurent/Development/tmp/pytest/report.html

On this specific example, the test syntax is quite similar. There is not much boilerplate code in pytest, so the advantage of Robot’s DSL is not so obvious. A first difference we can note concerns the report/log: by default pytest does not produce any HTML report/log file whereas Robot does. To get an HTML report with pytest, one needs to install the pytest-html plugin and invoke pytest with the argument --html=report.html.
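For instance:

pip install pytest-html
pytest --html=report.html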

Here is a quick comparison of other useful features:

| Feature | pytest | Robot Framework |
| --- | --- | --- |
| Setup/Teardown | xunit setup | setup and teardown |
| Tagging/marking tests | markers | tags |
| Management of expected failures | xunit skipping tests | non critical tests |
| Print/debug messages on stdout while tests are running | pass the -s option to pytest | Log To Console |
| Rerunning failed tests | rerunning-only-failures-or-failures-first | re-executing-failed-test-cases |
| Running tests in parallel | xdist and pytest-parallel | pabot |
| Stopping after the first (or N) failures | stopping-after-the-first-or-n-failures | not available |
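To make the tagging/marking row concrete, here is a minimal sketch in both frameworks (the test name and the tag are made up). In pytest:

import pytest

@pytest.mark.smoke
def test_login():
    assert True

And in Robot Framework:

*** Test Cases ***
test_login
    [Tags]  smoke
    Should Be True  ${True}

The tagged subset is then selected with pytest -m smoke on one side and pybot --include smoke on the other.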

On a more general note, Robot should be easier to start with for people with weaker development skills, as it uses a simple test-oriented DSL with rich built-in libraries. But to build more complex/complete automation, users will have to switch to Python code anyway at some point. On the other hand, pytest requires starting with a full-fledged language, but this comes with many benefits as well (strong IDE support, static code analysis, etc.). So all in all, it seems that both frameworks can be used for functional test automation with no obvious winner.

For another comparison of the two frameworks, see the blog post “The classic dilemma for testers – Robot or pytest” from Opcito.

And if you have any input on this topic, feel free to leave a comment!

Robot Framework, REST and JSON with RESTinstance

This is mostly a follow-up to the article Robot Framework, REST and JSON. As that article is now 5 years old, the situation has evolved a bit, and recently a new REST library for Robot Framework got some attention: RESTinstance. So let’s take a quick look at it.

Before testing this new lib, let’s rewind a bit. First we need a product to test via a REST API, and we will choose one that returns JSON. For example, a GET on our endpoint should return:

   "api": "rest",
   "framework": "robot-framework"

Here is how we checked the response of that endpoint in the previous article using requests library:

*** settings ***
Library  Collections
Library  requests

*** test cases ***
check endpoint response
    ${result} =  get
    Should Be Equal  ${result.status_code}  ${200}
    ${json} =  Set Variable  ${result.json()}
    ${framework} =  Get From Dictionary  ${json}  framework
    Should Be Equal  ${framework}  robot-framework
    ${api} =  Get From Dictionary  ${json}  api
    Should Be Equal  ${api}  rest

And here is how it can be checked using RESTinstance library:

*** settings ***
Library  REST

*** test cases ***
check endpoint response
    GET  /framework/robot-framework/api/rest
    Object  response body
    String  response body api  rest
    String  response body framework  robot-framework

So when we do a GET, we create a new instance of an object that can then be used by the other keywords of the library. This object can be written to the console log using the Output keyword.
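The call itself is a single keyword; if I read the library right, invoking Output without arguments dumps the whole instance:

Output

In my case, I get this output: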

The current instance as JSON is:
{
    "spec": {},
    "request": {
        "body": null,
        "scheme": "http",
        "sslVerify": true,
        "headers": {
            "Content-Type": "application/json",
            "Accept": "application/json, */*",
            "User-Agent": "RESTinstance/1.0.0b35"
        },
        "allowRedirects": true,
        "timestamp": {
            "utc": "2018-08-15T19:23:04.046254+00:00",
            "local": "2018-08-15T21:23:04.046254+02:00"
        },
        "netloc": "",
        "url": "",
        "cert": null,
        "timeout": [ ... ],
        "path": "/framework/robot-framework/api/rest",
        "query": {},
        "proxies": {},
        "method": "GET"
    },
    "response": {
        "seconds": 0.173237,
        "status": 200,
        "body": {
            "framework": "robot-framework",
            "api": "rest"
        },
        "headers": {
            "Content-Length": "56",
            "Server": "Google Frontend",
            "X-Cloud-Trace-Context": "8e86c2afa42b5380847247dbdc8a7e24",
            "Date": "Wed, 15 Aug 2018 19:23:03 GMT",
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json; charset=ISO-8859-1"
        }
    },
    "schema": {
        "exampled": true,
        "version": "draft04",
        "request": {
            "body": {
                "type": "null"
            },
            "query": {
                "type": "object"
            }
        },
        "response": {
            "body": {
                "required": [ ... ],
                "type": "object",
                "properties": {
                    "framework": {
                        "enum": [ ... ],
                        "type": "string",
                        "example": "robot-framework"
                    },
                    "api": {
                        "enum": [ ... ],
                        "type": "string",
                        "example": "rest"
                    }
                }
            }
        }
    }
}

So when we do String  response body api  rest, we are exploring this object and checking the content of ["response"]["body"]["api"], but as we can see there is much more that can be checked. This syntax was a bit unexpected for me, but I guess one just has to get used to it.
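Following the same pattern, other parts of the instance can be asserted too; for example, a small sketch checking the status code (keyword naming as I understand it from the README):

GET      /framework/robot-framework/api/rest
Integer  response status  200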

The library is also quite powerful in terms of JSON schema checking (and generation). I invite you to check the README to explore it further.

Re-executing failed test cases and merging outputs with Robot Framework

In a previous post, I discussed solving intermittent issues, aka building more robust automated tests. A solution I did not mention is the simple “just give it another chance”. When you have big and long suites of automated tests (suites counting thousands of tests and lasting hours are quite classic for functional tests), you might get a couple of tests randomly failing for unknown reasons. Why not just launch only those failed tests again? If they fail once more, you are hitting a real problem. If they succeed, you might have hit an intermittent problem and you might decide to just ignore it.

Re-executing failed tests (--rerunfailed) appeared in Robot Framework 2.8. And since version 2.8.4, a new option (--merge) was added to rebot to merge outputs from different runs. As explained in the User Guide, those 2 options make a lot of sense when used together:

# first execute all tests
pybot --output original.xml tests 
# then re-execute failing
pybot --rerunfailed original.xml --output rerun.xml tests 
# finally merge results
rebot --merge original.xml rerun.xml

This will produce a single report where the second execution of the failed tests replaces the first one. So every test appears once, and for those executed twice, we see the messages of both the first and second executions:


Here, I propose to go a little bit further and show how to use --rerunfailed and --merge while:

  • writing output files in an “output” folder instead of the execution one (use of --outputdir). Having the output files written in a custom folder is quite a common practice, but it makes the whole pybot call syntax a bit more complex.
  • giving access to the log files of the first and second executions via links displayed in the report (use of Metadata). Sometimes having the “new status” and “old status” (like in the previous screenshot) is not enough: we want details on what went wrong during each execution, and the merged report alone does not provide them.

To show this let’s use a simple unstable test:

*** Settings ***
Library  String

*** Test Cases ***
stable_test
    should be true  ${True}

unstable_test
    ${bool} =  random_boolean
    should be true  ${bool}

*** Keywords ***
random_boolean
    ${nb_string} =  generate random string  1  [NUMBERS]
    ${nb_int} =  convert to integer  ${nb_string}
    Run keyword and return  evaluate  (${nb_int} % 2) == 0

The unstable_test will fail 50% of the time and the stable_test will always succeed.

And so, here is the script I propose to launch the suite:

# clean previous output files
rm -f output/output.xml
rm -f output/rerun.xml
rm -f output/first_run_log.html
rm -f output/second_run_log.html
echo "#######################################"
echo "# Running portfolio a first time      #"
echo "#######################################"
pybot --outputdir output $@
# we stop the script here if all the tests were OK
if [ $? -eq 0 ]; then
	echo "we don't run the tests again as everything was OK on first try"
	exit 0
fi
# otherwise we go for another round with the failing tests
# we keep a copy of the first log file
cp output/log.html  output/first_run_log.html
# we launch the tests that failed
echo "#######################################"
echo "# Running again the tests that failed #"
echo "#######################################"
pybot --outputdir output --nostatusrc --rerunfailed output/output.xml --output rerun.xml $@
# Robot Framework generates file rerun.xml
# we keep a copy of the second log file
cp output/log.html  output/second_run_log.html
# Merging output files
echo "########################"
echo "# Merging output files #"
echo "########################"
rebot --nostatusrc --outputdir output --output output.xml --merge output/output.xml  output/rerun.xml
# Robot Framework generates a new output.xml

and here is an example of execution (case where unstable test fails once and then succeeds):

$ ./ unstable_suite.robot

# Running portfolio a first time      #

Unstable Suite
stable_test                                       | PASS |
unstable_test                                     | FAIL |
'False' should be true.
Unstable Suite                                    | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
Output:  path/to/output/output.xml
Log:     path/to/output/log.html
Report:  path/to/output/report.html

# Running again the tests that failed #

Unstable Suite
unstable_test                                     | PASS |
Unstable Suite                                    | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
Output:  path/to/output/rerun.xml
Log:     path/to/output/log.html
Report:  path/to/output/report.html

# Merging output files #

Output:  path/to/output/output.xml
Log:     path/to/output/log.html
Report:  path/to/output/report.html

So, the first part is done: we have a script that launches the suite twice if needed and puts all the output files in the “output” folder. Now let’s update the “settings” section of our test suite to include links to the first and second run logs:

*** Settings ***
Library   String
Metadata  Log of First Run   [first_run_log.html|first_run_log.html]
Metadata  Log of Second Run  [second_run_log.html|second_run_log.html]

If we launch our script again, we will get a report with links to the first and second runs in the “summary information” section:


The script and the test can be found in a GitHub repository. Feel free to comment on this topic if you have found more tips on those Robot options.

Randomizing test execution order

Many testing frameworks offer optional randomization of the test execution order. For example:
– Robot Framework with --randomize
– RSpec with --order random
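As a quick sketch of both command lines (the tests folder is an example):

# Robot Framework: shuffle the order of the tests inside each suite
pybot --randomize tests path/to/tests
# RSpec: random order, reproducible by passing the printed seed again
rspec --order random
rspec --order random --seed 1234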

I consider this option very useful and use it by default for all the automated test portfolios I run. The advantages I see are:

– we detect ordering dependencies as soon as possible. If we execute tests A and B always in the same order, test B could work only because test A left the system in a state that test B relies on. If one day we invert the order (by renaming the tests, for example, when the order depends on alphabetical sorting), then the suite will fail, and it will take us some time to understand the problem (because tests A and B were maybe written months/years ago). The same happens if we insert a new test between A and B or refactor test A or B. If we run the tests in random order all the time, we will detect this issue very early.

– we might detect bugs in the SUT that appear only with some specific sequence of actions that a random test order could stumble upon by luck. The problem then is how to reproduce the bug we just bumped into. For RSpec, randomness is made predictable via seeding. JUnit had a randomization problem when Java 7 came out, had to think that over, and came up with a deterministic but random order. There is no such thing in Robot Framework, so we have to reproduce the failing test order manually.

– we won’t always run the same tests first. Usually, when we read a test report, we start from the top and analyse errors one by one. This helps us to not always analyze the “Access”, “Audit” and “Authentication” tests first…

One could argue that a fixed test execution order is useful to run some smoke/sanity tests first, and then the rest of the portfolio. I think in that case it is better to split this into 2 different jobs: a first “smoke test” job that runs quickly (5-10 minutes) and another “full test” job that can take several hours. In Robot Framework, this can easily be achieved using tags.
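A minimal sketch of the two jobs, assuming the quick tests are tagged smoke:

# job 1: quick feedback on the smoke tests only
pybot --include smoke tests/
# job 2: the full portfolio
pybot tests/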

Another reason to push for a fixed test execution order could be performance optimization. Test A could be preparing the system for test B to start, and when test B ends, the system is ready for test C to go. One of the reasons this is a bad pattern is that you won’t be able to run only test B or only test C! If the setup of a given test is in the previous test, then you are doomed to always run the full portfolio. This is simply not bearable: when a full portfolio run detects a couple of failed tests, we want to be able to run those tests once more to double-check they are failing, and then start to analyse the problem.

We could also introduce randomness in the tests themselves, but this is another topic… for a future post!

Live-coding session of Robot Framework on Mac

Back in November, I gave a talk at SoftShake. As mentioned in my feedback post, I included some live-coding during the session. For the record, I’d like to share the set of tools I used to be comfortable doing this Robot Framework live-coding on my Mac.

1) Screen resolution: once plugged into the projector, the available resolution suddenly becomes very low. So it is important to rehearse the session using this low resolution, otherwise it will be very disturbing on the actual day. Choosing the largest display in Preferences should give a resolution close to the projector’s when we don’t have access to one.


2) Window manager: I use Spectacle to quickly change the window arrangement. There are keyboard shortcuts to display windows on the full, half or quarter of the screen. Once you get used to it, you can set 4 different windows on the screen in a couple of seconds.


3) Terminal: I use iTerm2. In order to quickly get 4 small shell windows, I place them on the screen using Spectacle and save the window arrangement in iTerm2. Then I can associate a keyboard shortcut with this arrangement to restore it very quickly during the presentation.

[screenshot: iTerm2 “Restore Window Arrangement”]


4) Text editor: I use TextMate 2 (pre-built version here) with the Robot Framework Bundle. The bundle offers keyboard shortcuts like *s⇥, which inserts the *** settings *** header, and nice color highlighting. There is no keyword completion though…



5) Live execution of the tests: I wanted to obtain an effect similar to the Infinitest plugin, with which each time a change is made to the source code, Infinitest runs the tests in the IDE. So I installed fswatch to run a script every time a directory content is modified (i.e. every time my test is updated) and created a small script launched on such an event (the script does a clear + launches Robot).
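With a recent fswatch, that watch loop could look like this (the paths are assumptions):

# rerun the Robot suite each time something under tests/ changes
fswatch -o tests/ | while read events; do
    clear
    pybot tests/
done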



So, I chose to display 4 windows during the live-coding:
1) top-left: live-execution of Robot Framework
2) top-right: source code of the Test modified live
3) bottom-left: the Software Under Test, Jenkins for this session (either its file structure or its UI)
4) bottom-right: a web browser with the Robot Framework library documentation, to show the keywords I used in my code.



During the session, the test regularly went from green to red when I added a not-yet-created keyword or used a keyword from a library I had forgotten to import. So the idea was to save the file as often as possible to see the result change color at the same time.

An idea for a future session could be to do the full BDD-TDD test-code-refactor cycle:
– show a spec of dev to do
– write the failing functional test with Robot
– write the failing unit test in Java with any unit test framework
– write the minimal code to get the unit test green
– refactor the code and the test
– launch the functional test and update the code to make it green if it is still red

This could even be performed by 2 people: one QA and one dev… We’ll see if I ever try this!

Robot Framework Test Automation – The Book

Packt Publishing recently released a new book, “Robot Framework Test Automation” by Sumit Bisht. The book is a quick and easy 100-page read that can be useful to those who find the user guide a bit too dry.

The first chapter helps go through the installation steps but fails at giving a clear picture of the Robot Framework ecosystem: a diagram shows a “testing tool” component being the one that actually targets the SUT, while Robot Framework sits more on the user side as a coordinator. This is just one way to use Robot (a way that is later described in more detail in the sections about Robot+Selenium and Robot+Sikuli). But Robot can also test the SUT directly through its native libraries or some custom libraries.

The second chapter is quite clear about the different file formats and the organization of the test portfolio. The third chapter treats the actual creation of test cases, with some material about syntax and libraries, but it lacks examples going from the easiest to more complex cases. Chapter four discusses associating Robot with other testing tools (as mentioned before), and finally the last chapter helps with generating standard and custom reports.

A topic that is not covered is who the “ideal” Robot Framework user is. It is implied in chapter three that “developers and stakeholders” will collaborate in writing the tests. My experience is rather that a tool like Robot Framework is a very good fit for a QA/testers team that is distinct from devs and stakeholders. For such a team, it makes sense to have a specific tool/DSL for testing rather than coding the functional/acceptance tests in the language of the product.

Globally, though the book is a good read, it fails at being a real “missing manual” compared to what the User Guide already offers. The book would maybe have benefited from taking an alternate approach with many more examples, like a cookbook, and more experience from the field. Anyway, thanks to Sumit and Packt for the effort of sharing some knowledge about Robot Framework!

Automation of Functional Tests with Robot Framework at SoftShake 2013

Last October 24th-25th, I went to the SoftShake conference in Geneva. The conference was very interesting, and I was happy to see a lot of source code and live coding/demos. I was given the chance to make a presentation about the automation of functional tests with Robot Framework. Here follows some feedback on it.

The slides can be found below, though they miss what I consider to be the most interesting part: the live coding. To show practical examples of code, I wrote some tests with Jenkins as the SUT. Those are very simple tests, but they helped people get a better idea of what kind of tests can be written with Robot. Those little tests can be found on GitHub. I used fswatch to have the tests executed every time I saved them, so with a split screen code+robot it looked like an infinitest setup.

The questions I got after the presentation are worth noting, as they give some information about what people care/think about when they discuss test automation:

1) Is it possible to have variables in the middle of a Robot Framework keyword? The question arose because in the Robot code I showed and wrote, I always write:

keyword  argument1  argument2
e.g.  file should exist in folder  file name  directory name

where in fact I should use the facility offered by Robot to put arguments inside keyword names, as described in the documentation. This helps end up with tests that are easier to read. Here is a simple example:

*** test cases ***
Embedding arguments into keyword name
    number 2 and 2 should be equal

*** Keywords ***
number ${x} and ${y} should be equal
    should be equal  ${x}  ${y}

2) “With a tool like Robot Framework, are QA supposed to commit the tests to the SCM?”. The answer is yes, as the tool makes the most sense when integrated in a continuous integration environment that retrieves the tests from an SCM. But the question is worth asking because in my previous company, having QA use the SCM was not an obvious step to make. With some non-technical, rather functional QA, we might have a group that is more at ease with Office documents and filesystems than with developer tools. Personally, I consider using the same tools to be a very good opportunity to bring the QA and dev teams closer. (Further reading on this topic: “Technical artifacts including test automation and manual regression test scripts (if any) belong in the Source Control System versioned with the associated code” by Elisabeth Hendrickson.)

3) Is it possible to write keywords in Java rather than Python? The answer is yes, although I consider Python a good choice and more homogeneous with the syntax used in the Robot DSL.

4) Finally, there were several questions about what “component tests” are and what entry points could be used in the SUT to write such tests. My definition of component tests is rather loose: everything that sits between unit tests and end-2-end tests! That’s explanation enough for developer-type people, who can envision a REST API for example. It is more complicated to grasp for non-technical QA or managers. I was, once more, surprised to see that for many people there were 2 obvious ways to test a product: unit tests written by developers (white-box testing) and GUI-driven tests written by QA (black-box testing)… Some more evangelism to do for gray-box testing, I guess!

So this was a good opportunity to share my experience with Robot, and being challenged with unexpected questions is a very good way to progress. So I might be back with other sessions in the future…


Robot Framework and Excel

It can be useful to read Excel documents within test cases for two reasons: to get some input for a test, and to check the output of a test when an Excel document is produced by the SUT. I propose to discuss those 2 cases quickly and see how easy it is to achieve this with Robot Framework.

The main reason I see for Excel documents being a good candidate for test case input data is that Excel sheets are able to do computations (quite complicated ones, in fact). In my previous job, the expected results of our test cases were often some financial computation (derivative products, rates, etc.), and the tool of choice for the Product Management and QA teams to compute the expected results of different use cases was Excel. When we chose to automate those examples, we basically copy/pasted the results from Excel to our testing tool. Until one day we thought it would be even more convenient to read the Excel file directly from our testing tool. To be honest, the idea was stolen from a demo of a tester building a data-driven test framework based on Quality Center and Excel (yes, I know…).

The other side of the coin is when the SUT produces Excel documents that we want to check. In that case, the expected values could be stored in our tests, or in another Excel file (and we could compare the results found in both).

So, here is some Robot Framework code that you could use as a start:

*** Settings ***
Library  openpyxl.reader.excel

*** variables ***
${expected_value}  expected result

*** test cases ***
Excel sheet
    ${wb} =  load_workbook  sample.xlsx
    ${ws} =  set variable  ${wb.get_active_sheet()}
    ${cell} =  set variable  ${ws.cell('A1')}
    ${actual_value} =  set variable  ${cell.value}
    should be equal  ${expected_value}  ${actual_value}

Here I open the document sample.xlsx and check that the value in the first cell matches a value I stored in a variable. I am using the openpyxl library (warning: it can read only XLSX documents). This is, IMHO, quite compact and simple code. From there, you can write your own keywords to parse the document according to your needs.
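If the parsing grows, such keywords can move to a small Python library; here is a sketch against a recent openpyxl API (the file name and cell reference are just examples):

# ExcelKeywords.py -- imported from Robot with:  Library  ExcelKeywords.py
from openpyxl import load_workbook

def get_cell_value(path, cell_ref):
    """Return the value of a single cell, e.g. get_cell_value('sample.xlsx', 'A1')."""
    workbook = load_workbook(path, read_only=True)
    return workbook.active[cell_ref].value

Each function in the file becomes a Robot keyword once the file is imported as a library.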

Some final thoughts :

  1. Robot Framework’s text format for test suites/cases is great as it fits perfectly in an SCM. And this is where we want to store our test artifacts (quote from a great article by Elisabeth Hendrickson: “Technical artifacts including test automation and manual regression test scripts belong in the Source Control System versioned with the associated code”). Unfortunately, Excel documents (even in the XLSX format, which is XML… but zipped!) are binary files, and if we store them in an SCM, we won’t be able to compare different versions. That’s a major limit to their use.
  2. As I swim in the open source world all day long, I could have come up with a post about how to read files that have a more open format than XLS… The thing is that sometimes we don’t have the choice of tools (for example, customers are asking for an export as XLSX in my product, so I will have to test it one way or another), and that is the tool with which I had the experience I wanted to share.
  3. It is interesting to see that this Excel topic is discussed for many testing tools. Some are reading Excel from JUnit, some others from Cucumber, and there was a proposition to include it by default in Robot Framework.

Oh, and thanks to Jean-Charles and Matthieu for the testing dojo in which we came up with this little snippet of code!

Robot Framework, Selenium, Jenkins, and XVFB

Until recently, my test cases written with Robot Framework were not driven by the GUI. Those tests were interacting with the SUT in many different ways: the Process library, database access, file system, REST requests, etc. Those access points usually don’t change too often during the lifetime of a product, as they often become a contract for the users/tools communicating with the product (i.e. an API). Whereas driving the SUT through the UI can be brittle and engage the team in constant refactoring of the tests. This is summarized in the famous pyramid of tests. But, having no automated GUI tests, my pyramid was headless! So I decided to invest a bit in the topic.

From there I had the choice between:

  • writing tc with Robot using the Robot Framework Selenium 2 library
    pro: integrated with other existing Robot tc + easy to start
    con: library a bit behind the official one + selenium users are mostly not using this API
  • writing tc in Java using the Selenium 2 Java binding
    pro: up to date + large community of users
    con: another set of tools to write/launch/report test cases in addition to Robot

I took the easy path and went for the Robot library. There is an excellent demo of the library which helps getting started. Before digging into the tests of my SUT, I added another 2 steps:

  1. going a bit further in my knowledge of the Chrome Developer Tools. There is an excellent online course by Code School. Those tools quickly become very useful when creating GUI-driven test cases. Note that the Firefox dev console seems to offer the same services, but somehow I feel more at ease with the Chrome ones.
  2. running a testing dojo with a group of peers. We used Jenkins as the SUT (as I already did to perform some stress test trials), and it worked quite well, although Jenkins does not seem to be the easiest web app to test, as we had to use XPath to locate elements, which can become obsolete quite fast.

I was then able to create my first tests on the SUT I am testing. Those were easier than expected, as all the elements I had to address had unique IDs! The biggest obstacle is to chain actions without using the horrible sleep keyword (see Martin again). With the Robot Selenium library, the easiest way is to use (and abuse) the “wait until page contains element” keyword.
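A minimal sketch of the pattern (the locators are made up):

*** Test Cases ***
open job configuration
    Click Element  link=my_job
    Wait Until Page Contains Element  id=main-panel  10s
    Click Element  link=Configure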

Last but not least, I needed to launch the tests using Jenkins (very easy and quite nice with the plugin) on a Linux VM… without a display! Here enters PhantomJS, a headless browser supported by the Robot Selenium library. But somehow my tests were randomly failing with it, so I gave it up and went back to Firefox. And here is the last character of this little story: Xvfb. This display server sends the graphics to memory, so there is no need to have a full/real display available. The install is not immediate, but some have shared their feedback on it, so it is quite smooth after all.

So the current configuration of my Robot Selenium tc in Jenkins is:

  • Xvfb running on the VM on which the tests run, sending the display to :89 (started as sketched below)
  • in Jenkins, in my Robot task, in “Build Environment / Set environment variable” I put “DISPLAY=:89”
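Starting Xvfb could look like this (the screen geometry is just an example):

# start a virtual display on :89 and point the processes at it
Xvfb :89 -screen 0 1280x1024x24 &
export DISPLAY=:89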

And that’s it, the tests run OK on this VM. And on top of that, when there is a failure in the tests, the Robot Selenium lib is able to take a screenshot and save it to disk to ease the analysis of the issue!

Improve the performance of your automated tests

If you google “performance of automated tests”, you will get loads of articles about “automated performance tests” but very few about how to speed up the execution of your test portfolio. Yet this is one of the goals of a test suite: it should give quick feedback. Of course, we don’t expect the same performance from unit tests (measured in seconds or minutes) as from larger system tests (measured in dozens of minutes to several hours, probably). But at any layer of the test pyramid, having faster tests is an advantage.

Here is a collection of ways to enhance the performance of the automated tests:

1) Execute Tests in Parallel
If the tests are independent (as they should be), you can split your test portfolio into pieces and run the different parts in parallel. You can start on the machine on which your tests are already running, as it probably has a multi-core processor that can run different “testing threads” in parallel. In that case, be careful to customize each install/set-up so that the different instances of your product don’t step on each other’s feet. You can also send the execution to several machines in your lab. Here are a couple of articles related to this topic (a Robot Framework sketch follows the links):
– Pivotal Labs on how to parallelize RSpec tests
– blog post on how to parallelize JUnit tests
– How to distribute Selenium tests on multiple machines with Selenium Grid
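For Robot Framework, pabot (already mentioned in the comparison table above) splits the execution per suite; a minimal sketch (the tests folder is an example):

pip install robotframework-pabot
# run the suites under tests/ on 4 parallel processes
pabot --processes 4 tests/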

2) Avoid Sleep
On the higher levels of the testing pyramid, we are testing user scenarios, and our scripts might need some pauses in order to run successfully. For example, on the filesystem level, we could have to wait for a log file to be created before checking its content. On the UI level, we might need to wait for a button to be there before clicking it. An easy way to cope with this is to add sleep() calls all over the test scripts until they pass. That, of course, should be avoided as much as possible. The first reason is that it makes the tests brittle: when we run the test on a slower machine, it might fail. Another reason is that piling up those sleep() calls makes the tests very slow. So we should instead use some kind of wait() that regularly checks for an object/event to be there. Here is how to do it in Selenium and Robot Framework (a Robot sketch follows the links):
– Presentation of WebDriverWait by Mozilla
– Wait until keyword succeeds in Robot Framework
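As an illustration on the Robot Framework side, here is a sketch for the log file example above (the file path is made up):

*** Settings ***
Library  OperatingSystem

*** Test Cases ***
check log file without sleeping
    # retry every second, for at most 30 seconds, instead of a blind sleep
    Wait Until Keyword Succeeds  30s  1s  File Should Exist  ${TEMPDIR}/app.log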

3) Share Setup and Teardown
“Every test case should be independent” does not mean that every test case should handle a full setup and a full teardown. A good pattern is to share the setup and teardown at different levels to enhance the performance of the portfolio. For example, we could have a global setup that deploys a MySQL database that some tests will use later on. We could have another setup shared by a group of a dozen test cases that adds some lines in a table of that database. Finally, each test case finishes its own setup by tweaking the database again before doing the test itself. The trick is to share the setup among a group of test cases that won’t modify what the setup configured! This is very convenient and easy to do with Robot Framework and with JUnit.
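In Robot Framework, for instance, a suite setup placed in the __init__.robot file of a folder runs once for everything under that folder (the keyword names here are hypothetical):

# tests/__init__.robot
*** Settings ***
Suite Setup     deploy mysql database
Suite Teardown  drop mysql database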

4) Focus your Optimization Effort
Another way to look at the test performance issue is to start by identifying the tests or the functions/methods/keywords that are the most time-consuming over your whole portfolio. Focusing your effort on those parts can lead to quick wins. Here are two examples:
– My humble code to measure most expensive keywords in a Robot Framework test suite
– A smart XSL on Stack Overflow to identify the longest-running unit tests in JUnit

Hope this might help some,
and don’t hesitate to post a comment with other ideas/links!

EDIT: found some slides from David Gageot on the very same topic: