Comparison of Robot Framework and pytest for functional test automation

Robot Framework and pytest are two popular test automation frameworks used in the Python community. At first glance, their usage seems to be different. Robot Framework is advertised as a “generic test automation framework for acceptance testing”, and the first example on its home page is a login/password test written in a DSL, which puts it in the black-box testing category, whereas the pytest home page shows a unit test of a Python method, hence more white-box testing oriented.

So is the situation clear? On one side Robot for not-so-technical QA engineers automating tests against the final product, and on the other side pytest for developers writing unit tests? Not so sure, as pytest can be used for high-level functional test automation as well (its home page mentions “complex functional testing for applications”). So how do both frameworks compare for functional test automation?

Let’s start by writing a minimal test in both frameworks. For our simple example, the SUT will be git, and we will check the output of the git --version command called from the command line.

The test in pytest:

import subprocess

def test_command_line():
    p = subprocess.Popen('git --version', shell=True, stdout=subprocess.PIPE)
    p.wait()
    assert p.stdout.read() == 'git version 2.15.2 (Apple Git-101.1)\n'

And its execution:

pytest test_with_pytest.py
======================================== test session starts ========================================
platform darwin -- Python 2.7.15, pytest-3.7.2, py-1.5.4, pluggy-0.7.1
rootdir: /Users/laurent/Development/tmp/pytest, inifile:
collected 1 item
 
test_with_pytest.py . [100%]
 
===================================== 1 passed in 0.02 seconds ======================================
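As a side note, the test above relies on Python 2 string handling (note the Python 2.7 in the run above). On Python 3.7+, a roughly equivalent test could be written with subprocess.run; this is just a sketch, and the exact version string obviously depends on the git installed locally:

import subprocess

def test_command_line_py3():
    # subprocess.run (Python 3.5+) with capture_output/text (3.7+) avoids manual PIPE handling
    result = subprocess.run(["git", "--version"], capture_output=True, text=True)
    assert result.returncode == 0
    # only check the prefix, since the exact version string depends on the local git install
    assert result.stdout.startswith("git version 2.")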

The test in Robot Framework:

*** Settings ***
Library    Process

*** Test Cases ***
test_command_line
    ${result} =    Run Process    git    --version
    Should Be Equal    ${result.stdout}    git version 2.15.2 (Apple Git-101.1)

And its execution:

pybot test_with_robot.robot
==============================================================================
Test With Robot
==============================================================================
test_command_line | PASS |
------------------------------------------------------------------------------
Test With Robot | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==============================================================================
Output: /Users/laurent/Development/tmp/pytest/output.xml
Log: /Users/laurent/Development/tmp/pytest/log.html
Report: /Users/laurent/Development/tmp/pytest/report.html

On this specific example, the test syntax is quite similar. There is not much boilerplate code for pytest, so the advantage of Robot’s DSL is not that obvious. A first difference we can note concerns the report/log: by default pytest does not provide any HTML report/log file, whereas Robot does. To get an HTML report with pytest, one needs to install pytest-html and invoke pytest with the argument --html=report.html.

Here is a quick comparison of other useful features:

Feature | pytest | Robot Framework
Setup/Teardown | xunit setup | setup and teardown
Tagging/Marking tests | markers | tags
Management of expected failures | xunit skipping tests | non-critical tests
Print/debug messages on stdout while tests are running | pass the -s option to pytest | Log To Console
Rerun failed tests | rerunning-only-failures-or-failures-first | re-executing-failed-test-cases
Running tests in parallel | xdist and pytest-parallel | pabot
Stopping after the first (or N) failures | stopping-after-the-first-or-n-failures | not available
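
To illustrate the tagging and expected-failure rows of the table, here is a minimal pytest sketch (the marker name “smoke” is just an example and would normally be registered in pytest.ini to avoid a warning):

import sys

import pytest

@pytest.mark.smoke                 # run only these tests with: pytest -m smoke
def test_tagged_as_smoke():
    assert True

@pytest.mark.skipif(sys.platform == "win32", reason="POSIX-only behaviour")
def test_skipped_on_windows():
    assert True

@pytest.mark.xfail(reason="known flaky defect, roughly Robot's non-critical test")
def test_expected_failure():
    assert False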

On a more general note, Robot should be easier to start with for people with limited development skills, as it uses a simple test-oriented DSL with rich built-in libraries. But to build more complex or complete automation, users will have to switch to Python code at some point anyway. On the other hand, pytest requires starting with a full-fledged language, but that comes with many benefits as well (broad IDE support, static code analysis, etc.). So all in all, it seems that both frameworks can be used for functional test automation with no obvious winner.

For another comparison of the two frameworks, see the blog post “The classic dilemma for testers – Robot or pytest” from Opcito.

And if you have any input on this topic, feel free to leave a comment!

Robot Framework, REST and JSON with RESTinstance

This is mostly a follow-up to the article Robot Framework, REST and JSON. As that article is now 5 years old, the situation has evolved a bit, and recently a new REST library for Robot Framework got some attention: RESTinstance. So let’s take a quick look at it.

Before testing this new lib, let’s rewind a bit. First we need a product to test via a REST API, and we will choose one that returns JSON. For example, a GET on http://echo.jsontest.com/framework/robot-framework/api/rest should return:

{
   "api": "rest",
   "framework": "robot-framework"
}

Here is how we checked the response of that endpoint in the previous article, using the requests library:

*** settings ***
Library  Collections
Library  requests
 
*** test cases ***
simpleRequest
    ${result} =  get  http://echo.jsontest.com/framework/robot-framework/api/rest
    Should Be Equal  ${result.status_code}  ${200}
    ${json} =  Set Variable  ${result.json()}
    ${framework} =  Get From Dictionary  ${json}  framework
    Should Be Equal  ${framework}  robot-framework
    ${api} =  Get From Dictionary  ${json}  api
    Should Be Equal  ${api}  rest

And here is how it can be checked using RESTinstance library:

*** settings ***
Library  REST  http://echo.jsontest.com
 
*** test cases ***
simpleRequest
    GET  /framework/robot-framework/api/rest
    Object  response body
    String  response body api  rest  
    String  response body framework  robot-framework

So when we do a GET, we create a new instance object that can then be used by the other keywords of the library. This object can be written to the console log using the Output keyword. In my case, I get this output:

The current instance as JSON is:
{
    "spec": {},
    "request": {
        "body": null,
        "scheme": "http",
        "sslVerify": true,
        "headers": {
            "Content-Type": "application/json",
            "Accept": "application/json, */*",
            "User-Agent": "RESTinstance/1.0.0b35"
        },
        "allowRedirects": true,
        "timestamp": {
            "utc": "2018-08-15T19:23:04.046254+00:00",
            "local": "2018-08-15T21:23:04.046254+02:00"
        },
        "netloc": "echo.jsontest.com",
        "url": "http://echo.jsontest.com/framework/robot-framework/api/rest",
        "cert": null,
        "timeout": [
            null,
            null
        ],
        "path": "/framework/robot-framework/api/rest",
        "query": {},
        "proxies": {},
        "method": "GET"
    },
    "response": {
        "seconds": 0.173237,
        "status": 200,
        "body": {
            "framework": "robot-framework",
            "api": "rest"
        },
        "headers": {
            "Content-Length": "56",
            "Server": "Google Frontend",
            "X-Cloud-Trace-Context": "8e86c2afa42b5380847247dbdc8a7e24",
            "Date": "Wed, 15 Aug 2018 19:23:03 GMT",
            "Access-Control-Allow-Origin": "*",
            "Content-Type": "application/json; charset=ISO-8859-1"
        }
    },
    "schema": {
        "exampled": true,
        "version": "draft04",
        "request": {
            "body": {
                "type": "null"
            },
            "query": {
                "type": "object"
            }
        },
        "response": {
            "body": {
                "required": [
                    "api",
                    "framework"
                ],
                "type": "object",
                "properties": {
                    "framework": {
                        "enum": [
                            "robot-framework"
                        ],
                        "type": "string",
                        "example": "robot-framework"
                    },
                    "api": {
                        "enum": [
                            "rest"
                        ],
                        "type": "string",
                        "example": "rest"
                    }
                }
            }
        }
    }
}

So when we do String  response body api  rest, we are exploring this object and checking the content of ["response"]["body"]["api"], but as we can see there is much more that can be checked. The syntax was a bit unexpected for me, but I guess one just has to get used to it.

The library is also quite powerful in terms of JSON schema checking (and generation). I invite you to check the README to explore it further.
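
RESTinstance validates responses against JSON schemas for you, but to get a feel for what such a check involves, here is a plain-Python sketch using the requests and jsonschema packages (this is not RESTinstance’s API, just the underlying idea, with a schema mirroring the one generated above):

import requests
from jsonschema import validate

response = requests.get("http://echo.jsontest.com/framework/robot-framework/api/rest")

schema = {
    "type": "object",
    "required": ["api", "framework"],
    "properties": {
        "api": {"type": "string", "enum": ["rest"]},
        "framework": {"type": "string", "enum": ["robot-framework"]},
    },
}

# raises jsonschema.ValidationError if the body does not match the schema
validate(instance=response.json(), schema=schema)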

Should the test automation framework be written in the same language the application is being written or developed in?

I just tweeted an article published by Sauce Labs on “Reducing False Positives in Automated Testing” that takes the form of a Q&A about automation. I mostly agree with the article, but there is one answer that bothers me:

Should the test automation framework be written in the same language the application is being written or developed in?

A: Yes. Creating the automation framework in the same language of application will help the development team in testing specified areas whenever there is any code change or defect fix.[…]

I would replace the “yes” by “it depends”. I also see value in using another language to automate functional tests, especially when you have a dedicated QA/tester team. Here are a couple of reasons:

  • if your company produces different products in different languages, it could be difficult for your QA team to keep up with all those languages for their automation. Using a single language for the tests would ease the QA team’s life and would also allow them to share libraries/tools among tests for different SUTs.
  • hiring skilled QA engineers is not easy. Finding people with a good functional understanding of (and curiosity about) the SUT *and* good technical knowledge can be difficult. Asking them to code in a language like Python or Ruby rather than in C, C#, Java, etc. can make that easier to achieve.
  • using a different language than the SUT is also a good way to force QA (or developers, if they code the tests) to take a new perspective on the SUT. If you test your Java product in Java, you might swim in the “integration tests” waters, and even when you try to black-box test your application, you might end up interacting with the SUT in a gray-box way.

In the end it really depends on your organisation, culture, people, etc., but it is worth having a debate on that point. You might want to take a look at this other article on the same subject: Your automated acceptance tests needn’t be written in the same language as your system being tested.

Re-executing failed test cases and merging outputs with Robot Framework

In a previous post, I discussed solving intermittent issues, a.k.a. building more robust automated tests. A solution I did not mention is the simple “just give it another chance”. When you have big and long suites of automated tests (it is quite classic to have suites numbering in the thousands of tests and lasting hours when doing functional tests), you might get a couple of tests randomly failing for unknown reasons. Why not just launch those failed tests again? If they fail once more, you are hitting a real problem. If they succeed, you probably hit an intermittent problem and you might decide to just ignore it.

Re-executing failed tests (--rerunfailed) appeared in Robot Framework 2.8, and since version 2.8.4 a new option (--merge) was added to rebot to merge outputs from different runs. As explained in the User Guide, those 2 options make a lot of sense when used together:

# first execute all tests
pybot --output original.xml tests 
# then re-execute failing
pybot --rerunfailed original.xml --output rerun.xml tests 
# finally merge results
rebot --merge original.xml rerun.xml

This will produce a single report where the second execution of a failed test replaces the first one. So every test appears once, and for those executed twice, we see both the new and the old status messages:

[Screenshot: merged report showing the new status and the original status of the re-executed test]

Here, I propose to go a little bit further and show how to use --rerunfailed and --merge while:

  • writing output files in an “output” folder instead of the execution one (use of --outputdir). It is quite a common practice to write the output files in a custom folder, but it makes the whole pybot call syntax a bit more complex.
  • giving access to the log files of the first and second executions via links displayed in the report (use of Metadata). Sometimes having the “new status” and “old status” (as in the previous screenshot) is not enough: we want details on what went wrong during the first execution, and the merged report alone does not provide them.

To show this let’s use a simple unstable test:

*** Settings ***
Library  String

*** Test Cases ***
stable_test
    should be true  ${True}

unstable_test
    ${bool} =  random_boolean
    should be true  ${bool}
    
*** Keywords ***
random_boolean
    ${nb_string} =  generate random string  1  [NUMBERS]
    ${nb_int} =  convert to integer  ${nb_string}
    Run keyword and return  evaluate  (${nb_int} % 2) == 0

The unstable_test will fail 50% of the time, and the stable_test will always succeed.

And so, here is the script I propose to launch the suite:

# clean previous output files
rm -f output/output.xml
rm -f output/rerun.xml
rm -f output/first_run_log.html
rm -f output/second_run_log.html
 
echo
echo "#######################################"
echo "# Running portfolio a first time      #"
echo "#######################################"
echo
pybot --outputdir output $@
 
# we stop the script here if all the tests were OK
if [ $? -eq 0 ]; then
	echo "we don't run the tests again as everything was OK on first try"
	exit 0	
fi
# otherwise we go for another round with the failing tests
 
# we keep a copy of the first log file
cp output/log.html  output/first_run_log.html
 
# we launch the tests that failed
echo
echo "#######################################"
echo "# Running again the tests that failed #"
echo "#######################################"
echo
pybot --outputdir output --nostatusrc --rerunfailed output/output.xml --output rerun.xml $@
# Robot Framework generates file rerun.xml
 
# we keep a copy of the second log file
cp output/log.html  output/second_run_log.html
 
# Merging output files
echo
echo "########################"
echo "# Merging output files #"
echo "########################"
echo
rebot --nostatusrc --outputdir output --output output.xml --merge output/output.xml  output/rerun.xml
# Robot Framework generates a new output.xml

And here is an example of execution (a case where the unstable test fails the first time and then succeeds):

$ ./launch_test_and_rerun.sh unstable_suite.robot

#######################################
# Running portfolio a first time      #
#######################################

==========================================================
Unstable Suite
==========================================================
stable_test                                       | PASS |
----------------------------------------------------------
unstable_test                                     | FAIL |
'False' should be true.
----------------------------------------------------------
Unstable Suite                                    | FAIL |
2 critical tests, 1 passed, 1 failed
2 tests total, 1 passed, 1 failed
==========================================================
Output:  path/to/output/output.xml
Log:     path/to/output/log.html
Report:  path/to/output/report.html

#######################################
# Running again the tests that failed #
#######################################

==========================================================
Unstable Suite
==========================================================
unstable_test                                     | PASS |
----------------------------------------------------------
Unstable Suite                                    | PASS |
1 critical test, 1 passed, 0 failed
1 test total, 1 passed, 0 failed
==========================================================
Output:  path/to/output/rerun.xml
Log:     path/to/output/log.html
Report:  path/to/output/report.html

########################
# Merging output files #
########################

Output:  path/to/output/output.xml
Log:     path/to/output/log.html
Report:  path/to/output/report.html

So, the first part is done: we have a script that launches the suite twice if needed and puts all the output files in the “output” folder. Now let’s update the “Settings” section of our test suite to include links to the first and second run logs:

*** Settings ***
Library   String
Metadata  Log of First Run   [first_run_log.html|first_run_log.html]
Metadata  Log of Second Run  [second_run_log.html|second_run_log.html]

If we launch our script again, we will get a report with links to the first and second run logs in the “summary information” section:

[Screenshot: report summary information section with links to the first and second run logs]

The script and the test can be found in a GitHub repository. Feel free to comment on this topic if you have found more tips on those Robot options.

Visual Regression Tests

The first rule of “UI automated regression tests” is “You do not perform any UI automated regression tests”.

The second rule of “UI automated regression tests” is “In some contexts, if you really know what you are doing, then some automated regression tests performed via the UI can be relevant”.

Martin Fowler explained it very well in his Testing Pyramid article: “a common problem is that teams conflate the concepts of end-to-end tests, UI tests, and customer facing tests. These are all orthogonal characteristics. For example a rich javascript UI should have most of its UI behaviour tested with javascript unit tests using something like Jasmine”. So the top of a testing pyramid should be:

  • built on top of a portfolio of unit and service/component/API tests (which should include some tests focused on the UI layer)
  • a (small) set of end-to-end tests performed via the GUI to check that we did not miss anything with our more focused tests.

For those end-to-end tests, the usual suspect on the Open Source scene is Selenium. Driving the browser through our app via the most common paths is a good way to gain some final confidence in the SUT. But one should really understand that what Selenium checks is the presence of some elements on the page and the events associated with some elements: “if I fill this input box and click on that button, then I expect to see a table with this and that string in it”. It does not check the visual aspect of the page. To put it another way, with Selenium we are checking the nervous system of our SUT, but not its skin. Here come the visual regression testing tools.

The ThoughtWorks Radar raised its level from “assess” to “trial” on “visual regression testing tools” last July with this comment: “Growing complexity in web applications has increased the awareness that appearance should be tested in addition to functionality. This has given rise to a variety of visual regression testing tools, including CSS Critic, dpxdt, Huxley, PhantomCSS, and Wraith. Techniques range from straightforward assertions of CSS values to actual screenshot comparison. While this is a field still in active development we believe that testing for visual regressions should be added to Continuous Delivery pipelines.”

So I wanted to give it a try and did a quick survey of the available tools. I wanted to know which projects were still active, what their language/ecosystem was, and which browsers were supported. Here is the list I just built:

My shopping list was:

  • Python, as I am already using Robot Framework in this language
  • support for a browser other than PhantomJS, because my SUT does not render very well under PhantomJS at the moment

So I chose needle which, according to its author, is “still being maintained, but I no longer use it for anything”. I gave it a try, and it works well indeed. I now have a basic smoke test trying to catch visual regressions. More on that soon if I have more feedback to share.
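
For the curious, here is roughly what such a check looks like with needle (a sketch based on its NeedleTestCase/assertScreenshot API; the URL and CSS selector are placeholders, not my actual SUT):

from needle.cases import NeedleTestCase

class TestHomePageHeader(NeedleTestCase):

    def test_header_looks_the_same(self):
        # placeholder URL for the SUT under test
        self.driver.get("http://localhost:8080/")
        # takes a screenshot of the element matching the CSS selector and
        # compares it with the stored baseline image named "home-page-header"
        self.assertScreenshot("#header", "home-page-header")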

How to solve intermittent issues and get more robust automated tests

In the last episode of their podcast, Trish Khoo and Bruce McLeod discussed how to solve intermittent issues in automated test suites. Trish also gave a presentation on this topic at a Selenium Conference. In both media, Trish and Bruce go over different topics:

1) test design
2) logging
3) tolerance
4) have stable systems
5) some tips (prefer “wait for” to “sleep” for example)

I recommend consuming those 2 presentations. Here are some notes on those topics, with some comments on how I approach them with Robot Framework.

On test design, as automated tests are code, all the principles of clean code should be applied to tests: small functions, single responsibility, DRY, no side effects. There is a specific chapter on “clean tests” in “Clean Code” by Uncle Bob, where he discusses readability, use of DSLs, a single assert per test, and the FIRST principles. In Robot Framework, creating keywords is so quick and easy that I tend to create as many as needed until I obtain test cases that are at most 10 lines long, with given-when-then sections clearly identified.

On logging, in Robot Framework, I have a keyword that I launch in the Teardown of every single test and that checks the ${TEST STATUS} variable (filled by the Robot engine). If the test has FAILED, I create a backup folder where I back up a lot of data about the current state of the SUT (log files, audit files, configuration files, database content, etc.).
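
A minimal sketch of that teardown keyword, written as a Python library keyword (the paths are placeholders; my real keyword collects many more artefacts):

import shutil
from pathlib import Path

from robot.libraries.BuiltIn import BuiltIn

def backup_sut_state_if_test_failed(backup_root="failed-test-backups"):
    """Meant to be called from every test Teardown."""
    status = BuiltIn().get_variable_value("${TEST STATUS}")
    test_name = BuiltIn().get_variable_value("${TEST NAME}")
    if status != "FAIL":
        return
    target = Path(backup_root) / test_name
    target.mkdir(parents=True, exist_ok=True)
    # placeholder artefact paths: log files, configuration files, etc.
    for artefact in ("/var/log/sut/sut.log", "/etc/sut/sut.conf"):
        if Path(artefact).exists():
            shutil.copy(artefact, target)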

On tolerance, in my previous company we were doing financial computations, and from version to version the results of some functions could differ very slightly due to changes in algorithms or compilers. So we agreed with the Product Management team on accepted thresholds and created smart comparison keywords taking 3 arguments: expected result, actual result, and threshold. That made the tests way more stable.
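
The core of such a keyword is a few lines of Python; the name and error message below are only illustrative:

def should_be_equal_within_threshold(expected, actual, threshold):
    """Fail when the two values differ by more than the agreed threshold."""
    difference = abs(float(expected) - float(actual))
    if difference > float(threshold):
        raise AssertionError(
            "%s and %s differ by %s, which exceeds the threshold %s"
            % (expected, actual, difference, threshold))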

On tips, the “wait for” rather than “sleep” advice is one I use all the time: first because most fixed-time sleeps should be prohibited in tests for performance reasons (see my post on the performance of tests), and then because a sleep might be too long or too short on some systems. Another keyword/pattern I use in Robot Framework is “Wait Until Keyword Succeeds”, which is “wait for” applied to any keyword. When there are timing issues in the tests, this keyword can be very handy.
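
The underlying pattern is language-agnostic; here is a small Python sketch of a generic “wait for” helper (the condition and timeout values are just examples), which is essentially what Wait Until Keyword Succeeds does for you in Robot:

import os
import time

def wait_until(condition, timeout=30, interval=1):
    """Poll `condition` until it returns a truthy value or the timeout expires."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(interval)
    raise AssertionError("condition still not met after %s seconds" % timeout)

# example usage: wait for a (hypothetical) log file to appear instead of sleeping
# wait_until(lambda: os.path.exists("/var/log/sut/sut.log"), timeout=60)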

Finally, setup and teardown are a keystone of stable tests. They should be extracted from the tests and should leave the system in a state where any other test can be run afterward. This does not mean leaving the system in a state identical to the one it was in when the test started (that could be very time consuming, depending on the SUT and the actions performed by the tests), but we should be in a “known state” from which we know the following tests will be able to run OK.

All that being said, I have a couple of unstable tests, so I’d better apply all those rules quickly…

Robot Framework Test Automation – The Book

Packt Publishing recently released a new book, “Robot Framework Test Automation” by Sumit Bisht. The book is a quick and easy 100-page read that can be useful to those who find the User Guide a bit too dry.

The first chapter helps go through the installation steps but fails to give a clear picture of the Robot Framework ecosystem: a diagram shows a “testing tool” component as the one that actually targets the SUT, while Robot Framework sits more on the user side as a coordinator. This is just one way to use Robot (a way that is later described in more detail in the sections about Robot+Selenium and Robot+Sikuli), but Robot can also be used to test the SUT directly through its native libraries or some custom libraries.

The second chapter is quite clear about the different file formats and the organization of the test portfolio. The third chapter treats the actual creation of test cases, with some material about syntax and libraries, but it lacks examples going from the easiest to more complex cases. Chapter four discusses Robot’s association with other testing tools (as mentioned before), and finally the last chapter helps with generating standard and custom reports.

A topic that is not covered is who the “ideal” Robot Framework user is. Chapter three implies that “developers and stakeholders” will collaborate in writing the tests. My experience is rather that a tool like Robot Framework is a very good fit for a QA/testers team that is distinct from developers and stakeholders. For such a team, it makes sense to have a specific tool/DSL for testing rather than coding the functional/acceptance tests in the language of the product.

Globally, though the book is a good read, it fails to be a real “missing manual” compared to what the User Guide already offers. The book would maybe have benefited from taking an alternate approach with many more examples, like a cookbook, and more experience from the field. Anyway, thanks to Sumit and Packt for the effort of sharing some knowledge about Robot Framework!

Automation of Functional Tests with Robot Framework at SoftShake 2013

Last October 24th-25th, I went to the SoftShake conference in Geneva. The conference was very interesting, and I was happy to see a lot of source code and live coding/demos. I was given the chance to make a presentation about the automation of functional tests with Robot Framework. Here follows some feedback on it.

The slides can be found below, though they miss what I consider to be the most interesting part: live coding. To show practical examples of code, I wrote some tests with Jenkins as the SUT. Those are very simple tests, but they helped people get a better idea of what kind of tests can be written with Robot. Those little tests can be found on GitHub. I used fswatch to have the tests executed every time I saved them, so with a split screen code+robot it looked like an infinitest setup.

The questions I got after the presentation are worth noting, as they give some information about what people care and think about when they discuss test automation:

1) Is it possible to have variables in the middle of a Robot Framework keyword? The question arose because in the Robot code I showed and wrote, I always write:

keyword    argument1    argument2
e.g. file should exist in folder    file name    directory name

whereas in fact I should use the facility offered by Robot to embed arguments inside keyword names, as described in the documentation. This helps produce tests that are easier to read. Here is a simple example:

*** test cases ***
Embedding arguments into keyword name
    number 2 and 2 should be equal

*** Keywords ***
number ${x} and ${y} should be equal
    should be equal    ${x}    ${y}

2) “With a tool like Robot Framework, are QA supposed to commit the tests in the SCM?”. The answer is yes, as the tool makes the most sense when integrated into a continuous integration environment that retrieves the tests from an SCM. But the question is worth asking because, in my previous company, having QA use the SCM was not an obvious step to take. With some non-technical, rather-functional QA engineers, we might have a group that is more at ease with Office documents and filesystems than with developer tools. Personally, I consider using the same tools also a very good opportunity to bring the QA and Dev teams closer. (Further reading on this topic: “Technical artifacts including test automation and manual regression test scripts (if any) belong in the Source Control System versioned with the associated code” by Elisabeth Hendrickson.)

3) Is it possible to write keywords in Java rather than Python? The answer is yes, although I consider Python to be a good choice and more homogeneous with the syntax used in the Robot DSL.

4) Finally, there were several questions about what “component tests” are and what entry points could be used in the SUT to write such tests. My definition of component tests is rather loose: everything that is between unit tests and end-to-end tests! That is enough of an explanation for developer-type people, who can envision a REST API for example, but it is more complicated to grasp for non-technical QA or managers. I was, once more, surprised to see that for many people there were only 2 obvious ways to test a product: unit tests written by developers (white-box testing) and GUI-driven tests written by QA (black-box testing)… Some more evangelism to do for gray-box testing I guess!

So this was a good opportunity to share my experience of Robot, and being challenged with unexpected questions is a very good way to progress. I might be back with other sessions in the future…

 

Improve the performance of your automated tests

If you google “performance of automated tests”, you will get loads of articles about “automated performance tests” but very few about how to speed up the execution of your test portfolio, although this is one of the goals of a test suite: it should give quick feedback. Of course, we don’t expect the same performance from unit tests (measured in seconds or minutes) as from larger system tests (probably measured in dozens of minutes to several hours), but at any layer of the test pyramid, having faster tests is an advantage.

Here is a collection of ways to enhance the performance of the automated tests:

1) Execute Tests in Parallel
If the tests are independent (as they should be), you could split your test portfolio into pieces and run the different parts in parallel. You can start on the machine on which your tests are already running, as it probably has a multi-core processor that can run different “testing threads” in parallel. In that case, be careful to customize each install/setup so that the different instances of your product don’t step on each other’s feet. You can also distribute the execution to several machines in your lab. Here are a couple of articles related to this topic:
– Pivotal Labs on how to parallelize RSpec tests
– Java.net blog post on how to parallelize JUnit tests
– How to distribute Selenium tests on multiple machine with Selenium Grid

2) Avoid Sleep
On the higher levels of the testing pyramid, we are testing user scenarios, and our scripts might need some pauses in order to run successfully. For example, at the filesystem level, we could have to wait for a log file to be created before checking its content. At the UI level, we might need to wait for a button to be present before clicking it. An easy way to cope with this is to add sleep() calls all over the test scripts until they pass. That, of course, should be avoided as much as possible. The first reason is that it makes the tests brittle: when we run the test on a slower machine, it might fail. Another reason is that piling up those sleep() calls will make the tests very slow. So we should instead use some kind of wait() that regularly checks for an object/event to be there. Here is how to do it in Selenium and Robot Framework:
– Presentation of WebDriverWait by Mozilla
– Wait until keyword succeeds in Robot Framework
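
As a concrete example of the Selenium side, here is a short sketch using WebDriverWait with an explicit expected condition instead of a sleep (the URL and element id are placeholders):

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait

driver = webdriver.Firefox()
driver.get("http://localhost:8080/")  # placeholder URL

# poll for up to 10 seconds until the button is clickable, instead of a fixed sleep
button = WebDriverWait(driver, 10).until(
    EC.element_to_be_clickable((By.ID, "submit-button")))
button.click()
driver.quit()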

3) Share Setup and Teardown
“Every test case should be independent” does not mean that every test case should handle a full setup and a full teardown. A good pattern is to share the setup and teardown at different levels to enhance the performance of the portfolio. For example, we could have a global setup that deploys a MySQL database that some tests will use later on. We could have another shared setup, for a group of a dozen test cases, that adds some lines to a table of that database. Finally, each test case finishes its own setup by tweaking the database again before doing the test itself. The trick is to share a setup only among a group of test cases that won’t modify what that setup configured! This is very convenient and easy to do with Robot Framework and with JUnit.
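
To make the layering concrete, here is a small sketch of the same idea using pytest fixture scopes (an in-memory SQLite database stands in for the MySQL example above):

import sqlite3

import pytest

@pytest.fixture(scope="session")
def db():
    # global setup: one database for the whole test run
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (name TEXT, balance INTEGER)")
    yield conn
    conn.close()  # global teardown

@pytest.fixture(scope="module")
def seeded_db(db):
    # shared setup for a group of tests: seed common rows once per module
    db.execute("INSERT INTO accounts VALUES ('alice', 100), ('bob', 50)")
    db.commit()
    return db

def test_alice_balance(seeded_db):
    # the test itself only does its own check (and could tweak the data further)
    balance = seeded_db.execute(
        "SELECT balance FROM accounts WHERE name = 'alice'").fetchone()[0]
    assert balance == 100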

4) Focus your Optimization Effort
Another way to look at the test performance issue could be to start by identifying the tests or the functions/methods/keywords that are the most time-consuming over your whole portfolio. Focusing your effort on those parts could lead to quick wins. Here are two examples:
– My humble code to measure the most expensive keywords in a Robot Framework test suite
– A smart XSL on Stack Overflow to identify the longest-running unit tests in JUnit
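
The Robot Framework side boils down to walking output.xml with the result API; here is a minimal sketch of the idea (not my actual script, and note that a parent keyword’s elapsed time includes that of its children):

from collections import defaultdict

from robot.api import ExecutionResult, ResultVisitor

class KeywordTimes(ResultVisitor):
    """Accumulate the total elapsed time (in milliseconds) per keyword name."""

    def __init__(self):
        self.times = defaultdict(int)

    def start_keyword(self, keyword):
        self.times[keyword.name] += keyword.elapsedtime

visitor = KeywordTimes()
ExecutionResult("output.xml").visit(visitor)

# print the 10 most expensive keywords
for name, ms in sorted(visitor.times.items(), key=lambda kv: kv[1], reverse=True)[:10]:
    print("%8d ms  %s" % (ms, name))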

Hope this might help some,
and don’t hesitate to post a comment with other ideas/links!

EDIT: found some slides from David Gageot on the very same topic: http://fr.slideshare.net/dgageot/lets-make-this-test-suite-run-faster-softshake-2010

Robot Framework, REST and JSON

The software I am testing with Robot Framework offers a REST API as its main entry point, so the question arose of which library to use to write my Robot tests.

First one I tried was the robotframework-restlibrary.

  • Pro: it is test-oriented and works well with Robot
  • Con: some features are missing (inspecting returned headers for example) and there is no community/activity around it (no commit since 2009)

I used it for a couple of weeks for my first tests but once I reached the limitations of the library, I quickly looked for another one.

I bumped into 2 other libraries tagged as “REST” and “Robot Framework”

but none of them seemed to be a good fit for me (not much documentation, not the right level of keywords, not very active…). However, the second one was built on the requests library, which seemed worth looking at.

The Requests Python library is not Robot- or test-oriented, but its feature support is very wide, its user guide is very detailed, and the community looks very active.

Here is a simple example of how to use it with Robot Framework:
(we use jsontest.com, which sends back a JSON object when we do a GET)

*** settings ***
Library  Collections
Library  requests
*** test cases ***
simpleRequest
    ${result} =  get  http://echo.jsontest.com/framework/robot-framework/api/rest
    Should Be Equal  ${result.status_code}  ${200}
    ${json} =  Set Variable  ${result.json()}
    ${framework} =  Get From Dictionary  ${json}  framework
    Should Be Equal  ${framework}  robot-framework
    ${api} =  Get From Dictionary  ${json}  api
    Should Be Equal  ${api}  rest

The trick then is to create appropriate keywords for often-used commands.
For example, to check the value of a property in a one-level JSON object, we could have this keyword:

json_property_should_equal    
    [Arguments]  ${json}  ${property}  ${value_expected}
    ${value_found} =    Get From Dictionary  ${json}  ${property}
    ${error_message} =  Catenate  SEPARATOR=  Expected value for property "  ${property}  " was "  ${value_expected}  " but found "  ${value_found}  "
    Should Be Equal As Strings  ${value_found}  ${value_expected}  ${error_message}    values=false

I am still looking for a good json lib to create/read/update my json objects.
In the meantime I built a collection of keywords like the one mentioned here.

Hope this helps some people facing the same kind of starting point (robot + rest + json).
Ping me if you want some more details.