Last week I attended the Agile Grenoble conference. The whole day was of very good quality: I was lucky with my choice of sessions and shared my day between good discussions and great talks.
My menu was:
– Kanban by Romain Couturier
– Testing and refactoring legacy code by Sandro Mancuso
– Functional Programming by Neal Ford
– TDD by Michael Borde
– Angular + Jersey by Laurent Leseigneur and Olivier Cuenot
All of them were very interesting, but I was really impressed by Sandro’s performance.
His slides can be seen here (with video at the very last slide):
That is really the kind of session I am looking for:
1) just a couple of main messages (“testing from the shortest branch, refactoring from the deepest one”) delivered in five minutes
2) live coding with some best practices, tips and personal opinions
It was also very motivating to become more fluent with an IDE, as part of the performance was the advanced usage of Eclipse, which for a Java developer is a real bonus.
Before the afternoon keynote, the organizers found a great way to read all 12 principles of the Agile manifesto to the full audience of the conference. Strangely enough, I think it was the first time in 5 editions that I heard the 12 principles read in full during the conference. Every person in the room had received a cup with 3 sentences from the manifesto. When our sentences were read, we sat down. After the 12 principles, a group of 10 people remained standing, and they had to go on stage to read the sentence on their cup. Those were fake sentences like “company result is the primary measure of progress”. Very clever game!
This year I was a full visitor: I was neither organizing nor presenting. So thanks to all the contributors to this great day, and I hope to help out one way or another next year!
A couple of conferences are coming up this fall.
The first one is SoftShake in Geneva, where I will be presenting a session on Test Automation with Robot Framework. The program is very appealing and I expect to come back with some takeaways. I am looking forward to the Cucumber session and also to meeting some French people I only have a chance to see at conferences.
The second one is Agile Grenoble, where the main problem will be to choose sessions among the 9 tracks and the many interesting speakers lining up! But this will be more relaxed for me as I won’t have my own session.
Hope to see you in one of these events!
If you google “performance of automated tests”, you will get loads of articles about “automated performance tests” but very few about how to speed up the execution of your test portfolio. Yet quick feedback is one of the goals of a test suite. Of course we don’t expect the same performance from unit tests (measured in seconds or minutes) as from larger system tests (measured in dozens of minutes, or probably even hours). But at any layer of the test pyramid, having faster tests is an advantage.
Here is a collection of ways to enhance the performance of the automated tests:
1) Execute Tests in Parallel
If the tests are independent (as they should be), you can split your test portfolio into pieces and run the different parts in parallel. You can start on the machine on which your tests already run, as it probably has a multi-core processor that can run several “testing threads” in parallel. In that case, be careful to customize each install/set-up so that the different instances of your product don’t step on each other’s toes. You can also distribute the execution to several machines in your lab. Here are a couple of articles related to this topic:
– Pivotal Labs on how to parallelize RSpec tests
– Java.net blog post on how to parallelize JUnit tests
– How to distribute Selenium tests on multiple machines with Selenium Grid
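As a minimal sketch of the idea in Python (the test functions and names below are hypothetical, not taken from the articles above): independent tests are dispatched to a thread pool instead of running one after the other. A real suite would rather hand whole test modules to separate processes or machines.

```python
import concurrent.futures
import time

# Hypothetical independent tests; each simulates 0.2s of work.
def test_login():
    time.sleep(0.2)
    return ("test_login", True)

def test_search():
    time.sleep(0.2)
    return ("test_search", True)

def test_checkout():
    time.sleep(0.2)
    return ("test_checkout", True)

def run_in_parallel(tests, workers=3):
    """Run independent test functions concurrently and collect their results."""
    with concurrent.futures.ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(lambda t: t(), tests))
```

Run serially, the three tests would take about 0.6s; with three workers the wall-clock time drops to roughly that of the slowest test.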
2) Avoid Sleep
On the higher levels of the testing pyramid, we are testing user scenarios, and our scripts might need some pauses in order to run successfully. For example, at the filesystem level, we may have to wait for a log file to be created before checking its content. At the UI level, we might need to wait for a button to appear before clicking it. An easy way to cope with this is to add sleep() calls all over the test scripts until they pass. That, of course, should be avoided as much as possible. The first reason is that it makes the tests brittle: run the test on a slower machine and it might fail. Another reason is that piling up those sleep() calls makes the tests very slow. Instead, we should use some kind of wait() that regularly checks for an object/event to be there. Here is how to do it in Selenium and Robot Framework:
– Presentation of WebDriverWait by Mozilla
– Wait until keyword succeeds in Robot Framework
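For suites without such a built-in, the pattern behind those two links can be sketched in a few lines of Python (a hypothetical helper, not part of Selenium or Robot Framework): poll a condition at a short interval instead of sleeping for a fixed, worst-case duration.

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.1):
    """Poll `condition` until it returns a truthy value or `timeout` expires.

    Unlike a blind sleep(), this returns as soon as the condition holds
    on a fast machine, yet keeps trying up to `timeout` on a slow one.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)
```

Typical use for the log-file example above: `wait_until(lambda: os.path.exists("app.log"), timeout=30)`.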
3) Share Setup and Teardown
“Every test case should be independent” does not mean that every test case should handle a full setup and a full teardown. A good pattern is to share the setup and teardown at different levels to enhance the performance of the portfolio. For example, we could have a global setup that deploys a MySQL database that some tests will use later on. We could have another shared setup for a group of a dozen test cases that adds some lines to a table of that database. Finally, each test case finishes its own setup by tweaking the database again before running the test itself. The trick is to share the setup among a group of test cases that won’t modify what the setup configured! This is very convenient and easy to do with Robot Framework and with JUnit.
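The same layering exists in Python’s unittest, which maps directly onto the MySQL example above (the dictionary standing in for the database is of course a simplification; a real suite would deploy the database in the module-level setup):

```python
import unittest

# Stand-in for the shared database; a real suite would deploy MySQL here.
DB = {}

def setUpModule():
    # Global setup, run once for the whole module: "deploy" the database.
    DB["users"] = []

class UserTests(unittest.TestCase):
    @classmethod
    def setUpClass(cls):
        # Shared setup for this group of tests: seed common rows once.
        DB["users"].extend(["alice", "bob"])

    def setUp(self):
        # Per-test tweak: each test adds its own extra row.
        DB["users"].append("temp_user")

    def tearDown(self):
        # Per-test cleanup: remove only what this test added,
        # leaving the shared rows untouched.
        DB["users"].remove("temp_user")

    def test_shared_rows_present(self):
        self.assertIn("alice", DB["users"])

    def test_own_row_present(self):
        self.assertIn("temp_user", DB["users"])
```

Running this with `python -m unittest` executes setUpModule once, setUpClass once per class, and setUp/tearDown around every test, so the expensive work is paid only once.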
4) Focus your Optimization Effort
Another way to look at the test performance issue is to start by identifying the tests or the functions/methods/keywords that are the most time-consuming over your whole portfolio. Focusing your effort on those parts could lead to quick wins. Here are two examples:
– My humble code to measure most expensive keywords in a Robot Framework test suite
– A smart XSL on Stack Overflow to identify the longest-running unit tests in JUnit
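When the framework offers nothing ready-made, a simple timing decorator can collect the same data. Here is a small Python sketch (the `timed`/`slowest` helpers are my own illustration, not part of Robot Framework or JUnit):

```python
import functools
import time
from collections import defaultdict

# Accumulated wall-clock time per instrumented function.
timings = defaultdict(float)

def timed(func):
    """Record the total time spent in `func` across all of its calls."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            timings[func.__name__] += time.perf_counter() - start
    return wrapper

def slowest(n=5):
    """Return the n most expensive instrumented functions, slowest first."""
    return sorted(timings.items(), key=lambda kv: kv[1], reverse=True)[:n]
```

Decorate the suspect keywords or helpers with `@timed`, run the suite once, and `slowest()` points you at the spots worth optimizing first.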
Hope this might help some,
and don’t hesitate to post a comment with other ideas/links!
EDIT: found some slides from David Gageot on the very same topic: http://fr.slideshare.net/dgageot/lets-make-this-test-suite-run-faster-softshake-2010
I gave a little presentation of Robot Framework at Human Talk Grenoble. It is quite difficult to sell that kind of tool to an audience of web developers with little or no experience of testing. It is easier when talking to bigger teams working on more traditional (old) software. I have to understand more about the possibilities and constraints of web development to see how Robot could be used for the business/functional logic of web projects.
During a second track (see first one here), Philippe Kruchten discussed the coexistence of Agility and Architecture.
He did not publish his slides but one can find a special issue of IEEE software on “Agility and Architecture” that is worth reading and close to what was discussed in Grenoble.
To sum it up, Kruchten promotes the integration of the architecture topic into Agile projects. He does it quite smoothly, stressing that the first thing to do is to study the context of the product/project to decide whether architecture is a major topic.
« How much architectural activity will a project need? It usually fluctuates widely depending on the project’s context. By context we mean the project’s environment—the organization, the domain, and so on—as well as specific factors such as project size, stable architecture, business model, team distribution, rate of change, the system’s age, criticality, and governance. Other influences can include the market situation, the company’s power and politics, the product’s expected life span, product type, organizational culture, and history »
The whole article and presentation are well balanced and non-dogmatic, which is quite different from what can be read from the promoters of XP and Scrum.
To get a better feeling of Kruchten’s position in the Agile sphere, I would advise reading The Elephants in the Agile Room.