Software testing: the gap between academic and business circles

The last time I gave a presentation about my work as a software engineer for a software vendor, I was asked whether I was aware of a team of researchers at the local university studying… software testing. The answer won’t surprise the French reader: of course I had never heard of them.

Back when I was a computer science student at a French university, I felt we did not deal enough with the business world. I was happy enough with the very academic way we were taught computer science, but I felt a bit awkward about the research topics the researchers covered (articles answering articles that cite yet other articles… but not much about what any of it is used for). For example, we spent hours on the “theoretical network” (the seven layers of the OSI model) and only briefly mentioned what was actually used in company networks. Or we discussed database theory, but the first time I really used a database was years later (Oracle!). Maybe that was OK after all. Still, I did not follow the PhD path and went into the software business world.

Now, 15 years later (yes), I see this from the other side, and I feel the same disappointment. To be very factual: in the software testing world there are, of course, tons of books, conferences, mentors, tools and methods coming from the trenches. And in a parallel world, there is a whole research field.

My concern is: how come none of the software testers I know have ever read those papers or attended such a conference? And why have I never met a single researcher at any of the Agile Testing conferences I went to?

I guess that in some fields academia and business meet more often. For example, the complex static analysis performed on software with demanding quality requirements draws a lot from academic research (e.g. Polyspace, a French startup acquired by MathWorks a couple of years ago).

I wonder what the situation is in other countries where software testing is big (the US, Germany and India, for example).

How Google Tests Software

I just finished “How Google Tests Software”, a book written by some Googlers. It is very much worth reading. I have very little first-hand information about how testing goes at Google, but as far as I could tell it seemed quite an honest account of their practices. Here are some facts and figures I found worth sharing.

First, it seems that a lot of the Google testing culture comes from Patrick Copeland, who joined the company in 2005 (a bit late in its history, as Google started a couple of years before 2000). At that time there were 50 full-time testers, and the testing culture was quite weak (and bad), to say the least. It took Patrick and his teams (including Alberto Savoia and James Whittaker, who both contributed to the book) years to reach their idea of how quality should be managed.

Even if the word “agile” does not appear in the book (it is probably not much used within Google), one can detect a lot of Agile practices at Google. The major ones are the full integration and collaboration between developers and testers, production in the most continuous way possible (up to deployment), and the goal of reaching a pyramid-shaped automation portfolio (many unit tests, few end-to-end tests).
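
To make the pyramid shape concrete, here is a minimal sketch in Python with pytest; this is my own illustration, not an example from the book, and the tiny CheckoutService “application” and the e2e marker are invented for the purpose (the marker would be registered in pytest.ini to avoid warnings):

```python
import pytest

# A tiny "application", invented only to make the example self-contained.
def total_price(quantities, unit_price):
    return sum(quantities) * unit_price

class CheckoutService:
    def __init__(self):
        self.orders = []

    def place_order(self, quantities, unit_price):
        self.orders.append(total_price(quantities, unit_price))
        return self.orders[-1]

# Base of the pyramid: many small, fast unit tests on pure logic.
def test_total_price_of_nothing_is_zero():
    assert total_price([], 10.0) == 0.0

def test_total_price_sums_quantities():
    assert total_price([1, 2], 10.0) == 30.0

# Tip of the pyramid: a few slower tests exercising the whole flow,
# tagged so they can be run (or excluded) separately: pytest -m e2e
@pytest.mark.e2e
def test_order_flow_end_to_end():
    service = CheckoutService()
    assert service.place_order([1, 2], 10.0) == 30.0
```

The point is the ratio: dozens of cheap tests like the first two for every test like the last one.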

One point that is very much stressed, and not so widespread elsewhere, is the fact that they decided testers should be as good at programming as the programmers themselves. Basically, they first stated that programmers (software engineers) were going to test as much as all the other engineering people (everyone is in charge of quality). The second step was to rename the testers/QA “software engineers in test” (SET) and “test engineers” (TE) to put them on a more equal footing with programmers. And finally, those SETs and TEs were in charge of helping the teams test and automate, not just of testing. Given this, it became clear that finding the right people for the job was quite tough. All the parts about the hiring process are interesting.

Another quite counter-intuitive fact is that methods and tools are not very centralized at Google. They are moving in that direction, and the best tools and methods they came up with are quite popular internally (and even externally, as they release some of their tools). But even basic opinions like “how much automation versus exploration” seem to differ from one test manager to another. The book is quite honest and humble about this and does not try to sell THE Google way of testing.

The future of testing as seen by James Whittaker in the last chapter is quite unexpected too! He figures that testing will disappear as a job because it will become so much everyone else’s responsibility (product managers, programmers… customers?) that the job will be diluted across the company.

Other miscellaneous facts:
– Google does not really think in terms of a dev/QA ratio; in their model with SETs and TEs, it does not make much sense
– the whole continuous integration system was very complex to set up. Testing teams pushed hard to get it, and it became, of course, the backbone of their organisation
– they try to build automated test cases that are independent of the order in which they run and that leave the “system” in the same state they found it. This allows all the tests to run in parallel. It is a good practice I have tried myself, and it turns out to be not so easy to build and maintain (see the sketch below)
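
Here is a minimal sketch of that practice in Python with pytest, assuming a file-based “system” I invented for the illustration; tmp_path is pytest’s built-in temporary-directory fixture, so each test starts from a clean state and leaves nothing behind:

```python
import pytest

def append_record(path, line):
    # The "system under test": appends a line to a data file.
    with open(path, "a") as f:
        f.write(line + "\n")

@pytest.fixture
def data_file(tmp_path):
    # Each test gets its own fresh file, so no test depends on state
    # left behind by another one, and execution order stops mattering.
    path = tmp_path / "records.txt"
    path.write_text("")
    yield path
    # Teardown: pytest discards tmp_path, leaving the "system"
    # exactly as the test found it.

def test_append_one_record(data_file):
    append_record(data_file, "first")
    assert data_file.read_text() == "first\n"

def test_append_is_cumulative(data_file):
    append_record(data_file, "a")
    append_record(data_file, "b")
    assert data_file.read_text() == "a\nb\n"
```

Because no state is shared, these tests can run in any order, or in parallel (e.g. with the pytest-xdist plugin); the hard part in real life is that the “state” is rarely just one file.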

Oh… and my favorite paragraph about automation:

“The larger an automation effort is, the harder it is to maintain and the more brittle it becomes as the system evolves. It’s the smaller, more special purpose automation that creates useful infrastructure and that attracts the most software engineers to write tests.
Overinvesting in end-to-end automation often ties you to a product’s specific design and isn’t particularly useful until the entire product is built and in stable form. By then, it’s often too late to make design changes in the product, so whatever you learn in testing at that point is moot. Time that testers could have invested in improving quality was instead spent on maintaining a brittle end-to-end test suite.”

Amen!

Testing at Google

The Google Testing Blog is waking up after a long silence!

They were probably busy handling HR issues, as two key figures of Google testing left the company:

1) James Whittaker, who shared his thoughts about leaving Google

2) Alberto Savoia, who is famous for his “test is dead” talk

The funny thing is that James Whittaker was writing a book called “How Google Tests Software”. The book has just been released, while James is already at Microsoft!

So, the book is available.

And the Google Testing Blog is starting to discuss its testing strategy again.

Next steps for me: read James’s book and share some feedback…

10 Reasons to fix bugs as soon as you find them

Andy Glover and Matt Archer published a nice infographic about “10 Reasons to fix bugs as soon as you find them”. The topic may seem obvious to some, but it is a real issue I have encountered. I’d like to comment on the reasons they mention, drawing on my own experience in QA.

– unfixed bugs camouflage other bugs
=> I agree in theory, but I have no vivid memory of such a case. (I do remember plenty of bugs hiding other bugs, but usually those were considered major and fixed ASAP.)

– unfixed bugs suggest quality isn’t important
=> QA people are aware that bug-free software is impossible and that accepting some bugs is part of the game, but it is quite discouraging when it becomes too frequent. At that point, QA starts to wonder why they are hunting bugs in the first place.

– discussing unfixed bugs is a waste of time (planning, replanning, rediscovering…)
=> So true. When you end up holding repetitive “bug triage meetings”, requested by QA so that their “known and annoying bugs” might finally get fixed, you should realize you are wasting your time and that simply fixing those issues might have been quicker.

– unfixed bugs lead to duplicated effort (the risk of reporting the same bug again and again)
=> So true. It is demotivating for the whole team when the newly hired QA proudly reports a bug and we all say “oh, this bug! That’s an old friend… we’ve known it for 3 years.”

– unfixed bugs lead to unreliable metrics (if you measure “open issues”)
=> I am not too much into metrics. No comment.

– unfixed bugs distract the entire team
=> True. Unfixed bugs become a sort of “common knowledge” that everyone on the team has to master, whereas we should focus on knowing how the software works (not how it fails).

– unfixed bugs hinder short-notice releases
=> I have not met this problem. Usually the “unfixed bugs” were not major enough for me to block a delivery (that’s why they were “accepted” in the product in the first place).

– unfixed bugs lead to inaccurate estimates
=> Agreed, although I have not witnessed this much myself.

– fixing familiar code is easier than unfamiliar code
=> Sure. After a while, a buggy piece of code becomes impossible for the team to fix.

– fixing a bug today costs less than fixing it tomorrow
=> that is a summary/consequence of the previous nine points

In addition, I would like to add that unfixed bugs are a mess for a QA team doing automation. How do you deal with an automated test case that covers a (probably partly) broken feature? Do you run the test and classify it as “KO, but we know it”? Do you skip it (but then the test case might become obsolete very soon)? Whatever choice you make, it leads to a more complex automation framework.
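
For what it’s worth, here is one way to express “KO, but we know it” in Python with pytest; this is my own sketch, not something from the infographic, and the broken_discount function and the BUG-1234 ticket reference are invented:

```python
import pytest

def broken_discount(price):
    # A known, unfixed bug: the 10% discount is applied twice.
    return price * 0.9 * 0.9

# "KO, but we know it": the test keeps running on every build without
# turning the suite red, and strict=True makes it fail loudly the day
# the bug is actually fixed, forcing someone to clean up the marker.
@pytest.mark.xfail(reason="BUG-1234: discount applied twice", strict=True)
def test_ten_percent_discount():
    assert broken_discount(100.0) == 90.0
```

It keeps the suite green, but it also proves the point: every known, unfixed bug adds one more marker to maintain in the automation code.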