“Tools and machines are great, but in the end, it is the people who matter. Putting together an effective automated test team requires lots of planning and skills.” This might be the emblematic quote of this book about software test automation. The book collects case studies, each written by a different author, some of them already known to Methods & Tools readers, such as Lisa Crispin, Elfriede Dustin or Jonathan Kohl. Each case study is presented in the preface, so you are able to pick the story you prefer, based on criteria like application domain, tool type… or whether the project was successful or not.
I think this is the unique selling proposition of this book: each chapter is the story of a personal journey through a test automation effort, with its good and bad days. One of my favorite chapters describes the effort to automate testing at a vendor of software testing tools. On top of this, the two editors have done a very good job of presenting all the stories, highlighting the lessons they teach and putting them in perspective within the test automation domain. This book is highly recommended to every software tester and to all developers who want to produce higher-quality code, or who are wise enough not to consider every tester a frustrated wannabe programmer unable to recognize the beauty and efficiency of their beautifully crafted code.
Reference: “Experiences of Test Automation: Case Studies of Software Test Automation”, Dorothy Graham and Mark Fewster (editors), Addison-Wesley, 605 pages, ISBN 978-0321754066
Automation code, including scripts, is software and, like all software, can have bugs and therefore must be tested. Reviews, and even inspections, of automation testware can help find problems and bugs in the testware itself and help ensure that the automated tests are fit for purpose. They can also help spread knowledge among the team about the details of the testware. You don’t need to review everything: choosing a representative sample of test scenarios to review with stakeholders is an effective way to see whether you are on the right track.
Tests with randomly generated input data sometimes revealed serious defects but could not help in the debugging process because the test was unable to reproduce the data that caused the failure. Those tests were often discussed, generally from an economic standpoint: they usually required extra resources for analysis, often failed to pinpoint the cause of the bug, and too seldom led to the root cause of a defect.
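The reproducibility problem described here is commonly addressed by seeding the random generator and reporting the seed on failure, so the exact failing input can be regenerated on demand. A minimal sketch of that idea (the property and all names are hypothetical, not taken from the book):

```python
import random

def run_randomized_test(seed=None):
    """Run a test with random input data; report the seed so any
    failure can be reproduced by passing the same seed back in."""
    if seed is None:
        seed = random.randrange(2**32)
    rng = random.Random(seed)  # dedicated, seeded generator
    data = [rng.randint(-1000, 1000) for _ in range(100)]

    # Hypothetical property under test: sorting is idempotent.
    once = sorted(data)
    twice = sorted(once)
    if once != twice:
        # Including the seed makes the failing input recoverable.
        raise AssertionError(f"property failed; rerun with seed={seed}")
    return seed

# A failure report containing the seed lets you replay the exact input:
seed = run_randomized_test()
run_randomized_test(seed)  # deterministic rerun with the same data
```

With this pattern, a randomized test that fails in a nightly run can be replayed exactly during debugging, removing the main economic objection the quote raises.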
In general, industry experience indicates that having manual testers also do automation work has been unsuccessful, largely due to trying to get nontechnical testers to do engineering-level work. In this model, we built our tools and framework in such a way that the less technical aspects of automation script development and execution can be done with technical support.
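One common way to realize this split, where engineers build the framework and less technical testers author the tests, is a keyword-driven design: engineers implement named keywords, and testers compose test cases as simple tables of keywords and arguments. A minimal sketch under that assumption (all keyword and function names here are illustrative, not from the book):

```python
# Keyword implementations, written and maintained by engineers.
def open_app(state, name):
    state["app"] = name

def enter_text(state, field, value):
    state[field] = value

def verify_text(state, field, expected):
    assert state.get(field) == expected, (
        f"{field!r}: got {state.get(field)!r}, expected {expected!r}")

KEYWORDS = {"open": open_app, "enter": enter_text, "verify": verify_text}

def run_test(rows):
    """Execute a tester-authored table of (keyword, *args) rows."""
    state = {}
    for keyword, *args in rows:
        KEYWORDS[keyword](state, *args)
    return state

# A tester writes only this table; no programming knowledge required:
result = run_test([
    ("open", "invoice-app"),
    ("enter", "amount", "100.00"),
    ("verify", "amount", "100.00"),
])
```

The design choice is the point: the engineering-level work (keyword implementation, error reporting) stays with the technical staff, while test authoring and execution remain accessible to domain experts.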
Looking back, I can now say that the more specialized an automated test person becomes, the more necessary it is to get a person with business knowledge on board. When I started to automate scripts, I was a manual tester with lots of business knowledge. But now, I have hardly anything to do with the business side. There is a risk that I automate a test but completely miss the meaning of the original test: what it was supposed to test or show.