This book describes how Google organizes its software testing process and, more interestingly, how and why it created the current organization. It is structured around the main roles specifically involved in software testing at Google: the Software Engineer in Test (SET), the Test Engineer (TE), and the Test Engineering Manager. For each of these roles, there is an explanation of its activities. This material is complemented by case studies and interviews with people who hold these roles at Google.
The book is well written, with many interesting concepts about software testing and how to implement it in an organization. This is balanced with the practical view of software testers and managers at Google who speak about their day-to-day work. I naturally recommend this book to every software tester and software development manager, but more broadly to everybody concerned with quality in software development.
Reference: “How Google Tests Software”, James Whittaker, Jason Arbon, Jeff Carollo, Addison-Wesley, 264 pages, ISBN 978-0-321-80302-3
Although it is true that quality cannot be tested in, it is equally evident that without testing, it is impossible to develop anything of quality. How does one decide if what you built is high quality without testing it? The simple solution to this conundrum is to stop treating development and test as separate disciplines. Testing and development go hand in hand. Code a little and test what you built. Then code some more and test some more. Test isn’t a separate practice; it’s part and parcel of the development process itself. Quality is not equal to test. Quality is achieved by putting development and testing into a blender and mixing them until one is indistinguishable from the other.
Instead of distinguishing between code, integration, and system testing, Google uses the language of small, medium, and large tests (not to be confused with t-shirt sizing language of estimation among the agile community), emphasizing scope over form. Small tests cover small amounts of code and so on. Each of the three engineering roles can execute any of these types of tests and they can be performed as automated or manual tests. Practically speaking, the smaller the test, the more likely it is to be automated.
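To make the smallest category concrete, here is a minimal sketch in Python of what a "small" test looks like in this taxonomy: it exercises a single function in isolation, with no I/O or external dependencies, so it is cheap to automate and run constantly. The function and test names are illustrative, not taken from the book.

```python
def word_count(text: str) -> int:
    """Count whitespace-separated words in a string."""
    return len(text.split())

def test_word_count():
    # A "small" test in Google's terminology: small scope (one
    # function), no network, no filesystem, runs in milliseconds.
    assert word_count("") == 0
    assert word_count("how google tests software") == 4

test_word_count()
```

A "medium" test, by contrast, might exercise several components together (for example, this function plus the file-reading code that feeds it), and a "large" test would drive the whole system end to end.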
Test plans are the first testing artifact created and the first one to die of neglect. At some early point in a project, the test plan represents the actual software as it is intended to be written, but unless that test plan is tended constantly, it soon becomes out of date as new code is added, features veer from their preplanned vision, and designs that looked good on paper are reevaluated as they are implemented and meet feedback from users. Maintaining a test plan through all these planned and unplanned changes is a lot of work and only worthwhile if the test plan is regularly consulted by a large percentage of the project's stakeholders.
I suppose there is some fairytale world where every line of code is preceded by a test, which is preceded by a specification. Maybe that world exists. I don’t know. But in the innovative and fast-paced world I live in, you get what you get. Spec? Great! Thank you very much, I will put it to good use. But being realistic, you have to find a way to work within the system that exists. Demanding a spec won’t get you one. Insisting on unit tests won’t make those unit tests valuable. Nothing a spec writer or a unit test can do (besides finding an obvious regression bug) will help us find a problem that a real user will encounter. This is my world, a tester’s world. You get what you get and you use it to provide value to the product and the team.