It’s been a long time, I shouldn’t have left you… without some strong code to step to. Step to, step too… step to… (Cliff re-quoting Timbaland re-quoting…)
I’ve been attending a Scrum certification course led by none other than James Coplien. This guy is fantastic and I’ve already learned a great deal about the practice in the one day I’ve spent with him. Now here’s the controversial topic. Jim is totally against TDD. If you know me, I am from the polar opposite camp. I had a brief discussion with Jim yesterday which I plan to continue today if he allows the time. In the interim I took the liberty of reviewing a standard email he sends to people like me who are unaware or unsure of his strong opposing position. The first link features Jim in a debate with a person I admire, Uncle Bob Martin from Object Mentor. This debate felt almost identical to my discussion with Jim yesterday, and while both leave me unconvinced that TDD is harmful, I remain open-minded. I am so open-minded to Jim’s position that I wanted to rush a quick post this morning to explain where I could potentially agree that TDD does more harm than good.
Testing at the wrong level
One of Jim’s primary arguments is that developers practice TDD at the wrong level: the class level, which is not responsible for external features and contracts. This leads to code bloat from the tests and gives you the wrong architecture from the onset. I wholeheartedly agree with Jim on this point, and it is something I still find myself doing from time to time. The practice requires a certain discipline and experience that you get only from making this and other similar missteps. In all, your design should come from your specifications or contracts, and these should precede your implementation. I believe (without asking him) that Jim would agree with me here.
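To make the distinction concrete, here is a minimal sketch (all names hypothetical, not from any real codebase) of what testing at the contract level looks like: the internal class is left alone, and the tests exercise only the published function that callers actually depend on.

```python
class _PriceTable:
    """Internal implementation detail -- deliberately NOT tested directly.

    If tests pinned this class down, refactoring it away later would
    break them even though the external behavior never changed.
    """

    def __init__(self):
        self._rates = {"standard": 1.0, "premium": 1.5}

    def rate(self, tier):
        return self._rates[tier]


def quote(tier, hours):
    """The external contract: price a job by tier and hours."""
    return _PriceTable().rate(tier) * hours


# Contract-level tests state only the published behavior:
assert quote("standard", 10) == 10.0
assert quote("premium", 4) == 6.0
```

Because the tests talk only to `quote`, the `_PriceTable` class can be restructured, inlined, or replaced without touching a single test, which is exactly the freedom class-level tests take away.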
Poorly factored unit tests
One of the primary reasons TDD fails is that many miss the last R in the RGR cycle. RGR stands for Red, Green, Refactor. That means you write a failing test (reports red), make it pass (reports green), then you refactor both your system under test and your test code. I’ve made the mistake of not properly refactoring my test code in a hurry to move on to the next thing. Your test code should read like a contract or a usage guide for how to interact with your code. In practice the test code tends to grow rather quickly, often outpacing a developer’s ability to properly maintain it, which leads to the code bloat mentioned above. This step takes discipline and can be easy to neglect even for experienced developers.
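Here is a hedged sketch of what that final Refactor step looks like when applied to the test code itself (the account/deposit names are made up for illustration). Duplicated setup is pulled into a small builder helper so each test reads like a line in a usage guide rather than a pile of boilerplate.

```python
def make_account(balance=0):
    # Test-support builder, extracted during the Refactor step so every
    # test no longer repeats the same setup noise.
    return {"balance": balance}


def deposit(account, amount):
    # The (trivial) system under test.
    account["balance"] += amount
    return account


# After refactoring, the tests read like a contract for deposit():
def test_deposit_increases_balance():
    account = make_account(balance=100)
    deposit(account, 50)
    assert account["balance"] == 150


def test_deposit_on_empty_account():
    account = make_account()
    deposit(account, 25)
    assert account["balance"] == 25


test_deposit_increases_balance()
test_deposit_on_empty_account()
```

Skipping this step is how a test suite quietly becomes the bloated, unmaintainable mass that critics of TDD point at.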
Writing too much test or too much code
Following Uncle Bob, the TDD cycle is a tight, minimalistic cycle where you write only enough test to state or explore the current part of the specification you’re working on, and only enough code to satisfy that test. Problems arise from writing too much test code without iterating over implementation code, which is the inverse of YAGNI: YAGII! (You Ain’t Gonna Implement It.) You also get into trouble from writing too much implementation code without a requirement or specification to justify it. The power comes from the iterative approach to explorative development. As you iterate you uncover pieces of the spec that are not complete, which might require discussion with your QA or business analyst, and that leads to an important distinction I’d like to mention. Many people make a distinction between bugs and features. I see them as one and the same. A bug is simply a non-feature, or a hole in your spec. It represents some edge case or usage scenario that has not been explored, which leads to errant, unexpected, or undefined behavior. You iterate on these the same as you do your features: by amending your spec and filling in the holes in both the spec and your test cases, which should mirror your spec.
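A quick sketch of that “just enough” iteration, using a made-up FizzBuzz-style spec: each assertion states one slice of the specification, and the implementation does no more than the current tests demand.

```python
def classify(n):
    # Only the behavior the tests below demand is implemented; any
    # further behavior waits for the spec (i.e., a new test) to ask for it.
    if n % 3 == 0:
        return "fizz"
    return str(n)


# Iteration 1: the plain-number case drove the str(n) branch.
assert classify(1) == "1"
# Iteration 2: the multiples-of-3 case drove the "fizz" branch.
assert classify(3) == "fizz"

# A bug report -- say, "classify(5) should return 'buzz'" -- is just a
# hole in this spec: it becomes the next failing test, and the cycle
# repeats, rather than being a separate category of work.
```

Writing the `"buzz"` branch now, before any test demands it, would be exactly the YAGII trap in reverse: implementation code with no specification to justify it.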
I have to cut this short now since class is starting. It’s an interesting topic and I’d love to hear more of what Jim and others have to say about it.