If your code’s reusable, why can’t you use it?


I called a meeting with my team today to discuss an issue that’s been at the top of my mind (and the tip of my tongue, and the butt of my jokes, and the heel of my boot) for the last few years. You know that feeling of constipation of the lip? It’s like you have something really important to say but no time/courage/appropriate words/availability/presence-of-mind to articulate it. And then someone gives you milk of magnesia for the mouth. (Or whatever the constipation-relief equivalent is for the tongue.) That’s a bit of the release I’ve had. Here’s the back story. The team has been discussing code reuse lately and taking steps to redeploy major sections of some of our projects. While I agree wholeheartedly that we need to reuse some pieces, I’d been noticing a pattern. I finally decided to do something about it.

I called a meeting. I prepared a PowerPoint. Mind you, I have no expertise at public speaking. I tend to come off as a rambler, not nearly as direct and to the point as I am here online. I also tend to feel extremely nervous going into these meetings where I’m the host. There’s always some last-second gotcha with the AV equipment that I could usually solve in 15 seconds or less but get too nervous to think through. Also, I had my manager and, I think (I have yet to confirm), my director on the call. They both know I’m a nut, overly passionate about agile development, and I don’t want to come across as too egotistical. Suffice it to say I had the “you’re gonna sound like an egotistical idiot” jitters.

I had one thing in mind throughout the meeting, one point that I tried to convey: “If your code’s reusable, why can’t you use it?” I’m sure this struck a nerve or two. But think about it. We’re all on the same side here. We all want the same thing; we just want it through different channels. We want modular, reusable design. It just so happens that we disagree on how we actually use source code. I use source code to feed my compiler until it barfs out a binary blob. You might use source code to literally run your program. (An idea that really only makes sense in the world of interpreted languages.) I like to use binary blobs to build/run an application, while others feel as though the source code is what’s running. I think in terms of features and behaviors, while others are far more comfortable thinking in terms of the literal source statements involved in achieving the same thing.

Today I questioned what everyone thinks of as reuse. In the end I put it out as an assignment and, discreetly, as a challenge. The idea is to take some code, recent or old, that either defines a feature or fixes a bug. The code should be something you’re proud of. The challenge is to run that code in the complete absence of the program that contains it. The exercise is to get the team thinking in terms of true modularity. I drew several analogies to explain my point. I’m not sure how successful I was, because in the end engineers will be engineers and I’m no exception. We dove straight into the details and started debating the specifics of one project vs. another, when and how to merge and branch, and other off-topic stuff. The true point of sharing features was soon dropped on the floor.

Can you reUSE your code?
My question still stands on its own (as should your code). If you think you have a solid architecture prove it by taking a snippet of your code (a single bug fix or feature) and executing it completely outside of your project. Care to take me up on the challenge? Care to comment? Leave the details of your experience in the box below.
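To make the challenge concrete, here’s a minimal sketch of what “passing” might look like. The feature below is invented for illustration (a slug generator; none of these names come from any real project): it depends only on the standard library, so it compiles and runs with zero lines of the application around it.

```java
// A hypothetical feature class -- names are illustrative, not from any real
// project. The point: it depends on nothing but the standard library, so it
// can run in the complete absence of the program that spawned it.
public class SlugFeature {
    // Turns "My Great Post!" into "my-great-post" -- a typical small feature.
    public static String slugify(String title) {
        return title.toLowerCase()
                    .replaceAll("[^a-z0-9]+", "-")  // collapse non-alphanumerics
                    .replaceAll("(^-|-$)", "");     // trim leading/trailing dashes
    }

    // The challenge itself: exercise the feature with no application around it.
    public static void main(String[] args) {
        System.out.println(slugify("If Your Code's Reusable, Why Can't You Use It?"));
    }
}
```

If your bug fix or feature can’t be lifted out this cleanly, whatever it’s tangled up with is a pretty honest measure of how reusable it really is.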

Classic vs Mockist, TDD vs BDD


I’ve been having constant issues with Behavior Driven Design using xUnit tools. For those who don’t know what behavior driven design means, check out Dan North’s introduction. When writing tests you can choose from a number of approaches, and I feel I’ve followed an evolutionary chain through the most popular. Most people start with CWT, or Coding While Testing. This is an approach where the tests are written with the code, not necessarily before, but alongside. People take this “feel good” approach to cover their code with tests and to gain confidence that their code won’t break without them knowing why/where. Eventually the tests become brittle, break all around, and become a burden to maintain. Some people evolve into TDD, or Test Driven Design, where the test is written first in an attempt to achieve better design. When I evolved this far I was still struggling in the “implementation zone”. That’s where most developers get their mail delivered. The implementation zone is where the major concern is how things work. You can’t rest well unless you know exactly how things work beneath a method call. There are no surprises because everything is understood. Doing TDD while living here leads to “white box” tests: tests that mirror the implementation of the system. These tests scream immediately when the system changes in the slightest. It’s no fun.
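Here’s a hand-rolled sketch of that difference. The class and the 10%-off-over-$100 rule are invented purely for illustration:

```java
// Invented example: a calculator that gives 10% off subtotals over $100.
class PriceCalculator {
    // An implementation detail a white-box test latches onto.
    static final double DISCOUNT_RATE = 0.9;

    double total(double subtotal) {
        return subtotal > 100 ? subtotal * DISCOUNT_RATE : subtotal;
    }
}

public class TestStyles {
    static void check(boolean ok, String msg) {
        if (!ok) throw new AssertionError(msg);
    }

    public static void main(String[] args) {
        PriceCalculator calc = new PriceCalculator();

        // White-box style: mirrors the implementation. Restructure the math
        // or rename DISCOUNT_RATE and this test breaks, even though the
        // observable behavior is unchanged.
        check(calc.total(200) == 200 * PriceCalculator.DISCOUNT_RATE, "white box");

        // Behavior style: states only what the feature promises. Any
        // refactoring that keeps the promise keeps the test green.
        check(calc.total(200) == 180.0, "discount applied over $100");
        check(calc.total(50) == 50.0, "no discount under $100");

        System.out.println("both pass today; only one survives a refactor");
    }
}
```

Both tests are green right now; the white-box one is the kind that screams the moment the system changes in the slightest.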

Later I learned about the Behavior Driven approach, which led to my understanding of DSLs, or Domain Specific Languages. Actually, I picked up my understanding of DSLs when I started using JMock, which is still one of my favorite tools. I started to remember why programming languages were invented. “Code is for Humans” became my mantra. I stopped caring about how code worked and started concerning myself with why code was written. With today’s systems rivaling NASA’s in their complexity, it becomes impossible to obsess over every implementation detail. It’s more natural to trust that something works through an automatable specification that says it works. It’s hard to put into English, but there’s a subtle difference between how I look at code and how most of my co-workers look at code. I rely on a system of trust, which helps both in design and in debugging. But I digress.
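A tiny taste of what I mean by “code is for humans.” The names and the shipping rule below are invented; the point is only the fluent shape JMock popularized, where a test reads as a statement of intent rather than a walkthrough of mechanics:

```java
// Invented fluent-DSL sketch: the spec reads as a sentence about *why*,
// rather than a sequence of raw calls explaining *how*.
public class ShippingSpec {
    private final int quantity;

    private ShippingSpec(int quantity) { this.quantity = quantity; }

    static ShippingSpec anOrderOf(int quantity) { return new ShippingSpec(quantity); }

    ShippingSpec shipsWithinDays(int days) {
        // Invented business rule: bulk orders (over 100 units) take 5 days,
        // everything else takes 2.
        int expected = quantity > 100 ? 5 : 2;
        if (days != expected)
            throw new AssertionError("expected " + expected + " days, spec said " + days);
        return this;
    }

    public static void main(String[] args) {
        // Reads as a specification of behavior, not an implementation tour:
        anOrderOf(500).shipsWithinDays(5);
        anOrderOf(10).shipsWithinDays(2);
        System.out.println("spec holds");
    }
}
```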

This morning I began reading Martin Fowler’s revised write-up on mocks vs. stubs. If you do any automated testing of your system at all, I demand you read this article. If you’ve read it in the past (as I had), I demand you read it again because it’s updated. If you read it last week, then I demand you read it to your 5-year-old tonight at bedtime. The point is that Martin makes very good distinctions between what he calls classic TDD and mockist TDD. He explains the difference between state verification and behavior verification, and he explains it in a way that shocked me, because I’d classify myself as a mockist. All this time I’ve insisted that implementation is secondary to design, and yet I now see how and why the mockist frame of mind can easily shackle your design to your implementation details. I almost wanted to change ships and jump on the classical bandwagon. However, the article very fairly points out several flaws in both approaches while condemning neither. In the end the choice is left to you. Well done, Martin. Well done.
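For anyone who wants the distinction in code before reading the article, here’s a hand-rolled sketch in the spirit of Fowler’s Order/Warehouse example. The test doubles are written by hand rather than with JMock, and the details are my own:

```java
import java.util.ArrayList;
import java.util.List;

// In the spirit of Fowler's Order/Warehouse example; details are invented.
interface Warehouse {
    boolean hasInventory(String product, int qty);
    void remove(String product, int qty);
}

class Order {
    private final String product;
    private final int qty;
    private boolean filled;

    Order(String product, int qty) { this.product = product; this.qty = qty; }

    void fill(Warehouse warehouse) {
        if (warehouse.hasInventory(product, qty)) {
            warehouse.remove(product, qty);
            filled = true;
        }
    }

    boolean isFilled() { return filled; }
}

// Classic/state style: a stub with real (if simplified) state.
class StubWarehouse implements Warehouse {
    private int stock;
    StubWarehouse(int stock) { this.stock = stock; }
    public boolean hasInventory(String p, int q) { return stock >= q; }
    public void remove(String p, int q) { stock -= q; }
    int stock() { return stock; }
}

// Mockist/behavior style: a mock that records the calls it receives.
class MockWarehouse implements Warehouse {
    final List<String> calls = new ArrayList<>();
    public boolean hasInventory(String p, int q) { calls.add("hasInventory"); return true; }
    public void remove(String p, int q) { calls.add("remove"); }
}

public class MocksVsStubs {
    public static void main(String[] args) {
        // State verification: fill the order, then inspect the resulting state.
        StubWarehouse stub = new StubWarehouse(50);
        new Order("talisker", 20).fill(stub);
        if (stub.stock() != 30) throw new AssertionError("state check failed");

        // Behavior verification: check which calls the order made, and in what order.
        MockWarehouse mock = new MockWarehouse();
        new Order("talisker", 20).fill(mock);
        if (!mock.calls.equals(List.of("hasInventory", "remove")))
            throw new AssertionError("interaction check failed");

        System.out.println("two passing tests, two different kinds of proof");
    }
}
```

The stub answers from state and we check the state afterward; the mock records the conversation and we check the conversation itself. That second test knows exactly which methods the implementation calls, which is precisely where the shackling Martin describes comes from.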

I titled this post “Classic vs Mockist, TDD vs BDD” not to align BDD with the mockist approach, though it fits better. The truth is that you can use Martin’s classic TDD style when doing BDD and vice versa. There really isn’t much of a difference between classic or mockist, TDD or BDD. It’s all a matter of style, as all these approaches aim in the same direction. It’s similar to comparing the Baptist and Methodist denominations: there are differences, but one is not more true than the other, and it’s the faith, the general direction both point to, that makes them equally important. Here’s what’s really important.

Back to my problem. I posted on Stack Overflow yesterday asking how to avoid so many mocks. I have yet to receive a solution to the core problem I’m seeing. If you or someone you know has a mastery of TDD, please chime in. I’d love to finally have a good discussion on how to solve a complex problem using the practice. I feel like I’m 80% there, but I’m missing one piece of the pie.