Test-driven development (or even behaviour-driven development) is increasingly popular, but old habits sometimes make you write your tests afterwards – just to keep your test coverage up.

This is just plain wrong.

If you’ve never seen your test fail, how can you know that you’re testing the right thing? Let’s imagine we were trying out Builder’s XmlMarkup for the first time:

module Demo
  def self.build_xml(xml)
    xml.parent do
      xml.child :id => 1
    end
  end
end
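
Running it by hand seems to produce exactly what we want. A quick sketch, using Builder’s XmlMarkup and its target! accessor to peek at the generated markup:

require 'rubygems'
gem 'builder'
require 'builder'
require 'demo'

xml = Builder::XmlMarkup.new(:indent => 2)
Demo::build_xml(xml)
puts xml.target!
# Prints roughly:
#   <parent>
#     <child id="1"/>
#   </parent>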

Could it get any easier? We already have a parent tag. Now let’s write the test afterwards:

require 'demo'

require 'rubygems'
gem 'builder'
require 'builder'

describe Demo do
  before(:each) do
    @xml = Builder::XmlMarkup.new(:indent => 2, :encoding => 'UTF-8')

    Demo::build_xml(@xml)
  end

  it 'should have a parent' do
    @xml.should match(/<parent>/)
  end
end

Indeed, it seems that we have a parent. And our test coverage is at a glorious 100%.

However, just out of curiosity, let's try writing an invalid test:

it 'should not behave like this' do
  @xml.should match(/THIS WAS NOT IN MY XML/)
end

It still passes. Why? Builder::XmlMarkup is built on a blank slate object, so its method_missing catches the call to RSpec's should and happily turns it into yet another XML tag. No expectation is ever evaluated, every conceivable matcher appears to pass, and the generated XML still looks perfectly fine.
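
One way to make the expectation actually run is to match against the generated string instead of the builder object. A minimal sketch, using Builder’s target! accessor, which returns the accumulated markup as a plain String:

it 'should have a parent' do
  # A String responds to RSpec's should, so the matcher really runs.
  @xml.target!.should match(/<parent>/)
end

it 'should not behave like this' do
  # And this bogus expectation finally fails, as it should.
  @xml.target!.should match(/THIS WAS NOT IN MY XML/)
end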

And this is not the only problem. I've even seen a case where someone used a clever doas(:username) helper to run tests as a certain user. Too bad the helper itself happened to be broken, so even the craziest tests passed.
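
To make that failure mode concrete, here is a purely hypothetical sketch of such a helper (every name in it is invented for illustration). The bug: when the user lookup fails, the block is silently skipped, so the expectations inside it never run and the example "passes":

def doas(username)
  user = User.find_by_login(username.to_s)
  return if user.nil?   # bug: should raise here instead of bailing out quietly
  as_user(user) { yield }
end

it 'should deny access to other users' do
  doas(:mallory) do
    # Never executed when the helper bails out above.
    lambda { secret_document.read }.should raise_error(SecurityError)
  end
end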

I pretty much assume that if a piece of code is not unit tested, it's broken. The same goes for tests: if you haven't seen a spec fail before implementing the feature, there's usually something wrong with the spec.