
IMPORTANT: Before reading this post open this link and let it play.

This is my response to the 2nd Developer Blog Banter. The question asked is:

How do you organise your tests? Do you separate your unit tests, integration tests and UI tests into separate projects? Do you do anything specific to keep track of your tests? What naming conventions do you use? Do you run them before a check-in or is that what the build server is for?

The first Developer Blog Banter was about technology stacks.

I organise tests into two groups: slow and fast. That is the only dichotomy that I care about. I run the fast tests all the time (using TestDriven.Net keyboard shortcuts). The slow tests run less frequently: when I change something external, when I think there is a chance I may have broken something, and during the continuous integration build.
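
The post doesn't say how the two groups are tagged, so as an assumption on my part, here is one common way to mark the split with NUnit's [Category] attribute; a test runner or the CI build can then include or exclude tests by category:

using NUnit.Framework;

[TestFixture]
public class FastSpecs
{
    // No external dependencies, so this stays in the all-the-time loop.
    [Test]
    public void Adds_two_numbers()
    {
        Assert.AreEqual(4, 2 + 2);
    }
}

[TestFixture]
[Category("Slow")] // the runner can exclude this category for the routine run
public class SlowSpecs
{
    [Test]
    public void Round_trips_a_record_through_the_database()
    {
        // talk to the real database here
    }
}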

The majority of my tests are written as BDD-style executable specifications using StoryQ. I spoke about this at the South Australian code camp and I will be presenting a new and improved version at CodeCampOz. This is obviously a topic close to my heart; I presented similar material at this year’s Seattle Code Camp. I test this way to realise the following benefits:

  • link the tests to the requirements. If the tests pass then the application IS correct.
  • drive the design from the outside in, like a sculptor.
  • prevent bugs, during development and in regression.

To briefly summarise the StoryQ approach, you start with a story like this:

Story: Customer Complaints
In order to get satisfaction
As a customer
I want to submit a complaint

From the story we derive scenarios that specify the feature by example. We use the familiar given/when/then syntax to separate the context and the observations from the event:

Scenario: Stupid Complaint
Given I am a customer
and my complaints are stupid
when I complain
then my complaint is ignored

Scenario: Valid Complaint
Given I am a customer
and my complaint is valid
when I complain
then my complaint is handled
and the manager contacts me to apologise

StoryQ provides a GUI tool that converts these scenarios into boilerplate code:

using System;
using System.Reflection;
using NUnit.Framework;
using StoryQ;

// Referenced by the parameterised "my complaints are ..." step below.
public enum ComplaintValidity { Stupid, Valid }

[TestFixture]
public class StoryQTestClass
{
    [Test]
    public void CustomerComplaints()
    {
        new Story("customer complaints")
            .InOrderTo("get satisfaction")
            .AsA("customer")
            .IWant("to submit a complaint")

                .WithScenario("stupid complaint")
                    .Given(IAmACustomer)
                        .And(MyComplaintIs, ComplaintValidity.Stupid)
                    .When(IComplain)
                    .Then(MyComplaintIsIgnored)

                .WithScenario("valid complaint")
                    .Given(IAmACustomer)
                        .And(MyComplaintIsValid)
                    .When(IComplain)
                    .Then(MyComplaintIsHandled)
                        .And(TheManagerContactsMeToApologise)
            .ExecuteWithReport(MethodBase.GetCurrentMethod());
    }

    private void IAmACustomer()
    {
        throw new NotImplementedException();
    }

    private void MyComplaintIs(ComplaintValidity validity)
    {
        throw new NotImplementedException();
    }

    private void MyComplaintIsValid()
    {
        throw new NotImplementedException();
    }

    private void IComplain()
    {
        throw new NotImplementedException();
    }

    private void MyComplaintIsIgnored()
    {
        throw new NotImplementedException();
    }

    private void MyComplaintIsHandled()
    {
        throw new NotImplementedException();
    }

    private void TheManagerContactsMeToApologise()
    {
        throw new NotImplementedException();
    }
}

If we run the tests now, the output shows, as expected, that nothing is implemented.

[Screenshot: StoryQ test output showing every step not yet implemented]

The developer’s job is now clear: to get all of the tests to green. When that is done, we know that the requirements have been satisfied:

[Screenshot: StoryQ test output showing every step passing]
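
Getting to green means replacing each NotImplementedException with code that drives the system under test. As a sketch, here are the first scenario's steps implemented against a hypothetical Complaint class; the post doesn't show the real application, so the domain type and its behaviour are invented for illustration:

// Hypothetical system under test, invented for this example.
public enum ComplaintStatus { Pending, Ignored, Handled }

public class Complaint
{
    public ComplaintValidity Validity { get; set; }
    public ComplaintStatus Status { get; private set; }

    public void Submit()
    {
        // Stupid complaints are ignored; everything else is handled.
        Status = Validity == ComplaintValidity.Stupid
            ? ComplaintStatus.Ignored
            : ComplaintStatus.Handled;
    }
}

// The corresponding step methods inside StoryQTestClass:
private Complaint complaint;

private void IAmACustomer()
{
    complaint = new Complaint();
}

private void MyComplaintIs(ComplaintValidity validity)
{
    complaint.Validity = validity;
}

private void IComplain()
{
    complaint.Submit();
}

private void MyComplaintIsIgnored()
{
    Assert.AreEqual(ComplaintStatus.Ignored, complaint.Status);
}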

Liam’s Final Thoughts

The way of working that I have described is similar to the workflow followed by many Ruby developers using Cucumber. Where they typically have their tests interact with their application’s user interface, I usually test against the layer immediately below the user interface. I developed this approach after listening to Hanselminutes episode 151 - Fit and Fitness with Ward Cunningham and James Shore. In that episode Ward says:

what that does is that separates you from the objects in question. For example you can’t ask it anything that isn’t in the output, whereas when I’m talking to the objects directly, I can ask the object the question that isn’t going to be in the output, right? And I can get an answer from it too because I’ll just let that object do that for the sake of testing and that is the power of objects. Well, so objects have kind of had their day and people are on to other things and that’s a technique that was important to me and I think that it’s something that was a little bit of my religion that hasn’t really stuck.

This comment resonated with me because of the pain that I had been experiencing with UI tests. I still write the occasional UI test, but I believe that attempting to prove that an application satisfies its requirements by testing through the UI is frustrating and ultimately futile.
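
Concretely, the difference is where the When step does its work. Both snippets below are sketches that reuse the hypothetical Complaint type from above; the browser driver and complaint service interfaces are stand-ins I have invented, not anything from the post:

// Hypothetical seams, declared only so the sketch is complete.
public interface IBrowserDriver
{
    void GoTo(string url);
    void TypeInto(string field, string text);
    void ClickButton(string caption);
}

public interface IComplaintService
{
    void Submit(Complaint complaint);
}

private IBrowserDriver browser;              // hypothetical UI-automation driver
private IComplaintService complaintService;  // hypothetical application service

// Through the UI: the step is coupled to page structure, slow and brittle.
private void IComplainThroughTheUi()
{
    browser.GoTo("/complaints/new");
    browser.TypeInto("ComplaintText", "The coffee is cold");
    browser.ClickButton("Submit");
}

// One layer below the UI: the step calls the application service that the
// controller would call, proving the same behaviour without any markup
// or browser automation in the way.
private void IComplain()
{
    complaintService.Submit(new Complaint { Validity = ComplaintValidity.Valid });
}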

Until next time, take care of yourselves and each other.

Posted on Wednesday, October 13, 2010 9:22 PM
