

Dylan Smith ALM / Architecture / TFS

So as I mentioned in my last post, I have a large-ish .NET application with a pretty low quality level.  The first task was to convince the powers that be that there is sufficient benefit to justify the cost of dedicating 1-2 developers to quality improvements.  Now we need a plan for how to actually make those improvements.

Ultimately I think we are going to need to do some major refactoring of large parts of the application (hopefully we can break that work down and perform it incrementally), but before we even consider that there are some other things we can and should do first.  For starters, we currently have no automated tests in place.  We have some manual test scripts that we run on a routine basis to check for regressions, but even those are fairly limited in how much of the functionality they actually exercise.  So an obvious place to start is to put a suite of automated tests in place, with the goal of enabling us to refactor with some level of confidence that we haven’t broken the rest of the application.

Writing the automated tests may prove a little tricky since this application wasn’t written with testability in mind.  The “seams” that Michael Feathers talks about in his book - areas where you can decouple pieces of behavior and test them in isolation - simply aren’t available in the current code-base, so tests can’t be written the way they would be if the application had been built with testability in mind (using TDD, for example).  So to begin, my plan is to write tests at a high level using a UI automation framework such as NUnitForms.  This should let us introduce some very broad tests that exercise the entire application stack, creating a safety net that tells us when a code change affects application behavior.  We plan on using code coverage metrics to track our progress in this activity.  I’m going to set a rather arbitrary goal to begin with – say 40% coverage – that we’ll attempt to achieve across all the assemblies.  Once we reach that, we’ll identify some good candidates for refactoring and focus any new tests on those specific areas of the code, with a higher coverage goal – maybe 60% for the specific assembly we’re targeting.  At that point we can start performing the refactoring work, using the tests as our safety net to give us some level of confidence that we aren’t destabilizing the codebase with the changes we make.
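To make the shape of these broad UI-level tests concrete, here’s a minimal sketch of what one might look like with NUnitForms’ tester classes.  The form class and control names (MainForm, customerNameTextBox, saveButton, statusLabel) and the expected label text are hypothetical placeholders for whatever the real application uses; the general pattern - show a form, drive its named controls, assert on the resulting UI state - is what NUnitForms provides.

```csharp
using NUnit.Framework;
using NUnit.Extensions.Forms;

[TestFixture]
public class InvoiceScreenTests : NUnitFormTest
{
    [Test]
    public void SavingAnInvoiceUpdatesTheStatusLabel()
    {
        // MainForm is a stand-in for the application's real top-level form.
        MainForm form = new MainForm();
        form.Show();

        // Testers locate controls by their Name property and drive them
        // through real Windows messages, exercising the full stack.
        new TextBoxTester("customerNameTextBox").Enter("Acme Corp");
        new ButtonTester("saveButton").Click();

        // Assert on whatever observable behavior the operation produces.
        Assert.AreEqual("Invoice saved.", new LabelTester("statusLabel").Text);
    }
}
```

One caveat: tests like this need an interactive desktop session to run, which is part of why driving them from a build service can be troublesome.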

** I’m aware that code coverage stats alone don’t represent whether you have a useful test suite or not, but they do give a convenient quantitative measure you can use to track progress.

In parallel with building up the test suite, I’m planning to make use of some static analysis tools to help improve quality at a low level.  The first thing we’re going to do is go through and fix all the compiler warnings we currently have.  Once that’s cleaned up, we’ll update the build to fail on warnings so none get re-introduced.  Next on the list is applying the Visual Studio Code Analysis rules (aka FxCop).  The plan is to enable one rule at a time, go through the entire code-base and fix all violations of it, then update the build to enforce that rule and fail on any new violations.  Doing this one rule at a time is a must since there are literally tens of thousands of violations in the code-base; going rule by rule at least lets us tackle it incrementally.  Once the FxCop rules are all in place, we’ll start applying StyleCop rules in the same manner to introduce some style consistency across the code-base.  After StyleCop we might move on to NDepend, using its built-in CQL (Code Query Language) rules as well as defining some of our own to help encourage proper use of the various components/layers within our application.
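The build enforcement side of this can be sketched as a handful of csproj properties - this assumes the standard VS2008-era MSBuild / Code Analysis property names, and the rule IDs shown are just examples of rules we might still be excluding at a given point in time:

```xml
<!-- Sketch of csproj settings for ratcheting up build strictness. -->
<PropertyGroup>
  <!-- Fail the build on any compiler warning once the backlog is clean. -->
  <TreatWarningsAsErrors>true</TreatWarningsAsErrors>

  <!-- Run FxCop-style Code Analysis on every build and treat its
       warnings as build failures. -->
  <RunCodeAnalysis>true</RunCodeAnalysis>
  <CodeAnalysisTreatWarningsAsErrors>true</CodeAnalysisTreatWarningsAsErrors>

  <!-- Exclude ("-") the rules we haven't cleaned up yet; delete entries
       one at a time as each rule's violations get fixed.
       (These particular rule IDs are illustrative examples.) -->
  <CodeAnalysisRules>-Microsoft.Naming#CA1709;-Microsoft.Design#CA1062</CodeAnalysisRules>
</PropertyGroup>
```

The nice property of this approach is that the strictness only ever ratchets upward: each rule we finish cleaning up becomes permanently enforced by the build.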
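For the NDepend step, here’s roughly what a couple of custom rules could look like in CQL.  The namespace names are hypothetical stand-ins for our actual layers, and the line-count threshold is an arbitrary example:

```sql
-- Flag any UI-layer type that talks directly to the data-access layer,
-- bypassing the business layer. (Namespace names are hypothetical.)
WARN IF Count > 0 IN SELECT TYPES
FROM NAMESPACES "MyApp.UI"
WHERE IsDirectlyUsing "MyApp.DataAccess"

-- Surface overly long methods as refactoring candidates.
WARN IF Count > 0 IN SELECT METHODS WHERE NbLinesOfCode > 30
```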

Check back in a while and I’ll update you guys on how these activities went, whether we realized any significant value, and any lessons learned that may help other people out in a similar situation.

Also, if anybody has any experience using UI automation frameworks for writing tests at the UI level, I’d be interested to know what framework you used, what you thought of it, and - if you were using Continuous Integration - any issues you ran into and how you worked around them (for example, I’ve heard that doing UI automation testing from a build server can cause all kinds of issues with modal dialogs, since most build servers run as a Windows service).

Posted on Tuesday, February 10, 2009 4:13 PM

Comments on this post: Plan of Attack for Dealing With Low Quality Codebase


Copyright © Dylan Smith