Blog Series: TDD and Process Part I

March 7, 2017 — Posted by Scott Bain

Part 1: The False Dichotomy

Traditions in software testing suggest that the balance among the various types of tests (Acceptance, API, Integration, Unit) should be weighted toward the lower-level, more granular tests and less toward the larger-scale, end-to-end tests.  The visualization is typically something along these lines, in terms of the effort that should be devoted to each:

The Test Pyramid

It has been suggested that more time and effort should be spent on unit tests, less on business-rule tests, still less on integration tests, and so on.  The rationale is this:

  • When unit tests are written properly, a failure of a given test gives you very precise feedback as to where the defect lies.  If we acknowledge that the expensive part of debugging is almost always finding the defect in question, plus the fact that up to 80% of the cost of software development is in maintenance[1], then we want a test failure to give this kind of feedback.  A big, end-to-end acceptance test will not do this: when it fails, the cause of the failure could be one of many things.
  • Unit tests run much faster than the tests north of them on this diagram, with acceptance tests being the slowest of all.  Therefore we will be able to run unit tests very frequently with little cost, and thus will tend to do so.  Tests that are run frequently provide rapid feedback, which allows us to work more quickly.  This is one of the reasons that TDD does not slow developers down, even though one might logically expect it to.
  • Unit tests are much easier to automate than, say, acceptance tests, and this also makes them much more efficient to run.  They can even be run automatically by a build process, or through continuous integration.

But another value of these tests, apart from detecting defects, is that they capture knowledge about the system: what its behaviors are, how they work, how they interact, and so forth.  In this mode they are of highest value when the largest number of people can obtain this knowledge by reading them, or add to this knowledge by writing/changing them.

From this viewpoint, we note that the broadest audience can read, write, modify, and understand the acceptance tests, because they are written in human-readable language.  Something like this:

Given: A base sale amount of $200.00

And: A commission rate of 8%

When: The total sale is calculated

Then: The total sale is $216.00

This is the language of “Behavior Driven Development”[2] and is a representation of behavior that anyone can comprehend.  The unit test for this same behavior would be something like:

[TestClass]
public class CommissionTest
{
    [TestMethod]
    public void TestCommissionIsApplied()
    {
        var calculator = CommissionCalculator.GetInstance();
        var baseSale = 200.00;
        var commissionRate = .08;
        var expectedTotalSale = 216.00;
        var actualTotalSale = calculator.ApplyCommission(baseSale, commissionRate);
        // Compare doubles with a tolerance rather than exact equality.
        Assert.AreEqual(expectedTotalSale, actualTotalSale, 0.001);
    }
}
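The test refers to a commission calculator class that the post does not show. For readers who want to run the test, here is a minimal, hypothetical sketch of such a class; only the behavior the test specifies (a $200.00 sale at 8% totaling $216.00) comes from the post, while the singleton wiring and the arithmetic are assumptions about one plausible implementation:

```csharp
// Hypothetical sketch only: not the author's actual production class.
public class CommissionCalculator
{
    private static readonly CommissionCalculator Instance = new CommissionCalculator();

    private CommissionCalculator() { }

    // The test's GetInstance() call suggests a singleton-style accessor.
    public static CommissionCalculator GetInstance()
    {
        return Instance;
    }

    // Applies the commission rate to the base sale:
    // a $200.00 sale at an 8% rate yields a $216.00 total.
    public double ApplyCommission(double baseSale, double commissionRate)
    {
        return baseSale * (1.0 + commissionRate);
    }
}
```

With this sketch in place, the unit test above has something concrete to exercise.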

This is a very simple unit test, but most non-technical people would have a lot of trouble gaining any knowledge by reading it, and far fewer would be able to write or modify it.

So there would seem to be a dichotomy here:

  • The more fine-grained a test is, the better the feedback it provides will be, but fewer people will be able to gain value from it, especially as time goes by and the suite gets larger and more complex.
  • The more coarse-grained a test is, the worse the feedback it provides will be, but more people will be able to gain value from it and contribute to its creation.

So: your tests can either provide you a great deal of value and precise information when they fail, or they can be more broadly useful to the organization when they are passing (when they accurately represent the behavior the system has).  You can’t have both.

What we seek to do here is to debunk this dichotomy.  We think you should be greedy: you should want both fast, granular feedback and the ability for everyone to participate collaboratively in determining what the system behaviors should be (before development) and what they are (after development).  This would include developers, testers, product owners, project managers, business analysts, and so on: everyone on the team, and everyone who has a stake in the development process.

The next few parts of this series will hopefully make it clear how you can accomplish this.

 

About the author | Scott Bain

Scott Bain is a consultant, trainer, and author who specializes in Test-Driven Development, Design Patterns, and Emergent Design.
