What is design? When do you do it? How much do you do? And how do you begin?
As teachers of Design Patterns and TDD, David Bernstein and I often ask these questions. Invariably, we include the notion that one engages in "design" (not necessarily up-front design, mind you) as a way to mitigate risk, among other things.
But which risks? Can you mitigate all risks? Do you even know all the risks that you might need to address in design? In most cases, the answer is almost certainly "No."
Let me wax a bit "Rumsfeldian" here and talk about the knowns and unknowns in assessing risk: there are known knowns (risks we are aware of and understand), known unknowns (risks we know exist but cannot fully characterize), and unknown unknowns (risks we have not even conceived of yet).
But when does this "we have this risk and we know it" kind of design lead us into over-design? Even if we know about a risk, really know it, do we have to design for it? Should we always? This leads us to think about aspects of risk, and to realize that not all risks are created equal.
Here is an interesting way to delineate the nature of a risk: consider its severity, its likelihood, and its addressability, with each of these measured on a continuum from high to low. The thought here is that you might be able to say that if a risk is high on two of these aspects, whether you are aware of it in initial requirements analysis or become aware of it during refactoring, then addressing it in design is not over-design.
Addressable and severe, but not likely: Think of a car accident. Most people are not in very many of them, and so we can say they are relatively unlikely. But when they happen, they can be extremely severe. We cannot prevent them, but we can address the issue in a car's design, including seatbelts and airbags. In technology, this would include things like database backups or enabling rollbacks in transactional systems.
Addressable, not severe, but highly likely: Think of stubbing your toe. This happens pretty frequently, but when it does the result is nothing dire. It's easy to address, if we wish to: we wear shoes. In designing systems, we similarly try to avoid concrete coupling by encapsulating construction. Many have suggested that refactoring tools (or simple global search and replace) make this unimportant, but encapsulation of construction makes the risk easy to address, so why not do it? The same could be said for programming by intention, making all state private, and other simple practices.
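To make "encapsulating construction" concrete, here is a minimal sketch (the class names are my own invention, not from any particular system): clients obtain an instance through one factory method, so the single place that names a concrete class can change without touching any client code.

```python
class Sender:
    """An abstraction; clients never name a concrete subclass."""

    @classmethod
    def make(cls) -> "Sender":
        # Encapsulated construction: the one place that chooses a
        # concrete class. Changing this line changes no client code.
        return SmtpSender()

    def send(self, message: str) -> str:
        raise NotImplementedError


class SmtpSender(Sender):
    """One concrete implementation, hidden behind the factory."""

    def send(self, message: str) -> str:
        return f"smtp: {message}"


# Client code couples only to the abstraction and its factory.
sender = Sender.make()
print(sender.send("hello"))  # smtp: hello
```

If the risk (needing to swap implementations later) ever materializes, it is addressed in one line; if it never does, the cost of the practice was nearly zero.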
Addressable, but neither severe nor likely: These are the issues that may be over-design, and this is where we'd like to ask for a discussion. A good example is what some call "exception-driven programming": failing to code to the "happy path" first. If you focus in your code on every possible thing that can go wrong before you code the path that assumes good, valid parameters, then you will likely overdo it (often you can make certain combinations impossible rather than having to guard against them).
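As a sketch of the alternative (making an invalid combination impossible rather than guarding for it), compare a guard-heavy function with one whose parameter type simply cannot hold a bad value. The names here are illustrative only:

```python
from enum import Enum

class Priority(Enum):
    LOW = 1
    HIGH = 2

# Exception-driven style: every caller's raw string must be re-validated.
def route_guarded(priority: str) -> str:
    if priority is None:
        raise ValueError("missing priority")
    if priority not in ("low", "high"):
        raise ValueError("unknown priority")
    return "fast-lane" if priority == "high" else "queue"

# Happy-path style: the Enum makes an invalid priority unrepresentable,
# so the body is only the interesting logic.
def route(priority: Priority) -> str:
    return "fast-lane" if priority is Priority.HIGH else "queue"

print(route(Priority.HIGH))  # fast-lane
```

The second version has nothing to guard because the type system has already eliminated the bad inputs.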
For another example (and if you run Outlook), fire up Outlook and examine the thread count in your task manager. At first run (not doing anything) Outlook shows 50 threads running on my laptop. Why? I don't actually know, but I'll bet there was some condition, terribly unlikely and probably not very severe, that someone found a clever solution to. Unfortunately, the solution is more severe than the problem (each of those threads takes up a megabyte of my memory).
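The earlier "high on two of the three aspects" rule of thumb can be encoded as a toy predicate, just to make the heuristic explicit (this is my own sketch, not a formal method):

```python
def worth_designing_for(severe: bool, likely: bool, addressable: bool) -> bool:
    # Heuristic from the discussion above: a risk rated "high" on at
    # least two of severity, likelihood, and addressability is worth
    # addressing in design; doing so is not over-design.
    return sum([severe, likely, addressable]) >= 2

# A car accident: severe and addressable, though not likely.
print(worth_designing_for(severe=True, likely=False, addressable=True))  # True
```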
What do you think about this "severity, likelihood, addressability" set of distinctions?
Have you encountered other examples (in any combination)? How does that play into the notion of over-design?
Comment on this blog or (better yet) post at our Yahoo Groups Lean Programming group:
And, by the way, if you don't know some of these terms (refactoring to the open-closed, patterns and forces, encapsulating the constructor, Commonality-Variability Analysis, etc…) please visit our resource section on Design Patterns. We have ezines and streamzines that explain these things.