Design, Analysis, and Risk Mitigation

November 16, 2007 — Posted by Scott Bain

What is design? When do you do it? How much do you do? And how do you begin?

As teachers of Design Patterns and TDD, David Bernstein and I ask these questions often. Invariably, we include the notion that one engages in "design" (not necessarily up-front design, mind you) as a way to mitigate risk, among other things.

But which risks? Can you mitigate all risks? Do you even know all the risks that you might need to address in design? In most cases, the answer is almost certainly "No."

Kinds of Risks 

Let me wax a bit "Rumsfeldian" here and talk about the knowns and unknowns in assessing risk:

  • Known Unknowns. You don't know something, but at least you are aware that you don't know it.
    • When you are aware of an unknown, the approach is usually to abstract the issue out (to make it flexible), and isolate it from the rest of the system (encapsulate it). Sometimes this awareness comes along after the project has started, and then you can use refactoring to make something flexible that was not before. This is usually called "refactoring to the open-closed".
  • Unknown Knowns. You know something, but you don't realize you know it.
    • Sometimes, requirements have subtleties in them, or important implications that really should be explicitly stated, but are not. Usually, this is because others think these issues are "obvious".
    • To find these between-the-lines issues we use techniques like Commonality-Variability analysis, and consider Patterns as collections of forces, to uncover realities in the domain. Usually, this is thought of as requirements analysis. TDD is a good way to investigate the assumptions in a system, as testing will often lead us to question what is "obvious."
  • Unknown Unknowns. You don't know something and you have no idea that you have a gap in your knowledge.
    • These are the sticky ones. The most empowering advice we have discovered so far is, when in doubt, encapsulate everything you can and only reveal what's necessary.
    • We have a number of best practices here, things that are easy to do but that help to create general encapsulation throughout a system (a brief sketch follows this list):
      • Programming by intention
      • Encapsulating constructors
      • Making all state variables private by default
  • Known Knowns. You know of an issue and you know you know it.
    • There are variations you know you must prepare for, security standards you must enforce, business functionality that is specified, etc... Certainly, you can design for these and, in a sense, this is what we were traditionally taught to think of as "design."
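
Here is the sketch promised above: the three practices for general encapsulation (programming by intention, an encapsulated constructor, and private-by-default state) in one small Java class. The Invoice example and all its details are invented purely for illustration:

    import java.util.List;

    // Programming by intention, an encapsulated constructor, and private-by-default
    // state, all in one small (invented) class.
    public class Invoice {

        // All state is private by default; nothing outside the class can couple to it.
        private final List<Double> lineItemAmounts;
        private final double taxRate;

        // The constructor is encapsulated: clients call the static create() method
        // rather than "new", so we stay free to return a subclass or a cached
        // instance later without breaking them.
        private Invoice(List<Double> lineItemAmounts, double taxRate) {
            this.lineItemAmounts = lineItemAmounts;
            this.taxRate = taxRate;
        }

        public static Invoice create(List<Double> lineItemAmounts, double taxRate) {
            return new Invoice(lineItemAmounts, taxRate);
        }

        // Programming by intention: the public method reads as a list of intentions,
        // each delegated to a well-named private helper.
        public double total() {
            double subtotal = sumOfLineItems();
            return subtotal + taxOn(subtotal);
        }

        private double sumOfLineItems() {
            double sum = 0.0;
            for (double amount : lineItemAmounts) {
                sum += amount;
            }
            return sum;
        }

        private double taxOn(double subtotal) {
            return subtotal * taxRate;
        }
    }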

But when does this "we have this risk and we know it" kind of design lead us into over-design? Even if we know about a risk, really know it, do we have to design for it? Should we always? This leads us to think about aspects of risk, and to realize that not all risks are created equal.

Addressing Risk in Design 

Here is an interesting way to delineate the nature of a risk: Consider its

  • Severity: How bad will it be if this happens?
  • Likelihood: How likely is it to happen?
  • Addressability: Can you do anything about it?

…with each of these measured on a continuum from high to low. The thought here is that if a risk rates high on at least two of these aspects, then addressing it in design is not over-design, whether you are aware of it in initial requirements analysis or only become aware of it later, during refactoring.
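
Stated baldly, the heuristic might look something like this sketch (the Level and Risk names are mine, invented only to make the rule concrete):

    // Rate each aspect of a risk as simply HIGH or LOW.
    enum Level { HIGH, LOW }

    class Risk {
        private final Level severity;
        private final Level likelihood;
        private final Level addressability;

        Risk(Level severity, Level likelihood, Level addressability) {
            this.severity = severity;
            this.likelihood = likelihood;
            this.addressability = addressability;
        }

        // Designing for the risk is arguably not over-design when it rates HIGH
        // on at least two of the three aspects.
        boolean worthDesigningFor() {
            int highs = 0;
            if (severity == Level.HIGH) highs++;
            if (likelihood == Level.HIGH) highs++;
            if (addressability == Level.HIGH) highs++;
            return highs >= 2;
        }
    }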

Addressable and severe, but not likely: Think of a car accident. Most people are not in very many of them, so we can say accidents are relatively unlikely. But when they happen, they can be extremely severe. We cannot prevent them, but we can address the issue in a car's design by including seatbelts and airbags. In technology, this would include things like database backups or enabling rollbacks in transactional systems.
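
In code, the "severe but unlikely" case is handled with the familiar transactional pattern. Here is a minimal JDBC sketch; the accounts table and the transfer scenario are just placeholders of my own:

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;

    public class TransferExample {
        // Move money between two accounts inside a transaction, so that an
        // unlikely-but-severe failure can be rolled back rather than leaving
        // the data half-updated.
        public static void transfer(Connection conn, int fromId, int toId,
                                    double amount) throws SQLException {
            boolean originalAutoCommit = conn.getAutoCommit();
            conn.setAutoCommit(false); // begin the transaction
            try {
                try (PreparedStatement debit = conn.prepareStatement(
                        "UPDATE accounts SET balance = balance - ? WHERE id = ?")) {
                    debit.setDouble(1, amount);
                    debit.setInt(2, fromId);
                    debit.executeUpdate();
                }
                try (PreparedStatement credit = conn.prepareStatement(
                        "UPDATE accounts SET balance = balance + ? WHERE id = ?")) {
                    credit.setDouble(1, amount);
                    credit.setInt(2, toId);
                    credit.executeUpdate();
                }
                conn.commit();   // the happy path: both updates succeed together
            } catch (SQLException e) {
                conn.rollback(); // the seatbelt: undo both updates on failure
                throw e;
            } finally {
                conn.setAutoCommit(originalAutoCommit);
            }
        }
    }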

Addressable, not severe, but highly likely: Think of stubbing your toe. It happens fairly frequently, but when it does the result is nothing dire. It's easy to address, if we wish to: we wear shoes. In designing systems, we usually try to avoid concrete coupling by encapsulating construction. Many have suggested that refactoring tools (or simple global search and replace) make this unimportant, but encapsulation of construction makes it easy to address, so why not do it? The same could be said for programming by intention, making all state private, and other simple practices.
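
To make "encapsulating construction" concrete, here is a small sketch (the Tax classes are purely illustrative). The client is coupled only to the abstract type, so adding a new variation later touches nothing in the client code:

    // The base type hides which concrete subclass the client actually gets.
    abstract class Tax {
        // The one place that knows the concrete classes; clients never say "new".
        public static Tax create(String region) {
            if ("US".equals(region)) {
                return new UsTax();
            }
            return new FlatTax();
        }

        public abstract double on(double amount);
    }

    class UsTax extends Tax {
        public double on(double amount) { return amount * 0.07; }
    }

    class FlatTax extends Tax {
        public double on(double amount) { return amount * 0.10; }
    }

    class Client {
        // Coupled only to the abstraction, never to a concrete class.
        double taxFor(String region, double amount) {
            return Tax.create(region).on(amount);
        }
    }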

Addressable, but neither severe nor likely: These are the issues that may be over-design, and this is where we'd like to ask for a discussion. A good example is what some call "exception-driven programming": focusing on every possible thing that can go wrong before you code the "happy path" that assumes good, valid parameters. If you work that way, you will likely overdo it; often you can make certain bad combinations impossible rather than having to guard against them.
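
As a small illustration of making a bad combination impossible rather than guarding for it everywhere, consider something like this sketch (the DateRange class is invented for the example):

    // Because the constructor refuses to build an end-before-start range, no
    // downstream code needs its own defensive check for that condition.
    public final class DateRange {
        private final long startMillis;
        private final long endMillis;

        public DateRange(long startMillis, long endMillis) {
            if (endMillis < startMillis) {
                throw new IllegalArgumentException("end must not precede start");
            }
            this.startMillis = startMillis;
            this.endMillis = endMillis;
        }

        // Happy-path code: no re-checking of an invariant that cannot be violated.
        public long durationMillis() {
            return endMillis - startMillis;
        }
    }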

For another example, if you run Outlook, fire it up and examine its thread count in Task Manager. At first run, before doing anything at all, Outlook shows 50 threads on my laptop. Why? I don't actually know, but I'll bet there was some condition, terribly unlikely and probably not very severe, for which someone found a clever solution. Unfortunately, the solution is more severe than the problem: each of those threads takes up a megabyte of my memory.

What do you think about this approach? 

What do you think about this "severity, likelihood, addressability" set of distinctions?

Have you encountered other examples (in any combination)? How does that play into the notion of over-design?

Comment on this blog or (better yet) post at our Yahoo Groups Lean Programming group:

And, by the way, if you don't know some of these terms (refactoring to the open-closed, patterns and forces, encapsulating the constructor, Commonality-Variability Analysis, etc…) please visit our resource section on Design Patterns. We have ezines and streamzines that explain these things.

About the author | Scott Bain

Scott Bain is a consultant, trainer, and author who specializes in Test-Driven Development, Design Patterns, and Emergent Design.
