There is a division in the agile community about whether one should rely on people alone or on people supported by systemic thinking (no one I know of suggests systems alone are enough). This debate is often framed as people over process vs. people and process (or, as Don Reinertsen would say, people times process). I've been in the agile community for some time, and that long-time perspective has let me see an interesting pattern that sheds some light on this debate. This blog will discuss the pattern of what happens when smart people do not have a proper understanding of what they are doing.
I'll start with what I consider to be the most embarrassing moment of my career. It was in 1984, 14 years into my development career. I contracted to build the software system that would power the touch-controlled information kiosks at Vancouver Expo '86. At the time, this was very avant-garde. I was essentially in charge of rewriting a Basic-language prototype in C for both improved performance and new features. Since I was experienced in both languages, I remember thinking it'll be easy – it's just a rewrite.
There were two main components of the application. Mine was the user-facing component that defined how the system should work: basically, you entered events on a timeline that the system would run when the screen was touched. The other was a run-time component that ran the pseudo-code mine compiled. I sub-contracted someone else to do the run-time program because mine looked to be the more complex beast. At the time, I had a reputation for producing functioning code extremely quickly (and yes, I intentionally did not use the word maintainable).
It only took me a few weeks to get the basics of the system up and running – everyone was pleased. I was confident of success because, given this was a rewrite, I figured the customer would know what was needed and I would just be adding functionality. Unfortunately, after they had used the system for a while, bad things started to happen. It seemed every time they wanted a new input feature (e.g., specifying a new event, like touching the screen or starting audio), I would put it in quickly and it would work – but a couple of days later I would find out that I had broken something that had been functioning. The problem was that I had tightly coupled code and was not following Shalloway's principle. Up to this time I had studied how to code better (e.g., structured programming), but I hadn't studied what caused errors (e.g., tight coupling, lack of encapsulation). BTW – this is not the embarrassing part yet.
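To make the coupling problem concrete, here is a minimal sketch (in Python, not the original 1984 C, and with invented names) of that failure mode: when two places in the code each "know" the set of event kinds, adding a new kind means remembering to change both places – forget one and something that used to work breaks. A single registry gives one point of change.

```python
# Hypothetical sketch of the coupled design (not the original code).
# The kind list and the dispatcher each duplicate the set of event kinds,
# so adding a new kind (e.g. "audio") requires edits in two places.

COUPLED_KINDS = ["touch", "timer"]          # place 1 that must change

def run_coupled(event):
    if event == "touch":                    # place 2 that must change
        return "handle touch"
    elif event == "timer":
        return "handle timer"
    raise ValueError(f"unknown event: {event}")

# Encapsulated alternative: one registry is the single point of change.
HANDLERS = {
    "touch": lambda: "handle touch",
    "timer": lambda: "handle timer",
    "audio": lambda: "handle audio",        # new feature: one edit, one place
}

def run_encapsulated(event):
    try:
        return HANDLERS[event]()
    except KeyError:
        raise ValueError(f"unknown event: {event}")
```

In the coupled version, a new "audio" feature added only to the dispatcher silently leaves `COUPLED_KINDS` stale – exactly the kind of forgotten second edit that kept breaking previously working behavior.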
The next few weeks followed this pattern: 1) get a customer request, 2) get the request working, 3) be told by the customer a day or two later that something else was no longer working, 4) fix the new bug. This extra bug-fixing work was taking a considerable amount of time, and it was clear we were in serious trouble. With what I know today, I would have concerned myself with writing better code (preventing errors instead of fixing them). But what I did back then was recognize that I was causing bugs because I just wasn't finding all the coupled cases. (I was unaware of Shalloway's Law at the time – roughly, when N things need to change and N > 1, you will find at most N − 1 of them. In fact, it was this experience that inspired Shalloway's Law.) I figured that if there were just a way I could tell I was about to commit an error, I could continue programming fast. Having to type something in several places didn't bother me – at the time I could type about 100 wpm (not my highest speed, but still pretty fast).
I thought the answer to my problems was detecting errors quickly and (mostly) effortlessly. So here's what I did: I spent a day writing the equivalent of a UI test runner and sub-contracted someone to run the tests for me. While I could re-run the test cases automatically, I needed someone to set them up and check the results against known-good cases. I had basically instituted semi-automated acceptance testing in 1984 (still not the embarrassing moment – this was actually pretty cool).
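The harness itself can be sketched in a few lines. This is a hypothetical reconstruction with invented names, not the original code: a runner replays saved input scripts through the system and diffs each result against output a human has already approved (the known-good cases), so any regression shows up minutes after a change.

```python
# Hypothetical golden-case acceptance-test runner (illustrative only).

def run_system(script):
    """Stand-in for the system under test -- here, a trivial interpreter
    that just echoes each timeline step it executes."""
    return [f"ran:{step}" for step in script]

def run_acceptance_tests(cases):
    """cases maps a test name to (input_script, approved_output).
    Returns the list of regressions; empty means nothing broke."""
    failures = []
    for name, (script, approved) in cases.items():
        actual = run_system(script)
        if actual != approved:
            failures.append((name, approved, actual))
    return failures

cases = {
    "touch-then-audio": (["touch", "audio"], ["ran:touch", "ran:audio"]),
    "timer-only": (["timer"], ["ran:timer"]),
}
print(run_acceptance_tests(cases))  # → [] (no previously working behavior broke)
```

The human part of the 1984 setup – capturing new cases and approving their first output – corresponds to filling in `cases`; re-running everything after each change is the automatic part.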
From this point on we zoomed along. My quick coding style was no longer holding us back. I'd make a change, give it to my tester, and within 15 minutes he'd tell me what I had unintentionally broken by forgetting to change something that was coupled to my fix. I fixed it almost immediately because I knew it was something I had just changed. Bottom line: we got our system out in very good time. We even became a real product, whereas we were originally only supposed to be a tactical solution for the Expo. The strategic product was being built in parallel, with a longer timeframe and 30 people (compared to our 4). However, our product ended up being better, so they released both.
So what's embarrassing about "inventing" automated acceptance testing in 1984 and building a product for my client within budget and time, while exceeding the functionality initially envisioned, and with high quality? It's that I didn't do automated acceptance testing again until 2000, when I read about XP.
This episode was one reason I knew XP would work the moment I heard about it. I had done an iterative, close-customer, automated-acceptance-test, continuous-build (there was only me!) project 16 years earlier. Only now I had 16 more years of experience in considering what made for good programming.
This was why I immediately questioned why XP worked (not if – I was clear that it did). I remember this not being very well received. At the time, Kent Beck and Ron Jeffries (two of the originators of XP) pretty much insisted that you had to do all twelve practices of XP or you'd lose its power. There was also little in the way of explaining how to code.
Yes, I know about the four rules of writing simple code – that the code: 1) runs all the tests, 2) contains no duplication, 3) expresses the intent of the programmer, and 4) minimizes the number of classes and methods.
The problem with this definition is that it is practice-based. It is also stated in a way that is understandable to someone who already understands these practices (that is, who has intuited the principles underneath them), but it will cause great misunderstanding for those who don't have this intuitive sense.
Of course, Kent, Ron, and Ward (the third originator of XP) are all brilliant developers and had the necessary intuition. Unfortunately, most of the people getting excited about XP didn't. I remember talking to several of my associates about XP and saying that without a proper understanding of what was underneath XP (something no one wanted to talk about at the time) there would be serious problems for anyone undertaking it. I even gave a time frame – 6 months. Now, be clear: I thought XP was brilliant. I just said it was dangerous without the key understanding of it. Sure enough, while many people had great success, many others had great problems with poorly written code (ironically, mostly in the test code).
Those of you who know me know I've said pretty much the same thing about Scrum. I've written blogs on why it works and why it doesn't. Ironically, here, as in the XP case, my comments and concerns were pretty much ignored by the Scrum community. Today we have many (most?) Scrum teams practicing what the Scrum community calls "Scrum-but" (that is, "we do Scrum, but …"). I wrote a blog on this as well: The 5-Whys of Lean as the Answer to the But of Scrum. Even Ken Schwaber, Scrum's co-creator and biggest evangelist, has said, "I estimate that 75% of those organizations using Scrum will not succeed in getting the benefits that they hope for from it."
So what is the pattern of these three things?
I would suggest that counting on smart people to find the right thing to do is not always a winning strategy, and that giving people an understanding of the principles and rules underneath programming and development will make them much better. I admit this assumes that I am a "smart" person. But I do think I qualify – summa cum laude, two master's degrees (one from MIT), successful author, have run a successful business for 11 years (and still going), … I'm not trying to toot my own horn here. In fact, I'm asking: how could someone as smart as me do something as stupid as not using automated acceptance testing for 16 years (isn't that embarrassing?).
Well, my answer is that relying on practices, even if you are smart, is insufficient. You must learn why those practices work. Of course, this makes sense only if you believe there are rules underneath what we do – and many in the agile community don't believe this (I'll be writing a blog on that next week). The bottom line for me is: get the best people you possibly can. Then make sure they study their methods, as explicitly as possible, so they can create solid support systems and an understanding of what they do. You will get a much greater return from their efforts if you do.
In my case, that understanding would have had me look for where I could apply automated acceptance testing effectively. Years later, I now understand that one key aspect of automated acceptance testing is eliminating the extra work that comes from the delay between code and test. I clearly knew this at some level in 1984 – but not deeply or consciously enough to take advantage of it on a regular basis.
XP has been around long enough that people have finally gotten to why it works. In Scrum's case, I believe we find people doing Scrum-but because their lack of understanding of the principles underneath Scrum prevents them from effectively changing the given practices. They often think they are doing the right thing when, in fact, it is not effective.
This is why, at Net Objectives, all of our training and consulting starts with why things work. If this makes sense to you, and you think you could use some help in doing this, please send me an email at alshall AT netobjectives.com to see if we can help.
If you want more information on what we now consider to be useful principles and guidelines for coding better, check out these resource pages (you'll have to register to get access to some of them):