The Difference Between "Inspect and Adapt" and Plan-Do-Check-Act (PDCA)

October 10, 2009 — Posted by Al Shalloway

In our book, Lean-Agile Software Development: Achieving Enterprise and Team Agility, we mention that "inspect and adapt" is not the same thing as Plan-Do-Check-Act. Yes, they sound the same, but they are manifestations of different causality models. To fully understand the differences between "inspect and adapt" and PDCA, we must look at these causality models (which we will do shortly).

Scrum contends that because we are working on non-deterministic systems, our own process should be a controlled black box. See The Scrum Papers, pg 58. We disagree. A simple example of this is driving a car. I think we'd all agree that driving is a non-deterministic system. When you leave your home for work you do not know exactly what will happen – who will cut you off and how you'll have to drive defensively, or perhaps who you will cut off in an attempt to make up some time. Yet, most of the time, you get to where you wanted to go in about the time you expected. The "process" you use is a set of rules: keeping a certain distance between the car in front of you and your car (except, of course, when you are really trying to speed up the idiot), driving on the right side of the road when in the US on two-way streets, pressing the gas to accelerate, pressing the brake to slow down, threatening the kids if they don't shut up, and so on. While simply knowing the rules doesn't mean you know how to drive, knowing the rules helps you drive better.

In an earlier blog (Types of Processes by Don Reinertsen), I discussed how the degree of visibility of your process is a separate issue from the level of randomness of its output. A third, separate, issue is how much feedback you need to control things.

While we agree that software development is a non-deterministic process, we do not believe that there is no causality in the actions involved. We also believe that it is important to create visibility into the process (what we call transparency) and not just visibility into the results (I've stated this several times on user groups but will write an upcoming blog focusing on this shortly). This is a significant difference between flow-based systems (e.g., Kanban) and Scrum.

At a cursory level, the project boards for both flow-based systems and Scrum look the same. One can see work entering the system, the different stages the work is in, and when the work is done. We call this visibility – that is, we have visibility into the results of the team. Incredibly important, but insufficient in our minds.

As important is the answer to the question: how does the work flow from one end of the board to the other? Is it just up to individual members' decisions when to work on things, or is there a visible set of decisions at work? While no complete definition of Scrum exists, the aforementioned Scrum Papers, as well as many blogs and user-group comments from CSTs (Certified Scrum Trainers, presumably the highest authorities on the Scrum Alliance's stance on Scrum), continuously state the supposition that a defined process is not a good idea (if it were even possible). In other words, most Scrum boards will show you stories waiting to be worked on, those in process (including varying states), and those completed – but having a defined set of rules for how things go from one column to the other is not a part of Scrum. It is just left to the team members' judgment. Teams are supposed to pay attention to the effectiveness of their actions and adapt accordingly. You can, of course, add your own rules. A good thing to do, in our mind, if you are doing Scrum – but then you have a variant of it (we call ours Scrum#).

PDCA is a bit different from "inspect and adapt" in that it requires the team to consider how things are actually working. In other words, we don't want to just inspect and adapt; we want to understand, at least a little, about the causality of things. For example, agile teams often experience a backlog of tests for testers at the end of a sprint. Developers have done the coding but the testers don't quite seem to be able to keep up. A good Scrum team will try to figure out what actions they can take to help here. Perhaps they try pairing developers with testers. Perhaps they decide to specify their tests before writing code. Both good things. But what they haven't done is try to understand the dynamics of what is going on by creating a model of it – something they can then try different things against to see what works better. Given that one tenet of Scrum is that your process is a black box, i.e., it shouldn't (can't) be defined, this is not a surprise.

Lean takes a scientific approach. It believes you can understand the effects that your actions have. Lean suggests that one should consider how they are working to be the best way they know how. In this regard, their method of working is a hypothesis – "this is the best way to do our work." We make improvements to how we work by suggesting a new hypothesis and seeing what happens. That is, we see how our actions affected our results. In Kanban, we focus on managing work-in-process levels. Our process hypothesis typically includes a set of limits for different types of work plus service-level agreements. We adjust these to maximize value delivered to the customer.
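To make this concrete, here is a minimal sketch of treating a Kanban policy as an explicit, testable hypothesis. All the names, WIP limits, and cycle times are invented for illustration – they are not from any particular Kanban tool or team:

```python
from dataclasses import dataclass

@dataclass
class KanbanPolicy:
    """One explicit process hypothesis: WIP limits per stage plus a service-level target."""
    wip_limits: dict   # e.g. {"analysis": 2, "dev": 3, "test": 2}
    sla_days: float    # cycle time we hypothesize this policy will deliver

def check_hypothesis(policy, observed_cycle_times):
    """'Check': did the policy deliver the cycle time we hypothesized?"""
    avg = sum(observed_cycle_times) / len(observed_cycle_times)
    return avg <= policy.sla_days, avg

# The current way of working, stated as a hypothesis.
policy = KanbanPolicy(wip_limits={"analysis": 2, "dev": 3, "test": 2}, sla_days=5.0)

# Observed cycle times (days) for recent work items – illustrative data.
ok, avg = check_hypothesis(policy, [4.0, 6.5, 7.0, 5.5])

# 'Act': the hypothesis failed (avg 5.75 > 5.0), so propose an adjusted
# policy – e.g. lower the dev WIP limit to reduce queuing – and run again.
if not ok:
    policy.wip_limits["dev"] -= 1
```

The point is not the code itself but that the policy is written down: anyone can see the current hypothesis, and every adjustment to it is a deliberate experiment rather than an individual's unstated judgment.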

In our developer/tester disconnect example, we might also consider pairing developers and testers, but we should also see if we get any improvement. This is how the PDCA cycle works. Our "plan" is "pairing developers with testers." We are hypothesizing that this will be an improvement. Our "do" is trying this out for some time specified by the team. We then "check" to see what happened (validate or invalidate our hypothesis). And then we "act" accordingly – make a new model, way of working, etc. Note that in knowledge work a somewhat equivalent model – Look, Ask, Model, Discuss, Act (LAMDA) – has been offered. This probably provides a better metaphor, but has essentially the same intent.
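The cycle above can be sketched in a few lines. The backlog numbers are made up purely for illustration, and the "do" step is stubbed where a real team would run an actual sprint and measure:

```python
# Plan: hypothesize that pairing developers with testers reduces the
# end-of-sprint test backlog (all numbers below are illustrative).
baseline_backlog = 8   # untested stories at sprint end, before the change
hypothesis = {"change": "pair developers with testers", "predicted_backlog": 4}

# Do: run one sprint with the change and measure (stubbed observation here).
observed_backlog = 5

# Check: compare the observation to both the baseline and the prediction.
improved = observed_backlog < baseline_backlog
as_predicted = observed_backlog <= hypothesis["predicted_backlog"]

# Act: standardize, refine, or abandon the change based on what we learned.
if improved and as_predicted:
    decision = "standardize pairing"
elif improved:
    decision = "keep pairing; refine the model"
else:
    decision = "abandon pairing; form a new hypothesis"
```

Notice that the "check" compares against a prediction made in advance. That is the difference from bare inspect-and-adapt: without the predicted number, seeing a backlog of 5 tells you nothing about whether your model of the work was right.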

While Scrum treats the team's process as a black box, Lean treats it as a transparent process that requires feedback to keep it under control (since we are working on a non-deterministic system). We call this characteristic transparency.

Let me illustrate the difference between "inspect and adapt" and PDCA with a non-software example. I'm a beginning sailor (have been for 30 years ;) ). I'm not really that skilled, but I do know a few things. When I first started sailing, I was told to look at the little ties on the sail to see which way the wind is blowing. As the ties blow around, you adjust the sails to match where the wind is coming from and how strong it is. This is a simple rule of sailing. Note how it is reactive.

Now, pretty soon, you notice how it'd be great if you could get some sense of what the wind is doing before it hits your sail. Unfortunately, you can't see wind. But you can see the effects of the wind. I remember being told to look at the waves. But at first I did not find that very useful. Eventually, it started making sense to me. The primary effect of wind on water is the size and direction of the waves. But these are due to the prevailing wind – what I needed was the more micro changes that were affecting my sailing. Abrupt wind changes do not change the general height or direction of the waves. However, they do set up ripples on the waves. I started wondering if I could see a pattern between how the wind changed and how the ripples appeared. In other words, if there were different ripple patterns approaching me, did that mean the wind was changing? I would make guesses in my mind and see what happened. I would then see how I could respond. Over time I learned to "see wind coming at me." I wasn't merely inspecting and adapting; I was PDCAing. :)

Why is this important? Three reasons I'll just claim now, and follow up with blogs shortly:

  • Transparency allows the deeper use of systemic thinking tools, e.g., "5-whys"
  • Transparency facilitates team learning
  • Transparency enables positive management involvement and self-curtails adverse management interference


Related blogs:

Differences in Beliefs Results in Differences in Approach


About the author | Al Shalloway

Al Shalloway is the founder and CEO of Net Objectives. With 45 years of experience, Al is an industry thought leader in Lean, Kanban, product portfolio management, Scrum and agile design. He helps companies transition to Lean and Agile methods enterprise-wide as well as teaches courses in these areas.


Good post, Alan, but wrong on several counts.

How stories advance from stage-to-stage is not based on Team Members' judgments; it is based on the Team's judgment. The self-organization you hear about in scrum is a Team thing, not an individual thing. AFAIK, scrum is the first (if not only) process that explicitly "inspects and adapts" its process in order to improve it and has a Team Member (the ScrumMaster) whose job is to manage that process. So, a scrum team's process is not a "defined process" in the sense that somebody defined it up-front, but it is a "defined process" at any given time. And, if it doesn't work, it is the ScrumMaster's job (via the Retrospective) to change it.

As you know, a major part of a scrum team's process is the "definition of done" - which could be different from storyotype to storyotype. There is nothing that says that the "definition of done" can't contain process steps (or gates). For example, a sample "def'n of done" (for coding stories) that I have been advocating for a couple of years includes criteria like: test review, design review, interface review, code review, and so on. It could even contain a fairly detailed process, but that seems like overkill to me (but maybe you'd like it :D). Once again, it's not a defined process - but it has the appropriate constraints on it (or rules, as you call them). And we're constantly retrospecting on it, too... and it's visible.

So, basically, great post. Everything you say about why a defined process would be good is good stuff, but it is not a discriminator between scrum and kanban.

Just sayin'

Dan ;-)
Dan Rawsthorne, PhD, CST
Danube Technologies, Inc

Dan: Lean has had the deeper PDCA for over 50 years (with the equivalent of the ScrumMaster – the first-line manager, done correctly), so I'm not quite sure why you say Scrum is the first. Scrum definitely has the definition of done.

I know it's supposed to be about the team, but given the lack of explicitness called for in the process, it devolves into individual judgments.

I guess we need to talk about two types of Scrum.  Some in the Scrum community (I'm referring to CSTs here) say that there is no value stream underneath things - there is no causality that can be explained.   I was writing this blog contrasting Lean to them.  With that belief system no PDCA is possible. PDCA is forward looking, not reactionary to what has happened.

Alan Shalloway, CEO Net Objectives


When I look at Scrum I see a series of interconnected PDCA cycles.

1. At the top is the Release, made up of one or more iterations:

Plan = Release plan,
Do = Each iteration,
Check = after each iteration, we plot our release burndown, and review the iteration with users to get feedback,
Adjust = remove or add scope to release and add emergent requirements learned through the check cycle or other sources.

2. Then there is the sprint:

Plan = Plan,
Do = Develop,
Check = Review progress against a goal,
Adjust = Retrospective where the team discusses how to meet or exceed goal and adjust process.

3. Daily checks in the sprint

a) Iteration tracking
Plan = Sprint burndown chart
Do = code stories
Check = Adjust task hours remaining and Sprint burndown chart.
Adjust = Remove (or add) scope and, if needed, renegotiate sprint goal
Plan = Adjust our burndown chart to indicate the new plan

b) Daily Standup
Do= The actual work I did yesterday is the "Do".
Check = What I say I did yesterday is the check. If as a team member or Scrum Master you remember what I said I would do yesterday and now compare that to what I actually did yesterday, the difference is the Check. It is a spoken (which is close enough to visible for me since the team is the same and in all standups) indicator of hypothesis to reality or target to actual.
Adjust and Plan = What I will do today is a combined Adjustment and Plan except for the first day of the iteration when it is just a plan.

Within the Daily Standup there is a discussion of obstacles. This is a form of waste (usually causing a delay or wait) and it is recorded for all to see. The problem solving tends not to need a formal PDCA template. But the obstacle’s existence and removal is visible to the team.

4. Coding the story, usually described as define, code, test. But really there is:

Plan = convert story to tasks and assign task hours to each task
Do = write the code
Check = unit tests and acceptance testing
Adjust = Task hours remaining and add new tasks that emerged

You could make the argument that everything should be visible against the target and maybe all tasks should be compared against estimates, but in software development this tends to make the focus on hitting a non-meaningful metric of task hours and optimizing locally rather than the whole. So I think Lean would be ok in using task hours remaining and using that in the Sprint Burndown to keep the team focused on the whole.

I agree you can say Scrum doesn’t explicitly use PDCA or the language of Lean. But in my view it encapsulates the principles. It has also been my experience that members of development teams are well schooled in the scientific method. They understand cause and effect. They understand that inspect and adjust has some implied steps of formulating a hypothesis and seeing if it improves the outcome. Lean was developed on the shop floor with employees who were not schooled in the scientific method and who had to be taught problem solving skills. A developer on the other hand solves problems for a living. So the developer doesn’t necessarily need the same framework that a manufacturing worker does. I also agree, even the best problem solvers can get lazy with identifying measures to know if our hypothesis worked, but just as Lean can be implemented poorly, so can Scrum.

At the end of the day, I see many more similarities between Scrum and Lean than differences.

I have been saying Scrum is a manifestation of Lean principles for 5 years.  I have gotten into several public conversations with Ken Schwaber about this who has, ironically, denied it.  The PDCA cycles you mention above, however, are related to results mostly.  I am glad you see the value of the scientific method.  But many Scrum practitioners do not believe you can define your process and test it scientifically.  Sounds like you do.  I'm glad.  I think these two types of Scrum are considerably different.  In this blog, when I refer to Scrum I am referring to the Scrum that treats process as a black box - not the results, but the rules for doing the work.

There is a difference, however, between considering Scrum as a black box and using Lean tools within it, and starting from the foundation and principles of Lean and doing Scrum within that. One of the biggest differences is how one does an enterprise engagement. Scrum with Lean will focus on the team and try to scale – an approach which has had very limited success. Lean would suggest keeping the entire value stream in mind from the beginning. It is not a scaling-to-agility approach – and is one which we have found to be very successful.

Alan Shalloway, CEO Net Objectives

Thanks for a thorough post.

The comment that, paraphrased, scrum is the first process that learns and adapts with a defined leader is insane. But on to your point: here's a non-deterministic system – a military mission. Yes, there is a plan, but it's almost always based on tons of unknowns such as, oh, I don't know, the opposing force. Yet the U.S. Army has a built-in PDCA loop called the After Action Review. We use this tool all the time. It is four questions: 1. What was supposed to happen? 2. What did happen, and why? 3. What can we learn? (weaknesses to improve and successes to sustain) 4. What will we do differently? It's a learning loop. The next mission will be completely different, yet how we execute the work can still be improved.

Jamie Flinchbaugh

I should add that we created a simulation and educational program specifically for teaching PDCA. It's called the Mouse Trap Experience. We've used it for hospital executives, high school technical classes, global SAP teams, and more. It's short, focused learning and lots of fun. Check it out here:


I see a huge difference between the Check (of Shewhart) and the Study (of Deming) – a key difference, in my opinion. One might say the difference between QA that looks for variance vs. continuous improvement such as Toyota's TPS.

I have switched between Check and Study and think I am not true to either necessarily.

Up until recently, I have preferred the Study of Deming. The idea was to learn - therefore study is necessary.  But, once one considers that PDCA/PDSA is a scientific method, then in some ways calling it PDCA tends to emphasize that you are not just studying results (which is implied in systemic thinking anyway) but that you are checking if your actions made the difference you expected.  

So, I wouldn't attribute PDCA or PDSA to either Shewhart or Deming, but rather would use it as a starting point in the conversation.

Alan Shalloway, CEO Net Objectives

