Archive for November, 2010

Scrum vs Waterfall in Five Words


On a CSPO course today, I got the following “question” from the participants:

“Benefits of Scrum vs. waterfall in 5 words”

🙂 I've never had to put it so concisely.

So here’s my try:

[Image: the five words]
Not a statement, but five words nonetheless.

But those weren’t the first five words that came to my mind. The first was:

“Scrum projects kick waterfall’s ass”

Not the most politically correct, though :).

But the whole topic is a bit unfair. It’s like asking about the benefits of shoes vs. gloves for your feet. The real question isn’t whether we should use waterfall or Agile for a software project, because each process is valid in its appropriate context. Plan-driven approaches are a good fit for predictable environments, whereas Agile is a good fit for complex ones. There are also situations where significant up-front planning is simply necessary, because of an excessively long feedback cycle or massive rework costs.

Mike Cohn, in his recent book “Succeeding with Agile”, poses this issue as a balance between “anticipation” and “adaptation”. In every situation, we do at least a little bit of anticipation and a little bit of adaptation. How much of each we do beyond that depends entirely on what we are doing. If we are ordering an expensive server with a couple of months’ delivery time, it probably makes sense to do our homework in advance. The only trouble with traditional thinking is that it does not sufficiently recognize the need for adaptation, because of the expectation that projects are fundamentally predictable (and the claim that we just don’t know enough about them yet).


What would have been your five words?

PDCA Cycles and Scrum


Recently I “rediscovered” the PDCA cycle, made famous by W. Edwards Deming. It consists of Plan, Do, Check and Act phases, and forms the basis for every process improvement cycle. According to Wikipedia:

PDCA (plan–do–check–act) is an iterative four-step problem-solving process typically used in business process improvement. It is also known as the Deming circle, Shewhart cycle, Deming cycle, Deming wheel, control circle or cycle, or plan–do–study–act (PDSA).

PLAN – Establish the objectives and processes necessary to deliver results in accordance with the expected output. By making the expected output the focus, it differs from other techniques in that the completeness and accuracy of the specification is also part of the improvement.

DO – Implement the new processes. Often on a small scale if possible.

CHECK – Measure the new processes and compare the results against the expected results to ascertain any differences.

ACT – Analyze the differences to determine their cause. Each will be part of either one or more of the P-D-C-A steps. Determine where to apply changes that will include improvement. When a pass through these four steps does not result in the need to improve, refine the scope to which PDCA is applied until there is a plan that involves improvement.
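The four steps above can be sketched as a loop. This is a minimal illustration only; all function names and the build-time example are hypothetical, not part of any real process framework:

```python
# Hypothetical PDCA sketch: one pass through the cycle for an
# imagined "reduce build time" improvement experiment.

def plan():
    """PLAN: establish the objective and the expected output."""
    return {"objective": "reduce build time", "expected_minutes": 10}

def do(plan_record):
    """DO: run the new process (ideally on a small scale) and record the result."""
    return {"measured_minutes": 12}

def check(plan_record, result):
    """CHECK: compare the measured result against the expectation."""
    return result["measured_minutes"] - plan_record["expected_minutes"]

def act(gap):
    """ACT: analyze the difference and decide what to do in the next cycle."""
    if gap > 0:
        return "adjust the process and run another cycle"
    return "standardize the improvement"

expectation = plan()
result = do(expectation)
gap = check(expectation, result)
print(act(gap))
```

The key point, which comes up again below, is that CHECK is only possible because PLAN recorded an explicit expectation to compare against.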

Does the above sound familiar? It should, because it describes Scrum’s control process, albeit with different labels. Essentially, Scrum is a “product improvement” cycle, with some additional stuff thrown in. In each Sprint, we establish the objectives and processes to deliver an improved version of a product, do it, check the results against expectations to learn about the direction and way of working, and seek to understand which way to take the product or the development process next.

In fact, I just described two simultaneous cycles, one for product and one for process. And those are not the only ones, since we have daily cycles, release cycles, and many others depending on the actual processes we use.

However, the PDCA cycle gives us some insight into how we could benefit more from Scrum. An integral part of the PDCA cycle is measuring the output (or, more generally, evaluating it against some criteria) and comparing that to an established expectation. Setting such an expectation is something I have not observed in the great majority of Scrum teams I’ve worked with (partly because I haven’t mentioned it, but partly because no one else has, either).
For the knowledgeable reader, setting an expectation would be a no-brainer. And frankly, I’ve “known” it for ages. But its importance has escaped me, even though I’ve used the same principle in many other contexts. Acceptance criteria are one example. Commitment at the beginning of a Sprint is another. They are not always used in the same way, but the foundation is the same.

So why is setting an expectation so important (or at least useful)? The question reminds me of a marketing questionnaire exercise I did back in university. We took an issue, planned questions and ran the questionnaire, only to realize during analysis that we didn’t really know how we could use the results. We had posed the questions in such a way that we could not draw any meaningful conclusions, because the questionnaire lacked key questions or we had phrased them wrong.

It is easy to fall into a similar trap in our retrospectives. We consider our options and agree on some improvement actions. In the next retrospective, we return to the topic and evaluate whether the change has had an impact. Except that we don’t really know. We get wishy-washy feelings one way or another. We failed to establish an expectation and a way to measure results. Such a lack of clarity is a bit demoralizing, and it clouds the actual result. It is much more difficult to get people committed to improvement activities when there is no clear feedback about their effectiveness.

I will do my best to incorporate this insight into all retrospectives I run from now on. I will also try to pose these two questions in any other conversation regarding improvements or changes:

  • What benefit or outcome do we expect out of this improvement/change?
  • How do we measure it?
  • Who is responsible for measuring it?

Ok, three questions.
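To make the idea concrete, the questions above could be captured as a simple record attached to each retrospective improvement action. This is a hypothetical sketch of my own, not part of Scrum itself, and the field names and example values are invented:

```python
from dataclasses import dataclass

# Hypothetical sketch: one record per retrospective improvement action,
# capturing the expected benefit, how it is measured, and who measures it.
@dataclass
class ImprovementAction:
    description: str
    expected_outcome: str  # what benefit or outcome do we expect?
    measurement: str       # how do we measure it?
    owner: str             # who is responsible for measuring it?

action = ImprovementAction(
    description="Adopt a Definition of Done checklist",
    expected_outcome="Fewer defects found after the Sprint review",
    measurement="Count of post-review defects per Sprint",
    owner="Scrum Master",
)
print(action.expected_outcome)
```

Writing the expectation down at the moment the action is agreed is the whole point; the next retrospective then has something concrete to check against.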