Nuno writes: “Why shouldn’t we try to find a model to deal with the unexpected instead? The thing is that if you by some cosmic accident find too many unexpected things along the way, your iteration plan will become a baby monster.”
Well, we do assume change will happen, and most times it does. We also try to build some slack into each iteration so that things can go wrong (because they always do); that’s what velocity is for. But in the end, you’re right: there is a worst-case scenario where everything falls apart, all the high-level estimates turn out wrong, and we have to update all the planning just as if it were a classically managed project, i.e. update the project plan and redo all the resource planning. In my experience, this worst case rarely happens.
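To make the velocity-and-slack idea concrete, here is a minimal sketch (not from the original post; the numbers, the helper name, and the 20% slack factor are all made-up assumptions): average the story points completed in past iterations to get a velocity, then reserve a slack fraction of it for the unexpected.

```python
# Illustrative sketch: plan an iteration's capacity from historical
# velocity, reserving slack for surprises. All figures are hypothetical.

def planned_capacity(completed_points, slack=0.2):
    """Average the points completed in past iterations (the velocity),
    then keep back a slack fraction for things going wrong."""
    velocity = sum(completed_points) / len(completed_points)
    return velocity * (1 - slack)

# Suppose the last three iterations completed 20, 24, and 22 points.
capacity = planned_capacity([20, 24, 22], slack=0.2)
print(round(capacity, 1))  # → 17.6: plan ~17–18 points, leaving ~20% slack
```

The point is not the arithmetic but the habit: commitments are derived from what the team has actually delivered, with room left over, so a few surprises bend the plan instead of breaking it.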
There are some underlying assumptions:
- Features are independent, negotiable, valuable to users, estimable, small and testable (the INVEST criteria)
- We can easily move features from one iteration to another
- We can use the same resources for each iteration
We know that in reality these assumptions are either wrong or, at best, only partially true. However, we can turn that around and make it a roadmap for our business and IT, e.g.:
- Spread the knowledge between BA and developer resources so that it becomes easier to use the same resources for a number of different iterations (hitting different systems)
- Make the technical architecture more modular (there are some pretty standard ways of approaching this for legacy systems)
- Teach the business how to take thin enough client-valued strips of features across systems so that they are small enough for a few to fit into an iteration