The TameFlow Chronologist

Genesis and Evolution of the TameFlow Approach

Actionable Agile Metrics Review - Part 5: Conservation of Flow

This is the fifth post about Dan Vacanti’s book Actionable Agile Metrics for Predictability, An Introduction and how it relates to TameFlow. In the previous installment about Cumulative Flow Diagrams we saw:

  • By representing the “Done” states, the waiting time between any successive steps of the process is captured and portrayed in the Cumulative Flow Diagram.
  • Any violation of the assumption of Little’s Law will become visible on the Cumulative Flow Diagram.
  • You must keep track of the times of arrivals and departures of the single work-items into and out of each state of the process.
  • You must avoid depicting backlogs and projections on a Cumulative Flow Diagram.
  • An MMR (Minimum Marketable Release) is a work package that has been truly committed to.
  • With an MMR, the Cumulative Flow Diagram will look like an S-Curve, due to work being started from zero at the beginning, and then going back to zero at the end.
  • With MMRs you must take into account that the average Cycle Time will be skewed because of the initial batch transfer.
  • The notions of Cycle Time skewing and of the S-Curve effect need to be fully understood and considered if you are using MMRs.
  • With actionable agile metrics, you can run experiments with your process and see what gives the best measured outcome in your context.
  • The horizontal difference between any two lines represents the Approximate Average Cycle Time.
  • Typical patterns and shapes that may develop on a Cumulative Flow Diagram reveal common flow problems and process dysfunctions.
  • The purpose of a Cumulative Flow Diagram is to trigger the right questions about the process, trigger them sooner, and suggest improvement actions.
  • Cumulative Flow Diagrams should not be used to identify bottlenecks.

We will now continue with what Dan has to teach in chapters 7, 8, and 9, which are all about Conservation of Flow.

Chapter 7 Arrivals and Departures

The chapter starts off with the metaphor of an airport where more planes keep landing than taking off. Eventually there would be no more space to park planes on the ground.

The vivid image of the overloaded airport serves to illustrate a general principle. If items enter a process faster than they exit, the process will become overloaded and collapse.

The principle of Conservation of Flow means that work should only be started (on average) at the same rate that it is finished (on average). This principle must be maintained to support predictability.

The simplest way to maintain the principle is to be very careful about Defining Arrivals. You must clearly identify or define an arrival point. In order not to overload the process, you simply need to control how much work is allowed to enter it across that arrival point.

Likewise, care must be exercised in Defining Departures. You must clearly identify or define a departure point. Once a work-item exits the process through the departure point, it no longer counts as WIP.

The rate of arrival can be measured as the number of new items pulled into the system per unit of time. The departure rate can be measured as the number of items that exit the process per unit of time.

In principle you only need to be careful about releasing no more work into the process than the amount of work that you see exiting from the process. The average arrival rate should be approximately equal to the average departure rate.
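As a minimal sketch, the balance check amounts to comparing two averages. The daily counts below are invented for illustration:

```python
# Hypothetical daily counts of items entering and exiting the process.
arrivals   = [3, 2, 4, 3, 3, 2, 4]   # items pulled in per day
departures = [2, 3, 3, 3, 2, 3, 4]  # items finished per day

avg_arrival_rate = sum(arrivals) / len(arrivals)
avg_departure_rate = sum(departures) / len(departures)

# Conservation of Flow: the two averages should be approximately equal.
# The tolerance here is arbitrary; pick one that fits your context.
balanced = abs(avg_arrival_rate - avg_departure_rate) < 0.5
```

With these made-up numbers the two rates come out close enough to count as balanced; in practice you would track them over a rolling window rather than a single week.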

One easy way to check this is to examine the lines of the Arrivals and Departures on a Cumulative Flow Diagram. If the top line and the bottom line of the Cumulative Flow Diagram are diverging, then arrivals exceed departures, cycle times become longer, and predictability becomes impossible.

To establish the conditions for predictability, the arrival rate needs to be balanced with the departure rate. When this happens, it becomes visible on the Cumulative Flow Diagram: the top and bottom lines become parallel.
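One way to operationalize this check, sketched here with invented cumulative counts, is to look at the vertical gap between the top and bottom CFD lines over time. That gap is the WIP; if it keeps growing, the lines are diverging:

```python
# Hypothetical CFD data: cumulative arrivals (top line) and
# cumulative departures (bottom line), sampled daily.
cum_arrivals   = [5, 9, 14, 18, 23, 27]
cum_departures = [2, 5,  7, 10, 12, 14]

# The vertical distance between the two lines is the WIP on each day.
wip = [a - d for a, d in zip(cum_arrivals, cum_departures)]

# If WIP trends upward, the lines are diverging: arrivals exceed departures.
diverging = wip[-1] > wip[0]
```

Parallel lines would show up here as a roughly constant WIP series.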

More generally, you need to ensure that a constant amount of WIP (on average) is maintained throughout the process. Getting a balanced process is the single most important step towards predictability; how WIP is limited is less important than actually limiting it.

The controlling mechanism is typically realized by enforcing a limit on the amount of Work in Progress. This can take the form of a WIP Limit on entry into the process. There are many other ways to limit WIP. In particular, instead of setting a predefined and explicit WIP limit, the TameFlow Approach uses ideas from Theory of Constraints.

The key observation is that every process has a constraint (typically a bottleneck), and that the amount of work that actually passes through the process is limited by the capacity of that constraint. In advanced applications of TameFlow, the amount of work that is allowed to enter into the process is limited to the amount of work that can be handled by the constraint. This limiting mechanism is known as Drum-Buffer-Rope (DBR).
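A minimal sketch of the Drum-Buffer-Rope idea, with invented workstation names and capacities: the constraint's capacity sets the drum beat, and the rope releases only that much work into the process:

```python
# Hypothetical workstation capacities (items per day);
# the smallest one is the constraint.
capacities = {"analysis": 6, "development": 3, "testing": 5}

constraint_capacity = min(capacities.values())  # the drum: 3 items/day

def release_work(backlog, wip, capacity=constraint_capacity):
    """The rope: release only as much work as the constraint can absorb."""
    slots = max(0, capacity - wip)
    return backlog[:slots], backlog[slots:]

# With capacity 3 and 1 item already in progress, only 2 items are released.
released, remaining = release_work(["A", "B", "C", "D", "E"], wip=1)
```

This deliberately omits the buffer that protects the constraint from starvation; the point is only that entry is gated by the constraint's capacity, not by a fixed, arbitrary WIP limit.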

The TameFlow Approach puts a lot of effort into making sure the process is never overloaded with more WIP than can actually be handled by the constraint of the process itself, thus ensuring that the process is stable and predictable.

Chapter 8 Commitments

The arrival point is very important, and not only because it is the threshold that limits the amount of work allowed into the process. The arrival point is also the point where work is effectively committed to. Any work that passes the entry threshold is expected to flow through completely and get finished.

Dan presents another aviation-related metaphor and talks about skydiving. The intent is to illustrate the significance of having a definitive point of commitment. In the case of skydiving, it is the moment you step out of the airplane. After stepping across the point of commitment, there is no turning back.

This is the sense of the second facet of Conservation of Flow: all work that is started must eventually be completed.

In the TameFlow Approach, the scope of work that is committed to is usually more than a single work-item at a time, since TameFlow packages work into Minimum Marketable Releases (MMRs). An MMR is a set of work-items that are all committed to at once and as a whole. Typically, with TameFlow, the MMR is what gets released into the work process. In TameFlow, an MMR that is started must eventually be completed.

Just-in-Time Prioritization and Backlogs

A consequence, and often undervalued advantage, of pull systems is the ability to employ Just-in-Time Prioritization. The implication is a radical change in how requirement backlogs are managed. Any backlog maintenance work, which is typically done in Agile methods, can be considered waste and be discarded entirely.

The rationale for this is that whatever gets into a backlog will never get out. There is no WIP limitation on the growing backlog. The recurring backlog grooming sessions will become longer and longer as new requirements keep on being added to the backlog faster than they can be pulled off the backlog for implementation.

The beauty of a pull system is that prioritization takes place only when there is a signal that there is capacity ready to handle more work.

Furthermore, at that moment, prioritization will replenish the process by selecting only as many items as there is capacity for. Any further prioritization is duplication of work and waste. Any and all prioritization is done only when capacity is available, and only to the extent that can be handled by that capacity.
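The replenishment step can be sketched as: when a capacity signal arrives, rank the candidates and pull exactly that many, leaving everything else untouched. The item names and value scores below are invented:

```python
def replenish(candidates, free_capacity):
    """Just-in-Time Prioritization: rank only when capacity is signaled,
    and pull exactly as many items as there is capacity for."""
    ranked = sorted(candidates, key=lambda item: item["value"], reverse=True)
    return ranked[:free_capacity]

backlog = [
    {"name": "feature-x", "value": 8},
    {"name": "bugfix-y",  "value": 5},
    {"name": "feature-z", "value": 9},
]

# Two slots opened up downstream: pull only the two highest-value items.
pulled = replenish(backlog, free_capacity=2)
```

Ranking by a single "value" number is a simplification; the later sections on Cost of Delay and Throughput Octane describe richer ways to make that selection.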

Notice that the same reasoning applies to TameFlow even when MMRs are used. With MMRs prioritization covers only as many work-items as needed to fill the next MMR. Any further prioritization can be considered as waste.

However, there might be exceptions to this, especially when work-items are high-level epics or larger business projects, and selection and sequencing strategies like the Incremental Funding Method (IFM) are employed. This is not in contradiction with the intent expressed by Dan to avoid redundant and wasteful prioritization work. Sequencing techniques such as IFM try to find the next best item to work on, all the while giving due consideration to the broader context of what other items are being contemplated.

Similarly, when employing Cost of Delay or Throughput Octane, sequencing decisions are made with at least some consideration of more items than those that can strictly fit in the process’s currently signaled capacity. The trick is to consider just enough work-items to be able to select what is needed to replenish the process, while at the same time trying to maximize the economic benefit of that selection.

Just-in-Time Commitment

A pull system both allows and needs Just-in-Time Commitment. Work is committed to only when there is capacity to deal with it. The important aspect of commitment, beyond the team's engagement to the work, is the confidence that the committed-to work-item will really flow through the process (possibly with no delays or interruptions) until it exits. In other words, there is an additional commitment: a commitment to conserving flow.

This kind of commitment is key to predictability, because it is what enables the switch from planning and estimation to probabilistic forecasting based on measurement and observation. The commitment to conservation of flow allows you to express expected Cycle Times for delivery as ranges with a probability distribution. A lot of guesswork disappears, and promises can be kept.
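As a sketch with fabricated historical cycle times, such a forecast is simply a percentile of observed data rather than an estimate:

```python
# Hypothetical historical cycle times (in days) of completed work-items.
cycle_times = sorted([3, 5, 4, 8, 6, 5, 9, 4, 7, 5, 6, 12])

def percentile(data, p):
    """Nearest-rank percentile of already-sorted data."""
    k = max(0, min(len(data) - 1, round(p / 100 * len(data)) - 1))
    return data[k]

# "85% of items finish within N days" -- a range, not a single-point guess.
forecast_85 = percentile(cycle_times, 85)
```

This only works as a forecast if the process is stable, which is exactly why conservation of flow must be upheld first.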

Exceptions to Conservation of Flow

Notwithstanding the intent to uphold the assumptions of Little's Law, there can be exceptions to Conservation of Flow.

For instance, events might happen that induce the abandonment of work-items that are going through the process. Typically there will be well-grounded business reasons for this, relating to the arrival of new information that changes the context and the needs. Any event that disrupts flow should trigger some kind of analysis, and move the organization to find out why it happened. Flow disruptions should always be taken as an opportunity to ask: Why did it happen?

Even more important, from a predictability perspective, is to properly capture these events in the data and metrics. Items that are prematurely discarded from the process should never be removed from the data. Without complete data, any Cumulative Flow Diagram will become invalid, and predictability will suffer.
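One way to honor this precept, sketched here with an invented record layout, is to record an abandonment as a status change rather than deleting the row:

```python
# Hypothetical work-item log: every item is kept, whatever its fate.
work_items = [
    {"id": 1, "arrived": "2023-01-02", "departed": "2023-01-09", "status": "done"},
    {"id": 2, "arrived": "2023-01-03", "departed": "2023-01-05", "status": "abandoned"},
    {"id": 3, "arrived": "2023-01-04", "departed": None,         "status": "in_progress"},
]

def abandon(item, date):
    """Record an abandonment instead of deleting the row,
    so the CFD data stays complete."""
    item["status"] = "abandoned"
    item["departed"] = date  # the item still departs the process on this date
    return item
```

The abandoned item still contributes an arrival and a departure to the CFD; only its status distinguishes it from finished work, which keeps the diagram valid.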

Flow Conditioning

In order to avoid exceptions to the Conservation of Flow as much as possible, Dan presents the idea of Flow Conditioning. Items to be worked on are selected based on their chances of success. In other words: the state of the process should be taken into consideration when making prioritization and pull decisions. This is because the state of the process might influence the chances of work-items going smoothly through it.

For example, if key personnel are on sick leave, it might not be possible to perform certain kinds of work. Then it makes sense to select work-items that do not require that kind of work.
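That selection rule can be sketched as a simple filter. The skill names and backlog items below are invented for illustration:

```python
# Hypothetical skills currently available on the team
# (the database specialist is on leave).
available_skills = {"frontend", "backend", "testing"}

backlog = [
    {"name": "migrate-db",   "skills": {"backend", "dba"}},
    {"name": "new-ui-page",  "skills": {"frontend"}},
    {"name": "api-endpoint", "skills": {"backend", "testing"}},
]

# Flow Conditioning: only pull items whose required skills are available now,
# i.e. whose skill set is a subset of the available skills.
workable = [item for item in backlog if item["skills"] <= available_skills]
```

Items that would get stuck are simply not pulled; they stay in the backlog until the process state changes.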

Conservation of Flow and TameFlow

Dan’s treatment of the Conservation of Flow provides many very valuable ideas that can be applied to extend TameFlow!

The above concepts are all truly important. While Flow Conditioning might already be exercised intuitively by practitioners, creating explicit awareness about it makes it much more applicable. Flow Conditioning will definitely be added to TameFlow as an explicit practice for the management of Operational Flow.

Likewise, the precept of always capturing all data (including data about items that did not proceed through the whole process, were abandoned, or were subject to back-flows) is essential and will be made explicit in TameFlow.

TameFlow uses sophisticated practices to perform root cause analysis of flow disruptions. However, the triggering of such analysis is induced by signals that typically come from Buffer Management. The valuable idea that Dan contributes, performing the analysis for any event that causes a flow disruption, can be seen as a further significant refinement that gives finer granularity and more opportunities to investigate where the process can be improved. And it is the most sensible alternative when moving over to Continuous Flow, where Buffer Management might not be needed.