Calculating Lead Times in a Value Stream Map

I was recently asked a question about the lead time calculations in a Value Stream Map: specifically, how is the lead time calculated?

There are two ways that I have seen of calculating lead times for value stream mapping, and they produce different results.

1) The first is the method in “Learning to See”. Here the lead time is calculated as Lead Time = Inventory/Daily Demand. There is no relationship to the consumption rate at the subsequent station. If the WIP is 1000 pieces and the daily demand is 100 pieces, the lead time is 10 days. The assumption is that the inventory will be used up at the customer’s demand rate. When the subsequent station actually consumes faster than that, this produces an inflated value for lead time and does not reflect the true current state.
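As a minimal sketch, using the numbers from the example above, the “Learning to See” calculation is just one division:

```python
# Lead time per "Learning to See": inventory divided by daily customer demand.
wip = 1000          # pieces of inventory sitting in front of the station
daily_demand = 100  # pieces the customer pulls per day

lead_time_days = wip / daily_demand
print(lead_time_days)  # 10.0 days
```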

2) The second calculates lead time based on Little’s Law. To me, this is more realistic. Here the lead time is calculated as Lead Time = WIP * Cycle Time of the subsequent station. I know there is a lot of confusion regarding this. Think of the lead time calculation in the future tense: how long will the subsequent station take to consume this inventory? With the same example, if the WIP is 1000 pieces and the cycle time at the subsequent station is 60 seconds, the lead time is 60,000 seconds, or 1000 minutes. Assuming 460 available minutes in a day, this equates to 2.17 days (1000/460). In other words, the lead time calculation is based on the consumption rate.

The only thing to keep in mind with the second calculation is the inventory at the last stage (Finished Goods). The lead time there is calculated as Inventory/Daily Requirement, because the customer is going to consume it at the rate of the daily requirement.
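Putting both rules together, a small helper can pick the right formula per buffer; the function name and numbers here are illustrative, not from any standard library:

```python
def buffer_lead_time_days(wip, next_cycle_time_s=None, daily_requirement=None,
                          minutes_per_day=460):
    """Lead time (in days) for one inventory buffer.

    Intermediate buffer: WIP * cycle time of the consuming station.
    Finished goods:      WIP / daily customer requirement.
    """
    if daily_requirement is not None:  # finished goods buffer
        return wip / daily_requirement
    # convert seconds -> minutes -> working days
    return wip * next_cycle_time_s / 60 / minutes_per_day

# Intermediate buffer consumed at 60 s/piece -> about 2.17 days
intermediate = buffer_lead_time_days(1000, next_cycle_time_s=60)
# Finished goods consumed by the customer at 100/day -> 10 days
finished = buffer_lead_time_days(1000, daily_requirement=100)
print(round(intermediate, 2), finished)
```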

In the end, please note: do not let overanalyzing the tool make you overlook the purpose of the tool.

Keep on learning…

Understanding the void

As a data scientist or a quality professional, you should understand the whole picture. Sometimes this means gaining information from what is there as well as from what is not there. I like to call this the void.

A great story that comes to mind regarding this is from a talk by Jeffrey S. Rosenthal. He also tells it in a great article called “I am biased, You are biased”.

“During World War II, the U.S. Air Force wanted to strategically reinforce the hull plating of its fighter planes to better withstand enemy fire — but which parts of the plane should be reinforced? Charts and graphs were carefully constructed, showing the location of bullet holes on returning aircraft. The military then decided to consult a statistician — always a clever move. Professor Abraham Wald immediately realised that those graphs were based on a biased sample: they only included data for the planes which actually returned from battle. The real issue was the location of bullet holes on the planes which were shot down and never made it home. The military wisely followed Wald’s advice, to reinforce those parts of the hull that came back clean and bullet-free — those were the places where any shots would be fatal.”

Keep on learning…