Central Limit Theorem -- Dickeson, November 2001
Looking at Web Speeds
Above is a chart of net running-speed samples from a web offset press, with control limits derived from the Central Limit Theorem. The six-sigma control range is plotted as the UCL and LCL. We predict, with 99.7 percent probability, that the net speed of all runs on this press will fall between 20.16 and 28.86 thousand impressions per hour (iph). Any run faster or slower than these limits is due to special causes that need investigation.
Why is this significant for our industry? We simply have to get a better grip on managing variation. Right now, it manages us.
W. Edwards Deming taught us: Reduce variance to manage productivity and quality. In the chart, the difference between the upper and lower control limits, the range, is 8,700 impressions per hour. Great golly, Molly! To predict with 99.7 percent validity, the best we can say is that this press will produce a job at a net speed somewhere between 20,000 and 29,000 iph. If you're satisfied with 95 percent predictive accuracy, you can narrow the range to 5,800 iph. Cut to 68 percent accuracy and you're within 2,900 iph.
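The three coverage ranges above are simply one, two and three standard deviations on either side of the mean. As a sketch, they can be computed from raw speed samples like this (the sample values below are hypothetical, not the article's actual press data):

```python
import statistics

def control_limits(samples, n_sigma=3):
    """Return (LCL, UCL): the mean plus/minus n_sigma standard deviations."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)  # sample standard deviation
    return mean - n_sigma * sigma, mean + n_sigma * sigma

# Hypothetical net-speed samples, in thousands of impressions per hour (iph)
speeds = [23.1, 25.4, 24.0, 26.2, 22.8, 25.9, 24.6, 23.7, 26.5, 22.9]

for k, coverage in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    lcl, ucl = control_limits(speeds, k)
    print(f"{coverage}: {lcl:.2f} to {ucl:.2f} thousand iph (range {ucl - lcl:.2f})")
```

Each step up in confidence widens the range by two more standard deviations, which is exactly the trade-off the article describes: the 99.7 percent range is three times as wide as the 68 percent range.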
How accurate do you want to be? Pick a number from the statistical table. If we want to manage the variance, we must continuously narrow that range of variance.
What causes the wide variance in "net" speeds of that press? The difference between gross and net press speed is caused by unplanned stops. Just as your net highway speed for a trip is governed by the "rest" stops you have to make, net press speed is governed by press stops. Cut stops and you increase net speed, narrow the range of predictability variance, and reduce quality variance.
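The gross-versus-net relationship is plain arithmetic: net speed is total impressions divided by total elapsed time, stops included. A minimal sketch with invented numbers (the 30,000 iph gross speed and 40-minute stop figure are illustrative, not from the article):

```python
def net_speed_iph(impressions, gross_iph, stop_minutes):
    """Net speed = impressions divided by total elapsed time, including stops."""
    run_hours = impressions / gross_iph
    total_hours = run_hours + stop_minutes / 60.0
    return impressions / total_hours

# Hypothetical run: 100,000 impressions at a gross speed of 30,000 iph,
# with 40 minutes of unplanned stops along the way.
print(net_speed_iph(100_000, 30_000, 40))  # prints 25000.0
```

Forty minutes of stops on this run costs 5,000 iph of net speed; eliminate the stops and net speed rises back to the 30,000 iph gross figure.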
Aren't we doing this now? We're trying—but we're not measuring. Until you measure, you can't control! We're blind men seizing the elephant's trunk and calling the animal a snake. To my knowledge we haven't thought about using CLT to manage variance of net press speed, net binder speeds, makeready hours, materials purchasing or any other activity of print production. Instead of a range, we're using single number "standards" to estimate activity time and (ugh!) price our jobs, schedule production facilities and order tons of expensive materials.
Suppose we wish to optimize workflow—throughput. For the last 15 years we've known that the key to increased profitability is the rate of workflow. The theories of supply-chain integration require valid prediction. It's the speed of materials from receipt at our dock to delivery to our customer that determines our success. How can we make decisions for buffering workflow between processes without statistical support? How do we keep delivery date promises? (Right now we "pull" jobs off equipment to make scheduled deliveries and suffer additional makereadies, don't we?) How do we follow Deming and Walter Shewhart and continuously improve our printed product? We must manage variance.
There are issues beyond the number of sigmas to use to validate workflow predictions. Questions such as, "Is the process capable of producing at predictable speeds at all?" must be answered. Or, "Is the process under control?" Statistical methods exist to make these determinations. We may be shocked to learn the answers. And, perhaps some scholar should consider whether we're justified in using the Central Limit Theorem to establish control limits for net rates of speed and activity time applied to makereadies.
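Both questions have standard statistical answers. A process is "under control" when no sample falls outside the three-sigma limits, and its capability is commonly summarized by the Cp index, the ratio of the specification width to the six-sigma process width. A sketch of both checks (the spec limits and sigma below reuse the article's chart figures; the speed samples are invented):

```python
import statistics

def out_of_control(samples, n_sigma=3):
    """Return samples outside mean +/- n_sigma sigmas: special-cause candidates."""
    mean = statistics.mean(samples)
    sigma = statistics.stdev(samples)
    lcl, ucl = mean - n_sigma * sigma, mean + n_sigma * sigma
    return [x for x in samples if x < lcl or x > ucl]

def capability_cp(sigma, lower_spec, upper_spec):
    """Cp index: the process fits its spec limits only when Cp >= 1."""
    return (upper_spec - lower_spec) / (6 * sigma)

# The article's chart implies sigma = 1.45 thousand iph
# (the 8.70 range divided by six sigmas).
print(capability_cp(1.45, 20.16, 28.86))  # about 1.0: barely capable
```

A Cp of exactly 1.0 means the process just fits inside its limits; Deming's advice to narrow the range is, in these terms, advice to push Cp well above 1.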
We risk our business survival because we don't apply information that's readily available—or can be. The world-famous Jack Welch demanded "Six Sigma" performance predictability from the GE Companies. Should we ask less of printing production? Dare we ask less?
—Roger V. Dickeson
About the Author
Roger Dickeson is a printing productivity consultant based in Tucson, AZ. He can be reached by e-mail at email@example.com, by fax at (520) 903-2295, or on the Web at http://www.prem-associates.com.