Dickenson–When Will Sheetfed Benchmarking Arrive?
The X Company is a successful, well-established commercial printer running both sheetfed and web presses.
Press: five-color Komori sporting an automated data collection device and 28 months of collected data;
Capacity: 24 hours per day, multiplied by the total days in those 28 months: 19,704 hours.
Capacity utilization percentages for the press: Makeready–11.63%; Washing Up–2.76%; Washing Blankets–3.50%; Running–15.46%; Productive Total–33.35%; Non-Productive–66.65%
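The arithmetic behind those figures is worth checking for yourself. A minimal sketch, using the article's own numbers (variable names are mine, not the company's categories verbatim):

```python
# Recomputing the press utilization figures reported above.
# Percentages are taken directly from the article; only the
# variable names and layout are assumptions of this sketch.

CAPACITY_HOURS = 19_704  # 24 h/day over the days in 28 months of data

productive = {
    "Makeready": 11.63,
    "Washing Up": 2.76,
    "Washing Blankets": 3.50,
    "Running": 15.46,
}

productive_total = sum(productive.values())   # 33.35%
non_productive = 100.0 - productive_total     # 66.65%

print(f"Productive total: {productive_total:.2f}%")
print(f"Non-productive:   {non_productive:.2f}%")
print(f"Productive hours: {CAPACITY_HOURS * productive_total / 100:,.0f} h")
```

Note that the four productive categories sum exactly to the 33.35 percent total, and the non-productive balance to 66.65 percent, so the reported figures are internally consistent.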
Reflect on those percentages. Do you believe them? Probably 98 percent of us have no basis for doubt, because we either don’t have comparative data available or we don’t look at it in this way. “It’s just not the way we do things around here.”
Why not? Why don’t we sheetfed printers have benchmark reporting like the web printers have? If we did, then we’d have some perspective for a judgment. We could look at the data, say whether we’re doing better or worse, and look for reasons why. We could make some solidly based decisions. Corrective actions would follow.
If we wanted to know what the average run length of jobs was for a given period of time, we’d look at the benchmark. Then we’d want to know the average running speed—did we do better or worse than 10.82 miph? How many impressions were wasted, on average? Compare that with a 736-waste-impression average per job, or 4.83 percent.
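Those two waste figures also imply an average run length, which is exactly the kind of cross-check benchmarking makes possible. A sketch, assuming the 4.83 percent is waste as a share of gross impressions (the article doesn't define the basis):

```python
# Backing out the implied average run length from the waste figures above.
# Assumption of this sketch: 4.83% means wasted impressions as a
# percentage of gross impressions per job.

waste_per_job = 736   # average wasted impressions per job
waste_pct = 4.83      # waste as a percent of gross impressions

gross_run = waste_per_job / (waste_pct / 100)  # implied gross impressions
net_run = gross_run - waste_per_job            # implied good impressions

print(f"Implied gross run: {gross_run:,.0f} impressions")
print(f"Implied net run:   {net_run:,.0f} impressions")
```

On those assumptions the benchmark implies an average gross run of roughly 15,200 impressions per job, about 14,500 of them good. A shop whose own data collection produced a very different number would know to ask why.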
We can’t do sheetfed press benchmarking because:
a) we don’t collect the data, or
b) we lack the courage to measure and compare, or
c) no one is prepared to receive the data, process it and report the results. We remain content with voluntary “surveys” of a few firms willing to report their “standards,” based on their definitions and suppositions.
There’s a fourth reason: printers’ data collection on operations gets tangled up with the “chargeable hour” gobbledygook of cost-accounting rate-making definitions, complete with their capacity utilization assumptions. It confuses everybody except one person in accounting.