The term “moneyball” is often associated with the Oakland Athletics baseball franchise, which in 2002 assembled a competitive team through rigorous statistical analysis. But how does this concept translate to other industries, especially the apparel industry?
“Moneyball” screen printing is a method for improving productivity and profitability through data collection. The following example describes how we implemented the method to improve a factory without additional investment.
Plant A is an 11-press automatic shop in Central America, with four 12-color/32-station oval automatic presses and seven 18-color automatic carousel presses. Initially, the factory had serious workflow problems and poor performance. Its large workforce was largely unskilled in reading and writing and lacked computer skills. We conducted training to establish appropriate data-collection methods and to place staff in key roles.
Fewer than 25% of Factory A’s orders met the minimum of 288 pieces per unique setup. The factory produced retail products on a six-month lead time from sample to production run, and special effects made up at least 25% of the volume for its most important client.
Six unique data points contributed to the success of Plant A, each weighted as a percentage of the overall result. Waste, at 40%, was the main factor contributing to the success of the model. Usage, at 10%, covered the consumables used daily by each department. Testing, at 5%, covered durability results and non-retail or “B” grade products. Inventory, at 15%, included inventory for samples and pre-printed stock for restocks. Finally, setup times accounted for 20% and “other” for the remaining 10%.
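As a rough sketch, the weighting above can be expressed as a single weighted score. The weights come from the article; the per-category scores and the scoring function itself are hypothetical illustrations:

```python
# Hypothetical weighted scoring of the six data points; weights are from the article.
WEIGHTS = {
    "waste": 0.40,
    "setup_times": 0.20,
    "inventory": 0.15,
    "usage": 0.10,
    "other": 0.10,
    "testing": 0.05,
}

def overall_score(scores: dict) -> float:
    """Combine per-category scores (0-1, higher is better) into one weighted score."""
    return sum(WEIGHTS[k] * scores[k] for k in WEIGHTS)

# Example: a plant strong on waste control but weak on setup times (made-up scores).
example = {"waste": 0.9, "setup_times": 0.5, "inventory": 0.8,
           "usage": 0.7, "other": 0.6, "testing": 0.95}
print(overall_score(example))
```

The point of the weighting is that a big gain on waste moves the needle far more than the same gain on testing.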
“Other” represented several important elements. For example, the print information sheet included the number of screens, number of flashes, equipment requirements, cost and price, image size, and special effects. This data was compiled for accurate planning at any stage of the process to avoid bottlenecks. “Other” also included shipping costs on consumables and machine maintenance.
Factory A collected setup data, including “average screens per day” and “average flashes per day”, for both water-based and plastisol inks. “Time per setup” was also collected, which let us create daily workflow charts for each operator.
For example, if the average number of screens was eight, the average number of flashes was four, and the average setup time was 40 minutes, we could give the operator a map for the day built around those 40 minutes, along with staging areas for the flashes, reducing the need to chase peripheral equipment.
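A minimal sketch of how those averages could drive a daily operator map. The screen, flash, and setup-time figures come from the example above; the eight-hour shift and the planning arithmetic are assumptions:

```python
# Sketch: derive a day's staging plan from running averages (figures from the example).
avg_screens = 8          # average screens per setup
avg_flashes = 4          # average flashes per setup
avg_setup_min = 40       # average minutes per setup
shift_minutes = 8 * 60   # assumed 8-hour shift

setups_per_day = shift_minutes // avg_setup_min
print(setups_per_day)                # 12 setups fit in the shift
print(setups_per_day * avg_screens)  # 96 screens to stage for the day
print(setups_per_day * avg_flashes)  # 48 flash positions to stage
```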
Setup time in Factory A was the time between touching the first screen (the first action on a scheduled map) and handing a finished printed garment to quality control. If quality control rejected the garment for a printing error, the setup clock kept running. If they rejected the garment due to an art or screen error, the style started over, counting against the “rate per item” as a negative. If they rejected the style for aesthetic reasons, it moved on to the quality control report.
The next metric to observe was the setup time per screen. This matters because if a factory runs a long average (e.g., 15 minutes per screen), you can aim for a reduction to 14 minutes per screen, which is easy to achieve and reward. When employees track their own success, it is important to keep them motivated. It is easier to build goals on the per-screen setup time than to set a goal of cutting a job’s setup time from 50 to 40 minutes; employees may not consider that feasible, because the next job might take only 30 minutes anyway due to fewer screens.
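The per-screen goal-setting above can be sketched as follows, assuming the job target simply scales with screen count (the helper name is ours, not the article’s):

```python
# Hypothetical helper: a per-screen goal scales fairly across jobs of different sizes.
def job_target_minutes(num_screens: int, per_screen_goal: float) -> float:
    return num_screens * per_screen_goal

# With a 14 min/screen goal (down from a 15 min/screen average):
print(job_target_minutes(10, 14.0))  # 140.0 -> fair target for a 10-screen job
print(job_target_minutes(2, 14.0))   # 28.0  -> fair target for a 2-screen job
```

A flat “cut 10 minutes per job” goal would be trivial on the 10-screen job and impossible on the 2-screen one; the per-screen basis avoids that.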
We then collected the average setup time per special-effects screen. This is important because it identifies anomalies and helps you see where inefficiencies may hide. Generally, effects take longer to set up than flat inks. Data collection can show that one operator is much faster at setup and approvals on high-density printing, creating an opportunity to cross-train with another operator who is losing time. It also explains “out of range” setups and helps stabilize the data.
Finally, the “setup material” – the number of pieces used to set up a job – was compared against the waste data.
Plant A Analysis
During the initial analysis of Factory A, we measured the setup time at 48 minutes per screen. Over two years of data collection and analysis, the team brought it down to nine minutes per screen. We also reduced 158 setup pieces to a standard of 32 pieces per job, or one turn on the oval machine.
It is also important to note that Plant A’s results were supported by incentives and positive reinforcement. If a team reached its goals, it earned meal tickets for the cafeteria. We gave incentives only to teams, never to individuals unless there was a contest, and contests were held only during training scenarios. For example, to win a restaurant ticket in samples, the team had to produce three setups per day per press running samples. For each day it reached the goal, the team earned a meal ticket. The team could only win if it finished the entire sample season on time with aesthetic efficiency. The team never lost for the duration of the program, so anyone considering such tactics should be prepared to follow through and be proud to offer the rewards.
Goals were always small, easily achievable steps. If a team struggled to reach its goal, we created training based on the collected data. At one point, for example, quality control was frequently rejecting samples on aesthetics, causing delays. We ran a competition in which printers earned individual rewards for completing several aesthetic and tactile challenges, which improved overall efficiency.
Inventory data included “blanks for setups”, “printed replenishments” and “rate per item”. The “rate per item” was essentially an efficiency score showing how many times a style was set up before we produced it. After initial training, the plant operated at a rate of 2.0 per item; after efficiencies and fixes, the factory achieved a rate of 1.15 per item.
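Reading the “rate per item” as total setups divided by unique styles produced (our interpretation of the article’s definition), the improvement can be sketched as:

```python
# Sketch: "rate per item" = how many setups it takes, on average, to produce a style.
def rate_per_item(total_setups: int, unique_styles: int) -> float:
    return total_setups / unique_styles

# Illustrative counts chosen to reproduce the article's figures:
print(rate_per_item(200, 100))  # 2.0  -> after initial training
print(rate_per_item(115, 100))  # 1.15 -> after efficiencies and fixes
```

At 1.0, every style would be set up exactly once, so 1.15 means only 15% of styles needed a repeat setup.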
Using a linear regression model, we forecast sample orders and reduced the rate per item. We used pre-printed inventory to minimize repeat setups and save costs, and we added any excess stock to production orders at the end of the season.
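A minimal sketch of such a forecast using ordinary least squares; the season and order figures below are invented, since the article does not publish its data:

```python
# Sketch: forecast next season's sample orders with simple least-squares regression.
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

seasons = [1, 2, 3, 4]         # past seasons
orders = [120, 150, 170, 200]  # sample orders per season (hypothetical)
slope, intercept = fit_line(seasons, orders)
forecast = slope * 5 + intercept  # predicted orders for season 5
print(round(forecast))            # 225
```

Pre-printing to a forecast like this is what lets the plant absorb sample demand in one setup instead of several.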
The factory created a “B” grade report with 25 data points related to product failure during quality control. We gave each line item a failure percentage representing its contribution to the overall reject rate. Development could then address the core issues and measure success through the durability and B-grade data. For example, the failure rate for lint was 20%; when development added a lint screen, lint failures dropped to less than 6%, which improved overall durability.
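That per-line-item weighting can be sketched as each failure mode’s share of total rejects. The counts below are hypothetical; only lint’s 20% share mirrors the article:

```python
# Sketch: each B-grade failure mode's contribution to the overall reject rate.
failures = {"lint": 20, "pinholes": 30, "ghosting": 25, "other": 25}  # rejected units (hypothetical)
total = sum(failures.values())
shares = {mode: count / total for mode, count in failures.items()}
print(shares["lint"])  # 0.2 -> lint accounts for 20% of rejects, as in the article
```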
The daily ink report showed consumption of every product in grams, which enabled targeted training when we saw misuse. The averages also improved the accuracy of consumable orders.
The daily screen report showed all screens made, moisture levels, broken screens, and reclaimed scrap in grams. We also weighed the emulsion used on HD screens in grams, which verified that the micrometer measurements were accurate.
Finally, the waste data showed the largest increase in the plant’s profitability margin. It offered the most opportunities for behavioral improvement and waste reduction.
To begin waste monitoring, the team took the opportunity of a 1.5-million-unit water-based run with a left-sleeve print. The process required three white ink screens, labeled A, B and C. We standardized the flashes, squeegees, screens and flood bars. We placed 30 kg drums on the floor, labeled A, B and C, and labeled the production ink A, B and C to match. We removed all the bins and told the team that any “junk” ink would go into the corresponding 30 kg receptacle. The ink room took each 30 kg container, removed anything that was truly “dead”, refreshed the rest and put it back on the floor the next day. After 1.5 million units, we had one 30 kg drum full of “dead” ink. This ink was refreshed with a strong additive for that particular product, and we used it for training and contests. Essentially, at the end of a 1.5-million-unit production run with a consumable that could easily create waste, we had zero waste. We then extended the experiment shop-wide.
Waste, measured in grams, was reported daily for inks and screen reclaim. Each week, the target percentage was lowered, primarily through training and on-press handling procedures. After the initial ink training was complete, setup shirts were also reduced: we fixed a reasonable number of setup shirts, included it in operating costs, and any excess was “out of range” and was targeted and reduced. Initially, the plant measured 27% consumable waste; at the end of the program, it was less than 2%.
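The weekly waste figure can be sketched as wasted grams over consumed grams; the gram totals below are invented purely to reproduce the article’s 27% starting point and sub-2% endpoint:

```python
# Sketch: consumable-waste percentage from grams wasted vs. grams consumed.
def waste_pct(wasted_g: float, consumed_g: float) -> float:
    return 100.0 * wasted_g / consumed_g

print(waste_pct(27_000, 100_000))  # 27.0 -> starting point
print(waste_pct(1_800, 100_000))   # 1.8  -> end of program (< 2%)
```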
Factory A Final Results
Plant A saw a significant increase in margin – over 35% – going from unprofitable at the start of the program to well within target at its completion. The factory grew its business from 4 million to 12 million units with the retail brand it had already established, along with further retail brand growth. Finally, it was the first factory outside of Asia to receive a Vendor Speed Award.
Want more screen printing knowledge and information straight from the experts? Click here to register for PRINTING United Digital Experience – Apparel/Screen Decorating Day (Tuesday, October 27). Or, click here for more information on the PRINTING United Digital Experience, including Apparel – Direct-to-Garment/Direct-to-Substrate Day (Monday, November 9). The PRINTING United Digital Experience runs from October 26 through November 12 and is free.
This article appears in the PRINTING United Digital Experience Guide and has been republished here with permission.