After executing and managing supply chain network analysis projects for over 25 years, I have seen the same mistakes made by nearly every first-time logistician, some of which I might be remembering from first-hand experience! I want to highlight some potential pitfalls that will land you squarely in the dreaded fourth quadrant of any ranking system: mistakes that are easy to make through inexperience and that carry a high probability of generating the wrong conclusions from your analysis.
Mistake #1: Assuming raw system data is accurate and representative of your network.
There is a misconception that best-of-breed supply chain modeling software only requires data that is cleaned and verified at a high level. The truth is that the data needs to be pristine and validated at as granular a level as possible for accurate modeling. Too often, novice modelers assume that data elements such as shipment profiles, equipment types, Incoterms, order profiles, and UOM conversions will not affect the solution because the business is managed from the same dataset; in reality, the software will always find unchecked data and exploit it to produce the wrong results. An axiom I share with my clients: each day of deeper data validation saves four days on the back end of a project.
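That granular validation can start with a simple automated sweep before any data is loaded. The sketch below is illustrative only: the field names (`uom`, `qty`, `equipment_type`) and the conversion and equipment tables are hypothetical placeholders, not taken from any particular modeling tool.

```python
# Hypothetical pre-load validation sweep over raw shipment records.
# The reference tables below are assumptions for illustration.
KNOWN_EQUIPMENT = {"53FT_DRY", "53FT_REEFER", "LTL"}
UOM_TO_EACHES = {"EA": 1, "CS": 12, "PL": 720}  # assumed UOM conversion table

def validate_shipments(shipments):
    """Return a list of (row_index, problem) tuples; an empty list means clean."""
    issues = []
    for i, s in enumerate(shipments):
        if s.get("uom") not in UOM_TO_EACHES:
            issues.append((i, f"unknown UOM {s.get('uom')!r}"))
        if s.get("qty", 0) <= 0:
            issues.append((i, "non-positive quantity"))
        if s.get("equipment_type") not in KNOWN_EQUIPMENT:
            issues.append((i, f"unknown equipment {s.get('equipment_type')!r}"))
    return issues
```

Running a sweep like this on every extract, rather than once at project kickoff, is what turns "a day of validation" into "four days saved" later.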
Mistake #2: Using excessive scaling to get baseline costs to match financials.
I am not sure whether inexperience or tight deadlines drives this behavior more often, but excessive scaling is over-used by novices. For example, if the model freight is half the actual freight spend in the accounting ledgers, do not reconcile the gap by simply scaling up the model freight. Instead, perform a root-cause analysis so that one of two things happens: either a reasonable correction is made to a model input, or a reasonable explanation is provided as to why the actual financials are not “apples-to-apples” with your model, and to what extent. Extreme scaling (+/- 50%) is not a viable correction method because it masks an underlying issue that will likely invalidate your model results.
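A guardrail for this is easy to encode: compute the scale factor between model and ledger, and only accept scaling when the gap is small. The sketch below is a minimal illustration; the 10% tolerance is my own assumed threshold, not a standard.

```python
def reconcile_freight(model_spend, actual_spend, tolerance=0.10):
    """Compare modeled freight spend to the accounting ledger.

    Returns (scale_factor, ok). Small gaps (within `tolerance`) may
    reasonably be closed by scaling; larger gaps call for root-cause
    analysis of the model inputs instead. The 10% tolerance is an
    illustrative assumption.
    """
    scale = actual_spend / model_spend
    ok = abs(scale - 1.0) <= tolerance
    return scale, ok
```

In the example from the text, a model at half the actual spend yields a scale factor of 2.0 and fails the check, which is the cue to investigate rather than scale.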
Mistake #3: Setting an over-aggressive timeline.
A reasonable time frame to draw proper conclusions and implementable results from a full-scale network analysis is typically eight to ten weeks for a single person once all (raw) data is provided. Regardless of the time frame or the type of problem, network modeling always takes longer than expected if due diligence is conducted. Everyone underestimates the time it takes to clean and validate data, to let the model solve, to perform proper error diagnosis, and to format the results in a professional manner. A methodical, structured approach to network analysis is therefore required; otherwise, clients find they have non-actionable results and spend the weeks they thought they were saving re-working the effort.
Mistake #4: Trying to answer all queries and strategies with the same model structure.
It is a misconception to believe that an end-to-end baseline model is the last baseline model needed for the foreseeable future, i.e., that one total baseline model will serve as the basis of comparison for all optimization scenarios to come. The opposite is true: each strategy and scenario to be tested will require a different version of the baseline model to compare against the optimization models. The level of detail and/or aggregation in each model element needs to vary depending on what the modeler is challenging in the supply chain. A comparable and implementable solution that will pass a rigorous due-diligence test requires a baseline model with the same level of detail and aggregation as the optimization scenario (not a generalized overall model).
Mistake #5: Assuming that if the model solves, the locations selected are correct and the savings potential is accurate.
Best-of-breed modeling software utilizes PhD-level mathematics, so to a novice user it can feel like a “black box” that simply spits out an answer. Models lack common sense and creativity, so without proper validation a “solved” model can be mistaken for a solution. A novice modeler needs to fight the initial feeling of relief that the model finally solved, and the temptation to accept that the solution must be correct, without combing through the results reports to understand HOW and WHY the model generated that solution. I have evaluated too many models that supply a false-positive solution due to one low freight rate on a new lane or a zero variable cost at a warehouse. Furthermore, novice modelers do not grasp that the model provides the best solution only among the viable alternatives it is given. If the set-up of a candidate warehouse is not 100 percent complete (for example, missing lane rates into or out of that location, or missing eligibilities), the solver simply ignores it, and a savings opportunity is missed unless the modeler further challenges the solution.
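The two false-positive patterns named above (one suspiciously low lane rate, a zero variable cost at a warehouse) can be screened for mechanically before anyone trusts the solved model. The sketch below is a hypothetical audit; the input shapes and the 50%-of-average rate threshold are my own illustrative assumptions.

```python
def audit_solution(lanes, warehouses, rate_floor_ratio=0.5):
    """Flag suspicious inputs that can produce a false-positive 'solved' model.

    lanes: list of dicts with 'lane' and 'rate_per_mile'.
    warehouses: list of dicts with 'name' and 'variable_cost'.
    Flags zero warehouse variable costs and lane rates far below the
    network average (the ratio threshold is an illustrative assumption).
    """
    flags = []
    rates = [l["rate_per_mile"] for l in lanes]
    avg_rate = sum(rates) / len(rates)
    for l in lanes:
        if l["rate_per_mile"] < rate_floor_ratio * avg_rate:
            flags.append(f"suspiciously low rate on {l['lane']}")
    for w in warehouses:
        if w["variable_cost"] == 0:
            flags.append(f"zero variable cost at {w['name']}")
    return flags
```

An empty flag list does not prove the solution is right, but a non-empty one tells you exactly where to start asking HOW and WHY.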
Mistake #6: Under- or over-aggregating the raw data into model elements.
In 2020, model creation is still more art than science; there is a delicate balance of aggregation and detail needed for each scenario to be tested. Typically, loading SKU-level system data directly into network modeling software without aggregation is a dead end: most computers will choke on the billions of variables or lack the processing power needed to solve so massive a problem. Alternatively, over-aggregating all SKUs into one product group will not allow the model to make accurate trade-offs against real-world costs, nor allow the modeler to differentiate the real-world constraints the model needs to understand to generate an implementable solution. The proper level of aggregation always depends on the question(s) being asked of the model and the reporting level needed to verify the result.
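A middle ground between SKU-level detail and one monolithic product group is to roll SKUs up by the attributes that actually drive cost trade-offs. The sketch below groups by temperature class and a velocity band; both the attributes and the 10,000-unit velocity cutoff are hypothetical choices for illustration, since the right grouping depends on the question being asked of the model.

```python
def aggregate_skus(skus):
    """Roll SKUs up into product groups by cost-driving attributes.

    skus: list of dicts with 'sku', 'temp_class', and 'annual_units'.
    Returns a dict mapping (temp_class, velocity_band) -> total annual units.
    The attributes and the velocity cutoff are illustrative assumptions.
    """
    groups = {}
    for s in skus:
        band = "fast" if s["annual_units"] >= 10_000 else "slow"
        key = (s["temp_class"], band)
        groups[key] = groups.get(key, 0) + s["annual_units"]
    return groups
```

A grouping like this keeps the variable count manageable while still letting the model trade off, say, frozen versus ambient handling costs, which a single all-SKU product group cannot do.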
— Craig Vorse, St. Onge Company