St. Onge Company Links Supply Chain Blog
Strengthening your supply chain one link at a time.
 

Simulation – The Good, the Bad and the…Useful!

Ah yes… simulation! We’ve all heard of it. Tales of its magical powers and horrific failures have been told by engineers for decades. If you were to ask a group of engineers what they think of when they hear the word “simulation,” you are likely to hear things like “expensive,” “time consuming,” “complicated,” “confusing,” “amazing,” “powerful,” “enlightening.” The truth is that simulation can be all of these things. However, as is the case with most things in life, how we approach a situation has a huge impact on its outcome.

Discrete Event, Agent-Based, Monte Carlo, Oh My!

The word “simulation,” according to the dictionary, means “the imitation of a situation or process.” Within the confines of this broad definition there are many different approaches or methods that can be used to model a wide range of situations. Within each approach there is an even wider range of software packages specializing in these methods, with a fair amount of overlap among many of them. The four major methods of simulation typically encountered are:

  • Monte Carlo/Risk analysis
  • Agent-Based
  • Discrete Event
  • System Dynamics
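
To give a flavor of the first method, a Monte Carlo model simply samples uncertain inputs many times and tallies the outcomes. Below is a minimal sketch in Python; the inventory figures, distributions, and parameter values are invented for illustration, not taken from any real operation:

```python
import random

def stockout_probability(n_trials=10_000, seed=42):
    """Monte Carlo sketch: estimate the chance that demand during a
    replenishment lead time exceeds on-hand inventory of 500 units.
    All distributions and parameters are illustrative assumptions."""
    rng = random.Random(seed)
    on_hand = 500
    stockouts = 0
    for _ in range(n_trials):
        lead_time_days = rng.triangular(3, 10, 5)   # assumed lead-time spread (days)
        daily_demand = max(rng.gauss(60, 15), 0)    # assumed units demanded per day
        if lead_time_days * daily_demand > on_hand:
            stockouts += 1
    return stockouts / n_trials
```

Running many trials turns vague “what if demand spikes?” worries into an estimated probability, which is the core value of the risk-analysis method.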

Choosing a good method to model a given problem depends on many things, such as modeler skill within a given software package, visualization requirements, level of detail needed, required KPIs, available data, etc. Don’t be misled by software salespersons who would have us believe their package is simply “better” than others with no context, as this typically isn’t true. Additionally, don’t confuse “simulation” software with “optimization” software; they are different efforts with overlapping worlds, but that is a topic for another time.

I like to say there are at least 100 different ways to model any problem… but 80 of them aren’t very good or robust and will likely result in undesirable outcomes regarding timeline, cost, and results. This statement gets exponentially truer as the problems being modeled grow in size and complexity. If the problem you are trying to model is small or focused enough (think of the demo models we’ve all seen software vendors give), then robustness, scalability, and editability don’t matter, so all 100 ways are fine. When it comes to big models, however, identifying and implementing one of the 20 good ways to attack your problem is one of the hardest parts of any simulation modeling effort. This is where experience and good communication come into play.

Taking on a system-wide, all-encompassing simulation effort as an inexperienced modeler, regardless of the software being used, is usually a recipe for failure. Start small and focused, and increase the complexity and scale of your models slowly as your skills evolve. If this isn’t a possibility, partnering with a vetted professional who can show you multiple examples of successful projects of similar size and complexity in your desired domain is the best path forward for success.

Fail to plan, plan to fail

When a person first considers engaging in a simulation effort, one of the first questions they should ask themselves is, “What is the potential problem I’m trying to simulate?” This sounds like a no-brainer, but surprisingly many people can’t answer this simple question. Said a different way, “What questions do I expect simulation to answer that statistical engineering approaches, such as spreadsheet models, can’t help me with?” The reason for asking these simple questions up front is that people often naively assume “the simulation” is a powerful AI engine, akin to what Tony Stark has in Iron Man: feed it incomplete summary data, in any random form available, and it will quickly regurgitate rich and wonderful information we can then use to solve our very nuanced and complicated problems. “The simulation” will handle everything for me, so I don’t have to think, right? That would be amazing, but unfortunately it simply isn’t true.

Let’s take, for example, a model encompassing the receiving-to-shipping throughput capability of a conveyorized material handling system with incorporated value-added services staffed by people. The simulation process in this application, when properly applied, forces a modeler to step through the system at a level of detail far below what aggregated annual summary data, theoretical equipment rates, or monthly average throughputs in a spreadsheet require. It’s during this transactional, step-by-step thinking, as a person attempts to build a model that imitates the real-world system as designed, that all of the nuances of an operation begin to show themselves. Some nuances that repeatedly show up in large model environments are:

  • Choreography/Timing: We’ve all heard stories of equipment vendors proclaiming throughput rates that installed systems can’t seem to get close to realizing, right? Well, achievable rates have a lot to do with accumulation buffers between processes, along with balance between upstream and downstream processes. Things like physical conveyor plumbing, inbound/outbound truck schedules, staffing levels, and recirculation can all impact choreography such that a system as designed can’t always achieve the balance necessary to make use of available equipment capacity.
  • Congestion: Congestion can be very difficult to quantify, especially when it comes to people. The reason is that no two people will handle a given congestion situation the same way in the real world, and without standard operating procedures defining exactly what people are expected to do in various situations, quantifying the impacts in a modeling environment is difficult and should be approached with very clear objectives and clearly defined processes.
  • Dwell times: Oftentimes we model a situation that adds new or additional equipment to an existing operation, in an effort to understand impacts to throughputs, utilizations, etc. In these efforts a lot of value can be gained by analyzing as much real-world historical transactional data as is available for a given process. Hidden inside these time-stamped transactions are current-day dwell times between steps in the process. Understanding these dwells, and considering whether our newly proposed system can take credit for improving them, is a necessary step to ensuring that our simulated results represent the operational realities present in the real-world environment.
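
The dwell-time mining described in that last bullet can start very simply: sort time-stamped transactions by order and time, then take the difference between consecutive steps. A minimal sketch follows; the field names, process steps, and timestamps are made up for illustration and would come from your own system’s extract:

```python
from datetime import datetime

# Toy transactional extract: (order_id, process_step, timestamp).
# Orders, steps, and times are illustrative assumptions.
events = [
    ("A1", "receive", "2023-05-01 08:00"),
    ("A1", "putaway", "2023-05-01 09:45"),
    ("A1", "pick",    "2023-05-02 13:30"),
    ("B2", "receive", "2023-05-01 08:10"),
    ("B2", "putaway", "2023-05-01 11:05"),
]

def dwell_minutes(events):
    """Return (order, step_pair, minutes) for each consecutive step pair."""
    fmt = "%Y-%m-%d %H:%M"
    by_order = {}
    # ISO-style timestamps sort chronologically as strings.
    for order, step, ts in sorted(events, key=lambda e: (e[0], e[2])):
        by_order.setdefault(order, []).append((step, datetime.strptime(ts, fmt)))
    dwells = []
    for order, steps in by_order.items():
        for (s1, t1), (s2, t2) in zip(steps, steps[1:]):
            dwells.append((order, f"{s1}->{s2}", (t2 - t1).total_seconds() / 60))
    return dwells
```

Plotting the distribution of these dwells per step pair, rather than just their averages, is usually where the surprises hide.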

A good practice is to write an operational system description document about the process you plan to simulate. Try to include as much detail as possible when writing it out. Doing this will oftentimes lead you to questions, which can be documented and presented to subject matter experts as required. When complete, this document can also serve as a guide for how the model is to work and what it will answer, leaving less chance of misunderstanding among those involved at the end of the process.

Software is just a tool; the modelers hold the keys to success

When rationalizing which software should be used to model your process, remember this is only part of the equation. For the Discrete Event method, software packages such as FlexSim, AnyLogic, Simio, Arena, and Simul8 are all perfectly capable tools in the right hands, and there are countless others.
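
Under the hood, all of these packages maintain an event calendar: pop the next event, advance the clock, update state, and schedule future events. A toy sketch of those mechanics is below, using the upstream/buffer/downstream choreography discussed earlier; the rates, capacities, and names are illustrative assumptions, not any vendor’s API:

```python
import heapq

def toy_des(n_items=10, upstream_s=12.0, downstream_s=20.0, buffer_cap=2):
    """Minimal event-calendar loop: a fast upstream feeder, a finite
    accumulation buffer, and a slower downstream station. Upstream blocks
    when the buffer is full, so its effective rate falls below its rated
    speed. Returns the makespan in seconds."""
    clock = 0.0
    buffer = 0            # items waiting in accumulation
    busy = False          # downstream station working?
    blocked = False       # finished item stuck on the upstream machine?
    produced = 0
    events = [(upstream_s, "up_done")]          # (time, event) calendar
    while events:
        clock, ev = heapq.heappop(events)
        if ev == "up_done":
            produced += 1
            if not busy:                        # hand item straight downstream
                busy = True
                heapq.heappush(events, (clock + downstream_s, "down_done"))
                if produced < n_items:
                    heapq.heappush(events, (clock + upstream_s, "up_done"))
            elif buffer < buffer_cap:           # drop item into the buffer
                buffer += 1
                if produced < n_items:
                    heapq.heappush(events, (clock + upstream_s, "up_done"))
            else:                               # buffer full: upstream stalls
                blocked = True
        else:  # "down_done"
            if buffer > 0:                      # pull next item from buffer
                buffer -= 1
                heapq.heappush(events, (clock + downstream_s, "down_done"))
                if blocked:                     # freed slot: held item moves in
                    buffer += 1
                    blocked = False
                    if produced < n_items:
                        heapq.heappush(events, (clock + upstream_s, "up_done"))
            else:
                busy = False
    return clock
```

Even this tiny model shows why a rated upstream speed is not an achievable system rate: with a small buffer ahead of a slower downstream process, the feeder spends part of its time blocked. Commercial packages wrap exactly this loop in visualization, statistics collection, and modeling conveniences.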

Typically, a modeler chooses to specialize in a given software package, as more complicated problems require much deeper knowledge and command of the software to model and communicate effectively. That knowledge and capability come with practice and exposure over time.

Think of simulation modeling capability as a language. A five-year-old child can speak a language well enough to communicate the basics to their parents, but their vocabulary, and hence their ability to express themselves, is rather limited. By contrast, a scholar-level poet or songwriter has command of a vocabulary so rich that they can express themselves in ways that are deeply impactful and seemingly magical to those around them… and they do it using the same 26 letters of the alphabet that the child knows. Modeling is no different. The same language analogy applies to jumping between software packages. Imagine a record executive telling their beloved songwriter, “I love your work… but I need you to write all the songs on the next album in Latin.” Uh oh.

Simulation can be a great tool when properly applied, but don’t worry, everyone will misapply it many times as they gain knowledge. Be resilient, over-communicate, work hard, and try to simplify your models as much as possible without compromising the integrity of the very problem you set out to measure and study in the first place. Above all, never give up, stay humble and open-minded, and always remember what statistician George E. P. Box used to say: “All models are wrong, but some are useful.”
 
—Jim Counts, St. Onge Company
 
 

