
3.4 PER MILLION

The Significance of Simulation

Tapping into a powerful tool
to analyze and improve processes

by T.M. Kubiak

The certified Six Sigma Master Black Belt (MBB) body of knowledge, introduced in 2010, included the subject of simulation and, specifically, digital process simulation. This is refreshing because this tool is often omitted from lean Six Sigma training.

Simulation is the act of modeling the characteristics or behaviors of a physical or abstract system. The simulation may be conducted in various ways, ranging from a simple manual representation—such as an exercise taught in a lean Six Sigma class—to a highly complex mathematical model running on a high-speed computer system.

Simulation is a good alternative when process problems are too difficult to solve analytically or when you want to study a problem as a system. This does not mean that mathematics must be excluded from simulations.

On the contrary, many simulations require a host of mathematical formulas to describe system complexities. But simulations can help you gain a better understanding of the system or component interactions taking place and rapidly explore a variety of system configurations.

From a lean Six Sigma perspective, when you think of simulation, you are really thinking of process modeling. Process modeling allows you to build accurate and detailed graphical computer representations or models of chemical, physical, biological or technical processes.

Simulation basics

There are two types of simulations: discrete and continuous. Discrete simulations are also known as event-based simulations, which move through time by advancing from event to event. Examples of such events include the arrival or delivery of an order, the assembly of an order or the rejection of a product at final inspection.

With computer-based simulations, each event can be color-coded to increase visibility, and statistics by event can be captured and analyzed. Further, the properties of events (that is, arrivals and departures) can be defined statistically through probability distributions or through empirical distributions built from the data collected.
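As a minimal sketch of that last point (in Python, with illustrative numbers that are not from the article), the interarrival times feeding an arrival event could be generated either from a fitted probability distribution or by resampling observed data:

```python
import random

# Option 1: define interarrival times with a probability distribution
# (exponential with an assumed mean of 5 minutes).
def interarrival_from_distribution(mean_minutes=5.0):
    return random.expovariate(1.0 / mean_minutes)

# Option 2: define interarrival times empirically by resampling gaps
# actually observed on the process (illustrative values only).
observed_gaps = [3.2, 4.8, 6.1, 2.7, 5.5, 4.0, 7.3]

def interarrival_from_data():
    return random.choice(observed_gaps)

# Either function can feed the arrival events of a discrete-event model.
print(interarrival_from_distribution(), interarrival_from_data())
```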

Continuous simulations are time-based. That is, they advance through time incrementally. Such simulations are governed by sets of mathematical formulas that describe the behavior of the system and in which the parameters of interest change continuously with time. During the simulation, output from the equations is typically plotted to facilitate decision-making. As the time increment becomes smaller, the plots become more precise. Continuous simulations are often used when modeling electronic components or systems.
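To make the time-based idea concrete, here is a hedged sketch (in Python, using an assumed resistor-capacitor charging circuit rather than any example from the article) that steps the governing equation forward with a fixed time increment. Shrinking the increment moves the simulated output closer to the exact solution, which is why smaller steps yield more precise results:

```python
import math

def simulate_rc(dt, t_end=1e-3, r=1000.0, c=1e-6, v_source=5.0):
    """Step an RC charging circuit forward in fixed time increments (Euler)."""
    v = 0.0   # capacitor voltage starts at zero
    t = 0.0
    while t < t_end:
        # Governing equation: dv/dt = (v_source - v) / (r * c)
        v += dt * (v_source - v) / (r * c)
        t += dt
    return v

# Exact solution after one time constant, for comparison.
exact = 5.0 * (1.0 - math.exp(-1e-3 / (1000.0 * 1e-6)))
for dt in (1e-3, 1e-4, 1e-5):
    print(f"dt={dt:g}  simulated={simulate_rc(dt):.4f}  exact={exact:.4f}")
```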

Simulation has been around almost as long as computers. In the early years, simulation had to be written in a general-purpose programming language (for example, FORTRAN). Such simulations required a developer to track every aspect of the simulation. From the mid-1950s through the 1970s, simulation languages were created that made it easier to develop simulations and track the various aspects.

In the 1980s, graphical simulations emerged. This evolution of simulation software permitted developers to create process simulations simply by placing and moving icons on computer screens to represent analogous real-world entities. Such entities could be linked in any manner desired, and statistics were automatically captured as widgets were processed.

Advantages and disadvantages

One advantage of simulation models is that you can start at a high level of a process and build or expand the model as time, money and effort permit. Another is that simulations are run off-line, so you can run simulation after simulation without affecting the actual process.

Consequently, there are no adverse effects on the production, manufacturing or service environments. Also, the price of simulation (also known as process-modeling) software has decreased sharply, and the software now runs on a desktop or laptop computer, creating significant cost advantages.

The technical sophistication of graphical simulations and the capabilities they provide have progressed to the point that they have become the proverbial double-edged sword. The advantage—and disadvantage—is that processes can be modeled easily and swiftly. Anyone with a little skill and knowledge can build a sophisticated process model, run it and implement changes based on the results.

This is unfortunate. Decades ago, two types of skills were required to develop and use simulations. The first was the skill to understand and write the simulation in an archaic non-graphical, non-user-friendly interface language. (Such was my luck!)

The second was the skill to generate and interpret the statistical results. You also had to validate and verify the simulation. Today, these steps can be easily omitted, often out of ignorance, simply because building the model has become so easy.

Queuing problems

One of the earliest and most often-used applications of simulation is with queuing models. Some of the simpler queuing models have closed-form analytical solutions.

But change a few parameters, and these models can quickly become mathematically intractable. Consequently, simulation becomes the last resort (see the sketch following this list), particularly when:

  • Arrivals do not follow the Poisson distribution.
  • Departures do not follow the exponential or Erlang distributions.
  • Limitations exist on queue lengths or other special conditions exist (for example, preemption, balking, reneging or jockeying) that are difficult to incorporate in analytical models.
  • There is the need to study transient behavior. In fact, even the simplest queuing model, M/M/1 (single-queue, single-server with an infinite buffer), has an extremely complex mathematical solution when steady-state conditions are not assumed.
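As a hedged illustration of these conditions (a Python sketch with assumed rates and rules, not a model taken from the article), consider a single-server queue in which service times are lognormal rather than exponential and arriving customers balk when the line is too long. These conditions quickly defeat closed-form analysis but are easy to simulate:

```python
import heapq
import random

def simulate_balking_queue(sim_time=10_000.0, mean_interarrival=4.0,
                           balk_limit=5, seed=1):
    """Single-server queue in which an arriving customer balks (leaves
    immediately) on finding balk_limit or more customers in the system.
    Service times are lognormal, so no simple closed form applies."""
    random.seed(seed)
    in_system = balked = served = 0
    events = [(random.expovariate(1.0 / mean_interarrival), "arrival")]
    while events:
        clock, kind = heapq.heappop(events)
        if clock > sim_time:
            break
        if kind == "arrival":
            # Schedule the next arrival before handling this one.
            heapq.heappush(events,
                           (clock + random.expovariate(1.0 / mean_interarrival), "arrival"))
            if in_system >= balk_limit:
                balked += 1                  # customer refuses to join
            else:
                in_system += 1
                if in_system == 1:           # server was idle; begin service
                    heapq.heappush(events,
                                   (clock + random.lognormvariate(1.0, 0.5), "departure"))
        else:                                # departure (service completion)
            in_system -= 1
            served += 1
            if in_system > 0:                # start the next customer at once
                heapq.heappush(events,
                               (clock + random.lognormvariate(1.0, 0.5), "departure"))
    return served, balked

served, balked = simulate_balking_queue()
print(f"served={served}  balked={balked}")
```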

To compare and evaluate queuing simulations, it is important to compute several meaningful queuing performance measurements, including:

  • The probability that there are no customers in the system, usually designated as P0.
  • The probability that there are customers in the system, usually designated as Pn.
  • The probability that an arriving customer will need to wait for service, usually designated as Pw.
  • The average number of customers in the system, usually designated as L. It is also important to compute the variance of L, V(L).
  • The average number of customers in the queue, usually designated as Lq. It is also important to compute the variance of Lq, V(Lq). Note that in queuing theory, the queue length does not include the customer being served; customers in service are counted in L, the number in the system.
  • The average time a customer spends in the system, usually designated as W. It is also important to compute the variance of W, V(W).
  • The average time a customer spends in the queue, usually designated as Wq. It is also important to compute the variance of Wq, V(Wq).
  • The utilization factor for each server, usually designated as ρ. This is the fraction of time that each server is busy.

Also, it is useful to capture minimum and maximum values whenever possible. Maximum values, in particular, help provide an understanding of how much stress is being placed on a system.

The preceding statistics not only help compare models, but they also can be useful in determining steady-state conditions. For basic analytical models, these statistics can be easily computed, but when using simulation, the statistics must be computed dynamically during the simulation.
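One way such statistics might be accumulated dynamically (a simplified M/M/1 sketch in Python, with an assumed arrival rate of 0.8 and service rate of 1.0) is to update time-weighted totals at every arrival and departure event, and then compare the results with the analytical steady-state values:

```python
import random

def mm1_measures(lam=0.8, mu=1.0, sim_time=50_000.0, seed=2):
    """Estimate L, W, rho and P0 for an M/M/1 queue by updating
    time-weighted totals at every arrival and departure event."""
    random.seed(seed)
    clock, n = 0.0, 0                     # current time, customers in system
    area_n = empty_time = 0.0             # integral of n(t); time with n == 0
    arrival_times, system_times = [], []  # per-customer bookkeeping (FIFO)
    next_arrival = random.expovariate(lam)
    next_departure = float("inf")
    while clock < sim_time:
        t_next = min(next_arrival, next_departure)
        area_n += n * (t_next - clock)    # accumulate time-weighted statistics
        if n == 0:
            empty_time += t_next - clock
        clock = t_next
        if next_arrival <= next_departure:            # arrival event
            arrival_times.append(clock)
            n += 1
            if n == 1:
                next_departure = clock + random.expovariate(mu)
            next_arrival = clock + random.expovariate(lam)
        else:                                         # departure event
            n -= 1
            system_times.append(clock - arrival_times.pop(0))
            next_departure = (clock + random.expovariate(mu)
                              if n > 0 else float("inf"))
    L = area_n / clock
    W = sum(system_times) / len(system_times)
    rho = 1.0 - empty_time / clock        # fraction of time the server is busy
    return L, W, rho, 1.0 - rho

L, W, rho, p0 = mm1_measures()
print(f"simulated : L={L:.2f}  W={W:.2f}  rho={rho:.2f}  P0={p0:.2f}")
print(f"analytical: L={0.8 / 0.2:.2f}  W={1.0 / 0.2:.2f}  rho=0.80  P0=0.20")
```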

Fortunately, most software packages available today compute these measures automatically.

A tool you need

Simulation has added significant capability to the lean Six Sigma professional’s arsenal of tools and techniques. Simulation is highly versatile, and its application is limited only by the user’s imagination.

It has been used across all business sectors, including manufacturing, retail, pharmaceutical, banking, government, military and education. It has also proved successful and well-suited for use in transactional environments.

Some MBBs and other lean Six Sigma professionals have said that simulation lacks glamour. Perhaps this perception stems from its contrast with direct analytical solutions.

However, you should keep in mind that the selection of tools in project applications should not be a matter of preference, but instead a matter of need.

Don’t let personal preference for a tool or technique stand in the way of using another viable tool, idea or option that could make the difference in how a project is handled or how a situation is analyzed. Doing so will only shortchange the project, and results will fall short.


Bibliography

  • Kubiak, T. M., "Simulation: An Old Tool Revisited," IIE Aerospace News, Vol. 23, No. 2, Fall 1988.
  • Kubiak, T. M., The Certified Six Sigma Master Black Belt Handbook, ASQ Quality Press, 2012.

T.M. Kubiak is founder and president of Performance Improvement Solutions, an independent consulting organization in Weddington, NC. He is coauthor of The Certified Six Sigma Black Belt Handbook (ASQ Quality Press, 2009) and author of The Certified Six Sigma Master Black Belt Handbook (ASQ Quality Press, 2012). Kubiak, a senior member of ASQ, serves on many ASQ boards and committees, and is a past chair of ASQ’s Publication Management Board.

