Three Sigma Limits and Control Charts


Thanks so much for reading our publication. We hope you find it informative and useful. Happy charting and may the data always support your position.


Dr. Bill McNeese
BPI Consulting, LLC


Comments (19)

  • DaleW, July 31, 2017

    Hi Bill, imagine that you worked at a process with an online monitor that returned a measurement every second.  Suppose that the common cause scatter is close to normally distributed, and there is automated SPC software set up to handle the measurements.  Are you sure that you'd be happy with a false alarm being triggered every 6 minutes or so?
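
[Editor's note] Dale's "every 6 minutes or so" follows directly from the normal tail probability: about 0.27% of in-control points fall outside 3-sigma limits, so at one reading per second a false alarm arrives roughly every 370 seconds. A minimal sketch of that arithmetic (the function name is ours, not from the discussion):

```python
import math

def false_alarm_interval(k_sigma: float, samples_per_second: float) -> float:
    """Average seconds between false alarms for a k-sigma chart
    monitoring normally distributed, in-control data."""
    # Two-sided tail probability beyond +/- k sigma of a normal distribution
    p = math.erfc(k_sigma / math.sqrt(2))
    return 1.0 / (p * samples_per_second)

# 3-sigma limits, one reading per second:
interval = false_alarm_interval(3.0, 1.0)
print(f"{interval / 60:.1f} minutes between false alarms")  # roughly 6 minutes
```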

    • bill, July 31, 2017

      Hi Dale,
      I probably wouldn't chart each data point.  I would probably take a time frame (a minute, five minutes, whatever) and track both the average and the standard deviation of that time frame over time, each as an individuals chart.  We used to do that with PVC reactors, where we tracked reaction temperatures for a batch.  It gave us some good insights into differences between batches.
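
[Editor's note] Bill's suggestion — collapse each time frame into a mean and a standard deviation, then chart each series as individuals — could be sketched like this (a minimal illustration; the function and interval size are our assumptions):

```python
import statistics

def summarize_intervals(readings, interval_size):
    """Collapse a stream of readings into one (mean, stdev) pair per
    non-overlapping interval; each series can then be plotted on its
    own individuals chart."""
    means, stdevs = [], []
    for start in range(0, len(readings) - interval_size + 1, interval_size):
        window = readings[start:start + interval_size]
        means.append(statistics.mean(window))
        stdevs.append(statistics.stdev(window))
    return means, stdevs

# e.g. per-second readings summarized into one point per minute:
# means, stdevs = summarize_intervals(readings, 60)
```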

      • DaleW, August 1, 2017

        A longer interval Xbar-S chart would be a more obvious alternative if we don't need a quick response.  But what if our automated control system with deadband really needs to respond quickly because special cause upsets can grow suddenly?  The traditional 3 sigma limits are ultimately a (deadband) heuristic that works well when the sampling rate is low (a few samples per day).  I think a decent case can be made that SPC limits need to be wider to control the overall false positive rate when applying SPC principles to the much higher frequency sampling often seen in the computer age.
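
[Editor's note] Dale's point about widening limits to control the overall false positive rate can be made concrete with a Sidak-style correction: choose the sigma multiplier so that a whole day of independent in-control readings has the same overall false-alarm probability that a single point has under 3-sigma limits. This is one possible formalization of his remark, not something from the thread:

```python
from statistics import NormalDist

def sigma_multiplier(overall_rate: float, n_points: int) -> float:
    """Sigma multiplier k such that n independent in-control points
    have overall false-alarm probability `overall_rate`."""
    # Per-point rate that compounds to the desired overall rate
    per_point = 1 - (1 - overall_rate) ** (1 / n_points)
    # Invert the two-sided normal tail to get the multiplier
    return NormalDist().inv_cdf(1 - per_point / 2)

# One reading per second for a day (86,400 points), holding the overall
# rate at the ~0.27% Shewhart's 3-sigma limits give for a single point:
k = sigma_multiplier(0.0027, 86_400)
print(f"{k:.2f} sigma limits")
```

Under these assumptions the limits land well beyond 5 sigma, which is the direction of Dale's argument.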

  • Helge, August 8, 2017

    I did a simulation of a stable process generating 1000 normally distributed, random data points. From the first 25 data points, I calculated 3 sigma limits and 2 sigma "warning" limits. Then I used two detection rules for a special cause of variation: one data point outside 3 sigma, and two out of three subsequent data points outside 2 sigma. Knowing that my computer generated normally distributed data points, any alarm is a false alarm. I counted these false alarms for my 1000 data points and then repeated the entire simulation a number of times (19) with the same values for µ and sigma. Then I plotted the number of false alarms detected (on the y-axis) as a function of where my 3 sigma limits were found for each run (on the x-axis). Above 3 sigma, the number of false alarms was quite low and decreased as the limit increased. Below 3 sigma, the number of false alarms increased rapidly as the limit decreased. At 3 sigma, there was a quite sharp "knee" on the curve drawn through the data points (x = control limit value found from the first 25 data points, y = number of false alarms for all 1000 data points in one run). This simulation was quite convincing to me.

    The simulation also reminded me that using more detection rules at the same time (of course) increases the number of false alarms. But independent of which rules are used and how many detection rules I use at the same time, the "knee" of this curve will still be at 3 sigma, because all the detection rules are constructed in a similar way with respect to the sigma value found in phase 1 of constructing the control chart.

    It would be good to have some advice on which detection rules we should use. We should not use them all at the same time, should we? I guess that if a "trend" caused by wear-out is a typical failure mode you expect in your process, the "trending" detection rule is nice to use. Can anyone give some examples from real-life processes: how many rules, and which rules, are used in practice?
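
[Editor's note] Helge's simulation can be reconstructed along these lines. The moving-range sigma estimate (average moving range divided by d2 = 1.128, as on an individuals chart) and the exact rule implementation are our assumptions; his original code is not shown:

```python
import random
import statistics

def count_false_alarms(n_points=1000, baseline=25, seed=None):
    """Estimate limits from the first `baseline` in-control points, then
    count alarms over all points using Rule 1 (beyond 3 sigma) and the
    2-of-3-beyond-2-sigma zone rule.  All alarms are false by design."""
    rng = random.Random(seed)
    data = [rng.gauss(0, 1) for _ in range(n_points)]
    mean = statistics.mean(data[:baseline])
    # Moving-range estimate of sigma from the baseline points
    mr = [abs(b - a) for a, b in zip(data[:baseline], data[1:baseline])]
    sigma = statistics.mean(mr) / 1.128
    alarms = 0
    for i, x in enumerate(data):
        if abs(x - mean) > 3 * sigma:       # Rule 1: point beyond 3 sigma
            alarms += 1
        elif i >= 2:                         # Zone rule: 2 of 3 beyond 2 sigma, same side
            recent = data[i - 2:i + 1]
            for side in (1, -1):
                if sum(side * (v - mean) > 2 * sigma for v in recent) >= 2:
                    alarms += 1
                    break
    return alarms
```

Repeating this over many runs and plotting alarm counts against the limit each run happened to find should reproduce the "knee" near 3 sigma that Helge describes.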

    • bill, August 9, 2017

      Sounds like you did some detailed work on this.  The number of rules you use, to me, should be based on how stable your process is.  If it is not very stable, I would probably use only points beyond the control limits.  The other thing to consider is how important a little drift in the average is.  If it is not very important, I would stay with points beyond the control limits.  If it is important (and you don't have many points beyond the control limits), then I would add the zone tests.  Just my personal opinion.

  • Raphy, August 22, 2017

    Plotting environmental monitoring microbial counts in a classified room often reveals a significant number of "extreme" counts that exceed the 3-sigma limits (microbial counts often follow a skewed distribution). The Quality Assurance (QA) person will be delighted to reduce every false alarm, as this will reduce the GMP requirement to document every apparent deviation. Besides, he feels there is nothing he can do. Would you, under these common circumstances, set control limits at 4-sigma, 5-sigma or 6-sigma as empirical limits (especially when the regulatory limits for microbial counts are higher)? Is it legitimate to interpret the above behavior as normal process behavior due to common causes, with only far-extreme counts suspected of a special cause and worthy of investigation? Is it legitimate for QA to view the 5-sigma or 6-sigma limits as a trade-off in monitoring microbial counts, just as Shewhart considered the 3-sigma limits a trade-off in manufacturing processes?

    • bill, August 22, 2017

      Interesting issue.  I am not familiar with microbial counts; however, I always believe you should use your knowledge of the process.  If it makes sense to you, do it.  How skewed is the distribution?  Can you share some data with me?  [email protected]

  • Nick, November 14, 2017

    Shewhart states, "We usually choose a symmetrical range characterized by limits µ ± tσ." Are µ and sigma for samples or for the population? How should those be calculated for different types of control charts?

    • bill, November 19, 2017

      Each control chart has different formulas.  You can look at each control chart in our SPC Knowledge Base to see the formulas.
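
[Editor's note] As one concrete instance of those per-chart formulas, the limits for an individuals (X-mR) chart estimate sigma from the average moving range divided by the constant d2 = 1.128 and place the limits at the mean ± 3 of those sigmas. A minimal sketch (function name is ours):

```python
import statistics

def individuals_limits(values):
    """Lower and upper control limits for an individuals (X-mR) chart:
    sigma is estimated as the average moving range / 1.128 (d2 for
    subgroups of size 2), and the limits sit at mean +/- 3 sigma."""
    mean = statistics.mean(values)
    mr_bar = statistics.mean(abs(b - a) for a, b in zip(values, values[1:]))
    sigma = mr_bar / 1.128
    return mean - 3 * sigma, mean + 3 * sigma
```

Other charts (X-bar/R, X-bar/s, p, c, ...) use different estimates of dispersion and different constants, which is the point of Bill's reply.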

  • Paul, November 14, 2017

    “The assumptions needed to apply this approach are not met – knowing the process is stable, knowing the exact underlying distribution, knowing the exact average and knowing the exact measure of dispersion.“ Considering the above statement from this publication, assume there is an online monitoring system which can measure the desired quality characteristic easily and generate thousands of data points (samples). It seems it would be possible to measure (or at least estimate with high confidence) all the parameters discussed above. Is that right?

    • bill, November 19, 2017

      It would be possible to do the calculations, although there is no such thing as exact, I don't think.  Just because you measure thousands of points doesn't mean that the process is stable.

  • John123, November 19, 2017

    Sometimes, when external auditors want to evaluate the efficiency of the monitoring procedure for a specific process, they mainly focus on the process team's measures for eliminating special causes. What if the process team does its best to find the special cause(s) but can't find any? Based on the following section of this publication, could it be concluded that the special cause of variation is in fact due to common causes? If so, does this mean that a monitoring procedure that is established and followed properly, yet finds no special causes to act on, is just reflecting the nature of SPC? In other words, can it be said that not finding a root cause for a point outside the 3 sigma limits is not a solid criterion for evaluating the efficiency of the monitoring process? “It is trade-off between making one of two mistakes – assuming that a result is due to a special cause of variation when in fact it is due to common causes or assuming that a result is due to common causes when in fact it is due to a special cause. You will make one of these two mistakes sometimes. The three sigma limits represent a method of minimizing the cost associated with making these mistakes.”

    • bill, November 19, 2017

      It is possible that the special cause is really a common cause.  The more likely reason is that you simply can't find the reason.  There are probably thousands of things that could have caused it.  Did the special cause go away?  If so, then you just missed finding the reason.  It will probably be back.  If it stays around, you may have to adjust the process.  Please see this link for more info:

  • Adnan, October 22, 2018

    In the case of control charts, the control limits are dynamic; they vary as the mean varies. Data that is within the control limits might go outside them in the future. How should this situation be interpreted?

    • bill, October 22, 2018

      The historical data should not change.  If you have a subgroup size of 4, that is what it is for that subgroup.  The next one might be 3, but that doesn't change the previous subgroup.

  • Satish Lokare, July 14, 2021

    I have calculated the 3 sigma value, which is 3.3, to predict the accuracy of a dosing pump. What is the significance: are my pumps working accurately? Please reply to me urgently. Thanks & regards, Satish Lokare

    • bill, July 14, 2021

      I don't understand what you are asking me.  Can you email me the data?  [email protected]

  • D, November 23, 2021

    Hi, I'm dealing with a test result that is approaching the USL, and this led us to try to find the best solution. I need to establish a new set of 3 sigma baseline limits. Currently I have removed all the outlier results, but I'm not sure which charts are best to use for the data. Any advice on this?

    • bill, November 23, 2021

      I am not sure I understand what you are asking.  Are you saying that a metric is trending toward the upper spec limit (not the control limit)?  Please send me the data and I will look at it.  [email protected]
