
Three Sigma Limits and Control Charts


July 2017



Control charts are based on three sigma limits. Despite this, people have calculated, or simply set, "control limits" in many other ways over the years. Some try to adjust the three sigma limits – narrowing them – to get an earlier warning of a problem. Some set the control limits to the specifications. Some just put the control limits where they want them to be.

Why are control charts based on three sigma limits? This publication addresses that question. Three sigma limits have been around for almost 100 years. And despite some attempts to alter this approach, three sigma limits appear to be the best way to set control chart limits.

You may download a PDF version of this publication at this link. Please feel free to leave a comment at the end of the publication.

Introduction

The calculation of control limits to place on a control chart is straightforward. The control limits are set at +/- three standard deviations of whatever is being plotted. The calculations have been around a long time. This is how you determine whether you have only natural variation in the process (common causes, which are consistent and predictable) or unnatural variation in the process (special causes, which are unpredictable). This is the only way to separate special from common causes of variation. Yet, people continue to do weird things to determine their own "control limits."
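As an illustration only (this publication does not include code), here is a minimal Python sketch of the calculation for an individuals (X-mR) chart, one common choice when plotting single values. Sigma is estimated from the average moving range divided by the d2 bias correction factor of 1.128; the data values are made up.

```python
# Minimal sketch: three sigma limits for an individuals (X-mR) chart.
# Sigma is estimated as the average moving range divided by 1.128,
# the d2 bias correction factor for moving ranges of size 2.

def individuals_limits(data):
    """Return (lcl, mean, ucl) for an individuals control chart."""
    mean = sum(data) / len(data)
    moving_ranges = [abs(data[i] - data[i - 1]) for i in range(1, len(data))]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    sigma = mr_bar / 1.128  # estimated process standard deviation
    return mean - 3 * sigma, mean, mean + 3 * sigma

# Made-up data for illustration
values = [99.2, 101.5, 100.3, 98.7, 102.1, 100.9, 99.8, 101.2]
lcl, mean, ucl = individuals_limits(values)
print(f"LCL = {lcl:.2f}, average = {mean:.2f}, UCL = {ucl:.2f}")
print("points beyond the limits:", [x for x in values if not lcl <= x <= ucl])
```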

For example, there is an on-line article from a teacher who was applying Six Sigma techniques in his classroom. He is to be commended for trying to improve what goes on in the classroom. Below is what he wrote about the "control limits" on his "control chart."

“In manufacturing these limits are frequently calculated using three times the standard deviation, but that requires a consistent, highly controlled, highly repeatable process. In education, we must set these limits based on experience and our personal grading philosophies. As such, I have set my control limits at 75% and 88% for class-wide classwork weekly averages.”

First, control limit calculations do not require a "consistent, highly controlled, highly repeatable process." And his "control limits"? He plotted those "control limits" on his "control chart" along with the average grades over time from the six classes he teaches. That is like having six different processes on the same control chart. Here is the problem: control limits are not set by anyone. Control limits are determined by the data, not by you or me or anyone else. The 75% and 88% are just the teacher's specifications for where he wants the control limits. They are not control limits, and the chart he placed them on is not a control chart. Pure and simple.

The teacher did see some things to improve. That will often happen if you just plot the data over time. But plotting data over time does not make the chart a control chart, and it does not allow you to separate special causes from common causes.

Sometimes people just use the specification limits as the control limits. Some use “two-sigma” limits. Others just change the control limits to what their manager wants them to be. Still others treat a control chart as a sequential test of a hypothesis and associate an error rate with the control chart – which essentially treats the control limits as “probability” limits.

Does it really matter how the control limits are set? After all, there is some gain simply from plotting the data over time. Yes, it does matter how control limits are set. The problem is that, in recent years, we seem to have made the control chart a more complex tool than it needs to be. One reason this has happened is that we began to worry about probabilities instead of letting our knowledge of the process help us.

Probability and Control Charts

Some of us appear to have lost sight of what a control chart is supposed to do. We seem to focus more and more on probabilities. You have no doubt heard this: the probability of getting a point beyond the control limits is 0.27% (assuming your data are normally distributed) even when your process is in statistical control (just common causes present). Or conversely, the probability of getting a point within the control limits is 99.73% when your process is in statistical control. I am guilty of doing this in some of my writings over the years. We worry about increasing those false signals – assuming something is a special cause when it is due to common causes.

Some people look at a control chart as a series of sequential hypothesis tests and assign an error rate to the entire control chart based on the number of points. An on-line article (from statit.com) does that and recommends increasing the three sigma limits to larger values as the number of points on the chart increases. In fact, they appear to scoff at the reason the three sigma limits were originally set:

“Well, Shewhart and Deming would tell you that they have been shown to work well in practice, that they minimize the total cost from both overcorrecting and under-correcting.”

And then they say that the reason the three sigma limits worked was because everything was based on 25 subgroups. They then talk about the Type 1 error. The probability of getting a point beyond the control limits is 0.27% even when the process is in statistical control. So, using the sequential hypothesis test approach, the probability of getting at least one point beyond the control limits on a control chart with 25 points is:

1 – 0.9973^25 = 0.065

This means that there is a 6.5% chance of at least one point being out of control whenever you have a control chart with 25 subgroups. And as you add more points, that probability increases. For 100 points, the probability is given by:

1 – 0.9973^100 = 0.237

So, there is a 23.7% chance of at least one point being beyond the control limits on a control chart that has 100 points. They recommend you increase the number of sigma limits to keep the error rate close to 0.05. For 100 points, they recommend you use 3.5 sigma limits. This drops the error rate to less than 0.05.
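You can check this arithmetic yourself. Here is a small Python sketch (assuming scipy is available) that reproduces the numbers above by treating the chart as a series of independent tests on normally distributed data – the very assumptions questioned later in this publication.

```python
# Sketch: the false alarm arithmetic above, treating a chart with n points
# as n independent tests on normally distributed, in-control data.
from scipy.stats import norm

def chart_false_alarm_rate(n_points, sigma_limits=3.0):
    """Chance that at least one in-control point falls outside the limits."""
    p_outside = 2 * (1 - norm.cdf(sigma_limits))  # per-point tail probability
    return 1 - (1 - p_outside) ** n_points

print(chart_false_alarm_rate(25))        # ~0.065 with 3 sigma limits
print(chart_false_alarm_rate(100))       # ~0.237
print(chart_false_alarm_rate(100, 3.5))  # ~0.045, below 0.05
```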

If you view control charts from the probability approach, what this article states is true. I did a small experiment to confirm this. I wrote a little VBA code to generate random numbers from a normal distribution with a mean of 100 and standard deviation of 10. I then generated 100 control charts containing 25 subgroups and determined the number of out of control points when using three sigma limits. I repeated the process for 100 control charts containing 100 subgroups and, again, determined the number of out of control points when using three sigma limits.

For the 100 charts containing 25 subgroups, there were 6 control charts with at least one point beyond one of the control limits. That is very close to the 6.5% calculated above.

For the 100 control charts containing 100 subgroups, there were 30 control charts with at least one point beyond one of the control limits. So, 30% had "false signals" – a little higher than the 23.7% shown above.

I then changed the control limits to be 3.5 sigma limits and generated 100 control charts with 100 subgroups. For those 100 control charts, there were 6 control charts with at least one point beyond one of the control limits. Expanding the limits from 3 to 3.5 sigma for a control chart with 100 subgroups dropped the percentage of control charts with false signals from 30% to 6%. That is not surprising, since the control limits are wider at 3.5 sigma. The table below summarizes the results of the simulation.

Table 1: Summary of Sigma Limit Simulation for 100 Control Charts

Sigma Limits | Number of Subgroups | Control Charts with Out-of-Control Points
3 | 25 | 6
3 | 100 | 30
3.5 | 100 | 6
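For readers who want to repeat the experiment, here is a sketch of a similar simulation in Python rather than VBA. It is a simplification: it sets the limits from the known process average and standard deviation instead of estimating them from each chart's data, so the counts will vary from run to run and will not match Table 1 exactly.

```python
# Sketch: count how many of 100 simulated in-control charts show at least
# one point beyond the sigma limits. Uses the known mean (100) and standard
# deviation (10) to set the limits, a simplification of the VBA experiment.
import random

def charts_with_false_signals(n_charts, n_points, sigma_limits,
                              mu=100.0, sd=10.0, seed=1):
    rng = random.Random(seed)
    lcl, ucl = mu - sigma_limits * sd, mu + sigma_limits * sd
    flagged = 0
    for _ in range(n_charts):
        if any(not lcl <= rng.gauss(mu, sd) <= ucl for _ in range(n_points)):
            flagged += 1
    return flagged

for sigma_limits, n_points in [(3, 25), (3, 100), (3.5, 100)]:
    count = charts_with_false_signals(100, n_points, sigma_limits)
    print(f"{sigma_limits} sigma, {n_points} points: {count} of 100 flagged")
```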

 

But is this something you should do – change the number of sigma limits based on the number of points? We seem to have lost our focus on what control charts are used for. Let's go back to the start of control charts with Dr. Walter Shewhart.

Shewhart and the Origin of the Three Sigma Limits

Dr. Walter Shewhart is regarded as the “father of statistical quality control.” He developed the control chart almost 100 years ago. Control charts were described in 1931 in his book Economic Control of Quality of Manufactured Product. He is the one who set the control limits at three sigma. How did he arrive at this?

Dr. Shewhart divided variation in a process into two categories: controlled variation and uncontrolled variation. Controlled variation is process variation that is described by a consistent and predictable pattern of variation. He said this type of variation was due to "chance" causes – what we call common causes of variation. Uncontrolled variation is described by patterns of variation that change over time unpredictably. He said these unpredictable changes were due to assignable causes – what we more often call special causes today.

The control chart he developed allows us to determine what type of variation we are dealing with. Does the process show unpredictable variation? Or does the process show predictable variation?

Why should you care what type of variation you have present? The answer is that the type of action you take to improve a process depends on the type of variation present. If your process has variation that is consistent and predictable (controlled), the only way to improve it is to fundamentally change the process. The key word is fundamentally. But if the process has unpredictable variation, the special cause responsible for the unpredictability should be identified. If the special cause hurts the process, the reason for the special cause needs to be found and eliminated. If a special cause helps the process, the reason for the special cause should be found and incorporated into the process.

This concept of common and special causes is the foundation of the control charts Shewhart developed. A process that has consistent and predictable variation is said to be in statistical control. A process that has unpredictable variation is said to be out of statistical control.

So, how did Shewhart determine that three sigma limits were the correct ones to use? Here is a quote from his book mentioned above:

“For our present purpose, a phenomenon will be said to be controlled when, through the use of past experience, we can predict within limits, how the phenomenon may be expected to behave in the future. Here it is understood that prediction within limits means that we can state, at least approximately, the probability that the observed phenomenon will fall within the given limits.”

And more from his book:

“We must use limits such that through their use we will not waste too much time looking unnecessarily for trouble.”

“The method of attack is to establish limits of variability . . . such that, when an observation is found outside these limits, looking for an assignable cause is worthwhile.”

“We usually choose a symmetrical range characterized by limits µ ± tσ.”

“Experience indicates t = 3 seems to be an acceptable economic value.”

“Construct control charts with limits µ ± tσ for each statistic. If an observed point falls outside these limits, take this fact as an indication of trouble or lack of control.”

Shewhart’s choice of three sigma limits considered more than just probability. The second part of the first quote above talks about probability, but there was much more to his decision. The strongest justification appears to be the simple fact that they work. It is a trade-off between making one of two mistakes: assuming that a result is due to a special cause of variation when in fact it is due to common causes, or assuming that a result is due to common causes when in fact it is due to a special cause. You will sometimes make one of these two mistakes. The three sigma limits represent a method of minimizing the cost associated with making these mistakes.

Here is one more quote:

“Hence the method for establishing allowable limits of variation in a statistic depends upon the theory to furnish the expected value and the standard deviation of the statistics and upon empirical evidence to justify the choice of limits.”


So, you need a method of calculating an average and a standard deviation of what you are plotting. That is the statistical part. But, the empirical evidence appears to have been the key. And from Dr. Donald Wheeler in his book Advanced Topics in Statistical Process Control (www.spcpress.com):

“Three sigma limits are not probability limits… it is important to remember that there were other considerations which were used by Shewhart in selecting this criterion… the strongest justification of three-sigma limits is the empirical evidence that the three sigma limits work well in practice – that they provide effective action limits when applied to real world data.”

In Dr. Wheeler’s book, he does use some statistics to explain why the control limits work so well, but clearly states that these statistics “cannot further justify the use of three sigma limits, but reveal one of the reasons they work so well.”

Back to Probability and Control Charts

Dr. Wheeler wrote explicitly about control charts and the probability approach in his book referenced above. This section summarizes some of his points. First, remember what control charts do: they determine whether there is controlled or uncontrolled variation in a process. What is the probability approach to control charts? You have seen it above – control limits are calculated so that, 99.73% of the time, a point will fall within the control limits and, 0.27% of the time, outside them. Dr. Wheeler points out that Shewhart addressed this in his book. Essentially, Shewhart wrote that if a process were perfectly stable, and if we knew the details of the underlying statistical distribution, then we could work in terms of probability limits.

But that is not the real world. In reality, we never know those two things for sure. Nor do we ever know for sure the average and the measure of dispersion (e.g., standard deviation) of whatever underlying distribution there may be. So, the probability approach does not apply. The assumptions needed to apply this approach are not met – knowing the process is stable, knowing the exact underlying distribution, knowing the exact average and knowing the exact measure of dispersion. “Thus a major problem with the probability approach to control charts is that it is totally out of contact with the real world.”

Dr. W. Edwards Deming also spoke about this in his book Out of the Crisis. Dr. Deming said:

“The calculations that show where to place control limits on a chart have their basis in the theory of probability. It would nevertheless be wrong to attach any particular figure to the probability that a statistical signal for detection of a special cause could be wrong, or that the chart could fail to send a signal when a special cause exists. The reason is that no process, except in artificial demonstrations by use of random numbers, is steady, unwavering.”

Dr. Deming goes on to say:

“Rules of detection of special causes and for action on them are not tests of a hypothesis that the system is a stable process.”

The probability approach has led people to put restrictions on control charts: that the data must be normally distributed, or that control charts work because of the central limit theorem (our May 2017 publication addresses this fallacy). This has hurt the use of control charts over time.


Control charts are consistent with theory, but it is the empirical evidence that they work that takes them outside the restrictions of the probability approach. Control charts work in the real world – unlike the assumptions needed to use the probability approach. It can be hard for some of us to accept that control limits are justified by empirical results rather than by probability theory.

Summary

This publication looked at three sigma limits and the justification behind them. Some approach control charts through probabilities. While Shewhart considered probabilities in his three sigma approach, there were other, more important considerations. The major one was that three sigma limits work in the real world. They give a good balance between the cost of looking for special causes that are not there and the cost of missing special causes that are. The concept of three sigma limits has been around for almost 100 years. Despite attempts to change the approach, three sigma limits continue to be effective. There is no reason to use anything else on a control chart. Dr. Shewhart, Dr. Deming and Dr. Wheeler make pretty convincing arguments why that is so.


Thanks so much for reading our SPC Knowledge Base. We hope you find it informative and useful. Happy charting and may the data always support your position.

Sincerely,

Dr. Bill McNeese
BPI Consulting, LLC



Comments
DaleW

Hi Bill, imagine that you worked at a process with an online monitor that returned a measurement every second. Suppose that the common cause scatter is close to normally distributed, and there is automated SPC software set up to handle the measurements. Are you sure that you'd be happy with a false alarm being triggered every 6 minutes or so?

Bill McNeese

Hi Dale,
I probably wouldn't chart each data point.  I would probably take a time frame (minute, five minutes, whatever) and track the average of that time frame over time as well as the standard deviation of the time frame, both as individuals charts.  We used to do that with PVC reactors where we tracked reactions temperatures for a batch.  Gave us some good insights into differences in batches.  

DaleW

A longer interval Xbar-S chart would be a more obvious alternative if we don't need a quick response.  But what if our automated control system with deadband really needs to respond quickly because special cause upsets can grow suddenly?  The traditional 3 sigma limits are ultimately a (deadband) heuristic that works well when the sampling rate is low (a few samples per day).  I think a decent case can be made that SPC limits need to be wider to control the overall false positive rate when applying SPC principles to the much higher frequency sampling often seen in the computer age.

Helge

I did a simulation of a stable process generating 1000 data points: normally distributed, random values. From the first 25 data points, I calculated 3 sigma limits and 2 sigma "warning" limits. Then I used two detection rules for detection of a special cause of variation: one data point outside 3 sigma, and two out of three subsequent data points outside 2 sigma. Knowing that my computer generated normally distributed data points, any alarm is a false alarm. I counted these false alarms for my 1000 data points and then repeated the entire simulation a number of times (19) with the same values for µ and sigma. Then I plotted the number of false alarms detected (on the y-axis) as a function of where my 3 sigma limits were found for each run (on the x-axis).

Above 3 sigma, the number of false alarms was quite low, and decreasing with increasing limit. Below 3 sigma, the number of false alarms increased rapidly with lower values for the limit found. At 3 sigma, there was a quite sharp "knee" on the curve which can be drawn through the data points (x = control limit value found from the first 25 data points, y = number of false alarms for all 1000 data points in one run). This simulation was quite convincing to me.

The simulation also reminded me that using more detection rules at the same time (of course) increases the number of false alarms. But independent of which rules are used and how many detection rules I use at the same time, the "knee" of this curve will still be at 3 sigma, because all the detection rules are constructed in a similar way with respect to the sigma value found in phase 1 of constructing the control chart.

It would be an idea to have some advice on which detection rules we should use! Should we not use them all at the same time? I guess that if a "trend" because of wear-out is a typical failure mode you expect to happen to your process, the "trending" detection rule is nice to use. Can anyone give some examples from real life processes, showing how many rules and which rules are used in practice?

Bill McNeese

Sounds like you did some detailed work on this. The number of rules you use, to me, should be based on how stable your process is. If it is not very stable, I would probably use points beyond the control limits only. The other thing to consider is how important a little drift in the average is. If it is not very important, I would stay with points beyond the control limits. If it is important (and you don't have many points beyond the control limits), then I would add the zone tests. Just my personal opinion.

Raphy

Plotting environmental monitoring microbial counts in a classified room often reveals a significant number of "extreme" counts that exceed the 3 sigma limits (microbial counts often follow a skewed distribution). The Quality Assurance (QA) person will be delighted to reduce every false alarm, as this will reduce the GMP requirement to document every apparent deviation. Besides, he feels there is nothing he can do. Would you, under these common circumstances, set control limits at 4 sigma, 5 sigma or 6 sigma as empirical limits (especially when the regulatory limits for microbial counts are higher)? Is it legitimate to interpret the above behavior as "normal process behavior due to normal causes," with only far-extreme counts suspected of a "special cause" and worthy of investigation? Is it legitimate for QA to view the 5 sigma or 6 sigma limits as a trade-off in monitoring microbial counts, just as Shewhart considered the 3 sigma limits a trade-off in manufacturing processes?

Bill McNeese

Interesting issue. I am not familiar with microbial counts; however, I always believe you should use your knowledge of the process. If it makes sense to you, do it. How skewed is the distribution? Can you share some data with me? bill.com

John123

Sometimes, when external auditors want to evaluate the efficiency of the monitoring procedure for a specific process, they mainly focus on the process team's measures for eliminating special causes. What if the process team does its best to find the special cause(s) but can't find any? Based on the following section of this publication, could it be concluded that the special cause of variation is in fact due to common causes? If so, does this mean that perhaps the monitoring procedure was established and followed properly, and that not finding any special causes to act on is just due to the nature of SPC? In other words, can it be said that not finding root causes for a point beyond the 3 sigma limits is not a solid criterion for evaluating the efficiency of the monitoring process? "It is a trade-off between making one of two mistakes: assuming that a result is due to a special cause of variation when in fact it is due to common causes, or assuming that a result is due to common causes when in fact it is due to a special cause. You will sometimes make one of these two mistakes. The three sigma limits represent a method of minimizing the cost associated with making these mistakes."

Bill McNeese

It is possible that the special cause is really a common cause. The more likely reason is that you simply can't find the reason. There are probably thousands of things that could have caused it. Did the special cause go away? If so, then you just missed finding the reason. It will probably be back. If it stays around, you may have to adjust the process. Please see this link for more info:
https://www.spcforexcel.com/knowledge/control-chart-basics/when-calculate-lock-and-recalculate-control-limits

Nick

Shewhart states: "We usually choose a symmetrical range characterized by limits µ ± tσ." Are µ and sigma for samples or for the population? How should those be calculated for different types of control charts?

Bill McNeese

Each control chart has different formulas. You can look at each control chart in our SPC Knowledge Base to see the formulas.

Paul

“The assumptions needed to apply this approach are not met – knowing the process is stable, knowing the exact underlying distribution, knowing the exact average and knowing the exact measure of dispersion.” Considering the above statement from this publication, assume there is an online monitoring system which can measure the desired quality characteristic easily and generate thousands of data points (samples). It seems it would be possible to measure (or at least estimate with high confidence) all of the parameters discussed above. Is that right?

Bill McNeese

It would be possible to do the calculations, although I don't think there is any such thing as exact. Just because you measure thousands of points doesn't mean that the process is stable.

Adnan

In the case of control charts, the control limits are dynamic; they vary as the mean varies. Data that are within the control limits now might go outside them in the future. How should this situation be interpreted?

Bill McNeese

The historical data should not change. If you have a subgroup size of 4, that is what it is for that subgroup. The next one might be 3, but it doesn't change the previous subgroup.

Satish Lokare

I have calculated the 3 sigma value, which is 3.3, to predict the accuracy of a dosing pump. What is the significance? Are my pumps working accurately? Please reply urgently. Thanks & regards, Satish Lokare

Bill McNeese

I don't understand what you are asking me.  Can you email me the data?  bill.com

D

Hi, I'm dealing with a test result that is approaching the USL, and this has led us to try to find the best solution. I need to establish a new set of 3 sigma baseline limits. Currently I have removed all the outlier results, but I'm not sure which charts would be best to use for the data. Any advice on this?

Bill McNeese

I am not sure I understand what you are asking. Are you saying that a metric is trending toward the upper spec limit, not the control limit? Please send me the data and I will look at it. bill.com
