Control Charts and the Central Limit Theorem



Thanks so much for reading our publication. We hope you find it informative and useful. Happy charting and may the data always support your position.

Sincerely,

Dr. Bill McNeese
BPI Consulting, LLC


Comments (13)

  • Scott Hindle, June 1, 2017

    As we increase the number of detection rules, we increase the sensitivity of the chart to detect process changes. But we also risk an increased false alarm rate. While Rule 1, a point beyond 3-sigma, will always be applied, your article leads to two questions: Which (other) detection rules should be used given skewed data? And what justifies the choice? With detection rules specified in advance of the analysis we have an operational definition of a special cause. Will you cover this in a future article? Your last sentence in the article raises awareness in the user, but doesn't necessarily guide the user to an effective course of action (which detection rules to use) given this increased level of awareness. Thanks for the article and looking forward to hearing your response. Scott.

    • bill, June 1, 2017

      The main thing is to be aware. I imagine you could, for a given distribution, do the calculations to figure out the probabilities of the various tests for that distribution and adjust them accordingly, but that is more work than it is worth. The key thing to me is that the data are plotted over time and you are asking what the chart tells you about your process. If I have heavily skewed data, I would probably just use points beyond the control limits and watch for "long" runs below or above the average. What is "long"? It depends on your process. You know your process. But for non-normal processes, points beyond the control limits is the only test I would rely on for certain.
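      To illustrate how much the false alarm rate for points beyond the limits can grow with heavily skewed data, here is a minimal simulation sketch (assuming an exponential distribution and XmR-style limits; the specific numbers are not from the newsletter):

      ```python
      # Minimal sketch: false-alarm rate of "Rule 1" (a point beyond the 3-sigma
      # limits) on an individuals chart, for a stable normal process versus a
      # stable but heavily skewed (exponential) process. Assumed example only.
      import numpy as np

      rng = np.random.default_rng(42)
      N = 200_000  # points per simulated stable process

      def rule1_rate(x):
          """Fraction of points beyond limits at mean +/- 3 * (average moving range / 1.128)."""
          mr = np.abs(np.diff(x))        # moving ranges of consecutive points
          sigma_hat = mr.mean() / 1.128  # d2 = 1.128 for subgroups of size 2
          ucl = x.mean() + 3 * sigma_hat
          lcl = x.mean() - 3 * sigma_hat
          return np.mean((x > ucl) | (x < lcl))

      print(f"Normal data: {rule1_rate(rng.normal(10, 1, N)):.3%} beyond limits (theory ~0.27%)")
      print(f"Skewed data: {rule1_rate(rng.exponential(1.0, N)):.3%} beyond limits, nearly all above the UCL")
      ```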

  • LAXMI KANT GOYAL, June 1, 2017

     Helpful & useful. Thanks & regards.

  • Guillermo Adames, June 18, 2017

     I liked your article very much, and a couple of points come to mind. Your problem is how to deal with the process summarised in Graph 1. I do not want to think that the statistician/engineer/administrator in charge of analyzing your process drew Graph 1 with an absolute lack of knowledge of the process itself. Results are what they really are. Subgrouping is a well-known refuge technique that I compare to "moving averages": it provides an answer of the type "in the long run your results are…", BUT you might crash tomorrow. Still, your problem is that you have a process like the one in Graph 1. What to do? Probably re-assess the whole process and avoid hiding behind statistical/arithmetical techniques, only to crash at the end of the day with all your statistical magic. Dr. Wheeler gets nicely into the whys of why the rearrangement will not work, but to my way of thinking the problem is still the same: Graph 1. It is like cancer: if somebody is sick with cancer, a new subgrouping or other techniques will not solve it; it is cancer. I do not see any other solution but revising the process completely: do you? Thanks for reading me, regards, Guillermo

    • bill, June 18, 2017

      Changing the process is definitely one thing to do, but there are naturally occurring skewed processes. The question is how you deal with a skewed distribution. Subgrouping is one approach.
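      As a rough illustration of the subgrouping idea, here is a minimal sketch (my own example, assuming an exponential distribution for the naturally skewed process) showing how the central limit theorem pulls subgroup averages toward a normal shape:

      ```python
      # Minimal sketch: skewness of subgroup averages of exponential data as the
      # subgroup size increases. Exponential data have skewness of about 2, and
      # the skewness of the averages falls roughly as 2/sqrt(n).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(7)
      raw = rng.exponential(scale=1.0, size=100_000)  # stable but skewed process

      for n in (1, 2, 5, 10):
          xbars = raw[: (raw.size // n) * n].reshape(-1, n).mean(axis=1)
          print(f"subgroup size {n:2d}: skewness of subgroup averages = {stats.skew(xbars):.3f}")
      ```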

  • DaleW, July 31, 2017

    Hi Bill, did you know that British SPC (as espoused by John Oakland) had redone Range charts to try to make them work better? That's right, no fake symmetric ±3 sigma limits on Range or StDev charts for the British. Their constants draw lines at reasonable 0.1% tails based on a proper chi-square distribution for ideal normally distributed data. Guess they were tired of getting 0.92% false positives above the UCL on their simulated Moving Range charts, instead of the ideal 0.135%, so they fixed it.

    • bill, August 1, 2017

      I was not aware of that. I will take a look at his work. In the simulation used for the newsletter, the range chart had many false signals even for the ideal random sampling.
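      The 0.92% and 0.135% figures can be checked directly, since the moving range of consecutive normal values follows a known distribution. A minimal sketch (my own check, not Oakland's published constants):

      ```python
      # Minimal sketch: exceedance probabilities for a Moving Range chart on ideal
      # normal data. The moving range of consecutive N(0, sigma^2) values is
      # |N(0, 2*sigma^2)|, so tail probabilities come straight from the normal CDF.
      from scipy.stats import norm

      sigma = 1.0
      d2, D4 = 1.128, 3.267        # standard constants for subgroups of size 2
      mr_bar = d2 * sigma          # expected average moving range

      ucl_conventional = D4 * mr_bar                        # usual UCL = D4 * MRbar
      p_beyond = 2 * norm.sf(ucl_conventional / (sigma * 2**0.5))
      print(f"P(MR > D4*MRbar) = {p_beyond:.3%}  (the ~0.92% figure)")

      ucl_tail = sigma * 2**0.5 * norm.ppf(1 - 0.001 / 2)   # exact 0.1% tail limit
      print(f"0.1% tail UCL = {ucl_tail:.3f}*sigma = {ucl_tail / mr_bar:.2f}*MRbar")
      ```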

  • John D Kromkowski, October 21, 2017

    Maybe a mention of Chebyshev's inequality? At least 89% of the data in any unimodal distribution must fall within 3 standard deviations of the mean. Somewhere in the archives of the DEN (Deming Electronic Network, defunct for a while) there was a discussion about this.
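    For reference, the 89% figure is Chebyshev's inequality evaluated at k = 3; the bound requires only a finite variance, with no assumption about the shape of the distribution:

    ```latex
    P\left(|X - \mu| \ge k\sigma\right) \le \frac{1}{k^{2}}
    \qquad\Longrightarrow\qquad
    P\left(|X - \mu| < 3\sigma\right) \ge 1 - \frac{1}{3^{2}} = \frac{8}{9} \approx 88.9\%
    ```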

  • Jeff2017, November 13, 2017

    Though a normal distribution is not a prerequisite for a control chart, suppose there is a process whose historical data show that it has followed a normal distribution over several years. Now, instead of sampling, we run the system for 3 shifts and produce 50 consecutive parts, then measure the required characteristic and check the probability plot. The p-value is less than 0.05 and the box plot also shows an outlier. Can it be concluded that there is (at least) one special cause acting on this system? Any advice would be appreciated.

    • bill, November 13, 2017

      50 points may not be enough to determine if the data are normally distributed, particularly if you take them consecutively in a short period of time. You know your process – has anything you know of changed? I would not conclude that there is an issue based on 50 consecutive samples. Have you run a control chart before and after?
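      One way to see why a single low p-value on 50 points is weak evidence by itself: at a 0.05 significance level, a normality test will flag roughly 5% of samples drawn from a perfectly stable, normal process. A minimal sketch (assuming a Shapiro-Wilk test stands in for whatever normality test produced the p-value):

      ```python
      # Minimal sketch: how often a normality test "fails" (p < 0.05) on samples
      # of 50 points drawn from a perfectly stable, normal process.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      trials, n, alpha = 10_000, 50, 0.05

      rejections = 0
      for _ in range(trials):
          sample = rng.normal(loc=100, scale=2, size=n)  # in-control normal process
          _, p_value = stats.shapiro(sample)             # Shapiro-Wilk normality test
          rejections += p_value < alpha

      print(f"Rejected normality in {rejections / trials:.1%} of trials (expected ~5%)")
      ```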

  • Rich, March 8, 2018

    I'm confused. You state that the Range charts for the differing sample sizes "don't seem to be behaving like subgroup averages. They don't seem to be forming a normal distribution that is tighter than the original data as the subgroup size increases." I'm looking at the graphs you provided and they sure seem, to me, to behave just like the averages. The distribution may not be as tight as the average charts, but they sure look to me like they're moving to normality.

    • bill, March 8, 2018

      Subgroup averages tend to form a normal distribution very quickly as the subgroup size increases, because all the data in a subgroup are used to calculate the average. Ranges, regardless of subgroup size, use only two numbers: the max and the min. The range and averages charts do not behave the same way as the subgroup size increases.
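      A minimal sketch of that contrast (my own example, using a stable normal process with sigma = 5): the spread of the subgroup averages shrinks like sigma/sqrt(n) and their skewness stays near zero, while the subgroup ranges barely tighten and stay right-skewed.

      ```python
      # Minimal sketch: sampling distribution of subgroup averages versus subgroup
      # ranges as the subgroup size grows, for data from a stable normal process.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(1)

      for n in (2, 5, 10, 25):
          subgroups = rng.normal(loc=50, scale=5, size=(20_000, n))
          averages = subgroups.mean(axis=1)
          ranges = np.ptp(subgroups, axis=1)  # max - min of each subgroup
          print(f"n = {n:2d}: averages std = {averages.std():.2f}, skew = {stats.skew(averages):+.2f} | "
                f"ranges std = {ranges.std():.2f}, skew = {stats.skew(ranges):+.2f}")
      ```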
