Cpk vs Ppk: Who Wins?


Thanks so much for reading our publication. We hope you find it informative and useful. Happy charting and may the data always support your position.


Dr. Bill McNeese
BPI Consulting, LLC


Comments (58)

  • Anonymous, May 31, 2014

    Hi! I am a regular reader of all the articles posted on your website, and they are really informative and useful. Thanks a lot for posting. While going through this article, I feel it needs corrections in two places:
    1. We differentiate the standard deviation between Cpk and Ppk with the help of the symbols s and sigma. We use these symbols in the formulas, but in the explanation both cases use "s" as the symbol for standard deviation. This creates a little confusion while understanding the difference.
    2. You mentioned above the checklist of 5 steps as advised by Dr. Don. The first point clearly says that we need to construct the control chart to see if our data are in statistical control. The limits, or we could say the natural variation, are already calculated in this first point, so why are we asked to do the same in point number 3, "For a process that is in statistical control, calculate the natural variation in the process data"?
    Kindly clear these doubts. Thanks again for posting such wonderful posts.
    Regards, Ashok Pershad

    • bill, May 31, 2014

      Hello. Thanks for your comment. Yes, different books/articles/people handle s and sigma differently – or call them both s, as you said. There is no consistency in the approach. It would be better to use the terms "within" and "overall" to describe which one you are talking about. I typically use "s" for the overall and "σ" for the within.
      The natural variation is not the same as the control limits. The natural variation is 6σ. The control limits are based on what you are plotting, i.e., the subgroup averages in the examples in this article.
      Best Regards,

      • TMatu, September 30, 2022

        What is the definition of natural variation, and how do you calculate it? Is it the Cp calculation?

        • bill, October 1, 2022

          Hello, the natural variation is 6 times sigma. See above for how it is calculated. It is the denominator of Cp.
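The natural-variation arithmetic described above can be sketched in a few lines of Python. The subgroup values and specification limits below are made up for illustration; d2 = 2.059 is the standard control chart constant for subgroups of 4, as used in the article.

```python
# Estimate the "within" sigma from the average subgroup range (R-bar / d2),
# then the natural variation (6 * sigma), which is the denominator of Cp.
# Data and specification limits are hypothetical.

subgroups = [
    [99.2, 100.1, 100.8, 99.5],
    [100.4, 99.7, 101.0, 100.2],
    [99.9, 100.5, 99.1, 100.7],
]
USL, LSL = 104.0, 96.0  # hypothetical specification limits
d2 = 2.059              # control chart constant for subgroup size n = 4

r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
sigma_within = r_bar / d2
natural_variation = 6 * sigma_within
cp = (USL - LSL) / natural_variation
print(natural_variation, cp)
```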

  • Dnyandeo, December 31, 2014

    Thanks, really helpful. Since it is simple and plain, it carries no confusion. I have been a regular reader of your articles.

  • palash, October 4, 2016

    Hello Bill, could you please explain how you calculate the UCL and LCL in the X-bar chart shown in your article? Generally UCL = X-bar + 3*sigma and LCL = X-bar - 3*sigma. Please explain this!

    • bill, October 4, 2016

      The control limits are referred to as three sigma limits, but they are three sigma limits of what is being plotted. In this case, that is the subgroup averages. Plus, the value of sigma is estimated from the average range. We have a two-part series on Xbar-R control charts in the SPC Knowledge Base. The first part is here:
      We also have an article that explains where control limits come from:
      Let me know if these do not answer your questions.

      • Shan, December 7, 2021

        Good article, but it would be nice if you had arrows with an explanation of what each element in the formulas is called.
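The X-bar limit calculation described in the reply above can be sketched numerically. The subgroups below are invented; A2 = 0.729 is the standard constant for subgroups of 4, and the limits are the grand average plus or minus A2 * R-bar, i.e., three sigma limits for the subgroup averages with sigma estimated from the average range.

```python
# Three-sigma limits for an X-bar chart: the limits apply to the subgroup
# averages, with sigma estimated from the average range via the A2 constant.
# Data are hypothetical.

subgroups = [
    [10.1, 9.8, 10.3, 9.9],
    [10.0, 10.4, 9.7, 10.2],
    [9.9, 10.1, 10.0, 9.8],
]
A2 = 0.729  # control chart constant for subgroup size n = 4

x_bars = [sum(s) / len(s) for s in subgroups]
x_double_bar = sum(x_bars) / len(x_bars)
r_bar = sum(max(s) - min(s) for s in subgroups) / len(subgroups)
ucl = x_double_bar + A2 * r_bar
lcl = x_double_bar - A2 * r_bar
print(lcl, ucl)
```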

  • Henry, March 30, 2017

    What do you suggest is more valuable when the subgroup size of the control chart is 1? Is there any value in estimating sigma from a range chart? Or is the overall variation better, and consequently Ppk more valuable? What are your thoughts? Thank you.

    • bill, March 30, 2017

      When individual values are used, the moving range chart is used to estimate sigma. The moving range chart uses the range between consecutive points. So sigma estimated from the average moving range still looks at the variation in individual values. I don't think it makes a difference whether individual values or subgroups are used. I still find Cpk more valuable because it says what the process is capable of doing in the short term. Of course, if the process is in control, Cpk and Ppk will be essentially the same.

  • Samir, July 2, 2017

    Thank you very much. Good explanation; it took just 5 minutes to understand this concept.

  • Ashok, January 8, 2018

    Hi Bill, you have said that the X-bar chart for the process is not stable and the points are not within the control limits. Then how could we rely on the Cpk value? For an inconsistent process it shows the value to be higher than 2. So how could you conclude that Cpk is better than Ppk? I don't know whether my understanding is wrong; please explain. Thanks.

    • bill, January 8, 2018

      Hello. The point I am trying to make is that many people just calculate the Cpk value without considering whether or not the process is in statistical control. This one is not. So the Cpk value has no meaning – nor does the Ppk value. Since the process is not in control, you have no idea of what the results will be in the future. To have meaning, the process has to be in control. If it is, then Cpk and Ppk will be very close.

  • Brown-China, January 30, 2018

    Hi Bill, first, thanks for your information; it's really useful. I have two questions. 1. Many people think Ppk is an index that already accounts for special causes and common causes, so they also think there's no need to consider whether the process is in statistical control before calculating Ppk. I saw you said that if the process is not in statistical control, Cpk and Ppk are both meaningless; how do you understand the difference? 2. Whether the process is in development or in mass production, should we always calculate Cpk first?

    • bill, January 30, 2018

      If your process does not show some degree of consistency (being in statistical control), it is impossible to know what the near-term future looks like. You don't know where the process will be, so calculating anything on that process (average, Cpk, Ppk, etc.) doesn't give you any real information because you won't get similar results in the future. If you have lots of data, the impact of special causes can be less when calculating the standard deviation directly, but not when estimating it from a range control chart. I would always calculate Cpk, but you can calculate both. If they are similar, the process is probably in control.

  • bala, April 4, 2018

    If the Cpk is more than the Ppk, what does it mean?

    • bill, April 4, 2018

      If there is a large difference between the two, it usually means that the process is not in statistical control.
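The within/overall contrast behind that comparison can be sketched in Python. The subgroup data and specification limits below are made up; d2 = 2.059 is the constant for subgroups of 4. Cpk uses sigma estimated from the average range ("within"), while Ppk uses the ordinary standard deviation of all the values ("overall"), so for reasonably consistent data the two indices land close together.

```python
import statistics

# Made-up subgroup data and specs, purely for illustration.
subgroups = [
    [100.2, 99.8, 100.5, 99.9],
    [99.7, 100.3, 100.1, 99.6],
    [100.0, 100.4, 99.5, 100.2],
]
USL, LSL = 103.0, 97.0
d2 = 2.059  # control chart constant for subgroup size n = 4

values = [v for sg in subgroups for v in sg]
x_bar = sum(values) / len(values)

# "Within" sigma from the average range -> Cpk.
r_bar = sum(max(sg) - min(sg) for sg in subgroups) / len(subgroups)
sigma_within = r_bar / d2
cpk = min(USL - x_bar, x_bar - LSL) / (3 * sigma_within)

# "Overall" sigma from all the data -> Ppk.
sigma_overall = statistics.stdev(values)
ppk = min(USL - x_bar, x_bar - LSL) / (3 * sigma_overall)
print(cpk, ppk)
```

For these fairly consistent invented data the two values come out in the same neighborhood; a large gap between them is the out-of-control signal discussed in the thread.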

  • Steve-o, April 5, 2018

    Hi Bill, in the equation below Figure 5, and again in the equation below Figure 7, you use 2.059 for d2. But there are 30 observations in the subgroups for the averages. Why use the d2 for a subgroup of 4? Was that arbitrary?

    • bill, April 6, 2018

      d2 is a constant based on subgroup size, in this example, 4 since there are four samples per subgroup.  Yes, my choice of 4 was arbitrary for this example.

  • SD, April 25, 2018

    Excellent material.

  • Anonymous, June 21, 2018

    Hello Bill, really great article about SPC! I have one question: the only purpose of calculating Ppk seems to be to compare it with Cpk in order to see if the process is in statistical control or not. Ppk looks quite meaningless, doesn't it? There are different articles/opinions saying that Cpk/Ppk represent the short-/long-term capability of the process; what do you think? Thanks!

    • bill, June 21, 2018

      Thanks. Short and long term: yes, I have read those. Usually Cpk is called short term and Ppk long term. It is a matter of how quickly your process changes, I imagine. The only use for Ppk is if you can't ever get your process under control. But in that case you never know what it will be next time. So, quite meaningless actually, as you say.

  • Adriana Cortes, July 18, 2018

    I haven't seen tables with d2 for a subgroup of 1, but using your logic about the difference between Cpk and Ppk when the values are shuffled, I would think that the value will be the same for both? How do you calculate Cpk for a subgroup of 1?

    • bill, July 19, 2018

      If there are individual values, the average range is the average of the range between consecutive samples.  d2 is 1.128 in that case.  
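That moving-range estimate can be sketched as follows; the individual values are invented, and d2 = 1.128 is the constant for ranges of consecutive pairs (n = 2).

```python
# Estimate sigma for individual values from the average moving range
# between consecutive points.  Data are hypothetical.

values = [10.2, 10.5, 9.8, 10.1, 10.4, 9.9, 10.3, 10.0]
d2 = 1.128  # constant for moving ranges of size 2

moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
mr_bar = sum(moving_ranges) / len(moving_ranges)
sigma = mr_bar / d2
print(mr_bar, sigma)
```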

    • Noemi, October 5, 2018

      I love your blog; very nice colors and theme.
      Did you create this website yourself, or did you hire someone to do
      it for you? Please respond, as I'm looking to construct my own blog and would
      like to know where you got this from. Thanks.

  • Bana, October 3, 2018

    Hi, my question is: how did you get the d2 value of 2.059? Could you explain?
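d2 is a tabulated control chart constant that depends only on the subgroup size n; published tables give 2.059 for n = 4. A minimal lookup sketch:

```python
# A few d2 values from standard control chart constant tables,
# keyed by subgroup size n.
D2 = {2: 1.128, 3: 1.693, 4: 2.059, 5: 2.326, 6: 2.534}

# The article's subgroups contain 4 samples, hence d2 = 2.059
# and sigma is estimated as R-bar / 2.059.
print(D2[4])
```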

  • David203, June 6, 2019

    The example throws me off. I get that the goal is to be consistent, but in all things process-related, error closer to zero is good – or in this case, a greater Cpk is better. In the second data set the limits pull in naturally because the data show higher consistency. While that does produce control charts that show greater variance from the norm based on the small sample, it still exceeds the process requirements. If a process control chart results in a Cpk increase (bigger is better), why would this mean the process is out of control? The X-bar chart in data set 2 shows out-of-range points based on the small set, but the ultimate goal of exceeding expectations is being met. The process should not be compromised because a subset performed well and had some outliers that still fell into the greater range. Did I miss something?

    • bill, June 6, 2019

      Hello David,
      It is all about consistency. Unless your process is in control, you can't predict what it will make in the future. So even though an "out of control" process is within specification, it is not good – for you, or probably for your customer. Bringing it into control will reduce the variation and make the process even better. A Cpk increase does not mean that the process is out of control. Cpk has no meaning if the process is out of control, because you don't know the average or the variation.

      • David203, June 13, 2019

        When you say Cpk has no meaning if the process is out of control because you don't know the average or the variation, I disagree based on your example. If your Ppk is less than your Cpk, you are closer to, not further from, your average. And your variation is better than, not worse than, your established benchmark. This would indicate your process is performing in control, not out of control. It would indicate you could improve your process, and the data are telling you that you could do better, but that would be a business decision. It would make no sense at all to start looking for ways to decrease your Cpk to bring it closer to your Ppk when your process is becoming more consistent. If your example showed Cpk dropping consistently lower than Ppk, then I would agree with it, but this is not the case – your example shows Cpk significantly better than Ppk, which is good and in control.

        • bill, June 13, 2019

          You can, of course, choose not to look for a special cause of variation. You just miss the opportunity to hopefully find and remove the reason for the special cause. The purpose of this article was to say that, if your process is in statistical control, Ppk and Cpk will be the same, as shown in the first example. If they are significantly different, then that is an indication that your process is not in control – it is not consistent and predictable. You can't be sure of getting similar results later in the process.

  • Pavan, June 10, 2019

    Hi Sir, based on the example given, before calculating Cpk the data aren't verified for a normal distribution. The data provided result in a p-value of 0.039 (using the Anderson-Darling test for normality, it must be greater than 0.05), which denotes that the data do not follow a normal distribution (considering the 120 data points from the example). Now, can Cpk be calculated for data that do not follow a normal distribution without transforming the data? Please clarify / correct me. Thanks in advance.

    • bill, June 12, 2019

      I didn't worry about checking normality because the histogram looks close enough to me. Also remember that the Anderson-Darling test can give wrong indications for large data sets – which 120 probably is. If you take the first sample from each subgroup and run the normal probability plot with those 30 points, the p-value is 0.79 – which says the data are normally distributed. For large data sets, rely on the histogram – not the normal probability plot – to decide about normality, and of course on your knowledge of the process.

      • Ger de Waard, August 5, 2022

        I noticed the same as I was replaying it in Minitab 21. But if you swap the dataset numbers between Table 2 and Table 3, the p-value goes to 0.057, which is normally distributed, and your story, which is very explanatory, is more correct.

  • Ezhilarasan, July 15, 2019

    Hi, can you tell me whether my observation is right or wrong? 1. The between-subgroup variation is more when there is: 1a. material batch variation; 1b. operator variation; 1c. possibly measurement variation from one subgroup to another subgroup. The within-subgroup variation is less when the machine gives output whose range (standard deviation) is the same. Kindly reply whether I am right or wrong.

    • bill, July 17, 2019

      I am not sure I understand what you are asking. If the between-subgroup variation is much larger than the within-subgroup variation, the control limits will be very tight, and you should look at using an Xbar-mR-R chart.

  • Mark Anderson, August 28, 2019

    In the automotive manufacturing industry, the standard for when to use Ppk and Cpk differs somewhat. Maybe you could validate or explain the reasoning for this. In the automotive industry, Ppk is used for initial process studies and is based on a single run. Cpk is used to determine capability over multiple runs. My understanding is that this is because Ppk is a measure of process performance, and Cpk is a measure of process capability. And until you introduce all the different sources of variation, such as component lot-to-lot, operator, changeover, etc., you cannot say the process is stable or capable. This needs to be done over multiple runs. From a single run you can only analyze the current performance. That is why, for initial process studies with a single run, Ppk is used to evaluate the performance of the process and determine whether it meets expectations.

    • bill, August 29, 2019

      Thanks for the insights. I agree with what you say. A true process capability study has to have the potential sources of variation present.

  • Ram, December 8, 2019

    Hi Dr. Bill, thanks for the excellent explanation of Ppk vs Cpk; I understood it. I just have a question on the control charts shown. I believe that the LCL and UCL are calculated as the grand average +/- (A2 * R-bar). My thinking was that the value of A2 * R-bar should be close to 3 sigma, where sigma is the standard deviation of the subgroup averages. I was working out the numbers on Process 2, and these values are far apart. Appreciate your comment. Regards, Ram

  • kumal, May 31, 2020

    What needs to be calculated during part development, and why?

  • PRASHANTHA, August 12, 2020

    Is a Cpk value of 9 good? Anyhow, it is more than 1.67, which is the acceptance criterion for that parameter. Is there any ideal value, like from 1.33 to 2.00?

    • bill, August 12, 2020

      The higher the Cpk value, the better. 9 is very high; I seldom see one that high. What is acceptable depends on you and your customer.

  • Rajesh, September 8, 2020

    Very nicely explained the difference; I would also like to know about short-term and long-term sigma.

  • Tim I., October 29, 2020

    Hi Bill, great info.  Do you know where I can find Ppk ranges (like a table) and what ranges are capable and which are not?  From what I've read, Ppk values >1.33 are robust, 1.0<Ppk<1.33 are capable and Ppk values <1.0 are said to be out of control.  Thanks for your help!

    • bill, October 30, 2020

      A Ppk value less than 1 is not capable – the process is not capable of meeting specifications. It could be in statistical control, though. A Ppk value greater than 1 is capable. The desired value of Ppk is usually > 1.33 now.
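Those rules of thumb can be expressed as a tiny helper. The thresholds are conventions rather than universal standards, and the labels below are illustrative wording, not quoted from any standard:

```python
# Classify a Ppk value using common rule-of-thumb thresholds.
# Thresholds and labels are conventions, not universal standards.
def capability_label(ppk: float) -> str:
    if ppk < 1.0:
        return "not capable"
    if ppk < 1.33:
        return "capable"
    return "capable, meets the usual 1.33 target"

print(capability_label(0.9), "|", capability_label(1.5))
```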

  • nikunj, December 30, 2020

    Hello Bill, nice content in the article as well as the presentation. Just to know: Ppk is used in the development stage, but in the development stage there are results from only 3 to 5 batches, so can we still calculate a Ppk value? One more confusion: in the commercial stage, what should we calculate if we have 20 batches manufactured in one year and each batch has one reading of assay? How do we draw conclusions about the process?

    • bill, December 30, 2020

      In the development stage, you use whatever you have – that is all you can do. You can calculate process capability. Likewise with the 20 batches a year – that is all you have, so you just use that data.

  • Siddhesh Borse, February 18, 2021

    If the data distribution is non-normal, one should transform the data using a Box-Cox or Johnson transformation. But what about the specification limits? How should one transform those? Thank you for the article, Bill. Regards

    • bill, February 18, 2021

      You use the same equation that is used for the individual data points – just put in the specs.

  • Onur, September 14, 2021

    Thank you very much for this useful information.

  • Tayibur Rahman, September 25, 2021

    Need for analysis.

  • Anonymous, March 15, 2022

    Are identified assignable (special cause) variation data points included while performing a Ppk analysis? Can someone please answer?

    • bill, March 15, 2022

      My rule of thumb is that if you know the reason for the assignable cause and have corrected it, you may remove it from the calculations.  With enough data, it won't make much of a difference most likely.

  • Anonymous, October 25, 2022

    Hi Bill, I have some confusion about the concepts of Cpk and Ppk when using these tools practically. Let's say the cycle time of my process is 5 hours and the batch size is 100,000 pieces. So, if I want to calculate Cp and Cpk for short-term analysis, should I take some sample pieces (say 50 pieces) at the start of the process and calculate Cp and Cpk on the gathered samples? Then run the process and, for long-term analysis (Pp and Ppk), collect samples of 20 pieces after every 50,000 pieces or every 1 hour, then calculate Pp and Ppk with the help of this data? I want to ask whether my understanding of short term and long term is right or wrong; if wrong, please correct me.
    One more question: I read in different articles that for Pp and Ppk the whole population is considered for the study. In the above case, will the whole population be the 100,000 pieces, or can the 20 pieces I take at the defined frequency also be considered the whole population?
    Thanks for your time.

    • bill, October 26, 2022

      You can calculate Cp and Cpk as you indicated after getting 50 pieces. Pp and Ppk usually cover a longer time period – all the data if you have it. If your process is in control, the results will be about the same whether you take it long term or short term.
