Where I work, we generally perform DOEs to verify that a machine is capable before we release it to manufacture saleable product. Generally the DOE studies require that we meet a certain Cpk or Sigma-quality level for each run. So I analyze the resulting data by fitting a curve, and JMP outputs a Cpk and a predicted PPM defective. I use the Cpk to justify that the study results are acceptable. I have a couple of questions that have been bothering me for too long:

1a. I've often found that where JMP reports a Cpk, the PPM it predicts does not match the sources I find online (e.g., online tables give 6210 PPM for Cpk = 1.33, which disagrees with the PPM JMP predicts). See the attachments for the examples I'm referring to.

1b. Of the attached analyses, one fits a normal distribution and the other fits a non-parametric curve to the same data. Why does the normal distribution, with the higher Cpk, also have a higher PPM?

2. In review of the same study report, I was asked, "Why is a Cpk of 1.33 right for this process?" Normally I would say, "Well, a Cpk of X corresponds to a defect rate of XXX PPM," or something along those lines. However, the inconsistency in question 1 above threw a wrench into this idea, so I came here. How would you recommend justifying a Cpk?

Reply:

Many statisticians dislike Cpk and process capability indices intensely, for many reasons: fundamental assumptions are often violently violated; evil manipulations of these indicators are prevalent among some quality practitioners; and they are too often reported as point estimates with no regard for their sampling error. Many references (Montgomery; Wheeler; Kotz & Lovelace) document the "misuses", the "fantasy...and outright delusions", and the "statistical terrorism" of Cpk and process capability indices.

I find it very difficult to justify any Cpk, especially with small sample sizes. My experience suggests that one needs larger sample sizes than one can realistically get from DOEs: considerably greater than 200, and sometimes much larger than 5000, depending on the calculation method. All that said, looking at your data is better than not looking, and Cpk is better than nothing, as long as you're pragmatic about it and use that power for good and not evil. You may wish to try Cpk confidence intervals (I and others like the Nagata & Nagahata 1999 method) and use the lower confidence limit as the arbiter of manufacturability.

Good topics to discuss; I understand your last concept completely. Back in my day at Kodak, as a statistician working on innumerable new-product commercialization programs where DOE was the centerpiece of our product/process design work, we did exactly as you describe as well. That's a philosophical discussion to have over several adult beverages some day if we ever cross paths :) As you mention, we can discuss the merits of the simulation assumptions 'till we're blue in the face, as well as the oftentimes arbitrary assignment of "magic" process capability targets like 1.0, 1.33, or the chimeric Cpk = 1.5 for the magical "Six Sigma" goal. I have no quarrel or objection to using simulation within, say, the JMP Prediction Profiler to create estimated process capability indices in this manner; the issue I was raising with the OP is that he didn't explicitly say this is what he's doing. Maybe I'm just being too 'literal man' here.
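A likely source of the Cpk-versus-PPM mismatch raised in the thread: the widely circulated "sigma level" tables (6210 PPM at Cpk = 1.33, i.e., a 4-sigma process) build in the Six Sigma convention of a 1.5-sigma mean shift, whereas a capability estimate computed directly from a fitted distribution does not. A minimal stdlib-only sketch of both conventions, assuming a normal process with symmetric spec limits (the `ppm_from_cpk` helper is hypothetical, not a JMP function):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def ppm_from_cpk(cpk, shift=0.0):
    """Expected nonconforming PPM for a normal process whose nearest spec
    limit sits 3*cpk sigmas from the mean. shift=0 counts both tails of a
    centered process; shift>0 moves the mean (in sigmas) toward the nearest
    limit and counts only that worst tail, which is how the classic Six
    Sigma 'sigma level' tables are constructed."""
    z = 3.0 * cpk
    if shift == 0.0:
        return 2.0 * phi(-z) * 1e6   # centered: both tails contribute
    return phi(-(z - shift)) * 1e6   # shifted: worst tail dominates

# Cpk = 1.33 (a "4 sigma" process):
print(ppm_from_cpk(4 / 3))        # ~63 PPM with no shift (fitted-model view)
print(ppm_from_cpk(4 / 3, 1.5))   # ~6210 PPM with the 1.5-sigma shift (web tables)
```

Under these assumptions a centered Cpk = 1.33 process yields about 63 PPM out of spec, while the shifted convention yields about 6210 PPM, matching the online tables; neither figure is wrong, they simply answer different questions.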
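The suggestion to report a Cpk confidence interval rather than a bare point estimate can be sketched with a common normal-approximation interval (a Bissell-style standard error; not necessarily the Nagata & Nagahata form cited in the reply, and the function name is hypothetical):

```python
from math import sqrt

def cpk_lower_bound(cpk_hat, n, z=1.645):
    """Approximate one-sided lower confidence bound for Cpk using the
    common normal-approximation standard error
        se = sqrt(1/(9n) + cpk_hat^2 / (2(n-1))).
    z = 1.645 gives roughly a 95% one-sided bound."""
    se = sqrt(1.0 / (9.0 * n) + cpk_hat**2 / (2.0 * (n - 1)))
    return cpk_hat - z * se

# The same point estimate of 1.33 supports very different claims
# depending on sample size:
print(round(cpk_lower_bound(1.33, 30), 2))    # small DOE-sized sample
print(round(cpk_lower_bound(1.33, 300), 2))   # larger capability study
```

With n = 30 the lower bound falls near 1.03, well below the 1.33 target, while n = 300 tightens it to about 1.24, which illustrates the reply's point that small DOE samples rarely support a firm capability claim.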