Measurement Systems Analysis (MSA or Gage R&R)
As specifications become increasingly stringent, quantifying the variability inherent in the measurement process becomes critical. A Cpk value reflects not only variability in the product, but also variability in the measurement process. To improve a process, the measurement process for the characteristic of interest must be adequate: it must be stable, precise, and unbiased.
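To see how measurement noise inflates the apparent process spread and deflates an observed Cpk, here is a minimal numeric sketch. All figures (specification limits, standard deviations) and the `cpk` helper are hypothetical illustrations, not values or methods from this document:

```python
import math

# Hypothetical numbers: how measurement noise deflates an observed Cpk.
sigma_product = 0.08       # true product-to-product standard deviation
sigma_meas = 0.05          # measurement-process standard deviation
usl, lsl, mean = 10.3, 9.7, 10.0

# Independent variances add, so the observed spread exceeds the true one.
sigma_obs = math.sqrt(sigma_product**2 + sigma_meas**2)

def cpk(mu, s, lo, hi):
    """Standard Cpk: distance to the nearer spec limit over 3 sigma."""
    return min(hi - mu, mu - lo) / (3 * s)

print(f"true Cpk:     {cpk(mean, sigma_product, lsl, usl):.3f}")
print(f"observed Cpk: {cpk(mean, sigma_obs, lsl, usl):.3f}")
```

With these numbers the observed Cpk is noticeably lower than the true product capability, even though the product itself has not changed.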
Two sources of variability that are traditionally studied in measurement systems analysis are repeatability variation and reproducibility variation. Repeatability variation reflects variation in repeated measurements of the same object by the same individual, using the same gage. Reproducibility variation is the variation that results when different operators or raters measure the same items. The overall variability of a measurement process consists of these two sources of variation, as well as other sources such as different gages, conditions, locations, etc.
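The repeatability/reproducibility decomposition above can be sketched for a small crossed study (parts × operators × repeated trials). The data and the simplified no-interaction estimator below are illustrative assumptions, not a prescribed analysis method:

```python
import numpy as np

# Illustrative (hypothetical) data: measurements[part, operator, trial]
measurements = np.array([
    [[10, 12], [11, 13]],   # part 1: operator A trials, operator B trials
    [[20, 22], [21, 23]],   # part 2
    [[30, 32], [31, 33]],   # part 3
], dtype=float)

def gage_rr_variances(m):
    """Estimate repeatability and reproducibility variance components
    for a crossed parts x operators x trials study (no interaction term)."""
    p, o, r = m.shape
    # Repeatability: pooled within-cell (same part, same operator) variance.
    repeatability = m.var(axis=2, ddof=1).mean()
    # Reproducibility: variance of the operator means, corrected for the
    # repeatability noise that each operator mean still contains.
    op_means = m.mean(axis=(0, 2))
    reproducibility = max(0.0, op_means.var(ddof=1) - repeatability / (p * r))
    return repeatability, reproducibility

rpt, rpd = gage_rr_variances(measurements)
print(f"repeatability variance:   {rpt:.4f}")
print(f"reproducibility variance: {rpd:.4f}")
print(f"total gage R&R variance:  {rpt + rpd:.4f}")
```

In practice a full ANOVA (including a part-by-operator interaction term) is the usual analysis; the point here is only that the total measurement variation splits into a within-operator and a between-operator component.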
One of the goals of an MSA (or Gage R&R) is to quantify the total variation in the measurement process in order to determine whether the resulting measurements are precise enough. For example, if the variation in the measurement process consumes half of the specification range, the measurement process is not adequate for the characteristic of interest. Another goal of an MSA is to help focus efforts to improve the measurement process, by prioritizing which sources of variation to reduce.
For continuous (variables) measurement systems, the capability of the system is often expressed as a precision-to-tolerance ratio (P/T ratio): an interval whose length is six times the estimated standard deviation of the measurement process, as determined from the MSA, divided by the specification range. For attribute measurement systems, the measurements often consist of categories, such as go/no-go decisions or ordinal rating scales. Here, the consistency of the measurement system is often expressed by a kappa value, which measures chance-adjusted agreement both within operators or raters (repeatability) and between them (which reflects both reproducibility and repeatability).
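Both summary statistics can be computed in a few lines. The following is a hedged sketch: the specification limits, standard deviation, and rating data are invented for illustration, and the kappa shown is the simple two-rater (Cohen) form rather than a full attribute agreement analysis:

```python
# Hypothetical numbers for illustration.
sigma_meas = 0.05          # estimated measurement-process std. dev. from the MSA
usl, lsl = 10.3, 9.7       # specification limits

# P/T ratio: six measurement standard deviations over the tolerance width.
pt_ratio = 6 * sigma_meas / (usl - lsl)
print(f"P/T ratio: {pt_ratio:.2f}")

def cohens_kappa(a, b):
    """Chance-adjusted agreement between two sets of categorical ratings."""
    assert len(a) == len(b)
    n = len(a)
    cats = set(a) | set(b)
    p_obs = sum(x == y for x, y in zip(a, b)) / n
    # Expected agreement if the two raters labeled independently.
    p_exp = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)
    return (p_obs - p_exp) / (1 - p_exp)

rater1 = ["go", "go", "no-go", "go", "no-go", "go"]
rater2 = ["go", "no-go", "no-go", "go", "no-go", "go"]
print(f"kappa: {cohens_kappa(rater1, rater2):.3f}")
```

Comparing two rating passes by the same operator with this function gauges repeatability; comparing passes by different operators gauges the between-operator agreement described above.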
NHG also provides support and services in the following areas: