- Last Update On : 2016-06-12
Precision is repeatability. Precision indicates how well a method or instrument gives the same result when a single sample is tested repeatedly. Precision measures the random error of a method, which is the scatter in the data. Precision does not indicate whether an instrument is reporting the correct result; that is accuracy.
Two types of precision are measured: within run and between run. Within run precision provides an optimistic estimate of the expected daily performance of a method or instrument, since there is minimal opportunity for operating conditions to change during a single analytical run. Within run precision must be evaluated and accepted before proceeding with more comprehensive studies.
Quality control material is available commercially. At least two, and sometimes three, concentrations of quality control material should be run for each analyte. Concentration of the analyte in the quality control samples should be as close as possible to the upper and lower medical decision points. These decision points could represent the upper and lower reference values or nationally recommended decision points.
The Clinical and Laboratory Standards Institute recommends running two levels of quality control material three times per run for five different runs, giving 15 replicates of each level. Most in vitro diagnostic companies use this protocol when they install a new instrument in a clinical laboratory.
Some laboratories believe that a good precision study should include 20 to 50 replicates. The larger the number of replicates, the more confident you can be in the precision results. For example, if the true SD of a method is 1.00, a precision estimate based on 20 replicates might range from 0.76 to 1.46. The precision estimate based on 50 replicates is narrower, ranging from 0.84 to 1.24.
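These intervals come from the chi-square distribution of the sample variance. A minimal Python sketch of the calculation, assuming a 95% interval and using standard chi-square table values for 19 and 49 degrees of freedom (the function name is illustrative; Python's standard library has no inverse chi-square function, so the percentiles are hardcoded):

```python
import math

# 95% CI for a sample SD s from n replicates:
#   s * sqrt((n-1)/chi2_97.5)  to  s * sqrt((n-1)/chi2_2.5)
# where chi2_2.5 and chi2_97.5 are percentiles of chi-square with
# n-1 degrees of freedom. Values below are standard table values.
CHI2 = {19: (8.907, 32.852), 49: (31.555, 70.222)}  # df: (2.5th, 97.5th)

def sd_confidence_interval(s, n):
    """95% confidence interval for a sample SD based on n replicates."""
    lo_pct, hi_pct = CHI2[n - 1]
    return (s * math.sqrt((n - 1) / hi_pct),
            s * math.sqrt((n - 1) / lo_pct))

# With a true SD of 1.00, 20 replicates give a much wider interval than 50:
print(sd_confidence_interval(1.00, 20))  # roughly (0.76, 1.46)
print(sd_confidence_interval(1.00, 50))  # roughly (0.84, 1.25)
```

The asymmetry of the intervals (wider above the SD than below) is a property of the chi-square distribution, not a quirk of the data.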
Mean, standard deviation (SD), and coefficient of variation (CV) are calculated for each level using a spreadsheet.
- Mean is the average value, which is calculated by adding the results and dividing by the total number of results.
- SD is the primary measure of dispersion or variation of the individual results about the mean value. The easiest way to calculate SD is use the statistical tools present in a spreadsheet such as Excel. The greater the imprecision, the larger the standard deviation will be. For many analytes, SD varies with sample concentration. Using glucose as an example, an SD of 10 for a 400 mg/dL sample indicates very good precision, but an SD of 10 for a 40 mg/dL sample represents very poor precision.
- CV is the SD expressed as a percent of the mean (CV = standard deviation/mean x 100). The higher the standard deviation, the greater the percentage of the mean it becomes and the higher the %CV.
- The 95% confidence interval for SD is a measure of the precision of the precision estimate. The width of the confidence interval depends on the number of samples analyzed and the intrinsic SD of the method.
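These statistics are just as easy to compute outside a spreadsheet. A minimal Python sketch using only the standard library; the function name and the replicate values are illustrative, echoing the glucose example above:

```python
import statistics

def precision_stats(results):
    """Return mean, SD (n-1 denominator, as spreadsheets use), and %CV."""
    mean = statistics.mean(results)
    sd = statistics.stdev(results)   # sample SD, matching Excel's STDEV
    cv = sd / mean * 100             # CV as a percent of the mean
    return mean, sd, cv

# Illustrative glucose replicates (mg/dL): the same SD produces a
# small %CV at 400 mg/dL but a very large %CV at 40 mg/dL.
high = [400, 410, 390, 405, 395]   # SD ~7.9 -> CV ~2%
low = [40, 50, 30, 45, 35]         # SD ~7.9 -> CV ~20%
```

This is why %CV, not raw SD, is usually compared across concentration levels.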
If an instrument or method has good precision, 95% of values should fall within 2 standard deviations of the mean. That means that no more than 1 of the 20 results should fall outside of 2 standard deviations.
Calculated SD and CV should be compared to the manufacturer's published statistics. If the obtained results are higher than the manufacturer's claim, an investigation must be undertaken before proceeding further with the method evaluation.
Below is an example of within run precision for Level 1 quality control material for sodium. The QC material is repeated 30 times and the following results are obtained: 110, 110, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 111, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 112, 113, 113, 113, and 113.
The sum of the thirty results is 3349. The mean is calculated by dividing 3349 by 30, which gives a result of 111.6. SD is 0.8 and CV is 0.72%. The 95% confidence interval is the mean +/- 2 SD, or 110.0 – 113.2.
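The same arithmetic can be checked in a few lines of Python; a sketch using the standard library:

```python
import statistics

# The 30 within-run sodium results from the example above.
results = [110] * 2 + [111] * 11 + [112] * 13 + [113] * 4

mean = statistics.mean(results)   # 111.63..., reported as 111.6
sd = statistics.stdev(results)    # sample SD: 0.80..., reported as 0.8
cv = sd / mean * 100              # 0.72...%
print(round(mean, 1), round(sd, 1), round(cv, 2))  # -> 111.6 0.8 0.72
```

Note that the reported interval of 110.0 – 113.2 is obtained by adding and subtracting 2 x 0.8 from the rounded mean of 111.6.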
Between run precision is a better indicator of a method’s overall precision than within run precision because it measures the amount of random error inherent in the method from day to day. Between run precision is affected by many variables such as changes in operators, reagents and ambient operating conditions.
Concentration of the analyte in quality control samples should be as close as possible to the upper and lower medical decision points (usually the reference limits). Between run precision should be evaluated over at least 20 days using at least 2 reagent lots. The mean, standard deviation, and coefficient of variation are calculated for each level.
The standard deviation obtained during day-to-day replication studies is expected to be greater than the standard deviation of within run studies. The maximum allowable between run standard deviation is a matter of judgment. Generally, it should be less than total allowable error (see Appendix B).
Acceptable CVs need to be defined for each analyte based on medical significance. Generally, precision should be equal to or less than one half of the within subject biological variation. Desirable precision levels for some common chemistry analytes are summarized in the following table.
Precision levels vary depending on the analyte and the method. Generally, electrolytes and creatinine have very low CV% indicating very good precision. Enzymes and immunoassays typically have higher CV%. One other rule of thumb is that a method's CV or SD should be <1/8th of the reference range width. This can be calculated more easily using the following formula:
4(CV or SD) < 1/2 of reference range width
If the reference range for sodium is 136-145 meq/L, the width is 145 - 136 = 9 and one-half of the width is 4.5. An SD of 1.25 gives 4 x 1.25 = 5, which slightly exceeds that cutoff, so an SD of 1.25 is borderline by this rule.
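The rule reduces to a one-line check; a minimal sketch, assuming the width is taken as the difference between the reference limits (the function name and the SD values are illustrative):

```python
def within_range_rule(sd, ref_low, ref_high):
    """Rule of thumb: 4 x SD should be less than half the reference
    range width; equivalent to SD < (ref_high - ref_low) / 8."""
    width = ref_high - ref_low
    return 4 * sd < width / 2

# Sodium with a reference range of 136-145 meq/L (width 9, half-width 4.5):
print(within_range_rule(1.0, 136, 145))  # True:  4 x 1.0 = 4.0 < 4.5
print(within_range_rule(2.0, 136, 145))  # False: 4 x 2.0 = 8.0 > 4.5
```

Analytes with narrow reference ranges, such as sodium, therefore demand much tighter SDs than analytes with wide ranges.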
Precision calculated using quality control material is typically better than precision obtained with actual patient samples. As a result, physicians may sometimes notice a drift in assay values before a laboratory’s quality control program detects a problem. Clinical laboratories need to continuously review quality control results and tighten the ranges as much as possible.