# Method Comparison

- Last Update On: 2016-07-04

Method comparison can be considered a measure of accuracy as long as the reference method is known to be accurate. Method comparison involves testing patient samples during a number of different analytical runs by both the new and current methods. Ideally, the comparison method should be a reference method, but usually it is the existing method in one’s own laboratory or a reference laboratory. Method comparison should be combined with the between run precision study.

At least 40 patient samples should be analyzed by both methods with at least 2 reagent lots on each analyzer. Any specimen with large differences between results of the two methods should be reanalyzed by both methods in the next run. Analyte concentrations should span the entire analytical range, represent a broad range of disease states, and be from both men and women.

The results are plotted in a spreadsheet using an XY plot. The test method (new) is plotted on the Y axis (dependent) and the reference method (existing) on the X axis (independent). A linear regression line is inserted through the data points and the slope and Y intercept are calculated. Linear regression is available in most spreadsheets.
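As a sketch, the slope, Y intercept, and R² can be computed directly with NumPy; the glucose-like values below are invented for illustration, not taken from the text:

```python
# Sketch: ordinary least-squares fit of new-method (Y) vs. reference-method (X) results.
# The values below are made-up illustration data.
import numpy as np

reference = np.array([50.0, 80.0, 120.0, 160.0, 200.0, 250.0])   # existing method (X)
new_method = np.array([52.0, 83.0, 124.0, 166.0, 208.0, 259.0])  # new method (Y)

# np.polyfit with degree 1 returns (slope, intercept) for the best-fit line Y = mX + b
slope, intercept = np.polyfit(reference, new_method, 1)
r = np.corrcoef(reference, new_method)[0, 1]
print(f"slope={slope:.3f}  intercept={intercept:.2f}  R^2={r**2:.4f}")
```

Any spreadsheet's built-in linear regression gives the same three quantities; the code simply makes the calculation explicit.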

The best-fit line is defined by the equation Y = mX + b, where m is the slope and b is the Y intercept. A perfect correlation will have all points lying on a line at a 45° angle to the X axis. This line will have a Y intercept of 0 and a slope of 1, the coefficient of determination (R²) will be 1.00, and the standard error will be 0.

Linear regression is commonly used to analyze method comparison data but has the following limitations:

- Data points must be limited to the linear range. The spread of sample values should test the total linearity of the instrument: roughly 10 specimens at the high end, 20 in the middle, and 10 at the low end.
- The data range must be broad enough to allow reliable linear regression. For an ideal linear regression it is best to have data where the highest value is three times the lowest value. Therefore, linear regression is not a good tool to compare methods with a narrow analytical or clinical range (e.g. sodium or potassium).
- Outliers must be identified and reanalyzed. The linear regression line is weighted to points at the extreme values. Outliers can greatly influence linear regression. In general, no more than one outlier should be excluded in a set of 40 patient comparisons.
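One simple way to screen for outliers is to look at the between-method differences with a robust MAD-based z-score; the 3.5 threshold and the data below are illustrative assumptions, not a rule from the text:

```python
# Sketch: flag candidate outliers from the between-method differences using a
# robust MAD-based z-score. The 3.5 cutoff is a common rule of thumb, not a standard.
import numpy as np

def flag_outliers(x, y, threshold=3.5):
    d = y - x                            # between-method differences
    med = np.median(d)
    mad = np.median(np.abs(d - med))     # median absolute deviation
    robust_z = 0.6745 * (d - med) / mad  # 0.6745 rescales MAD to ~1 SD for normal data
    return np.abs(robust_z) > threshold

x = np.array([50.0, 80.0, 120.0, 160.0, 200.0, 250.0])
y = np.array([52.0, 83.0, 124.0, 400.0, 208.0, 259.0])  # 4th result deliberately aberrant
print(flag_outliers(x, y))
```

A MAD-based screen is used here rather than regression residuals because, with only a handful of points, a single gross outlier inflates the residual standard deviation enough to hide itself.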

Besides ordinary linear regression, one can also use Deming and Passing-Bablok regression analysis. Some vendors and method evaluation software packages provide these analyses. Ordinary regression assumes that the reference method plotted on the X axis is free from error and may therefore underestimate the true slope. Deming regression does not assume that the reference method is free from error. Deming regression may be the best approach when the two methods are expected to be identical and the data are normally distributed and free of outliers. Passing-Bablok regression is used for nonparametric data, meaning data that do not follow a normal Gaussian distribution. Passing-Bablok performs better than Deming regression when outliers are present. The main weakness of Passing-Bablok is that it is computationally intensive and best suited to a large number of data points; it is unreliable for small sample sizes.
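Deming regression has a simple closed form when the error variances of the two methods are assumed equal (lambda = 1). A minimal sketch, using invented, perfectly collinear data so the expected line (slope 1.06, intercept 2) is known in advance:

```python
# Sketch: Deming regression assuming equal error variances for both methods (lam = 1),
# using the standard closed-form estimator.
import numpy as np

def deming(x, y, lam=1.0):
    x, y = np.asarray(x, float), np.asarray(y, float)
    sxx = np.var(x, ddof=1)
    syy = np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    # Closed-form Deming slope; unlike OLS, errors in x are accounted for.
    slope = (syy - lam * sxx + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    intercept = y.mean() - slope * x.mean()
    return slope, intercept

x = [50.0, 80.0, 120.0, 160.0, 200.0, 250.0]
y = [55.0, 86.8, 129.2, 171.6, 214.0, 267.0]  # exactly Y = 1.06 X + 2 (invented)
m, b = deming(x, y)
print(f"Deming slope={m:.3f}, intercept={b:.2f}")
```

Passing-Bablok, by contrast, works from the medians of pairwise slopes, which is why it tolerates outliers but becomes computationally heavy as the number of points grows.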

Unfortunately, most method comparisons are imperfect. The types of error detected by method comparison are illustrated in the following graph. The slope estimates proportional error and the Y intercept estimates constant error between the two methods. R² indicates how close the data points lie to the regression line.

Proportional error is often due to calibration differences between the two methods. The new method is a fixed percentage of the reference value at all concentrations tested. For example, if the reference values for glucose samples are 100 and 200 mg/dL and the new method values are 110 and 220 mg/dL, proportional error is 10%. Proportional error is defined as deviation from the ideal slope of 1.00. It is calculated as (slope - 1) x 100. For example, a slope of 0.99 indicates a -1% proportional error. Proportional error can often be corrected by adjusting the calibration.

Constant bias may be related to calibration or set point issues with one instrument. With constant error, new method values stay above or below reference values by a fixed amount as the level of analyte is increased. For example, if the reference values for two glucose samples are 100 and 200 mg/dL, the new method values might be 130 and 230 mg/dL. Constant error is indicated by the Y intercept. In the absence of constant error, the regression line passes through the origin and the Y intercept is 0.00. The amount of deviation from the origin indicates the degree of constant error. If the Y intercept is 2 mg/dL (and the slope is 1.00), every Y value exceeds the corresponding X value by 2 mg/dL. Constant error can often be corrected by adjusting the blank.
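The two glucose examples from the text can be worked through directly: fitting each pair of points recovers a pure proportional error in the first case and a pure constant error in the second.

```python
# Worked example using the glucose figures from the text: the slope gives
# proportional error as (slope - 1) x 100, and the intercept gives constant error.
import numpy as np

reference = np.array([100.0, 200.0])
proportional_case = np.array([110.0, 220.0])  # new method reads 10% high
constant_case = np.array([130.0, 230.0])      # new method reads 30 mg/dL high

m1, b1 = np.polyfit(reference, proportional_case, 1)
m2, b2 = np.polyfit(reference, constant_case, 1)
print(f"proportional case: slope={m1:.2f} -> {(m1 - 1) * 100:.0f}% error, intercept={b1:.1f}")
print(f"constant case:     slope={m2:.2f}, intercept={b2:.1f} mg/dL constant error")
```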

Random error is a mistake in one test or test run that is independent of another test or test run. The correlation coefficient (r, or its square R²) and the standard error are used to estimate random error. The correlation coefficient is an estimate of the degree of association between the two methods. A low R² may indicate an inadequate range of values studied, individual samples with interferences, or poor correlation between the two methods. Generally, an R² of 0.98 or higher indicates acceptable correlation between the new and reference methods.

Standard error indicates the scatter of points about the regression line. If the points are widely scattered about the line, there is a significant amount of random error between the two methods. It is calculated as the standard deviation of the differences between the test and reference method values. The individual differences can be positive or negative and can be due to the reference method, the test method, or a matrix effect; the standard error itself summarizes the magnitude of their spread.
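Following the text's definition, the standard error can be sketched as the SD of the between-method differences; the paired results below are invented for illustration:

```python
# Sketch: estimate random error as the standard deviation of the between-method
# differences, per the text's definition. Values are illustrative.
import numpy as np

reference = np.array([50.0, 80.0, 120.0, 160.0, 200.0, 250.0])
new_method = np.array([53.0, 78.0, 125.0, 157.0, 204.0, 248.0])

differences = new_method - reference  # individual differences may be + or -
se = differences.std(ddof=1)          # the SD itself is always non-negative
print(f"standard error = {se:.2f} mg/dL")
```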

Systematic error (labeled "both" in the graph) is the sum of constant and proportional error and is an indication of accuracy. Total analytic error is random error plus systematic error. The ultimate goal, which may not always be attainable with current technology, is to have a total analytic error that does not exceed total allowable error. Appendix B lists the total allowable error for many analytes. Alternatively, it can be estimated by multiplying intra-individual variation by 0.5.
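The text does not give a formula for combining the two components; one common estimator (an assumption here, not stated in the text) takes total error as the absolute mean bias plus roughly two standard deviations of the differences:

```python
# Sketch: a common total-error estimate, TE = |bias| + 1.96 * SD of the differences.
# This particular combination is an assumption, not prescribed by the text.
import numpy as np

differences = np.array([3.0, -2.0, 5.0, -3.0, 4.0, -2.0])  # new minus reference, mg/dL
bias = differences.mean()          # systematic component
sd = differences.std(ddof=1)       # random component
total_error = abs(bias) + 1.96 * sd
print(f"bias={bias:.2f}, SD={sd:.2f}, total error ~ {total_error:.2f} mg/dL")
```

The resulting figure would then be compared against the total allowable error for the analyte, as described above.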

Another way to visualize method comparison data is to prepare a bias plot, also called a difference plot or a Bland-Altman plot. The bias plot illustrates the degree to which the new method differs from the reference method. It is a scatter plot with the reference method values (X) on the x-axis and the difference between the two methods (Y - X) on the y-axis; thus, bias = Y - X. Bias plots can be expressed as absolute or percent differences.

If both methods gave exactly the same results, all points would fall directly on the zero bias line. This ideal result is unlikely because both methods are subject to random error. In a good bias plot, the data points are centered on the zero line and scatter about it with roughly uniform width across the measuring range. Constant bias is present when Y is consistently greater or less than X by a fixed amount; the data points then cluster around an average bias line instead of the zero line.

Proportional bias is present when Y differs from X in a way that is proportional to X. For example, Y may be consistently 5% higher than X at all values of X. On the bias plot, the data points cluster around an upward or downward sloping line instead of a horizontal line.
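The quantities behind such a plot are simple to compute; the invented results below show a constant positive bias, with every point sitting above the zero line near the mean bias:

```python
# Sketch: data for a bias (difference) plot as described in the text: reference
# values on the x-axis, Y - X on the y-axis, mean bias as the center line.
# The paired results are invented to illustrate a constant positive bias.
import numpy as np

reference = np.array([50.0, 80.0, 120.0, 160.0, 200.0, 250.0])
new_method = np.array([54.0, 85.0, 123.0, 166.0, 203.0, 257.0])

bias = new_method - reference            # absolute bias (Y - X)
pct_bias = 100.0 * bias / reference      # percent-difference form of the same plot
mean_bias = bias.mean()
print("bias:", bias)
print(f"mean bias = {mean_bias:.2f} mg/dL")
# e.g. with matplotlib: plt.scatter(reference, bias); plt.axhline(mean_bias)
```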