The accuracy of an analytical procedure is the closeness of test results obtained by that procedure to the true value. The accuracy of an analytical procedure should be established across its range.
In the case of the assay of a drug substance, accuracy may be determined by application of the analytical procedure to an analyte of known purity (e.g., a Reference Standard) or by comparison of the results of the procedure with those of a second, well-characterized procedure, the accuracy of which has been stated or defined.
In the case of the assay of a drug in a formulated product, accuracy may be determined by application of the analytical procedure to synthetic mixtures of the drug product components to which known amounts of analyte have been added within the range of the procedure. If it is not possible to obtain samples of all drug product components, it may be acceptable either to add known quantities of the analyte to the drug product (i.e., to spike) or to compare results with those of a second, well-characterized procedure, the accuracy of which has been stated or defined.
In the case of quantitative analysis of impurities, accuracy should be assessed on samples (of drug substance or drug product) spiked with known amounts of impurities. Where it is not possible to obtain samples of certain impurities or degradation products, results should be compared with those obtained by an independent procedure. In the absence of other information, it may be necessary to calculate the amount of an impurity based on comparison of its response to that of the drug substance; the ratio of the responses of equal amounts of the impurity and the drug substance (relative response factor) should be used if known.
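The relative response factor calculation described above can be sketched as follows. This is a minimal illustration with assumed peak areas and an assumed RRF value, not a prescribed computation:

```python
# Hypothetical sketch: estimating an impurity level from peak areas
# using a relative response factor (RRF). All numeric values are assumed.
def impurity_percent(imp_area: float, drug_area: float, rrf: float) -> float:
    # RRF = response of the impurity relative to an equal amount of the
    # drug substance, so the impurity response is divided by the RRF
    # before taking the ratio to the drug substance response.
    return 100.0 * (imp_area / rrf) / drug_area

# Assumed areas: impurity peak 1250, drug substance peak 985000, RRF 0.85.
print(round(impurity_percent(1250.0, 985000.0, 0.85), 3))
```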
Accuracy is calculated as the percentage of recovery by the assay of the known added amount of analyte in the sample, or as the difference between the mean and the accepted true value, together with confidence intervals.
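The percent-recovery calculation can be sketched as below; the spike amount and measured result are assumed values for illustration only:

```python
# Hypothetical sketch: accuracy expressed as percent recovery of a
# known added (spiked) amount of analyte.
def percent_recovery(measured: float, added: float) -> float:
    """Recovery as a percentage of the known added amount."""
    return 100.0 * measured / added

# Assumed example: 0.495 mg recovered from a 0.500 mg spike.
print(round(percent_recovery(0.495, 0.500), 1))  # 99.0
```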
The ICH documents recommend that accuracy should be assessed using a minimum of nine determinations over a minimum of three concentration levels, covering the specified range (i.e., three concentrations and three replicates of each concentration).
Assessment of accuracy can be accomplished in a variety of ways, including evaluating the recovery of the analyte (percent recovery) across the range of the assay, or evaluating the linearity of the relationship between estimated and actual concentrations. The statistically preferred criterion is that the confidence interval for the slope be contained in an interval around 1.0, or alternatively, that the slope be close to 1.0. In either case, the interval or the definition of closeness should be specified in the validation protocol. The acceptance criterion will depend on the assay and its variability and on the product. Setting an acceptance criterion based on the lack of statistical significance of the test of the null hypothesis that the slope is 1.0 is not an acceptable approach.
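The confidence-interval criterion for the slope can be sketched as follows, assuming illustrative data and a protocol-defined acceptance interval of 0.95 to 1.05 (the interval itself would be specified in the validation protocol, not fixed in advance here):

```python
import math

# Hypothetical sketch: check whether the 95% confidence interval for the
# slope of estimated-vs-actual concentration lies within an interval
# around 1.0. Data and acceptance interval are assumed for illustration.
actual    = [80.0, 90.0, 100.0, 110.0, 120.0]   # % of test concentration
estimated = [79.8, 90.2, 99.9, 110.2, 120.0]    # assay results

n = len(actual)
mx = sum(actual) / n
my = sum(estimated) / n
sxx = sum((x - mx) ** 2 for x in actual)
sxy = sum((x - mx) * (y - my) for x, y in zip(actual, estimated))
slope = sxy / sxx
intercept = my - slope * mx
sse = sum((y - (intercept + slope * x)) ** 2
          for x, y in zip(actual, estimated))
se_slope = math.sqrt(sse / (n - 2)) / math.sqrt(sxx)
t95 = 3.182  # two-sided 95% t critical value for n - 2 = 3 degrees of freedom
lo, hi = slope - t95 * se_slope, slope + t95 * se_slope
print(round(lo, 3), round(hi, 3), 0.95 <= lo and hi <= 1.05)
```

Note that the decision is made by checking containment of the interval, consistent with the text's caution against relying on a non-significant hypothesis test of slope = 1.0.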
The precision of an analytical procedure is the degree of agreement among individual test results when the procedure is applied repeatedly to multiple samplings of a homogeneous sample. The precision of an analytical procedure is usually expressed as the standard deviation or relative standard deviation (coefficient of variation) of a series of measurements. Precision may be a measure of either the degree of reproducibility or of repeatability of the analytical procedure under normal operating conditions. In this context, reproducibility refers to the use of the analytical procedure in different laboratories, as in a collaborative study. Intermediate precision (also known as ruggedness) expresses within-laboratory variation, as on different days, or with different analysts or equipment within the same laboratory. Repeatability refers to the use of the analytical procedure within a laboratory over a short period of time using the same analyst with the same equipment.
The precision of an analytical procedure is determined by assaying a sufficient number of aliquots of a homogeneous sample to be able to calculate statistically valid estimates of standard deviation or relative standard deviation (coefficient of variation). Assays in this context are independent analyses of samples that have been carried through the complete analytical procedure from sample preparation to final test result.
The ICH documents recommend that repeatability should be assessed using a minimum of nine determinations covering the specified range for the procedure (i.e., three concentrations and three replicates of each concentration or using a minimum of six determinations at 100% of the test concentration).
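The repeatability calculation from six determinations at 100% of the test concentration can be sketched as follows, using assumed assay results:

```python
import statistics

# Hypothetical sketch: repeatability from six independent determinations
# at 100% of the test concentration (assumed results, % of label claim).
results = [99.2, 100.1, 99.8, 100.4, 99.5, 100.0]

mean = statistics.mean(results)
sd = statistics.stdev(results)   # sample standard deviation
rsd = 100.0 * sd / mean          # relative standard deviation (%CV)
print(round(rsd, 2))
```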
The ICH documents define specificity as the ability to assess unequivocally the analyte in the presence of components that may be expected to be present, such as impurities, degradation products, and matrix components. Lack of specificity of an individual analytical procedure may be compensated for by other supporting analytical procedures. [Note—Other reputable international authorities (e.g., IUPAC, AOAC-I) have preferred the term selectivity, reserving specificity for those procedures that are completely selective.]
For the tests discussed below, the above definition has the following implications:
Identification tests: ensure the identity of the analyte.
Purity tests: ensure that all the analytical procedures performed allow an accurate statement of the content of impurities of an analyte (e.g., related substances test, heavy metals limit, organic volatile impurities).
Assays: provide an exact result, which allows an accurate statement of the content or potency of the analyte in a sample.
In the case of qualitative analyses (identification tests), the ability to select between compounds of closely related structure that are likely to be present should be demonstrated. This should be confirmed by obtaining positive results (perhaps by comparison to a known reference material) from samples containing the analyte, coupled with negative results from samples that do not contain the analyte and by confirming that a positive response is not obtained from materials structurally similar to or closely related to the analyte.
In the case of analytical procedures for impurities, specificity may be established by spiking the drug substance or product with appropriate levels of impurities and demonstrating that these impurities are determined with appropriate accuracy and precision.
In the case of the assay, demonstration of specificity requires that it can be shown that the procedure is unaffected by the presence of impurities or excipients. In practice, this can be done by spiking the drug substance or product with appropriate levels of impurities or excipients and demonstrating that the assay result is unaffected by the presence of these extraneous materials.
If impurity or degradation product standards are unavailable, specificity may be demonstrated by comparing the test results of samples containing impurities or degradation products to a second well-characterized procedure (e.g., a Pharmacopeial or other validated procedure). These comparisons should include samples stored under relevant stress conditions (e.g., light, heat, humidity, acid/base hydrolysis, oxidation). In the case of the assay, the results should be compared; in the case of chromatographic impurity tests, the impurity profiles should be compared.
The ICH documents state that when chromatographic procedures are used, representative chromatograms should be presented to demonstrate the degree of selectivity, and peaks should be appropriately labeled. Peak purity tests (e.g., using diode array or mass spectrometry) may be useful to show that the analyte chromatographic peak is not attributable to more than one component.
The detection limit is a characteristic of limit tests. It is the lowest amount of analyte in a sample that can be detected, but not necessarily quantitated, under the stated experimental conditions. Thus, limit tests merely substantiate that the amount of analyte is above or below a certain level. The detection limit is usually expressed as the concentration of analyte (e.g., percentage, parts per billion) in the sample.
For noninstrumental procedures, the detection limit is generally determined by the analysis of samples with known concentrations of analyte and by establishing the minimum level at which the analyte can be reliably detected.
For instrumental procedures, the same approach may be used as for noninstrumental procedures. In the case of procedures submitted for consideration as official compendial procedures, it is almost never necessary to determine the actual detection limit. Rather, the detection limit is shown to be sufficiently low by the analysis of samples with known concentrations of analyte above and below the required detection level. For example, if it is required to detect an impurity at the level of 0.1%, it should be demonstrated that the procedure will reliably detect the impurity at that level.
In the case of instrumental analytical procedures that exhibit background noise, the ICH documents describe a common approach, which is to compare measured signals from samples with known low concentrations of analyte with those of blank samples. The minimum concentration at which the analyte can reliably be detected is established. Typically acceptable signal-to-noise ratios are 2:1 or 3:1. Other approaches depend on the determination of the slope of the calibration curve and the standard deviation of responses. Whatever method is used, the detection limit should be subsequently validated by the analysis of a suitable number of samples known to be near, or prepared at, the detection limit.
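The signal-to-noise comparison can be sketched as below. This uses one simple convention, taking the noise as the standard deviation of blank responses and the signal as the difference of means; the detector responses are assumed values, and in chromatographic practice the signal-to-noise ratio is often measured directly from the baseline instead:

```python
import statistics

# Hypothetical sketch: comparing low-level sample responses with blank
# responses to judge detectability (assumed values, arbitrary units).
blanks = [0.8, 1.1, 0.9, 1.2, 1.0]
low_samples = [3.9, 4.2, 4.0, 4.1, 3.8]

noise = statistics.stdev(blanks)
signal = statistics.mean(low_samples) - statistics.mean(blanks)
snr = signal / noise
print(snr >= 3.0)  # 3:1 threshold mentioned in the text
```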
The quantitation limit is a characteristic of quantitative assays for low levels of compounds in sample matrices, such as impurities in bulk drug substances and degradation products in finished pharmaceuticals. It is the lowest amount of analyte in a sample that can be determined with acceptable precision and accuracy under the stated experimental conditions. The quantitation limit is expressed as the concentration of analyte (e.g., percentage, parts per billion) in the sample.
For noninstrumental procedures, the quantitation limit is generally determined by the analysis of samples with known concentrations of analyte and by establishing the minimum level at which the analyte can be determined with acceptable accuracy and precision.
For instrumental procedures, the same approach may be used as for noninstrumental procedures. In the case of procedures submitted for consideration as official compendial procedures, it is almost never necessary to determine the actual quantitation limit. Rather, the quantitation limit is shown to be sufficiently low by the analysis of samples with known concentrations of analyte above and below the quantitation level. For example, if it is required that an analyte be assayed at the level of 0.1 mg per tablet, it should be demonstrated that the procedure will reliably quantitate the analyte at that level.
In the case of instrumental analytical procedures that exhibit background noise, the ICH documents describe a common approach, which is to compare measured signals from samples with known low concentrations of analyte with those of blank samples. The minimum concentration at which the analyte can reliably be quantified is established. A typically acceptable signal-to-noise ratio is 10:1. Other approaches depend on the determination of the slope of the calibration curve and the standard deviation of responses. Whatever approach is used, the quantitation limit should be subsequently validated by the analysis of a suitable number of samples known to be near, or prepared at, the quantitation limit.
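The calibration-curve approach mentioned above (based on the slope and the standard deviation of responses) can be sketched as follows; the factor of 10 corresponds to the ICH convention for the quantitation limit, and the sigma and slope values are assumed:

```python
# Hypothetical sketch of the ICH calibration-curve approach:
# quantitation limit ~ 10 * (standard deviation of response) / slope,
# where sigma may be the residual standard deviation of a low-level
# calibration line or the standard deviation of blank responses.
def quantitation_limit(sigma: float, slope: float) -> float:
    return 10.0 * sigma / slope

# Assumed values: sigma = 0.02 response units, slope = 1.5 units per %.
print(round(quantitation_limit(0.02, 1.5), 3))  # concentration in %
```

The analogous detection-limit estimate replaces the factor of 10 with 3.3.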
Linearity and Range
Definition of Linearity
The linearity of an analytical procedure is its ability to elicit test results that are directly, or by a well-defined mathematical transformation, proportional to the concentration of analyte in samples within a given range. Thus, in this section, linearity refers to the linearity of the relationship of concentration and assay measurement. In some cases, to attain linearity, the concentration and/or the measurement may be transformed. (Note that the weighting factors used in the regression analysis may change when a transformation is applied.) Possible transformations may include log, square root, or reciprocal, although other transformations are acceptable. If linearity is not attainable, a nonlinear model may be used. The goal is to have a model, whether linear or nonlinear, that describes closely the concentration-response relationship.
Definition of Range
The range of an analytical procedure is the interval between the upper and lower levels of analyte (including these levels) that have been demonstrated to be determined with a suitable level of precision, accuracy, and linearity using the procedure as written. The range is normally expressed in the same units as test results (e.g., percent, parts per million) obtained by the analytical procedure.
Determination of Linearity and Range
Linearity should be established across the range of the analytical procedure. It should be established initially by visual examination of a plot of signals as a function of analyte concentration or content. If there appears to be a linear relationship, test results should be evaluated by appropriate statistical methods (e.g., by calculation of a regression line by the method of least squares). Data from the regression line itself may be helpful to provide mathematical estimates of the degree of linearity. The correlation coefficient, y-intercept, slope of the regression line, and residual sum of squares should be submitted.
The range of the procedure is validated by verifying that the analytical procedure provides acceptable precision, accuracy, and linearity when applied to samples containing analyte at the extremes of the range as well as within the range.
ICH recommends that, for the establishment of linearity, a minimum of five concentrations normally be used. It is also recommended that the following minimum specified ranges should be considered:
Assay of a Drug Substance (or a finished product):
from 80% to 120% of the test concentration.
Determination of an Impurity:
from 50% to 120% of the acceptance criterion.
For Content Uniformity:
a minimum of 70% to 130% of the test concentration, unless a wider or more appropriate range based on the nature of the dosage form (e.g., metered-dose inhalers) is justified.
For Dissolution Testing:
±20% over the specified range (e.g., if the acceptance criteria for a controlled-release product cover a region from 20% after 1 hour up to 90% after 24 hours, the validated range would be 0% to 110% of the label claim).
The robustness of an analytical procedure is a measure of its capacity to remain unaffected by small but deliberate variations in procedural parameters listed in the procedure documentation and provides an indication of its suitability during normal usage. Robustness may be determined during development of the analytical procedure.
If measurements are susceptible to variations in analytical conditions, these should be suitably controlled, or a precautionary statement should be included in the procedure. One consequence of the evaluation of robustness and ruggedness should be that a series of system suitability parameters is established to ensure that the validity of the analytical procedure is maintained whenever used. Typical variations are the stability of analytical solutions, different equipment, and different analysts. In the case of liquid chromatography, typical variations are the pH of the mobile phase, the mobile phase composition, different lots or suppliers of columns, the temperature, and the flow rate. In the case of gas chromatography, typical variations are different lots or suppliers of columns, the temperature, and the flow rate.
System suitability tests are based on the concept that the equipment, electronics, analytical operations, and samples to be analyzed constitute an integral system that can be evaluated as such. System suitability test parameters to be established for a particular procedure depend on the type of procedure being evaluated. They are especially important in the case of chromatographic procedures. Submissions to the USP should make note of the requirements under the System Suitability section in the general test chapter Chromatography ⟨621⟩.