Building robust PCR/qPCR assays
Posted: 26 April 2012
The process of building robust PCR/qPCR assays is a matter of perseverance and consistency. A few questions that should be answered prior to starting development will help make the process more efficient and effective:
- Does the assay need to simply detect the presence of the target (qualitative), or must it assign a value to the detected target (quantitative)? The development process for a qualitative or quantitative assay, although similar in many respects, ultimately will take different paths
- In what type of matrix will samples be? Matrix plays an important role in both development and validation of the assay. If the assay is needed for multiple matrices (whole blood, plasma, serum, differing tissue types, etc.), each matrix must be evaluated individually to determine its impact on assay performance
- Will extraction be required, and by what method?
- What throughput will be needed?
Answering these questions early in the process will help prevent ‘reworks’ later.
The critical components of PCR assays (forward and reverse primer concentrations, denaturation and annealing times and temperatures) must be carefully optimised to produce the most robust assay. For quantitative PCR assays, the probe must also be optimised. The complexity of a PCR assay makes it difficult to optimise using the traditional OFAT (one factor at a time) approach. Although OFAT optimisation is possible, it takes longer and can be misleading, since the interactive effects of multiple components may be overlooked. Using Design of Experiments (DOE) approaches will significantly reduce the assay development time and result in a more optimised assay.
For those unfamiliar with DOE experimental design, the multi-component experiments described below will begin to identify the combination of primer concentrations and PCR profile that offers the best initial results. A two-way matrix of primer concentrations (Table 1) shows the equimolar primer concentrations for each primer (shaded) for the initial runs.
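As a minimal sketch of such a primer matrix, the snippet below builds a two-way grid and pulls out the equimolar combinations; the concentration levels are hypothetical placeholders, not values from this article.

```python
# Sketch of a two-way primer-concentration matrix for the initial runs.
# The concentration levels are illustrative placeholders, not values from the article.
from itertools import product

FORWARD_NM = [100, 200, 400, 800]   # forward primer concentrations (nM), hypothetical levels
REVERSE_NM = [100, 200, 400, 800]   # reverse primer concentrations (nM), hypothetical levels

matrix = list(product(FORWARD_NM, REVERSE_NM))

# The equimolar combinations (the 'shaded' diagonal of the matrix) are used first.
equimolar = [(f, r) for f, r in matrix if f == r]

print("All combinations:", len(matrix))
print("Equimolar combinations for the initial runs:", equimolar)
```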
A gradient thermal cycler, programmed with annealing temperatures ranging from 5°C below to 5°C above the Tm of the ‘coolest’ (lowest-Tm) primer, allows quick identification of the annealing temperature ranges for subsequent studies. In a single 96-well run, each of the four equimolar primer combinations from Table 1 can be tested against two different target concentrations amplified at 12 distinct temperatures. The target material can be used at a high concentration (undiluted) and at a 1:100 or 1:1000 dilution in the initial studies, adjusting the concentrations in subsequent runs to suit the assay. For non-quantitative assays, gel analysis can be used to identify which concentration and temperature combinations yield the maximum product. This approach works well with either PCR assay type and reduces the number of experiments required to identify the basic assay parameters. Subsequent experiments are aimed at fine-tuning both the primer concentrations and the PCR profile, in addition to setting the denaturation times and temperatures. After the initial assay parameters are defined, assessing asymmetric primer concentrations is strongly advised, as often one primer will have a more profound effect on assay results. Once the basic PCR conditions have been defined, the probe concentration for a qPCR assay can be similarly optimised. A beginning probe concentration of 500 nM, tested against different primer combinations, is a good starting point.
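The gradient run described above can be laid out programmatically, as in the sketch below. The primer concentrations, target dilutions and primer Tm used here are assumed placeholder values chosen only to illustrate the plate layout.

```python
# Sketch of a single 96-well gradient run: 4 equimolar primer mixes x 2 target
# dilutions assigned to rows A-H, with the 12 columns spanning the annealing
# temperature gradient (Tm - 5 C to Tm + 5 C of the lowest-Tm primer).
# Primer concentrations and the Tm are illustrative placeholders.

PRIMER_MIXES_NM = [100, 200, 400, 800]       # equimolar primer concentrations (nM), hypothetical
TARGET_DILUTIONS = ["undiluted", "1:100"]    # two target levels per run
COOLEST_TM_C = 60.0                          # assumed Tm of the lowest-Tm primer

rows = "ABCDEFGH"
gradient = [round(COOLEST_TM_C - 5 + i * (10 / 11), 1) for i in range(12)]  # 12 temps, -5 to +5

layout = {}
for i, (mix, dilution) in enumerate(
        (m, d) for m in PRIMER_MIXES_NM for d in TARGET_DILUTIONS):
    for col, temp in enumerate(gradient, start=1):
        layout[f"{rows[i]}{col}"] = {"primers_nM": mix, "target": dilution, "anneal_C": temp}

print(layout["A1"], layout["H12"])
```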
The importance of master mix optimisation for each particular primer set cannot be overemphasised. Most assays are built on commercially available amplification kits which contain all the common PCR components. Using the kit components at the manufacturer’s suggested levels will work well in most cases. However, every primer set has its own idiosyncrasies and must be assessed individually. If the manufacturer offers ‘enhancers’ to improve performance, it is wise to titrate each component separately in the assay. It has been our experience that approximately half of the assays work better without these add-ins, while the others fail if they are not included.
Once a basic PCR profile and master mix are established, the sensitivity of the assay can be assessed. Ten-fold serial dilutions of the target nucleic acid, run under the established profile and conditions, are used to determine the assay’s ability to detect low input concentrations. Multiple lots of diluted target, made by different operators and from different raw material lots if available, will serve to establish the assay variation. The most robust assays are built with the maximum amount of variance during development: equipment, operators, raw material lots (primers, probe, master mix, target nucleic acid, etc.). Each run should include multiple replicates of a sample in order to monitor the intra-assay variation. Additionally, day-to-day and operator-to-operator variances should be continuously monitored. In short order, it will become obvious when an assay component is under- or over-performing and impacting the assay.
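One simple way to summarise such a dilution series is sketched below; the copy numbers, replicate Cq values and three-replicate format are assumptions for illustration, not data from this article.

```python
# Sketch: summarise replicate Cq values per ten-fold dilution to flag the lowest
# input still detected consistently, along with the intra-assay variation.
from statistics import mean, stdev

# Hypothetical replicate Cq values per input level (None = no amplification)
replicate_cq = {
    1_000_000: [18.1, 18.2, 18.0],
    100_000:   [21.5, 21.4, 21.6],
    10_000:    [24.9, 25.0, 24.8],
    1_000:     [28.3, 28.5, 28.2],
    100:       [31.8, 31.6, 32.0],
    10:        [35.2, 35.9, None],
    1:         [None, None, 38.5],
}

for copies, cqs in replicate_cq.items():
    detected = [c for c in cqs if c is not None]
    cv = stdev(detected) / mean(detected) * 100 if len(detected) > 1 else float("nan")
    print(f"{copies:>9} copies: {len(detected)}/{len(cqs)} detected, Cq CV {cv:.1f}%")
```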
Developing standards and controls where none exist
If a new assay is being designed to detect a novel target, there will be no commercial or compendial reagents for use during the development process. Although this adds another layer of complexity, reagent development should proceed in parallel with assay development to maximise the return on the experimental investment and shorten the development time. Serial dilutions of the intended target should be spiked into and extracted from the matrix of choice. This extracted target can be used for the primer concentration and PCR profile determinations described above, and will aid in defining the limits of detection (LOD) and/or quantitation for the assay. As the primer and probe concentrations and the PCR profile are refined for maximal signal, both the highest and lowest concentrations of target reproducibly detected will become readily apparent. These concentrations define the limits of the assay: the upper (ULOQ) and lower (LLOQ) limits of quantification. These concentrations, along with serial dilutions of material between them, become the reference standards (calibrators) to be used in the quantitative assays. It is strongly advised (as mentioned in the CDER Guidance for Industry1) that standards and controls be made in the same biological matrix in which the samples will be found.
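As a simple illustration of turning those limits into a calibrator series, the sketch below assumes hypothetical ULOQ and LLOQ values and ten-fold steps between levels.

```python
# Sketch: derive a calibrator (reference standard) series from the reproducibly
# detected upper and lower limits. The ULOQ/LLOQ values are illustrative assumptions.
ULOQ_COPIES = 1e7     # highest concentration reproducibly quantified (assumed)
LLOQ_COPIES = 1e2     # lowest concentration reproducibly quantified (assumed)
DILUTION_STEP = 10    # ten-fold steps between calibrators

calibrators = []
level = ULOQ_COPIES
while level >= LLOQ_COPIES:
    calibrators.append(level)
    level /= DILUTION_STEP

print("Calibrator series (copies/reaction):", calibrators)   # 1e7 down to 1e2, six members
```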
Developing positive and negative controls for use in PCR assays is a matter of spiking known concentrations of the target material into the intended matrix. Matrix without spiked material serves as the Negative Control for the assay. If multiple matrices are expected, each must be tested individually (spiked and unspiked) for any impact on assay performance. Typically, the concentrations of the material spikes are determined by the expected ranges of naturally occurring targets and will span the range of the standards being developed: high – near the ULOQ of the assay, mid – near the middle of the range, and low – near the LLOQ of the assay.
Robustness is the ability of an assay to remain unchanged in the face of small, random changes. To develop a fully robust assay, optimisation involves repeatedly testing the standards (calibrators) and controls from multiple sources in combination with small but deliberate changes in the assay conditions such as extraction or amplification kit lots, primer or probe lots, changes in operators and/or equipment. By assessing the assay variation during these changes, it is easy to identify those conditions and components with the greatest impact on assay performance. Maintaining control of the ‘sensitive’ components will yield the most robust assays. At the same time, these experiments define the specifications and expectations for each component under given circumstances. Each specification must be carefully defined as part of the basic assay optimisation. DOE-designed robustness experiments are extremely useful in identifying multiple component specifications in a limited number of runs.
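A minimal sketch of enumerating such deliberate changes is shown below; the factor names and two-level full-factorial design are assumptions for illustration, and a fractional factorial DOE design would reduce the run count further.

```python
# Sketch: enumerate a small full-factorial robustness design over deliberately
# varied conditions. Factor names and levels are illustrative placeholders.
from itertools import product

factors = {
    "amplification_kit_lot": ["lot A", "lot B"],
    "primer_lot":            ["lot 1", "lot 2"],
    "operator":              ["op 1", "op 2"],
    "thermal_cycler":        ["cycler 1", "cycler 2"],
}

runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
print(f"{len(runs)} robustness runs")   # 2^4 = 16 runs in a full factorial
print(runs[0])
```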
The importance of challenging the assay during optimisation with multiple lots of materials (primers, probe, amplification kits, extraction kits, thermal cyclers, etc.) cannot be overstated. Additionally, relying on a single supplier (‘sole source’) for a reagent should be avoided wherever possible, since changes to commercial material can have devastating impacts on the in-house developed assay. Having multiple sources of material protects against such uncontrolled changes and helps to ensure a more robust assay. Regardless of the source of material, a program of verifying each lot of material should be instituted and rigorously followed to ensure that any changes to a given component which could negatively impact the assay are identified before the material is put into use. Each new lot should be tested against a previously used lot (‘qualified lot’) with documented performance for comparison. This process of releasing lots for use should also include retention of the qualification data for future reference. Trending the performance of each lot of material, both at qualification and during its use, is critical to identifying any ‘slippage’ in performance before it ultimately results in an assay failure.
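The sketch below illustrates one possible lot-comparison check; the replicate Cq values and the 0.5 Cq acceptance limit are hypothetical assumptions, not specifications from this article.

```python
# Sketch: compare a new reagent lot against a previously qualified lot using the
# mean Cq shift on the same control sample. The acceptance limit and Cq values
# are illustrative assumptions.
from statistics import mean

qualified_lot_cq = [27.1, 27.3, 27.0, 27.2]   # hypothetical replicate Cq, qualified lot
new_lot_cq       = [27.4, 27.6, 27.5, 27.3]   # hypothetical replicate Cq, candidate lot

MAX_DELTA_CQ = 0.5   # assumed acceptance criterion for the mean Cq shift

delta = mean(new_lot_cq) - mean(qualified_lot_cq)
status = "PASS" if abs(delta) <= MAX_DELTA_CQ else "FAIL"
print(f"Mean delta Cq = {delta:+.2f} -> {status}")
```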
The stability of reagents made in-house or purchased should be assessed in real-time to set expiry dates. All critical reagents should be placed on a periodic testing schedule after a new lot is released and the results for each time point carefully analysed for any loss of performance. Initially, expiry dates provided by manufacturers can be used but should be verified by the lab. These data, in conjunction with the new lot qualification data, comprise the monitoring programs for critical reagents.
Assay validation parameters
Assay parameters comprising validation are briefly described below. This is intended as an overview, not an exhaustive treatment. Guidance from CDER1, European Pharmacopoeia2, ICH3, EMEA4 and supporting agencies such as CLSI5 delve into greater detail and, in some cases, give examples for establishing and validating parameters. Additionally, literature searches reveal published treatises covering validation of molecular assays, some of which are included here6-10, although validation of PCR methods is not widely published.
Specificity
For PCR assays, specificity initially refers to the in silico determination that the oligos will preferentially amplify/detect the desired target. Prior to validation, it is imperative that the assay be subjected to actual specificity testing against putative ‘contaminants’, i.e., those reasonably expected to be present in test samples. To assess specificity completely, the desired target should be tested in a mixture of potential contaminants present at higher concentrations. An assay that can find and amplify its target under these conditions at or near its lower limit of quantification (LLOQ), or the limit of detection (LOD) for a qualitative assay, is both highly specific and sensitive.
LOB, LOD and LLOQ
Analytical assays often calculate the limit of the blank (LOB) using a formula5. Practically speaking, the LOB of a PCR assay cannot be calculated since true PCR blanks (Negative Controls and No Template Controls) have no value with which to perform the calculations.
Whether developing a qualitative or quantitative assay, a basic requirement is to define the lowest level of target that can be reliably detected. Although called Limit of Detection (LOD) by CDER1 and Positive Cut-Off (PCO) by the European Pharmacopoeia2, the LOD/PCO for quantitative PCR assays is the concentration at which 95 per cent of the known positive samples are detected. To further confound the issue, the ICH Q2(R1)3 definition deviates again: ‘the detection limit can be determined by: (1) visual evaluation, (2) signal-to-noise ratio or (3) based on the standard deviation response’, further defined as the ‘standard deviation of the blank’ or the ‘calibration curve’. In the context of PCR assays, the definition of a ‘Positive Cut-Off’ is the more logical choice.
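A minimal sketch of the 95 per cent hit-rate approach is shown below; the concentrations and hit counts are illustrative placeholders, and a probit analysis would give a more rigorous estimate.

```python
# Sketch: estimate the LOD/positive cut-off as the lowest tested concentration at
# which at least 95 per cent of known positive replicates are detected.
# The hit counts below are illustrative placeholders.
hit_rates = {              # copies/reaction -> (detected, tested)
    100: (24, 24),
    50:  (24, 24),
    25:  (23, 24),
    10:  (19, 24),
    5:   (12, 24),
}

lod = None
for copies in sorted(hit_rates):               # lowest concentration first
    detected, tested = hit_rates[copies]
    if detected / tested >= 0.95:
        lod = copies
        break

print(f"Estimated LOD/PCO: {lod} copies/reaction")
```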
Although similar, LOD and LLOQ have different definitions and need to be developed accordingly. The LLOQ is the concentration that is reproducibly detected with a ‘stated precision and accuracy’. Establishment of the LLOQ requires extensive testing of diluted material (or samples quantified by an external method) and must include the maximum variability (i.e., multiple reagent lots, operators and equipment) in order to set reliable specifications. During validation, the LLOQ is verified by testing samples with a pre-defined concentration, typically three to five times the stated LLOQ for the assay, to prove the assay’s ability to reliably detect and quantify at the LLOQ.
Range and linearity
The range of an assay is the area between the upper (ULOQ) and lower (LLOQ) limits of quantitation for which precision, accuracy and linearity can be demonstrated. During validation exercises, samples spanning the range of the assay should be used to validate the performance of the assay throughout the claimed range.
The linearity of an assay is the range of concentrations over which the assay produces a response directly proportional to the concentration. The range of an assay covers the region of linearity and samples falling outside the limits of the assay (above or below) should not be quantified. Assessment of linearity is often done using linear regression analysis.
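The sketch below shows one common way to assess linearity for a qPCR standard curve, regressing Cq on log10 concentration and deriving R² and amplification efficiency from the slope; the calibrator levels and Cq values are hypothetical.

```python
# Sketch: assess linearity by regressing Cq against log10 of the calibrator
# concentration, then derive R^2 and amplification efficiency from the slope.
# Calibrator concentrations and Cq values are illustrative placeholders.
import numpy as np

copies = np.array([1e7, 1e6, 1e5, 1e4, 1e3, 1e2])
cq     = np.array([15.1, 18.4, 21.8, 25.2, 28.6, 32.0])   # hypothetical mean Cq per level

x = np.log10(copies)
slope, intercept = np.polyfit(x, cq, 1)

predicted = slope * x + intercept
ss_res = np.sum((cq - predicted) ** 2)
ss_tot = np.sum((cq - cq.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

efficiency = (10 ** (-1 / slope) - 1) * 100    # per cent amplification efficiency

print(f"slope {slope:.2f}, R^2 {r_squared:.4f}, efficiency {efficiency:.1f}%")
```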
For routine testing, a reference standard set should be developed that contains at least six members ranging from the ULOQ to the LLOQ. During testing, any samples which do not fall within this range cannot be quantified. Samples higher than the ULOQ may be diluted and re-tested, with quantification taking into account the dilution factor. Samples below the LLOQ are usually reported as ‘detected but not quantified’ to indicate the low positive nature of the sample.
Precision
Precision is the ability of a quantitative assay to return consistent results for the same sample under repeated testing. In the early stages of development, a somewhat arbitrary precision can be chosen for use as a qualitative indicator of the optimisation process and tightened as optimisation of the assay nears completion. As PCR assays tend to be highly precise by nature, an optimised assay can be expected to have a precision of less than five per cent CV.
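As a small illustration, the per cent CV of replicate results can be checked against that expectation; the replicate values below are hypothetical.

```python
# Sketch: intra-assay precision as per cent CV of replicate back-calculated
# quantities, checked against the <5 per cent CV expectation mentioned above.
# Values are illustrative placeholders.
from statistics import mean, stdev

replicate_copies = [9.8e4, 1.02e5, 1.01e5, 9.9e4]   # hypothetical replicate quantities

cv_percent = stdev(replicate_copies) / mean(replicate_copies) * 100
verdict = "within" if cv_percent < 5 else "outside"
print(f"Intra-assay CV: {cv_percent:.1f}% -> {verdict} the 5% expectation")
```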
Accuracy
Accuracy, the ability of the assay to determine the true quantity of target, requires samples which have either been quantified by an external method or been spiked with known concentrations of analyte. For novel targets lacking compendial or commercially available reference standards (calibrators), accuracy can be inferred once precision, linearity and specificity are established3.
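A brief sketch of accuracy expressed as per cent recovery of spiked samples follows; the nominal and measured values are hypothetical.

```python
# Sketch: accuracy expressed as per cent recovery of spiked samples quantified
# against the standard curve. Nominal and measured values are illustrative.
spiked = [                     # (nominal copies spiked, measured copies) - hypothetical
    (1e5, 9.4e4),
    (1e4, 1.07e4),
    (1e3, 9.1e2),
]

for nominal, measured in spiked:
    recovery = measured / nominal * 100
    print(f"nominal {nominal:>9.0f}: recovery {recovery:.0f}%")
```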
Conclusion
The amount of work required to develop a robust PCR/qPCR assay should not be daunting if it is approached in a systematic and methodical manner. Using either the classical OFAT approach or the newer DOE methodologies, the end result will be a robust assay. As confusing as regulatory guidance can appear, there is a great deal of logic behind the requirements, and carefully designed validation experiments will help ease the process. Even if the intention of the assay is not to pass an audit by a regulatory agency, developing a robust assay and its supporting processes will assure the user of a well-performing assay that remains reliable over the long term.
References
1. US Dept. of Health and Human Services, FDA, Center for Drug Evaluation and Research (CDER). Guidance for Industry: Bioanalytical Method Validation. May 2001
2. European Pharmacopoeia 4, General Notes 2.6.21, ‘Nucleic Acid Amplification Techniques’
3. ICH Harmonised Tripartite Guideline: Validation of Analytical Procedures: Text and Methodology Q2(R1). Step 4 version, November 2005. International Conference on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use
4. The European Medicines Agency, Committee for Medicinal Products for Human Use. Guideline on Bioanalytical Method Validation. EMEA/CHMP/EWP/192217/2009. Effective date: 1 February 2012
5. Clinical and Laboratory Standards Institute. Protocols for Determination of Limits of Detection and Limits of Quantitation; Approved Guideline. EP17-A, Vol. 24, No. 34. 2004 (ISBN 1-56238-551-8)
6. David A. Armbruster and Terry Pry. Limit of Blank, Limit of Detection and Limit of Quantitation. Clin. Biochem. Rev. 29 (i):S49-S52. 2008
7. Eileen M. Burd. Validation of Laboratory-Developed Molecular Assays for Infectious Diseases. Clin. Microbiol. Rev. 23(3):550-576. 2010
8. M. Burns and H. Valdivia. Modeling the limit of detection in real-time quantitative PCR. Eur. Food Res. Technol. 226:1513-1524. 2008
9. M. Tuomela, I. Stanescu and K. Krohn. Validation overview of bio-analytical methods. Gene Therapy 12:S131-S138. 2005
10. Hubert GM Niesters. Standardization and quality control in molecular diagnostics. Expert Review of Molecular Diagnostics 1(2):129-131. 2001
About the author
Linda Starr-Spires, PhD, leads a group using qRT-PCR assays to assess viral loads in patient samples from clinical trials. Having studied Cell and Molecular Biology at Memphis State University, she has held post-doctoral positions at the University of Southern California, Thomas Jefferson Medical College and University of Pennsylvania School of Medicine in addition to several industrial positions.