Continued process verification – a challenge for the pharmaceutical industry?
Posted: 10 March 2015
Nowadays, professional quality and process data trending is key for science-based pharmaceutical development and manufacturing. Recently, the U.S. Food and Drug Administration (FDA) and European Medicines Agency (EMA) issued revised process validation guidance to enforce recurring data analysis as a regulatory core requirement1,2: Periodic product and process monitoring, also known as “Process Verification”, is considered an integral part of process validation, with the aim being to demonstrate product compliance and process robustness during the whole life cycle.
Although this new guidance represents a consistent enhancement of established processes, it has the potential to revolutionise the daily business of the industry. Besides the traditional Annual Process Review/Product Quality Review (APR/PQR) monitoring of released attributes, the new PV guidance (Process Validation guidance) requires monitoring of critical in-process parameters and material attributes throughout the product life cycle. This review describes the requirements and the challenges arising from the implementation of the new approach, as well as potential pitfalls and upcoming solutions based on the current level of understanding, enabling a process verification that is both lean and efficient.
In the new PV guidance, the authorities emphasise the great impact of the manufacturing process on a product’s quality and highlight effective process control as essential for product safety and quality. Both the FDA and EMA documents state that the requirements can be met by a risk-based approach that evaluates each influence on the process and identifies its cause, or rather the causal parameter. Although these are not revolutionary new ideas, the requirement for an effective demonstration of broad process knowledge and process control represents a paradigm shift in the normal daily process validation business. The traditional APR process is replaced by a new approach requiring continued data trending throughout the entire lifecycle of a product. The guidance divides the product lifecycle into three stages: ‘process design’, ‘process qualification’ and the new, additional ‘continued process verification’. Furthermore, it provides a detailed description of the expected results of each phase.
The new CPV concept
The first phase, process design, follows a classical approach. It is used to achieve deep knowledge of the process through development activities, including Quality by Design methods. At the end of the design phase, all parameters with a potential impact on the process and on product quality are identified and well understood. This assessment forms the basis for classifying the parameters as either critical (the so-called critical quality attributes, CQAs, and critical process parameters, CPPs) or non-critical.
In the subsequent process qualification phase, all identified CPPs and product quality attributes are evaluated during the scale-up of the process. The outcome is a condensed list of confirmed relevant parameters: the ‘control strategy summary’. Additional objectives of the process qualification phase are the demonstration of process capability and robustness based on scientific methods and the closely linked definition of the specification limits. Upon completion of the second phase, all prerequisites for achieving consistently high product quality are defined.
The final phase is specified as continued process verification and represents the enhancement of the classical approach to process validation. It is characterised by the new interpretation of process validation as continued monitoring of the identified critical parameters against given limits through all stations of the product lifecycle. The FDA defines process validation as “the collection and evaluation of data, from the process design stage through commercial production, which establishes scientific evidence that a process is capable of consistently delivering quality product.” This interpretation of process validation contrasts with the traditional ‘three batches’ approach and introduces two new requirements: process validation as a recurring data analysis through the complete product life cycle, starting in the late-stage development phase and ending with the decommissioning of a product, and the constant verification of process robustness and capability by scientific methods. Any implementation of the CPV concept will be measured against the realisation of these core requirements, which represent the greatest challenges.
It is quite evident that all this is beyond the capabilities of a regular laboratory worker in the quality department; thus an ‘integrated team’ representing all involved parties, such as qualified persons, QA representatives and statisticians, should be established as a standing working group. Its major objectives are the review of the continued monitoring at periodic intervals and the adaptation of the control strategy summary where necessary.
Emphasis on scientific methods
The basis for the justification and classification of the process is scientific analysis performed by experts familiar with the accurate application of statistical methods, such as statisticians or scientists. The relevant literature offers a wealth of suitable methods for almost every situation. Nonetheless, accurately implementing all of these methods is unrealistic; their application would be too complicated in daily business. Even though the guidance is not very specific about the statistical methods to be applied, a viable selection has to be made. It is good practice to categorise the identified critical parameters and assign applicable analysis methods as part of the Control Strategy Summary. Examples of feasible solutions are simple visualisation of the data or the application of generalised mathematical analyses. All these activities should follow a risk-based approach to data evaluation.
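As a purely illustrative sketch (the parameter names, categories and method labels below are hypothetical and not taken from the guidance), such a categorisation could be recorded in a simple machine-readable structure maintained alongside the Control Strategy Summary, so that a monitoring system can pick the agreed analysis for each parameter:

```python
# Hypothetical mapping of monitored parameters to their classification and
# the analysis methods agreed in the Control Strategy Summary.
CONTROL_STRATEGY = {
    "assay":             {"category": "CQA", "methods": ["control_chart", "cpk"]},
    "granulation_temp":  {"category": "CPP", "methods": ["control_chart"]},
    "tablet_hardness":   {"category": "CQA", "methods": ["summary_statistics"]},
    "compression_force": {"category": "CPP", "methods": ["trend_plot"]},
}
```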
Data evaluation
The simplest method of evaluating a process is the visualisation of the data in the form of a line or scatter plot. Displaying the specification limits and the historical values of former batches in addition allows a very good first assessment of the process capability and robustness in its historical context. A good supplement to the visualisation of data is the calculation of the Cpk value, an indicator of process capability. Under the assumption that the values are normally distributed, the Cpk value can be calculated with a simplified formula. In the majority of cases the Cpk value is a very good indicator of process capability, but it should be used as an additional reference rather than as a black-and-white criterion for a process.
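As a minimal sketch, assuming normally distributed values and a two-sided specification (the function name, limits and batch values are illustrative, not taken from the article):

```python
import statistics

def cpk(values, lsl, usl):
    """Process capability index for a two-sided specification.

    Assumes approximately normally distributed values:
    Cpk = min((USL - mean) / (3 * sigma), (mean - LSL) / (3 * sigma))
    """
    mean = statistics.mean(values)
    sigma = statistics.stdev(values)  # sample standard deviation
    return min((usl - mean) / (3 * sigma), (mean - lsl) / (3 * sigma))

# Illustrative assay results (%) of recent batches against 95.0-105.0% limits
batches = [99.2, 100.1, 99.8, 100.4, 99.5, 100.0, 99.7]
print(round(cpk(batches, lsl=95.0, usl=105.0), 2))
```

A Cpk of 1.33 or higher is commonly taken as an indication of a capable process, but, as noted above, the figure should serve as a reference rather than as a pass/fail verdict on its own.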
An alternative way of describing the capability or robustness of a process is with statistical parameters calculated from the measured values in the observation period, such as the average, minimum, maximum, standard deviation or coefficient of variation. These values give a good impression of the location and the dispersion of a process without comprehensive analysis or visualisation.
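A corresponding sketch of such a summary, assuming nothing more than a plain list of measured values for the observation period:

```python
import statistics

def summary(values):
    """Location and dispersion measures for one observation period."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    return {
        "mean": mean,
        "minimum": min(values),
        "maximum": max(values),
        "stdev": stdev,
        "cv_percent": 100 * stdev / mean,  # coefficient of variation
    }
```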
Comprehensive data analysis
Statistics postulate that a process running in a ‘controlled’ state shows a random distribution of the measured values around the process average (Gaussian distribution). Conversely, a deviation from this pattern indicates an uncontrolled process. An established method to detect and visualise these non-obvious ‘out-of-control’ conditions is the application of decision rules such as the Western Electric rules. Based on the location of the observations relative to the limits or the centreline of a control chart, these rules can trigger an investigation of possible assignable causes. Even though the decision rules are good practice, they represent just a reference and do not classify a process according to capability or robustness. Control charts (also known as Shewhart charts) depict the actual values against the specification limits and support a classification of the process based on the detection of shifts and trends according to the aforementioned interpretation rules. Thus a control chart supports the early detection of a potentially uncontrolled process in a very simple way.
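As a minimal sketch of two of the Western Electric rules (a single point beyond three sigma; two out of three consecutive points beyond two sigma on the same side), assuming that the centreline and sigma are estimated from a historical reference period:

```python
import statistics

def western_electric_flags(values, reference):
    """Flag out-of-control signals using two of the Western Electric rules.

    Rule 1: a single point beyond centreline +/- 3 sigma.
    Rule 2: two out of three consecutive points beyond +/- 2 sigma
            on the same side of the centreline.
    """
    centre = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    flags = []
    for i, value in enumerate(values):
        if abs(value - centre) > 3 * sigma:
            flags.append((i, "rule 1: beyond 3 sigma"))
        window = values[max(0, i - 2):i + 1]
        if len(window) == 3:
            above = sum(1 for v in window if v > centre + 2 * sigma)
            below = sum(1 for v in window if v < centre - 2 * sigma)
            if above >= 2 or below >= 2:
                flags.append((i, "rule 2: 2 of 3 beyond 2 sigma"))
    return flags
```

Any flagged point would then be reviewed by the integrated team for assignable causes, in line with the intent of the decision rules.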
Process knowledge and process control
A core requirement of the new guidance is the demonstration of broad process understanding and evidence of constant process control. A common rule of thumb for fulfilling these requirements is the creation of reports covering around 25-30 batches. Such an observation period enables preventive action to be taken instead of merely reacting, and can help to mitigate the risk of specification violations. Although each of the above-mentioned methods is easy to apply on its own, the overall implementation is very difficult and challenging because of the multiplicity of required data sources and analysis reports. It can easily overstrain a data analysis team without corresponding technical support, meaning the implementation of a suitable IT system.
Technical requirement
All analysis methods described in this article share the requirement of capturing a large volume of data from different sources in order to achieve trustworthy results and informative reports. The IT landscape in most companies is very heterogeneous owing to the historical evolution of the systems. There is a need for customisable middleware and interfaces to all incorporated systems or, in the case of joint projects, even companies. In order to achieve a complete quality overview of a specific product manufactured at several sites, as required by the product life cycle approach defined by the FDA and EMA, it might be necessary to harmonise and link data from several sources. In principle there are two core functions: data collection and data analysis. The data collection component of the system must assemble all required data, aggregate it into readable data tables or reports and feed the second component, the statistical analysis and visualisation described above.
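A minimal sketch of the data collection component, assuming the source systems can provide flat CSV exports (the system names, file names and column mappings are hypothetical):

```python
import csv
from pathlib import Path

def collect_batch_data(sources):
    """Assemble records from heterogeneous source exports into one table.

    `sources` maps a source name to a CSV export and a mapping of its
    column names onto harmonised field names.
    """
    table = []
    for name, (path, mapping) in sources.items():
        with Path(path).open(newline="") as handle:
            for row in csv.DictReader(handle):
                record = {"source": name}
                for source_column, harmonised in mapping.items():
                    record[harmonised] = row.get(source_column)
                table.append(record)
    return table

# Hypothetical exports using different column names for the same attributes
sources = {
    "lims": ("lims_results.csv", {"BatchNo": "batch", "Assay": "assay"}),
    "mes": ("mes_params.csv", {"batch_id": "batch", "temp_C": "temperature"}),
}
```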
Technical solution
In the majority of cases an out-of-the-box implementation is not available at the moment, so, given the release date of the guidance and the resulting expectation that the newly introduced concept be implemented in the near future, there is a particular need to develop a proprietary system. Basically there are two competing approaches: a real-time solution and a batch-processing data warehouse solution. The more fashionable one is the real-time solution. Its application requires high data quality and very good performance of the underlying IT infrastructure with regard to global data aggregation and continuous availability of the data. It is normally the preferred solution for companies with unified and centralised IT systems. Under these prerequisites, master data management and the linkage of data are an extremely simple and fast setup. The effort for data mapping and transformation processes can be mitigated or even eliminated. The result is a lean and high-performing application.
Besides the fact that such a landscape is often not the reality, the great drawback is that all data are stored in the source systems and aggregated in virtual views calculated on demand. After decommissioning of a source system, its data are no longer accessible for comprehensive analysis. Migration to a suitable archiving system is then necessary to avoid the loss of data and the violation of data integrity rules. In reality, this may imply considerable investment and effort to migrate old data into a new environment at the end of a system’s lifetime.
The more old-fashioned but no less robust approach is the batch-processed data warehouse solution. It is usually the preferred application in highly diverse companies maintaining a multitude of systems for the same purpose at different sites, or for companies in a network. A data warehouse solution provides the best conditions for maintaining data integrity processes, enables persistent data storage and offers a good interface to commercial statistical analysis software. Both approaches have their benefits and drawbacks, but at the end of the day they must be compliant with the guidelines and the applicable laws and regulations regarding electronic records. Therefore they must include mechanisms to ensure the traceability, integrity and reproducibility of the data essential for the subsequent analysis.
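To illustrate the persistence argument only, a minimal sketch of a batch load step into a warehouse table (SQLite is used purely as a stand-in; the table and field names are hypothetical):

```python
import sqlite3

def load_into_warehouse(records, db_path="cpv_warehouse.db"):
    """Persist harmonised batch records in a simple warehouse table.

    Unlike on-demand virtual views, the stored data remain available for
    trending even after the source systems have been decommissioned.
    """
    with sqlite3.connect(db_path) as connection:
        connection.execute(
            "CREATE TABLE IF NOT EXISTS batch_data "
            "(source TEXT, batch TEXT, attribute TEXT, value REAL)"
        )
        connection.executemany(
            "INSERT INTO batch_data VALUES (?, ?, ?, ?)",
            [(r["source"], r["batch"], r["attribute"], r["value"]) for r in records],
        )
```

In a real system this load step would of course run under the audit trail and data integrity controls mentioned above.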
Conclusion
Although the new concept of continued process verification represents a consistent enhancement of established processes, its implementation is a challenging and laborious task. On the other hand, the use of smart supporting tools in combination with a risk-based approach can lead to an improvement in product quality and process control.
References
1. FDA: Guidance for Industry – Process Validation: General Principles and Practices (2011) http://www.fda.gov/downloads/Drugs/Guidances/UCM070336.pdf
2. EMA: Guideline on process validation for finished products – information and data to be provided in regulatory submissions (2014) http://www.ema.europa.eu/docs/en_GB/document_library/Scientific_guideline/2014/02/WC500162136.pdf
Biography
Dr. Michael Rommerskirchen obtained his PhD in Chemistry at the University of Cologne. After several years as a project manager for the implementation of LIMS, he accepted a position in the pharmaceutical industry in 2008 and is now head of the Process Database Team, responsible for setting up and organising a globally distributed and automated data analysis system.