Improving Regulatory Intelligence Through Effective Quality Data Analytics

Introduction

In a Warning Letter issued in March 2024 to a large medical device company, FDA cites several issues associated with CAPA. Historically, CAPA has been the subsystem most often referenced in association with Quality Management System (QMS) deficiencies, so the inclusion of CAPA deficiencies is not surprising. In this Warning Letter, however, FDA specifically addresses the requirements for quality data analysis within 21 CFR 820.100(a), a less common CAPA finding. This article considers FDA's expectations for quality data analysis (i.e., trending) as suggested by the Warning Letter and provides key points to consider when trending data (e.g., complaint and nonconformance data). This matters because quality data analytics have a direct impact on management's decision-making. Making the right decision at the right time requires regulatory intelligence, which is gained through on-market data analysis using sound statistical techniques to find signals within aggregate data.

Concerns in the March 2024 Warning Letter reinforce key issues QualityHub often encounters as we provide CAPA support to our clients, including, for example:
  • CAPA procedure development,
  • configuration of CAPA software,
  • CAPA investigations,
  • CAPA remediation, and
  • mock inspections of CAPAs in preparation for FDA and/or notified body inspections.

FDA’s Expectations

QualityHub's collective experience goes back to 2004, when our team began working with FDA-regulated industry. Our 20+ years of working with clients on CAPA compliance indicate that trending and CAPA escalation requirements continue to be ambiguous, poorly documented, poorly communicated, and inadequately defended during audits and inspections. More concerted effort is required if the device industry is to properly document the statistical methods used for trending, establish trending triggers and thresholds, and communicate CAPA escalation criteria.

What FDA expects companies to do to comply with quality data analysis requirements is explored here in the context of the March Warning Letter, which prefaces the quality data analysis finding with the statement:

“Your firm’s CAPA procedure and related standard operating procedures have not been adequately established to analyze quality data to identify existing and potential causes of nonconforming product or other quality problems.”

Highly specific detail regarding statistical methods, data presentation conventions, and the rationale for those methods is necessary to ensure signal detection and consistency in how data is assessed from one month to the next. This means understanding and defining procedurally how trending data is to be identified, collated, transformed, normalized, quantitatively assessed, and visually presented, so that the methodology is statistically relevant and justified based on the type of data, the quantity available for analysis, and the intention of the analysis. The Warning Letter continues:

“Your procedure, [X], does not provide uniform process or includes clearly defined criteria to escalate events such as nonconformances to a CAPA.

The processes for nonconformance risk assessment of single nonconformances has not been developed, nor has it adequately defined actions to be taken for different levels of risk and correcting problems and preventing them from recurring….and…the only reference to CAPA escalation is through the procedure [X]. Your procedures do not define a method to escalate a single complaint record to the CAPA Request process.”

Escalation to CAPA can be based on a single high-severity event or on multiple lesser-severity events occurring over time. (Consider that risk goes up if either the severity or the frequency of occurrence increases.) This concept needs to be defined in the CAPA procedure as well as in the feeder system procedures. Alternatively, both procedures can cross-reference a single, centralized trending work instruction that defines detailed metric trending plans for each data source.
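The idea that risk rises along either axis can be sketched as a simple severity-by-occurrence lookup. The 1-5 scales, the multiplicative index, and the escalation threshold below are illustrative assumptions for the sketch, not values from the Warning Letter or any specific RMF.

```python
def risk_index(severity: int, occurrence: int) -> int:
    """Combine 1-5 severity and 1-5 occurrence ratings into a risk index.
    The scales and the multiplicative scheme are illustrative assumptions."""
    if not (1 <= severity <= 5 and 1 <= occurrence <= 5):
        raise ValueError("ratings must be on a 1-5 scale")
    return severity * occurrence


def escalate_to_capa(severity: int, occurrence: int, threshold: int = 12) -> bool:
    """Escalate when the combined index crosses an (assumed) threshold,
    or immediately for the highest severity regardless of frequency."""
    return severity == 5 or risk_index(severity, occurrence) >= threshold
```

A single catastrophic event escalates on severity alone, while lower-severity events escalate only as their observed frequency climbs; the same function could back either the CAPA procedure or a feeder-system procedure that cross-references it.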

“Your procedure, [X], does not provide a process to determine whether a CAPA Request should be opened for a noted complaint trend.”

A perceived signal might not automatically result in a CAPA. Opening too many unwarranted CAPAs can strain resources and cause as many problems as not opening enough; CAPAs require significant resources to process and manage. When a signal is detected, the owner of the data set should initiate a preliminary trend investigation to determine whether the statistical signal is valid or possibly "noise" caused by an explainable event or a data anomaly. If the trend appears valid, a CAPA request should be submitted to the CAPA Review Board (or equivalent). The trending data and the signals it produces provide a means for triggering a CAPA based on the frequency of occurrence of a particular hazard or hazardous situation. As indicated above, a CAPA should also be considered if a single event indicates:

  • A high degree of harm attributable to the medical device, such as harm resulting in a serious injury or death.
  • Harm that exceeds what was predicted and determined to be acceptable in risk assessments (with reference to the Risk Management File, or RMF).
  • Any new harm not previously predicted and assessed for risk.
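The three single-event triggers above can be expressed as a small check applied to each complaint record. The field names and the "serious harm" cutoff are hypothetical; they would need to be mapped onto a company's own complaint schema and severity scale.

```python
from dataclasses import dataclass


@dataclass
class ComplaintEvent:
    # Field names are illustrative; map them to your own complaint record schema.
    harm_severity: int          # 1-5 severity actually observed in the field
    device_attributable: bool   # harm attributable to the medical device
    predicted_severity: int     # worst severity accepted in the RMF for this harm
    harm_in_rmf: bool           # has this harm already been assessed in the RMF?


def warrants_capa_request(e: ComplaintEvent, serious_harm: int = 4) -> bool:
    """Apply the three single-event triggers described above (a sketch)."""
    high_harm = e.device_attributable and e.harm_severity >= serious_harm
    exceeds_rmf = e.harm_severity > e.predicted_severity
    new_harm = not e.harm_in_rmf
    return high_harm or exceeds_rmf or new_harm
```

Each complaint is tested independently of any trend, so this check complements, rather than replaces, the frequency-based trending described elsewhere in the article.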

There are cases where an explainable anomaly in the trend analysis data may occur. A common example: complaint trending data for the current month indicates a significant upward trend in particulate levels in immediate device packaging. Further analysis during the preliminary investigation determines that a single overseas distributor had held complaints for six months and then submitted them all at once. While the bolus of complaints from the distributor appears to indicate a trend, the events were reported for production lots spanning a 6-8 month period, and re-assessing the data based on actual dates of production and/or dates reported from the field yields stable data and no definitive signal. In this instance, escalation to a CAPA Review Board would not be warranted and would distract resources from more important work.
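The distributor scenario comes down to which date field the complaints are bucketed by. A minimal sketch, using fabricated records purely for illustration: six complaints all received in one month, but whose underlying events span six months.

```python
from collections import Counter
from datetime import date


def monthly_counts(complaints, key):
    """Bucket complaint records into (year, month) counts by the chosen date field."""
    return Counter((key(c).year, key(c).month) for c in complaints)


# Hypothetical records: all received in June, but the underlying events
# span January through June (the distributor held them before submitting).
complaints = [
    {"received": date(2024, 6, 3), "event": date(2024, m, 15)}
    for m in range(1, 7)
]

by_received = monthly_counts(complaints, key=lambda c: c["received"])
by_event = monthly_counts(complaints, key=lambda c: c["event"])
# by_received shows a spike of 6 in June; by_event shows 1 per month (stable).
```

The apparent June spike disappears once the same records are re-dated by event (or production) date, which is exactly the re-assessment the preliminary investigation performs before deciding against escalation.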

“The information used to determine the severity of…nonconformance trends is not adequately described.”

Trending of product quality data must be based on the frequency of occurrence of specific failure modes (hazards or hazardous situations). Why? Because the most fundamental reason for trend analysis is to monitor risk to the patient or user resulting from specific events that occur in actual use. Risk depends on the hazard that occurred and/or the resulting hazardous situation. Our Risk Management Files (RMFs) establish the degree of risk that is anticipated and acceptable based on specific hazards and hazardous situations; that risk is NOT based on an overall aggregate complaint rate. If trending is limited to the aggregate number of complaints over time, signals are unlikely to be detected because they will be diluted across the data set. This does not mean that every possible hazard or hazardous situation (i.e., complaint code or as-reported complaint category) must be statistically analyzed every month. Each month the data set can be assessed in a Pareto chart that delineates which event types occur at a frequency that warrants and allows analysis and which ones fall off into the "one-sie, two-sie" buckets that do not provide adequate data (or concern) for further statistical treatment.
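The monthly Pareto triage described above amounts to ranking complaint codes by frequency and splitting off the low-count tail. A minimal sketch, where the cutoff of three occurrences and the example codes are assumptions to be tuned per data set:

```python
from collections import Counter


def pareto_buckets(complaint_codes, min_count=3):
    """Split a month's complaint codes into those frequent enough to trend
    statistically and the low-count remainder. min_count is an assumed cutoff;
    a real plan would justify it based on the statistical method in use."""
    counts = Counter(complaint_codes).most_common()  # descending frequency
    trend = [(code, n) for code, n in counts if n >= min_count]
    low_volume = [(code, n) for code, n in counts if n < min_count]
    return trend, low_volume
```

Codes in the first bucket get charted and tested against run rules; the "one-sie, two-sie" remainder is still reviewed for single-event severity but is not subjected to trend statistics.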

“There is no…definition on how to separate the data, if any trigger limits are used, what cut-off value was set for high volume complaints, and what scale is used for frequency/occurrence ratings for high and low-value nonconformances.”

It is not enough to define the statistical methods to be used in data analysis. Procedures also need to specify what actions are to be taken, and when, if a trend is realized. This means defining alert limits, action limits, or thresholds and what must be done and documented when they are reached. Consider that if you set only an action limit tied to the highest tolerable anticipated frequency of occurrence stated in the Risk Management File, then at the point this limit is reached, the company will be operating OUTSIDE of approved and tolerable risk limits. If fielded product exceeds the risk level allowed by the RMF, the company is facing a field correction or removal; it is reacting too late. Thresholds and the resulting actions need to sit sufficiently inside the documented allowable risk level so that the situation is righted before an expensive recall becomes necessary. How far inside the RMF's frequency of occurrence should the action limit be set to trigger CAPA consideration? This should be based on severity, such that the limit is more conservative for hazards/hazardous situations with higher severity. (Consider, for example, setting an action limit 1σ below the upper allowable frequency of occurrence for severities of 1-2, but 2σ below the upper allowable frequency for severities of 3 or higher on a 5-point severity scale.)
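The severity-tiered margin in the parenthetical example can be written down directly. The tier boundary (the wider 2σ margin kicking in at severity 3 and above) and the 5-point scale are assumptions carried over from that example, not fixed requirements.

```python
def action_limit(rmf_upper_rate: float, sigma: float, severity: int) -> float:
    """Place the CAPA-consideration limit inside the RMF's highest tolerable
    occurrence rate. The 1-sigma / 2-sigma tiers mirror the example in the
    text; the 5-point severity scale is an assumption."""
    if not 1 <= severity <= 5:
        raise ValueError("severity must be on a 1-5 scale")
    margin = 2 * sigma if severity >= 3 else 1 * sigma
    limit = rmf_upper_rate - margin
    if limit <= 0:
        raise ValueError("margin exceeds the RMF rate; revisit scales or sigma")
    return limit
```

A higher-severity hazard gets a limit set further inside the tolerable rate, so escalation is triggered while the company is still operating within approved risk limits rather than at (or past) them.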

Procedures also need to define the trending rules that will be applied when analyzing event occurrences over time. The frequency of event occurrence is calculated and plotted on a control chart, typically monthly, with an average historical or current baseline imposed on the chart. The charted data may indicate an out-of-control condition either when one or more points fall beyond a pre-defined control limit or when the collective points exhibit some type of nonrandom or unlikely pattern over time. A "run" is a sequence of contiguous points occurring over time in which each point demonstrates the same tendency or condition. These conditions can be described relative to the average baseline of all the plotted data or relative to a series of contiguous points. The specific nonrandom or highly unlikely patterns we choose to look for are the "run rules." A trend is an observed occurrence of a run rule being met, indicating a nonrandom or unlikely pattern in the data. Examples include eight (8) consecutive points in the same upward or downward direction, three (3) consecutive points occurring 3σ above the average baseline, or two (2) consecutive points 2σ above the average baseline. There are many published statistical run rules. The specific ones applied to any particular data set are at the discretion of the company but should be defined in advance as part of the metric trending plans established for each type of quality data subject to statistical analysis.
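The three example run rules can be checked mechanically against a series of monthly points. This is a sketch of those specific rules only; a production implementation would pull its rule set, baseline, and sigma from the approved metric trending plan rather than hard-coding them.

```python
def violations(points, baseline, sigma):
    """Flag the three example run rules from the text: 8 consecutive points
    moving in one direction, 3 consecutive points above baseline + 3*sigma,
    or 2 consecutive points above baseline + 2*sigma. Returns (rule, index)
    pairs, where index is the window's starting position."""
    hits = []
    # Rule 1: eight consecutive points in the same upward or downward direction.
    for i in range(len(points) - 7):
        diffs = [points[j + 1] - points[j] for j in range(i, i + 7)]
        if all(d > 0 for d in diffs) or all(d < 0 for d in diffs):
            hits.append(("run_of_8", i))
    # Rule 2: three consecutive points beyond 3 sigma above the baseline.
    for i in range(len(points) - 2):
        if all(p > baseline + 3 * sigma for p in points[i:i + 3]):
            hits.append(("3_beyond_3sigma", i))
    # Rule 3: two consecutive points beyond 2 sigma above the baseline.
    for i in range(len(points) - 1):
        if all(p > baseline + 2 * sigma for p in points[i:i + 2]):
            hits.append(("2_beyond_2sigma", i))
    return hits
```

A steadily climbing series trips the run-of-eight rule well before any single point breaches a 3σ control limit, which is the whole point of pattern-based rules: they detect drift earlier than point-beyond-limit tests alone.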

Also consider that data sets used in trending may need to include data compiled from multiple sites that manufacture the same product. This facilitates identification of potential design-related issues (as opposed to issues specific to a particular manufacturing line). Similarly, it may be necessary to combine data for products that are highly similar in design (where design differences are non-essential to functionality and performance).
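Pooling sites only works if the combined figure is normalized, since sites rarely ship equal volumes. A minimal sketch; the data layout, the per-10,000-units denominator, and the month keys are assumptions for illustration.

```python
from collections import defaultdict


def pooled_monthly_rate(site_data):
    """Combine per-site complaint counts and units distributed into one
    normalized monthly rate (complaints per 10,000 units). Assumed layout:
    {site: {month: (complaints, units_distributed)}}."""
    totals = defaultdict(lambda: [0, 0])
    for months in site_data.values():
        for month, (complaints, units) in months.items():
            totals[month][0] += complaints
            totals[month][1] += units
    return {m: 10_000 * c / u for m, (c, u) in totals.items()}
```

Summing numerators and denominators separately before dividing (rather than averaging per-site rates) weights each site by its actual volume, so a small site with a noisy rate cannot dominate the pooled trend.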

Additional Points to Consider

It is important to consider how management has defined the responsibility for defining and preparing routine trend analysis data. Proper treatment of aggregate data and its preparation for presentation to management needs to be assigned to a person or team well trained in the use of statistical methods to analyze diverse types of data for different purposes. Statistical trend analysis should be considered a "QMS process" and assigned a qualified process owner. Defining statistically relevant data analysis, as well as alert and action levels, requires specific knowledge, including training on the analysis tools used, such as Minitab, Tableau, or JMP. Leaving trending obligations to "Production" and "QA" generalists could result in failure to meet ISO 13485:2016, section 6.2, which requires that personnel performing work affecting product quality be competent on the basis of appropriate education, training, skills, and experience.

The audits QualityHub has been performing since we started consulting in 2001 often document quality data analysis deficiencies similar to those FDA cited in the March 15, 2024, Warning Letter. While our encouragement to improve practices in this area has generally been well received, some auditors have experienced pushback from clients contending that we are "requiring too much procedural detail," "not allowing flexibility for data analysis," or "going beyond regulatory expectations." Quality data analytics is a science that demands detailed instructions to ensure appropriate signal detection. Metric trend plans approved via document control need to define:

  • Each specific data set for analysis.
  • The purpose of the analysis (what it provides, or why the data set is being monitored).
  • How to extract the data set from data management systems.
  • The functional area or persons responsible for performing routine analysis of the specific data set.
  • The frequency of analysis.
  • The number of previous months of data to include on the current month's trending chart to allow detection of patterns over time (trends).
  • The statistical method to be used for analysis (for example, the type of run chart, the run rules to apply or conditions which, if met, constitute a "trend," the basis for determining the historical baseline, rules for normalizing data, and rules for combining related data sets).
  • Alert and/or action limits (thresholds) at which escalation to investigation and CAPA occurs.
  • The tools/software to be used for data analysis.
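A metric trending plan covering those elements can be captured as structured data under document control. Every field name and value below is illustrative, to be tailored per data source; the point is that each bullet above maps to a concrete, reviewable entry.

```python
# One metric trending plan, expressed as structured data. All names and
# values are hypothetical examples, not prescribed settings.
TREND_PLAN_COMPLAINTS = {
    "data_set": "customer complaints, product family X",
    "purpose": "monitor field occurrence rates against RMF limits",
    "extraction": "monthly export from the complaint-handling system",
    "owner": "Post-Market Surveillance analyst",
    "frequency": "monthly",
    "history_window_months": 24,
    "statistical_method": {
        "chart": "u-chart of complaints per 10,000 units distributed",
        "run_rules": [
            "8 consecutive points in the same direction",
            "2 consecutive points beyond 2 sigma above baseline",
        ],
        "baseline": "trailing 12-month average",
        "normalization": "per units distributed",
    },
    "limits": {
        "alert": "2 sigma above baseline",
        "action": "RMF occurrence limit minus 1 sigma",
    },
    "tools": ["Minitab"],
}
```

Holding one such record per data source makes the plan auditable: an inspector (or an internal auditor) can check each required element field by field rather than hunting through narrative procedures.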

Conclusion

If your company's quality data analysis methodology does not reflect the maturity of your organization and the complexity of your available data, you risk missing the opportunity to find and correct a medical device problem before it becomes a health hazard requiring a recall or an FDA enforcement action. Both consume resources, and both generate public information that can affect public perception, stock price, acquisition prospects, and ultimately market share. The mathematics of data analysis yields both regulatory and market intelligence and directly impacts management's ability to make the decisions needed to manage on-market product risk.

About the Author

Rebecca D. Fuller has served in quality, regulatory, and compliance roles in the medical device and pharmaceutical sectors for over 30 years. Her career began as an investigator for the US FDA, and she later served in management positions for several medical device companies. She then transitioned to consulting and currently serves as Director of Regulatory Sciences for QualityHub, Inc., a consultancy well recognized by FDA-regulated industries. Her expertise includes regulatory compliance strategy, QMS development, GxP audits, risk management, post-market surveillance, and product development within the FDA-regulated market.

References
References accessed and/or verified on 29 April 2024.

  1. Medical Device Quality System Regulation, 21 CFR §§ 820.100, 820.250 (1996). https://www.ecfr.gov/current/title-21/chapter-I/subchapter-H/part-820
  2. International Organization for Standardization. (2016). Medical devices – Quality management systems – Requirements for regulatory purposes. (ISO Standard No. 13485:2016). https://www.iso.org/standard/59752.html
  3. Montgomery, D.C. (2020). Methods and Philosophy of Statistical Process Control. In J. Brady (Ed.), Introduction to Statistical Quality Control (8th Ed., pp. 175 – 217). John Wiley and Sons, Inc.
