When it comes to educating the medical device industry, longtime expert Rebecca Fuller is always prepared to help on a wide array of issues. And on Oct. 9 and 10, she’ll do just that at the GMP University and the MedTech Validation and Innovation University in San Diego.
Presented by KENX, the two university tracks – the agendas for which are here and here, with registration info here and here – will cover issues of high concern to manufacturers, including risk management, post-market surveillance, design history files, quality audits, and much more.
Fuller, who is VP of regulatory compliance for QualityHub and a former US Food and Drug Administration (FDA) investigator, will speak on all four of those topics and others during insightful sessions designed to keep manufacturers out of regulatory hot water.
“This conference is an opportunity to hear from a variety of people on a variety of subjects and to network and share ideas and concerns with other like-minded professionals who will be there on site,” Fuller says. “I enjoy KENX because it really is ‘boots on the ground’ and an amazing way to learn.”
Below, Fuller gives a small taste of what she will discuss while at KENX. This Q&A was edited for clarity and brevity.
On Risk Management…
QualityHub: Let’s start with risk management. If you had talked to medical device RA/QA professionals, say, 15 years ago, you would have found a lot of risk management learning going on. Risk management was, oddly enough, a relatively new concept to industry at the time. Fast forward to today, and risk management is part of virtually everything a company does. What’s your sense of how far industry has come on this topic?
Rebecca Fuller: Yes, there has been some good progress. If we compare where we were 10 or 15 years ago to where we are today, one of the main differences I see is a familiarity with the terms associated with risk. As a consultant, you used to go into a company to audit and find that nobody really knew how to define “risk.” Nobody understood terms such as “severity” and “probability,” or how to properly apply them. Now, when it comes to the basics, there’s a fundamental understanding of the need to identify risks and mitigate those risks.
These are things that the MedTech industry was much more challenged by 10 years ago. I’m now starting to see consistency in what companies are doing to define risk, analyze its degree and criticality, and document mitigations. But while there has been improvement, we still have further to go. For example, companies continue to be challenged with managing post-market risk and feeding post-market data back into the risk management file.
Further, once a product is launched, companies are still challenged by the work of identifying new risks, proactively responding to them, and keeping an eye peeled for data that could change the risk profile of a product. When doing risk assessments in a pre-market environment, risks are predicted based on models we may have had from other products. Companies must look at the reality of what the data is telling them, and it may require a shift in what the firm considers an acceptable risk level for a particular product line based on what is being seen. So, that post-market work is still there to be done and improved upon.
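To make the severity/probability vocabulary concrete, here is a minimal sketch of how an ordinal risk matrix might be scored and re-scored as post-market data shifts a probability rating. The scales, rating names, and acceptability threshold are hypothetical illustrations, not drawn from Fuller’s remarks or from any standard.

```python
# Illustrative severity x probability risk scoring, loosely in the spirit of
# ISO 14971-style risk analysis. Scale values, rating names, and the
# acceptability threshold below are hypothetical.

SEVERITY = {"negligible": 1, "minor": 2, "serious": 3, "critical": 4}
PROBABILITY = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

ACCEPTABLE_MAX = 6  # hypothetical cutoff: scores above this need mitigation


def risk_score(severity: str, probability: str) -> int:
    """Return a simple ordinal risk score: severity rating x probability rating."""
    return SEVERITY[severity] * PROBABILITY[probability]


def needs_mitigation(severity: str, probability: str) -> bool:
    """Flag hazards whose score exceeds the (hypothetical) acceptability cutoff."""
    return risk_score(severity, probability) > ACCEPTABLE_MAX


# Example: post-market data shows a failure occurring more often than the
# pre-market model predicted, so the probability rating shifts and the same
# hazard crosses the acceptability threshold.
print(needs_mitigation("serious", "remote"))      # 3 * 2 = 6  -> False
print(needs_mitigation("serious", "occasional"))  # 3 * 3 = 9  -> True
```

The point of the sketch is the last two lines: the hazard itself never changed, but the field data moved one input to the matrix, which is exactly the kind of post-market shift Fuller says must feed back into the risk management file.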
On Design History Files (DHF)…
QH: During another of your upcoming KENX sessions, this one on Design History Files, one of your key messages will be how companies should best organize DHF information for FDA reviewers. As a longtime professional who has been on the inside of the agency, you would know best what reviewers are looking for. So, what is one major way that firms continually fall down when developing a DHF and why is it such a challenge?
Fuller: I will talk at KENX about not just the importance of the data, but as you said, how it’s presented. It’s got to be presented so it is easily readable – the reviewer must be able to find what they’re looking for quickly. It’s just as important to have the correct data as it is to present that data in a way that is going to lead the reviewer to interpret the data the way you intend it.
Now, how is that done? How is that improved upon? Universally, industry needs to improve its technical writing skills. For example, the engineers who are putting this data together often don’t have the basic technical writing skills that they need. They don’t look at the data from the reviewer’s perspective, and it really helps to have somebody in your organization with an engineering background who’s not directly involved in putting together the data and building the reports to look at that from an outside perspective.
Often people get too close to the data when they’re writing it and make assumptions about background information or other details the reader doesn’t have. So, there’s this concept of spoon-feeding data in the submission, and that means building the DHF so it’s very clear, easy to read, and well indexed. Present the information on the page in a way that pulls the reviewer’s eye to exactly what you want them to see. Sometimes this is accomplished by something as simple as tables with clearly intuitive column headings. Let’s say you’re presenting a data table. Are the column headings easily interpretable? Is your audience going to be able to understand the acronyms and the engineering lingo you’re using in those headers?
On Post-Market Surveillance…
QH: When it comes to post-market surveillance, what’s a major blind spot for firms? If I had to guess, it would be keeping up with social media and other online comments regarding products – making sure they’re assessed, fed into the complaint handling system if appropriate, addressed as a CAPA (corrective and preventive action), and evaluated for adverse event or MDR reportability to the FDA. Would I be on target here?
Fuller: You would be on target. But I want to expand on that and look at what it’s really an indicator of. Not looking at social media, not scouring published data – whether informal sources or medical journals – is a symptom of a bigger failure: not identifying all sources of quality data.
What are sources of post-market data? People will generally say, “We need to look at complaints or the adverse events we’re reporting.” Yes, absolutely. But what about that complaint data? What’s in that complaint? And how do you need to design the computer-based systems used to collect and manage that data? You must design the software intake systems and databases the right way to be able to pull in the right pieces of information from a complaint. And, importantly, the complaint coding needs to be correct if it’s going to be effectively used to identify signals in aggregate data. I find a lot of times that companies are analyzing and trending complaint data, but they’re not looking at the right pieces of it when trending.
They may be looking at, say, how a complaint was reported – what the complainant stated about the problem – but not also looking at the post-investigation finding – the actual problem – because what the customer perceives may translate into a completely different problem once the product is returned and analyzed by an engineer to determine the actual failure. So, companies must look at both types of data: what the complaint is saying and what the engineers say is the actual problem.
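As a rough illustration of the dual trending Fuller describes, the sketch below counts complaints two ways: by the as-reported problem and by the post-investigation failure mode. The field names and example records are hypothetical; a real system would pull coded data from the complaint-handling database.

```python
# Illustrative sketch of trending complaints on both the as-reported problem
# and the post-investigation failure code. Field names and example records
# are hypothetical stand-ins for coded complaint data.
from collections import Counter

complaints = [
    {"reported": "device would not power on", "investigated": "battery contact corrosion"},
    {"reported": "device would not power on", "investigated": "firmware lockup"},
    {"reported": "alarm did not sound",        "investigated": "firmware lockup"},
    {"reported": "device would not power on", "investigated": "battery contact corrosion"},
]

# Trend the customer's perception and the engineering root cause separately.
reported_trend = Counter(c["reported"] for c in complaints)
investigated_trend = Counter(c["investigated"] for c in complaints)

# The two views can diverge: three "would not power on" complaints actually
# split across two distinct failure modes once the returned units are analyzed.
print(reported_trend.most_common())
print(investigated_trend.most_common())
```

Trending only the “reported” column here would surface one big signal; trending the “investigated” column shows two different engineering problems hiding inside it, which is why Fuller insists on looking at both.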
On Internal Quality Audits…
QH: When it comes to quality audits, I’d assume a lot of people have blinders on because they’re so sure their work is perfect 100% of the time and they never make mistakes. Or maybe they have a boss who doesn’t want to hear about the bad stuff. So those things are hidden from them, but they’ll eventually be uncovered when they’re facing a costly recall. So, is it practical to do self-checks? And how can that be done, honestly, without the blinders?
Fuller: Blinders happen when auditors repeatedly audit the same system. Auditors must be independent of the process, product, or system being audited. But even with that independence, if the same auditor looks at the same process year in and year out, they become very comfortable with it – they’ve seen it all before and feel like they’re just rereading procedures they already know. That’s why companies must switch up their auditors.
A company may have an auditor who is more expert in, say, post-market surveillance and will audit the post-market surveillance program every year, but you need to give a different auditor a chance to look at that. A large organization typically has a large bench of auditors to draw on; a small company doesn’t, and in those cases it’s advisable to periodically bring in an external third party to perform an audit. That could satisfy the requirement for your annual internal audit, or it could simply be a way to calibrate your own auditors and look for potential areas of concern that weren’t previously identified. The bottom line is that a third party is going to have those blinders off.