At the end of 2022, FDA published a draft guidance on the Voluntary Malfunction Summary Reporting (VMSR) Program for Manufacturers. The draft guidance explains several aspects of the VMSR Program, including FDA’s approach to determining the eligibility of product codes for the program. Consistent with the goals outlined in the Medical Device User Fee Amendments of 2017 (MDUFA IV) Commitment Letter, the VMSR Program streamlines reporting of device malfunctions. The program began in 2018, when FDA issued an order granting an alternative reporting approach under 21 CFR 803.19. The ...
Turns out, ignorance really is bliss, at least according to the Office for Civil Rights (“OCR”) within the Department of Health and Human Services (“HHS”), in publishing its final rule on algorithmic discrimination by payers and providers. Our concern is that the final rule, based on section 1557 of the Affordable Care Act, creates a double standard under which more sophisticated organizations are held to a higher level of compliance. With the rule set to become effective 300 days after publication, health care providers and payers will have a lot of work to do in that time.
In this post, we will lay ...
Combination products present a tremendous opportunity to improve health outcomes because they leverage multiple disciplines. If we were, for example, to focus on drugs alone with little thought to how they might be delivered, we would surely be missing a chance to enhance safety or effectiveness. Likewise, many devices can be made more effective or safer if paired with a drug.
At the end of 2016, FDA finalized a rule covering Postmarket Safety Reporting for Combination Products that now can be found at 21 C.F.R. Part 4, Subpart B.[1] A few years later, in July 2019, FDA finalized a guidance ...
FDA’s January 3, 2024, Federal Register notice soliciting comments on the agency’s plan to implement best practices for guidance development got me thinking. What do the data show regarding FDA’s performance in moving proposed guidance to final?
If you haven’t read it, the Federal Register notice explains that the Consolidated Appropriations Act of 2023 directs FDA to issue a report identifying best practices for the efficient prioritization, development, issuance, and use of guidance documents and a plan for implementing those practices. The comment period on ...
This post was co-authored by David Schwartz, CEO and Co-Founder at Ethics Through Analytics, and Michael Shumpert, Data Science Executive at Mosaic Data Science.
As you may know, we have been submitting FOIA requests asking FDA to share data from its various programs. In October, FDA granted[1] our April FOIA request in which we asked the agency to add back demographic data fields that it had previously removed from its public Medical Device Report (“MDR”) databases. To find potential bias, we encourage manufacturers to use this data to look for any disproportionate impact its ...
Introduction
Frequently, I am asked by clients to predict how long it will take for FDA to review and clear a 510(k). At a high level, I observe that, according to the data, clearance takes on average about 160 days. Then, beyond that, I observe that review times are highly variable among differing product codes, and the very first Unpacking Averages post I wrote in October 2021 provided a graphic to show just how much variation there was depending on the technology. Here, though, I want to dive into yet another factor that should be taken into account: the seasonality of FDA ...
Those who have been reading this blog know that I like to analyze collections of documents at FDA to discern, using natural language processing, whether, for example, the agency takes more time to address certain topics than others. This month, continuing the analysis I started in my October post regarding device-related citizen petitions, I used topic modeling on the citizen petitions to see which topics are most frequent, and whether there are significant differences in the amount of time it takes for FDA to make a decision based on the topic.
Discerning the Topics
As you probably ...
Our latest focus is trying to bring data to bear on common questions we get asked by clients. Last month the topic was: how well does my device need to perform to get premarket clearance from FDA? This month it is: how big does my sample size need to be for any necessary clinical trial for premarket clearance?
To date, our typical answer has been, it depends.[1] We then explain that it’s not really a regulatory question, but a question for a statistician that is driven by the design of the clinical trial. And the design of the clinical trial is driven by the question the clinical trial is trying ...
On October 30, 2023, President Joe Biden signed the first-ever Executive Order (EO) specifically directing federal agencies on the use and regulation of Artificial Intelligence (AI). A Fact Sheet for this EO is also available.
This EO is a significant milestone as companies and other organizations globally grapple with the trustworthy use and creation of AI. Previous Biden-Harris Administration actions on AI have been statements of principles (e.g., the AI Bill of Rights) or targeted guidance on a particular aspect of AI, such as the Executive Order Addressing Racial ...
This month I wanted to take a data-driven look at FDA’s treatment of citizen petitions, starting with how quickly the agency resolves those petitions. Make no mistake, I have an interest in this topic. Over the more than 35 years I have been practicing law, I have filed multiple petitions, including a 1995 petition that successfully caused FDA to adopt Good Guidance Practices. More recently, on February 6, 2023, I filed a citizen petition asking FDA to rescind its final guidance on Clinical Decision Support Software.[1] On August 5, 2023, when we ...
It’s common for a client to show up at my door, explain that they have performance data on a medical device they have been testing, and ask me whether the performance they found is adequate to obtain FDA clearance through the 510(k) process. I often respond, very helpfully, “it depends.” But for some reason clients aren’t completely satisfied by that.
I then volunteer that a general rule of thumb is 95%, but that this is just a rule of thumb. For Class II medical devices undergoing review through the 510(k) process, the legal standard is that the applicant must show that ...
Introduction
Hardly a day goes by when we don’t see some media report of health care providers experimenting with machine learning, and more recently with generative AI, in the context of patient care. The allure is obvious. But the question is, to what extent do health care providers need to worry about FDA requirements as they use AI?
Recently Colleen and Brad had a debate about whether Medical Device Reports (“MDRs”) tend to trail recalls, or whether MDRs tend to lead recalls. Both Colleen and Brad have decades of experience in FDA regulation, but we have different impressions on that topic, so we decided to inform the debate with a systematic look at the data. While we can both claim some evidence in support of our respective theses based on the analysis, Brad must admit that Colleen’s thesis that MDRs tend to lag recalls has the stronger evidence. We are no longer friends. At the same time, the actual data didn’t really fit either of our predictions well, so we decided to invite James onto the team to help us figure out what was really going on. He has the unfair advantage of not having made any prior predictions, so he doesn’t have any position he needs to defend.
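For readers curious how a lead/lag question like this can be tested, here is a minimal sketch of one way to frame it (not necessarily the approach we used): build monthly MDR and recall counts for a product code, then find the shift that maximizes their correlation. The data below are synthetic.

```python
# Minimal sketch: does one monthly series lead or lag another?
# Synthetic data only; real inputs would be monthly MDR and recall counts
# for a single product code.
import numpy as np
import pandas as pd

def best_lag(mdrs: pd.Series, recalls: pd.Series, max_lag: int = 12) -> int:
    """Return the shift (in months) that maximizes correlation.

    A positive result means MDRs lead recalls by that many months;
    a negative result means MDRs lag recalls.
    """
    corrs = {lag: mdrs.shift(lag).corr(recalls) for lag in range(-max_lag, max_lag + 1)}
    return max(corrs, key=lambda k: -np.inf if np.isnan(corrs[k]) else corrs[k])

# Toy example in which recalls echo MDRs three months later
idx = pd.date_range("2015-01-01", periods=96, freq="MS")
base = np.random.default_rng(0).poisson(50, size=96).astype(float)
mdrs = pd.Series(base, index=idx)
recalls = pd.Series(np.roll(base, 3) / 10, index=idx)
print(best_lag(mdrs, recalls))  # prints 3: MDRs lead recalls
```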
This post explores how bias can creep into word embeddings like word2vec, and I thought it might make it more fun (for me, at least) if I analyze a model trained on what you, my readers (all three of you), might have written.
Often when we talk about bias in word embeddings, we are talking about such things as bias against race or sex. But I’m going to talk about bias a little bit more generally to explore attitudes we have that are manifest in the words we use about any number of topics.
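To make the idea concrete, here is a minimal sketch of the kind of probe involved, using gensim’s word2vec. The three-document corpus is a toy stand-in for the blog text analyzed in the post, and the word pairs are illustrative assumptions; with a corpus this small the numbers are noise, but the mechanics are the same.

```python
# Minimal sketch: train word2vec on a corpus, then compare how strongly a
# target word associates with different attribute words. The corpus below is
# a toy stand-in; with real text, a consistent similarity gap across many
# targets is one signal of bias in the embedding.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

docs = [
    "FDA cleared the device after a long review",
    "the recall suggested the device was defective",
    "the agency approved the drug quickly",
]
sentences = [simple_preprocess(d) for d in docs]

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, seed=42)

print(model.wv.similarity("device", "defective"))
print(model.wv.similarity("device", "approved"))
```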
Would it surprise you if I told you that a popular and well-respected machine learning algorithm developed to predict the onset of sepsis has shown some evidence of racial bias?[1] How can that be, you might ask, for an algorithm that is simply grounded in biology and medical data? I’ll tell you, but I’m not going to focus on one particular algorithm. Instead, I will use this opportunity to talk about the dozens and dozens of sepsis algorithms out there. And frankly, because the design of these algorithms mimics many other clinical algorithms, these comments will be applicable to clinical algorithms generally.
In prior posts here and here, I analyzed new data obtained from FDA through the Freedom of Information Act about FOIA requests. I looked at response times and then started to dive into the topics that requesters were asking about. This is the third and final post on this data set, and it builds on the last post by taking the topics identified there to explore success rates by topic. From there, I look at who is asking about those topics and how successful those individual companies are in their requests.
Continuing my three-part series on FOIA requests using a database of over 120,000 requests filed with FDA over 10 years (2013-22), this month’s post focuses on sorting the requests by topic and then using those topics to dive deeper into FDA response times. In last month’s post, I looked at response times in general. This post uses topic modeling, a natural language processing algorithm I’ve used in previous blog posts, including here[1] and here[2], to discern the major topics of these requests.
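As a rough illustration of that topic-modeling step, here is a minimal sketch using scikit-learn’s LDA implementation. The column name request_description is a hypothetical placeholder for the free-text field in the FOIA log, and the three sample rows stand in for the 120,000 real requests.

```python
# Minimal sketch: fit LDA to FOIA request text, print top words per topic,
# and tag each request with its dominant topic (which can then be joined
# to response times). Column name and sample rows are placeholders.
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

df = pd.DataFrame({"request_description": [
    "all 510(k) submissions for product code ABC",
    "inspection reports for facility XYZ in 2020",
    "adverse event reports for device DEF",
]})

vec = CountVectorizer(stop_words="english")
dtm = vec.fit_transform(df["request_description"])

lda = LatentDirichletAllocation(n_components=3, random_state=0).fit(dtm)

terms = vec.get_feature_names_out()
for i, weights in enumerate(lda.components_):
    top = [terms[j] for j in weights.argsort()[-5:][::-1]]
    print(f"topic {i}: {', '.join(top)}")

df["topic"] = lda.transform(dtm).argmax(axis=1)
```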
Federal agencies in health care publish large amounts of data, and my posts typically analyze that data. To provide more value to readers, I’ve started submitting FOIA requests for unpublished data to produce additional insights into how FDA works. And what better first topic than data on FDA’s responses to FOIA requests?
Information is important, and thus so is access to it. Our democracy needs to know what’s going on in our government, and businesses trying to navigate the FDA regulatory process likewise need to understand how that process works. For both purposes, the FOIA process should be fair and efficient.
FDA has been releasing data on its FOIA process, specifically its FOIA logs, for a few years. For data analysis purposes, those data are missing some important fields such as the date of the final decision. Further, when it comes to looking at the data on the closed cases, the data only go back four years. In my experience, the pandemic years were anomalous in so many ways that we can’t treat any data from the last three years as typical. As a result, I wanted to go back 10 years.
In this episode of the Diagnosing Health Care Podcast: The U.S. Food and Drug Administration (FDA) recently issued a final guidance document clarifying how the agency intends to regulate clinical decision support (CDS) software.
How has this document caused confusion for industry? How can companies respond?
Introduction
Let’s say FDA proposed a guidance document that would change the definition of “low cholesterol” for health claims. Now let’s say that when FDA finalized the guidance, instead of addressing that topic, FDA banned Beluga caviar. If you are interested in Beluga caviar, would you think you had adequate opportunity to comment? Would you care if FDA argued that Beluga caviar was high in cholesterol so the two documents were related?
The regulatory environment at the US Food and Drug Administration (“FDA”) has a tremendous impact on how companies operate, and consequently data on that environment can be quite useful in business planning. In keeping with the theme of these posts of unpacking averages, it’s important to drill down sufficiently to get a sense of the regulatory environment in which a particular company operates rather than rely on more global averages for the entire medical device industry. On the other hand, getting too specific in the data and focusing on one particular product category can prevent a company from seeing the forest for the trees.
Recently, I was asked by companies interested in the field of digital medical devices used in the care of people with diabetes to help them assess trends in the regulatory environment. To do that, I decided to create an index that would capture the regulatory environment for medium-risk digital diabetes devices, trying to avoid getting too specific but also avoiding global data on all medical devices. In this sense, the index is like any other index, such as the Standard & Poor’s 500, which is used to assess the economic performance of the largest companies in terms of capitalization. My plan was to first define an index of product codes for these medium-risk digital diabetes products, then use that index to assess the regulatory environment in both premarket and postmarket regulatory requirements.
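A minimal sketch of the mechanics, not the actual index: pick a basket of product codes, then compute yearly premarket metrics for that basket from the public 510(k) download. The product codes and column names below are assumptions for illustration.

```python
# Minimal sketch: a product-code "index" tracked over time. The codes in the
# basket and the column names (datereceived, decisiondate, productcode) are
# illustrative assumptions about the public 510(k) download, not the real index.
import pandas as pd

BASKET = ["AAA", "BBB", "CCC"]  # hypothetical medium-risk digital diabetes codes

k510 = pd.read_csv("510k_download.csv", parse_dates=["datereceived", "decisiondate"])
idx = k510[k510["productcode"].isin(BASKET)].copy()
idx["review_days"] = (idx["decisiondate"] - idx["datereceived"]).dt.days

yearly = idx.groupby(idx["decisiondate"].dt.year).agg(
    clearances=("productcode", "size"),
    median_review_days=("review_days", "median"),
)
print(yearly)
```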
It is certainly easy, when writing code to accomplish some data science task, to start taking the data at face value. In my mind, the data can simply become what they claim to be. But it’s good to step back and remember the real world in which these data are collected, and how skeptical we need to be regarding their meaning. I thought this month might be an opportunity to show how two different FDA databases produce quite different results when they should be the same.
The motivation for this month’s post was my frustration with the techniques for searching FDA’s 510(k) database. Here I’m not talking about just using the search feature that FDA provides online. Instead, I have downloaded all of the data from that database and created my own search engine, but there are still inherent limitations in what the data contain and how they are structured. For one, if you want to submit a premarket notification for an over-the-counter product, it really isn’t easy to find predicates that are specifically cleared for over-the-counter use without a lot of manual work.
To see if I could find an easier way, I decided to use the database FDA maintains for unique device identifiers, called the Global Unique Device Identification Database (GUDID). You can search that database using the so-called AccessGUDID through an FDA link that takes you to the NIH where the database is stored. That site only allows for pretty simple search, so for what I needed to do, I downloaded the entire database so I could work directly on the data myself.
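For anyone who wants to try the same thing, here is a minimal sketch of the filtering step under stated assumptions: a device file joinable to a product code file on the primary device identifier, with an over-the-counter flag. The file names, column names, and pipe delimiter are assumptions about the release layout; check the release documentation.

```python
# Minimal sketch: find over-the-counter devices for one product code in the
# downloaded GUDID release. File names, column names, and the pipe delimiter
# are assumptions about the release layout; verify against the documentation.
import pandas as pd

device = pd.read_csv("device.txt", sep="|", dtype=str, low_memory=False)
codes = pd.read_csv("productCodes.txt", sep="|", dtype=str)

merged = device.merge(codes, on="PrimaryDI", how="inner")
otc = merged[
    (merged["productCode"] == "ABC")  # hypothetical product code
    & (merged["otc"].str.lower() == "true")
]
print(len(otc), "OTC records found")
```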
While the UDI database is enormous at this juncture (over 3 million products), what I found left me with questions about just how comprehensive and complete the data are. At the same time, it seems like a good way to supplement the information that can be gleaned from the 510(k) database.
Over the spring and summer, I did a series of posts on extracting quality information from FDA enforcement initiatives like warning letters, recalls, and inspections. But obviously FDA enforcement actions are not the only potential sources of quality data that FDA maintains. FDA has what is now a massive data set on Medical Device Reports (or “MDRs”) that can be mined for quality data. Medical device companies can, in effect, learn from the experiences of their competitors about what types of things can go wrong with medical devices.
The problem, of course, is that the interesting data in MDRs are what a data scientist would call unstructured data, in this case English-language text describing a product problem, where the information or insights cannot be easily extracted given the sheer volume of the reports. In calendar year 2021, for example, FDA received almost 2 million MDRs. It just isn’t feasible for a human to read all of them.
That’s where a form of machine learning comes in: natural language processing, and more specifically topic modeling. I used topic modeling last November for a post about major trends over the course of a decade in MDRs. Now I want to show how the same technique can be used to find more specific experiences with specific types of medical devices to inform quality improvement.
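To show the mechanics, here is a minimal sketch that narrows the narratives to a single product code and extracts themes. I use NMF on TF-IDF here as one common topic-modeling choice; the column names are hypothetical stand-ins for fields in the MDR download.

```python
# Minimal sketch: themes in MDR narratives for one device type. NMF on TF-IDF
# is one common choice for this; "product_code" and "foi_text" are hypothetical
# stand-ins for the MDR download's field names.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

mdrs = pd.read_csv("mdr_narratives.csv", dtype=str)
texts = mdrs.loc[mdrs["product_code"] == "ABC", "foi_text"].dropna()

vec = TfidfVectorizer(stop_words="english", max_features=5000)
X = vec.fit_transform(texts)

nmf = NMF(n_components=8, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for i, weights in enumerate(nmf.components_):
    top = [terms[j] for j in weights.argsort()[-6:][::-1]]
    print(f"theme {i}: {', '.join(top)}")
```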
A private equity client asked us recently to assess a rumor that FDA was on the warpath in enforcing the 510(k) requirement on medical devices from a particular region. Such a government initiative would significantly deter investments in the companies doing the importing. Turns out, the agency was not: its recent activities in the region were well within historical norms.
But the project got us thinking: what does the agency’s enormous database on import actions tell us about its enforcement priorities more generally? There are literally thousands of ways to slice and dice the import data set for insights, but we picked just one as an example. We wanted to assess, globally, over the last 20 years, in which therapeutic areas FDA has been enforcing the 510(k) requirement most often.
You might be thinking, that’s an odd title: obviously FDA’s breakthrough device designation is helpful. However, after looking at the data, my conclusion is that I would avoid the breakthrough device designation for any product that qualifies for the 510(k) process. The designation is likely not helpful for such devices.
[Update - August 3, 2022: See the bottom of this post.]
Recalls have always been a bit of a double-edged sword. Obviously, companies hate recalls because a recall means their products are defective in some manner, potentially putting users at risk and damaging the brand. They are also expensive to execute. But a lack of recalls can also be a problem, if the underlying quality issues still exist but the companies are simply not conducting recalls. Recalls are necessary and appropriate in the face of quality problems.
Thus, in terms of metrics, medical device companies should not adopt reducing recalls as a goal, as that will lead to behavior that could put users at risk by leaving bad products on the market. Instead, the goal should be to reduce the underlying quality problems that might trigger the need for a recall.
What are those underlying quality problems? To help medical device manufacturers focus on the types of quality problems that might force them to conduct a recall, we have used the FDA recall database to identify the most common root causes sorted by the clinical area for the medical device.
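As a sketch of the tabulation involved (with hypothetical column names standing in for the recall database’s fields):

```python
# Minimal sketch: top recall root causes within each clinical area.
# "medical_specialty" and "root_cause_description" are hypothetical stand-ins
# for the relevant fields in the FDA recall database download.
import pandas as pd

recalls = pd.read_csv("device_recalls.csv", dtype=str)

top_causes = (
    recalls.groupby("medical_specialty")["root_cause_description"]
    .value_counts()          # counts, sorted descending within each specialty
    .groupby(level=0)
    .head(5)                 # keep the five most common causes per specialty
)
print(top_causes)
```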
Most companies want to avoid FDA warning letters. To help medical device companies identify violations that might lead to a warning letter, this post will dive deeply into which specific types of violations are often found in warning letters that FDA issues.
Background
As you probably know, FDA has a formal process for evaluating inspection records and other materials to determine whether issuing a warning letter is appropriate. Those procedures can be found in chapter 4 of FDA’s Regulatory Procedures Manual. Section 4-1-10 of that chapter requires that warning letters include specific legal citations, in addition to plain English explanations of violations. The citations are supposed to make reference to both the statute and any applicable regulations.
As a consequence, to understand the content of the warning letters, we need to search for both statutory references and references to regulations. Because statutes are deliberately drafted to be broader in their language, references to the regulations tend to be more meaningful.
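Mechanically, pulling those regulatory citations out of letter text is a regular-expression exercise. A minimal sketch, using two made-up letter snippets:

```python
# Minimal sketch: extract and count 21 CFR citations from warning letter text.
# The two snippets below are made up for illustration.
import re
from collections import Counter

letters = [
    "Your firm failed to establish procedures as required by 21 CFR 820.100(a).",
    "These methods violate 21 CFR 820.30 and section 501(h) of the Act.",
]

cfr = re.compile(r"21\s*C\.?F\.?R\.?\s*(?:§\s*)?(\d+\.\d+)")
counts = Counter(m for text in letters for m in cfr.findall(text))
print(counts.most_common())  # e.g., [('820.100', 1), ('820.30', 1)]
```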
Overview
In this month’s post, I explore what kinds of inspection citations most often precede a warning letter in the medical device realm. In this exercise, I do not try to prove causation. I am simply exploring correlation. But with that caveat in mind, I think it’s still informative to see what types of inspectional citations, in a high percentage of cases, will precede a warning letter. And, as I’ve said before, joining two different data sets – in this case inspectional data with warning letter data – might just reveal new insights.
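A minimal sketch of that join, under stated assumptions: inspections and warning letters are matched on FDA’s facility identifier (the FEI number) with a one-year window, and all column names are placeholders.

```python
# Minimal sketch: join inspection citations to warning letters issued to the
# same facility within a year of the inspection. The FEI number is FDA's
# facility identifier; all column names here are assumed placeholders.
import pandas as pd

inspections = pd.read_csv("inspection_citations.csv", parse_dates=["inspection_end_date"])
letters = pd.read_csv("warning_letters.csv", parse_dates=["letter_issue_date"])

joined = inspections.merge(letters, on="fei_number", how="left")
in_window = (joined["letter_issue_date"] > joined["inspection_end_date"]) & (
    joined["letter_issue_date"] <= joined["inspection_end_date"] + pd.Timedelta(days=365)
)
followed = joined[in_window].drop_duplicates(subset=["inspection_id", "citation_reference"])

# Share of each citation type followed by a warning letter within a year
rate = followed["citation_reference"].value_counts() / inspections["citation_reference"].value_counts()
print(rate.dropna().sort_values(ascending=False).head(10))
```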
It is common for FDA and others to show a map of the United States with the states color-coded by intensity to showcase the total number of inspections done in each state. Indeed, FDA includes such a map in its newly released dashboard for FDA inspections. In comparing that map with a U.S. map color-coded to reflect where medical device establishments are located, do you notice anything? Not to destroy the suspense for you, but it turns out that FDA tends to inspect where medical device facilities are located. Really.
We wanted to get beneath those numbers in two ways. First, it’s much more informative to look at the data at a county level because there’s actually quite a bit of variation county by county. Second, and more importantly, we wanted to normalize the inspection data by the number of facilities. In other words, by looking at inspections per facility, we can get a better sense of the inspection frequency in each county.
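A minimal sketch of the normalization (the shared county key and file names are placeholders for whatever geography fields the inspection and registration files carry):

```python
# Minimal sketch: inspections per registered facility, by county.
# "county_fips" is a placeholder for a shared geography key.
import pandas as pd

inspections = pd.read_csv("inspections.csv", dtype=str)
facilities = pd.read_csv("registered_facilities.csv", dtype=str)

per_county = (
    inspections.groupby("county_fips").size().rename("inspections").to_frame()
    .join(facilities.groupby("county_fips").size().rename("facilities"), how="inner")
)
per_county = per_county[per_county["facilities"] > 0]
per_county["inspections_per_facility"] = per_county["inspections"] / per_county["facilities"]
print(per_county.sort_values("inspections_per_facility", ascending=False).head())
```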
This month, we’re going to look at a visualization that uses network techniques. Visualizing a network is a matter of nodes and edges. If the network were Facebook, the nodes would be people, and the edges would be the relationships between those people. Instead of people, we are going to look at specific device functionalities as defined by the product codes. And instead of relationships, we are going to look at when device functionalities (i.e., product codes) are used together in a marketed device as evidenced by a 510(k) submission.
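For the curious, building such a graph is straightforward with networkx; here is a minimal sketch with made-up 510(k) numbers and product codes:

```python
# Minimal sketch: product codes as nodes, with an edge (weighted by count)
# whenever two codes appear on the same 510(k). Submission numbers and codes
# below are made up.
from itertools import combinations
import networkx as nx

submissions = {
    "K210001": ["ABC", "DEF"],
    "K210002": ["ABC", "DEF", "GHI"],
    "K210003": ["GHI"],
}

G = nx.Graph()
for codes in submissions.values():
    for a, b in combinations(sorted(set(codes)), 2):
        if G.has_edge(a, b):
            G[a][b]["weight"] += 1
        else:
            G.add_edge(a, b, weight=1)

print(sorted(G.edges(data="weight")))
# [('ABC', 'DEF', 2), ('ABC', 'GHI', 1), ('DEF', 'GHI', 1)]
```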
While this column typically uses data visualizations you’ve probably seen before, I want to introduce one that perhaps you have not. This is in the realm of text analysis. When looking at FDA data, there are numerous places where the most interesting information is not in a data field that can be easily quantified, but rather in narrative text. Take, for example, Medical Device Reports of adverse events, or “MDRs.” While we can do statistical analysis of MDRs showing, for example, which product categories have the most, the really interesting information is in the descriptions ...
In this column, over the coming months, we are going to dig into the data regarding FDA regulation of medical products, going deeper than the averages that FDA publishes in connection with its user fee obligations. For many averages, there’s a high degree of variability, and it’s important for industry to have a deeper understanding. In each case, we will offer a few preliminary observations on the data, but we would encourage a conversation around what others see in the data.
Chart
This is an interactive chart that you can explore by clicking on the colors in the legend to see how specific ...
The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. On one hand, AI may promote better treatment decisions and streamline onerous coding and claims submission; on the other, there are risks associated with unintended bias that may be lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when applied to new patients. This can result in ...
At the January 8-9, 2015 FDA public meeting on the agency's proposal to regulate a portion of lab developed tests (LDTs), there was much debate regarding whether FDA has jurisdiction over IVDs made at clinical laboratories. Not coincidentally, on January 7, the day before the meeting, the American Clinical Laboratory Association released a white paper developed for the Association by a couple of prominent constitutional law scholars. The paper outlined the arguments at a high level against FDA jurisdiction over lab developed tests generally. But with all due respect to the authors as well as the speakers at the FDA public meeting, the discussion to date is taking place at such a high level that I do not find it particularly helpful. Mostly the discussions merely stake out the positions held by interested parties. They don't advance the collective understanding of the issues.
In connection with the public meeting, I developed five questions that help me think through the legal issues. I'd like to share those questions in an effort to drive the discussion to a more granular level where differences can be more effectively debated and resolved. In addition, as with any lawyer, I'm drawn to precedent, so I'd like to share how FDA has tackled similar issues before. At the end of this post, based on precedent but also my conclusion that both sides are overstating their legal positions, I offer a middle-of-the-road path forward.
5 Questions That Frame FDA Authority Over IVDs Made at Labs
In posing these questions, I start with the most basic and simple and then move closer and closer to the current facts. In each case, I'll also give you what I think the answer is.