For decades, FDA’s Center for Devices and Radiological Health (CDRH) has been recognizing standards that can be referenced in premarket medical device submissions. Congress broadly directed federal agencies to begin relying on standards in 1996, through the National Technology Transfer and Advancement Act, but the informal practice dates back to the 1970s. Congress specifically directed FDA to begin using standards for medical device submissions through the Food and Drug Administration Modernization Act of 1997 (FDAMA).
Being a curious person, I wanted to see what FDA has done with that authority by looking at the CDRH database for Recognized Consensus Standards: Medical Devices. My main takeaway is that CDRH is not yet investing enough time and energy in recognizing standards that support digital health and AI.
Findings
I downloaded the data set on September 20, 2024, and looked at when standards were recognized by FDA and to which therapeutic or functional areas they related.
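For readers who want to reproduce that kind of tally, here is a minimal sketch of the approach, assuming the CDRH database has been exported to CSV. The column names used below ("Recognition Date", "Specialty/Area") are placeholders of my own, not necessarily the labels in FDA's actual export, so adjust them to match the file you download.

```python
# A minimal sketch of tallying recognized consensus standards by year and by
# therapeutic/functional area, assuming a CSV export of the CDRH database.
# Column names are placeholders; match them to the actual export.
import pandas as pd

df = pd.read_csv("recognized_consensus_standards.csv",
                 parse_dates=["Recognition Date"])

# Standards recognized per calendar year
per_year = df["Recognition Date"].dt.year.value_counts().sort_index()
print(per_year)

# Standards recognized per therapeutic or functional area
per_area = df["Specialty/Area"].value_counts()
print(per_area.head(20))
```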
New from the Diagnosing Health Care Podcast: One year ago, on October 30, 2023, President Joe Biden signed an executive order laying the groundwork both for how federal agencies should responsibly incorporate artificial intelligence (AI) within their workflows and for how each agency should regulate the use of AI in the industries it oversees.
What has happened in the past year, and how might things change in the next?
On this episode, Epstein Becker Green attorneys Lynn Shapiro Snyder, Eleanor Chung, and Rachel Snyder Good reflect on what is new in health care AI as a result of the 2023 executive order and discuss what industry stakeholders should be doing to comply and prepare for future federal regulation of AI in health care.
The widespread availability of Artificial Intelligence (AI) tools has enabled the growing use of “deepfakes,” whereby the human voice and likeness can be replicated seamlessly such that impersonations are impossible to detect with the naked eye (or ear). These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate leaders and financial officers, and enable access to networks, communications, and sensitive information.
In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks to organizations posed by deepfakes and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Thus, deepfake perpetrators could be mere amateur mischief makers or savvy, experienced cybercriminals.
I may be jumping the gun here, but I’m anxious to understand how the new flurry of AI medical devices is performing in the marketplace, or more specifically, whether the devices are failing to perform in a way that jeopardizes health.
FDA keeps a list these days of medical devices that involve AI, and here’s the recent growth in clearances or other approvals.
Note that for calendar year 2024, we have only first-quarter data.
The growth is notable. As these devices enter the market, they are subject to all the typical medical device postmarket regulatory ...
Most people have seen the growth in artificial intelligence/machine learning (AI/ML)-based medical devices being cleared by FDA. FDA updates that data once a year at the close of its fiscal year. Clearly the trend is up. But that's a bit backward-looking, in the sense that we only learn after the fact about FDA clearances for therapeutic applications of AI/ML. I want to look forward. I want a leading indicator, not a lagging one.
I also want to focus on uses of AI/ML that are truly therapeutic or diagnostic, as opposed to the wide variety of lifestyle and wellness AI/ML products and the applications used on the administrative side of healthcare. As a result, in this post I explore the information on clinicaltrials.gov because not only are those data focused on truly health-related uses, they are also forward-looking. The more recent clinical trials involve products still under investigation and not yet commercially available or even submitted to FDA.
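As a rough illustration of how one might pull that forward-looking signal, here is a sketch that searches ClinicalTrials.gov for AI-related studies and tallies them by start year. It assumes the site's v2 REST API at https://clinicaltrials.gov/api/v2/studies and its query.term parameter; the exact parameters, field paths, and paging behavior should be verified against the current API documentation before relying on the counts.

```python
# A rough sketch of tallying AI/ML-related clinical trials by start year,
# assuming the ClinicalTrials.gov v2 API; verify parameter names and field
# paths against the current API documentation.
import requests
from collections import Counter

url = "https://clinicaltrials.gov/api/v2/studies"
params = {
    "query.term": '"artificial intelligence" OR "machine learning"',
    "pageSize": 100,
}
starts = Counter()

while True:
    resp = requests.get(url, params=params, timeout=30)
    resp.raise_for_status()
    data = resp.json()
    for study in data.get("studies", []):
        start = (study.get("protocolSection", {})
                      .get("statusModule", {})
                      .get("startDateStruct", {})
                      .get("date", ""))
        if start:
            starts[start[:4]] += 1  # tally by start year
    token = data.get("nextPageToken")
    if not token:
        break
    params["pageToken"] = token

for year, n in sorted(starts.items()):
    print(year, n)
```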
On May 17, 2024, Colorado Governor Jared Polis signed into law SB 24-205—concerning consumer protections in interactions with artificial intelligence systems—after the Senate passed the bill on May 3. The law adds a new part 17, “Artificial Intelligence,” to Article I, Title 6 of the Colorado Consumer Protection Act, to take effect on February 1, 2026. This makes Colorado “among the first in the country to attempt to regulate the burgeoning artificial intelligence industry on such a scale,” Polis said in a letter to the Colorado General Assembly.
The new law will ...
Turns out, ignorance really is bliss, at least according to the Office for Civil Rights (“OCR”) within the Department of Health and Human Services (“HHS”), in publishing its final rule on algorithmic discrimination by payers and providers. Our concern is that the final rule, based on section 1557 of the Affordable Care Act, creates a double standard under which more sophisticated organizations are held to a higher level of compliance. With the rule set to become effective 300 days after publication, health care providers and payers will have a lot of work to do in that time.
In this post, we will lay ...
On October 30, 2023, President Joe Biden signed the first-ever Executive Order (EO) that specifically directs federal agencies on the use and regulation of Artificial Intelligence (AI). A Fact Sheet for this EO is also available.
This EO is a significant milestone as companies and other organizations globally grapple with the trustworthy use and creation of AI. Previous Biden-Harris Administration actions on AI have taken the form of guiding principles (e.g., the AI Bill of Rights) or targeted guidance on a particular aspect of AI, such as the Executive Order Addressing Racial ...
Introduction
Hardly a day goes by when we don’t see some media report of health care providers experimenting with machine learning, and more recently with generative AI, in the context of patient care. The allure is obvious. But the question is, to what extent do health care providers need to worry about FDA requirements as they use AI?
This post explores how bias can creep into word embeddings like word2vec, and I thought it might be more fun (for me, at least) if I analyzed a model trained on what you, my readers (all three of you), might have written.
Often when we talk about bias in word embeddings, we are talking about things like bias based on race or sex. But I'm going to talk about bias a little more generally, to explore the attitudes we hold that are manifest in the words we use about any number of topics.
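To make the mechanics concrete, here is a small sketch of the kind of probe this involves, using gensim's Word2Vec. The tiny corpus and the probe words below are placeholders of my own, not the reader-written text discussed in the post; the point is simply that once a model is trained, cosine similarity shows which attribute words a topic word sits closest to.

```python
# A small, illustrative bias probe with gensim's Word2Vec. The corpus and
# probe words are placeholders; a real analysis would use a much larger
# collection of documents.
from gensim.models import Word2Vec
from gensim.utils import simple_preprocess

comments = [
    "the new regulation is burdensome and confusing",
    "the guidance was helpful and clear",
    "ai tools are exciting but risky",
    # ... many more documents in a real analysis
]
sentences = [simple_preprocess(doc) for doc in comments]

model = Word2Vec(sentences, vector_size=50, window=5,
                 min_count=1, epochs=50, seed=1)

# One crude probe: is "ai" embedded closer to a positive or a negative word?
for attribute in ("exciting", "risky"):
    print("ai ~", attribute, round(model.wv.similarity("ai", attribute), 3))
```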
Would it surprise you if I told you that a popular and well-respected machine learning algorithm developed to predict the onset of sepsis has shown some evidence of racial bias?[1] How can that be, you might ask, for an algorithm that is simply grounded in biology and medical data? I’ll tell you, but I’m not going to focus on one particular algorithm. Instead, I will use this opportunity to talk about the dozens and dozens of sepsis algorithms out there. And frankly, because the design of these algorithms mimics many other clinical algorithms, these comments will be applicable to clinical algorithms generally.
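One simple way that kind of evidence surfaces is by comparing an alert's sensitivity across patient groups. The sketch below uses made-up data and a hypothetical risk-score threshold, not any particular vendor's algorithm, just to show the computation.

```python
# Checking a sepsis alert's sensitivity (true positive rate) by patient group.
# The data and threshold are hypothetical placeholders.
import pandas as pd

df = pd.DataFrame({
    "risk_score": [0.9, 0.4, 0.8, 0.3, 0.7, 0.2, 0.85, 0.35],
    "had_sepsis": [1,   1,   1,   1,   1,   0,   1,    1],
    "race":       ["A", "A", "A", "B", "B", "B", "A",  "B"],
})
threshold = 0.5
df["alerted"] = df["risk_score"] >= threshold

# Among patients who actually developed sepsis, how often did the alert fire?
septic = df[df["had_sepsis"] == 1]
print(septic.groupby("race")["alerted"].mean())
```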
In the absence of a federal law directly aimed at regulating artificial intelligence (AI), the Federal Trade Commission (FTC) is seeking to position itself as one of the primary regulators of this emergent technology through existing laws under the FTC’s ambit. As we recently wrote, the FTC announced the establishment of an Office of Technology, designed to provide technology expertise and support the FTC in enforcement actions. In a May 3, 2023 opinion piece published in the New York Times entitled “We Must Regulate A.I. Here’s How,” Lina Khan, the Chairperson of the FTC, outlined at least three potential avenues for FTC enforcement and oversight of artificial intelligence technology.
On February 17, 2023, the Federal Trade Commission (“FTC”) announced the creation of the Office of Technology (the “OT”), which will be headed by Stephanie T. Nguyen as Chief Technology Officer. This development comes on the heels of increasing FTC scrutiny of technology companies. The OT will provide technical expertise and strengthen the FTC’s ability to enforce competition and consumer protection laws across a wide variety of technology-related topics, such as artificial intelligence (“AI”), automated decision systems, digital advertising, and the collection and sale of data. In addition to assisting with enforcement matters, the OT will be responsible for, among other things, policy and research initiatives, and advising the FTC’s Office of Congressional Relations and its Office of International Affairs.
The success of an artificial intelligence (AI) algorithm depends in large part upon trust, yet many AI technologies function as opaque ‘black boxes.’ Indeed, some are intentionally designed that way. This charts a mistaken course.
Artificial Intelligence (“AI”) applications are powerful tools that already have been deployed by companies to improve business performance across the health care, manufacturing, retail, and banking industries, among many others. From large-scale AI initiatives to smaller AI vendors, AI tools are quickly becoming a mainstream fixture in many industries and will likely infiltrate many more in the near future.
But are these companies also prepared to defend the use of AI tools should there be compliance issues at a later time? What should companies do before launching AI tools ...
The application of artificial intelligence technologies to health care delivery, coding, and population management may profoundly alter the manner in which clinicians and others interact with patients and seek reimbursement. While AI may promote better treatment decisions and streamline onerous coding and claims submission, there are risks associated with unintended bias that may be lurking in the algorithms. AI is trained on data. To the extent that data encodes historical bias, that bias may cause unintended errors when the algorithm is applied to new patients. This can result in ...
After a Congressional override of a Presidential veto, the National Defense Authorization Act (NDAA) became law on January 1, 2021. Notably, the NDAA not only provides appropriations for military and defense purposes but, under Division E, also includes the most significant U.S. legislation concerning artificial intelligence (AI) to date: the National Artificial Intelligence Initiative Act of 2020 (NAIIA).
The NAIIA sets forth a multi-pronged national strategy and funding approach to spur AI research, development and innovation within the U.S., train and prepare an ...
On October 22, 2019, the Centers for Medicare and Medicaid Services (“CMS”) issued a Request for Information (“RFI”) to obtain input on how CMS can utilize Artificial Intelligence (“AI”) and other new technologies to improve its operations. CMS’ objectives to leverage AI chiefly include identifying and preventing fraud, waste, and abuse. The RFI specifically states CMS’ aim “to ensure proper claims payment, reduce provider burden, and overall, conduct program integrity activities in a more efficient manner.” The RFI follows last month’s White House ...
The healthcare industry is still struggling to address its cybersecurity issues as 31 data breaches were reported in February 2019, exposing data from more than 2 million people. However, the emergence of artificial intelligence (AI) may provide tools to reduce cyber risk.
AI cybersecurity tools can enable organizations to improve data security by detecting and thwarting potential threats through automated systems that continuously monitor network behavior and identify network abnormalities. For example, AI may offer assistance in breach prevention by proactively ...
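As a toy illustration of the continuous-monitoring idea, the sketch below trains an anomaly detector on what "normal" traffic looks like and flags departures from it. The feature values are invented for the example; a real deployment would use engineered features from flow logs, authentication events, and the like.

```python
# A toy anomaly-detection example: learn "normal" network behavior and flag
# departures from it. Feature values are invented for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# columns: [bytes transferred, connections per minute]
normal_traffic = rng.normal(loc=[500, 20], scale=[50, 3], size=(500, 2))

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(normal_traffic)

new_events = np.array([
    [510, 21],      # looks like routine traffic
    [50000, 400],   # large, exfiltration-like spike
])
print(detector.predict(new_events))  # 1 = normal, -1 = flagged as anomalous
```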