On October 21, 2025, the acting administrator of the Office of Information and Regulatory Affairs (OIRA) in the Office of Management and Budget issued Memorandum M-25-36, which contains guidance for federal agencies on “how to bolster, streamline, and speed” the deregulatory agenda prioritized by the Trump administration in 2025 (“the Memorandum” or “M-25-36”).
The Memorandum furthers two Executive Orders (EOs) issued earlier in the year. EO 14192, entitled “Unleashing Prosperity Through Deregulation,” requires that for every new regulation issued, ten must be repealed. EO 14219 seeks to ensure “Lawful Governance” to implement the president’s Department of Government Efficiency Deregulatory Initiative. M-25-36 also furthers a Presidential Memorandum of April 9, 2025, entitled “Directing the Repeal of Unlawful Regulations.”
The Memorandum, which establishes timelines and guidelines for OIRA review, focuses on: 1) speeding up the OIRA review process; 2) repealing facially unlawful regulations; and 3) developing better deregulatory records. We discuss each of these sections in turn before providing some thoughts in the health care context.
In summer 2025, the U.S. Court of Appeals for the Sixth Circuit issued a strongly worded decision in In Re: FirstEnergy Corporation (No. 24-3654)—confirming the core concept that internal investigations conducted by counsel and in anticipation of litigation are privileged and protected from disclosure. When securities plaintiffs in the case sweepingly sought all documents “related to the internal investigation,” the district court incorrectly ordered their production. After much legal wrangling, the Sixth Circuit rebuked the district court on August 7, 2025, and reaffirmed that ruling in a per curiam opinion filed October 3, 2025.
New from the Diagnosing Health Care Podcast: Why is now the moment for women’s health?
On this episode, Epstein Becker Green attorneys Rachel Snyder Good, Beth Scarola, and Laura DePonio sit down with Sheila Biggs, Vice President of Jarrard, to explore how women’s health is evolving from a niche focus into a mainstream industry priority. They discuss the forces accelerating growth, the opportunities for change, and the challenges that still need to be addressed across outcomes, access, and patient experience.
This episode unpacks several key topics, including:
- what “women’s health” really means today—far beyond fertility and reproductive care;
- how decades of underinvestment created new opportunities for innovation, equity, and growth;
- why investors and policymakers are turning their attention to this space, and what’s driving the momentum;
- the biggest gaps still to close in outcomes, access, and patient experience;
- how storytelling and strategy can help reshape the public narrative around women’s health; and
- where the industry is headed next—including insights related to the upcoming Women’s Health Innovation Summit.
Tune in to find out what’s driving unprecedented momentum across policy, investment, and innovation in the women’s health space.
Imagine going online to chat with someone and finding an account with a profile photo, a description of where the person lives, and a job title . . . indicating she is a therapist. You begin chatting and discuss the highs and lows of your day among other intimate details about your life because the conversation flows easily. Only the “person” with whom you are chatting is not a person at all; it is a “companion AI.”
Recent statistics indicate a dramatic rise in adoption of these companion AI chatbots, with 88% year-over-year growth, over $120 million in annual revenue, and 337 active apps (including 128 launched in 2025 alone). Further statistics about pervasive adoption among youth indicate that three of every four teens have used companion AI at least once, and half use it routinely. In response to these trends and the potential negative impacts on mental health in particular, state legislatures are quickly stepping in to require transparency, safety, and accountability to manage risks associated with this new technology, particularly as it pertains to children.
In September 2025, the U.S. Attorney’s Office for the Eastern District of Pennsylvania (EDPA) announced that it would be implementing a White-Collar Justice Program to strengthen its white-collar enforcement framework. Among other things, the program will “empower Assistant United States Attorneys to aggressively pursue complex investigations and significant new matters on their own initiative.”
This announcement demonstrates another step in federal districts ramping up their white-collar enforcement efforts while encouraging robust procedures for compliance and self-disclosure. This is a trend several years in the making: in September 2022, then-Deputy Attorney General Lisa Monaco directed U.S. attorneys and others within the DOJ to review their policies on corporate voluntary self-disclosure, and to draft and share a formal written policy to incentivize such self-disclosure, if one was lacking.
On October 11, 2025, California Governor Gavin Newsom signed AB 1415, which regulates private equity and hedge fund activity by expanding the Office of Health Care Affordability’s (OHCA) jurisdiction and notice requirements. Though the law is a compromise from last session’s AB 3129—which the Governor vetoed on September 28, 2024—it nevertheless represents a significant change for private equity groups, hedge funds, and management services organizations (MSOs) in the state.
Well before the latest government shutdown, the U.S. Department of Justice’s National Security Division (DOJ NSD) issued a final rule at 28 CFR Part 202 (“2025 Final Rule” or “Rule”) to help prevent “countries of concern” or “covered persons” from accessing U.S. government-related data and Americans’ bulk sensitive personal data. The 2025 Final Rule took effect in April—and after a 90-day safe harbor period, the DOJ began enforcement on July 8.
Six months after implementation—with the U.S. Senate now passing the BIOSECURE Act restricting certain biotech business with China—compliance remains key for affected stakeholders, including those exchanging personal health data. As we reported in July, the 2025 Final Rule implemented the prior administration’s Executive Order 14117 of February 28, 2024, by prohibiting and restricting “bulk” data transactions with countries that could threaten U.S. national security through the use of Americans’ sensitive personal data.
While the 2025 Final Rule remains largely untested, federal agencies and stakeholders alike have taken action to test the bounds of the Rule and, in some instances, expand applicability beyond 28 CFR Part 202. Below is a brief refresher of the key elements of the Rule and some recent developments.
On October 6, 2025, California Governor Gavin Newsom signed SB 351, aimed at limiting the involvement of private equity groups and hedge funds in health care practices. While the new law does create new statutory requirements governing hedge fund and private equity group involvement in the management of physician and dental practices, those requirements largely reflect existing California case law and Medical Board of California guidance. Specifically, the new law:
- prohibits a private equity group or hedge fund that is involved—including as an investor or owner—with a physician or dental practice doing business in the state, from interfering with the professional judgment of physicians and dentists in making health care decisions;
- prohibits these entities from exercising power over specified actions, including hiring practices and coding and billing procedures for patient services; and
- prohibits contracts between a private equity group or hedge fund or an entity controlled by a private equity group or a hedge fund and a physician or dental practice, if the contract would allow the conduct described above or impose a noncompete or nondisparagement clause.
The law will take effect on January 1, 2026. The state attorney general is empowered to enforce the new law through injunctive relief and other equitable remedies. It is the latest in a national trend among states to strengthen corporate practice of medicine (CPOM) doctrines by limiting the influence of non-licensed entities in clinical decision-making. The bill, introduced by California State Senator Christopher Cabaldon, passed the state legislature in September with bipartisan support.
In the latest in a series of recent cases involving the “but-for” causation standard for Anti-Kickback Statute (“AKS”) claims, Judge Waverly D. Crenshaw in the U.S. District Court for the Middle District of Tennessee has dismissed United States, et al., ex rel. Nolan, et al. v. HCA Healthcare, Inc., 2025 WL 2713747 (M.D. Tenn. Sept. 22, 2025) pursuant to Rules 12(b)(6) and 9(b).
Judge Crenshaw weighed in Nolan on whether the relators, co-owners of Pathologists Laboratory P.C. (“PLPC”), had plausibly alleged that: 1) defendant HCA Healthcare Inc. (“HCA”) solicited or received “remuneration” for purposes of an AKS violation; and 2) PLPC or the second lab submitted claims “resulting from” an illegal kickback for purposes of a False Claims Act (FCA) violation. He ultimately determined that the relators had not, in fact, plausibly alleged that HCA either solicited or received “remuneration” for purposes of the AKS.
In the wake of a lawsuit filed in federal district court in California in August—alleging that an artificial intelligence (AI) chatbot encouraged a 16-year-old boy to commit suicide—a similar suit filed in September is now claiming that an AI chatbot is responsible for the death of a 13-year-old girl.
It’s the latest development illustrating a growing tension between AI’s promise to improve access to mental health support and the alleged perils of unhealthy reliance on AI chatbots by vulnerable individuals. This tension is evident in recent reports that some users, particularly minors, are becoming addicted to AI chatbots, causing them to sever ties with supportive adults, lose touch with reality and, in the worst cases, engage in self-harm or harm to others.
While not yet reflected in diagnostic manuals, experts are recognizing the phenomenon of “AI psychosis”—distorted thoughts or delusional beliefs triggered by interactions with AI chatbots. According to Psychology Today, the term describes cases in which AI models have amplified, validated, or even co-created psychotic symptoms with individuals. Evidence indicates that AI psychosis can develop in people with or without a preexisting mental health issue, although the former is more common.