The widespread availability of artificial intelligence (AI) tools has enabled the growing use of “deepfakes,” in which the human voice and likeness can be replicated so seamlessly that impersonations are nearly impossible to detect with the naked eye (or ear). These deepfakes pose substantial new risks for commercial organizations. For example, deepfakes can threaten an organization’s brand, impersonate its leaders and financial officers, and enable unauthorized access to its networks, communications, and sensitive information.
In 2023, the National Security Agency (NSA), Federal Bureau of Investigation (FBI), and Cybersecurity and Infrastructure Security Agency (CISA) released a Cybersecurity Information Sheet (the “Joint CSI”) entitled “Contextualizing Deepfake Threats to Organizations,” which outlines the risks deepfakes pose to organizations and recommends steps that organizations, including national critical infrastructure companies (such as financial services, energy, healthcare, and manufacturing organizations), can take to protect themselves. Loosely defining deepfakes as “multimedia that have either been created (fully synthetic) or edited (partially synthetic) using some form of machine/deep learning (artificial intelligence),” the Joint CSI cautioned that the “market is now flooded with free, easily accessible tools,” such that “fakes can be produced in a fraction of the time with limited or no technical expertise.” Deepfake perpetrators may therefore range from amateur mischief makers to savvy, experienced cybercriminals.