Vital Signs: Digital Health Law Update | Fall-Winter 2024

INTRODUCTION 

Note From the Editors 

We bring you Vital Signs, a curated, one-stop resource on the most notable digital health law updates from our U.S. and global contributors. In Industry Insights, we present a timely discussion of the increase in litigation and enforcement targeting health care entities' use of artificial intelligence tools. In our U.S. Developments section, we provide numerous FDA updates, along with the latest HIPAA, privacy, and breach topics. In our Global Developments section, don't miss our discussion of the European Union's new AI Act, which entered into force on August 1, 2024, and of the first tranche of Australia's much-anticipated privacy law reforms. We thank our contributors, who are committed to bringing you curated updates covering digital health developments of interest.

INDUSTRY INSIGHTS

AI Litigation and Enforcement Trends for Health Care Entities

Increasingly, health care entities ("HCEs") are navigating the impact of artificial intelligence ("AI") tools that collect and analyze patient data, including an increase in Health Insurance Portability and Accountability Act ("HIPAA") and other privacy-related litigation and regulation. A few common issues for further compliance consideration have emerged from recent litigation and enforcement: (i) data scraping and data sharing; (ii) utilization management determinations; and (iii) discriminatory bias.

Data-scraping and data-sharing claims typically involve an HCE's use of third-party AI technology to analyze patient data, which plaintiffs generally argue allows those third parties to inappropriately access protected health information ("PHI") in a manner that may implicate HIPAA and other privacy laws. Courts have not made definitive rulings on this issue, but some have indicated that the merit of such claims may depend on the scope and effect of anonymization of the data. For example, the Seventh Circuit upheld the dismissal of privacy and breach claims associated with data sharing against a university medical center and a large technology company based on lack of standing and failure to state a claim. There, the court held that the plaintiff had not plausibly alleged any harm resulting from the disclosure of anonymized patient data to the third-party technology company. Conversely, the Western District of Wisconsin denied a motion to dismiss similar but distinguishable claims, stating that the plaintiff's disclosed data was not anonymized and that the plaintiff plausibly alleged harm due to the possible diminished sales value of the PHI.

Lawsuits focusing on the use of AI in payors' utilization management determinations have also increased, with plaintiffs arguing that such use of AI has led to improper denials of medically necessary care. As these lawsuits are still pending, it is unclear how courts will rule on these issues. However, federal agencies and various state governments have issued regulations and passed statutes limiting the use of AI in medical necessity determinations and imposing disclosure requirements. For example, the Centers for Medicare & Medicaid Services ("CMS") issued final rules on AI in utilization management, prohibiting determinations based solely on AI. Additionally, California passed Senate Bill 1120 and Assembly Bill 3030, which similarly limit insurers' use of AI with respect to medical necessity determinations and require certain health care providers to make specific disclosures when using generative AI ("GenAI") in patient communications.

HCEs may also face risks that their use of AI tools perpetuates bias. While significant litigation or enforcement actions against HCEs alleging discriminatory AI practices have not yet occurred, some state attorneys general have publicly voiced concerns about AI tools perpetuating unfair bias. For example, California Attorney General Bonta previously announced an investigation into racial bias in health care algorithms, citing concerns that AI tools could perpetuate unfair bias if the underlying data does not fairly represent a certain patient population. Additionally, Massachusetts Attorney General Campbell issued an advisory clarifying that AI developers, suppliers, and users may not use AI technology to discriminate against individuals based on a legally protected characteristic.

Given this regulatory and enforcement landscape, the use of AI tools with respect to patient data may present operational challenges for HCEs. To help mitigate risk in the AI space, HCEs may consider assessing their use of AI tools as part of ongoing risk assessment measures, implementing procedures to de-identify PHI before making such data available to third parties, and utilizing data agreements and patient waivers.
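
For illustration only, below is a minimal Python sketch, using hypothetical field names, of the kind of pre-sharing de-identification step described above. It is not a complete implementation of either HIPAA de-identification method; an actual program should follow the Expert Determination or Safe Harbor standards under 45 C.F.R. § 164.514(b) and be reviewed with privacy counsel.

# Illustrative sketch only; field names are hypothetical and the identifier list is abridged.
import copy

# Hypothetical direct identifiers to drop before records are shared with a third-party AI vendor.
DIRECT_IDENTIFIERS = {
    "name", "street_address", "phone", "fax", "email", "ssn", "mrn",
    "health_plan_id", "account_number", "license_number", "vehicle_id",
    "device_serial", "url", "ip_address", "biometric_id", "photo",
}

def deidentify(record: dict) -> dict:
    """Return a copy of a patient record with direct identifiers removed,
    dates reduced to a year, and ZIP codes truncated to three digits."""
    out = {k: copy.deepcopy(v) for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "date_of_birth" in out:
        out["birth_year"] = str(out.pop("date_of_birth"))[:4]  # keep year only
    if "zip_code" in out:
        out["zip3"] = str(out.pop("zip_code"))[:3]  # certain sparsely populated ZIP3s must still be suppressed
    return out

if __name__ == "__main__":
    sample = {
        "name": "Jane Doe",
        "date_of_birth": "1980-04-12",
        "zip_code": "90210",
        "diagnosis_code": "E11.9",
        "ssn": "000-00-0000",
    }
    print(deidentify(sample))
    # {'diagnosis_code': 'E11.9', 'birth_year': '1980', 'zip3': '902'}

The sketch removes or coarsens identifiers rather than masking them in place, mirroring the Safe Harbor approach of eliminating the enumerated identifiers; it does not address re-identification risk from quasi-identifiers, which is one reason expert review remains advisable.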

United States Developments 

FEDERAL 

Awaiting Federal Action by December 31, 2024, as Expiration of Telehealth Waivers Looms  

Certain Medicare telehealth reimbursement flexibilities, initially implemented in response to the COVID-19 public health emergency and extended by Congress twice before, are set to expire on December 31, 2024. These waivers temporarily remove certain statutory limitations on Medicare reimbursement for telehealth services—such as geographic and originating site requirements and limitations on the types of practitioners eligible to provide telehealth services—to allow Medicare payment for telehealth services provided to beneficiaries located in their homes, anywhere in the United States (not just rural areas) and for telehealth services provided by occupational, physical, and speech language therapists and audiologists, among other flexibilities. While CMS has continued efforts to expand coverage for virtual services through rulemaking, including in its final CY 2025 physician fee schedule, Congressional action is needed to continue waivers of the Medicare reimbursement requirements based in statute. Two bills that would temporarily extend waivers for another two years are currently making their way through Congress: H.R. 7623 and H.R. 8261.

DEA's Third Temporary Extension of COVID-19 Telemedicine Flexibilities for Prescription of Controlled Medications 

On November 15, 2024, the Drug Enforcement Administration ("DEA"), jointly with the Department of Health and Human Services ("HHS"), extended temporary flexibilities that waive provisions of the federal Ryan Haight Act to allow providers to prescribe Schedule II-V controlled substances based solely on a telemedicine encounter, through December 31, 2025 (the "Third Temporary Rule"). In the Third Temporary Rule, DEA indicated that it was granting the extension to "allow DEA (and also HHS, for rules that must be issued jointly) to promulgate proposed and final regulations that are consistent with public health and safety, and that also effectively mitigate the risk of possible diversion" and to "allow adequate time for providers to come into compliance with any new standards or safeguards eventually adopted in a final set of regulations." This suggests that DEA intends to issue proposed permanent telemedicine regulations (and finalize them) in 2025. DEA also indicated it had attended several meetings with interested parties coordinated by the Office of Management and Budget ("OMB") following its transmittal of a prior Notice of Proposed Rule Making ("NPRM"). In promulgating any future final rule, DEA states it intends to evaluate comments received in response to prior NPRMs, the information and perspectives presented at the Telemedicine Listening Sessions, the tribal consultations, the recent OMB meetings, and any further comments collected during additional rounds of public comment.

FDA Issues Draft and Final Guidance on Multiregional and Decentralized Clinical Trials 

Earlier this fall, the U.S. Food and Drug Administration ("FDA") issued two guidance documents advancing its multifaceted approach to clinical trial policy. 

First, FDA issued draft guidance on "Considerations for Generating Clinical Evidence from Oncology Multiregional Clinical Development Programs," which provides sponsors of global clinical development programs with recommendations on designing, conducting, and analyzing data from multiregional clinical trials ("MRCTs") for oncology drugs. These recommendations aim to ensure both the applicability of the results to the intended use population in the United States and alignment with U.S. medical practice. The guidance includes the following categories of recommendations: (i) U.S. population representativeness in the MRCT; (ii) considerations for U.S. and foreign site selection; (iii) disease, available treatment, and medical product considerations; (iv) considerations for analyses of data from MRCTs; and (v) early consultation with FDA and other regulatory authorities.

FDA also issued final guidance on "Conducting Clinical Trials with Decentralized Elements," replacing preexisting guidance on decentralized clinical trials. Demonstrating a greater understanding of the increasing role remote and digital technologies play in clinical trials, the new guidance makes numerous recommendations regarding clinical protocols, data management and trial monitoring plans, and telehealth practices. Notably, the guidance provides specific standards for situations where remote technologies, direct shipment of investigational products, and other decentralized elements—which allow trial-related activities to occur remotely at locations convenient for trial participants—are appropriate.  

FDA Issues Draft Guidance Documents to Counter Misinformation and Streamline Innovation 

Over the summer, FDA provided guidance to the industry about combating misinformation online regarding medical devices and prescription drugs and about modifications to medical devices suitable for a predetermined change control plan ("PCCP"). Specifically, on July 8, 2024, FDA issued revised draft guidance titled "Addressing Misinformation About Medical Devices and Prescription Drugs" to provide updated recommendations to FDA-regulated companies on countering internet-based misinformation disseminated by independent third parties about their approved/cleared medical products. The updated recommendations address two methods that companies may use to combat misinformation: (i) tailored responsive communications, which consist of a company's voluntary, internet-based communications; and (ii) existing avenues for communication, which include sales aids, TV and radio advertisements, help-seeking, and institutional communications. The guidance provides examples of the two methods and highlights enforcement considerations. FDA does not intend to enforce promotional labeling, advertising, or post-marketing submission requirements on responsive communications that address such misinformation consistent with the guidance.  

The following month, on August 22, 2024, FDA released draft guidance concerning PCCPs, titled "Predetermined Change Control Plans for Medical Devices." PCCPs allow manufacturers to obtain advance clearance or approval of planned modifications that otherwise would have required a new marketing submission to FDA. The draft guidance describes the types of modifications that may be appropriate to include in a PCCP and explains the information that should be included in a PCCP as part of a device's marketing submission. The draft guidance also sets forth five guiding principles for manufacturers considering PCCP submission: (i) a PCCP must contain sufficient information to enable FDA to assess the reasonable assurance of safety and effectiveness of the device; (ii) a PCCP may be the least burdensome option to support device modifications, even though a PCCP is optional; (iii) a PCCP is part of a device's marketing authorization—meaning that the manufacturer is required to implement modifications consistent with the PCCP—and included in the device's letter of authorization; (iv) a PCCP should include specific modifications that the manufacturer intends to make, rather than list any or all modifications that the manufacturer may possibly make; and (v) PCCPs harmonize with existing FDA guidance on device modifications. 

FDA Announces Digital Health Advisory Committee Members and Holds Inaugural Meeting 

On August 1, 2024, FDA released the names of the members of the first FDA Digital Health Advisory Committee ("DHAC"). The newly constituted DHAC includes nine voting members from a diverse range of institutions, including hospital systems and universities as well as health care companies. FDA may, but is not required to, supplement the DHAC with temporary, non-voting members drawn from an industry representative pool. An inaugural meeting was held on November 20-21, 2024, where members discussed total product lifecycle considerations for GenAI-enabled devices and the impact of GenAI on the safety and effectiveness of devices. FDA will consider additional public comments submitted to the public docket (FDA-2024-N-3924) before January 21, 2025.

FDA's CDER Relies on AI/ML to Identify Patient Population for Drug Therapy, Leading to Emergency Use Authorization  

FDA's Center for Drug Evaluation and Research ("CDER") recently relied on the predictive capabilities of artificial intelligence/machine learning ("AI/ML") to identify a patient population that would benefit from anakinra (Kineret), a drug that treats COVID-19, in a randomized, double-blind, placebo-controlled study. The clinical efficacy and safety data from this trial were used to support the issuance of an Emergency Use Authorization for the drug. According to FDA, this was the first time CDER leveraged AI/ML to support a regulatory decision. FDA noted that AI/ML could be used to aid other phases of drug development, such as patient selection for clinical trials.

FDA Publishes New Resources on AI/ML

On June 10, 2024, FDA published six synopses of its Artificial Intelligence Program activities, discussing recent research on AI/ML-based medical devices. The new resources address FDA's research concerning the limitations of medical data in AI, measuring AI bias for enhancing health equity, developing evaluation methods for AI-enabled devices and AI-automated medical practices, and identifying post-market monitoring techniques for AI-enabled medical devices.

 

FTC Adopts Final Health Breach Notification Rule

Earlier this year, the Federal Trade Commission ("FTC") announced a rule, now in effect, intended to "strengthen and modernize" the Health Breach Notification Rule ("HBNR"). The rule explicitly extended the HBNR to cover health apps, websites, and other direct-to-consumer services holding certain health information. In addition to expanding the scope of entities and services regulated, the rule similarly expanded the scope of records and sources that qualify as a "personal health record" under the HBNR to include any record that has the "technical capacity" to draw from multiple sources. The rule also clarified that "breaches" are not limited to cyberattacks and intrusions but include a "company's intentional but unauthorized disclosure." While the rule extended the notification deadline for breaches involving 500 or more individuals to 60 calendar days, it substantially increased the scope of notification obligations for regulated entities to include: (i) the name or identity of third parties that acquired covered information as a result of a breach; (ii) descriptions of the types of covered information that were impacted by the breach; and (iii) descriptions of the regulated entity's efforts to protect individuals.

U.S. District Court Invalidates HHS Guidance Overreading HIPAA's Application to Online Technologies

On June 20, 2024, a U.S. federal district court held, in a suit brought by Jones Day, that the Department of Health and Human Services ("HHS") had misapplied HIPAA in its 2022 guidance titled "Use of Online Tracking Technologies by HIPAA Covered Entities and Business Associates." The guidance adopted a rule providing that individually identifiable health information ("IIHI") is collected where, inter alia, an online technology connects (i) an individual's IP address with (ii) a visit to an unauthenticated public webpage (i.e., a webpage that does not require user log-in) addressing specific health conditions or health care providers (the "Proscribed Combination"). In March 2024, HHS issued revised guidance modifying its position so that the Proscribed Combination would constitute IIHI only if the webpage visitor subjectively intended to visit the site for a reason related to his or her health condition, health care, or payment for care. HHS reasoned that the Proscribed Combination qualified as IIHI because it is "indicative" of the webpage visitor's health status. Such reasoning would have prohibited the use of online technologies even for long-standing beneficial purposes that reveal nothing about an identified individual's own health (e.g., location tools expediting travel time in urgent situations and analytics tools for functionality, experience, and efficiency).

A U.S. federal district court ultimately rejected HHS's position and held that the Proscribed Combination exceeded the IIHI definition. It reasoned that metadata collected by online technologies showing merely that an identifiable individual visited a health-related webpage is not IIHI because that information alone does not "relate to" the individual's own health. Given increased scrutiny on health privacy and security, HIPAA-regulated digital health entities should consider: (i) inventorying and evaluating data collected through third-party online technologies; (ii) updating notices/disclosures; and (iii) confirming business associate agreements, or BAAs, are in place. 

Trusted Exchange Framework and Common Agreement Revamped to Include New and Updated Exchange Purposes and Standard Operating Procedures 

In August, the Trusted Exchange Framework and Common Agreement ("TEFCA") was updated to include five new or updated standard operating procedures ("SOPs") and authorized exchange purposes ("XPs"). New or updated SOPs and permitted XPs now include those for treatment, payment, health care operations, public health, individual access services, and government benefits determination. These updated SOPs and XPs are intended as a step toward continued advancement of interoperability, but uncertainty remains in how XPs and SOPs are interpreted, particularly for treatment purposes. This uncertainty is playing out across the industry. For example, recent litigation has arisen between a large electronic medical record ("EMR") company and a health tech company, in which the EMR entity cut off certain treatment data requests for some of the health tech company's customers because it claimed the downstream uses were unrelated to treatment. As the health care industry evolves and treatment-related uses become more diverse and complex within the relationships of providers, payors, entities that do both, and health tech companies, disputes on this topic will likely continue.

New Federal Legislation Introduced to Stymie Health Care Ransomware Attacks 

In July, a group of senators introduced a bipartisan bill, the Healthcare Cybersecurity Act, aimed at addressing the growing risks of cyberattacks in health care. The legislation promotes coordination between the Cybersecurity and Infrastructure Security Agency ("CISA") and HHS and calls for the agencies to: (i) provide cybersecurity training to owners of specified health care assets; (ii) identify high-risk health care and public health sector assets, including technologies, services, and utilities ("Covered Assets"); (iii) allocate resources to high-risk Covered Assets; and (iv) develop health care and public health sector-specific plans to address cybersecurity risks for Covered Assets. The bill currently remains in committee, but it has the potential to expand the focus on cybersecurity in the health care industry.

New Bill Seeks to Expand FDA's Enforcement Authority to Address Deceptive Online Drug Advertising 

On September 12, 2024, U.S. Senators Dick Durbin (D-IL) and Mike Braun (R-IN) introduced bipartisan legislation that would authorize FDA to issue warning letters and fines to influencers and telehealth companies for deceptive and misleading drug promotion. Importantly, the bill would impose requirements on drug manufacturers to report payments made to social media influencers. The proposed legislation would also require the Secretary of Health and Human Services to reinforce market surveillance activities, facilitate a joint task force between FDA and FTC, and introduce a process for notifying drug manufacturers of online advertising that fails to satisfy requirements for drug advertisements under the misbranding provisions of Section 502(n) of the Federal Food, Drug, and Cosmetic Act. 

2024 Nationwide Health Care Fraud Enforcement Action Ensnares Digital Health Schemes 

In June and July, the Department of Justice ("DOJ") and the HHS Office of Inspector General ("OIG") led a two-week nationwide health care fraud enforcement action that resulted in criminal charges against 193 defendants—including telehealth providers and a telehealth company—for over $2.75 billion in alleged fraudulent claims. Over $231 million in cash, luxury items, and other assets were also seized in connection with the nationwide action.

Charges related to the allegedly fraudulent use of digital health tools included those against three licensed practitioners and an executive associated with a California-based telehealth company in connection with the allegedly unlawful distribution of Adderall pills. These charges followed the arrest of the company's chief executive officer ("CEO") and clinical president the previous month. As alleged, the company charged users a monthly subscription fee in exchange for prescriptions that lacked a legitimate medical purpose. Additionally, the company's "auto-refill" policy purportedly allowed patients to continue obtaining prescriptions after an initial encounter without any further audio or visual interaction with a medical professional. The scheme also allegedly involved false and fraudulent representations to pharmacies and efforts to obstruct pharmacies from exercising their compliance-related responsibilities, which allegedly resulted in pharmacies submitting false and fraudulent claims for reimbursement. In total, the company allegedly arranged for the prescription of over 40 million Adderall pills, receiving $100 million in revenue.

An additional 36 individuals were charged for allegedly exploiting telemedicine platforms and participating in illegal kickback schemes involving referrals for unnecessary genetic testing or durable medical equipment ("DME"), which resulted in the submission of over $1.1 billion in fraudulent claims to Medicare. For example, in the Middle District of Tennessee, four defendants were charged with submitting allegedly fraudulent Medicare claims for unnecessary genetic tests, medications, and DME. The defendants allegedly paid kickbacks and bribes to purported telemedicine companies in exchange for signed doctors' orders, which, in turn, were allegedly sold to laboratories, pharmacies, and DME companies. Two of the four defendants owned their own DME companies and purportedly purchased doctors' orders for medically unnecessary DME, which they billed to Medicare. In the Western District of Michigan, a doctor admitted in a plea agreement to working with a purported telehealth company and approving medical brace and genetic testing orders that she had failed to review. The doctor admitted to approving most orders in less than 60 seconds, and others in the minimum time needed to click through and apply an electronic signature. The telehealth company's owner was previously sentenced to 14 years in federal prison in 2022 for his role in the scheme, including "creat[ing] and maintain[ing] an internet-based exchange that produced fraudulent medical records." 

Recent False Claims Act Settlements Involving Digital Health 

On June 13, 2024, DOJ announced that a behavioral health company operating in Connecticut and other states agreed to pay almost $4.6 million to settle allegations that it submitted fraudulent Medicare claims for telehealth services provided to nursing home residents. As alleged, the telehealth company submitted improper claims for "telehealth originating site facility fees" using Healthcare Common Procedure Coding System ("HCPCS") code Q3014. According to the government, under relevant billing rules and guidance, only the originating site—in this case, the nursing homes—should bill under HCPCS code Q3014 for providing administrative and clinical support to the patient receiving psychological services through telehealth. Allegedly, the behavioral health company, rather than the nursing homes, submitted claims under HCPCS code Q3014. In addition, the company purportedly submitted claims for psychological services provided to patients in nursing homes when those patients no longer resided in nursing homes but instead had been transferred to various hospitals and admitted as inpatients. 

A Florida-based owner and operator of multiple clinical laboratories agreed to pay over $27 million to resolve allegations that he and his companies conspired to submit false claims for cancer genomic tests that were not medically necessary and were procured through illegal kickbacks. As alleged, the scheme involved telemarketing agents soliciting Medicare beneficiaries for "free" cancer genomic tests, telemedicine providers prescribing tests that were not medically necessary, and laboratories and hospitals running and billing Medicare for the tests. The defendant had previously pleaded guilty in connection with the same conduct. The civil settlement agreement requires the payment not only of cash but also a transfer to the United States of a significant number of personal and corporate assets including real estate, lab equipment, vehicles, and personal effects, as well as future contingent payments, for a potential value of up to $305 million. Additionally, the settlement includes a forfeiture provision in connection with the defendant's plea agreement in a related criminal proceeding, which requires the forfeiture of his boat or the proceeds from its sale totaling nearly $4.5 million. 

STATE 

State Attorneys General Settle with Biotechnology Company Over Alleged HIPAA and State Law Violations 

A New York-based biotechnology company and its subsidiary agreed to pay three states $4.5 million in civil monetary penalties to settle alleged violations of HIPAA's Security and Breach Notification Rules and state law, under the Health Information Technology for Economic and Clinical Health ("HITECH") Act provisions that authorize states to bring civil actions on behalf of their residents for violations of the HIPAA Privacy and Security Rules. As alleged by the attorneys general of New York, New Jersey, and Connecticut, the data of 2.4 million patients was compromised after hackers exploited the login credentials of two company employees and breached the company's database in April 2023. These credentials had been shared among five employees, and one set of credentials had not been changed for 10 years. After gaining entry, the hackers installed malware connected to remote servers, but the company had no process in place to monitor or provide notice of this suspicious activity, which went undetected for days. Ultimately, the attorneys general determined that these alleged cybersecurity shortcomings violated HIPAA's Security and Breach Notification Rules and, in addition to imposing a financial penalty, required the company to strengthen its cybersecurity practices.

State Attorneys General Urge Change Healthcare to Respond to Fallout from Cyber Attack and Allude to the Potential of Future Enforcement Actions 

State attorneys general continue to monitor Change Healthcare's response to a February 2024 cyberattack, which disrupted billing throughout the health care industry and potentially exposed the sensitive data of millions of patients. Previously, Vital Signs reported that HHS had opened an investigation into the incident and that the Arkansas Attorney General had brought an enforcement action. Now, additional attorneys general have turned their attention to the breach. In April 2024, 22 attorneys general co-authored a letter to Change Healthcare's corporate parent, urging the parent to take more steps to protect providers and patients harmed by the breach of its subsidiary. The attorneys general indicated that the letter did not constitute a "settlement offer, waiver, or suspension of any ongoing or contemplated investigations or other legal action that the undersigned Attorneys General may take against your companies." In response, in July 2024, Change Healthcare announced that it was offering two years of credit monitoring and identity theft protection to patients potentially affected by the breach. State attorneys general, such as Rob Bonta of California, have urged consumers to take advantage of this offering while further steps are contemplated.

Alabama Enacts Genetic Privacy Legislation 

On May 16, 2024, Alabama enacted a genetic privacy bill ("HB 21") designed to regulate consumer-facing genetic testing companies. The law defines "genetic testing companies" as those that "directly solicit[] a biological sample from a consumer for analysis in order to provide products or services to the consumer which include disclosure of information" such as disease predispositions or potential genealogical links to certain population groups or individuals. Companies that meet this definition must provide notice to consumers regarding the company's data collection and use practices, allow consumers to access and delete their genetic data, and provide consumers the ability to revoke previous consent. The law, which took effect on October 1, 2024, does not contain a private right of action and instead permits enforcement only through the Alabama Attorney General's Office.

Florida's Digital Bill of Rights Takes Effect 

Florida's Digital Bill of Rights (Senate Bill 262) became effective on July 1, 2024, adding Florida to the growing list of states that have enacted digital privacy laws—a trend previously highlighted by Vital Signs. The new law requires certain businesses, defined as "controllers," to, among other things: limit collection of personal data; establish reasonable data security practices; provide consumers with a clear and accessible privacy notice; obtain consent before processing sensitive data; and establish a retention schedule that prevents the unnecessary retention of data. Importantly, the law does not apply to "covered entities," as defined by HIPAA. The law also provides consumers with certain rights, such as the ability to confirm whether their data is being processed, access their data, correct their data, opt out of certain collection practices, and request data deletion. Unlike some other data privacy laws, Florida's law does not create a private right of action and instead permits enforcement only through the Florida Attorney General's Office.

New D.C. Law Defining Telehealth Leaves Possibility for In-Person Requirements 

Mayor Muriel Bowser signed into law Council Bill 250545, the "Health Occupations Revision General Amendment Act of 2024," effective July 19, 2024, adding a new section on Telehealth to the Health Occupations Boards Code. Telehealth is defined broadly as the "use of synchronous or asynchronous telecommunication technology"; however, the new law provides that the D.C. mayor "may through rulemaking issue additional requirements for specific health professionals, including an initial in-person physical examination." 

Pennsylvania Enacts Law Defining Telemedicine Broadly 

New legislation effective in July, 40 Pa. Cons. Stat. § 4802, adds a "Telemedicine Chapter" to Pennsylvania's Insurance Law and incorporates a broad definition of telemedicine as the "delivery of health care services to a patient by a health care provider who is at a different location, through synchronous interactions, asynchronous interactions or remote patient monitoring." 

Rhode Island Becomes the 20th State to Adopt Comprehensive Privacy Law 

As reported in previous issues of Vital Signs, states are regulating digital health privacy at an increasing rate. This trend continues—on June 28, Rhode Island joined a growing group of states by adopting its own comprehensive data privacy law, the Rhode Island Data Transparency and Privacy Protection Act ("Act"). The Act potentially applies to digital health providers who operate primarily in Rhode Island and provides an exemption only for information and data regulated by HIPAA, declining to follow some other states' laws that exempt any HIPAA-regulated entity. In addition, the Act treats health-related information as "sensitive" information, such that regulated businesses must obtain affirmative consent before collecting or processing the data. The Act will be effective on January 1, 2026.

Texas Attorney General Secures First-of-its-Kind Settlement Against GenAI Company for Alleged Product Misrepresentations

The Texas Attorney General secured a first-of-its-kind settlement with an artificial intelligence health care technology company regarding alleged misrepresentations of the accuracy of its product. The company had partnered with several major Texas hospitals, receiving health care data to "summarize" patient conditions and treatment for hospital staff. The company represented that its product was "highly accurate" and advertised a low "severe hallucination rate" of only "<1 per 100,000." The Attorney General found these metrics were likely inaccurate and misleading or deceptive. As part of the settlement, the technology company must accurately represent its products' reliability, testing and monitoring procedures, and training data, and must also clearly and conspicuously disclose to any current and future customers known or reasonably knowable harms or misuses of its products.  

California Governor Signs New AI Legislation Impacting Health Care Sector 

Following the close of the 2024 California legislative session, California Governor Gavin Newsom signed a number of AI-related bills into law that directly and indirectly impact the digital health industry. Key legislation includes: 

  • SB 1120, which requires health care service plans—including specialized health care service plans—to ensure proper use of their AI, algorithms, or other software tools when employed for utilization review or utilization management based on medical necessity.
  • AB 3030, which requires health facilities, clinics, physicians' offices, or group practices that use GenAI to generate written or verbal patient communications about patient clinical information, to disclose the use of AI in generating such communications and provide instructions on how to contact appropriate personnel.
  • AB 2013, which requires developers of GenAI systems or services to disclose on their websites certain details about training data used in their products.
  • SB 942, which requires certain developers of GenAI to implement AI detection tools and disclosures for GenAI generated audio, video, or image content. 

Governor Newsom notably vetoed SB 1047, which would have imposed various requirements on GenAI developers, including cybersecurity protections, written safety protocols, designation of personnel responsible for implementing those protocols, implementation of total shutdown capabilities, and testing, assessment, reporting, and audit obligations. 

New York Adopts New Cybersecurity Requirements for Hospitals 

New York State Department of Health ("NYSDOH") finalized and adopted new cybersecurity regulations applicable to all hospitals operating within the state, imposing new requirements to strengthen hospitals' data privacy and cybersecurity protocols. Effective immediately, hospitals in the state are required to report material cybersecurity incidents to NYSDOH as promptly as possible, but no later than 72 hours after determining an incident has occurred. Additionally, the regulations are broader than HIPAA, applying to non-public information, which includes not only information that can be used to identify a person but also confidential and proprietary business information. By October 2025, hospitals must also meet other compliance requirements, such as creating a cybersecurity program that identifies and assesses cybersecurity risks (among other things), adopting a written incident response plan, appointing a chief information security officer, employing qualified cybersecurity personnel or engaging qualified third-party cybersecurity personnel to manage the cybersecurity program, using information system user authentication, conducting an annual risk assessment of cybersecurity systems, and maintaining records for at least six years to permit audits to identify significant cybersecurity threats.

Global Developments 

Europe 

The European Union's New AI Act Takes Effect 

Earlier this year, the European Parliament and the Council of the EU adopted the AI Act, which was published in the Official Journal of the European Union on July 12, 2024. The AI Act entered into force on August 1, 2024, with provisions taking effect over the course of the next three years. The AI Act is the world's first comprehensive regulation aimed at governing artificial intelligence. It applies to public and private entities, both inside and outside the EU, that place an AI system on the EU market or use AI systems in a way that impacts people located in the EU. The AI Act is likely to have a significant impact on companies providing digital health products and services, as it covers AI systems used in the health care and life sciences sector.

The European Commission Launches a Public Consultation for Draft Rules on the Health Technology Regulation 

On October 1, 2024, the European Commission opened a public consultation on the Commission Implementing Regulation for the application of the Health Technology Regulation (Regulation (EU) 2021/2282), specifically with respect to the procedures for joint scientific consultations on medicinal products for human use at the Union level. The Health Technology Regulation sets forth a framework and procedures for cooperation between Member States on health technologies at the Union level. The draft Implementing Regulation adds detailed procedural rules for such joint scientific consultations, including rules concerning the submission of requests from health technology developers for joint scientific consultation; the selection and consultation of stakeholder organizations, patients, clinical experts, and other relevant experts participating in joint scientific consultations on medicinal products; and cooperation through the exchange of information. The public consultation period ended on October 29, 2024.

European Data Protection Board's Work Program Highlights Planned Guidelines Applicable to Digital Health 

On October 9, 2024, the European Data Protection Board ("EDPB") published its work program for 2024-2025. According to the work program, the EDPB plans to develop further guidance on key issues and concepts related to EU data protection law. This includes guidelines on topics concerning digital health such as telemetry and diagnostic data, anonymization, pseudonymization, and the processing of data for scientific research purposes.  

Data Protection Authorities in Europe Fine Health Care Companies 

Data protection authorities in Europe have recently taken action against health care companies that failed to protect personal data. On August 30, 2024, the Swedish data protection authority fined a pharmaceutical retailer €3.2 million for the unlawful disclosure of website users' personal data to a multinational social media company. A few weeks earlier, the Polish data protection authority fined a health care provider €331,000 for lacking technical measures to protect personal data against ransomware attacks.

BELGIUM 

Belgium Designates Health Care Authorities to Ensure a Common High Level of Cybersecurity in Belgium 

On June 9, 2024, Belgium adopted a Royal Decree implementing Directive (EU) 2022/2555 of December 14, 2022, which contains measures to ensure a "high common level of cybersecurity" across the EU. The Royal Decree designates the Centre for Cybersecurity Belgium ("CCB") as the national cybersecurity authority and designates other sectoral authorities—including those related to health care—to support the CCB in its responsibilities. The Royal Decree aims to ensure a common high level of cybersecurity in Belgium by promoting the protection of network and information systems used to process data in the digital health sphere.

FRANCE 

French Data Protection Authority Fines a Software Provider for Non-Compliant Processing of Patients' Health Data 

On September 5, 2024, the French data protection authority, the Commission Nationale de l'Informatique et des Libertés ("CNIL"), fined a provider of management software for medical practices and health centers €800,000 for the unauthorized processing of sensitive health data. The CNIL's investigation found that the company collected pseudonymous data, rather than anonymous data, for use in health research and statistical analysis. Specifically, the company gathered large amounts of information on individuals, linking it to a unique identifier for each patient of the same doctor. This allowed data transmitted successively by the same doctor to be combined, effectively isolating individuals within the company's database and making it possible to identify patients. Furthermore, the CNIL found that the company failed to comply with legal requirements for data processing by allowing doctors, through its HRi teleservice, to automatically collect patient reimbursement data instead of limiting access to simply viewing the data. After its investigation, the CNIL concluded that the software company violated Article 66 of the French Data Protection Act by processing non-anonymous health data without the necessary CNIL authorization or submitting a compliance declaration for handling such data.

French Data Protection Authority Releases Guidance on Processing of Athletes' Health Data  

On June 18, 2024, the CNIL, in collaboration with the Ministry of Sports and the Sport Data Hub, issued guidance clarifying the legal framework for collecting health data from athletes. The guidance first defines health data, which is protected as a special category of personal data under the EU's General Data Protection Regulation ("GDPR"), and outlines three main types: (i) health data by nature (e.g., medical exams for professional athletes); (ii) health data derived from combining other data (e.g., weight and height measurements); and (iii) health data by use (e.g., images of disabled athletes used in promotional campaigns). The CNIL emphasizes that data collection within the sports ecosystem—whether for medical monitoring, anti-doping efforts, or performance optimization—must follow the principle of minimization, meaning that only the data strictly necessary should be collected. For example, the continuous monitoring of heart rate via a smartwatch outside of training or competition would be considered excessive for optimizing an athlete's physical performance.

The guidance also stresses that the collection of health data must be supported by a valid legal basis applicable to special categories of data. This includes explicit consent from athletes, compliance with legal obligations (such as mandatory medical exams for professional athletes under French sports law), processing data for public interest reasons like doping control or medical research, or processing data that athletes have voluntarily made public. Additionally, organizations in the sports sector are encouraged to conduct Data Protection Impact Assessments ("DPIAs") and ensure that health data is securely stored and processed in compliance with GDPR and French data protection and health laws. 

NETHERLANDS

Data Breach Fallout: Dutch Court Rejects €3 Billion Claim Over Stolen Personal Information 

In January 2021, news broke of serious security flaws in the IT systems used by Dutch regional health authorities ("GGDs") in connection with the COVID-19 pandemic. Those flaws left personal data exposed to theft for long periods of time. As a result, the data of 1,250 people was reportedly stolen by criminals, and criminal charges were brought in connection with that theft.

On March 28, 2023, ICAM Foundation filed a mass claim against various GGDs, among others, on behalf of two categories of people: (i) those whose personal data had reportedly been stolen; and (ii) those who feared their personal data had been stolen. The latter group—estimated to be around 6.5 million people—encompassed anyone who provided data to a GGD. ICAM demanded compensation of approximately €3 billion on their behalf. 

On July 7, 2024, the court ruled that the 6.5 million people who only feared that their data had been stolen were not entitled to damages. According to the court, fear of the misuse of personal data is no ground for compensation. In the cases where the court did establish that there was a data breach, the GGD had already offered the victims financial compensation, which most injured parties accepted. As a result, the court found that the injured parties who had accepted compensation waived their right to further compensation. For the subset of victims who actually suffered a data breach and did not accept compensation, the court ruled that ICAM had not made it sufficiently plausible that an adequate number of individuals from this group supported the action it was taking, so the claim was also declared inadmissible for them. ICAM appealed the court's ruling.

Dutch Court Allows Class Action to Proceed Against Dutch Health Care Authority Over Sensitive Data Collection 

Last year, a group consisting of mental health clients, health care professionals, and various interest groups filed a lawsuit against the Dutch Healthcare Authority ("NZa") regarding a new obligation imposed by the NZa requiring health care providers to complete and submit highly sensitive questionnaires on the mental and social problems of their clients. According to the claimants, this obligation would undermine trust between health care providers and clients and violate medical confidentiality. To date, the NZa has not submitted a substantive defense in the case. However, the NZa argues against the admissibility of the interest groups, maintaining that they are not competent to bring a class action on behalf of clients and health care providers. On July 17, 2024, the court rejected the NZa's argument and held that the interest organizations are admissible in their collective claims under the "light admissibility regime." The oral hearing will take place between November 2024 and March 2025.

SPAIN 

Spanish Authorities Determine Criteria for Access Claims Regarding Traceability of Patient Medical History 

On July 30, 2024, the Catalan Data Protection Authority and the Commission for the Guarantee of the Right of Access to Public Information approved common interpretation criteria regarding the application of regulatory rules and the processing of patients' access claims concerning the traceability of their medical records. The approved criteria will help the two authorities coordinate to ensure consistent application of data protection rules and of the rules governing access to medical records.

ASIA PACIFIC

AUSTRALIA  

First Tranche of Australia's Much Anticipated Privacy Law Reforms Revealed 

The first wave of Australia's expansive privacy law reforms has been introduced into Federal Parliament in the Privacy and Other Legislation Amendment Bill 2024 (the "Bill"). If passed, the Bill will introduce a significant new statutory cause of action—the tort of serious invasion of privacy—and a range of new enforcement tools for the Office of the Australian Information Commissioner ("OAIC"), which is already shifting to an approach of increased enforcement. Together, these changes are expected to drive a further uptick in regulatory enforcement and in claims brought on behalf of individuals impacted by privacy and data incidents, particularly by class action plaintiffs, in an already active market. The Bill is expected to pass in the coming months, and the government is also expected to introduce a further bill to implement the remaining "agreed" proposals from the Privacy Act Review Report in due course. This reform not only reflects a response to growing privacy concerns in an increasingly digital landscape, but also underscores the need for proactive compliance strategies to mitigate legal risks associated with data management. Digital health companies should consider preparing for significantly heightened privacy obligations and an increased risk of regulatory enforcement and litigation, particularly class actions relying on the new tort of serious invasion of privacy.

LAWYER SPOTLIGHTS 

Ryan Blaney, Lily Zhang, Jennifer Bennett, James Poth

Ryan Blaney (Washington, Cybersecurity, Privacy & Data Protection) has two decades of experience representing health care and life sciences entities, investors, and technology firms in compliance, regulatory investigations, enforcement actions, litigation, and corporate transactions. He advises clients on data privacy, cybersecurity, health care, and artificial intelligence issues in high-stakes strategic partnerships and collaborations, mergers and acquisitions, joint ventures, and private funds' investments. Ryan has experience serving as lead counsel in defending against Department of Health and Human Services, Department of Justice, Federal Trade Commission, Federal Communications Commission, and State Attorneys General investigations. Ryan also quarterbacks responses to large-scale ransomware attacks and breaches, including those by nation-states. He has extensive experience in emerging AI and digital health, advising clients at all stages on compliant use of such innovative technologies.  

Lily Zhang (San Diego, Intellectual Property) is a multidisciplinary patent strategist dedicated to maximizing the value of IP assets on the cutting edge of technology. She is a trusted advisor to multinational conglomerates, investment firms, startups, and academic institutions. Lily is an engineer by training with deep technical knowledge and comprehensive experience in AI and machine learning, computational drug and target discovery, digital pathology, high throughput screening, and medical devices. Her practice is a unique blend of high-tech and the life sciences. 

Jennifer Bennett (San Francisco, Intellectual Property) advises global technology companies in high-stakes, fast-paced complex intellectual property litigation including in the health and technology industries. She advises clients in a range of areas, including biopharmaceuticals, medical devices, plants, machine learning, electronics, and software applications. Jennifer has a significant track record in jury trials serving as managing and first chair trial lawyer in infringement cases in U.S. District Courts in California, Delaware, and Texas, as well as in cases before the United States International Trade Commission, or USITC.

James (Jim) Poth (Irvine, Health Care & Life Sciences, Business & Tort Litigation) has more than 20 years of experience advising hospitals, health systems, and other health care providers in managed care disputes. His practice focuses on representing health care providers in litigation, arbitration, and negotiation, serving as lead trial lawyer in dozens of cases representing providers seeking to collect from commercial and government-sponsored payors. Jim also has extensive experience in assisting health care providers in contract negotiations and providing advice on utilizing state and federal regulatory requirements to improve contract terms and providers' revenue cycle.  

Insights by Jones Day should not be construed as legal advice on any specific facts or circumstances. The contents are intended for general information purposes only and may not be quoted or referred to in any other publication or proceeding without the prior written consent of the Firm, to be given or withheld at our discretion. To request permission to reprint or reuse any of our Insights, please use our “Contact Us” form, which can be found on our website at www.jonesday.com. This Insight is not intended to create, and neither publication nor receipt of it constitutes, an attorney-client relationship. The views set forth herein are the personal views of the authors and do not necessarily reflect those of the Firm.