AI in Health Care Is Scaling Quickly—As Regulation Lags, Internal Controls Need to Keep Pace

May 8, 2026 | by Scott Keaty

Recent developments suggest that major health care and life sciences organizations are expanding their use of AI beyond pilot initiatives and targeted applications, even as federal and state policymakers struggle to develop clear and consistent rules. As a result, health care providers must address growing AI-related risk through existing compliance, contracting, and operational controls.

Novo Nordisk, for example, recently announced a partnership with OpenAI that Novo described as integrating OpenAI’s most advanced AI capabilities globally, from drug discovery to commercial operations, with the partnership structured to include strict data governance and human oversight.1 The announcement underscores that AI adoption at scale continues to require familiar compliance and oversight safeguards.

Despite AI’s rapid integration into health care, federal policy concerning AI remains largely unsettled. On March 20, 2026, the White House released its National Policy Framework for Artificial Intelligence and accompanying legislative recommendations. The framework urges Congress to create a federal AI framework and to preempt state AI laws that impose “undue burdens.”2 Unless and until Congress acts, however, existing state AI laws remain in effect. For now, then, health care entities are left with the current patchwork of state AI requirements as well as applicable federal guidance tied to particular use cases.3

The practical takeaway for health care providers, health tech companies, and life sciences organizations is that they should not wait for comprehensive AI-specific regulation before strengthening internal AI controls. In our view, organizations should assume that AI-related risk may already be present in technology contracts and operational workflows, even where the contracts or workflows do not expressly reference AI. Existing agreements also may not clearly address vendor use of customer data, whether vendors can introduce new or expanded AI-enabled functionality through updates, or responsibility for AI-generated outputs. Likewise, these agreements may not clearly align with HIPAA-related obligations where protected health information is involved.

What Health Care Providers Should Do Now:

While health care providers should work toward developing a formal AI policy, they should first consider conducting a targeted review of current vendor contracts, governance processes, and operational workflows through an AI lens. That review should assess:

  - data-use rights;
  - whether vendors can introduce new or expanded AI-enabled functionality through updates;
  - allocation of responsibility for AI-assisted outputs;
  - audit and termination rights if AI-related use exceeds permissible parameters; and
  - consistency with HIPAA-related obligations where protected health information is involved.

Providers should also identify where AI is already being used internally, what data supports these tools, how outputs are validated before operational or clinical reliance, and whether personnel understand the limits of AI-assisted tools in documentation, reimbursement, and patient-facing settings. Taking those steps now could significantly reduce the likelihood that unresolved AI issues will emerge later in negotiations, implementations, audits, or enforcement activity.

Please let us know if we may be of assistance.

Endnotes

  1. Novo Nordisk, Novo Nordisk and OpenAI partner to transform how medicines are discovered and delivered (Apr. 14, 2026) (announcing partnership).
  2. The White House, President Donald J. Trump Unveils National AI Legislative Framework (Mar. 20, 2026); The White House, A National Policy Framework for Artificial Intelligence – Legislative Recommendations (Mar. 2026) (calling on Congress to establish a federal AI policy framework and to preempt state AI laws that impose “undue burdens”).
  3. See, e.g., FDA, FDA Issues Comprehensive Draft Guidance for Developers of Artificial Intelligence-Enabled Medical Devices (Jan. 6, 2025) (describing lifecycle recommendations for AI-enabled devices, including transparency, bias, documentation, and post-market monitoring).