The rapid integration of machine learning models into the core functions of health insurance has triggered a sophisticated and highly coordinated response from state regulators aiming to ensure consumer protection and systemic stability. While the previous decade was defined by a sense of unbridled technological experimentation, the current regulatory environment is marked by the National Association of Insurance Commissioners (NAIC) intensifying its oversight of how insurers deploy these advanced systems. Over the past several months, the NAIC has published a formal Issue Brief staking out its position on federal AI legislation, launched a multistate AI Evaluation Tool pilot aimed at examining insurers’ AI governance programs, and continued to expand the adoption of its AI Model Bulletin across state lines. These developments signify that the infrastructure for oversight is no longer theoretical but is being actively implemented across the United States. Payors must recognize that these rules establish clear expectations for documentation, testing, and accountability.
1. The Shifting Landscape of Insurance Oversight
The evolution of regulatory expectations has moved from high-level ethical guidelines to concrete enforcement mechanisms that require insurers to prove their systems are fair and transparent. When the NAIC first adopted its AI Principles in 2020, the focus was primarily on establishing a shared vocabulary for fairness, accountability, and transparency. However, the Model Bulletin introduced in late 2023 changed the trajectory by clarifying that existing insurance laws apply to AI systems just as strictly as they apply to human decision-making processes. As of 2026, approximately 24 states have formally adopted this bulletin, creating a substantial regulatory bloc that demands rigorous governance, thorough documentation, and strict third-party oversight. This shift means that health insurance providers can no longer treat algorithmic development as an isolated technical exercise. Instead, every model used in the business must be integrated into a comprehensive risk management framework that is accessible to regulators.
This new era of oversight targets the entire lifecycle of an AI model, from initial data collection and training to final deployment and ongoing monitoring. Regulators are particularly interested in how these tools influence essential insurance functions such as underwriting, pricing, and utilization management. The current consensus among state supervisors is that technological complexity does not excuse a lack of clarity regarding how a decision was reached or whether that decision was biased against specific protected classes. Consequently, payors are being pushed to implement robust internal controls that can detect and mitigate algorithmic drift or discriminatory outcomes before they impact the policyholder. The burden of proof has shifted; insurers must now be prepared to demonstrate that their automated systems are not only efficient but also compliant with a long-standing body of consumer protection law that predates the digital age. This requires a cultural shift within organizations to prioritize compliance alongside innovation.
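To make that monitoring obligation concrete, the sketch below shows one way an internal control might flag disparate outcomes and input drift. Everything here is an illustrative assumption: the four-fifths threshold, the PSI alarm level, and the field names are common industry rules of thumb, not metrics prescribed by the NAIC.

```python
"""Illustrative sketch: flagging disparate outcomes and input drift.

All thresholds and field names are hypothetical assumptions, not
requirements drawn from the NAIC Model Bulletin.
"""
from collections import Counter
import math


def disparate_impact_ratio(decisions, groups, favorable="approve"):
    """Ratio of the lowest group approval rate to the highest.

    A ratio well below 1.0 suggests one group receives favorable
    outcomes far less often and warrants human review.
    """
    rates = {}
    for group in set(groups):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        rates[group] = sum(d == favorable for d in outcomes) / len(outcomes)
    return min(rates.values()) / max(rates.values()), rates


def population_stability_index(expected, actual):
    """PSI over categorical input values; > 0.25 is a common drift alarm."""
    exp_counts, act_counts = Counter(expected), Counter(actual)
    psi = 0.0
    for value in set(exp_counts) | set(act_counts):
        e = max(exp_counts[value] / len(expected), 1e-6)
        a = max(act_counts[value] / len(actual), 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi


# Hypothetical example data: decisions paired with a protected-class marker.
decisions = ["approve", "approve", "approve", "deny", "deny", "approve"]
groups = ["A", "A", "A", "B", "B", "B"]
ratio, by_group = disparate_impact_ratio(decisions, groups)
if ratio < 0.8:  # the four-fifths rule of thumb, used here as an assumption
    print(f"Flag for review: impact ratio {ratio:.2f}, rates {by_group}")

# Drift check on a hypothetical age-band feature between two time windows.
psi = population_stability_index(expected=["<40", "40-65", "<40", ">65"],
                                 actual=[">65", ">65", "40-65", ">65"])
print(f"Age-band PSI: {psi:.2f}")
```

The value of even a minimal check like this is that it produces a dated, reviewable artifact, which is precisely the kind of documentation regulators now expect insurers to retain.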
2. State Sovereignty Versus Federal Intervention
In March 2026, the NAIC released a pivotal Issue Brief that formally articulated its stance on the jurisdictional battle over artificial intelligence regulation. The organization strongly advocates for a state-based oversight model, arguing that the McCarran-Ferguson framework remains the most effective way to protect consumers while allowing for regional market nuances. This position is a direct response to various federal proposals that seek to establish a centralized regulatory body or preempt state laws to prevent what some critics call a “patchwork” of conflicting requirements. The NAIC argues that state regulators are closer to the market and better equipped to handle the rapid pace of change in the insurance industry. By opposing federal preemption, the NAIC is signaling to health payors that they should focus their compliance efforts on state-level requirements rather than waiting for a single federal standard that may never arrive or could be less comprehensive.
The tension between state and federal authorities reflects a broader debate about the speed of regulatory activity versus the need for industry-wide consistency. The NAIC’s position remains firm that AI is simply another tool used in traditional insurance activities like claims processing and fraud detection. Therefore, it does not require an entirely new set of federal laws but rather a more focused application of existing state authority. The brief highlights that state regulators have already built a meaningful supervisory infrastructure, including AI-specific principles and financial risk assessment tools. Any federal override would likely displace this progress and potentially leave consumers with fewer protections during the transition period. For health insurance payors, this means that compliance strategies must remain flexible enough to accommodate variations across different state jurisdictions. Staying aligned with the NAIC Model Bulletin is currently the most reliable path for navigating this complex and often fractured legal landscape.
3. The Multistate AI Evaluation Tool Pilot
The most immediate practical development for the industry is the launch of the NAIC AI Evaluation Tool pilot, which is scheduled to run from January 2026 through September 2026. This initiative involves twelve diverse states, including California, Florida, and Pennsylvania, and aims to create a structured framework for reviewing insurer AI systems during market conduct examinations. The tool is designed to provide regulators with a consistent method for evaluating how companies manage their AI governance programs and how they apply standard risk practices in real-world scenarios. It serves as a bridge between high-level policy goals and the day-to-day operations of an insurance company. By participating in this pilot, states are signaling their intent to move beyond passive observation and into active, data-driven scrutiny of the algorithms that govern member benefits and claims. This represents a significant step forward in the professionalization of AI oversight.
While the evaluation tool has faced some pushback from industry groups concerned about the creation of “de facto” regulations, its implementation continues to move forward as a primary mechanism for state exams. The tool encourages transparency by helping insurers explain their complex models in a language that regulators can understand. It also informs long-term recommendations for financial risk assessment, ensuring that the use of AI does not compromise the solvency of the insurer. Payors operating in the participating states must be particularly diligent, as the findings from these early examinations will likely set the standard for the rest of the country. Even for companies not directly involved in the pilot, the framework provides a valuable roadmap for self-assessment. Proactively adopting the standards outlined in the tool can help an organization identify potential weaknesses in its governance structure before they are discovered during a formal audit. This proactive stance is becoming the industry standard.
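For teams building that self-assessment, a simple scored checklist keyed to governance themes is one possible starting point. The domains and questions below are assumptions modeled loosely on the Model Bulletin's emphasis areas; they are not the contents of the actual AI Evaluation Tool.

```python
"""Illustrative self-assessment scorecard.

The domains and questions are assumptions for illustration only;
the real AI Evaluation Tool is defined by participating regulators.
"""
from dataclasses import dataclass, field


@dataclass
class Domain:
    name: str
    questions: list[str]
    answers: dict[str, bool] = field(default_factory=dict)

    def score(self) -> float:
        """Fraction of questions answered 'yes'; unanswered counts as no."""
        if not self.questions:
            return 1.0
        answered = sum(self.answers.get(q, False) for q in self.questions)
        return answered / len(self.questions)


ASSESSMENT = [
    Domain("Governance", [
        "Is board-level accountability for AI systems documented?",
        "Is there a written AI use policy covering all business lines?",
    ]),
    Domain("Documentation", [
        "Does a centralized inventory list every model in production?",
        "Are training data sources and known limitations recorded?",
    ]),
    Domain("Testing", [
        "Are outcome-fairness tests run on a defined schedule?",
        "Are drift alerts routed to a named owner for remediation?",
    ]),
]

for domain in ASSESSMENT:
    # In practice, answers would come from evidence review, not self-report.
    gaps = [q for q in domain.questions if not domain.answers.get(q, False)]
    print(f"{domain.name}: score {domain.score():.0%}, open items: {len(gaps)}")
```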
4. Strategic Imperatives For Modern Health Payors
The regulatory environment of 2026 demands that health insurance providers move beyond theoretical discussions of ethics and into the practical application of AI governance frameworks. Organizations that succeed in this transition are those that designate clear accountability at the board and senior management levels, ensuring that AI oversight is treated as a core business function rather than a secondary IT concern. These leaders establish rigorous policies for documenting AI use across all lines of business, particularly in sensitive areas like utilization management and claims processing. By creating a centralized AI inventory, companies can identify every algorithm in use, whether developed internally or provided by a third-party vendor. This comprehensive cataloging is the first step in ensuring that all decision-making tools are subject to the same level of scrutiny and meet the high standards for fairness and transparency set by the NAIC Model Bulletin.
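A centralized inventory need not be elaborate to be useful. The following is a minimal sketch, assuming hypothetical field names and a 180-day review cycle; the Model Bulletin requires documented governance but does not prescribe a schema.

```python
"""Illustrative sketch of a centralized AI inventory record.

Field names, example entries, and the review cadence are hypothetical
assumptions, not a schema prescribed by the NAIC Model Bulletin.
"""
from dataclasses import dataclass
from datetime import date


@dataclass(frozen=True)
class ModelRecord:
    model_id: str
    business_function: str      # e.g. underwriting, utilization management
    owner: str                  # accountable executive or team
    vendor: str | None          # None when developed in-house
    last_fairness_review: date
    documentation_uri: str      # where examiners can find the model file


INVENTORY = [
    ModelRecord("um-triage-v3", "utilization management", "Clinical Ops",
                "ExampleVendorCo", date(2026, 1, 15),
                "https://intranet.example/um-triage"),
    ModelRecord("fraud-score-v1", "fraud detection", "SIU",
                None, date(2025, 8, 2),
                "https://intranet.example/fraud-score"),
]

# Surface records whose fairness review has gone stale (assumed 180-day cycle).
STALE_AFTER_DAYS = 180
today = date(2026, 3, 1)
for record in INVENTORY:
    if (today - record.last_fairness_review).days > STALE_AFTER_DAYS:
        print(f"Review overdue: {record.model_id} owned by {record.owner}")
```

Keeping vendor-supplied and in-house models in the same registry reflects the regulatory reality that both are scrutinized identically during an exam.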
As the pilot programs conclude and state-specific rules continue to spread, payors must strengthen their vendor diligence processes and contractual protections. Successful firms are updating their agreements to include audit rights and mandatory cooperation clauses for regulatory inquiries, acknowledging that they remain legally responsible for the outcomes of third-party tools. Ongoing risk assessments should become a recurring part of the operational lifecycle, allowing companies to verify that their AI systems comply with anti-discrimination laws and consumer protection mandates. By using the NAIC Evaluation Tool as a guide for internal audits, forward-thinking organizations can withstand intense regulatory scrutiny during market conduct exams. Ultimately, early investment in governance is more cost-effective than reactive compliance, and the focus is shifting toward building mature, scalable programs that can adapt to the evolving expectations of state regulators across the nation.
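As one illustration of contractual diligence, the fragment below tracks whether vendor agreements contain a set of assumed baseline clauses. The clause names are hypothetical placeholders, not standard contract language.

```python
"""Illustrative tracker for vendor AI contract protections.

Clause names and required set are assumptions for illustration only;
actual contract terms vary by payor and jurisdiction.
"""
from dataclasses import dataclass


REQUIRED_CLAUSES = {
    "audit_rights",            # payor may examine the vendor's model
    "regulatory_cooperation",  # vendor must support market conduct exams
    "bias_testing_reports",    # vendor delivers periodic fairness results
}


@dataclass
class VendorContract:
    vendor: str
    clauses: set[str]

    def missing_clauses(self) -> set[str]:
        """Clauses the baseline expects but this agreement lacks."""
        return REQUIRED_CLAUSES - self.clauses


contracts = [
    VendorContract("ExampleVendorCo", {"audit_rights", "regulatory_cooperation"}),
    VendorContract("OtherVendorInc", set(REQUIRED_CLAUSES)),
]

for contract in contracts:
    gaps = contract.missing_clauses()
    if gaps:
        print(f"{contract.vendor}: renegotiate to add {sorted(gaps)}")
```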
