⚖️ The New York State Department of Financial Services (NYDFS) recently issued a Proposed Insurance Circular Letter on the use of Artificial Intelligence (AI) and External Consumer Data and Information Sources (ECDIS) in insurance underwriting and pricing. Keep in mind this is not a law; a circular helps interpret the law. It applies to financial institutions and, by extension, to their suppliers. Since NYC is the headquarters of the financial sector, this circular has the potential to be the "NY 500" (NYDFS's influential cybersecurity regulation) of artificial intelligence. It's certainly a micro-trend in law to take seriously. AI in insurance isn't new; reports say it's the second most popular technology applied to underwriting, so more than a few firms should be paying attention to this alert.

In essence, the proposed circular mandates that insurers not use ECDIS or AI systems for underwriting or pricing unless the data source or model neither uses nor is based on any class protected under the New York Insurance Law. Protected-class data refers to data that can lead to discrimination, even indirectly, such as a family name. Insurers are expected to test for discriminatory impact on these protected classes even when they do not collect any data on them - a complex and potentially challenging requirement.

Action Items:

📌 Insurers need to establish effective governance frameworks for the use of ECDIS and AI systems. The board of directors and senior management must play an active role.

📌 Detailed quantitative testing obligations must be met. This includes specific metrics and extends to third-party AI tools.

📌 Insurers need to audit and test third-party data and tools to ensure they are regulatorily and actuarially compliant.

For those who follow insurtech law, this micro-trend is already well established in Colorado, so board members should take it seriously. If you're not talking about AI already, it's time to start.

#NYDFS #AI #Insurtech #Regulations #InsuranceUnderwriting
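The quantitative testing obligation above could, in its most minimal form, look like an adverse-impact-ratio check on underwriting outcomes. This sketch is illustrative only: the group labels, decisions, and the 0.80 flag threshold (borrowed from the familiar "four-fifths" convention) are assumptions, not anything prescribed by the NYDFS circular.

```python
# Hypothetical adverse-impact-ratio (AIR) check on underwriting decisions.
# Group labels, data, and the 0.80 threshold are illustrative assumptions,
# not requirements from the NYDFS circular.

def adverse_impact_ratio(outcomes, groups, favorable, protected, reference):
    """Favorable-outcome rate for the protected group divided by the
    favorable-outcome rate for the reference group."""
    def rate(group):
        n = sum(1 for g in groups if g == group)
        k = sum(1 for o, g in zip(outcomes, groups) if g == group and o == favorable)
        return k / n if n else 0.0
    ref_rate = rate(reference)
    return rate(protected) / ref_rate if ref_rate else 0.0

# Toy data: approve/deny decisions for two (possibly inferred) groups A and B.
outcomes = ["approve", "deny", "approve", "approve", "deny", "approve", "deny", "deny"]
groups   = ["A", "A", "A", "A", "B", "B", "B", "B"]

air = adverse_impact_ratio(outcomes, groups, "approve", protected="B", reference="A")
print(round(air, 2))  # 0.33 here; a common (illustrative) flag threshold is AIR < 0.80
```

Note the complication the circular creates: when the insurer holds no protected-class data, the `groups` column itself must come from a tested inference or proxy method, which is exactly why the requirement is described above as complex.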
Gray areas in AI insurance
Summary
The term "gray areas in AI insurance" refers to the complex and sometimes uncertain parts of using artificial intelligence in insurance, especially where rules, fairness, and discrimination are not clearly defined or universally agreed upon. As insurance companies increasingly adopt AI for underwriting, pricing, and claims, understanding these gray areas is crucial to avoiding unintended bias and ensuring compliance with evolving regulations.
- Review data sources: Examine all external data and AI models you use to make sure they do not unintentionally draw from information tied to protected classes or create unfair outcomes.
- Prioritize governance: Build robust frameworks and involve senior leadership to oversee how AI and third-party tools impact decision-making, ensuring transparency and accountability.
- Stay current: Regularly follow new laws and industry regulations, such as the EU AI Act and state guidelines, to keep your business practices compliant and avoid regulatory surprises.
Attention all insurers and #InsurTech companies active or planning to enter the #AI space – this is for you! Understanding the EU's #AIAct and its implications is crucial. It's already in force, with key provisions set to apply early next year. However, interpreting this cross-sectoral regulation within the #insurance context still involves significant uncertainty.

To help clarify, the European AI Office has launched a multi-stakeholder consultation on defining AI systems and identifying prohibited AI practices under the AI Act. This is a critical moment to reflect on your business practices, identify areas of ambiguity, and share your insights with the Commission. As a former regulator, I assure you: this feedback isn't going into a black box. It's a valued contribution that can shape regulatory clarity.

Why am I emphasizing this in the insurance context? Because, based at least on my recent discussions with industry stakeholders, two areas are currently raising concerns. First, insurance has unique characteristics, and the industry needs to clearly understand the scope of prohibited practices; some stakeholders have noted that certain common practices could unintentionally fall within them. Second, there is the definition of AI systems, which is foundational to the AI Act. Some insurers seem to question whether traditional statistical models like GLMs fall under the Act's scope, and views on this vary across the market.

Take this opportunity to contribute. Your input can help shape regulatory clarity. You'll thank me later!

__________

👉 Want to stay ahead of similar consultations and AI Act impacts on insurance? Subscribe to my insurtech4good.com newsletter!

♻️ Reshare this to help it reach other innovators who might have meaningful contributions to provide.
A few thoughts from the NAIC's first-ever Health Insurance AI/ML Survey (May 2025) that every carrier and broker exec should know:

AI is already mainstream. 93 insurers responded, and 84% say they're using AI/ML today (not just piloting) across individual, group, and student plans.

Where the deployments are live:
- Utilization management – 71%
- Prior-auth approvals – 68% (but only 12% let AI suggest denials); the tech is being steered toward speed, not stonewalling, a signal regulators will like
- Disease-management outreach – 61%
- Fraud detection – 50% on claims and 51% on providers
- Digital sales/quoting tools – 45%

Third-party tech is stronger than in-house. 55% embed external AI components, 15% outsource everything, and just 10% build fully in-house.

Production-level deployment is real. For claim-coding analytics alone, 62% of models are already live in production.

Product innovation is still in the early stages. Only 28% of carriers use AI/ML for product pricing and plan design.

Human-in-the-loop remains critical. Every individual major medical insurer that uses AI to negotiate out-of-network claims keeps a human reviewer in the chain (100%).

Governance is maturing, but unevenly. Most companies test for drift, bias, and fairness, yet 14% admit their models still infer sensitive traits like race, highlighting a regulatory gray zone.

The survey shows health-plan AI is past the hype cycle and already embedded in core workflows that affect pricing, access, and member experience. Expect regulators at both the state and federal levels to double their focus on regulations and frameworks: strong governance, transparent audit trails, and clear human override protocols will become table stakes faster than many expect.

Link to the full report in the comments.
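The drift testing mentioned in the survey is often implemented as a population stability index (PSI) comparison between a model's training-time score distribution and its current production scores. The sketch below is a minimal illustration; the bin edges, toy data, and the 0.2 alert threshold are common industry conventions assumed here, not figures from the NAIC survey.

```python
# Minimal population-stability-index (PSI) drift check between a model's
# training-time scores and its production scores. Bin edges and the 0.2
# alert threshold are illustrative conventions, not from the NAIC survey.
import math

def psi(expected, actual, edges):
    """PSI = sum over bins of (actual_share - expected_share) * ln(ratio)."""
    def shares(xs):
        counts = [0] * (len(edges) - 1)
        for x in xs:
            for i in range(len(edges) - 1):
                # Last bin is closed on the right so edge values are counted.
                if edges[i] <= x < edges[i + 1] or (i == len(edges) - 2 and x == edges[-1]):
                    counts[i] += 1
                    break
        # Small floor avoids log(0) when a bin is empty.
        return [max(c / len(xs), 1e-6) for c in counts]
    e, a = shares(expected), shares(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

train_scores = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
prod_scores  = [0.5, 0.6, 0.6, 0.7, 0.7, 0.8, 0.9, 0.9]
value = psi(train_scores, prod_scores, edges=[0.0, 0.25, 0.5, 0.75, 1.0])
print(value > 0.2)  # True: the shift exceeds the illustrative alert threshold
```

A check like this only detects distribution shift; the fairness and sensitive-trait-inference issues flagged in the survey require separate group-level testing on top of it.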