The Attribution Chasm: Why India's Cyber Insurance Can't Handle AI Attacks
- Centre for Advanced Studies in Cyber Law and AI (CASCA)
By: Charchit Pathak, a 4th year law student at Symbiosis Law School, Hyderabad and Chirkankshit, a 4th year law student at Rajiv Gandhi National University of Law, Patiala

Introduction
In a recent incident in Bengaluru, a resident lost over ₹3.75 crore in a scam built on an AI-generated deepfake video of a public figure. As India steps into its AI revolution, the threat of AI-driven attacks has become a dominant feature of its digital landscape, including but not limited to Gen AI-enabled phishing and autonomous malware. In the midst of this, the cyber insurance framework has failed to catch up. According to Check Point's 2025 Threat Intelligence Report, Indian enterprises face over 3,200 reported cyber-attacks per week across all sectors, yet the insurance policies they rely on stay silent on algorithmic threats. This blog explores how the Indian insurance law framework, built on an architecture of traditional doctrines such as causa proxima (proximate cause), contains a critical blind spot for AI-based cyberattacks, limiting the attribution of liability for algorithmic harm.
The Problem: Hidden Exclusions and the “Attribution Chasm”
The primary challenge lies in the industry-accepted use of ambiguous exclusion clauses. In practice, insurers lack actuarial data for AI risks and instead rely on vague clauses, such as "losses from emerging technology", to deny claims. This practice creates a de facto exclusion of novel AI risks, leaving the most vulnerable entities, SMEs, startups, and fintechs, uninsured in a high-risk environment.
By extension, this ambiguity points towards a more fundamental problem, which the authors call the "Attribution Chasm." Firstly, this chasm represents the evidentiary impossibility of proving causation in an algorithmic context. The black-box problem associated with AI means that its outcomes cannot be explained in terms of its inputs or its internal processing: the system produces a decision, but without any traceable justification for how it arrived there. Secondly, India’s insurance landscape follows a linear, human-agent-based model of causality. Doctrines like causa proxima require the insured to demonstrate a single, dominant, "efficient or predominant cause" of the loss. Similarly, uberrima fides, the doctrine of utmost good faith, assumes that the insured has a complete understanding of material risks. How can a small enterprise reasonably be expected to prove that a loss stemmed from a sophisticated adversarial attack rather than from its own system failure? Risk disclosure is extremely difficult when the behaviour of an autonomous agent is opaque, sometimes beyond the comprehension of even its human creators. This opacity creates a critical gap in the current legal framework. Thirdly, discharging the burden of proof for these claims requires intricate digital forensics, admissible under Section 63 of the Bharatiya Sakshya Adhiniyam, 2023. This becomes a cost-prohibitive barrier for the insured, who is tasked with devising an explanation for the forensically impenetrable black box of AI.[1]
Why Indian Law Struggles
The failure of the Indian legal framework to bridge the attribution chasm stems from two primary causes. The first is established judicial doctrine. While the Indian judiciary possesses equity-based interpretive tools such as the contra proferentem rule (vague contractual terms are interpreted against the drafter), which can prevent insurers from escaping liability, their application is inconsistent at best, with courts often preferring literal interpretation in commercial contracts and applying the rule only with extreme care (Sikka Papers Limited v. National Insurance Company Limited, 2009 INSC 849). The second is regulatory silence on evolving AI technologies. AI regulation in India is extremely nascent, and as on the date of writing, there is no soft law, whether guidelines or circulars, from the Insurance Regulatory and Development Authority of India (IRDAI) defining AI-based risks or recognising them in insurance contracts. The existing guidelines, including the "Guidelines on Information and Cyber Security", focus largely on internal security, not on algorithmic risks in insurance contracts. In the absence of regulation, and given prevailing judicial practice, courts are left without a framework to assess "AI causation", leaving AI-based risks outside the ambit of coverage and favouring insurers.
Global Lessons: Three Models of Response
While India's regulatory paralysis is evident, other jurisdictions are pursuing proactive approaches to this issue. In the United Kingdom, a market-discipline model has emerged, where entities like Lloyd's of London have mandated that insurers provide clarity on whether cyber-attacks, and which specific categories of them, are covered. In the United States, a litigation-based approach prevails, with landmark cases such as Merck v. Ace American, where courts have narrowed cyber-attack exclusions, pushing the industry toward clearer policies through the fear of adverse judgments. Finally, the European Union has created a regulatory-mandate model which embeds AI risk management and operational resilience into clear obligations under law, creating standards for all financial entities. Comparing these models within the Indian socio-legal context suggests that the EU model is the best fit. India’s litigation ecosystem is plagued by judicial delays, making a litigation-based approach (as in the US) unfeasible for the dynamic resolution of disputes. Additionally, India’s small entities lack the bargaining power to negotiate with large insurance providers, discouraging the adoption of a market-discipline framework like the UK’s. In contrast, the EU model, through regulatory intervention, provides certainty and clear regulatory standards, protecting the interests of all stakeholders.
While the EU model offers the most appropriate framework, the authors suggest that blind adoption would ignore the fragmented realities of India’s regulatory landscape. At present, Indian cyber-governance flows from multiple instruments, such as the Information Technology Act, 2000, the Digital Personal Data Protection Act, 2023, the CERT-In Directions, 2022, and sporadic IRDAI circulars, none of which provide a unified AI-risk framework. A hybrid approach combining the EU’s definitional precision with established Indian doctrines is a more suitable way forward, aligned with India’s legal fabric. IRDAI could intervene through a master circular on this aspect, adopting the EU’s regulatory mandates in a manner consistent with existing Indian legislation. The resolution to the attribution chasm therefore lies not in replication, but in selective adaptation with context-specific design, fit to the Indian legal scenario.
The Path Forward: A Smarter Framework for India
Resolving these lacunae in India's insurance law framework requires a multi-pronged regulatory strategy from IRDAI, delivered through a master circular on AI-risk coverage that balances clarity, inclusivity, and pragmatic reform. The following strategies should be adopted:
Firstly, affirmative coverage requirements must be mandated, prohibiting vague "emerging threat" exclusions. Policies must use clear, concise, and explicit language specifying exactly which kinds of adversarial attacks are covered, bringing certainty and clarity to all parties.
Secondly, tiered risk coverage should be introduced to ensure market inclusivity, since a one-size-fits-all policy may price small entities out of the market. A standard base tier would cover common high-frequency threats like AI-induced phishing; intermediate tiers would cover more complex threats such as data poisoning; and a final tier could offer more nuanced coverage for large firms facing autonomous system failures. This tier division must be based not just on the size of the entity, but on the sensitivity of its operations and the data it handles.
Thirdly, an Information Parity doctrine must be written into the letter of the law, aligning the principle of uberrima fides with new-age algorithmic threats. Insurers must be mandated to provide a simplified "Key Risk Summary" and to establish a pre-claim verification portal where the insured can seek clarification on coverage in the case of complex AI-induced cyber-attacks.
Fourthly, and most critically, the Attribution Chasm itself must be resolved. The law must recalibrate the traditional burden of proof: once the insured establishes a prima facie case of an anomalous cyber event resulting in material damage, the burden must shift to the insurer to prove that the damage falls within an already carved-out exception. This approach has precedent, such as the defences under Section 87 of the Consumer Protection Act, 2019, which place the burden on the manufacturer once a prima facie defect is established.
Lastly, given the technical complexity of these disputes, IRDAI must constitute a panel of neutral arbiters with relevant technical expertise for the just adjudication of disputes between insurers and the insured.
The Bigger Picture
In the larger landscape, this chasm is not a mere contract dispute but a threat to India’s financial growth in the age of AI-driven attacks. The lacuna creates a massive uninsured risk portfolio among India’s most vulnerable businesses. When micro enterprises and upcoming fintechs, which spearhead India’s digital finance ambitions, are left uninsured, costs rise for the entire ecosystem, including banks and customers. Without insurance, finance-facing entities become riskier investments, raising their cost of capital and the scepticism of the investors and banks that are their primary providers of capital. Further, without coverage, these entities remain exposed to heavy penalties and compensation pay-outs to customers, which they would have to bear themselves in the event of cyber-security and data-breach incidents. These uninsured algorithmic risks can send shockwaves across the entire ecosystem, creating apprehension and mistrust among stakeholders. India’s AI revolution cannot be built upon an outdated insurance landscape incapable of absorbing the very risks that come with AI adoption.
Conclusion
The "Attribution Chasm" is more than a policy gap; it is a black hole that widens with time, feeding on the stability pillars of India’s financial system. As these threats grow in scale, the window for regulation is closing and the stakes are rising.
Bridging this chasm demands the engagement of both lawmakers and the judiciary. Lawmakers must reduce statutory uncertainty: IRDAI should issue master circulars on risk coverage and the recognition of algorithmic risks. The judiciary must play its part by accommodating these novel risks within existing Indian equity doctrines through purposive interpretation. This must be complemented by a burden-shifting mechanism under which insurers must disprove coverage once an AI anomaly is prima facie established. Finally, to bridge the technological expertise gap in adjudication, IRDAI must establish a technical arbitral panel to tackle complex forensic AI issues.
Proactive regulation is not a nice-to-have, but a necessity for stability. Without it, India’s policymakers risk hindering the country’s digital growth and letting down the brightest minds leading its AI revolution.
[1] Yavar Bathaee, 'The Artificial Intelligence Black Box and the Failure of Intent and Causation' (2018) 31(2) Harvard Journal of Law & Technology 889, 913.
