
Licensing and Consent: Protecting Personality Rights under the DPDP Act

  • jacobeldhokalarikk
  • Nov 3

By: Vashmath Potluri and Shubhranshu, 4th year students at NALSAR


Introduction 

In recent months, several high-profile celebrities, including Nagarjuna, Abhishek Bachchan, and Aishwarya Rai, have successfully obtained injunctions from the Delhi High Court to protect their personality rights against unauthorized use of their likeness, voice, and other identity-linked attributes. While these cases highlight the judiciary’s recognition of dignitary and reputational harms, they also expose a deeper systemic issue: the absence of a dedicated statutory framework to regulate and provide redress for the misuse of personality-linked data. In the absence of such legislation, individuals must rely on fragmented remedies under tort or copyright law, which are often slow, reactive, and ill-suited to address harms in the rapidly evolving digital and AI landscape.

The Digital Personal Data Protection Act, 2023 (“DPDP Act”), though enacted to regulate personal data, diverges significantly from the earlier Personal Data Protection Bill, 2019 (“PDP Bill”) in scope and approach. While the PDP Bill explicitly recognised sensitive personal data categories and imposed stricter consent requirements, the DPDP Act applies a uniform standard to all personal data and exempts publicly available information from protection. This shift narrows the protective ambit, leaving personality-linked attributes particularly vulnerable to AI-driven exploitation.

This article situates personality rights firmly within India’s constitutional privacy framework and examines how the DPDP Act could serve as the primary legislative instrument to govern the collection, processing, and potential misuse of identity-linked attributes. The discussion unfolds in two parts. First, it diagnoses structural gaps in the DPDP Act, most notably the absence of a sensitive personal data category and the exemption for publicly available data, that leave personality-linked attributes exposed. Second, it proposes targeted reforms aimed at providing ex ante protections, including the classification of sensitive personality data and an opt-in/opt-out licensing framework for its processing. These preventive measures are reinforced by ex post safeguards, such as enhanced fiduciary duties, mandatory erasure upon withdrawal of consent, and stricter penalties under Sections 33 and 42, thereby enabling individuals to secure timely and effective recourse against misuse.


Personality Rights Through the Privacy Lens

The constitutional protection of personality rights in the digital era rests on the Supreme Court’s recognition of informational privacy in Justice K.S. Puttaswamy v. Union of India. The Court expanded the notion of privacy beyond secrecy or physical intrusion to encompass an individual’s control over how personal information is collected, processed, and disseminated. Justice Chandrachud observed that non-state actors increasingly threaten autonomy in the digital age, while Justice Nariman emphasized “informational self-determination”, the individual’s capacity to govern how their identity is represented publicly. These principles establish a normative foundation for personality rights that extends beyond mere reputation or commercial interests, anchoring them firmly in autonomy and dignity.

The relevance of these constitutional principles becomes particularly pronounced in the context of AI-mediated identity replication. In Sadhguru Jaggi Vasudev v. Igor Isakov, the Delhi High Court granted an interim injunction against the unauthorized use of an AI-generated voice, likeness, and mannerisms. While this order was not a final judgment and does not constitute binding precedent, it illustrates judicial recognition that AI appropriation of identity-linked attributes can inflict autonomy and dignitary harms. By emphasizing control and consent over expressive attributes such as gestures, vocal patterns, and facial expressions, the Court implicitly extended the privacy-based rationale of Puttaswamy to emerging technological contexts.

However, Sadhguru also highlights the limitations of the current framework: reliance on interim relief, in the absence of a statutory right explicitly regulating AI-mediated uses of personality, underscores gaps in enforceability and in the scope of consent. This tension reveals both the promise and the fragility of grounding personality rights in constitutional privacy principles; while the judiciary can protect autonomy in specific instances, broader regulatory clarity remains necessary to address the systemic challenges posed by generative AI.

Having established the constitutional basis for informational privacy, the discussion now turns to the statutory framework governing personal data, highlighting the structural gaps in the DPDP Act, most notably the absence of a sensitive personal data category and the exemption for publicly available information, that leave identity-linked attributes inadequately protected in the digital era.


The Conflation of Ordinary and Sensitive Data

While personality-linked attributes such as facial expressions, voice, and gestures technically fall within the broad definition of personal data under Section 2(t) of the DPDP Act, which covers any information relating to an identifiable individual, the issue runs deeper. The Act treats all personal data uniformly, overlooking that certain identifiers carry inherently greater risks. Unlike an email address or phone number, which can be changed or replaced, permanent identity-linked data are inseparable from the individual. Misuse, particularly through artificial intelligence, can result in deepfakes, impersonation, or unauthorized commercial exploitation, causing harms to dignity, reputation, and autonomy that are often irreversible.

The DPDP Act’s uniform treatment of personal data leaves these vulnerabilities insufficiently addressed as it does not recognize the heightened risks posed by permanent identifiers. By contrast, the Personal Data Protection Bill, 2019 sought to fill this gap by distinguishing sensitive personal data from general personal data. It explicitly included biometric identifiers such as facial images, fingerprints and other physical, physiological or behavioral characteristics enabling unique identification under Section 3. Processing this information requires explicit consent under Section 11 while anonymization must be irreversible under Section 3(2), ensuring individuals cannot be re-identified once consent is withdrawn or the data is repurposed. By embedding these safeguards, the Bill recognized that high-risk personal attributes carry the potential for lasting harm if misused, warranting stronger protection than replaceable information.

These concerns are not merely theoretical; they have manifested both globally and in India, highlighting why the distinctions drawn in the PDP Bill are necessary. Across multiple countries, AI-generated deepfakes such as non-consensual synthetic intimate imagery have inflicted severe psychological and reputational damage, including public humiliation, loss of employment opportunities, and emotional distress. Similar patterns of harm are emerging in India, showing that unauthorized use of personality-linked data is not confined to any single context. Global Health Limited & Anr v. John Doe & Ors involved deepfakes using Dr. Naresh Trehan’s likeness to propagate false medical advice, while Ankur Warikoo & Anr v. John Doe & Ors involved the exploitation of an influencer’s persona in fraudulent investment schemes. In all these instances, the damage to reputation and public trust persisted even after the content was removed, demonstrating how such misuse can erode informational privacy and compromise control over one’s public identity.

 

The Twofold Exemption Problem of Publicly Available Data

While the DPDP Act’s uniform treatment of personal data already leaves high-risk, permanent identifiers vulnerable, these risks are compounded by its handling of this data when publicly available. Under Section 3(c)(ii), the Act does not apply to information that is “made or caused to be made publicly available” either by the individual or under any law, meaning the protection may largely cease to apply subject to government-prescribed exceptions. Once photos, videos or voice clips are shared on social media or broadcast through interviews, the protections of the Act no longer extend to that information.

This exemption has a twofold effect. First, it prevents the formation of a Data Principal–Data Fiduciary relationship because, once the Act does not apply, no consent requirement under Section 4(1)(a) arises. Companies scraping publicly shared facial images or voice samples can therefore argue that they are not bound by consent obligations. Second, the exemption removes these data from the scope of Section 4(1)(b), which limits processing without consent to specific “legitimate uses.” In practice, this allows such information to be used for purposes including algorithmic training or commercial endorsement without regulatory constraints.

The practical consequence is a significant gap because publicly shared content is often treated as blanket consent, ignoring the context and intent of disclosure. Individuals frequently post material for personal expression or limited audiences without anticipating that their identity-linked data will be scraped, synthesized, or monetized by AI systems. By equating public availability with implied consent, the Act erodes the distinction between voluntary sharing and forfeiture of privacy, leaving individuals, including public figures, without meaningful safeguards against AI-driven exploitation of their identity. This concern aligns with the OECD Privacy Guidelines and the Justice B.N. Srikrishna Committee Report, both of which emphasize that consent must be informed, specific, and contextually grounded, highlighting why assumptions of implied consent based solely on public availability are inadequate for protecting high-risk identity-linked data.


Way Forward: Restoring Control over Personality Rights

The European Union’s General Data Protection Regulation (“GDPR”) provides a normative benchmark for the differentiated protection of personal data. Article 9(1) explicitly designates “special categories” of personal data, including biometric identifiers, subjecting their processing to heightened consent and purpose limitations. While the GDPR’s framework was initially grounded in identifiability, recent jurisprudence from the CJEU and the EU Charter of Fundamental Rights signals a shift toward an autonomy-centered model of data protection. This evolution highlights that data intimately linked to human identity engages dignity and personhood, extending beyond mere informational control.

By contrast, the DPDP Act, though inspired by the GDPR, diverges sharply from this normative trajectory. The Act applies a uniform standard to all personal data, disregarding differences in sensitivity, and excludes publicly available information from protection. Its transactional approach to consent further compounds the problem, leaving individuals vulnerable to AI systems that scrape, synthesise, and monetise identity-linked attributes. To align India’s data protection regime with the principle of dignity, a set of targeted reforms is therefore essential.


Codifying Personality Rights

To address these risks, the Central Government should utilise its rule-making power under Section 40(2)(z) of the DPDP Act, which permits prescription of “any other matter which is to be or may be prescribed,” to formally classify identity-linked attributes as sensitive personality data. This would include attributes such as facial likeness, voice, gestures, and other biometric or behavioural markers, recognising them as inherently high-risk due to their potential for misuse.

Operationally, this classification could be implemented as follows: (i) the government would maintain an official registry of sensitive personality data types, providing clarity for data fiduciaries and enabling targeted regulatory oversight; (ii) each attribute would be categorised according to its sensitivity and associated risk, allowing proportional safeguards and tailored consent requirements; and (iii) this classification would underpin enhanced fiduciary obligations, including robust security measures, verifiable consent documentation, and prompt erasure of data once its intended purpose is fulfilled under Section 12(3).
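The three-step scheme above can be sketched in code. The snippet below is a purely illustrative model, not anything prescribed by the DPDP Act or its rules: the attribute names, risk tiers, and duty labels are all hypothetical, chosen only to show how a registry of sensitive personality data types could map each attribute's sensitivity to proportional fiduciary obligations, including the Section 12(3) erasure duty.

```python
from dataclasses import dataclass
from enum import Enum

# Hypothetical sensitivity tiers; the DPDP Act currently prescribes no such scale.
class Risk(Enum):
    HIGH = "high"      # permanent, inseparable identifiers (facial likeness, voice)
    MEDIUM = "medium"  # behavioural markers (gestures, mannerisms)

@dataclass(frozen=True)
class SensitiveAttribute:
    name: str
    risk: Risk
    erase_on_purpose_fulfilled: bool  # mirrors the Section 12(3) erasure duty

# Illustrative official registry of sensitive personality data types (step i).
REGISTRY = {
    "facial_likeness": SensitiveAttribute("facial_likeness", Risk.HIGH, True),
    "voice": SensitiveAttribute("voice", Risk.HIGH, True),
    "gestures": SensitiveAttribute("gestures", Risk.MEDIUM, True),
}

def required_safeguards(attribute: str) -> list[str]:
    """Map an attribute's risk tier to proportional fiduciary duties (steps ii-iii)."""
    entry = REGISTRY.get(attribute)
    if entry is None:
        return ["ordinary personal data obligations"]
    duties = ["robust security measures", "verifiable consent documentation"]
    if entry.risk is Risk.HIGH:
        duties.append("explicit attribute-specific consent")
    if entry.erase_on_purpose_fulfilled:
        duties.append("prompt erasure once purpose is fulfilled (s. 12(3))")
    return duties
```

On this sketch, an unregistered attribute (say, an email address) falls back to ordinary obligations, while a registered high-risk attribute such as facial likeness triggers the full set of heightened duties.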

By codifying sensitive personality data in this manner, the DPDP Act would establish a clear legal foundation for operationalising consent, enforcing fiduciary accountability, and preventing AI-driven exploitation that could compromise privacy, autonomy, or reputational interests.


Strengthening the Consent Regime

Drawing inspiration from the EU Copyright Directive’s opt-in/opt-out philosophy, the DPDP Act can further balance innovation with individual autonomy by ensuring that sensitive personality data is processed only with explicit authorisation from the data principal, while preserving the continuing right to withdraw or modify consent. To operationalise this under Section 6, the rules should establish a regulated licensing mechanism in which processing is permitted solely with affirmative consent, allowing individuals to retain control over their identity-linked attributes at all times.

In practice, this framework could function as follows: (i) a centralised consent registry managed by the government would enable individuals to grant, monitor, and manage consent for each type of sensitive data, generating verifiable digital consent tokens for data fiduciaries; (ii) consent should be granular and attribute-specific, allowing individuals to authorise certain uses while prohibiting others, with each record specifying the purpose, processing type, duration, and revocation rights; and (iii) data fiduciaries would be required to validate consent tokens in real time before processing, ensuring that sensitive data is used only when explicitly authorised.
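The consent-token mechanism in steps (i)–(iii) above can likewise be sketched. This is a minimal illustration under assumed design choices, not a specification of any existing registry: the signing key, token fields, and purpose labels are invented, and a real system would need revocation lists, key management, and audit trails. It shows only the core idea that a token is attribute- and purpose-specific, expires, and is validated by the fiduciary before processing.

```python
import hashlib
import hmac
import json
import time

SECRET = b"registry-signing-key"  # hypothetical key held by the consent registry

def issue_token(principal: str, attribute: str, purpose: str, ttl: int = 3600) -> dict:
    """Registry issues a signed consent token, granular to one attribute and purpose (i, ii)."""
    payload = {
        "principal": principal,
        "attribute": attribute,   # e.g. "voice", "facial_likeness"
        "purpose": purpose,       # e.g. "dubbing"; never a blanket grant
        "expires": int(time.time()) + ttl,
        "revoked": False,
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["sig"] = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return payload

def validate_token(token: dict, attribute: str, purpose: str) -> bool:
    """Fiduciary-side real-time check before processing (iii): signature, scope, expiry."""
    body = json.dumps({k: v for k, v in token.items() if k != "sig"},
                      sort_keys=True).encode()
    expected = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(token.get("sig", ""), expected)
            and not token["revoked"]
            and token["attribute"] == attribute
            and token["purpose"] == purpose
            and token["expires"] > time.time())
```

Because validation is purpose-specific, a token granted for one use (say, dubbing a voice) fails validation if presented for another (say, AI training), which is precisely the granularity the opt-in framework contemplates.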

This approach ensures that even publicly available sensitive personality data cannot be processed for AI or commercial purposes without explicit consent, preventing claims of implied authorisation. Complementing this, obligations under Section 8 would be strengthened, requiring data fiduciaries to implement robust security safeguards, maintain verifiable consent records, and ensure prompt erasure of data under Section 12(3) once the purpose is fulfilled. Any breach involving sensitive personality data would attract heightened scrutiny and proportionately higher penalties under Section 33, with Section 42 empowering the Central Government to adjust penalty thresholds according to the sensitivity of the data.


Conclusion

The rise of AI-mediated identity replication and digital personalisation has exposed significant gaps in India’s current data protection framework, particularly regarding personality-linked attributes. While the DPDP Act, 2023 represents a landmark step toward regulating personal data, its uniform treatment of all data, combined with the exemption for publicly available information and a transactional consent model, leaves high-risk identity-linked attributes vulnerable to misuse. This underscores the need for a differentiated, autonomy-centered approach to data protection aligned with constitutional principles of privacy, dignity, and informational self-determination.

Drawing inspiration from the EU GDPR’s special categories framework and the opt-in/opt-out licensing philosophy of the EU Copyright Directive, India can operationalize meaningful safeguards for sensitive personality data. Classifying facial likeness, voice, gestures, and other biometric or behavioral markers as sensitive personality data and implementing a regulated consent registry with granular attribute-specific opt-in/opt-out mechanisms would ensure that individuals retain ongoing control over their identity-linked attributes. Complementary enhancements to fiduciary obligations including robust security safeguards, verifiable consent documentation, prompt erasure of data, and proportionate penalties would further strengthen enforceability and accountability.

By adopting these targeted reforms, the DPDP Act can move beyond a reactive uniform approach toward a proactive framework that reconciles innovation with individual autonomy. Such measures would provide clear statutory protections against AI-driven exploitation, operationalize constitutional privacy rights in the digital era, and position India’s regulatory architecture in harmony with evolving international standards. Ultimately this approach ensures that the dignity and autonomy of individuals remain central to the governance of their personal and identity-linked data.


