Bias and Discrimination in AI: Regulatory Gaps in Indian Cyber Laws
- Centre for Advanced Studies in Cyber Law and AI (CASCA)
- Aug 31
By Aadit Seth (3rd-year member, CASCA)

Introduction
Artificial Intelligence has transformed from a futuristic concept into a present reality that increasingly shapes critical decisions affecting millions of lives. From hiring decisions to access to government benefits, from police surveillance to credit approvals, AI systems are embedded in the fundamental fabric of Indian society. However, beneath their veneer of technological objectivity lies a troubling reality: AI systems can perpetuate and amplify existing social biases, creating new forms of discrimination that threaten the constitutional values of equality and justice.
In India's diverse society, where historical inequalities based on caste, gender, religion, and economic status remain deeply entrenched, algorithmic bias poses particularly serious risks. When AI systems learn from biased historical data or flawed design assumptions, they can systematically disadvantage already marginalized communities, often without adequate accountability mechanisms.
Where and How AI Bias Emerges in India
AI-powered recruitment tools have become widespread in Indian companies, promising efficiency and objectivity in hiring processes. However, these systems often reproduce the historical employment patterns that favour certain groups. The infamous Amazon recruitment algorithm case is instructive here: trained on predominantly male resumes, the algorithm systematically downgraded applications containing words associated with women, demonstrating that fast, automated screening is not necessarily sound or fair screening.
In the Indian context, AI hiring tools may inadvertently discriminate against candidates from rural backgrounds, certain linguistic groups, or specific educational institutions. For instance, if training data predominantly features urban, English-educated candidates, the AI may systematically undervalue applications from rural areas or vernacular language backgrounds, violating principles of equal opportunity.
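To make the mechanism concrete, here is a minimal, purely illustrative Python sketch: a naive screener "trained" on a hypothetical, urban-skewed set of past hires simply learns to prefer whatever past hires looked like. The data and scoring rule are invented for demonstration and are far simpler than any production system.

```python
# A minimal, purely illustrative sketch (hypothetical data) of how a resume
# screener "trained" on skewed historical hires reproduces that skew. Real
# systems are vastly more complex; the mechanism, however, is the same.

from collections import Counter

# Past hires: predominantly urban, English-medium candidates (invented).
past_hires = [
    {"city": "urban", "medium": "english"},
    {"city": "urban", "medium": "english"},
    {"city": "urban", "medium": "english"},
    {"city": "rural", "medium": "vernacular"},
]

# "Training": learn how common each feature value was among past hires.
weights = {
    feature: Counter(h[feature] for h in past_hires)
    for feature in ("city", "medium")
}

def score(candidate):
    """Prefer candidates who resemble past hires."""
    return sum(weights[f][v] / len(past_hires) for f, v in candidate.items())

print(score({"city": "urban", "medium": "english"}))      # 1.5 -- favoured
print(score({"city": "rural", "medium": "vernacular"}))   # 0.5 -- downgraded
```

The point is not the arithmetic but the pattern: with no explicit rule against rural or vernacular candidates, the skew in the training data alone produces systematically lower scores for them.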
AI-driven credit scoring systems in Indian banks and fintech companies have been found to exhibit bias against marginalized communities. These algorithms often use proxy data like geographical location, spending patterns, or social media activity to assess creditworthiness. However, such systems can systematically disadvantage Dalits, Muslims, and economically weaker sections who may have limited access to formal financial systems or digital infrastructure, leading the algorithm to produce creditworthiness scores that reflect structural exclusion rather than genuine repayment capacity.
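Proxy discrimination of this kind can occur even when the protected attribute is never an input. The following hypothetical sketch (invented data, deliberately simplified scoring) shows how a pincode feature can carry the protected attribute into the model through residential segregation:

```python
# Hypothetical sketch of proxy discrimination (all data invented): the model
# never sees caste or religion, but a pincode feature encodes residential
# segregation, so the score still tracks the protected attribute.

from collections import defaultdict

applicants = [
    # (pincode, protected_group, repaid_previous_loan)
    ("110001", "A", True), ("110001", "A", True),
    ("110099", "B", True), ("110099", "B", False),
]

# "Training": approval score = historical repayment rate of the pincode.
repaid, total = defaultdict(int), defaultdict(int)
for pincode, _, repaid_ok in applicants:
    total[pincode] += 1
    repaid[pincode] += repaid_ok

def credit_score(pincode):
    return repaid[pincode] / total[pincode]

# Group B lives disproportionately in pincode 110099, so every member of
# group B inherits the lower score without the model ever seeing "group".
print(credit_score("110001"))  # 1.0
print(credit_score("110099"))  # 0.5
```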
Most concerning is the use of AI in government welfare distribution. The Aadhaar-linked benefit schemes, while designed to prevent leakages, have been found to exclude vulnerable populations. Cases like Santoshi Kumari's death due to malnutrition after her ration card was wrongly cancelled highlight how algorithmic failures can have life-threatening consequences for the poor and marginalized.
Similarly, Facial Recognition Technology (FRT) deployed by Indian police forces exhibits higher error rates for women, individuals with darker skin tones, and ethnic minorities. During events like the 2020 Delhi riots, AI-powered facial recognition was reportedly used to identify and prosecute protesters, with allegations that the system disproportionately targeted certain religious and socio-economic groups.
Predictive policing algorithms, which forecast crime-prone areas based on historical data, risk reinforcing systemic biases. If past crime data reflects biased policing practices against certain communities, AI predictions will perpetuate the same prejudices, resulting in over-policing of marginalized neighbourhoods.
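The feedback loop can be illustrated with a toy simulation (all numbers invented): if patrols are allocated in proportion to previously recorded crime, and more patrolling records more crime, an initial disparity in the records compounds over time even when the underlying crime rates are identical.

```python
# Toy simulation (all numbers invented) of the predictive-policing feedback
# loop: patrols follow past recorded crime, more patrolling records more
# crime, and the initial disparity compounds even though both areas have
# the same true underlying crime rate.

recorded = {"neighbourhood_X": 12, "neighbourhood_Y": 10}  # biased history
TRUE_INCIDENTS = 20  # identical real crime, split across both areas

for year in range(5):
    total = sum(recorded.values())
    shares = {area: count / total for area, count in recorded.items()}
    for area in recorded:
        # Patrols (and hence new records) are allocated by predicted risk.
        recorded[area] += round(TRUE_INCIDENTS * shares[area])

print(recorded)  # the gap between X and Y widens every year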
Legal and Regulatory Status in India
Article 14 of the Indian Constitution guarantees equality before the law and equal protection of laws. This fundamental right has been interpreted broadly by the Supreme Court to include protection against arbitrariness. In E.P. Royappa v. State of Tamil Nadu (1974), the Court established that "arbitrariness is the very antithesis of equality," providing a constitutional foundation for challenging discriminatory AI systems.
Article 15 prohibits discrimination on grounds of religion, race, caste, sex, or place of birth. Together, these provisions create a robust constitutional framework against discrimination that should apply to AI systems used by government entities.
India's Digital Personal Data Protection (DPDP) Act, 2023 represents a significant step toward data protection but falls short of addressing AI-specific challenges. While the Act governs the processing of personal data, it lacks specific provisions for algorithmic fairness, bias detection, or automated decision-making transparency. The Act does designate "Significant Data Fiduciaries" (SDFs) who must conduct periodic algorithmic audits and data protection impact assessments, but these provisions remain largely aspirational without detailed implementation guidelines.
The Information Technology Act, 2000 (IT Act) provides limited protection against algorithmic discrimination. However, recent statements by government officials suggest growing recognition of the problem: officials have stated that "search bias, algorithmic bias, and AI models with bias are real violations of the safety and trust obligations" under the IT Rules, and that platforms exhibiting such biases may lose their safe harbour protections.
Proving algorithmic discrimination requires technical expertise and access to proprietary algorithms. Affected individuals often lack the resources or technical knowledge to challenge automated decisions, creating significant barriers to justice. India currently lacks a dedicated AI regulatory body with the expertise and authority to oversee algorithmic fairness, let alone to compel developers to retrain biased models. The absence of such an authority creates enforcement gaps and leaves victims without effective recourse mechanisms.
AI systems trained on biased or unrepresentative datasets inevitably produce discriminatory outcomes. In India's diverse context, entire communities may be missing or misrepresented in training data, leading to systematic exclusion of marginalized groups.
Comparative frameworks offer useful models. The EU's comprehensive AI Act categorizes AI systems by risk level and mandates strict compliance for high-risk applications; it requires transparency, human oversight, and non-discrimination measures, providing a template for rights-based AI regulation. Several U.S. states have introduced algorithmic accountability laws; California's proposed legislation, for example, includes anti-discrimination clauses and audit requirements for AI systems used in employment and housing decisions.
Recommendations and the Road Ahead
India urgently needs a dedicated AI regulation statute grounded in constitutional values. Such legislation should define risk categories based on impact and use cases, ensuring that high-risk AI applications in areas like criminal justice, healthcare, and employment receive the strictest scrutiny. The statute must prohibit AI applications that violate dignity, privacy, or due process, creating clear red lines that protect fundamental rights. Additionally, it should mandate transparency and explainability requirements, ensuring that AI decision-making processes can be understood and scrutinized by affected individuals and regulatory authorities. Finally, the legislation must create legal duties of care for AI developers and deployers, establishing clear accountability frameworks that hold organizations responsible for the societal impact of their AI systems.
India must establish formal benchmarks for ensuring algorithmic fairness through comprehensive testing and evaluation mechanisms. This includes implementing pre-deployment bias testing using representative datasets that accurately reflect India's diverse population, ensuring that AI systems do not systematically disadvantage any community before they are deployed. The framework should mandate periodic audits by independent institutions with the technical expertise to assess AI systems throughout their lifecycle, not just at the point of deployment. Furthermore, it requires risk assessments evaluating disparate impact across protected categories such as caste, religion, gender, and socio-economic status, ensuring that AI systems do not perpetuate or amplify existing social inequalities.
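As one concrete illustration of what a pre-deployment bias test could look like, the sketch below applies the "four-fifths" disparate-impact rule of thumb drawn from U.S. employment practice: if the selection rate of the least-favoured group falls below 80% of that of the most-favoured group, the system is flagged for human review. The group labels and outcomes are hypothetical placeholders; a real audit benchmark would be far more comprehensive, covering intersectional categories and larger samples.

```python
# A minimal pre-deployment audit sketch: compare selection rates across
# groups and flag violations of the "four-fifths" disparate-impact rule of
# thumb. Group labels and outcomes here are hypothetical placeholders.

def selection_rate(outcomes):
    return sum(outcomes) / len(outcomes)

def disparate_impact_ratio(outcomes_by_group):
    rates = {g: selection_rate(o) for g, o in outcomes_by_group.items()}
    return min(rates.values()) / max(rates.values()), rates

# 1 = AI system approved the application, 0 = rejected (invented data).
audit_sample = {
    "group_A": [1, 1, 1, 0, 1, 1, 0, 1],   # 75% selected
    "group_B": [1, 0, 0, 1, 0, 0, 1, 0],   # 37.5% selected
}

ratio, rates = disparate_impact_ratio(audit_sample)
print(rates)
print(f"impact ratio = {ratio:.2f}")       # 0.50
if ratio < 0.8:
    print("FLAG: potential disparate impact; requires human review")
```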
All AI applications affecting fundamental rights must include meaningful human oversight to prevent automated systems from making unchecked decisions that impact citizens' lives. This means establishing a principle that no automated decisions should be final without human review, particularly in critical areas like welfare distribution, criminal justice, and employment decisions. Individuals must have the right to challenge algorithmic decisions through accessible and effective grievance mechanisms, ensuring that citizens are not powerless against machine-driven outcomes.
India requires an independent AI Regulatory Authority with comprehensive powers to oversee algorithmic fairness, investigate complaints, and enforce compliance. In parallel, the DPDP Act should be amended to include AI-specific provisions that address the unique challenges posed by artificial intelligence systems. This includes establishing explicit rights against automated decision-making and giving individuals the power to demand human intervention in decisions that significantly affect them. Data minimization principles for AI training must also be incorporated, ensuring that AI systems use only the minimum data necessary for their intended purpose and do not perpetuate privacy violations through excessive data collection.
Conclusion
India faces a critical challenge in balancing AI-driven efficiency with constitutional guarantees of equality, as current legal frameworks inadequately address the algorithmic bias and discrimination that threaten to create a "digital caste system" perpetuating historical inequalities. The emergence of AI bias represents a new frontier of discrimination requiring proactive intervention through comprehensive legal reform, regulatory oversight, and technical standards that reflect India's unique diversity and constitutional values. As AI becomes increasingly pervasive, ensuring fair and equitable systems is not merely a policy choice but a constitutional imperative, one that demands action before algorithmic discrimination becomes further embedded in institutions affecting millions of lives. Only through proactive legal frameworks, robust oversight mechanisms, and an unwavering commitment to constitutional principles can India harness AI's transformative potential while safeguarding the rights and dignity of all citizens. The moment presents both a significant challenge and an unprecedented opportunity: to build an AI ecosystem that truly serves justice, equality, and the common good.