CURRENT AFFAIRS | MARCH 2026
Prelims: MANAV Framework, AI Impact Summit 2026, Glass Box vs Black Box AI
Mains: GS Paper II (Governance — AI Ethics, Accountability), GS Paper IV (Ethics — Technology and Morality)
Judicial Services Relevance: Right to explanation under Art. 21; audi alteram partem vs algorithmic opacity; liability for AI-generated judicial decisions; Art. 14 (non-arbitrary AI)
PM Modi’s Five-Pillar MANAV Framework for AI Governance
At the AI Impact Summit 2026, Prime Minister Narendra Modi articulated the MANAV framework — a five-dimensional ethical architecture for artificial intelligence governance. The acronym encapsulates foundational principles that India proposes as the normative bedrock for global AI regulation:
- M — Moral Systems: AI must be anchored in ethical frameworks that respect human dignity and societal values
- A — Accountable Governance: Clear chains of responsibility for AI-generated decisions, with identifiable human oversight at every critical juncture
- N — National Sovereignty: Each nation retains the right to regulate AI within its jurisdiction, resisting external imposition of regulatory standards
- A — Accessible and Inclusive: AI technologies must not become instruments of exclusion; deployment should bridge rather than widen socio-economic disparities
- V — Valid and Legitimate: AI outputs must meet standards of verifiability, accuracy, and legal legitimacy before being relied upon for consequential decisions
Moral | Accountable | National Sovereignty | Accessible | Valid
Think: “MANAV” means “human” in Hindi — AI governance must remain human-centric
Glass Box vs. Black Box: Transparency as a Legal Imperative
A particularly significant conceptual contribution was the distinction between “glass box” and “black box” AI systems:
- Black box AI — systems whose internal decision-making logic is opaque; neither the developer nor the end-user can fully explain how a particular output was generated
- Glass box AI — systems that operate with transparent, interpretable processes where the reasoning pathway can be traced, audited, and challenged
This distinction carries profound implications for the administration of justice. When an AI system recommends bail denial, predicts recidivism, or assists in sentencing (as is already occurring in several Western jurisdictions), the black box problem directly undermines the principles of natural justice — particularly the audi alteram partem rule. How can a litigant meaningfully challenge an adverse decision if the reasoning behind it cannot be explained?
The right to explanation — the ability to understand why an AI system reached a particular decision — is emerging as a critical component of Article 21 jurisprudence. In Justice K.S. Puttaswamy v. Union of India (2017), the Supreme Court recognized informational self-determination as part of the right to privacy. This principle logically extends to demand transparency in algorithmic decision-making that affects fundamental rights.
Implications for Judicial Administration
The MANAV framework’s emphasis on accountability and validity has direct relevance for the Indian judiciary:
- AI-assisted case management: Courts deploying AI for case prioritization, scheduling, or preliminary assessment must ensure the system operates as a glass box
- Bail algorithms: Any AI tool used in bail decisions must satisfy the requirements for reasoned orders under Sections 478-480 of the Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023
- E-Courts integration: The e-Courts Mission Mode Project Phase III contemplates AI deployment; the MANAV framework provides ethical guardrails
- Liability determination: When AI-generated recommendations lead to unjust outcomes, the accountability principle demands identifiable human responsibility
For judicial aspirants, the MANAV framework raises essential questions: (1) Can a glass box requirement be read into Article 14's non-arbitrariness doctrine? (2) Should AI tools in courtrooms be subject to judicial review under Art. 226/227? (3) How does the V (Valid and Legitimate) pillar interact with evidentiary standards under the Bharatiya Sakshya Adhiniyam, 2023?
- Framework: MANAV (5 pillars)
- Proposed by: PM Narendra Modi at AI Impact Summit 2026
- Core concept: Glass Box vs Black Box AI
- Legal significance: Right to explanation, algorithmic accountability
- Key case: K.S. Puttaswamy v. UoI (2017) — informational self-determination
Source: UPSC Essentials, The Indian Express — March 2026
Practice Quiz — 10 Judiciary Exam-Style Questions
Test your understanding with these 10 MCQs: