The adoption of Resolution 79/325 by the United Nations General Assembly on August 26, 2025, stands as a defining moment in the evolution of international law. The resolution establishes two critical pillars for the future of technology governance: the Independent International Scientific Panel on AI and the Global Dialogue on AI Governance. Emerging from the 2024 Global Digital Compact and coinciding with the first anniversary of the Council of Europe’s Framework Convention on Artificial Intelligence, the mandate requires the rigorous integration of human rights into the development and deployment of algorithmic systems. It explicitly prohibits technologies that facilitate mass surveillance or discriminatory practices, seeking to reconcile national technological sovereignty with universal legal protections.

The roots of this resolution can be traced to growing global anxiety regarding the unchecked proliferation of AI. High-profile incidents in 2025, including biased predictive policing algorithms in Europe and the overreach of facial recognition systems in various Asian jurisdictions, underscored the need for a coordinated response. The United Nations Human Rights Council issued a report in June 2025 insisting that all AI systems must align with international legal standards.
This international consensus addresses a significant disparity in the technological landscape. While wealthy nations dominate AI patents, with over seventy per cent currently held by entities in the United States and China, the Global South frequently faces the risks of data colonialism and job displacement. The Council of Europe’s Convention, which reached 15 ratifications by late 2025, provided a timely legal reinforcement by promoting a risk-based approach to cross-border enforcement.
The Independent International Scientific Panel on AI is modelled after the Intergovernmental Panel on Climate Change (IPCC). It consists of approximately 30 multidisciplinary experts selected to ensure geographic and gender balance. The mandate of the panel is to conduct periodic assessments of systemic risks, including algorithmic bias, the impact of autonomous weapons, and the role of deepfakes in undermining democratic elections. These evidence-based reports are delivered every two years and are designed to influence United Nations capacity-building initiatives and funding allocations.
Complementing this scientific body is the Global Dialogue on AI Governance, which convenes annually to bring together governments, industry leaders, academia, and civil society. This forum prioritises the capacity gaps in low-income States and promotes open-source tools for auditing. Crucially, States commit to national strategies that incorporate due diligence as defined by the 2011 United Nations Guiding Principles on Business and Human Rights. The European Union AI Act, which implemented prohibitions on high-risk uses such as real-time biometric identification in public spaces in early 2025, now serves as a primary benchmark for these global standards.
Central to this new framework is the concept of “rights-by-design.” This requires developers to embed protections for privacy and non-discrimination into the initial stages of software creation. Prohibited applications now include emotion recognition for employment screening and automated social scoring systems. Furthermore, high-risk systems must undergo mandatory impact assessments that disclose training data sources and error rates.
Vulnerable populations receive heightened scrutiny under this regime. This includes indigenous groups affected by land-use inaccuracies in automated mapping, as well as children exposed to addictive platform architectures. The resolution also outlines robust remedy mechanisms. States are required to establish independent oversight bodies to handle complaints, with pathways to regional courts such as the European Court of Human Rights. The 2025 ruling in Munich Re v. OpenAI, which found that opaque data training violated international fair use equivalents, has already set a precedent for extraterritorial corporate liability.
A primary challenge of Resolution 79/325 is balancing international standards with national autonomy. States retain the authority to deploy AI tailored to their specific domestic contexts, such as precision agriculture in Africa or health care diagnostics in India. However, sovereignty is now understood to include an obligation of cooperation. By participating in data commons and joint risk pools, nations can prevent “regulatory arbitrage,” where lax jurisdictions are used to export harmful technologies to more regulated markets.
For developing nations, this resolution unlocks significant technical assistance. A pledge of one hundred billion dollars over five years has been established to support AI literacy, the creation of ethical datasets, and the development of necessary infrastructure. India, with its robust history of digital public goods such as Aadhaar and the Unified Payments Interface (UPI), is positioned to lead this transition. It can export its models of digital sovereignty while simultaneously drawing on the expertise of the International Scientific Panel to refine its own domestic regulations.
India’s role in this new global order is pivotal. The government has allocated ₹10,000 crore to the IndiaAI Mission, signalling a commitment to becoming a global hub for innovation. However, this ambition must be balanced against the requirements of international law. Domestic events, such as the February 2026 AI Impact Summit in India, have faced scrutiny from human rights organisations over the use of facial recognition technologies.
To attract ethical investment and maintain its position as a bridge between the Global North and South, India must align its domestic legal framework with the transparency requirements of Resolution 79/325. The rich data environment of a population of 1.4 billion people offers immense economic value, estimated to contribute trillions of dollars to global GDP by 2030, but only if the underlying technology is perceived as trustworthy and rights-compliant.
Resolution 79/325 represents a move away from fragmented, voluntary guidelines toward a structured multilateralism. It anticipates the creation of legally binding instruments by 2030, with interim benchmarks to ensure consistent progress. While critics may argue that the pace of international diplomacy is slow compared to the speed of technological advancement, this measured approach is intended to safeguard against hasty bans that could stifle beneficial innovation.
Ultimately, the resolution frames AI as an ally of human flourishing. By prioritising human rights as non-negotiable infrastructure, the global community is working to ensure that technology serves the shared destiny of humanity rather than undermining it. In this algorithmic age, the legitimacy of a state’s technological advancement will increasingly be measured by its commitment to the universal protections defined in this landmark UN resolution.
This article is authored by Ananya Raj Kakoti, scholar, international relations, Jawaharlal Nehru University, New Delhi.
