NITI Aayog’s Principles for Responsible AI, 2021
NITI Aayog, the Government of India’s policy think tank, outlines India’s foundational principles for responsible Artificial Intelligence (AI), building upon the 2018 National Strategy for Artificial Intelligence. It highlights AI’s economic and social potential while acknowledging the risks and ethical challenges that accompany rapid deployment, such as bias, privacy breaches, and gaps in accountability. The document examines system and […]
NITI Aayog’s National Strategy for Artificial Intelligence, 2018
Building upon the National AI Strategy, these principles provide a comprehensive framework for ethical and responsible AI development and deployment in India. The document establishes seven core principles: safety and reliability, equality, inclusivity and non-discrimination, privacy and security, transparency, accountability, and protection and reinforcement of positive human values. Under privacy and security, it mandates that […]
NIST AI Risk Management Framework (2024)
The National Institute of Standards and Technology’s AI Risk Management Framework 1.0, released in January 2023 with updates in 2024, provides voluntary guidance for managing AI risks throughout the AI lifecycle. The framework is organized around four core functions: Govern (establishing AI risk management culture and processes), Map (understanding context and identifying risks), Measure (analyzing […]
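The four core functions above form a cycle a project can track against. The sketch below is a minimal, hypothetical illustration, not NIST tooling: the function names follow AI RMF 1.0 (the fourth function, Manage, completes the set named in the framework), while the `RiskRegister` class, its fields, and the one-line descriptions are illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: the AI RMF's four core functions as an ordered
# checklist. Function names are from NIST AI RMF 1.0; the one-line
# summaries and the RiskRegister class are illustrative, not NIST text.
RMF_FUNCTIONS = {
    "Govern": "establish an AI risk management culture and processes",
    "Map": "understand context and identify risks",
    "Measure": "analyze, assess, and track identified risks",
    "Manage": "prioritize and act on risks",
}

@dataclass
class RiskRegister:
    """Illustrative record of which RMF functions a project has addressed."""
    completed: set = field(default_factory=set)

    def complete(self, function: str) -> None:
        if function not in RMF_FUNCTIONS:
            raise ValueError(f"unknown RMF function: {function}")
        self.completed.add(function)

    def outstanding(self) -> list:
        # Preserve the framework's Govern -> Map -> Measure -> Manage order.
        return [f for f in RMF_FUNCTIONS if f not in self.completed]

register = RiskRegister()
register.complete("Govern")
register.complete("Map")
print(register.outstanding())  # → ['Measure', 'Manage']
```

Because the framework is voluntary, a register like this is only a bookkeeping aid; the substance lies in the activities each function describes.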
EU AI Act (Artificial Intelligence Act – Proposed 2021, Adopted 2024)
The EU AI Act is the world’s first comprehensive AI regulation. Proposed in 2021 and adopted in 2024, it entered into force in August 2024, with most provisions applying from 2026. It establishes a risk-based approach categorizing AI systems into four levels: unacceptable risk (banned), high risk, limited risk, and minimal risk. High-risk AI systems, including those used in biometric identification, critical infrastructure, and employment, […]
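The four-tier taxonomy above can be pictured as a simple classification scheme. The sketch below is a toy illustration, not legal guidance: the tier names and the high-risk domains come from the summary, while the `classify()` helper, the default-to-minimal rule, and the tier descriptions are assumptions made purely to keep the example small.

```python
from enum import Enum

# Hypothetical sketch of the AI Act's four-tier risk-based approach.
# Tier names follow the Act; the short descriptions are paraphrases.
class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before deployment"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Domains the summary names as high-risk. Everything else defaults to
# MINIMAL here only to keep the sketch small; the real Act assesses
# each system individually.
HIGH_RISK_DOMAINS = {
    "biometric identification",
    "critical infrastructure",
    "employment",
}

def classify(domain: str) -> RiskTier:
    """Toy classifier: map an application domain to a risk tier."""
    if domain in HIGH_RISK_DOMAINS:
        return RiskTier.HIGH
    return RiskTier.MINIMAL

print(classify("employment").name)  # → HIGH
```

The point of the tiered design is that regulatory burden scales with risk: the same rulebook bans some systems outright while leaving minimal-risk systems essentially untouched.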