2024 in Review: Significant Milestones in AI Governance Worldwide
Dec 22 / Sariga Premanand
The year 2024 was a landmark year for AI governance, with numerous countries and international organizations taking significant steps to address the challenges and opportunities presented by artificial intelligence. Let's explore some of the key milestones in AI governance around the world in 2024.
January
- Saudi Arabia: The Saudi Data and AI Authority released Generative AI Guidelines, establishing a framework for the ethical use of AI technologies.
- World Health Organization (WHO): Issued guidance on AI governance for large multi-modal models, emphasizing the importance of safety and ethical considerations in AI applications.
February
- ASEAN: Published the Guide on AI Governance and Ethics, providing member countries with a framework to navigate AI's ethical and governance challenges.
- China: Released Basic Safety Requirements for Generative AI Services, setting standards to regulate AI services and ensure user safety.
- Japan: Established the AI Safety Institute to oversee and ensure the safe development and deployment of AI technologies.
March
- European Union: The European Parliament approved the EU AI Act, a comprehensive regulatory framework for AI technologies.
- India: Issued a Generative AI Advisory for firms, offering guidelines to promote responsible AI development and deployment.
- United Nations: The General Assembly adopted a resolution on AI, underscoring the global commitment to addressing AI's challenges and opportunities.
April
- United States: Nine federal agencies published a Joint Statement on AI enforcement, highlighting the importance of compliance with existing laws and regulations in AI applications.
May
- European Union: The Council approved the EU AI Act, and the AI Office was established to oversee its implementation.
- OECD: Updated its AI Principles to reflect the evolving landscape of AI technologies and their societal impacts.
- AI Seoul Summit: World leaders signed the Seoul Declaration for Safe, Innovative and Inclusive AI, committing to the safe and inclusive development of AI.
June
- European Data Protection Supervisor (EDPS): Published guidelines on generative AI, focusing on data protection and privacy concerns.
- France: The Commission Nationale de l'Informatique et des Libertés (CNIL) published recommendations on AI, emphasizing transparency and user rights.
- Russia: Passed Bill 512628-8 on AI liability, outlining legal responsibilities related to AI systems.
July
- African Union: Published the Continental AI Strategy, outlining a unified approach to AI development across African nations.
- NATO: Released an updated AI strategy, addressing the integration of AI in defense while ensuring ethical considerations.
- United Kingdom: Introduced AI legislation plans in the King's Speech, signaling a commitment to regulate AI technologies.
- United States: The National Institute of Standards and Technology (NIST) published the Generative AI Profile and Secure Software Development Practices for Generative AI, providing guidelines for secure AI development.
August
- Australia: Passed the Criminal Code Amendment (Deepfake Sexual Material) Bill 2024, criminalizing the creation and distribution of non-consensual deepfake content.
- European Union: The EU AI Act entered into force, marking the beginning of its implementation phase.
- Latin America and the Caribbean: Seventeen countries signed the Cartagena Declaration, emphasizing regional cooperation in AI governance.
September
- Australia: Published a Voluntary AI Safety Standard, encouraging organizations to adopt best practices in AI development.
- China: Published the AI Safety Governance Framework, outlining measures to ensure the safe development of AI.
- Council of Europe: The Framework Convention on AI and human rights, democracy, and the rule of law opened for signature, representing the first international legally binding treaty on AI.
- European Union: Over 100 companies signed the EU AI Pact pledge, committing to responsible AI practices.
- Saudi Arabia: The Saudi Data and AI Authority published Deepfake Guidelines to address the ethical and legal implications of deepfake technologies.
- South Korea: Passed a law criminalizing sexually explicit deepfakes, strengthening protections against non-consensual AI-generated content.
- United States: California's Governor approved a suite of new state AI laws, enhancing the state's regulatory framework for AI.
October
- United States: The White House published a National Security Memorandum on AI, outlining strategies to maintain AI leadership and address national security concerns.
- Data Protection Authorities: Seventeen authorities signed a Joint Statement on data scraping, privacy, and AI, highlighting the need for data protection in AI applications.
November
- Canada: Established the Canadian AI Safety Institute to oversee AI safety research and policy development.
- European Union: Published a draft of the General Purpose AI Code of Practice, providing guidelines for the development and use of general-purpose AI systems.
- South Korea: Established the AI Safety Institute to monitor and ensure the safe deployment of AI technologies.
- International Collaboration: Launched the International Network of AI Safety Institutes, fostering global cooperation in AI safety research and governance.
December
- Brazil: The Senate approved the 'AI Bill' (2338/2023), setting a legal framework for AI governance in the country.
- European Data Protection Board (EDPB): Published an opinion on AI models and personal data, providing guidance on data protection compliance for AI developers.
- European Union: The updated EU Product Liability Directive entered into force, extending liability rules to AI systems.
- United Kingdom: Launched a consultation on AI and copyright, seeking input on how AI-generated content intersects with intellectual property laws.
Emerging Trends
Throughout 2024, we saw several emerging trends in AI governance:
- Focus on Safety: Many countries established dedicated AI safety institutes, reflecting a growing concern for the potential risks associated with advanced AI systems.
- International Collaboration: The formation of the International Network of AI Safety Institutes demonstrates a shift towards global cooperation in addressing AI challenges.
- Comprehensive Regulations: Countries and regions are moving beyond guidelines to establish more binding regulatory frameworks, as exemplified by the EU AI Act.
- Ethical Considerations: Many initiatives, such as Saudi Arabia's Generative AI Guidelines, emphasize the ethical use of AI technologies.
- Sector-Specific Guidance: Organizations like the World Health Organization issued guidance on AI governance for specific sectors, recognizing the unique challenges in different fields.
As we move forward, it's clear that 2024 has set the stage for a more coordinated and comprehensive approach to AI governance globally. These developments reflect a growing recognition of AI's transformative potential and the need for responsible development and deployment of AI technologies.