Meta resumes AI training in the UK using public user data from Facebook and Instagram, while regulators worldwide push for stronger AI governance to address privacy and safety risks.
Meta Resumes AI Training in the UK After Regulatory Pause
Meta has announced that it will resume using public data from adult users in the UK to train its AI models, following a temporary halt prompted by regulatory concerns. The tech giant will use publicly shared posts, comments, photos, and captions from Facebook and Instagram to improve its generative AI capabilities. UK users will be notified of these practices via in-app notifications and will be able to object to their data being used.
This decision follows a similar regulatory pushback in the EU, where the Irish Data Protection Commission (DPC) ordered Meta to pause its AI rollout over privacy concerns. Meta asserts that users have consented to their data being used and says that those who have already objected will not be contacted again.
Irish Data Protection Commission Tightens Oversight of AI Models
Ireland’s Data Protection Commission has stepped up its oversight of tech companies developing AI models, recently opening an investigation into Google’s Pathways Language Model 2 (PaLM 2). The inquiry seeks to determine whether Google complied with EU data protection law when developing the model, which offers advanced multilingual and reasoning capabilities. The move follows the DPC’s similar scrutiny of social media platform X, which agreed to stop using EU user data to train its AI chatbot, Grok.
UN Chief Warns of AI’s Threat to Democratic Systems
United Nations Secretary-General António Guterres has expressed concern over the unchecked rise of artificial intelligence, warning that it could undermine democratic systems, fuel disinformation, and exacerbate gender inequality. In a speech marking the International Day of Democracy, Guterres called for global guidelines to regulate AI and emphasized the need for international cooperation. While he acknowledged AI’s potential to promote public participation and equality, Guterres stressed that a lack of regulation could lead to harmful outcomes, particularly for women in public life, who already face online violence and gender-based discrimination.
AI Scientists Call for Global Contingency Plan on Loss of Human Control
A group of leading AI scientists has called for a global oversight framework to address the risk of losing human control over AI systems. In an open letter, more than 30 experts from the US, China, Britain, and other countries warned that the malicious use of AI, or a loss of control over it, could have catastrophic consequences for humanity. The group proposed emergency preparedness agreements, a safety assurance framework, and global AI safety research to guard against these risks. The scientists also noted that international consensus on AI threats is becoming harder to reach, particularly amid rising tensions between the US and China.
Global Efforts to Regulate AI Intensify
As concerns grow over the risks posed by artificial intelligence, international efforts to regulate the technology are gaining momentum. In early September, the US, EU, and UK signed the Council of Europe’s Framework Convention on AI, the world’s first legally binding AI treaty, which emphasizes human rights and accountability in AI development. While regulators push for stricter oversight, however, tech companies warn that excessive regulation could stifle innovation. Meta, for example, continues to expand its AI efforts globally, aiming to bring its models to more countries and languages later this year and to better reflect the diversity of its users.