Preparing for AI
Your weekly CogX newsletter on AI Safety and Ethics
The week's developments on AI, safety and ethics, explained | 29.03.24
If you’re enjoying this briefing, sign up here — and share it with a friend. Reading time: ~5 minutes
This week, the UN unanimously passed its first AI resolution, backed by over 120 countries including the US, aiming to safeguard human rights and privacy against the growing risks of AI. Simultaneously, the Pentagon is advancing plans to deploy thousands of AI military drones and to pursue fully automated AI surveillance along the US-Mexico border. I wonder if this technology will adhere to the UN's 'secure by design' principles?
Meanwhile, a new study suggests that ChatGPT (or, I guess, the new king Claude 3) may not be as great as you think, arguing that the so-called emergent abilities of LLMs may simply be an artefact of faulty assessment metrics.
We cover these stories, plus the open-source definition dilemma, AI flirting coaches, and the AI witch hunt caused by Princess Catherine’s viral video.
- Charlie and the Research and Intelligence Team
P.S. We’ve just launched The CogX Transatlantic Accelerator: a joint campaign with the UK Government to connect the most innovative UK startups with US markets. If you are, or know, a UK startup, they can apply for over $20k worth of support to attend, exhibit and network at CogX Festival in LA on 7th May here
Share your expertise! Want to be a guest contributor in our next issue? Drop us a line at editors@cogx.live.
Ethics and Governance
🛸 The Pentagon plans to build thousands of low-cost, AI-driven drones, inspired by Ukraine's use of drone warfare, to prepare for potential conflicts. The initiative will prioritise bulk production and replaceability over cost and sophistication.
🌎 The UN unanimously adopted the first global AI resolution, urging nations to protect human rights and privacy while monitoring AI risks, co-sponsored by over 120 countries. This nonbinding resolution is part of an international push for "secure by design" AI systems.
🗳️ AI deepfake experiment previews coming election chaos: Kari Lake, a Senate candidate in Arizona, was deepfaked in an experiment demonstrating the chaos AI could bring to elections. The realistic deepfake garnered significant attention, as well as a cease-and-desist letter from Lake's campaign.
📊 Report warns that nearly 8 million UK jobs could be lost to AI, especially impacting women, younger workers, and the lower-paid in a "jobs apocalypse". However, the report does emphasise that with proactive government and industry action, the crisis can be averted.
AI Dilemmas
💗 Ethical debates emerge as AI transforms dating through services like simulated first dates and flirt coaches. These tools aim to enhance users' dating experiences — improving profiles and generating opening lines — yet raise questions about authenticity and representation.
🛂 The US is advancing AI-driven border surveillance to automate detection at the US-Mexico border. This push for autonomous surveillance aims to reduce manpower and improve efficiency, but raises ethical concerns over bias and the infringement of migrants' rights.
🕵️‍♂️ The victims of digital exploitation: WIRED reporters found a site that lets users "nudify" photos, raising ethical and legal concerns for its victims, largely young girls, whose images are used without their consent. The site highlights the growing misuse of AI, leading to calls for better protections for victims.
🚀Enjoying our content? We’re also on LinkedIn — Follow us to stay in the loop with the week's most impactful AI news and research!
Insights & Research
📈 LLM emergent abilities are a mirage, suggests a study arguing that the perceived sudden jumps in AI performance are due to the assessment metrics rather than leaps in capability. Researchers found that when using different metrics that award partial credit, improvements in models appear gradual and predictable.
🤖 Can AI replace human research participants? Proposals to use AI instead of humans to generate research data in scientific studies are gaining traction as a way to save costs and increase diversity. Critics, however, contend that AI cannot grasp the nuance of human experiences, potentially compromising the integrity of social science research.
🔐 The tech industry is divided on defining "open-source AI", specifically whether simply releasing AI models, without their training data, qualifies as open source. The OSI is attempting to standardise the definition, integrating opinions from hacktivists to tech giants.
🗳️ As chatbots become increasingly political, how do you compare? A study reveals the scope of LLM political bias, with a general trend towards left-libertarian. Interested in knowing how similar your views are to each common LLM? This quiz lets you compare.
In case you missed it
The AI witch-hunt is back on following Princess Catherine’s viral cancer video. What do you think: was the video real, or AI?
✍️ Enjoying this newsletter? Subscribe to our free weekly briefings on Preparing for AI, Cinema & AI, The Race to Net Zero, AI & DeepTech and The Future of Work