The E.U. AI Act’s risky loophole must be closed
Negotiations around the final shape of the E.U. AI Act, touted as “the world’s first comprehensive AI law,” are nearing the finish line. But as E.U. lawmakers head into the final stretch, they must urgently address a gaping loophole the tech industry lobbied for: one that would let companies decide for themselves whether their services are “high-risk.” Because this could render the legislation toothless, Access Now, EDRi, and BEUC, together with more than 145 other organizations, are calling on policymakers to close the loophole and prioritize the rights of people affected by AI systems. Read our joint statement here. Read more via Access Now
How AI hurts the Global Majority
In this week's special issue of Express, we're focusing on how AI, algorithms, and automation impact people living in Global Majority countries. Did we miss any key points on this topic? Would you like to read more themed issues of Express? Email [email protected] to let us know.
Apple killed CSAM scanning to avoid government misuse
Last year, following outcry from civil society, Apple reversed course on plans to introduce a machine learning-based scanning tool for identifying child sexual abuse material (CSAM) on its iCloud platform. Now the company is sharing more details about why, acknowledging that creating a system for circumventing encryption and conducting automated scanning creates a "slippery slope of unintended consequences," opening the door to mass surveillance and searches for other types of content. These are concerns Access Now and other organizations had raised with the company, and we're glad to see that Apple took that feedback on board. Read more via WIRED
The “digital sweatshops” behind the AI boom
While policymakers in Brussels and Washington, D.C. loudly debate whether, how, and when to regulate AI, the harm it is already causing elsewhere is mostly ignored. That includes the exploitation of people whose poorly paid and often grueling labor underpins AI tools. In the Philippines, more than two million people work to annotate and label the data used to train machine learning models, often for below minimum wage. At least 10,000 do this for Scale AI, a U.S. company that works with Meta, Microsoft, and OpenAI. It is one of many companies drawing criticism for its labor practices, as Filipino workers report being paid late or not at all, and having nowhere to seek redress. Read more via Washington Post
The AI startup where workers reap rewards
The global AI industry is currently worth billions, but workers rarely see their fair share of those profits. In India, a nonprofit called Karya is pioneering a new model to ensure AI data annotators aren't left with mere scraps. Karya not only funnels its profits back into the communities doing the hard labor; it also gives workers ownership of the content they annotate, so that whenever it is resold, they earn additional money on top of their wages. Karya is also working to build out data in previously neglected languages, such as Marathi, Telugu, Hindi, Bengali, and Malayalam. Read more via TIME
We tested ChatGPT in Bengali, Kurdish, and Tamil. It failed
Tech companies are happy to profit from global demand for AI-driven translation technology, but they don’t put the same effort into ensuring that AI platforms actually serve people in the Global Majority. That can have devastating consequences. Rest of World recently put ChatGPT’s language capabilities to the test, trying out languages including Bengali, Kurdish, Tamil, Swahili, Urdu, Haitian Creole, and Tigrinya, which are spoken by hundreds of millions of people worldwide. The result? A failing grade, with the tool frequently mistranslating, misinterpreting, and even inventing words and phrases. By ignoring the need for accurate, contextualized, and locally sensitive AI outputs, tech companies risk deepening the digital divide, as well as entrenching bias and discrimination. Read more via Rest of World
OK, doomers – let’s get real
Ignore AI hysteria: algorithms already sow disinformation in Africa
While OpenAI’s Sam Altman distracts regulators by shouting about the hypothetical dangers of generative AI, people like Kenyan journalist and data scientist Odanga Madung are busy painstakingly documenting the way platforms like TikTok, Facebook, and Twitter are already being used to undermine democracies in Africa. “The crux of the issue lies not in the content made by AI tools such as ChatGPT but in how people receive, process, and comprehend the information facilitated by the AI systems of tech platforms,” he explains. Read more via The Guardian
Urgent: update your iPhone now
As we prepared this newsletter, the news broke that Citizen Lab had identified a zero-click vulnerability in iPhones that enables attackers to deliver NSO Group's dangerous Pegasus spyware. We advise you to update your phone immediately to the latest iOS version, 16.6.1. For more details, see Apple's note on the update. As always, if you work with a human rights organization, civil society group, or media outlet, or you're a journalist, and you need direct emergency digital safety assistance, our Digital Security Helpline is a resource for you. Read more via Citizen Lab
Opportunities and other highlights
We’re hiring a Deputy General Counsel
If you’re a highly skilled and experienced lawyer with a deep commitment to cultural, linguistic, and economic equity, and a desire to help shape Access Now’s global legal priorities, we'd like to talk to you. We’re hiring for a full-time Deputy General Counsel position. Learn more and apply via Access Now