Access Now

The E.U. AI Act’s risky loophole must be closed

Negotiations over the final shape of the E.U. AI Act, touted as “the world’s first comprehensive AI law,” are nearing the finish line. But before E.U. lawmakers cross it, they must urgently address a gaping loophole the tech industry lobbied for – one that would let companies decide for themselves whether their AI systems are “high-risk.” Given that this could render the legislation toothless, Access Now, EDRi, and BEUC, together with 145+ other organizations, are calling on policymakers to close the loophole and prioritize the rights of people affected by AI systems. Read our joint statement here. Read more via Access Now

Reining in AI

Rights groups urge Biden to make AI Bill of Rights binding

AI’s harms aren’t hypothetical. Deeply entrenched algorithmic bias and discrimination are already hurting real people. With U.S. President Biden expected to issue an executive order on AI shortly, we joined 65+ civil and human rights organizations in an open letter calling for the AI Bill of Rights to be made binding federal policy. Access Now’s Willmary Escoto breaks down the coalition’s demands. Read more via Axios

How AI hurts the Global Majority

In this week’s special issue of Express, we’re focusing on how AI, algorithms, and automation impact people living in Global Majority countries. Did we miss any key points on this topic? Would you like to read more themed issues of Express? Email [email protected] to let us know.

AI is for autocrats

AI-enhanced identification: a danger in the Middle East?

When Emirati dissident Khalaf al-Romaithi flew from Turkey to Jordan to find a new school for his son, he ended up going to jail in the UAE instead. A shared database containing his biometric data — an iris scan — enabled Jordanian authorities to identify and deport him. His story should worry everyone, argues Yana Gorokhovskaia of Freedom House. “We are very concerned about the increasing use of biometric technology to enable closer cooperation between repressive governments,” she says. Read more via Deutsche Welle

In West Asia and North Africa, AI innovation trumps human rights

Things go badly when the private sector develops technology without regard for people’s fundamental human rights. SMEX’s Sarah Culper looks at how AI is being developed in WANA countries that have “either no regulations or soft non-binding recommendations,” warning that use of these technologies “almost always allows for greater surveillance of individuals, and therefore, is especially concerning in authoritarian regimes.” Read more via Global Voices

When AI doom-mongering is a smokescreen for authoritarian “cybersecurity”

Government overreach can be even more dangerous than corporate failure. During negotiations over the pending UN Global Cybercrime Treaty, state delegates discussed bringing AI technologies within its scope, even though, as Access Now’s Raman Jit Singh Chima points out, the draft treaty “does not actually help those who are trying to make sure that AI does not result in an explosion in cybercrime.” Meanwhile, delegates risk ignoring the treaty’s very real human rights problems, which could end up harming the very people it’s supposed to protect. Read more via The Record

Apple killed CSAM scanning to avoid government misuse

Last year, following outcry from civil society, Apple reversed course on plans to introduce a machine learning-based scanning tool for identifying child sexual abuse material (CSAM) on its iCloud platform. Now the company is sharing more details about why, acknowledging that building a system to circumvent encryption and conduct automated scanning creates a “slippery slope of unintended consequences,” opening the door to mass surveillance and searches for other types of content. These are concerns Access Now and other organizations had raised with the company, and we’re glad to see that Apple took that feedback on board. Read more via WIRED

Paying the price for AI

The “digital sweatshops” behind the AI boom

While policymakers in Brussels and Washington, D.C., loudly debate whether, how, and when to regulate AI, the harm it is already causing elsewhere is mostly ignored. That includes the exploitation of people whose poorly paid and often grueling labor underpins AI tools. In the Philippines, more than two million people work to annotate and label the data used to train machine learning models, often for below minimum wage. At least 10,000 do this for Scale AI, a U.S. company that works with Meta, Microsoft, and OpenAI. It is one of many companies drawing criticism for its labor practices, as Filipino workers report being paid late or not at all, and having nowhere to seek redress. Read more via Washington Post

The AI startup where workers reap rewards

The global AI industry is worth billions, but workers rarely see their fair share of those profits. In India, a nonprofit called Karya is pioneering a new model to ensure AI data annotators aren’t left with mere scraps. Not only does it funnel its profits back into the communities doing the hard labor, it also gives workers ownership of the content they annotate, so that whenever that content is resold, they earn additional money on top of their wages. Karya is also building out datasets in languages underrepresented in AI training data, such as Marathi, Telugu, Hindi, Bengali, and Malayalam. Read more via TIME

We tested ChatGPT in Bengali, Kurdish, and Tamil. It failed

Tech companies are happy to profit from global demand for AI-driven translation technology, but they don’t put the same effort into ensuring that AI platforms actually serve people in the Global Majority. That can have devastating consequences. Rest of World recently put ChatGPT’s language capabilities to the test, trying out languages including Bengali, Kurdish, Tamil, Swahili, Urdu, Haitian Creole, and Tigrinya, which are collectively spoken by hundreds of millions of people worldwide. The result? A failing grade, with the tool frequently mistranslating, misinterpreting, and even inventing words and phrases. By ignoring the need for accurate, contextualized, and locally sensitive AI outputs, tech companies risk deepening the digital divide, as well as entrenching bias and discrimination. Read more via Rest of World

OK, doomers – let’s get real

Ignore AI hysteria: algorithms already sow disinformation in Africa

While OpenAI’s Sam Altman distracts regulators by shouting about the hypothetical dangers of generative AI, people like Kenyan journalist and data scientist Odanga Madung are painstakingly documenting how platforms like TikTok, Facebook, and Twitter are already being used to undermine democracies in Africa. “The crux of the issue lies not in the content made by AI tools such as ChatGPT but in how people receive, process, and comprehend the information facilitated by the AI systems of tech platforms,” he explains. Read more via The Guardian

What we really need to talk about

Six months after the “AI doomsday” letter was published, it’s becoming clear those who signed it may have had ulterior motives. In case you missed it, our generative AI FAQ explains how the technology works and the actual risks that warrant immediate attention. Read more via Access Now

Security update

Urgent: update your iPhone now

As we were preparing this newsletter, news broke that Citizen Lab had identified a zero-click vulnerability in iPhones that enables attackers to deliver NSO Group’s dangerous Pegasus spyware. We advise you to update your iPhone immediately to iOS 16.6.1, the latest version; for more details, see Apple’s note on the update. As always, if you’re part of a human rights organization or civil society group, or you’re a journalist or work for a media organization, and you need direct emergency digital safety assistance, our Digital Security Helpline is a resource for you. Read more via Citizen Lab

Opportunities and other highlights

We’re hiring a Deputy General Counsel

If you’re a highly skilled and experienced lawyer with a deep commitment to cultural, linguistic, and economic equity, and a desire to help shape Access Now’s global legal priorities, we’d like to talk to you. We’re hiring for a full-time Deputy General Counsel position. Learn more and apply via Access Now

The TIME100 most influential people in AI

Congratulations to the many familiar faces from the digital rights community included on the TIME100 AI list, intended as a “map of the relationships and power centers driving the development of AI.” Among them are Meredith Whittaker, Joy Buolamwini, Sarah Chander, Timnit Gebru, and Margaret Mitchell – all recognized for their work to advance more ethical, transparent, and rights-respecting AI. Bravo! Learn more via TIME