Facing a Changing Industry, AI Activists Rethink Their Strategy

In the spring of 2018, thousands of Google employees pressured the company into dropping a major artificial intelligence contract with the Pentagon. The tech giant even pledged not to use its AI for weapons or certain surveillance systems in the future.
The victory, which came amid a wave of unprecedented employee-led protests, helped inspire a new generation of tech activists in Silicon Valley. But seven years later, the legacy of that moment is more complicated. Google recently revised its AI ethics principles to allow some of the use cases it previously banned, and companies across the industry are releasing powerful new AI tools at breakneck speed.
On Tuesday, the AI Now Institute, a think tank that studies the social implications of artificial intelligence, published a sweeping report on the current AI landscape, detailing the way power is becoming concentrated in a handful of dominant companies that have shaped narratives about the technology to their own advantage. The authors suggest new strategies for how activists, civil society groups, and workers can gain power in a radically changed environment.
The authors point to declarations from tech industry figures who say the dawn of all-powerful superintelligence is right around the corner—a development they believe will usher in a utopian age in which humanity can rapidly find cures for cancer or solve climate change. This idea has “become the argument to end all other arguments, a technological milestone that is both so abstract and absolute that it gains default priority over other means, and indeed, all other ends,” the authors of the report write.
Among its recommendations, AI Now urges advocacy and research groups to connect AI-related issues to broader economic concerns, such as job security and the future of work. While the negative impacts of artificial intelligence once felt hidden or abstract to employees in many fields, formerly stable career paths are now being disrupted across the economy, from software engineering to education.
The authors see an opportunity for workers to resist how AI is being deployed and push back against tech-industry talking points that frame outcomes like widespread job loss as inevitable. That could be especially powerful in a political climate where Republicans have positioned themselves as the party of the working class, though the Trump administration is opposed to most AI regulation.
The authors point to several case studies in the report where workers succeeded in halting the implementation of AI at their companies or made sure guardrails were put in place. One example is National Nurses United, a union that staged protests against the use of AI in health care and conducted its own survey showing the technology can undermine clinical judgment and threaten patient safety. The activism led a number of hospitals to institute new AI oversight mechanisms and scale back the rollout of some automated tools.
“What's unique to this moment is this push to integrate AI everywhere. It’s granting tech companies and the people that run them new kinds of power that go way beyond just deepening their pockets,” says Sarah Myers West, co-executive director of AI Now and one of the authors of the report. “We're talking about this profound social and economic and political reshaping of the fabric of our lives, and that necessitates a different way of accounting for AI harms.”
The report is more pessimistic about the current power of regulators, who the authors note opened a flurry of investigations into AI companies in recent years that have so far resulted in few tangible outcomes; the US, for instance, still lacks a national digital privacy law. Although officials often talk about the need to curb monopoly power and limit personal data collection, “much of this activity failed to materialize into concrete enforcement action and legislative change, or to draw bright lines prohibiting specific anticompetitive business practices,” according to the report.
Amba Kak, co-executive director of AI Now and another coauthor of the report, says that her organization “has been quite focused” on government policy as a way to enact change, but adds that it’s become clear those levers will be unsuccessful unless power is built from below. “We need to make sure that AI is resonating as an issue that is affecting people's material lives, not as this abstract tech thing over there.”
The authors stress that the point is not about portraying different AI products or technologies in a specific light. “We're not interested in discussing whether or not an individual technology like ChatGPT is good,” says Kate Brennan, an associate director at AI Now and another coauthor of the report. “We're asking whether it's good for society that these companies have unaccountable power,” which can be entirely compatible with “believing that certain products are good and interesting and exciting.”
WIRED