When I worked at CREDO, we made a decision that a lot of progressives hated: We started a superPAC.
SuperPACs were barely two years old. They were the right wing’s weapon of choice: vehicles designed to flood elections with unlimited corporate cash. Our side had spent months denouncing them. And now we were launching one.
I asked my boss why. Her answer was simple: “We don’t get to unilaterally disarm.”
She was right. Our opponents had a powerful new tool. We could stand on principle and lose, or we could pick up the same tool and fight back. We picked it up. And it worked.
I think about that conversation a lot lately. Progressives are making the same mistake with AI, except this time the stakes are bigger and the window is closing faster.
The gap is forming now
I run a consulting firm that works with progressive nonprofits on technology and operations. I talk to organizers, digital directors, and ops people every week. And a pattern has become hard to ignore: there’s a growing divide in who’s engaging with AI and who isn’t.
A 2025 survey by the American Association of Political Consultants found that a third of political consultants now use AI daily.1 But the adoption isn’t evenly distributed. Higher Ground Labs, which tracks technology in progressive campaigns, flagged a widening gap between left and right.2 And a post-election survey by the Center for Campaign Innovation found that nearly 4 in 10 progressive campaign professionals hadn’t used AI for content creation at all.3
While some progressive orgs are still debating whether AI is ethical enough to touch, conservative operations are using it to draft fundraising appeals, analyze voter data, generate rapid response content, and automate the grunt work that eats up staff time. They’re not waiting for a consensus position on AI ethics. They’re just using it.
I get the hesitation. The concerns about AI are real: bias in hiring algorithms, surveillance deployed against communities of color, the environmental cost of training models. I write about these things every week in my newsletter. These aren’t abstract problems.
But the gap between organizations that engage with AI and those that don’t is widening. And it gets harder to close the longer we wait.
Your team deserves a head start
There’s something else going on here that nobody’s talking about: what happens to the people inside these organizations?
Think about someone who spends five years at an org that treats AI as off-limits. They never learn to use it for research, for drafting, for data analysis. They don’t build any of the skills that employers increasingly expect. Then they enter the job market — maybe they got laid off in one of the funding crunches our sector goes through every couple of years, maybe they’re just ready for a change — and they discover that the job postings are asking for AI fluency they never had the chance to develop.
This isn’t hypothetical. Women make up roughly two-thirds of the nonprofit workforce.4 The Dallas Fed found that 6.1 million U.S. workers in administrative and clerical roles are among the most vulnerable to AI-driven job displacement, and 86% of them are women.5 Brookings found the same pattern — the workers most exposed to AI disruption are the least prepared for it.6
The people on your team are going to need these skills no matter where they work next. You can be the org that gave them a head start, or the one that left them behind.
There’s also a clock on this. Training your team on AI basics today is a conversation — a lunch-and-learn, a few hours of experimentation, some new workflows. Training them in two years, when AI is embedded in every tool they touch? That’s a crisis.
Use it so you can shape it
Progressives have the best reasons to engage with AI. We care about bias in algorithms and worker protections. We’re the ones pushing for transparency and accountability. So why are we the ones refusing to touch the thing?
You can’t write good policy for something you’ve never used. You can’t evaluate an AI vendor’s claims about fairness if you don’t understand how the models work. And you can’t push for meaningful regulation if your entire experience with AI is reading articles about it.
Look at California’s “No Robo Bosses Act,” sponsored by the state’s largest labor federation and introduced by a legislator who co-chaired the Congressional AI Caucus.7 The bill doesn’t just say “AI bad.” It makes specific, practical distinctions: employers can’t rely solely on automated systems to fire or discipline workers, and they can’t use AI to predict future worker behavior from personal data. That kind of precision comes from people who’ve actually used these systems.8
Every time a progressive organizer learns to use AI, that’s one more person who can spot the problems and push for the fixes. Every time an org opts out entirely, that’s knowledge we’ll never develop.
Where to start
I’m not asking you to love AI. The skepticism is healthy.
I’m asking you to learn it anyway.
Pick one task that eats your time every week and try it with an AI tool. Use it to pull together background research for a lobby visit. Have it draft the first pass of that grant report that’s been sitting in your inbox. Ask it to summarize 200 public comments before your next coalition call. Run your fundraising email through it and ask what’s working and what isn’t.
You’re not going to break anything. The first time will feel clunky. But you’ll start to see where AI fits into your work — and more importantly, where it doesn’t. That judgment is the skill. And it’s the skill that nobody builds by sitting it out.
The right has picked up this tool. We don’t get to unilaterally disarm.
Footnotes
1. American Association of Political Consultants, “Artificial Intelligence in Political Consulting: Key Findings from a Survey,” July 2025. 200 AAPC members surveyed; 59% use AI weekly, 34% daily. Also covered by Campaigns & Elections.
2. Higher Ground Labs, “AI Landscape Report,” 2024. Describes progressive AI adoption as “highly fragmented” and flags a widening adoption gap with conservative campaigns.
3. Center for Campaign Innovation, “2024 Post-Election Political Professional Survey.” Found 39% of progressive campaign respondents had not used AI for content creation.
4. U.S. Bureau of Labor Statistics, “For-Profit, Nonprofit, and Government Sector Jobs in 2022,” 2023. Also cited by Independent Sector, “CEO Views: Elevating Women in the Nonprofit Workforce,” March 2025. Women constitute approximately 67% of the nonprofit workforce.
5. Federal Reserve Bank of Dallas, “Measuring U.S. Workers’ Capacity to Adapt to AI-Driven Job Displacement,” February 2026. Identified 6.1 million workers in admin/clerical roles lacking adaptive capacity; 86% are women.
6. Brookings Institution, “Measuring U.S. Workers’ Capacity to Adapt to AI-Driven Job Displacement,” 2026.
7. Sen. Jerry McNerney (D-Pleasanton) introduced SB 947, the “No Robo Bosses Act,” in February 2026. Sponsored by the California Labor Federation, AFL-CIO. McNerney previously co-chaired the Congressional Artificial Intelligence Caucus.
8. For legal analysis of SB 947’s provisions, see Crowell & Moring, “California SB 947: ‘No Robo Bosses Act.’”