Blog by Sumana Harihareswara, Changeset founder
Generative AI Abstinence and Harm Reduction
Up till now, I've mostly abstained from using generative AI. I'm now reassessing whether I should keep doing that. This post preserves a Fediverse thread I wrote yesterday, with better formatting, section headers, some additional context early on, and links to others' responses at the end.
What got me thinking
Via "Nope to Open Primaries in NYC, Says Charter Revision Commission", a news article published yesterday by THE CITY :
I once again came across Vikram Oberoi's citymeetings.nyc (Katie Honan linked to a chapter of the July 7th Charter Revision Commission meeting discussing the confusion and contention surrounding "jungle primaries").
I'd been leery of citymeetings.nyc because, the first time I came across it, I saw that it was AI-based and assumed it would fundamentally be slop.
But today I read "How citymeetings.nyc uses AI to make it easy to navigate city council meetings".
The front page of citymeetings.nyc currently says:
"citymeetings.nyc turns NYC Council meetings into short, quick-to-skim, easy-to-share segments same-day using AI and human oversight."
and that links to an About page that links to Oberoi's detailed explanation. As slide 19 illustrates, Oberoi takes multiple human "review & fix" steps throughout the process. This helps me trust the output, so I'm much more likely to use his site going forward. His explanation covers, among other things (I'll sketch the overall pattern below):
- A tour of the tools...
- A crash course on how to write an effective prompt.
- How I create video chapters ...
- Tips on how to approach more ambitious projects that rely on LLMs.
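To make slide 19's pattern concrete, here's a rough sketch of the human-in-the-loop shape as I understand it. Every name below is a hypothetical stub; this is not Oberoi's actual code, just the structure of AI steps interleaved with human checkpoints:

```python
# A sketch of the human-in-the-loop pattern: each AI step is followed by
# a human "review & fix" checkpoint before anything gets published.
# All function names and bodies here are hypothetical stubs.

def ai_transcribe(video_url: str) -> str:
    """Stub for an AI speech-to-text step."""
    return f"raw transcript of {video_url}"

def ai_draft_chapters(transcript: str) -> list[str]:
    """Stub for an LLM step that segments and summarizes the transcript."""
    return [f"draft chapter based on: {transcript}"]

def human_review_and_fix(draft):
    """Stub: a person inspects and corrects AI output before it moves on."""
    return draft  # in the real workflow, a human edits here

def build_meeting_chapters(video_url: str) -> list[str]:
    transcript = human_review_and_fix(ai_transcribe(video_url))
    return human_review_and_fix(ai_draft_chapters(transcript))

print(build_meeting_chapters("https://example.com/hearing-video"))
```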
I am tremendously excited about how AI can make government data more accessible and transparent.
Tools like this are valuable public goods. I'd like to see cities fund them in the way they do libraries.
Cities should do things like this instead of releasing chatbots!
Additional context (added for blog post)
New York City's Charter Revision Commission has the responsibility to suggest major changes to how the city runs. The current Commission announced its interim report on July 1st, 2025, and surprised a bunch of people by saying it was considering putting a measure on the November 2025 election ballot to make a major change to NYC primary elections. The Commission said that July 15th was the last day for the public to submit written comment, and that, on July 21st, it would vote to decide which questions to put on the ballot.
On July 7th, they held their first public hearing since the release of the interim report. Over 4.5 hours, 57 people gave spoken testimony, many representing advocacy organizations. The video went up on YouTube on July 8th, and YouTube, as usual, provided its own AI-generated transcript/captions. The Commission published a brief summary of their testimony on (I think) July 14th, but still has not released a full transcript; it often takes many business days, or even weeks, to get an official transcript of these sorts of city hearings.
citymeetings.nyc likely had transcripts and summaries up within 24 hours of the July 7th hearing, per its paid contract with the Commission.
On July 15th, Aditya Mukerjee posted to ask New Yorkers to submit comment before midnight. I did so. I also copied his post from Bluesky to Mastodon. The ensuing conversation included questions and confusion about what specific model of open primary elections the Commission wanted to propose. While researching this, I didn't think to check citymeetings.nyc. I scrubbed through the YouTube auto-transcript to find and skim testimony, somewhat effectively.
Conversation at the July 7th hearing included people discussing that confusion and suggesting that New Yorkers needed more time to work it out, and to clarify public messaging, before the Commission put a question on the ballot. On July 16th, the Commission announced it would wait longer before asking voters to open the primaries.
In summary: if I had used citymeetings.nyc to research this issue, I would have better understood it, written better testimony, and participated more usefully in the online conversation.
My abstinence
"if you want to use LLMs effectively and responsibly you must acknowledge that they will fabricate things."
Oberoi specifically describes the perils of false/hallucinated output and ways he has mitigated that problem. He doesn't discuss systemic bias, energy usage, or other ethical issues with using LLMs, or how he might mitigate those concerns. (I've sent him a note saying I'd be interested in his thoughts on that.)
Nevertheless, I emerge with mixed feelings about my own abstinence.
In 2022 I wrote about how I was thinking about the ethics of using Whisper, a machine-learning speech-to-text model I use to transcribe audio. I did some evaluation of my own, in the absence of ethical guidance from trusted assessors. I continue to use it frequently.
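For concreteness, here's a minimal sketch of that kind of use, assuming the open-source openai-whisper Python package; the filename and model size are illustrative:

```python
# Minimal local transcription sketch using the open-source openai-whisper
# package. "interview.mp3" is a placeholder filename; "base" is one of
# several model sizes, a quality/speed tradeoff.
import whisper

model = whisper.load_model("base")          # downloads weights on first run
result = model.transcribe("interview.mp3")  # runs speech-to-text locally
print(result["text"])                       # the full transcript as one string
```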
I've tried to abstain from using AI/LLM-type tools that I haven't evaluated in this same way. As far as I know, we still don't have any guides along the lines of "these LLMs are LESS unethically trained"; https://github.com/mozilla-ai/lumigator/issues/1338 is where I've suggested that Lumigator provide that.
Understanding the cost-benefit of using, e.g., chatbot-type tools would get easier if I had hands-on experience with them, so I could concretely say: in these domains, with this amount of effort, I gain these new capabilities that let me do new things, or do existing things faster, better, or more delightfully.
But I am unaware of any chatbots that are trained only on ethically sourced data, and which offer a way to mitigate the climate impact of their training and the user's usage.
As Chelsea Troy discusses in "Does AI benefit the world?" (her writing on ML/AI/LLMs has been invaluable to me as I think about this),
Our ethical struggle with generative models derives in part from the fact that we…sort of can’t have them ethically, right now, to be honest.... we did not have the necessary volume of parseable data available until recently—and even then, to get it, companies have to plunder the internet.
And her perspective is: it really is not feasible to get enough people to genuinely consent to sharing their data to train these models sufficiently for usability. When she says that, I trust her far more than I trust the AI company founders and employees and their apologists.
Someone else noted that as the utility, availability, and cost of the unethically trained models get better and better, the incentives to gather and use necessarily smaller ethically-gathered datasets, and to train models on them, go down.
Risks, benefits, and capabilities I want
Troy compares the impacts of three technological changes (cars, the consumer internet, and generative text and image models), discusses the disparate impacts on different populations, and says:
Could we theoretically improve the net benefit of any given technical development today if we make efforts to maximize its positive outcomes and mitigate its negative ones? I believe so, and I believe that’s basically the option available to us.
And citymeetings.nyc really brings that home, for me.
Aaron Swartz mention
Twelve years ago, one reason I did a stint at Recurse Center was a lesson I drew from Aaron Swartz's too-short life.
"he'd said, the revolution will be A/B tested... We activists have a responsibility to use our energy well. I, in particular, believe I need to become a better software engineer....
"Life is short, so be a better activist."
I anticipated wanting to hack on a lot of dashboards, APIs, courseware, wiki templates, poorly formatted datasets, CRMs, and helpful little scripts (yup, that prediction was right), and figured:
the skillset will supercharge everything else I do. I'll be a more effective citizen, coach, and leader if I increase my fluency in code.
And citymeetings.nyc (using generative AI plus human oversight) suggests to me: maybe the same argument applies.
Reflecting on my impulses
I have tried my whole life to avoid self-delusion, to notice what is actually happening. I try to notice my own cognitive dissonance, to notice the impulse to wave away new info that causes me discomfort.
So:
The impulse that initially led me to dismiss citymeetings.nyc, without really poking at it, misled me.
It was not, like, a reasoned assessment that it would likely be slop.
It rhymes with what Danny O'Brien wrote about: a critique-centric tendency.
And I believe the impulse had a big reverence/purity/disgust/sacredness/visceral aversion component, along with a kind of anxious fear.
Beyond my own cerebral assessment of most generative AI stuff, I've developed a reflexive disgust response. Individual instances remind me of the ickiness of the industry, the lies, the exploitation, the misinformation and delusions some chatbot users start to believe and the consequent suffering. I feel a heightened need to guard myself against being tricked.
And there's a social group thing happening, too.
My friends, colleagues, and acquaintances include a lot of people who feel beset by genAI and who consider it a catastrophe -- many of whom are being materially hurt by the effects of current genAI trends -- and who thus attempt to abstain completely and to persuade others to do so as well.
Beyond how persuasive I find their arguments, there's also... they're my friends. I would prefer that they not think badly of me, or shun me.
Harm reduction
People who try to adhere to a particular set of values struggle with calibrating degrees of acceptable compromise. What falls into "well, we all have to live in the world"? What's "you've sold your soul to the devil"? What do we do about the in-betweens?
Often, we try for harm reduction.
I'd love to find more people and organizations making systematic assessments and recommendations about harm-reduction approaches to using genAI; please feel free to reply & link.
Thanks to Alyssa Coghlan for pointing to one such effort.
I also know about Common Voice and Common Pile (and concerns about Common Pile licensing).
A tentative plan
Trying to wrap up this thread:
Because of a mix of intellectual and emotional responses, I've tried up till now to be pretty abstinent regarding genAI, with the exception I've mentioned (Whisper). I have taken the personal position that, whatever the upside for me, the risks and known negative impacts are too great.
And now I want to reassess that, and check whether harm-reducing ways exist for me to try out other genAI tooling for my own needs.
Which would entail (among other things):
But that's a tentative plan and I'm not committing to it yet.
I'm getting some replies to this thread and will consider replying later; right now I need to eat breakfast.
And I'll probably turn this into a blog post as well sometime soon.
Responses so far
Some replies and a blog post in response: