洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to silverpill's post

@silverpill @phnt

@silverpill is right that maintainers hold real authority here, and I want to build on that rather than argue against either of you.

The frustration with LLMs is largely legitimate. But “how does this look to outsiders” is a poor criterion for evaluating ethical concerns; by that standard, feminism looks like lunacy to 4chan. The question is whether the concerns are correct, not whether they're legible to the unconvinced.

That said, I don't think making LLMs socially unacceptable is a viable path, and not just because the adoption curve has run too far. The maintainer's authority is real precisely because it's specific: you decide what enters your project. Refusing AI-assisted contributions is a legitimate choice. But declaring LLM use itself impermissible starts to look like “I only accept patches written in Vim, not IDE-generated code”—a demand that grows harder to justify as the tools become ordinary. As maintainer of Fedify, I've taken a middle path: disclose what you used, show you've tested it yourself, and we're fine. See also https://github.com/fedify-dev/fedify/blob/main/AI_POLICY.md.

What worries me more is that the “total rejection vs. total acceptance” framing leaves the actual problem untouched. If we stay inside that binary, OpenAI and the others keep the models, keep the surplus, keep the compute bills externalized onto the climate—with no pressure to change any of it. The ethical problems with LLMs aren't properties of the technology; they're properties of who owns it and under what terms. I've written about this in more depth if it's of interest: Histomat of F/OSS: We should reclaim LLMs, not reject them and a follow-up Acting materialistically in an imperfect world: LLMs as means of production and social relations.

Phantasm

@phnt@fluffytail.org · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post

@hongminhee @silverpill
>But “how does this look to outsiders” is a poor criterion for evaluating ethical concerns; by that standard, feminism looks like lunacy to 4chan.

I used this specific wording for a reason. The free/open source software community is still small compared to the rest of computing, and its stated goal is to grow as large as possible, as a way of eliminating proprietary computing. "How does it look to outsiders" is aimed specifically at people entirely outside this community, the very people the free/open source community is trying to convince that freedom is worth some inconvenience. If you argue with irrational arguments that very few of them care about, just like GNU/People shun new Linux users for not running fully free systems (a thing you are very likely aware of), you scare away the people you want and need. That behavior is counterproductive by nature: it doesn't merely achieve nothing, it is worse; it achieves a regression.

>As maintainer of Fedify, I've taken a middle path: disclose what you used, show you've tested it yourself, and we're fine. See also https://github.com/fedify-dev/fedify/blob/main/AI_POLICY.md.

I think this is one of the best solutions currently possible.

>What worries me more is that the “total rejection vs. total acceptance” framing leaves the actual problem untouched. If we stay inside that binary, OpenAI and the others keep the models, keep the surplus, keep the compute bills externalized onto the climate—with no pressure to change any of it. The ethical problems with LLMs aren't properties of the technology; they're properties of who owns it and under what terms.

100% agree.
Dawa

@fwtl@mastodon.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post

@hongminhee @silverpill @phnt@fluffytail.org damn right