洪 民憙 (Hong Minhee) :nonbinary:'s avatar

洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · 1006 following · 1472 followers

An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Writes free software. They/them.

An intersectional feminist and socialist living in Seoul. Spouse of 金剛兔 (@tokolovesme). Maintainer of @fedify, @hollo, and @botkit. Makes free software.


洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

This brings back memories. Before Python had async/await, before asyncio became standard, I was happily writing concurrent code with gevent. I actually preferred it.

The reason was simple: no function color problem. With async/await, you split the world into two kinds of functions—ones that block by default and ones that don't. You have to mark the latter with async and remember to await them. With gevent, everything just blocks by default, and you spawn when you need concurrency. It's the same mental model as multithreading, just lighter. Project Loom in Java does something similar, though the technical details differ.

I sometimes wonder what Python would look like if it had embraced gevent-style coroutines in CPython instead of adding async/await. Or if Stackless Python had been accepted upstream. Maybe async programming would be more approachable today.

The explicit await keyword gives you visibility into where context switches can happen, sure. But in practice, I/O points are obvious even without the keyword—you're reading from a socket, querying a database, making an HTTP request. The explicitness doesn't really prevent race conditions or timing bugs. Meanwhile, function colors infect everything. One async library forces your entire call stack to be async. You end up maintaining both sync and async versions of the same code, or the ecosystem just splits in two.

With gevent, there's no such problem. You just call functions. Spawn them if you want concurrency, call them normally if you don't. Go's goroutines and Project Loom are popular for good reasons—they make concurrency accessible without the cognitive overhead.
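
To make the contrast concrete, here's a minimal sketch (the URLs are placeholders): under gevent, an ordinary blocking function becomes concurrent once the stdlib is monkey-patched, with no separate async color.

from gevent import monkey
monkey.patch_all()  # patch sockets etc. before other imports

import gevent
import urllib.request

def fetch(url: str) -> bytes:
    # A plain blocking call; under gevent it cooperatively yields
    # to other greenlets while waiting on the network.
    with urllib.request.urlopen(url) as resp:
        return resp.read()

# The same function works sequentially...
body = fetch("https://example.com/")

# ...or concurrently, just by spawning it. No async, no await.
jobs = [gevent.spawn(fetch, u)
        for u in ("https://example.com/a", "https://example.com/b")]
gevent.joinall(jobs)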

Python's choice is history now, and there's no going back. But looking at how things turned out, I can't help but think the more practical path was right there, and we walked past it.

David Lord :python:'s avatar
David Lord :python:

@davidism@mas.to

Did you know you can get concurrency similar to asyncio/ASGI in Flask by using gevent? It's been possible for as long as Flask has existed! Turns out we never documented it, so how would anyone have known? Fixed that: flask.palletsprojects.com/en/s
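
For the curious, a minimal sketch of what this looks like (the app here is illustrative, not taken from the Flask docs):

from gevent import monkey
monkey.patch_all()  # must run before Flask and friends are imported

from flask import Flask
from gevent.pywsgi import WSGIServer

app = Flask(__name__)

@app.route("/")
def index() -> str:
    # Blocking I/O in a view (database queries, HTTP calls) now
    # yields to other greenlets instead of occupying a worker.
    return "hello"

if __name__ == "__main__":
    WSGIServer(("127.0.0.1", 8000), app).serve_forever()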

ploum's avatar
ploum

@ploum@mamot.fr

Idea: "The unbillionaires list", to promote contributors to the common.

A collaborative website that lists people who created something useful to millions but purposely chose to put it in the commons and didn't earn money directly from it (or not as much as expected).

Besides those listed in ploum.net/2026-01-22-why-no-eu

I would add Henri Dunant (Red Cross; he died in great poverty) and Didier Pittet (who invented the hydroalcoholic gel we now use every day).

Nora is Fed Up's avatar
Nora is Fed Up

@noracodes@tenforward.social

Here ya go, a full account. notes.nora.codes/atproto-again/

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Kaito's post

@kai In Korea, we traditionally don't provide any slippers. However, some households do so nowadays.

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

After months of struggling with the “zombie post” issue on Hackers' Pub—where deleted posts wouldn't disappear from remote servers—I had a sudden hypothesis today. As I dug into it, I realized it's a structural issue with Fedify's MessageQueue system: Create(Note) and Delete(Note) activities can be delivered out of order, causing remote instances to receive Delete(Note) before Create(Note).
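
The race is easy to reproduce in miniature. This is an illustrative sketch, not Fedify's actual MessageQueue API: two parallel workers drain a shared outbox, and the activity enqueued first can arrive last.

import queue
import threading
import time

outbox: queue.Queue = queue.Queue()
outbox.put(("Create(Note)", 0.2))  # slow delivery (slow peer, retry)
outbox.put(("Delete(Note)", 0.0))  # fast delivery

received: list[str] = []  # what the remote server sees

def worker() -> None:
    while True:
        try:
            activity, latency = outbox.get_nowait()
        except queue.Empty:
            return
        time.sleep(latency)  # simulated network latency
        received.append(activity)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(received)  # likely ['Delete(Note)', 'Create(Note)']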

The fix will likely require API changes, so this will probably need to wait for 2.0.0.

Liaizon Wakest's avatar
Liaizon Wakest

@wakest@hackers.pub

Jan 26th, online

Jan 31st, Brussels

February 1st, Berlin

February 1st, Brussels

February 3rd, Berlin

February 4th + 5th, London

February 22nd, Vancouver

February 24th, Montreal

March 2nd, online

March 19th + 20th, Amsterdam

July 8th to the 12th, Germany

Lee Dogeon's avatar
Lee Dogeon

@moreal@hackers.pub

I tried building something that looks up ActivityPub objects with Fedify and renders a thread as a single page, but thinking about it, the problem really only exists on the side with small character limits, like Mastodon, so it would have been better to just use the Mastodon API. And of course such an implementation already existed (naturally). The post I was trying to hand off to a certain service to read in translation won't go through, maybe because of a timeout; I'll finish looking into it tomorrow…

https://ap-thread-reader.fly.dev/

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Ian Wagner's post

@ianthetechie Yeah, I need to be exposed to English-speaking environments more!

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

In cultures like Korea and Japan, taking off your shoes at home is a long-standing tradition. I'm curious about how this practice varies across different regions and households in the fediverse.

How does your household handle shoes indoors?

Option / Voters
Everyone takes shoes off (strict): 239 (64%)
Family takes shoes off; guests keep them on: 96 (26%)
Everyone wears shoes/outdoor footwear: 37 (10%)
洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Steve Bate's post

@steve @liaizon Yeah, I'm going to FOSDEM 2026 as well!

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

I want to become fluent in English. Not reading and writing, but speaking.

Emelia 👸🏻's avatar
Emelia 👸🏻

@thisismissem@hachyderm.io

RE: social.wake.st/@liaizon/115952

I'll hopefully be here:

wakest ⁂'s avatar
wakest ⁂

@liaizon@social.wake.st

On February 3rd (very soon!) I am hosting another [BERLIN FEDERATED NETWORK EXPLORATION CIRCLE] at @offline. It's a chance to meet and talk with people who are interested in the fediverse & networking & exploration & circ---you get the idea.

We have the pleasure of having @hongminhee, who will give a presentation about @fedify, "an opinionated framework for TypeScript that handles the protocol plumbing".

It is an open free event and everyone is welcome!

BERLIN FEDERATED NETWORK EXPLORATION CIRCLE
BEFENEC? BEFENEEXCI?
we have 洪 民憙 (Hong Minhee) all the way here from 
Korea with a presentation about Fedify, a fediverse
library they have been building that is now powering
the federation of things like Ghost and Hackers' Pub

come join us offline
at offline
Lichtenrader Str. 49
Berlin
洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post

After thinking it through, I've designed a satisfying solution and broken it down into 4 issues:

  • Annotations system (#83): Low-level primitive that allows passing runtime context to parsers via parse() options

  • SourceContext interface (#85): High-level system for composing multiple data sources (env, config, etc.) with clear priority ordering via runWith()

  • @optique/config (#84): Configuration file support with Standard Schema validation (Zod, Valibot, etc.)

  • @optique/env (#86): Environment variables support with automatic type conversion

The key insight: use a two-pass parsing approach where the first pass extracts the config file path, then the second pass runs with config data injected as annotations. All sources can be composed together with runWith([envContext, configContext]) for automatic priority handling: CLI > environment variables > configuration file > default values.
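
Optique itself is a TypeScript library, so this isn't its actual API, but the general two-pass pattern looks something like this Python sketch (names like MYAPP_PORT are made up for illustration):

import argparse
import json
import os

def parse_cli(argv: list[str]) -> dict:
    # Pass 1: a throwaway parser that only extracts the config path.
    pre = argparse.ArgumentParser(add_help=False)
    pre.add_argument("--config")
    pre_args, _ = pre.parse_known_args(argv)

    # Load config values to inject into the second pass (the
    # "annotations" in the design above).
    file_values: dict = {}
    if pre_args.config:
        with open(pre_args.config) as f:
            file_values = json.load(f)

    # Pass 2: the full parse, with config data now available.
    parser = argparse.ArgumentParser()
    parser.add_argument("--config")
    parser.add_argument("--port", type=int)
    cli = vars(parser.parse_args(argv))

    def resolve(name: str, env_var: str, default):
        # Priority: CLI > environment variable > config file > default.
        if cli.get(name) is not None:
            return cli[name]
        if env_var in os.environ:
            return int(os.environ[env_var])
        return file_values.get(name, default)

    return {"port": resolve("port", "MYAPP_PORT", 8080)}

print(parse_cli(["--port", "9000"]))  # CLI wins: {'port': 9000}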

internetsdairy's avatar
internetsdairy

@internetsdairy@mastodon.art

My wife picked up a zine that is advertising the fediverse.

A lofi zine ad in green reading: "Fediverse Punk Month. Hey, punk! Fuck corporate social media. Spend January on the Fediverse. Alternative decentralised, open, not-for-profit DIY social media. allpunkspleaseleave.meta.com" There is a sketch of a calendar page for January and a scared-looking blonde punk kid running away from a large sheet of bubble wrap or something.
洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

Would it be a good idea to add a feature for handling configuration files to Optique? 🤔

Hollo :hollo:'s avatar
Hollo :hollo:

@hollo@hollo.social

Hollo 0.7.0: Advanced search, faster notifications, and improved client compatibility

It's been a while since our last release, and we're excited to finally share Hollo 0.7.0 with you. This release brings a lot of improvements that we've been working on over the past months—from powerful new search capabilities to significant performance gains that should make your daily Hollo experience noticeably snappier.

Let's dive into what's new.

Highlights

Search gets a major upgrade

One of the most requested features has been better search, and we're happy to deliver. Hollo now supports Mastodon-compatible search operators, so you can finally filter your searches the way you've always wanted:

  • has:media/has:poll — Find posts with attachments or polls
  • is:reply/is:sensitive — Filter by post type
  • language:xx — Search in a specific language
  • from:username — Find posts from a specific person
  • mentions:username — Find posts mentioning someone
  • before:YYYY-MM-DD/after:YYYY-MM-DD — Search within a date range
  • Combine them with - for negation, OR for alternatives, and parentheses for grouping

For example, (from:alice OR from:bob) has:poll -is:reply will find polls from Alice or Bob that aren't replies.

We've also made search much faster. URL and handle searches that used to take 8–10 seconds now complete in about 1.4 seconds—an 85% improvement.

Notifications are faster than ever

We completely rebuilt how notifications work under the hood. Instead of computing notifications on every request, Hollo now stores them as they happen. The result? About 24% faster notification loading (down from 2.5s to 1.9s).

On top of that, we've implemented Mastodon's v2 grouped notifications API, which groups similar notifications together server-side. This means less work for your client app and a cleaner notification experience.

Everything loads faster with compression

All API responses are now compressed, reducing their size by 70–92%. Some real numbers: notification responses dropped from 767KB to 58KB, and home timeline responses went from 91KB to 14KB. You'll notice faster load times, especially on slower connections.

Quote notifications

When someone quotes your post, you'll now get a notification about it. And if the original author edits a post you've quoted, you'll be notified too. These are the new quote and quoted_update notification types from Mastodon 4.5.0.

Background import processing

Importing your data (follows, lists, muted/blocked accounts, bookmarks) used to block the entire request until it finished. Now imports run in the background, and you can watch the progress in real time. Much better for large imports. Thanks to Juyoung Jung for implementing this in #295.

Other improvements

  • Upgraded Fedify to 1.10.0.
  • Instance API responses now include proper thumbnails, actual stats, and correct values for max_featured_tags and max_pinned_statuses. Thanks to Juyoung Jung for this improvement in #296.
  • The notifications API now includes a prev link in pagination headers, which was tracked in #312.
  • Replaced the deprecated fluent-ffmpeg package with direct ffmpeg calls. If video thumbnail generation fails, you'll get a default image instead of an error. Thanks to Peter Jeschke for this fix in #333.

Bug fixes

  • Emelia Smith fixed an issue where POST /api/v1/statuses and PUT /api/v1/statuses/:id were rejecting FormData requests in #171.
  • Fixed log files writing multiple JSON objects on a single line, as reported in #174.
  • Lee ByeongJun fixed POST /api/v1/statuses rejecting null values in optional fields in #179.
  • Juyoung Jung fixed OAuth token endpoint issues with clients that send credentials in both the header and body in #296.
  • Fixed OAuth token endpoint failing to parse requests from clients that don't send a Content-Type header.
  • Peter Jeschke fixed notification endpoints returning 500 errors for unknown notification types in #334.
  • Fixed /api/v2/search not respecting the limit parameter, as reported in #210.

Upgrading

Docker

Pull the latest image and restart your container:

docker pull ghcr.io/fedify-dev/hollo:0.7.0
docker compose up -d

Railway

Go to your Railway dashboard, select your Hollo service, and click Redeploy from the deployments menu.

Manual installation

Pull the latest code and reinstall dependencies:

git pull origin stable
pnpm install
pnpm run prod

Thank you to our contributors

This release wouldn't have been possible without the contributions from our community. A big thank you to Emelia Smith (@thisismissem), Juyoung Jung (@quadr), Lee ByeongJun (@joonnot), and Peter Jeschke (@peter@jeschke.dev) for their pull requests and bug reports. We really appreciate your help in making Hollo better!

Fedify: ActivityPub server framework's avatar
Fedify: ActivityPub server framework

@fedify@hollo.social

We've published an AI usage policy for the Fedify project, inspired by Ghostty's approach.

TL;DR: AI tools are welcome, but contributions must disclose AI usage, be tied to accepted issues, and be human-verified. Maintainers are exempt.

https://github.com/fedify-dev/fedify/blob/main/AI_POLICY.md

kosui's avatar
kosui

@kosui@blog.kosui.me · Reply to kosui's post

iori and Fedify

That iori is able to support ActivityPub is, without a doubt, thanks to Fedify.

I've tried to implement the ActivityPub spec from scratch many times before, but I always gave up after running into the same problems:

  • The data modeling surface is vast
    • The Vocabulary is wide-ranging
    • There are many object types and actor types
  • The network communication spec is complex
    • Implementing HTTP Signatures
    • Resolving JSON-LD contexts

What I really want to offer is a knowledge-management service, not an ActivityPub implementation. Fedify abstracts away this complexity and lets developers focus on their business logic.

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

Ghostty has a really well-balanced AI usage policy. It doesn't ban AI tools outright, but it sets clear boundaries to prevent the common problems we're seeing in open source contributions these days.

What stands out is that it's not about being anti-AI. The policy explicitly says the maintainers use AI themselves. The rules are there because too many people treat AI as a magic button that lets them contribute without actually understanding or testing what they're submitting. The requirement that AI-generated PRs must be for accepted issues only, fully tested by humans, and properly disclosed feels like basic respect for maintainers' time.

I'm thinking of adopting something similar for my projects, even though they're not at Ghostty's scale yet. Better to set expectations early.

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Julian Fietkau's post

@julian @reiver Yes, true. Fedify can represent multiple keys for an actor, and indeed Hollo and Hackers' Pub do so!

Julian Fietkau's avatar
Julian Fietkau

@julian@fietkau.social · Reply to @reiver ⊼ (Charles) :batman:'s post

@reiver From personal experience, at the very least anything based on @fedify can represent multiple keys for an actor.

FEP-521a has a list of implementations: codeberg.org/fediverse/fep/src

On changing keys, I used to think this was impossible, but then I saw Claire mention that Mastodon will simply accept a changed key as long as the valid updated actor can be fetched from its canonical URI. So I guess that might work straightforwardly?

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

I shouldn't oversleep tomorrow, but I don't want to go to sleep yet…

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

Another thought just struck me today, though, and comes from the perspective of my current role as a maintainer of heavily-used open source software projects: while an agents file may be a hint that makes us curmudgeons roll our eyes and step away in disgust, the dark forest of vibe coders exists, and they're probably opening PRs on your projects. Some people are probably vibe coding without even knowing it, because LLM-powered autocomplete is enabled in their IDE by default or something. In that reality, an AGENTS.md might also be the best protection you have against agents and IDEs making dumb mistakes that are, often, very hard to notice during a code review. If you maintain projects that welcome third-party contributions, you deserve to at least know that you've given the agents some railings to lean on.

You might not trust vibe coders, but if you can gently guide the vibes, maybe it's worth the cringe or two you'll get from seasoned engineers.

AGENTS.md as a dark signal, Josh Mock

Lobsters

@lobsters@mastodon.social

AGENTS.md as a dark signal lobste.rs/s/x0qrlm
joshmock.com/post/2026-agents-

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Gergely Nagy 🐁's post

@algernon @iocaine Thank you for taking the time to engage with my piece and for sharing your concrete experience with aggressive crawling. The scale you describe—3+ million daily requests from ClaudeBot alone—makes the problem tangible in a way abstract discussion doesn't.

Where we agree: AI companies don't behave ethically. I don't assume they do, and I certainly don't expect them to voluntarily follow rules out of goodwill. The environmental costs you mention are real and serious concerns that I share. And your point about needing training data alongside weights for true reproducibility is well-taken—I should have been more explicit about that.

On whether they've “scraped everything”

I overstated this point. When I said they've already scraped what they need, I was making a narrower claim than I stated: that the major corporations have already accumulated sufficient training corpora that individual developers withdrawing their code won't meaningfully degrade those models. Your traffic numbers actually support this—if they're still crawling that aggressively, it means they have the resources and infrastructure to get what they want regardless of individual resistance.

But you raise an important nuance I hadn't fully considered: the value of fresh human-generated content in an internet increasingly filled with synthetic output. That's a real dynamic worth taking seriously.

On licensing strategy

I hear your skepticism about licensing, and the Anthropic case you cite is instructive. But I think we may be drawing different conclusions from it. Yes, the copyright claim was dismissed while the illegal sourcing claim succeeded—but this tells me that legal framing matters. The problem isn't that law is irrelevant; it's that current licenses don't adequately address this use case.

I'm not suggesting a new license because I believe companies will voluntarily comply. I'm suggesting it because it changes the legal terrain. Right now, they can argue—as you note—that training doesn't create derivative works and thus doesn't trigger copyleft obligations. A training-specific copyleft wouldn't eliminate violations, but it would make them explicit rather than ambiguous. It would create clearer grounds for legal action and community pressure.

You might say this is naïve optimism about law, but I'd point to GPL's history. It also faced the critique that corporations would simply ignore it. They didn't always comply voluntarily, but the license created the framework for both legal action and social norms that, over time, did shape behavior. Imperfectly, yes, but meaningfully.

The strategic question I'm still wrestling with

Here's where I'm genuinely uncertain: even if we grant that licensing won't stop corporate AI companies (and I largely agree it won't, at least not immediately), what's the theory of victory for the withdrawal strategy?

My concern—and I raise this not as a gotcha but as a genuine question—is that OpenAI and Anthropic already have their datasets. They have the resources to continue acquiring what they need. Individual developers blocking crawlers may slow them marginally, but it won't stop them. What it will do, I fear, is starve open source AI development of high-quality training data.

The companies you're fighting have billions in funding, massive datasets, and legal teams. Open source projects like Llama or Mistral, or the broader ecosystem of researchers trying to build non-corporate alternatives, don't. If the F/OSS community treats AI training as inherently unethical and withdraws its code from that use, aren't we effectively conceding the field to exactly the corporations we oppose?

This isn't about “accepting reality” in the sense of surrender. It's about asking: what strategy actually weakens corporate AI monopolies versus what strategy accidentally strengthens them? I worry that withdrawal achieves the latter.

On environmental costs and publicization

Freeing model weights alone doesn't solve environmental costs, I agree. But I'd argue that publicization of models does address this, though perhaps I didn't make the connection clear enough.

Right now we have competitive redundancy: every major company training similar models independently, duplicating compute costs. If models were required to be open and collaborative development was the norm, we'd see less wasteful duplication. This is one reason why treating LLMs as public infrastructure rather than private property matters—not just for access, but for efficiency.

The environmental argument actually cuts against corporate monopolization, not for it.

A final thought

I'm not advocating negotiation with AI companies in the sense of compromise or appeasement. I'm advocating for a different field of battle. Rather than fighting to keep them from training (which I don't believe we can win), I'm suggesting we fight over the terms: demanding that what's built from our commons remains part of the commons.

You invoke the analogy of not negotiating with fascists. I'd push back gently on that framing—not because these corporations aren't doing real harm, but because the historical anti-fascist struggle wasn't won through withdrawal. It was won through building alternative power bases, through organization, through creating the structures that could challenge and eventually supplant fascist power.

That's what I'm trying to articulate: not surrender to a “new reality,” but the construction of a different one—one where the productive forces of AI are brought under collective rather than private control.

I may be wrong about the best path to get there. But I think we share the destination.

洪 民憙 (Hong Minhee) :nonbinary:'s avatar
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

Been thinking a lot about @algernon's recent post on FLOSS and LLM training. The frustration with AI companies is spot on, but I wonder if there's a different strategic path. Instead of withdrawal, what if this is our GPL moment for AI—a chance to evolve copyleft to cover training? Tried to work through the idea here: Histomat of F/OSS: We should reclaim LLMs, not reject them.
