洪 民憙 (Hong Minhee) 
@hongminhee@hollo.social · 1005 following · 1469 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
| Website | GitHub | Blog | Hackers' Pub |
|---|---|---|---|

@hongminhee@hollo.social · Reply to Ian Wagner's post
@ianthetechie Yeah, I need to be exposed to English-speaking environments more!

@hongminhee@hollo.social
In cultures like Korea and Japan, taking off your shoes at home is a long-standing tradition. I'm curious about how this practice varies across different regions and households in the fediverse.
How does your household handle shoes indoors?
| Option | Voters |
|---|---|
| Everyone takes shoes off (strict). | 69 (61%) |
| Family takes shoes off; guests keep them on. | 32 (28%) |
| Everyone wears shoes/outdoor footwear. | 13 (11%) |

@hongminhee@hollo.social · Reply to Steve Bate's post

@hongminhee@hollo.social
I want to become fluent in English. Not reading and writing, but speaking.
@thisismissem@hachyderm.io
RE: https://social.wake.st/@liaizon/115952183460649167
I'll hopefully be here:
@liaizon@social.wake.st
On February 3rd (very soon!) I am hosting another [BERLIN FEDERATED NETWORK EXPLORATION CIRCLE] at @offline. It's a chance to meet and talk with people who are interested in the #fediverse & networking & exploration & circ---you get the idea.
We have the pleasure of having @hongminhee who will give a presentation about @fedify "an opinionated #ActivityPub framework for TypeScript that handles the protocol plumbing"
It is an open free event and everyone is welcome!

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
After deep thinking, I've designed a satisfying solution and broken it down into 4 issues:
- Annotations system (#83): low-level primitive that allows passing runtime context to parsers via parse() options
- SourceContext interface (#85): high-level system for composing multiple data sources (env, config, etc.) with clear priority ordering via runWith()
- @optique/config (#84): configuration file support with Standard Schema validation (Zod, Valibot, etc.)
- @optique/env (#86): environment variables support with automatic type conversion
The key insight: use a two-pass parsing approach where the first pass extracts the config file path, then the second pass runs with config data injected as annotations. All sources can be composed together with runWith([envContext, configContext]) for automatic priority handling: CLI > environment variables > configuration file > default values.
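That precedence order can be sketched as a plain merge over key-value sources. This is an illustrative sketch only, not Optique's actual API: the `resolve` function and the option names here are hypothetical.

```typescript
// Hypothetical sketch of the CLI > env > config file > defaults precedence.
// Each source maps option names to string values; undefined means "not set".
type Source = Record<string, string | undefined>;

// Sources are passed from highest to lowest priority after the defaults.
function resolve(defaults: Source, ...byPriority: Source[]): Source {
  const result: Source = { ...defaults };
  // Apply from lowest to highest priority so higher-priority values overwrite.
  for (const source of [...byPriority].reverse()) {
    for (const [key, value] of Object.entries(source)) {
      if (value !== undefined) result[key] = value;
    }
  }
  return result;
}

const settings = resolve(
  { port: "3000", host: "localhost" },   // default values
  { port: "8080" },                      // CLI arguments (highest priority)
  { host: "0.0.0.0" },                   // environment variables
  { port: "9090", host: "example.com" }, // configuration file
);
// settings.port is "8080" (CLI wins over the config file)
// settings.host is "0.0.0.0" (env wins over the config file)
```

The two-pass design in the post exists because the config file's own path can come from the CLI, so the file can only be read (and injected as an annotation) after a first parsing pass.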
@internetsdairy@mastodon.art
My wife picked up a zine that is advertising the fediverse.

@hongminhee@hollo.social
Would it be a good idea to add a feature for handling configuration files to Optique? 🤔

@hollo@hollo.social
It's been a while since our last release, and we're excited to finally share Hollo 0.7.0 with you. This release brings a lot of improvements that we've been working on over the past months—from powerful new search capabilities to significant performance gains that should make your daily Hollo experience noticeably snappier.
Let's dive into what's new.
One of the most requested features has been better search, and we're happy to deliver. Hollo now supports Mastodon-compatible search operators, so you can finally filter your searches the way you've always wanted:
- has:media / has:poll — find posts with attachments or polls
- is:reply / is:sensitive — filter by post type
- language:xx — search in a specific language
- from:username — find posts from a specific person
- mentions:username — find posts mentioning someone
- before:YYYY-MM-DD / after:YYYY-MM-DD — search within a date range
- `-` for negation, OR for alternatives, and parentheses for grouping

For example, (from:alice OR from:bob) has:poll -is:reply will find polls from Alice or Bob that aren't replies.
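To illustrate how operator syntax like this can be separated from free-text terms, here is a simplified sketch. It is not Hollo's actual implementation: `parseQuery` and its types are hypothetical, and OR/parenthesis grouping is omitted.

```typescript
// Simplified, illustrative query tokenizer: splits a search string into
// known operator filters (e.g. from:alice, -is:reply) and plain terms.
interface ParsedQuery {
  filters: { op: string; value: string; negated: boolean }[];
  terms: string[];
}

const KNOWN_OPS = new Set([
  "has", "is", "language", "from", "mentions", "before", "after",
]);

function parseQuery(query: string): ParsedQuery {
  const filters: ParsedQuery["filters"] = [];
  const terms: string[] = [];
  for (const token of query.split(/\s+/).filter(Boolean)) {
    const negated = token.startsWith("-");
    const body = negated ? token.slice(1) : token;
    const colon = body.indexOf(":");
    const op = colon > 0 ? body.slice(0, colon) : "";
    if (KNOWN_OPS.has(op)) {
      // Recognized operator: record it with its value and negation flag.
      filters.push({ op, value: body.slice(colon + 1), negated });
    } else {
      // Anything else is treated as an ordinary search term.
      terms.push(token);
    }
  }
  return { filters, terms };
}

const parsed = parseQuery("from:alice has:poll -is:reply hello");
// filters: from=alice, has=poll, is=reply (negated); terms: ["hello"]
```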
We've also made search much faster. URL and handle searches that used to take 8–10 seconds now complete in about 1.4 seconds—an 85% improvement.
We completely rebuilt how notifications work under the hood. Instead of computing notifications on every request, Hollo now stores them as they happen. The result? About 24% faster notification loading (down from 2.5s to 1.9s).
On top of that, we've implemented Mastodon's v2 grouped notifications API, which groups similar notifications together server-side. This means less work for your client app and a cleaner notification experience.
All API responses are now compressed, reducing their size by 70–92%. Some real numbers: notification responses dropped from 767KB to 58KB, and home timeline responses went from 91KB to 14KB. You'll notice faster load times, especially on slower connections.
When someone quotes your post, you'll now get a notification about it. And if the original author edits a post you've quoted, you'll be notified too. These are the new quote and quoted_update notification types from Mastodon 4.5.0.
Importing your data (follows, lists, muted/blocked accounts, bookmarks) used to block the entire request until it finished. Now imports run in the background, and you can watch the progress in real-time. Much better for large imports. Thanks to Juyoung Jung for implementing this in #295.
- max_featured_tags and max_pinned_statuses. Thanks to Juyoung Jung for this improvement in #296.
- prev link in pagination headers, which was tracked in #312.
- POST /api/v1/statuses and PUT /api/v1/statuses/:id were rejecting FormData requests in #171.
- POST /api/v1/statuses rejecting null values in optional fields in #179.
- Content-Type header.
- /api/v2/search not respecting the limit parameter, as reported in #210.

Pull the latest image and restart your container:
docker pull ghcr.io/fedify-dev/hollo:0.7.0
docker compose up -d

On Railway, go to your Railway dashboard, select your Hollo service, and click Redeploy from the deployments menu.
Pull the latest code and reinstall dependencies:
git pull origin stable
pnpm install
pnpm run prod

This release wouldn't have been possible without the contributions from our community. A big thank you to Emelia Smith (@thisismissem), Juyoung Jung (@quadr), Lee ByeongJun (@joonnot), and Peter Jeschke (@peter@jeschke.dev) for their pull requests and bug reports. We really appreciate your help in making Hollo better!
@fedify@hollo.social
We've published an AI usage policy for the Fedify project, inspired by Ghostty's approach.
TL;DR: AI tools are welcome, but contributions must disclose AI usage, be tied to accepted issues, and be human-verified. Maintainers are exempt.
@kosui@blog.kosui.me · Reply to kosui's post
iori and Fedify
That iori is able to support ActivityPub is without a doubt thanks to Fedify.
I've tried implementing the ActivityPub specification from scratch many times before, but I always ran into the following problems and gave up.
What I really want to provide is a knowledge-management service, not an ActivityPub implementation. Fedify abstracts away these complexities and lets developers focus on their business logic.

@hongminhee@hollo.social
Ghostty has a really well-balanced AI usage policy. It doesn't ban AI tools outright, but it sets clear boundaries to prevent the common problems we're seeing in open source contributions these days.
What stands out is that it's not about being anti-AI. The policy explicitly says the maintainers use AI themselves. The rules are there because too many people treat AI as a magic button that lets them contribute without actually understanding or testing what they're submitting. The requirement that AI-generated PRs must be for accepted issues only, fully tested by humans, and properly disclosed feels like basic respect for maintainers' time.
I'm thinking of adopting something similar for my projects, even though they're not at Ghostty's scale yet. Better to set expectations early.

@hongminhee@hollo.social · Reply to Julian Fietkau's post
@julian@fietkau.social · Reply to @reiver ⊼ (Charles) :batman:'s post
@reiver From personal experience, at the very least anything based on @fedify can represent multiple keys for an actor.
FEP-521a has a list of implementations: https://codeberg.org/fediverse/fep/src/branch/main/fep/521a/fep-521a.md#implementations
On changing keys, I used to think this was impossible, but then I saw Claire mention that Mastodon will simply accept a changed key as long as the valid updated actor can be fetched from its canonical URI. So I guess that might work straightforwardly?
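For reference, FEP-521a represents an actor's multiple keys via the assertionMethod property. The sketch below shows the general shape only, with placeholder URLs and key values not taken from any specific implementation:

```typescript
// Illustrative FEP-521a actor shape: multiple signing keys listed under
// assertionMethod as Multikey objects. All identifiers and key values
// here are placeholders.
const actor = {
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://w3id.org/security/multikey/v1",
  ],
  id: "https://example.com/users/alice",
  type: "Person",
  assertionMethod: [
    {
      id: "https://example.com/users/alice#key-1",
      type: "Multikey",
      controller: "https://example.com/users/alice",
      publicKeyMultibase: "z6MkPlaceholderKeyOne",
    },
    {
      id: "https://example.com/users/alice#key-2",
      type: "Multikey",
      controller: "https://example.com/users/alice",
      publicKeyMultibase: "z6MkPlaceholderKeyTwo",
    },
  ],
};
```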

@hongminhee@hollo.social
Ghostty has a really well-balanced AI usage policy. It doesn't ban AI tools outright, but it sets clear boundaries to prevent the common problems we're seeing in open source contributions these days.
What stands out is that it's not about being anti-AI. The policy explicitly says the maintainers use AI themselves. The rules are there because too many people treat AI as a magic button that lets them contribute without actually understanding or testing what they're submitting. The requirement that AI-generated PRs must be for accepted issues only, fully tested by humans, and properly disclosed feels like basic respect for maintainers' time.
I'm thinking of adopting something similar for my projects, even though they're not at Ghostty's scale yet. Better to set expectations early.

@hongminhee@hollo.social
I shouldn't sleep in tomorrow, but I don't want to go to sleep yet…

@hongminhee@hollo.social
Another thought just struck me today, though, and comes from the perspective of my current role as a maintainer of heavily-used open source software projects: while an agents file may be a hint that makes us curmudgeons roll our eyes and step away in disgust, the dark forest of vibe coders exists, and they're probably opening PRs on your projects. Some people are probably vibe coding without even knowing it, because LLM-powered autocomplete is enabled in their IDE by default or something. In that reality, an AGENTS.md might also be the best protection you have against agents and IDEs making dumb mistakes that are, often, very hard to notice during a code review. If you maintain projects that welcome third-party contributions, you deserve to at least know that you've given the agents some railings to lean on.
You might not trust vibe coders, but if you can gently guide the vibes, maybe it's worth the cringe or two you'll get from seasoned engineers.
—AGENTS.md as a dark signal, Josh Mock
@lobsters@mastodon.social
AGENTS.md as a dark signal https://lobste.rs/s/x0qrlm #vibecoding
https://joshmock.com/post/2026-agents-md-as-a-dark-signal/

@hongminhee@hollo.social · Reply to Gergely Nagy 🐁's post
@algernon @iocaine Thank you for taking the time to engage with my piece and for sharing your concrete experience with aggressive crawling. The scale you describe—3+ million daily requests from ClaudeBot alone—makes the problem tangible in a way abstract discussion doesn't.
Where we agree: AI companies don't behave ethically. I don't assume they do, and I certainly don't expect them to voluntarily follow rules out of goodwill. The environmental costs you mention are real and serious concerns that I share. And your point about needing training data alongside weights for true reproducibility is well-taken—I should have been more explicit about that.
I overstated this point. When I said they've already scraped what they need, I was making a narrower claim than I stated: that the major corporations have already accumulated sufficient training corpora that individual developers withdrawing their code won't meaningfully degrade those models. Your traffic numbers actually support this—if they're still crawling that aggressively, it means they have the resources and infrastructure to get what they want regardless of individual resistance.
But you raise an important nuance I hadn't fully considered: the value of fresh human-generated content in an internet increasingly filled with synthetic output. That's a real dynamic worth taking seriously.
I hear your skepticism about licensing, and the Anthropic case you cite is instructive. But I think we may be drawing different conclusions from it. Yes, the copyright claim was dismissed while the illegal sourcing claim succeeded—but this tells me that legal framing matters. The problem isn't that law is irrelevant; it's that current licenses don't adequately address this use case.
I'm not suggesting a new license because I believe companies will voluntarily comply. I'm suggesting it because it changes the legal terrain. Right now, they can argue—as you note—that training doesn't create derivative works and thus doesn't trigger copyleft obligations. A training-specific copyleft wouldn't eliminate violations, but it would make them explicit rather than ambiguous. It would create clearer grounds for legal action and community pressure.
You might say this is naïve optimism about law, but I'd point to GPL's history. It also faced the critique that corporations would simply ignore it. They didn't always comply voluntarily, but the license created the framework for both legal action and social norms that, over time, did shape behavior. Imperfectly, yes, but meaningfully.
Here's where I'm genuinely uncertain: even if we grant that licensing won't stop corporate AI companies (and I largely agree it won't, at least not immediately), what's the theory of victory for the withdrawal strategy?
My concern—and I raise this not as a gotcha but as a genuine question—is that OpenAI and Anthropic already have their datasets. They have the resources to continue acquiring what they need. Individual developers blocking crawlers may slow them marginally, but it won't stop them. What it will do, I fear, is starve open source AI development of high-quality training data.
The companies you're fighting have billions in funding, massive datasets, and legal teams. Open source projects like Llama or Mistral, or the broader ecosystem of researchers trying to build non-corporate alternatives, don't. If the F/OSS community treats AI training as inherently unethical and withdraws its code from that use, aren't we effectively conceding the field to exactly the corporations we oppose?
This isn't about “accepting reality” in the sense of surrender. It's about asking: what strategy actually weakens corporate AI monopolies versus what strategy accidentally strengthens them? I worry that withdrawal achieves the latter.
Freeing model weights alone doesn't solve environmental costs, I agree. But I'd argue that publicization of models does address this, though perhaps I didn't make the connection clear enough.
Right now we have competitive redundancy: every major company training similar models independently, duplicating compute costs. If models were required to be open and collaborative development was the norm, we'd see less wasteful duplication. This is one reason why treating LLMs as public infrastructure rather than private property matters—not just for access, but for efficiency.
The environmental argument actually cuts against corporate monopolization, not for it.
I'm not advocating negotiation with AI companies in the sense of compromise or appeasement. I'm advocating for a different field of battle. Rather than fighting to keep them from training (which I don't believe we can win), I'm suggesting we fight over the terms: demanding that what's built from our commons remains part of the commons.
You invoke the analogy of not negotiating with fascists. I'd push back gently on that framing—not because these corporations aren't doing real harm, but because the historical anti-fascist struggle wasn't won through withdrawal. It was won through building alternative power bases, through organization, through creating the structures that could challenge and eventually supplant fascist power.
That's what I'm trying to articulate: not surrender to a “new reality,” but the construction of a different one—one where the productive forces of AI are brought under collective rather than private control.
I may be wrong about the best path to get there. But I think we share the destination.

@hongminhee@hollo.social
Been thinking a lot about @algernon's recent post on FLOSS and LLM training. The frustration with AI companies is spot on, but I wonder if there's a different strategic path. Instead of withdrawal, what if this is our GPL moment for AI—a chance to evolve copyleft to cover training? Tried to work through the idea here: Histomat of F/OSS: We should reclaim LLMs, not reject them.
@quillmatiq@mastodon.social · Reply to Anuj Ahooja's post
"But in another sense, this shows the issue with bridging between these two networks, and how this is not just a matter of networking architecture, but of how network architecture leads to different mental models that are not always compatible with each other."
But, also, this. It's fair criticism, and something I think about basically every day.
@quillmatiq@mastodon.social
RE: https://mastodon.social/@fediversereport/115945468921908243
"When fediverse users say they don’t want to be bridged to Bluesky, they’re applying an ActivityPub mental model to ATProto infrastructure. In one sense this is a bit of a category error: the bridge connects to networking infrastructure, not the application. This way you're not just refusing to federate with Bluesky-the-app but with the entire ecosystem, including apps with different values, such as Blacksky or Leaflet."
This. It's like blocking the entire Fedi because Threads is in it.
@fediversereport@mastodon.social
New from me: FR#150 - On ICE, Verification, and Presence As Harm
Bluesky has verified the account of ICE, which was a step too far for many in the fediverse, who now want to disconnect from the bridge between the networks
The very presence of ICE on Bluesky is a form of harm, and Bluesky is not well equipped to deal with this new challenge. Making things worse, its verification system is set up to delegate responsibility, yet Bluesky made no use of it
https://connectedplaces.online/reports/fr150-on-ice-verification-and-presence-as-harm/
@shriramk@mastodon.social
In a world where most code in modern programming languages will be machine-generated, what is the role of an upper-level programming languages course?
Interesting and non-obvious answers please.

@hongminhee@hollo.social · Reply to tatmius(タミアス)'s post
@tatmius Ideally, I don't think there's any need to convert to romaji at all; writing identifiers in Japanese as-is should be fine! At least as long as every team member working on that codebase can read Japanese! Modern programming languages support Unicode identifiers without any problems, after all.
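As a small illustration of that last point, identifiers written in Japanese are valid TypeScript/JavaScript. The names and the tax example here are purely illustrative:

```typescript
// TypeScript accepts Unicode identifiers, so domain terms can stay in
// Japanese when everyone on the team reads them. Illustrative example:
const 消費税率 = 0.1; // consumption-tax rate

// Returns the tax-included price from the base price, rounded to a whole yen.
function 税込価格(本体価格: number): number {
  return Math.round(本体価格 * (1 + 消費税率));
}

console.log(税込価格(1000)); // 1100
```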
@matthew_d_green@ioc.exchange
Microsoft is handing over Bitlocker keys to law enforcement. https://www.forbes.com/sites/thomasbrewster/2026/01/22/microsoft-gave-fbi-keys-to-unlock-bitlocker-encrypted-data/

@hongminhee@hollo.social
On this issue, I used to think that to become a good software engineer you had to reach a certain level of English, but I've since changed my mind. The technical and cultural groundwork should be laid so that people can code perfectly well in their own languages.
@tatmius@vivaldi.net
Variable names chosen by people who say they can't speak English bother me surprisingly often, regardless of their actual technical skill....... Well, as long as the names are used consistently the program runs, so I figure it's fine, and it feels petty to point it out.... so I keep it to myself, but it still nags at me all the same.....

@hongminhee@hollo.social · Reply to Evan Prodromou's post
@evan Yeah, indeed. It's also fragile for network errors.
@evan@cosocial.ca · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
@hongminhee woof. That's an important feature and a lot of the network fabric comes apart in that situation. If you can't refetch remote ActivityPub objects from their source, you have to keep them cached indefinitely. That gets very messy very quickly!