洪 民憙 (Hong Minhee) 
@hongminhee@hollo.social · Reply to Laurens Hof's post
@laurenshof Thanks!


@hongminhee@hollo.social · 1006 following · 1472 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
(In Korean:) An intersectional feminist and socialist living in Seoul. Spouse of 金剛兔 (@tokolovesme). Maintainer of @fedify, @hollo, and @botkit. Makes free software in #TypeScript, #Haskell, #Rust, #Python, and more.
| Website | GitHub | Blog | Hackers' Pub |
|---|---|---|---|

@hongminhee@hollo.social · Reply to Kristoff Bonne 🇪🇺 🇧🇪's post
@kristoff Well, there's the simplest phrase which works for the most cases: “annyeonghaseyo.” For “thank you”: “gamsahamnida.” These spellings might look a little complicated, but the actual pronunciations aren't hard. Listen to the audio on the Wiktionary entries I linked.

@hongminhee@hollo.social
By the way, I'm flying from Seoul to Brussels tomorrow to attend FOSDEM 2026. It's my first trip to Europe, so I'm both excited and nervous.

@hongminhee@hollo.social · Reply to NTSK's post
@ntek That's a good idea. Could you file it as an issue in the issue tracker?

@hongminhee@hollo.social
If people who speak a different native language than you make the effort to speak to you in yours, that's a privilege.

@pervognsen@mastodon.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
@hongminhee FWIW, I think EVE Online still uses its stackless Python fork (which supports goroutine-like tasks/fibers) for gameplay scripting.

@hongminhee@hollo.social
This brings back memories. Before Python had async/await, before asyncio became standard, I was happily writing concurrent code with gevent. I actually preferred it.
The reason was simple: no function color problem. With async/await, you split the world into two kinds of functions—ones that block by default and ones that don't. You have to mark the latter with async and remember to await them. With gevent, everything just blocks by default, and you spawn when you need concurrency. It's the same mental model as multithreading, just lighter. Project Loom in Java does something similar, though the technical details differ.
I sometimes wonder what Python would look like if it had embraced gevent-style coroutines in CPython instead of adding async/await. Or if Stackless Python had been accepted upstream. Maybe async programming would be more approachable today.
The explicit await keyword gives you visibility into where context switches can happen, sure. But in practice, I/O points are obvious even without the keyword—you're reading from a socket, querying a database, making an HTTP request. The explicitness doesn't really prevent race conditions or timing bugs. Meanwhile, function colors infect everything. One async library forces your entire call stack to be async. You end up maintaining both sync and async versions of the same code, or the ecosystem just splits in two.
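The color split can be reproduced with nothing but the standard library. This is a minimal illustrative sketch using asyncio (gevent itself is left out, since its whole point is to monkey-patch the blocking calls away); the names fetch_data and sync_caller are hypothetical:

```python
import asyncio

# An "async-colored" function: callers must await it,
# which in turn forces those callers to be async too.
async def fetch_data() -> str:
    await asyncio.sleep(0)  # stands in for real I/O
    return "data"

# A plain "sync-colored" function cannot simply call fetch_data();
# it has to spin up an event loop, splitting the call stack in two.
def sync_caller() -> str:
    return asyncio.run(fetch_data())

print(sync_caller())  # → data
```

Under gevent, by contrast, fetch_data would be an ordinary function that blocks, and the caller would stay ordinary too.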
With gevent, there's no such problem. You just call functions. Spawn them if you want concurrency, call them normally if you don't. Go's goroutines and Project Loom are popular for good reasons—they make concurrency accessible without the cognitive overhead.
Python's choice is history now, and there's no going back. But looking at how things turned out, I can't help but think the more practical path was right there, and we walked past it.

@davidism@mas.to
Did you know you can get similar concurrency as asyncio/ASGI in Flask, by using gevent? It's been possible as long as Flask has existed! Turns out we never documented it, so how would anyone have known? Fixed that https://flask.palletsprojects.com/en/stable/gevent/ #Python #Flask #gevent
@heardark.bsky.social@bsky.brid.gy
There is no such thing as a work without an ideology (a message). If a work appears to have no ideology or intent, that is because it conforms so perfectly to the existing dominant order. There are merely ideologies that stop thought and ideologies that start it, and we usually say only the latter kind "has a message (ideology)." The claim that works should be free of ideology is in fact itself a powerful ideological command: "do not resist the dominant ideology."

@ploum@mamot.fr
Idea: "The unbillionaires list", to promote contributors to the commons.
A collaborative website that lists people who created something useful to millions but purposely chose to put it in the commons and didn't earn money directly from it (or not as much as expected).
Besides those listed in https://ploum.net/2026-01-22-why-no-european-google.html
I would add Henri Dunant (Red Cross; he died in great poverty) and Didier Pittet (who invented the hydroalcoholic gel we now use every day).
@noracodes@tenforward.social
Here ya go, a full account. https://notes.nora.codes/atproto-again/

@hongminhee@hollo.social · Reply to Kaito's post
@kai In Korea, we traditionally don't provide any slippers. However, some households do so nowadays.

@hongminhee@hollo.social
After months of struggling with the “zombie post” issue on Hackers' Pub—where deleted posts wouldn't disappear from remote servers—I had a sudden hypothesis today. As I dug into it, I realized it's a structural issue with Fedify's MessageQueue system: Create(Note) and Delete(Note) activities can be delivered out of order, causing remote instances to receive Delete(Note) before Create(Note).
The fix will likely require API changes, so this will probably need to wait for #Fedify 2.0.0.
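The failure mode can be sketched in a few lines, assuming a naive receiving server that silently ignores Delete activities for posts it hasn't seen yet (this is illustrative only, not Fedify's or any real server's code):

```python
# A toy remote server's post store, keyed by post ID.
store: dict[str, str] = {}

def handle(activity: tuple[str, str, str]) -> None:
    """Apply one incoming activity to the store."""
    kind, post_id, content = activity
    if kind == "Create":
        store[post_id] = content
    elif kind == "Delete":
        store.pop(post_id, None)  # no-op if the post never arrived

# In-order delivery: the post is created, then removed. Fine.
for a in [("Create", "1", "hi"), ("Delete", "1", "")]:
    handle(a)
assert "1" not in store

# Out-of-order delivery: Delete arrives first and does nothing,
# then Create "resurrects" the post — a zombie that never disappears.
for a in [("Delete", "2", ""), ("Create", "2", "hi")]:
    handle(a)
assert "2" in store  # the zombie post
```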
@wakest@hackers.pub
@moreal@hackers.pub
I tried building something that looks up ActivityPub objects with Fedify and renders them as a single page, but on reflection the character limit is only a problem on the Mastodon-like side, so it would have been better to just use the Mastodon API; and of course such implementations already exist. The post I was trying to hand off to a certain service to read in translation won't go through, maybe hitting a timeout; I'll look at it again tomorrow..

@hongminhee@hollo.social · Reply to Ian Wagner's post
@ianthetechie Yeah, I need to be exposed to English-speaking environments more!

@hongminhee@hollo.social
In cultures like Korea and Japan, taking off your shoes at home is a long-standing tradition. I'm curious about how this practice varies across different regions and households in the fediverse.
How does your household handle shoes indoors?
| Option | Voters |
|---|---|
| Everyone takes shoes off (strict). | 239 (64%) |
| Family takes shoes off; guests keep them on. | 96 (26%) |
| Everyone wears shoes/outdoor footwear. | 37 (10%) |

@hongminhee@hollo.social · Reply to Steve Bate's post

@hongminhee@hollo.social
I want to become fluent in English. Not reading and writing, but speaking.
@thisismissem@hachyderm.io
RE: https://social.wake.st/@liaizon/115952183460649167
I'll hopefully be here:
@liaizon@social.wake.st
On February 3rd (very soon!) I am hosting another [BERLIN FEDERATED NETWORK EXPLORATION CIRCLE] at @offline. It's a chance to meet and talk with people who are interested in the #fediverse & networking & exploration & circ---you get the idea.
We have the pleasure of having @hongminhee who will give a presentation about @fedify "an opinionated #ActivityPub framework for TypeScript that handles the protocol plumbing"
It is an open free event and everyone is welcome!

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
After deep thinking, I've designed a satisfying solution and broken it down into 4 issues:
Annotations system (#83): Low-level primitive that allows passing runtime context to parsers via parse() options
SourceContext interface (#85): High-level system for composing multiple data sources (env, config, etc.) with clear priority ordering via runWith()
@optique/config (#84): Configuration file support with Standard Schema validation (Zod, Valibot, etc.)
@optique/env (#86): Environment variables support with automatic type conversion
The key insight: use a two-pass parsing approach where the first pass extracts the config file path, then the second pass runs with config data injected as annotations. All sources can be composed together with runWith([envContext, configContext]) for automatic priority handling: CLI > environment variables > configuration file > default values.
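The priority ordering amounts to a last-writer-wins merge from lowest to highest priority. This is an illustrative Python sketch of that idea, not Optique's actual API (resolve and its parameter names are hypothetical):

```python
def resolve(cli: dict, env: dict, config: dict, defaults: dict) -> dict:
    """Merge option sources with priority CLI > env > config > defaults."""
    merged = dict(defaults)
    # Apply sources from lowest to highest priority so that
    # later (higher-priority) sources overwrite earlier ones.
    for source in (config, env, cli):
        merged.update({k: v for k, v in source.items() if v is not None})
    return merged

print(resolve(
    cli={"port": 8080},
    env={"port": 3000, "host": "0.0.0.0"},
    config={"host": "localhost", "debug": True},
    defaults={"port": 80, "host": "127.0.0.1", "debug": False},
))
# → {'port': 8080, 'host': '0.0.0.0', 'debug': True}
```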
@internetsdairy@mastodon.art
My wife picked up a zine that is advertising the fediverse.

@hongminhee@hollo.social
Would it be a good idea to add a feature for handling configuration files to Optique? 🤔

@hollo@hollo.social
It's been a while since our last release, and we're excited to finally share Hollo 0.7.0 with you. This release brings a lot of improvements that we've been working on over the past months—from powerful new search capabilities to significant performance gains that should make your daily Hollo experience noticeably snappier.
Let's dive into what's new.
One of the most requested features has been better search, and we're happy to deliver. Hollo now supports Mastodon-compatible search operators, so you can finally filter your searches the way you've always wanted:
- has:media / has:poll — Find posts with attachments or polls
- is:reply / is:sensitive — Filter by post type
- language:xx — Search in a specific language
- from:username — Find posts from a specific person
- mentions:username — Find posts mentioning someone
- before:YYYY-MM-DD / after:YYYY-MM-DD — Search within a date range
- `-` for negation, OR for alternatives, and parentheses for grouping

For example, (from:alice OR from:bob) has:poll -is:reply will find polls from Alice or Bob that aren't replies.
We've also made search much faster. URL and handle searches that used to take 8–10 seconds now complete in about 1.4 seconds—an 85% improvement.
We completely rebuilt how notifications work under the hood. Instead of computing notifications on every request, Hollo now stores them as they happen. The result? About 24% faster notification loading (down from 2.5s to 1.9s).
On top of that, we've implemented Mastodon's v2 grouped notifications API, which groups similar notifications together server-side. This means less work for your client app and a cleaner notification experience.
All API responses are now compressed, reducing their size by 70–92%. Some real numbers: notification responses dropped from 767KB to 58KB, and home timeline responses went from 91KB to 14KB. You'll notice faster load times, especially on slower connections.
When someone quotes your post, you'll now get a notification about it. And if the original author edits a post you've quoted, you'll be notified too. These are the new quote and quoted_update notification types from Mastodon 4.5.0.
Importing your data (follows, lists, muted/blocked accounts, bookmarks) used to block the entire request until it finished. Now imports run in the background, and you can watch the progress in real-time. Much better for large imports. Thanks to Juyoung Jung for implementing this in #295.
- max_featured_tags and max_pinned_statuses. Thanks to Juyoung Jung for this improvement in #296.
- prev link in pagination headers, which was tracked in #312.
- POST /api/v1/statuses and PUT /api/v1/statuses/:id were rejecting FormData requests in #171.
- POST /api/v1/statuses rejecting null values in optional fields in #179.
- Content-Type header.
- /api/v2/search not respecting the limit parameter, as reported in #210.

Pull the latest image and restart your container:
```shell
docker pull ghcr.io/fedify-dev/hollo:0.7.0
docker compose up -d
```

Go to your Railway dashboard, select your Hollo service, and click Redeploy from the deployments menu.
Pull the latest code and reinstall dependencies:
```shell
git pull origin stable
pnpm install
pnpm run prod
```

This release wouldn't have been possible without the contributions from our community. A big thank you to Emelia Smith (@thisismissem), Juyoung Jung (@quadr), Lee ByeongJun (@joonnot), and Peter Jeschke (@peter@jeschke.dev) for their pull requests and bug reports. We really appreciate your help in making Hollo better!
@fedify@hollo.social
We've published an AI usage policy for the Fedify project, inspired by Ghostty's approach.
TL;DR: AI tools are welcome, but contributions must disclose AI usage, be tied to accepted issues, and be human-verified. Maintainers are exempt.
@kosui@blog.kosui.me · Reply to kosui's post
iori and Fedify
That iori is able to support ActivityPub is, without a doubt, thanks to Fedify.
I've tried to implement the ActivityPub spec from scratch many times before, but I always ran into the following problems and gave up.
What I really want to provide is a knowledge-management service, not an ActivityPub implementation. Fedify abstracts away these complexities and lets developers focus on business logic.

@hongminhee@hollo.social
Ghostty has a really well-balanced AI usage policy. It doesn't ban AI tools outright, but it sets clear boundaries to prevent the common problems we're seeing in open source contributions these days.
What stands out is that it's not about being anti-AI. The policy explicitly says the maintainers use AI themselves. The rules are there because too many people treat AI as a magic button that lets them contribute without actually understanding or testing what they're submitting. The requirement that AI-generated PRs must be for accepted issues only, fully tested by humans, and properly disclosed feels like basic respect for maintainers' time.
I'm thinking of adopting something similar for my projects, even though they're not at Ghostty's scale yet. Better to set expectations early.

@hongminhee@hollo.social · Reply to Julian Fietkau's post