
洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social

1,067 following · 1,878 followers

An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Writes some free software. They/them.

An intersectional feminist and socialist living in Seoul. Spouse of 金剛兔 (@tokolovesme). Maintainer of @fedify, @hollo, and @botkit. Makes free software.


Pinned

@hongminhee@hollo.social

Hello! I'm Hong Minhee (洪 民憙), an open source software engineer in my late 30s, living in Seoul, Korea. I'm bisexual and non-binary (they/them), and an enthusiastic advocate of free/open source software and the fediverse.

I work full-time on @fedify, an ActivityPub server framework in TypeScript, funded by @sovtechfund. I'm also the creator of @hollo, a single-user ActivityPub microblog; @botkit, an ActivityPub bot framework; Hackers' Pub, a fediverse platform for software developers; and LogTape, a logging library for JavaScript and TypeScript.

I have a long interest in East Asian languages (CJK) and Unicode. I post mostly in English here, though occasionally in Japanese or in mixed-script Korean (國漢文混用體), a traditional writing style that interleaves Chinese characters with the native Korean alphabet. Wanting to write in that style was actually one of the reasons I joined the fediverse. Feel free to talk to me in English, Korean, Japanese, or even Literary Chinese!

en.wikipedia.org

Korean mixed script - Wikipedia

Pinned

Nice to meet you! I'm Hong Minhee (洪 民憙), an open source software engineer in my late 30s living in Seoul. I'm bisexual and non-binary, and an enthusiastic supporter of free/open source software (F/OSS) and the fediverse.

With support from STF (@sovtechfund), I work full-time on @fedify, an ActivityPub server framework for TypeScript. I'm also the creator of @hollo, a single-user ActivityPub microblog; @botkit, an ActivityPub bot framework; Hackers' Pub, a fediverse platform for software developers; and LogTape, a logging library for JavaScript and TypeScript.

I'm also interested in East Asian languages (so-called CJK) and Unicode. I mostly post in English on this account, but sometimes also in Japanese or in mixed-script Korean (國漢文混用體, Hanja interleaved with Hangul). In fact, wanting to write in that style is part of why I joined the fediverse. Feel free to talk to me in Japanese, English, Korean, or even Literary Chinese!

speakerdeck.com

国漢文混用体からHolloまで (From Korean Mixed Script to Hollo)

In this talk, I share a journey that began with a small goal: implementing Korean mixed script (國漢文混用體, Hanja-Hangul mixed writing) in my own fediverse posts. Achieving that goal meant coming to grips with the complexity of ActivityPub's JSON-LD and with specifications such as HTTP Signatures and WebFinger…

Pinned

Hello! I'm Hong Minhee (洪民憙), an open source software engineer in my late 30s living in Seoul. I'm bisexual and non-binary, and an enthusiastic supporter of free/open source software (F/OSS) and the fediverse (聯合宇宙).

With support from STF (@sovtechfund), I work full-time on @fedify, an ActivityPub server framework for TypeScript. I'm also the creator of @hollo, a single-user ActivityPub microblog; @botkit, an ActivityPub bot framework; Hackers' Pub, a fediverse platform for software developers; and LogTape, a logging library for JavaScript and TypeScript.

I'm also very interested in East Asian languages (so-called CJK) and Unicode. I mostly post in English on this account, but sometimes also in Japanese or in mixed-script Korean (國漢文混用體). In fact, one of my reasons for coming to the fediverse was wanting to write in that style. Feel free to talk to me in Korean, English, Japanese, or even Literary Chinese!

logtape.org

LogTape

Unobtrusive logging library with zero dependencies—library-first design for Deno, Node.js, Bun, browsers, and edge functions

@hongminhee@hollo.social

I've built an MCP server for the 《標準國語大辭典》 (Standard Korean Language Dictionary).

There are existing 《標國大》 MCP servers, but they just hand back the headword and the bare definition, and they work by querying the stdict.korean.go.kr server on every request, so they run into rate limits. Mine instead downloads the entire dictionary data set up front, loads it into SQLite, and queries it locally.
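The load-once-then-query-locally approach can be sketched like this. This is a minimal illustration, not the actual ko-stdict-mcp code; the table schema and helper names are hypothetical:

```python
import sqlite3

def build_db(path: str, entries: list[tuple[str, str]]) -> sqlite3.Connection:
    """Load the full dictionary dump into SQLite once, up front."""
    conn = sqlite3.connect(path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS entries ("
        "  headword TEXT NOT NULL,"
        "  definition TEXT NOT NULL"
        ")"
    )
    conn.execute("CREATE INDEX IF NOT EXISTS idx_headword ON entries (headword)")
    conn.executemany("INSERT INTO entries VALUES (?, ?)", entries)
    conn.commit()
    return conn

def lookup(conn: sqlite3.Connection, word: str) -> list[str]:
    """Every lookup is a local indexed query: no network request, no rate limit."""
    rows = conn.execute(
        "SELECT definition FROM entries WHERE headword = ? ORDER BY rowid",
        (word,),
    ).fetchall()
    return [definition for (definition,) in rows]

# Toy data standing in for the downloaded dictionary dump:
conn = build_db(":memory:", [
    ("나무", "a woody perennial plant"),
    ("나무", "wood as a material"),
])
print(lookup(conn, "나무"))  # ['a woody perennial plant', 'wood as a material']
```

Once the dump is in SQLite, the remote server is never touched again at query time, which is what sidesteps the rate limit.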

github.com

GitHub - dahlia/ko-stdict-mcp: an MCP server for the 《표준국어대사전》 (Standard Korean Language Dictionary)

An MCP server for the 《표준국어대사전》. Contribute to dahlia/ko-stdict-mcp development by creating an account on GitHub.

@ppatel@mstdn.social

"But I fundamentally disagree with the conclusion.
The proposed solution is denial and isolation: block the crawlers, withdraw from centralized forges like GitHub, make our work inaccessible to AI scrapers, shun those who use these “anti-ethical tools” from our communities. I understand the anger behind it. But I think it misses something important and misreads the historical pattern that has shaped FLOSS itself."

We should reclaim LLMs, not reject them

writings.hongminhee.org/2026/0

writings.hongminhee.org

Histomat of F/OSS: We should reclaim LLMs, not reject them

A few days ago I read a blog post titled On FLOSS and training LLMs . It captures well the frustration spreading through the free and open source software…

@hongminhee@hollo.social · Reply to John O'Nolan

@john That’s fascinating—and it makes sense that you'd notice it more sharply than most. I hadn't really thought about how deep it goes beyond the obvious ones. “Vivid” is a good example; I use it all the time without thinking.

@hongminhee@hollo.social

I've been trying not to use words like “blindly” and “eye-opening.” Using blindness to mean not knowing or not noticing something doesn't sit right with me. But English isn't my first language, so finding replacements is harder than I expected. Sometimes “uncritically” works for “blindly,” but not always. I still haven't found a good casual replacement for “eye-opening.” I don't think people who use these words are bad. I just don't want to use them myself.

@hongminhee@hollo.social · Reply to Wartezimmer

@Kaesekuchen Honestly, yes, it's global state… just scoped per thread/task. The advantage over a plain global is that you get isolation across concurrent requests without threading values through every call. The debugging experience with contextvars was rough in my memory, though I haven't used it in a while. Statically typed implicits feel safer to me because they desugar to actual arguments, so the compiler keeps track. The footgun either way is that context-dependent functions proliferate silently and become hard to refactor out.

Reading this also made me realize I've had a soft spot for dynamic scoping/implicits for a long time… probably since I first used @mitsuhiko's Flask, where the request context object was just there without you having to pass it around. Felt like magic, then felt like a footgun, then felt like a reasonable tradeoff again. Python has since put contextvars in the standard library, which is essentially the same idea.
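A minimal sketch of that per-task isolation, using only the standard library; the variable and function names here are mine, purely for illustration:

```python
import asyncio
from contextvars import ContextVar

# One "ambient" variable, analogous to Flask's request context: visible to
# everything running in the current task without being passed as an argument.
request_id: ContextVar[str] = ContextVar("request_id")

async def deep_in_the_call_stack() -> str:
    # No parameter threading: reads whatever the current task's context holds.
    return request_id.get()

async def handle(rid: str) -> str:
    request_id.set(rid)      # scoped to this task's context copy
    await asyncio.sleep(0)   # yield, so concurrent handlers interleave
    return await deep_in_the_call_stack()

async def main() -> list[str]:
    # Each task created by gather() gets its own context copy, so the
    # concurrent "requests" never clobber each other's value.
    return await asyncio.gather(handle("req-1"), handle("req-2"))

print(asyncio.run(main()))  # ['req-1', 'req-2']
```

The isolation comes from `asyncio.Task` copying the current context at creation, which is exactly the dynamic-scoping behavior discussed above: global in feel, task-local in effect.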

docs.python.org

contextvars — Context Variables

This module provides APIs to manage, store, and access context-local state. The ContextVar class is used to declare and work with Context Variables. The copy_context() function and the Context clas...

@hongminhee@hollo.social

Enjoyed this wiki post by the author of Garnet on effect systems: [[PonderingEffects]]. What I liked is that it doesn't just describe the design space: it's honest about what the author finds confusing or unconvincing, including a skeptical take on algebraic effect handlers specifically. The Lobsters thread is worth reading too; someone points out that what the post calls “effects on data” is already studied under the name coeffects, which was news to me.

lobste.rs

Pondering Effects

6 comments

@hollo@hollo.social

Hollo has always been headless—no built-in frontend, just a Mastodon-compatible API. You pick your own client. That's kind of the point.

But we've been wondering: what if Hollo shipped its own web frontend? The Mastodon-compatible API would stay, so your current client setup wouldn't change. It'd just be one more option.

Would you use it?

  • Yes, I'd switch to it: 3 (6%)
  • Maybe, depending on what it offers: 12 (22%)
  • No, I'd stick with my current client: 8 (15%)
  • I'm just curious / not a Hollo user: 31 (57%)

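Because Hollo speaks the Mastodon client API, any stock client call works against it. A minimal sketch of fetching the home timeline through the standard Mastodon endpoint; the instance URL and token below are placeholders, not real credentials:

```python
import urllib.request

# Placeholders: your Hollo instance and an access token for a registered app.
INSTANCE = "https://hollo.example"
TOKEN = "YOUR_ACCESS_TOKEN"

def home_timeline_request(instance: str, token: str, limit: int = 20) -> urllib.request.Request:
    """Build a request for the standard Mastodon home-timeline endpoint."""
    return urllib.request.Request(
        f"{instance}/api/v1/timelines/home?limit={limit}",
        headers={"Authorization": f"Bearer {token}"},
    )

req = home_timeline_request(INSTANCE, TOKEN)
# Sending it with urllib.request.urlopen(req) returns a JSON array of statuses.
```

Shipping a first-party web frontend wouldn't change any of this; it would just be one more consumer of the same API.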
@rick@rmendes.net

Reading Hong Minhee with my coffee this morning, it's again one of those moments where I stumble on something that expresses exactly (and better) what I had in mind, in a latent space, without being able to express it so beautifully.

I want my code to be used for LLM training. What I don’t want is for that training to produce proprietary models that become the exclusive property of AI corporations. The problem isn’t the technology or even the training process itself. The problem is the enclosure of the commons, the privatization of collective knowledge, the one-way flow of value from the many to the few.

The question isn’t whether to use LLMs or adapt to them—that ship has sailed. The question is: who owns the models? Who benefits from the commons that trained them? If millions of F/OSS developers contributed their code to the public domain, should the resulting models be proprietary? This isn’t just about centralization or market dynamics. It’s about whether the fruits of collective labor remain collective, or become private property.

More where that came from: here, here, and here



🔖 https://writings.hongminhee.org/2026/01/histomat-foss-llm/

🔗 https://rmendes.net/bookmarks/2026/03/22/yes-we-should-reclaim-llms-not-reject-them

rmendes.net

Yes, We should reclaim LLMs, not reject them.

Yes, We should reclaim LLMs, not reject them. 22 March 2026 AI 🔖 https://writings.hongminhee.org/2026/01/histomat-foss-llm/ - Reading Hong Minhee with my coffee this morning, its again one of these m...

@rick@rmendes.net

The “for or against AI” framing buries these questions. The reason it looks inconsistent to criticize the major AI vendors while remaining open to LLMs as a technology is that the framing assumes the technology and its capitalist application are the same thing. That assumption is wrong.



🔖 https://writings.hongminhee.org/2026/02/acting-materialistically-in-an-imperfect-world/

🔗 https://rmendes.net/bookmarks/2026/03/22/inspirational-thinking-about-the-whole

rmendes.net

Inspirational thinking about the whole AI divide

Inspirational thinking about the whole AI divide 22 March 2026 AI 🔖 https://writings.hongminhee.org/2026/02/acting-materialistically-in-an-imperfect-world/ - The “for or against AI” framing buries th...

@kodingwarrior@hackers.pub

Hackers' Pub Android Client v1.1.1 Released!

No new features, but I refined the Compose screen's UX flow, as well as the home timeline.

You can download it here: https://github.com/hackers-pub/android/releases/tag/v1.1.1

hackers.pub

Next release of Hackers' Pub Android client will have some UI improvements...


@kodingwarrior@hackers.pub

Next release of Hackers' Pub Android client will have some UI improvements...

[Image: Before]

[Image: After]


@matdevdug@c.im

Our booth "FediDevKR & FediLUG (Japan)" has also been accepted at @COSCUP@floss.social! :fedilug:
We're preparing it to be a hub and meeting place for the East Asian fediverse!!
FediLUG Japan is planning things like novelty-goods giveaways at the booth! Stay tuned for details!!
https://blog.coscup.org/2026/03/coscup-x-ubucon-asia-2026-first-wave-of.html

blog.coscup.org

COSCUP x UbuCon Asia 2026 首波社群攤位名單公布 First wave of accepted community booth

Whether you're an open source developer, advocate, or user, you're welcome to join COSCUP, the Conference for Open Source Coders, Users & Promoters.

@hongminhee@hollo.social · Reply to Zanzi @ Monoidal Cafe

@zanzi @jnkrtech Your point about abstraction ladders is something I've been turning over since I read it. The Terence Tao/Lean combination feels like a glimpse of exactly what you're describing: Lean's type system carries so much semantic weight that the LLM doesn't need to compensate with volume. The proof is short because the language is expressive enough to make it short. That's very different from what happens when you point an LLM at TypeScript.

I'm skeptical of vibe coding, and have been from the start. Generating an entire project from prompts feels to me like a path to maintainability disaster, and I think most of the people currently excited about it haven't yet had to clean up what they made. The enthusiasm reads a lot like the dynamic typing boom of the 2000s: Ruby, Python, JavaScript, the whole wave. “We can build so fast.” True, and then ten years passed, the codebases grew, the teams changed, and people started hitting walls they hadn't anticipated. Python grew type hints. Flow and TypeScript appeared. Ruby quietly declined. The reckoning came, it just took a while.

I expect vibe coding to follow the same curve. One difference worries me though. With dynamic typing, the code was at least written by humans who understood it at the time. The technical debt was “hard to read.” With LLM-generated code that nobody reviewed deeply, the debt is something else: code that exists for reasons nobody can reconstruct. That's a harder problem.

There's a related problem I don't think more training data will fix. LLMs converge toward the average of what they've seen, and the average code on the internet is not concise. Verbose code is the norm; terse, well-factored code is rare, and usually underdocumented, so it contributes a weak training signal at best. The result is that LLMs have internalized the habits of the median developer: defensive, repetitive, over-specified. Conciseness requires knowing what not to write, and that judgment depends on domain context and something like aesthetic sense—neither of which transfers easily through pretraining. I don't see a scaling path out of that.

My own workflow tries to avoid this. Even when I use an LLM for Fedify, I steer constantly: small outputs, immediate review, corrections before moving on. The LLM is closer to a fast typist than an autonomous collaborator. It still helps, but the judgment about what to write, what to cut, where to stop, stays with me.

Which brings me to your actual question: what should the metric be? I don't have a clean answer, but I think it has something to do with how much of the codebase a human can hold in their head and feel responsible for. LOC never measured that. Neither does “prompt to working demo.” Whatever comes next probably needs to.

And on the higher-level languages point: I think you're right, and I'd add that this might be where the more interesting craft ends up living. Not writing the implementation, but designing the abstractions well enough that the implementation, whoever or whatever produces it, stays within bounds a human can oversee. That's a different skill from what most developers have trained, but it doesn't feel like a lesser one.

@zanzi@mathstodon.xyz

Where are the nuanced left-wing takes on modern AI and LLMs?

So much of the discourse around this tech is centered on rejecting it because of who currently owns it. But like all tech, it can be used for both oppression and liberation.

Who is focusing on the latter?

@hongminhee@hollo.social · Reply to jnkrtech