洪 民憙 (Hong Minhee) 
@hongminhee@hollo.social · Reply to Bimbo's post
@BigTittyBimbo That's a good one, though I wonder if “perspective” itself carries some visual baggage?


@hongminhee@hollo.social · 1042 following · 1715 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
An intersectional feminist and socialist living in Seoul. 金剛兔's (@tokolovesme) spouse. Maintainer of @fedify, @hollo, and @botkit. Makes free software in #TypeScript, #Haskell, #Rust, #Python, and more.
| Website | GitHub | Blog | Hackers' Pub |
|---|---|---|---|

@hongminhee@hollo.social
I've been trying not to use words like “blindly” and “eye-opening.” Using blindness to mean not knowing or not noticing something doesn't sit right with me. But English isn't my first language, so finding replacements is harder than I expected. Sometimes “uncritically” works for “blindly,” but not always. I still haven't found a good casual replacement for “eye-opening.” I don't think people who use these words are bad. I just don't want to use them myself.

@hongminhee@hollo.social · Reply to Wartezimmer's post
@Kaesekuchen Honestly, yes, it's global state… just scoped per thread/task. The advantage over a plain global is that you get isolation across concurrent requests without threading values through every call. The debugging experience with contextvars was rough in my memory, though I haven't used it in a while. Statically typed implicits feel safer to me because they desugar to actual arguments, so the compiler keeps track. The footgun either way is that context-dependent functions proliferate silently and become hard to refactor out.
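A minimal sketch of the isolation described above, using only the standard library's contextvars and asyncio (the `request_id` variable and the handler names are made up for illustration): two concurrent tasks set the same ContextVar, yield to each other, and each still reads back its own value, because every asyncio Task copies the current context at creation.

```python
import asyncio
import contextvars

# Hypothetical per-request value; nothing passes it as an argument.
request_id = contextvars.ContextVar("request_id", default="none")

async def inner() -> str:
    # Reads the "ambient" value set by whoever is up the call stack.
    return request_id.get()

async def handle(rid: str) -> str:
    # Each Task runs in its own copy of the context, so this set()
    # is invisible to the other concurrently running handler.
    request_id.set(rid)
    await asyncio.sleep(0)  # yield, letting the other task interleave
    return await inner()

async def main() -> list[str]:
    # gather() wraps each coroutine in a Task, copying the context per task.
    return await asyncio.gather(handle("a"), handle("b"))

print(asyncio.run(main()))  # ['a', 'b']: no cross-task clobbering
```

The same code with a plain module-level global instead of a ContextVar could interleave badly: whichever handler set the global last would win for both tasks.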

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
Reading this also made me realize I've had a soft spot for dynamic scoping/implicits for a long time… probably since I first used @mitsuhiko's Flask, where the request context object was just there without you having to pass it around. Felt like magic, then felt like a footgun, then felt like a reasonable tradeoff again. Python has since put contextvars in the standard library, which is essentially the same idea.

@hongminhee@hollo.social
Enjoyed this wiki post by the author of Garnet on effect systems: [[PonderingEffects]]. What I liked is that it doesn't just describe the design space: it's honest about what the author finds confusing or unconvincing, including a skeptical take on algebraic effect handlers specifically. The Lobsters thread is worth reading too; someone points out that what the post calls “effects on data” is already studied under the name coeffects, which was news to me.

@hongminhee@hollo.social · Reply to もちもちずきん🍆's post
@Yohei_Zuho Windows 98! That brings back memories…
@rmdes@indieweb.social
Having fun with #RSS and #ActivityPub @davew
@hongminhee
---> https://diff.rmendes.net/about
Follow the tool on the fedi with @bot
@liaizon@social.wake.st · Reply to wakest likes your bugs ⁂'s post
act II
@mkljczk speed-runs contributing to the Korean fediverse

@hongminhee@hollo.social
Hmm, Codeberg seems down now?

@hongminhee@hollo.social · Reply to nicole mikołajczyk's post
@mkljczk Haha, thanks for your first contribution!
@mkljczk@fediverse.pl
7 minutes from installing Hollo to my first pull request, why is it always like this

@hollo@hollo.social
Hollo has always been headless—no built-in frontend, just a Mastodon-compatible API. You pick your own client. That's kind of the point.
But we've been wondering: what if Hollo shipped its own web frontend? The Mastodon-compatible API would stay, so your current client setup wouldn't change. It'd just be one more option.
Would you use it?
| Option | Voters |
|---|---|
| Yes, I'd switch to it | 3 (6%) |
| Maybe, depending on what it offers | 12 (22%) |
| No, I'd stick with my current client | 8 (15%) |
| I'm just curious / not a Hollo user | 31 (57%) |
@hongminhee@hackers.pub · Reply to An Nyeong (安寧)'s post
@nyeong My list of recommended lightweight ActivityPub server software:
@rick@rmendes.net
Reading Hong Minhee with my coffee this morning, it's again one of those moments where I stumble on something that expresses exactly (and better) what I had in mind, in a latent space, without being able to express it so beautifully.
I want my code to be used for LLM training. What I don’t want is for that training to produce proprietary models that become the exclusive property of AI corporations. The problem isn’t the technology or even the training process itself. The problem is the enclosure of the commons, the privatization of collective knowledge, the one-way flow of value from the many to the few.
The question isn’t whether to use LLMs or adapt to them—that ship has sailed. The question is: who owns the models? Who benefits from the commons that trained them? If millions of F/OSS developers contributed their code to the public domain, should the resulting models be proprietary? This isn’t just about centralization or market dynamics. It’s about whether the fruits of collective labor remain collective, or become private property.
More where that came from here, here, and here
🔗 https://rmendes.net/bookmarks/2026/03/22/yes-we-should-reclaim-llms-not-reject-them
@rick@rmendes.net
The “for or against AI” framing buries these questions. The reason it looks inconsistent to criticize the major AI vendors while remaining open to LLMs as a technology is that the framing assumes the technology and its capitalist application are the same thing. That assumption is wrong.
🔗 https://rmendes.net/bookmarks/2026/03/22/inspirational-thinking-about-the-whole
@kodingwarrior@hackers.pub
Hackers' Pub Android Client v1.1.1 Released!
I didn't add any new features, but I refined the Compose screen's UX flow, and the home timeline too.
You can download it here: https://github.com/hackers-pub/android/releases/tag/v1.1.1
RE: https://hackers.pub/@kodingwarrior/019d2de5-b9a6-7b58-9f91-38c84ff9f381
@kodingwarrior@hackers.pub
Next release of Hackers' Pub Android client will have some UI improvements...
@matdevdug@c.im
I Can't See Apple's Vision https://matduggan.com/i-cant-see-apples-vision/ #apple #mac
@hongminhee@hollo.social · Reply to Zanzi @ Monoidal Cafe's post
@zanzi @jnkrtech Your point about abstraction ladders is something I've been turning over since I read it. The Terence Tao/Lean combination feels like a glimpse of exactly what you're describing: Lean's type system carries so much semantic weight that the LLM doesn't need to compensate with volume. The proof is short because the language is expressive enough to make it short. That's very different from what happens when you point an LLM at TypeScript.
I'm skeptical of vibe coding, and have been from the start. Generating an entire project from prompts feels to me like a path to maintainability disaster, and I think most of the people currently excited about it haven't yet had to clean up what they made. The enthusiasm reads a lot like the dynamic typing boom of the 2000s: Ruby, Python, JavaScript, the whole wave. “We can build so fast.” True, and then ten years passed, the codebases grew, the teams changed, and people started hitting walls they hadn't anticipated. Python grew type hints. Flow and TypeScript appeared. Ruby quietly declined. The reckoning came, it just took a while.
I expect vibe coding to follow the same curve. One difference worries me though. With dynamic typing, the code was at least written by humans who understood it at the time. The technical debt was “hard to read.” With LLM-generated code that nobody reviewed deeply, the debt is something else: code that exists for reasons nobody can reconstruct. That's a harder problem.
There's a related problem I don't think more training data will fix. LLMs converge toward the average of what they've seen, and the average code on the internet is not concise. Verbose code is the norm; terse, well-factored code is rare, and usually underdocumented, so it contributes a weak training signal at best. The result is that LLMs have internalized the habits of the median developer: defensive, repetitive, over-specified. Conciseness requires knowing what not to write, and that judgment depends on domain context and something like aesthetic sense—neither of which transfers easily through pretraining. I don't see a scaling path out of that.
My own workflow tries to avoid this. Even when I use an LLM for Fedify, I steer constantly: small outputs, immediate review, corrections before moving on. The LLM is closer to a fast typist than an autonomous collaborator. It still helps, but the judgment about what to write, what to cut, where to stop, stays with me.
Which brings me to your actual question: what should the metric be? I don't have a clean answer, but I think it has something to do with how much of the codebase a human can hold in their head and feel responsible for. LOC never measured that. Neither does “prompt to working demo.” Whatever comes next probably needs to.
And on the higher-level languages point: I think you're right, and I'd add that this might be where the more interesting craft ends up living. Not writing the implementation, but designing the abstractions well enough that the implementation, whoever or whatever produces it, stays within bounds a human can oversee. That's a different skill from what most developers have trained, but it doesn't feel like a lesser one.
@zanzi@mathstodon.xyz
Where are the nuanced left-wing takes on modern AI and LLMs?
So much of the discourse around this tech is centered on rejecting it because of who currently owns it. But like all tech, it can be used for both oppression and liberation.
Who is focusing on the latter?

@hongminhee@hollo.social · Reply to jnkrtech's post
@jnkrtech @zanzi Thanks, here are my pieces about the topic:
I hope you enjoy them!

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
Our Fediverse & Social Web track has been accepted for @COSCUP 2026 (Taipei, Aug 8–9)! We'll have a full day—six hours—to fill with talks on the #fediverse, #ActivityPub, and the open social web.
The CFP for speakers isn't open yet, but we'll announce it here when it is. Stay tuned!

@hongminhee@hollo.social · Reply to julian's post
@julian Haha, I will attend both!
@Yohei_Zuho@mstdn.y-zu.org · Reply to もちもちずきん🍆's post
What's about to begin?
@thisismissem@hachyderm.io · Reply to wakest likes your bugs ⁂'s post
@liaizon there are some good FLOSS LLMs like Haidra: https://haidra.net
@liaizon@social.wake.st
federated Urban Dictionary when?