洪 民憙 (Hong Minhee) 
@hongminhee@hollo.social · Reply to Fred Praca's post
@FredPraca 炸醬麵 (jjajangmyeon) is Korean-style Chinese food, so it will probably be hard to find outside Korea. 😅


@hongminhee@hollo.social · 1029 following · 1583 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
An intersectional feminist and socialist living in Seoul. Spouse of 金剛兔 (@tokolovesme). Maintainer of @fedify, @hollo, and @botkit. Makes free software in #TypeScript, #Haskell, #Rust, #Python, and more.
| Website | GitHub | Blog | Hackers' Pub |
|---|---|---|---|

@hongminhee@hollo.social · Reply to 쯔방 :yuri: :yurigarden: :garden:'s post
@pbzweihander @chalk I don't think this is about "having to add CWs is a hassle." I think it means something more like "it fundamentally makes me anxious that what I want to talk about won't be welcome." That is, I can add a CW if I have to, but if there are that many things that need a CW from the outset, doesn't that mean there are that many people in this space who don't want to hear what I have to say? In other words, the problem isn't the rules themselves so much as that they come across as an "atmosphere that doesn't welcome me." I'm not sure how to solve this either…
@heka@baram.me
I want some suanlatang (hot and sour soup).
I wish there were a Chinese restaurant where I could just walk in, no reservation, no fuss, eat a bowl of suanlatang, and leave.

@hongminhee@hollo.social
What should I have for dinner tonight?
| Option | Voters |
|---|---|
| Chapagetti | 2 (67%) |
| 炸醬麵 (jjajangmyeon) | 1 (33%) |

@hongminhee@hollo.social · Reply to Janne Moren's post
@jannem That's a fair point, and I think you may be right that a life sentence is harder to argue against on legal or moral grounds than a death sentence would be. The “this punishment is inhumane” argument essentially disappears.
My remaining concern is less about the courts and more about the presidential pardon. In South Korea, the president can unilaterally pardon anyone, and historically this power has been used as a political card rather than a matter of justice. Chun Doo-hwan's pardon wasn't really about whether his sentence was too harsh—it was a political calculation ahead of an election. So I worry that no matter how legally solid the sentence is, a future president could still undo it for reasons that have nothing to do with whether the punishment was proportionate.
That said, I'll admit this is more of a general anxiety about the pardon system than a specific prediction. Maybe the political cost of pardoning someone convicted of insurrection would be too high for any president to seriously consider. I hope so.

@hongminhee@hollo.social
Today, 443 days after declaring martial law on December 3, 2024, former President Yoon Suk Yeol was sentenced to life in prison for leading an insurrection. The court found that his martial law decree, the deployment of troops to blockade the National Assembly, and the attempt to detain political figures constituted an act of insurrection against the constitutional order.
Many Koreans had hoped for the death penalty—the prosecution had asked for it, and the charge of leading an insurrection only allows three possible sentences: death, life with labor, or life without labor. The court chose life.
I have complicated feelings about this. As someone who believes the death penalty should be abolished, I shouldn't want it imposed on anyone, and in principle I don't—not even on Yoon. But there's a practical dilemma that's hard to ignore. South Korea has a precedent here: Chun Doo-hwan, who led the 1980 military coup and the Gwangju massacre, was sentenced to death at his first trial, reduced to life on appeal, and then pardoned by President Kim Young-sam. He walked free. If the starting point is already life in prison rather than death, it's even easier for an appeals court to reduce the sentence further, and for some future president to pardon him down the road.
So the discomfort isn't really about wanting Yoon to die. It's about the gap between what the sentence says and what actually happens in practice. And that points to a deeper problem: the presidential pardon power. As long as a sitting president can unilaterally pardon anyone—including someone convicted of trying to overthrow the very constitutional order the presidency is supposed to protect—no sentence feels truly final. I'd rather see the death penalty abolished and the pardon power curtailed, so that life in prison actually means life in prison.
@hanibsky.bsky.social@bsky.brid.gy
The court has sentenced former President Yoon Suk Yeol, who was indicted on charges of leading an insurrection, to life in prison. Former Minister Kim Yong-hyun was sentenced to 30 years in prison. It has been 444 days since emergency martial law was declared on December 3, 2024.
[Breaking] Court sentences Yoon Suk Yeol to life in prison… "With the emergency martial law ...

@hongminhee@hollo.social · Reply to Chee Aun 🤔's post
@cheeaun Oh, I didn't realize Hollo's quote didn't work on Mastodon… Here's the context: https://hackers.pub/@2chanhaeng/019c6d4e-ca99-7482-8663-8f3cb0d7cb9c.
@2chanhaeng@hackers.pub
OK, I bought fedi.blue because I can, and I have no plans to use it for anything specific. Well, at least, I want to build an app that is compatible with both the fediverse (AP Protocol) and Bluesky (AT Protocol) ecosystem at the same time. So... if you have any ideas or suggestions, feel free to let me know! Sincerely, I want to waste money no more for domains that I won't use, so if you have any good ideas, please, please, PLEASE share them with me. You can find me at @chomu.dev on Bluesky and @2chanhaeng on Hackers' Pub. Or, you can also leave an issue in the repository. Thanks!

@hongminhee@hollo.social
Fedidevs, any ideas?
Edit: Here's the context: https://hackers.pub/@2chanhaeng/019c6d4e-ca99-7482-8663-8f3cb0d7cb9c.
@2chanhaeng@hackers.pub
OK, I bought fedi.blue because I can, and I have no plans to use it for anything specific. Well, at least, I want to build an app that is compatible with both the fediverse (AP Protocol) and Bluesky (AT Protocol) ecosystem at the same time. So... if you have any ideas or suggestions, feel free to let me know! Sincerely, I want to waste money no more for domains that I won't use, so if you have any good ideas, please, please, PLEASE share them with me. You can find me at @chomu.dev on Bluesky and @2chanhaeng on Hackers' Pub. Or, you can also leave an issue in the repository. Thanks!

@hongminhee@hollo.social
I want to try making a highly autonomous fediverse bot, just for fun.

@hongminhee@hollo.social · Reply to 쯔방 :yuri: :yurigarden: :garden:'s post
@pbzweihander Please keep 참새 going…!
@liaizon@social.wake.st · Reply to wakest ⁂'s post
The fediverse is trans. Trans as in non binary. Trans as in transition. Trans as in transitional.
@liaizon@social.wake.st
The fediverse is anti-capitalist. The fediverse is anarchist praxis. The fediverse is not a protocol. The fediverse carries an ideology of communal care and mutual aid for our fellow humans. The fediverse should never be neutral on ideology. The tools we are building provide infrastructure for communication but they also shape that communication.
@b0rk@jvns.ca
i feel like i've probably asked this before but has anyone written a fancy command line man page viewer to replace `man`?
(not emacs or vim)
@mikaeru@mastodon.social
All documents published by the Ideographic Research Group (IRG) are now available on the Unicode web site, and can be easily and efficiently found through the new search bar provided on the IRG homepage.
🔗 https://www.unicode.org/irg/
This long-awaited search feature is very convenient, and so useful to find what you're interested in, and even more (ah, the wonderful power of serendipity!)...
#Unicode #IRG #IdeographicResearchGroup #CJK #Ideographs #Unihan

@hongminhee@hollo.social · Reply to Fred Praca's post
@FredPraca The last character of my name is "희" (hui), not "히" (hi). 😀

@hongminhee@hollo.social · Reply to Fred Praca's post
@FredPraca Haha, there's a small typo, but it's recognizable! I can't blame you because the romanization of my name is non-standard. 😄

@hongminhee@hollo.social · Reply to Fred Praca's post
@FredPraca Haha, thanks! By the way, learning French is one of the things on my bucket list!

@hongminhee@hollo.social
Optique 0.10.0 released—the biggest update yet for this TypeScript CLI parser library.
What's new:
- @optique/config package with Standard Schema validation
- @optique/man package

This is the last pre-release before 1.0.0.

@hongminhee@hollo.social · Reply to Peter Brett's post
@krans I have, and I do carry guilt about it—I'm not going to pretend otherwise. I try not to use LLMs carelessly, and I don't think the ethical concerns you're raising are wrong.
But I'd ask you to consider what it means to write and maintain open source documentation, commit messages, issue discussions, and community communication in a language that shares essentially nothing with your own. For me, LLMs are less of a shortcut and more of an accessibility tool—one that lets me participate in a space that was never designed with people like me in mind. Giving that up wouldn't make the ethical problems go away; it would just mean one less non-native English speaker able to show up.

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
And a note for fellow non-native English speakers from European language backgrounds: I hear you, and we're on the same side of this, but the gap isn't the same for everyone. Korean and English have a linguistic distance of 89.2 out of 100—essentially no detectable relationship at all. No shared roots, no cognates to lean on, completely different writing system. The distance between, say, French or German and English is a different universe. So when I say LLMs are an equalizer for writing English, I mean it quite literally—without one, even expressing a simple idea in natural-sounding English can take me disproportionately longer than the idea itself deserves.

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
Adding to this: much of my LLM usage actually goes into writing documentation and communicating in English—a language that isn't mine but that open source essentially demands. For non-native English speakers, LLMs are a genuine equalizer. They let me write docs that don't get ignored for sounding awkward, and participate in discussions without spending twice the effort a native speaker would. But when English-native developers dismiss LLM-assisted writing wholesale, they're not even aware of the privilege baked into that judgment. It's like someone who's never had to do housework scoffing at a washing machine for making people “lazy”—easy to say when the burden was never yours to begin with.

@hongminhee@hollo.social
I've been honestly adding Co-Authored-By: Claude <noreply@anthropic.com> to every commit where I used an LLM even slightly—whether it's generating test scaffolding, drafting docs, or just bouncing ideas. I thought transparency was the right thing to do. Turns out, people see that trailer and immediately assume the whole thing is “vibe coded” AI slop, no further questions asked. The irony is that being honest about my process is what's getting my work dismissed.
Now I'm genuinely torn. Do I keep the trailer and accept that some people will write off my work at a glance? Or do I drop it and lose something I actually believe in? It's frustrating that there's no widely understood distinction between “I prompted an LLM to write my entire app” and “I used an LLM as a tool while writing my own code.” I don't have an answer yet—just sitting with the discomfort for now.
@moreal@hackers.pub
In the course of what I'm working on right now, I discovered something (personally) fascinating about command-line parsers in the style of optique and optparse-applicative. I'd already had a "wow" moment at being able to express dependencies between options, like requiring --server-port whenever --server is present, but what I learned today is that the same option can mean different things depending on context.
For example, when --server is present, -P can be the short option for --port, while without --server, -P can be the short option for --program[1]. Setting aside whether it's a good idea for -P to carry multiple meanings, I found it fascinating.
I couldn't think of a better option starting with p off the top of my head, so I went with --program ↩︎
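The context-dependent short option described above can be sketched in plain TypeScript (a hypothetical illustration, not the actual API of optique or optparse-applicative): the interpretation of `-P` is decided only after checking whether `--server` appears anywhere in the arguments.

```typescript
// Hypothetical sketch: -P abbreviates --port in server mode,
// and --program otherwise. Not a real parser, just the dispatch idea.
function interpretShortP(args: string[]): "port" | "program" | null {
  if (!args.includes("-P")) return null;
  // The presence of --server switches which long option -P stands for.
  return args.includes("--server") ? "port" : "program";
}
```

A combinator library expresses the same thing declaratively, as alternative branches of the parser rather than an explicit lookup.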

@hongminhee@hollo.social · Reply to Jaeyeol Lee's post
@kodingwarrior How about Moim?

@sftblw@lake.naru.cafe · Reply to Ch. :animal_feed_trickcal:'s post
fediverse.kr has been online since yesterday? Or the day before?
The truth is, I'd grabbed the domain thinking "someday I should build a fediverse guide site...!" and now that we live in an era where you can dump the work on an AI and a website just pops out, I badgered an AI into "generating" the site. I didn't read a single line of the source... well, okay, while editing here and there I did occasionally grumble about things that caught my eye, but still, the AI wrote all the code. The content too.
It's not a lovingly handcrafted site, but at least I'm no longer guilty of squatting on a domain and leaving it to rot.
For things like the introduction text, I only gave the general direction and the AI filled it in. Even after badgering the AI repeatedly, there's still nonsense like GoToSocial being described as Mastodon-family. Still, the content doesn't seem that bad.
The site list is partially filtered. Since 휘핑에디션, more people have been opening commissioned Mastodon instances to run private comm communities, and Claude scraped up and collected all of those instances. Since comms are meant to be private communities, I've hidden them as they're identified, except for 자커마스. The hiding feature was also squeezed out of the AI.
A scary feature: there's a way to leave comments on instances. That said, as a safeguard, logging in requires a fediverse account that's at least six months old. I haven't checked, but I probably haven't built account deletion yet.
Instead of storing a password, login works by posting a one-time key to your timeline. I said "build it!" and it just popped out, which made me happy. Building that by hand would have been a lot of work.
@evan@cosocial.ca
Open Source developers only: how much time per month do you budget to maintain a one-developer project?
| Option | Voters |
|---|---|
| 4 hours or less | 55 (35%) |
| About 8 hours | 43 (27%) |
| About 16 hours | 30 (19%) |
| 32 hours or more | 29 (18%) |

@hongminhee@hollo.social · Reply to silverpill's post
@silverpill Well, Fedify currently relies heavily on a JSON-LD processor, so it seems too early for that. 🤔
@nebuleto@hackers.pub
These days I work primarily in TypeScript on Node.js. I needed to handle bulk uploads of large Excel data and dynamically generate template Excel files to collect that data. Those templates had to include data validation, conditional formatting, dropdowns, and so on.
The existing Node.js Excel libraries each had problems. One split its functionality between a community edition and a paid edition, which meant features I needed were locked away. The other had a gap between its internal implementation and its TypeScript typings, and it was too slow for what I was trying to do. Pull requests had piled up in the repository, but the project was no longer being maintained.
I had known about Excelize, the Go library, for a while. Charts, conditional formatting, formulas, data validation: it covers a lot of the OOXML spec and does it well. I kept thinking I wanted something at that level in TypeScript.
Coding agents have gotten noticeably better in the past year or so, and I wanted to try a specific way of working: I make all the design and architecture decisions, and agents handle the implementation. On Wednesday of last week (February 4th) I started analyzing Excelize and other Excel libraries. By Saturday night (February 7th) I was writing code.
That's SheetKit.
This is the first of two posts. This one covers what SheetKit is and how the week went, from first release to the v0.5.0 I shipped this evening (February 14th). The second post will be about working with coding agents: what I delegated, how, and where it broke down.
Dates are crates.io / npm publish timestamps. Approximate, not to-the-minute.
| Version | When | Date | What |
|---|---|---|---|
| v0.1.0 | Sunday (last week) | 2026-02-08 | First publish (initial form) |
| v0.1.2 | Monday early morning (last week) | 2026-02-09 | First snapshot worth calling a public release |
| v0.2.0 | Monday morning (last week) | 2026-02-09 | Buffer I/O, formula helpers |
| v0.3.0 | Tuesday early morning (last week) | 2026-02-10 | Raw buffer FFI, batch APIs, benchmark suite |
| v0.4.0 | Tuesday afternoon (last week) | 2026-02-10 | Feature expansion + documentation site |
| v0.5.0 | Saturday evening (today) | 2026-02-14 | Lazy loading / streaming, COW save, benchmark rule improvements |
SheetKit is a Rust spreadsheet library for OOXML formats (.xlsx, .xlsm, etc.) with Node.js bindings via napi-rs. Bun and Deno work too, since they support Node-API.
.xlsx files are ZIP archives containing XML parts. SheetKit opens the ZIP, deserializes each XML part into Rust structs, lets you manipulate them, and serializes everything back on save.
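To make the ZIP layer concrete, here is a minimal sketch (not SheetKit's actual code): every ZIP archive ends with an end-of-central-directory (EOCD) record, and finding it is the first step before a reader can walk the archive's XML parts.

```typescript
// Locate the ZIP EOCD record in an .xlsx buffer and read how many
// parts (entries) the archive holds. A real reader would then walk
// the central directory at the offset the record points to.
const EOCD_SIG = 0x06054b50; // "PK\x05\x06" read as a little-endian u32

function countZipEntries(buf: Uint8Array): number {
  const view = new DataView(buf.buffer, buf.byteOffset, buf.byteLength);
  // The EOCD sits at the end of the file, possibly followed by a
  // variable-length comment, so scan backwards for its signature.
  for (let i = buf.byteLength - 22; i >= 0; i--) {
    if (view.getUint32(i, true) === EOCD_SIG) {
      return view.getUint16(i + 10, true); // total central-directory entries
    }
  }
  throw new Error("not a ZIP archive: EOCD record not found");
}
```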
Three crates on the Rust side:
- sheetkit-xml: low-level XML data structures mapping to OOXML schemas
- sheetkit-core: all business logic
- sheetkit: facade crate for library consumers

Node.js bindings live in packages/sheetkit and expose the Rust API via #[napi] macros.
To get started: sheetkit.dev/getting-started.
I started coding Saturday night (February 7th) and pushed v0.1.0 the next day. By early Monday morning I had v0.1.2, which was the first version I'd actually call releasable.
I had spent Wednesday analyzing the OOXML spec and how existing libraries implemented features, so by Saturday I had a detailed plan ready. I handed implementation to coding agents (Claude Code and Codex). The setup was: a main orchestrator agent receives the plan, then spawns sub-agents in parallel for each feature area. It burns through tokens fast, but it gets a large plan done quickly. After the agents finish, a separate agent does code review before I look at it.
More on this workflow in the next post.
v0.1.2 was an MVP. It had 44,000+ lines, 1,533 tests, 110 formula functions, charts, images, conditional formatting, data validation, StreamWriter, and builds for 8 platform targets. But it could only read/write via file paths (no Buffer I/O), and I hadn't measured performance at all. It worked, but that was about it.
v0.2.0 went up Monday morning, a few hours after v0.1.2.
I added Buffer I/O: read and write .xlsx directly from in-memory buffers, no filesystem needed. In a server you're usually processing binary from an HTTP request or streaming a generated file back in the response, so this had to come early. fill_formula and other formula helpers went in at the same time.
With Buffer I/O in place I could run tests closer to real production workloads. That's where the problems showed up.
The initial implementation created a JS object per cell and passed it across the Rust/JS FFI boundary. Pull a 50k×20 sheet as a row array and that's a million-plus JS objects. GC pressure and memory usage went through the roof.
I got the idea from oxc, which transfers Rust AST data to JS as raw buffers instead of object trees. Same principle here:
The encoder picks dense or sparse layout automatically based on cell occupancy (threshold: 30%). Since the JS side receives a raw buffer, I also wrote a TypeScript parser for the format.
v0.3.0 shipped the first version of this buffer protocol. v0.5.0 later replaced it with a v2 format that supports inline strings and incremental row-by-row decoding.
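The dense/sparse decision described above can be sketched like this (the threshold comes from the text; the function name and exact rule are illustrative, not SheetKit's wire format):

```typescript
// Dense rows store a slot for every column; sparse rows store
// (column, value) pairs. Dense only pays off when enough of the
// row is actually populated, hence the occupancy threshold.
type Layout = "dense" | "sparse";

function chooseLayout(occupied: number, columns: number, threshold = 0.3): Layout {
  return columns > 0 && occupied / columns >= threshold ? "dense" : "sparse";
}
```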
I also made changes in the Rust XML layer. The goal was fewer heap allocations and simpler hot paths.
| Change | Why |
|---|---|
| Cell references (`"A1"`) stored as `[u8; 10]` inline arrays, not heap `String`s | Max cell ref is `"XFD1048576"` (10 bytes); no need for the heap |
| Cell type attribute normalized to a 1-byte enum | Stops carrying raw XML attribute strings around |
| Binary search for cells within a row, replacing linear scan | O(log n) lookup instead of O(n) |
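The row-lookup change, sketched in TypeScript for illustration (the real implementation lives in the Rust core): cells within a row are kept sorted by column index, so a lookup can bisect instead of scanning.

```typescript
// Binary search for a cell by column index within one row.
// Assumes the row's cells are sorted by `col`, which is how
// sparse rows are typically stored.
interface Cell { col: number; value: string | number; }

function findCell(row: Cell[], col: number): Cell | undefined {
  let lo = 0, hi = row.length - 1;
  while (lo <= hi) {
    const mid = (lo + hi) >> 1;
    if (row[mid].col === col) return row[mid];
    if (row[mid].col < col) lo = mid + 1;
    else hi = mid - 1;
  }
  return undefined; // the cell is absent in this (possibly sparse) row
}
```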
| Metric | Before | After |
|---|---|---|
| Memory (RSS) at 100k rows | 361 MB | 13.5 MB |
| Node.js read overhead vs. native Rust | — | ~4% |
| GC pressure | 1M+ object creations | Single buffer transfer |
This is when I built the benchmark suite, comparing SheetKit against existing Node.js and Rust libraries. The runner outputs Markdown with environment info, iteration counts, and raw numbers.
Setup: Apple M4 Pro, 24 GB / Node v25.3.0 / Rust 1.93.0. Median of 5 runs after 1 warmup. RSS/heapUsed are residual deltas (before vs. after), not peaks. Fixtures are generated deterministically; row counts include the header.
50k rows × 20 columns: SheetKit read 541 ms, write 469 ms. The JS-only libraries: 1.24–1.56s read, 1.09–2.62s write. heapUsed delta: 0 MB, which confirmed that the JS side was no longer accumulating objects.
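The measurement rule stated above (one warmup run, then the median of five timed runs) can be sketched as a tiny harness; the names here are illustrative, not the benchmark suite's actual code.

```typescript
// Run `task` once as warmup (discarded), then time `runs` executions
// and report the median, which is robust to one-off outliers.
function medianOfRuns(task: () => void, runs = 5, warmup = 1): number {
  for (let i = 0; i < warmup; i++) task();
  const times: number[] = [];
  for (let i = 0; i < runs; i++) {
    const start = performance.now();
    task();
    times.push(performance.now() - start);
  }
  times.sort((a, b) => a - b);
  return times[Math.floor(times.length / 2)]; // median for an odd run count
}
```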
One odd thing: edit-xlsx, a Rust library, was showing suspiciously fast read times. I didn't understand why at this point. The explanation came during the v0.5.0 work (covered below).
v0.4.0 shipped Tuesday afternoon. This one was about features, not performance.
I went through what other Excel libraries supported and listed what SheetKit was still missing. Shapes, slicers, form controls, threaded comments, VBA extraction, a CLI. I also added 54 more formula functions (total: 164), mostly financial and engineering.
Same orchestrator/sub-agent setup as before: write a detailed plan for each feature, have the agents implement in parallel, agent review first, then my review.
Memory optimization continued on the side. Reworking the Cell struct and SST memory layout cut RSS from 349 MB to 195 MB for sync reads (44% drop). Async reads: 17 MB.
I also set up a VitePress documentation site around this time.
v0.5.0 went out this evening. Unlike the previous releases, which added features on top of the same API shape, this one changed the Node.js API structure and parts of the Rust core.
Before v0.5.0, open() parsed every XML part upfront. Open a 50k-row file and all sheets load into memory, even the ones you never touch. Now there are three read modes:
- lazy (default): reads the ZIP index and metadata only. Sheets parse on first access.
- eager: the old behavior. Parse everything immediately.
- stream: forward-only, bounded memory.

Lazy open costs less than 30% of eager, and pre-access memory is under 20% of eager. Auxiliary parts (comments, charts, images, pivot tables) also defer parsing until you actually call a method that needs them.
Forward-only reader for large files. One batch in memory at a time.
```typescript
const wb = await Workbook.open("huge.xlsx", { readMode: "stream" });
const reader = await wb.openSheetReader("Sheet1", { batchSize: 1000 });
for await (const batch of reader) {
  for (const row of batch) {
    // process
  }
}
```
When you save a lazily-opened workbook, unchanged sheets pass through directly from the original ZIP entry. No parse-serialize round trip. At work I generate files by opening a template, filling in a few cells, and sending it back. That's exactly the workload this helps.
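The pass-through save can be sketched as follows (names and shapes are illustrative, not SheetKit's API): only parts marked dirty go through serialization; everything else is copied byte-for-byte from the original archive.

```typescript
// Copy-on-write save: `original` maps part names to their original
// bytes; `dirty` maps changed part names to a re-serializer. Untouched
// parts skip the parse/serialize round trip entirely.
function saveParts(
  original: Map<string, Uint8Array>,
  dirty: Map<string, () => Uint8Array>,
): Map<string, Uint8Array> {
  const out = new Map<string, Uint8Array>();
  for (const [name, bytes] of original) {
    const reserialize = dirty.get(name);
    out.set(name, reserialize ? reserialize() : bytes);
  }
  return out;
}
```

For the template workload described above, only the handful of edited cells' sheet gets re-serialized, so save time stays proportional to what changed.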
edit-xlsx read anomaly: Back when I built the benchmarks, edit-xlsx was recording very fast read times on some files, and its rows/cells counts were dropping to zero.
I added comparability rules to the benchmark suite.
Then I dug into why. In SpreadsheetML, fileVersion, workbookPr, and bookViews in workbook.xml are optional. edit-xlsx 0.4.x treats them as required. When deserialization fails on a file missing these elements, it falls back to a default struct: rows=0, cells=0, near-zero runtime. It was fast because it wasn't reading anything.
SheetKit now writes default values for fileVersion and workbookPr (matching Excel's own defaults) when they're absent, for compatibility.
In some write scenarios, the Node.js bindings beat native Rust.
| Scenario | Rust | Node.js | Overhead |
|---|---|---|---|
| Write 50k rows × 20 cols | 544 ms | 469 ms | −14% (Node.js faster) |
| Write 20k text-heavy rows | 108 ms | 86 ms | −20% (Node.js faster) |
This happens because V8 is very good at string interning and memory management when building SST data through the batch API (setSheetData). The napi crossing costs less than what V8 saves. I did not expect to see negative overhead, but here we are.
I replaced our previous library with SheetKit at work. Template generation and bulk upload processing have been running fine.
Where it stands today (February 14th):
Read overhead (Node.js vs. Rust): ~4%. Some write scenarios are faster from Node.js. Details at sheetkit.dev.
The library is still experimental and APIs may change. I'll keep using it in production, measuring, and fixing things as they come up. Issues and PRs are always welcome.
This covered the what and when. The next post is about the how: orchestrator/sub-agent structure, how I used Claude Code and Codex, the agentic code review loop, where I had to step in, and what I'd do differently.