Evan Prodromou
@evan@cosocial.ca
Open Source developers only: how much time per month do you budget to maintain a one-developer project?
| Option | Voters |
|---|---|
| 4 hours or less | 33 (45%) |
| About 8 hours | 18 (24%) |
| About 16 hours | 14 (19%) |
| 32 hours or more | 9 (12%) |


@hongminhee@hollo.social · 1033 following · 1583 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
An intersectional feminist and socialist living in Seoul. Spouse of 金剛兔 (@tokolovesme). Maintainer of @fedify, @hollo, and @botkit. Makes free software in #TypeScript, #Haskell, #Rust, #Python, and more.
Website · GitHub · Blog · Hackers' Pub
@hongminhee@hollo.social · Reply to silverpill's post
@silverpill Well, Fedify currently relies heavily on a JSON-LD processor, so it seems too early for that. 🤔
@nebuleto@hackers.pub
These days I work primarily in TypeScript on Node.js. I needed to handle bulk uploads of large Excel data and dynamically generate template Excel files to collect that data. Those templates had to include data validation, conditional formatting, dropdowns, and so on.
The existing Node.js Excel libraries each had problems. One split its functionality between a community edition and a paid edition, which meant features I needed were locked away. The other had a gap between its internal implementation and its TypeScript typings, and it was too slow for what I was trying to do. Pull requests had piled up in the repository, but the project was no longer being maintained.
I had known about Excelize, the Go library, for a while. Charts, conditional formatting, formulas, data validation: it covers a lot of the OOXML spec and does it well. I kept thinking I wanted something at that level in TypeScript.
Coding agents have gotten noticeably better in the past year or so, and I wanted to try a specific way of working: I make all the design and architecture decisions, and agents handle the implementation. On Wednesday of last week (February 4th) I started analyzing Excelize and other Excel libraries. By Saturday night (February 7th) I was writing code.
That's SheetKit.
This is the first of two posts. This one covers what SheetKit is and how the week went, from first release to the v0.5.0 I shipped this evening (February 14th). The second post will be about working with coding agents: what I delegated, how, and where it broke down.
Dates are crates.io / npm publish timestamps. Approximate, not to-the-minute.
| Version | When | Date | What |
|---|---|---|---|
| v0.1.0 | Sunday (last week) | 2026-02-08 | First publish (initial form) |
| v0.1.2 | Monday early morning (last week) | 2026-02-09 | First snapshot worth calling a public release |
| v0.2.0 | Monday morning (last week) | 2026-02-09 | Buffer I/O, formula helpers |
| v0.3.0 | Tuesday early morning (last week) | 2026-02-10 | Raw buffer FFI, batch APIs, benchmark suite |
| v0.4.0 | Tuesday afternoon (last week) | 2026-02-10 | Feature expansion + documentation site |
| v0.5.0 | Saturday evening (today) | 2026-02-14 | Lazy loading / streaming, COW save, benchmark rule improvements |
SheetKit is a Rust spreadsheet library for OOXML formats (.xlsx, .xlsm, etc.) with Node.js bindings via napi-rs. Bun and Deno work too, since they support Node-API.
.xlsx files are ZIP archives containing XML parts. SheetKit opens the ZIP, deserializes each XML part into Rust structs, lets you manipulate them, and serializes everything back on save.
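If you want to see that structure for yourself, you don't need SheetKit at all; any ZIP reader will do. A quick sketch with the jszip package (not part of SheetKit, just for poking at the file):

```typescript
// List the XML parts inside an .xlsx using jszip (illustration only; SheetKit
// does this internally with its own ZIP and XML handling on the Rust side).
import { readFile } from "node:fs/promises";
import JSZip from "jszip";

const zip = await JSZip.loadAsync(await readFile("report.xlsx"));
for (const name of Object.keys(zip.files)) {
  console.log(name); // e.g. [Content_Types].xml, xl/workbook.xml, xl/worksheets/sheet1.xml
}
```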
Three crates on the Rust side:
- sheetkit-xml: Low-level XML data structures mapping to OOXML schemas
- sheetkit-core: All business logic
- sheetkit: Facade crate for library consumers

Node.js bindings live in packages/sheetkit and expose the Rust API via #[napi] macros.
To get started: sheetkit.dev/getting-started.
I started coding Saturday night (February 7th) and pushed v0.1.0 the next day. By early Monday morning I had v0.1.2, which was the first version I'd actually call releasable.
I had spent Wednesday analyzing the OOXML spec and how existing libraries implemented features, so by Saturday I had a detailed plan ready. I handed implementation to coding agents (Claude Code and Codex). The setup was: a main orchestrator agent receives the plan, then spawns sub-agents in parallel for each feature area. It burns through tokens fast, but it gets a large plan done quickly. After the agents finish, a separate agent does code review before I look at it.
More on this workflow in the next post.
v0.1.2 was an MVP. It had 44,000+ lines, 1,533 tests, 110 formula functions, charts, images, conditional formatting, data validation, StreamWriter, and builds for 8 platform targets. But it could only read/write via file paths (no Buffer I/O), and I hadn't measured performance at all. It worked, but that was about it.
v0.2.0 went up Monday morning, a few hours after v0.1.2.
I added Buffer I/O: read and write .xlsx directly from in-memory buffers, no filesystem needed. In a server you're usually processing binary from an HTTP request or streaming a generated file back in the response, so this had to come early. fill_formula and other formula helpers went in at the same time.
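The server-side shape looks roughly like this. The method names below are placeholders to show the flow, not the exact API; see sheetkit.dev for the real signatures:

```typescript
// Sketch of the Buffer I/O flow: bytes in from the request, bytes out to the
// response, no temp files. openFromBuffer/toBuffer are placeholder names.
import { Workbook } from "sheetkit"; // import path assumed

async function handleUpload(body: Buffer): Promise<Buffer> {
  const wb = await Workbook.openFromBuffer(body); // parse the uploaded .xlsx from memory
  // ...read or fill cells here...
  return wb.toBuffer(); // serialize back to a Buffer for the HTTP response
}
```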
With Buffer I/O in place I could run tests closer to real production workloads. That's where the problems showed up.
The initial implementation created a JS object per cell and passed it across the Rust/JS FFI boundary. Pull a 50k×20 sheet as a row array and that's a million-plus JS objects. GC pressure and memory usage went through the roof.
I got the idea from oxc, which transfers Rust AST data to JS as raw buffers instead of object trees. SheetKit does the same: instead of a JS object per cell, sheet data crosses the FFI boundary as a single packed buffer.
The encoder picks dense or sparse layout automatically based on cell occupancy (threshold: 30%). Since the JS side receives a raw buffer, I also wrote a TypeScript parser for the format.
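The layout decision itself is nothing exotic; it's an occupancy check. A minimal sketch (the real encoder is on the Rust side, and the actual buffer format isn't shown here):

```typescript
// Illustrative only: the occupancy check behind the dense-vs-sparse choice.
// Dense lays out one slot per cell in the used rectangle; sparse keeps entries
// only for cells that exist, which wins when most of the sheet is empty.
const DENSE_THRESHOLD = 0.3; // 30% of cells occupied

type Layout = "dense" | "sparse";

function chooseLayout(occupiedCells: number, rows: number, cols: number): Layout {
  return occupiedCells / (rows * cols) >= DENSE_THRESHOLD ? "dense" : "sparse";
}
```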
v0.3.0 shipped the first version of this buffer protocol. v0.5.0 later replaced it with a v2 format that supports inline strings and incremental row-by-row decoding.
I also made changes in the Rust XML layer. The goal was fewer heap allocations and simpler hot paths.
| Change | Why |
|---|---|
| Cell references ("A1") stored as [u8; 10] inline arrays, not heap Strings | Max cell ref is "XFD1048576" (10 bytes), so there's no need for the heap |
| Cell type attribute normalized to a 1-byte enum | Stops carrying raw XML attribute strings around |
| Binary search for cells within a row, replacing linear scan | O(log n) lookups instead of O(n) scans |
| Metric | Before | After |
|---|---|---|
| Memory (RSS) at 100k rows | 361 MB | 13.5 MB |
| Node.js read overhead vs. native Rust | — | ~4% |
| GC pressure | 1M+ object creations | Single buffer transfer |
This is when I built the benchmark suite, comparing SheetKit against existing Node.js and Rust libraries. The runner outputs Markdown with environment info, iteration counts, and raw numbers.
Setup: Apple M4 Pro, 24 GB / Node v25.3.0 / Rust 1.93.0. Median of 5 runs after 1 warmup. RSS/heapUsed are residual deltas (before vs. after), not peaks. Fixtures are generated deterministically; row counts include the header.
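The residual-delta measurement is roughly this, assuming the benchmark process runs with --expose-gc so the GC can be forced before each sample (a sketch, not the actual runner code):

```typescript
// Sample memory after a forced GC, run the workload, force GC again, and
// report the difference. This captures what's still retained, not the peak.
async function measureResidual(workload: () => Promise<void>) {
  const gc = (globalThis as { gc?: () => void }).gc; // available with --expose-gc
  gc?.();
  const before = process.memoryUsage();
  await workload();
  gc?.();
  const after = process.memoryUsage();
  return {
    rssDeltaMB: (after.rss - before.rss) / 1024 / 1024,
    heapUsedDeltaMB: (after.heapUsed - before.heapUsed) / 1024 / 1024,
  };
}
```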
50k rows × 20 columns: SheetKit read 541 ms, write 469 ms. The JS-only libraries: 1.24–1.56s read, 1.09–2.62s write. heapUsed delta: 0 MB, which confirmed that the JS side was no longer accumulating objects.
One odd thing: edit-xlsx, a Rust library, was showing suspiciously fast read times. I didn't understand why at this point. The explanation came during the v0.5.0 work (covered below).
v0.4.0 shipped Tuesday afternoon. This one was about features, not performance.
I went through what other Excel libraries supported and listed what SheetKit was still missing. Shapes, slicers, form controls, threaded comments, VBA extraction, a CLI. I also added 54 more formula functions (total: 164), mostly financial and engineering.
Same orchestrator/sub-agent setup as before: write a detailed plan for each feature, have the agents implement in parallel, agent review first, then my review.
Memory optimization continued on the side. Reworking the Cell struct and SST memory layout cut RSS from 349 MB to 195 MB for sync reads (44% drop). Async reads: 17 MB.
I also set up a VitePress documentation site around this time.
v0.5.0 went out this evening. Unlike the previous releases, which added features on top of the same API shape, this one changed the Node.js API structure and parts of the Rust core.
Before v0.5.0, open() parsed every XML part upfront. Open a 50k-row file and all sheets load into memory, even the ones you never touch. Now there are three read modes:
- lazy (default): reads ZIP index and metadata only. Sheets parse on first access.
- eager: the old behavior. Parse everything immediately.
- stream: forward-only, bounded memory.

Lazy open costs less than 30% of eager, and pre-access memory is under 20% of eager. Auxiliary parts (comments, charts, images, pivot tables) also defer parsing until you actually call a method that needs them.
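In code, the mode is just an option on open(); leaving it out gives you lazy (a minimal sketch):

```typescript
// readMode controls how much work open() does up front.
const lazyWb   = await Workbook.open("big.xlsx");                         // default: ZIP index + metadata only
const eagerWb  = await Workbook.open("big.xlsx", { readMode: "eager" });  // old behavior: parse everything now
const streamWb = await Workbook.open("big.xlsx", { readMode: "stream" }); // forward-only, bounded memory (below)
```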
Stream mode gives you a forward-only reader for large files, with one batch in memory at a time:
```typescript
const wb = await Workbook.open("huge.xlsx", { readMode: "stream" });
const reader = await wb.openSheetReader("Sheet1", { batchSize: 1000 });
for await (const batch of reader) {
  for (const row of batch) {
    // process
  }
}
```
Copy-on-write save: when you save a lazily-opened workbook, unchanged sheets pass through directly from the original ZIP entry. No parse-serialize round trip. At work I generate files by opening a template, filling in a few cells, and sending it back. That's exactly the workload this helps.
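The template workload looks roughly like this. setCellValue and toBuffer here are placeholder names to show the shape, not the confirmed API:

```typescript
// Open the template lazily, touch a few cells, send the bytes back.
// Only the modified sheet gets parsed and re-serialized; everything else is
// copied straight from the original ZIP entries on save.
async function fillInvoice(customerName: string): Promise<Buffer> {
  const template = await Workbook.open("invoice-template.xlsx"); // lazy by default
  template.setCellValue("Invoice", "B2", customerName); // placeholder method name
  return template.toBuffer();                            // placeholder method name
}
```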
edit-xlsx read anomaly: back when I built the benchmarks, edit-xlsx was recording very fast read times on some files, and its reported row and cell counts were dropping to zero.
I added comparability rules to the benchmark, then dug into why. In SpreadsheetML, fileVersion, workbookPr, and bookViews in workbook.xml are optional. edit-xlsx 0.4.x treats them as required. When deserialization fails on a file missing these elements, it falls back to a default struct: rows=0, cells=0, near-zero runtime. It was fast because it wasn't reading anything.
SheetKit now writes default values for fileVersion and workbookPr (matching Excel's own defaults) when they're absent, for compatibility.
In some write scenarios, the Node.js bindings beat native Rust.
| Scenario | Rust | Node.js | Overhead |
|---|---|---|---|
| Write 50k rows × 20 cols | 544 ms | 469 ms | −14% (Node.js faster) |
| Write 20k text-heavy rows | 108 ms | 86 ms | −20% (Node.js faster) |
This happens because V8 is very good at string interning and memory management when building SST data through the batch API (setSheetData). The napi crossing costs less than what V8 saves. I did not expect to see negative overhead, but here we are.
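For reference, the batch path in those rows is setSheetData: build the rows in JS and hand the whole block over in one call. The exact row shape and the save call below are assumptions, not the documented API:

```typescript
// One FFI call for the whole sheet instead of one per cell. The string-heavy
// values end up in the shared string table (SST) on the Rust side.
const wb = await Workbook.open("template.xlsx");
const rows = Array.from({ length: 50_000 }, (_, i) =>
  [i, `customer-${i}`, `note for row ${i}`, Math.round(Math.random() * 1000)]
);
wb.setSheetData("Sheet1", rows); // row shape assumed
await wb.save("out.xlsx");       // save method name assumed
```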
I replaced our previous library with SheetKit at work. Template generation and bulk upload processing have been running fine.
Where it stands today (February 14th):
Read overhead (Node.js vs. Rust): ~4%. Some write scenarios are faster from Node.js. Details at sheetkit.dev.
The library is still experimental and APIs may change. I'll keep using it in production, measuring, and fixing things as they come up. Issues and PRs are always welcome.
This covered the what and when. The next post is about the how: orchestrator/sub-agent structure, how I used Claude Code and Codex, the agentic code review loop, where I had to step in, and what I'd do differently.

@hongminhee@hollo.social
Someone who enthusiastically uses an open source project I made and actively reports issues is an absolute gem. I'm so grateful for them.
@Jose_A_Alonso@mathstodon.xyz
Learning Lean (part 1). ~ Rado Kirov. https://rkirov.github.io/posts/lean1 #ITP #LeanProver #Math

@hongminhee@hollo.social
I want to delete my Threads account, but I can't because I occasionally need it to test Fedify/Hollo…

@hongminhee@hollo.social
We have only 4 issues left until the Fedify 2.0 milestone!
@gaebalgom@hackers.pub
Welcome to gaji,
Type-safe GitHub Actions
Write GitHub Actions workflows in TypeScript with full type safety
@xoofx@mastodon.social
Heya! 🥳 XenoAtom.CommandLine 2.0 is out: https://xenoatom.github.io/commandline/
It brings XenoAtom.Terminal.UI integration, validation, option constraints, and pluggable output rendering. After a month of terminal-focused work, it finally feels complete! 😅
I'm also consolidating my sites with shared templates via my Lunet generator, and I'm hoping to share it more broadly later this year.
Finally, the XenoAtom umbrella now has a landing page: https://xenoatom.github.io/ 🤩
@0xabad1dea@infosec.exchange
So You Want To Write An Open Source Discord Replacement
Things you don’t need:
- federation/distributed systems
- multiparty end-to-end encryption
- an entirely new operating system kernel specially designed to—
Things you DO need:
- a user interface that is Normal
- the ability to use languages other than English and writing systems other than Latin
- higher standards of user experience than how irc actually works in the real world
- any fucking clue how Discord works and why people use it
I have muted replies to this post due to the usual reasons
@box464@mastodon.social · Reply to Box464's post
My code is ugly, but it works! And I didn't have to learn all the intricacies of every one-off scenario between every single platform. Fedify hides that from me, and I'm so glad it does. I have no desire to go that deep.
2/2
@box464@mastodon.social
Using @fedify to see how far I can get with very simplistic AP objects and activities. The tutorial and documentation were helpful and gave me a solid template to work from.
The fedify CLI is a great debugging tool, too.
The built-in tunnel command makes it a snap to spin up temporary servers that are open to the public web for testing. Here I have two instances spun up so I can send activities between them.
1/2

@hongminhee@hollo.social · Reply to Doug Webb's post
@douginamug Of course, thanks!

@hongminhee@hollo.social
Fedify 2.0 will probably be out by the end of February. No, it has to be.

@hongminhee@hollo.social · Reply to dansup's post
@dansup Bower? Haven't heard of it in a decade… 😂
@TypeScript@fosstodon.org
TypeScript 6.0 beta is now published!
This release brings
- inference improvements for functions
- updates to package.json 'imports'
- Temporal APIs
- alignments for the upcoming TypeScript 7.0
- & more!
Try it today!
https://devblogs.microsoft.com/typescript/announcing-typescript-6-0-beta/
@julian@fietkau.social · Reply to Fedi.Tips's post
@FediTips Re: reply controls.
GoToSocial came up with a way (https://docs.gotosocial.org/en/latest/federation/interaction_controls/) to do this. It doesn't “solve” malicious servers, but it lets benevolent servers honor each other's inhabitants' wishes.
I'm drafting a “Fediverse Enhancement Proposal” document to make it easier for other projects to join GTS. It's progressing, but I have day job stuff etc. It might help to add a few collaborators.
Anyone comfortable w/ technical specs similar to this https://fediverse.codeberg.page/fep/fep/044f/ & want to help?

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
Sneak peek.

@hongminhee@hollo.social
Working on @fedify/debugger, an embedded ActivityPub debug dashboard for Fedify applications. It will be shipped with Fedify 2.0.
@lobsters@mastodon.social
How to level up the fediverse https://lobste.rs/s/gx9hvu #video #distributed
https://fosdem.org/2026/schedule/event/HVJRNV-how_to_level_up_the_fediverse/

@grahamperrin@bsd.cafe
RE: https://mastodon.social/@lobsters/115882407207303960
Good reads:
― the 2024 blog post
― the 2026 discussion in Lobsters.
Incidentally, I did stop using Discord. No regrets.
@lobsters@mastodon.social
The rise (and future fall) of Discord https://lobste.rs/s/r4wccr #culture #historical
https://slugcat.systems/post/24-12-12-the-rise-and-future-fall-of-discord/
@xoofx@mastodon.social · Reply to Alexandre Mutel's post
Just promoted XenoAtom.Terminal.UI to 1.0! 🎉
I have added 2 new features from the preview: placeholder and brush gradients usable with text controls! 🎨
I'm going to see if I can add an extension to the XenoAtom.CommandLine library to generate beautiful command-line help, and then I'll hopefully be done with this entire sidetrack of projects! ☺️
@jiyu@hackers.pub
I'm curious how different implementations handle accounts whose URIs are unique but whose WebFinger handles are identical...

@hongminhee@hollo.social · Reply to pkg update's post
@pkgupdt Hmm, it's more that an ActivityPub implementation shouldn't depend on cached account data. 🤔

@hongminhee@hollo.social · Reply to pkg update's post
@pkgupdt Actually, domain name reuse is possible. If you send a Delete activity for every account before shutting the service down (the so-called self-destruct), it definitely works, and even if you don't, it should become possible after some time has passed.
@haskell@fosstodon.org
“Well-Typed are delighted to announce a release preview of hs-bindgen, a tool for automatic Haskell binding generation from C header files”
Go try it out and give feedback!
@cheeaun@mastodon.social · Reply to Derek's post
@deach I'm not particularly knowledgeable about them. I used to spend a lot of time on implementing localization stuff, but not much on vertical text unfortunately.
This reminded me of articles by @huijing which have a lot more details and background:
- https://chenhuijing.com/blog/chinese-web-typography/ (2016)
- https://chenhuijing.com/blog/vertical-typesetting-revisited/ (2017)