Hello, I'm an open source software engineer in my late 30s living in #Seoul, #Korea, and an avid advocate of #FLOSS and the #fediverse.
I'm the creator of @fedify, an #ActivityPub server framework in #TypeScript, @hollo, ActivityPub-enabled microblogging software for single users, and @botkit, a simple ActivityPub bot framework.
I sent my HHKB Pro 2 to a keyboard modding service to get it lubed and have the stabilizers balanced. I just got it back and am trying it out now. It's still a little stiff since it was just lubed, but I'm happy that the stabilizer rattle is definitely gone!
I have this bad habit. When something annoys me enough times,
I end up building a library for it. This time, it was CLI validation code.
See, I spend a lot of time reading other people's code. Open source projects,
work stuff, random GitHub repos I stumble upon at 2 AM. And I kept noticing this
thing: every CLI tool has the same ugly validation code tucked away somewhere.
You know the kind:
if (!opts.server && opts.port) {
  throw new Error("--port requires --server flag");
}
if (opts.server && !opts.port) {
  opts.port = 3000; // default port
}
// wait, what if they pass --port without a value?
// what if the port is out of range?
// what if...
It's not even that this code is hard to write. It's that it's everywhere.
Every project. Every CLI tool. The same patterns, slightly different flavors.
Options that depend on other options. Flags that can't be used together.
Arguments that only make sense in certain modes.
And here's what really got me: we solved this problem years ago for other types
of data. Just… not for CLIs.
The problem with validation
There's this blog post that completely changed how I think about parsing.
It's called Parse, don't validate by Alexis King. The gist? Don't parse data
into a loose type and then check if it's valid. Parse it directly into a type
that can only be valid.
Think about it. When you get JSON from an API, you don't just parse it as any
and then write a bunch of if-statements. You use something like Zod to parse
it directly into the shape you want. Invalid data? The parser rejects it. Done.
But with CLIs? We parse arguments into some bag of properties and then spend
the next 100 lines checking if that bag makes sense. It's backwards.
So yeah, I built Optique. Not because the world desperately needed another CLI
parser (it didn't), but because I was tired of seeing—and writing—the same
validation code everywhere.
Three patterns I was sick of validating
Dependent options
This one's everywhere. You have an option that only makes sense when another
option is enabled.
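With Optique, you describe the two valid shapes up front and let or() pick one. Here's a sketch built only from the combinators that show up later in this post (or, object, map, option, integer); the real library has nicer helpers for flags and defaults, so treat the details as illustrative:

// Sketch: two mutually exclusive shapes, combined with or().
// (The --no-server flag only stands in for the "flag absent" branch here;
// Optique has proper helpers for that.)
const config = or(
  object({
    server: map(option("--server"), () => true as const),
    port: option("--port", integer())
  }),
  object({
    server: map(option("--no-server"), () => false as const)
  })
);
// Inferred result type:
//   { server: true; port: number } | { server: false }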
The type system now understands that when server is false, port literally
doesn't exist. Not undefined, not null—it's not there. Try to access it and
TypeScript yells at you. No runtime validation needed.
Mutually exclusive options
Another classic. Pick one output format: JSON, YAML, or XML. But definitely not
two.
I used to write this mess:
if ((opts.json ? 1 : 0) + (opts.yaml ? 1 : 0) + (opts.xml ? 1 : 0) > 1) {
  throw new Error('Choose only one output format');
}
(Don't judge me, you've written something similar.)
Now?
const format = or(
  map(option("--json"), () => "json" as const),
  map(option("--yaml"), () => "yaml" as const),
  map(option("--xml"), () => "xml" as const)
);
The or() combinator means exactly one succeeds. The result is just
"json" | "yaml" | "xml". A single string. Not three booleans to juggle.
Environment-specific requirements
Production needs auth. Development needs debug flags. Docker needs different
options than local. You know the drill.
Instead of a validation maze, you just describe each environment:
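Here's the shape of it, again as a sketch built from the same handful of combinators; the flag names are made up for illustration:

// Sketch: each environment is its own object parser; or() picks exactly one.
const production = object({
  mode: map(option("--production"), () => "production" as const),
  auth: option("--auth", string())   // auth is mandatory in this branch
});

const development = object({
  mode: map(option("--dev"), () => "development" as const),
  debug: map(option("--debug"), () => true as const)
});

const config = or(production, development);
// Inferred result type:
//   { mode: "production"; auth: string } | { mode: "development"; debug: true }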
No auth in production? Parser fails immediately. Trying to access --auth in
dev mode? TypeScript won't let you—the field doesn't exist on that type.
“But parser combinators though…”
I know, I know. “Parser combinators” sounds like something you'd need
a CS degree to understand.
Here's the thing: I don't have a CS degree. Actually, I don't have any degree.
But I've been using parser combinators for years because they're actually… not
that hard? It's just that the name makes them sound way scarier than they are.
I'd been using them for other stuff—parsing config files, DSLs, whatever.
But somehow it never clicked that you could use them for CLI parsing until
I saw Haskell's optparse-applicative. That was a real “wait, of course”
moment. Like, why are we doing this any other way?
Turns out it's stupidly simple. A parser is just a function. Combinators are
just functions that take parsers and return new parsers. That's it.
// This is a parser
const port = option("--port", integer());

// This is also a parser (made from smaller parsers)
const server = object({
  port: port,
  host: option("--host", string())
});

// Still a parser (parsers all the way down)
const config = or(server, client);
No monads. No category theory. Just functions. Boring, beautiful functions.
TypeScript does the heavy lifting
Here's the thing that still feels like cheating: I don't write types for my CLI
configs anymore. TypeScript just… figures it out.
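For example, here's a deploy/rollback CLI sketched with the same combinators; the option names are illustrative, not from a real project:

// Sketch: one CLI, two modes, and no hand-written types anywhere.
const deploy = object({
  action: map(option("--deploy"), () => "deploy" as const),
  environment: option("--env", string()),
  replicas: option("--replicas", integer()),
  force: map(option("--force"), () => true as const)
});

const rollback = object({
  action: map(option("--rollback"), () => "rollback" as const),
  version: option("--version", string())
});

const cli = or(deploy, rollback);
// Inferred result type:
//   { action: "deploy"; environment: string; replicas: number; force: true }
//   | { action: "rollback"; version: string }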
TypeScript knows that if action is "deploy", then environment exists but
version doesn't. It knows replicas is a number. It knows force is
a boolean. I didn't tell it any of this.
This isn't just about nice autocomplete (though yeah, the autocomplete is great).
It's about catching bugs before they happen. Forget to handle a new option
somewhere? Code won't compile.
What actually changed for me
I've been dogfooding this for a few weeks. Some real talk:
I delete code now. Not refactor. Delete. That validation logic that used to
be 30% of my CLI code? Gone. It feels weird every time.
Refactoring isn't scary. Want to know something that usually terrifies me?
Changing how a CLI takes its arguments. Like going from --input file.txt to
just file.txt as a positional argument. With traditional parsers,
you're hunting down validation logic everywhere. With this?
You change the parser definition, TypeScript immediately shows you every place
that breaks, you fix them, done. What used to be an hour of “did I catch
everything?” is now “fix the red squiggles and move on.”
My CLIs got fancier. When adding complex option relationships doesn't mean
writing complex validation, you just… add them. Mutually exclusive groups?
Sure. Context-dependent options? Why not. The parser handles it.
But honestly? The biggest change is trust. If it compiles, the CLI logic works.
Not “probably works” or “works unless someone passes weird arguments.”
It just works.
Should you care?
If you're writing a 10-line script that takes one argument, you don't need this.
process.argv[2] and call it a day.
But if you've ever:
Had validation logic get out of sync with your actual options
Discovered in production that certain option combinations explode
Spent an afternoon tracking down why --verbose breaks when used with
--json
Written the same “option A requires option B” check for the fifth time
Then yeah, maybe you're tired of this stuff too.
Fair warning: Optique is young. I'm still figuring things out, and the API might
shift a bit. But the core idea—parse, don't validate—that's solid.
And I haven't written validation code in months.
Still feels weird. Good weird.
Try it or don't
If this resonates:
Tutorial: Build something real, see if you hate it
I'm not saying Optique is the answer to all CLI problems. I'm just saying
I was tired of writing the same validation code everywhere, so I built something
that makes it unnecessary.
Take it or leave it. But that validation code you're about to write?
You probably don't need it.
I got suddenly inspired yesterday to build an email sending library for Node.js/Deno/Bun/edge functions. Meet Upyo: a TypeScript-first email library with a unified API that works across all JavaScript runtimes. It features pluggable transports (SMTP and Mailgun so far), built-in connection pooling, and comprehensive type safety. Still early days but already loving how clean the API turned out!
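Here's roughly what sending a message looks like; the package and field names below are from memory rather than the docs, so double-check them before copying:

// Rough sketch of sending mail over SMTP with Upyo; exact field names
// may differ from the published API.
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";

const message = createMessage({
  from: "sender@example.com",
  to: "recipient@example.net",
  subject: "Hello from Upyo",
  content: { text: "Same code on Node.js, Deno, Bun, and edge functions." },
});

const transport = new SmtpTransport({ host: "smtp.example.com", port: 587 });
const receipt = await transport.send(message);
console.log(receipt.successful);  // receipt shape assumed; check the docs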
If you're interested in building your own #ActivityPub server but don't know where to start, I recommend checking out #Fedify's #tutorial, Creating your own federated microblog. It provides a comprehensive, step-by-step guide that walks you through building a fully functional federated application. Perfect for developers who want to dive into the #fediverse!
Now up on #PhanpySocialDev https://dev.phanpy.social/ - give it a try 🙇♂️
- Link hidden inside Settings, not the nav menu
- Not localized yet, still experimental, things might change or break later
- The 3D grid background was fun 🙈
ALT text: Demo of navigating around the 'Year In Posts' feature on Phanpy
I've opened a proposal for #LogTape to support configuration from plain objects, making it possible to load #logging configs from JSON/YAML/TOML files.
The idea is similar to Python's logging.config.dictConfig()—you'd be able to configure sinks, formatters, and loggers declaratively, making it easier to manage different configs for dev/staging/prod without touching code.
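To make it concrete, the kind of plain-object config I have in mind might look something like this; the shape and the loading function are hypothetical, which is exactly what the proposal is meant to nail down:

// Hypothetical plain-object config: sinks and formatters referenced by
// name instead of by function, so the whole thing can live in JSON/YAML/TOML.
const config = {
  sinks: {
    console: { type: "console", formatter: "ansiColor" },
    file: { type: "file", path: "/var/log/app.log" },
  },
  loggers: [
    { category: ["my-app"], sinks: ["console", "file"], lowestLevel: "info" },
  ],
};
// A future configureFromObject(config) would translate this into the existing
// configure() call; the function name here is purely illustrative.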
Would love to hear your thoughts, especially if you've worked with similar patterns in other ecosystems.
There was obviously plenty of deliberation behind the decision to shut down a public server one has been running, so it's hard to quibble over the circumstances of the closure. From the perspective of people who have already settled in, it's also pretty galling that some folks only show up when Twitter blows up and never log in otherwise. And you can't exactly visit every single member at home to deliver the shutdown notice in person… Part of me thinks that if this is how it goes, people ought to open public instances with a bit more sense of responsibility, but if that kind of responsibility or funding becomes a huge barrier to entry, it would turn into a space where casual conversation is no longer possible… That's the kind of complicated problem we're dealing with…
ALT text: A Windows computer screen displaying the “Reset this PC” interface with a purple background. The message reads “Getting things ready. This won't take long” with a loading hourglass icon. A “Cancel” button is visible in the bottom right corner.
It's purely functional, with no monads, no ownership, and no type classes or traits. No object system either, of course. Instead, I plan to add generics and algebraic effects. One of the goals is to see how far you can go without ad-hoc polymorphism, and it looks more feasible than I expected.
And it's very much vibe-coded. I probably wouldn't have dared to attempt it without Claude Code and Codex.
Syntactically it draws a lot of inspiration from Rust and Gleam, and semantically from Gleam and Unison. In fact, part of the reason I started the project is that neither Gleam nor Unison can compile to native binaries yet. That said, Tribute's first target isn't native either but WebAssembly 3.0, because I couldn't be bothered to write a GC implementation.
I don't know if people are aware of this Firefox add-on that helps you discover personal websites with verified identity links to Mastodon profiles: StreetPass for Mastodon.
It's pretty good; it has let me find quite a number of people just from happening to read their blogs.
Fedify 1.10.0: Observability foundations for the future debug dashboard
Fedify is a #TypeScript framework for building #ActivityPub servers that participate in the #fediverse. It reduces the complexity and boilerplate typically required for ActivityPub implementation while providing comprehensive federation capabilities.
We're excited to announce #Fedify 1.10.0, a focused release that lays critical groundwork for future debugging and observability features. Released on December 24, 2025, this version introduces infrastructure improvements that will enable the upcoming debug dashboard while maintaining full backward compatibility with existing Fedify applications.
This release represents a transitional step toward Fedify 2.0.0, introducing optional capabilities that will become standard in the next major version. The changes focus on enabling richer observability through OpenTelemetry enhancements and adding prefix scanning capabilities to the key–value store interface.
Enhanced OpenTelemetry instrumentation
Fedify 1.10.0 significantly expands OpenTelemetry instrumentation with span events that capture detailed ActivityPub data. These enhancements enable richer observability and debugging capabilities without relying solely on span attributes, which are limited to primitive values.
The new span events provide complete activity payloads and verification status, making it possible to build comprehensive debugging tools that show the full context of federation operations:
activitypub.activity.received event on activitypub.inbox span — records the full activity JSON, verification status (activity verified, HTTP signatures verified, Linked Data signatures verified), and actor information
activitypub.activity.sent event on activitypub.send_activity span — records the full activity JSON and target inbox URL
activitypub.object.fetched event on activitypub.lookup_object span — records the fetched object's type and complete JSON-LD representation
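As a sketch of how a debugging tool might consume these, here's a minimal OpenTelemetry span processor that logs the received-activity event; the span and event names come from the list above, and the rest is ordinary @opentelemetry/sdk-trace-base usage:

import type { Context } from "@opentelemetry/api";
import type {
  ReadableSpan,
  Span,
  SpanProcessor,
} from "@opentelemetry/sdk-trace-base";

// Minimal sketch: watch finished activitypub.inbox spans and log the
// attributes attached to the activitypub.activity.received event.
class InboxActivityLogger implements SpanProcessor {
  onStart(_span: Span, _parentContext: Context): void {}
  onEnd(span: ReadableSpan): void {
    if (span.name !== "activitypub.inbox") return;
    for (const event of span.events) {
      if (event.name === "activitypub.activity.received") {
        console.log("Received activity:", event.attributes);
      }
    }
  }
  async shutdown(): Promise<void> {}
  async forceFlush(): Promise<void> {}
}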
Additionally, Fedify now instruments previously uncovered operations:
activitypub.fetch_document span for document loader operations, tracking URL fetching, HTTP redirects, and final document URLs
activitypub.verify_key_ownership span for cryptographic key ownership verification, recording actor ID, key ID, verification result, and the verification method used
These instrumentation improvements emerged from work on issue #234 (Real-time ActivityPub debug dashboard). Rather than introducing a custom observer interface as originally proposed in #323, we leveraged Fedify's existing OpenTelemetry infrastructure to capture rich federation data through span events. This approach provides a standards-based foundation that's composable with existing observability tools like Jaeger, Zipkin, and Grafana Tempo.
Distributed trace storage with FedifySpanExporter
Building on the enhanced instrumentation, Fedify 1.10.0 introduces FedifySpanExporter, a new OpenTelemetry SpanExporter that persists ActivityPub activity traces to a KvStore. This enables distributed tracing support across multiple nodes in a Fedify deployment, which is essential for building debug dashboards that can show complete request flows across web servers and background workers.
The new @fedify/fedify/otel module provides the following types and interfaces:
import { MemoryKvStore } from "@fedify/fedify";
import { FedifySpanExporter } from "@fedify/fedify/otel";
import {
  BasicTracerProvider,
  SimpleSpanProcessor,
} from "@opentelemetry/sdk-trace-base";

const kv = new MemoryKvStore();
const exporter = new FedifySpanExporter(kv, {
  ttl: Temporal.Duration.from({ hours: 1 }),
});

const provider = new BasicTracerProvider();
provider.addSpanProcessor(new SimpleSpanProcessor(exporter));
The stored traces can be queried for display in debugging interfaces:
// Get all activities for a specific trace
const activities = await exporter.getActivitiesByTraceId(traceId);

// Get recent traces with summary information
const recentTraces = await exporter.getRecentTraces({ limit: 100 });
The exporter supports two storage strategies depending on the KvStore capabilities. When the list() method is available (preferred), it stores individual records with keys like [prefix, traceId, spanId]. When only cas() is available, it uses compare-and-swap operations to append records to arrays stored per trace.
This infrastructure provides the foundation for implementing a comprehensive debug dashboard as a custom SpanExporter, as outlined in the updated implementation plan for issue #234.
Optional list() method for KvStore interface
Fedify 1.10.0 adds an optional list() method to the KvStore interface for enumerating entries by key prefix. This method enables efficient prefix scanning, which is useful for implementing features like distributed trace storage, cache invalidation by prefix, and listing related entries.
When the prefix parameter is omitted or empty, list() returns all entries in the store. This is useful for debugging and administrative purposes. All official KvStore implementations have been updated to support this method:
MemoryKvStore — filters in-memory keys by prefix
SqliteKvStore — uses LIKE query with JSON key pattern
PostgresKvStore — uses array slice comparison
RedisKvStore — uses SCAN with pattern matching and key deserialization
DenoKvStore — delegates to Deno KV's built-in list() API
While list() is currently optional to give existing custom KvStore implementations time to add support, it will become a required method in Fedify 2.0.0 (tracked in issue #499). This migration path allows implementers to gradually adopt the new capability throughout the 1.x release cycle.
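For authors of custom backends, the work is mostly a matter of exposing whatever prefix scan the underlying storage already offers. As a rough illustration only, here's how an in-memory store that keeps its keys as JSON-encoded arrays could enumerate entries under a prefix; the exact signature and return type of KvStore.list() should be taken from the Fedify API reference, not from this sketch:

// Sketch: prefix matching for a Map<string, unknown> whose keys are
// JSON-encoded key arrays. An empty prefix matches every entry.
async function* listByPrefix(
  entries: Map<string, unknown>,
  prefix: readonly string[] = [],
): AsyncIterable<{ key: readonly string[]; value: unknown }> {
  for (const [serialized, value] of entries) {
    const key = JSON.parse(serialized) as readonly string[];
    if (prefix.every((part, i) => key[i] === part)) {
      yield { key, value };
    }
  }
}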
The addition of list() support was implemented in pull request #500, which also included the setup of proper testing infrastructure for WorkersKvStore using Vitest with @cloudflare/vitest-pool-workers.
NestJS 11 and Express 5 support
Thanks to a contribution from Cho Hasang (@crohasang), the @fedify/nestjs package now supports NestJS 11 environments that use Express 5. The peer dependency range for Express has been widened to ^4.0.0 || ^5.0.0, eliminating peer dependency conflicts in modern NestJS projects while maintaining backward compatibility with Express 4.
This change, implemented in pull request #493, keeps the workspace catalog pinned to Express 4 for internal development and test stability while allowing Express 5 in consuming applications.
What's next
Fedify 1.10.0 serves as a stepping stone toward the upcoming 2.0.0 release. The optional list() method introduced in this version will become required in 2.0.0, simplifying the interface contract and allowing Fedify internals to rely on prefix scanning being universally available.
The enhanced #OpenTelemetry instrumentation and FedifySpanExporter provide the foundation for implementing the debug dashboard proposed in issue #234. The next steps include building the web dashboard UI with real-time activity lists, filtering, and JSON inspection capabilities—all as a separate package that leverages the standards-based observability infrastructure introduced in this release.
Depending on the development timeline and feature priorities, there may be additional 1.x releases before the 2.0.0 migration. For developers building custom KvStore implementations, now is the time to add list() support to prepare for the eventual 2.0.0 upgrade. The implementation patterns used in the official backends provide clear guidance for various storage strategies.
Acknowledgments
Special thanks to Cho Hasang (@crohasang) for the NestJS 11 compatibility improvements, and to all community members who provided feedback and testing for the new observability features.
For the complete list of changes, bug fixes, and improvements, please refer to the CHANGES.md file in the repository.