@hongminhee@hollo.social · 998 following · 1408 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
An intersectional feminist and socialist living in Seoul. Spouse of 金剛兔 (@tokolovesme). Maintainer of @fedify, @hollo, and @botkit. Makes free software in #TypeScript, #Haskell, #Rust, #Python, and more.
Website | GitHub | Blog | Hackers' Pub
@macrumors@mastodon.social
Duolingo Used iPhone's Dynamic Island to Display Ads, Violating Apple Design Guidelines https://www.macrumors.com/2026/01/02/duolingo-dynamic-island-ad/?utm_source=dlvr.it&utm_medium=mastodon
@hongminhee@hackers.pub
Hi #fediverse! I'm working on Hackers' Pub, a small #ActivityPub-powered social platform for developers and tech folks.
We're currently drafting a content #moderation (#flag/#report) system and would really appreciate any feedback from those who have experience with federated moderation—we're still learning.
Some ideas we're exploring:
Flag activity for cross-instance reports
Our guiding principle is that moderation should be about growth, not punishment. Expulsion is the last resort.
Here's the full draft if you're curious: https://github.com/hackers-pub/hackerspub/issues/192.
If you've dealt with moderation in federated contexts, what challenges did you run into? What worked well? We'd love to hear your thoughts.

@hongminhee@hollo.social
I wrote about setting up logging that's more useful than console.log() but doesn't require a Ph.D. in configuration. Covers categories, structured logging, request tracing, and production tips.
https://hackers.pub/@hongminhee/2026/logging-nodejs-deno-bun-2026
@hongminhee@hackers.pub
It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.
We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.
I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.
The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:
No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.
Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.
A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.
“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.
Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");
// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
Now you can search for all logs where userId === 123 or filter by IP address.
In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.
There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.
So why LogTape? A few reasons stood out to me:
Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”
Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.
Let's set it up:
npm add @logtape/logtape # npm
pnpm add @logtape/logtape # pnpm
yarn add @logtape/logtape # Yarn
deno add jsr:@logtape/logtape # Deno
bun add @logtape/logtape # Bun
Configuration happens once, at your application's entry point:
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink(), // Where logs go
},
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log
],
});
// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
Notice a few things:
The configuration declares both where logs go (sinks) and which logs to show (lowestLevel). Categories are hierarchical: ["my-app", "server"] inherits settings from ["my-app"].
Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.
Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.
await configure({
sinks: {
console: getConsoleSink(),
},
loggers: [
{ category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above
{ category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too
],
});
Now when you log from different parts of your app:
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up
// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
If you're using libraries that also use LogTape, you can control their logs separately:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
// Only show warnings and above from some-library
{ category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
],
});
Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Catch all logs at info level
{ category: [], lowestLevel: "info", sinks: ["console"] },
// But show debug for your app
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
});
LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
| Level | When to use it |
|---|---|
| trace | Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
| debug | Information useful during development. Variable values, state changes, flow control decisions. |
| info | Normal operational messages. "Server started," "User logged in," "Job completed." |
| warning | Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
| error | Something failed. An operation couldn't complete, but the app is still running. |
| fatal | The app is about to crash or is in an unrecoverable state. |
const logger = getLogger(["my-app"]);
logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
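One minimal way to wire that up (a sketch of my own, not from the LogTape docs) is to branch on the environment at configuration time, so changing verbosity never means touching call sites. NODE_ENV here is just one possible switch:
import { configure, getConsoleSink } from "@logtape/logtape";
const isProduction = process.env.NODE_ENV === "production";
await configure({
  sinks: { console: getConsoleSink() },
  loggers: [
    {
      category: ["my-app"],
      // quiet in production, chatty everywhere else
      lowestLevel: isProduction ? "warning" : "debug",
      sinks: ["console"],
    },
  ],
});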
At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”
If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.
Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
logger.info("User performed action", {
userId: 123,
action: "login",
ip: "192.168.1.1",
timestamp: new Date().toISOString(),
});
You can reference properties in your message using placeholders:
logger.info("User {userId} logged in from {ip}", {
userId: 123,
ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
LogTape supports dot notation and array indexing in placeholders:
logger.info("Order {order.id} placed by {order.customer.name}", {
order: {
id: "ORD-001",
customer: { name: "Alice", email: "alice@example.com" },
},
});
logger.info("First item: {items[0].name}", {
items: [{ name: "Widget", price: 9.99 }],
});
For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink({ formatter: jsonLinesFormatter }),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console"] },
],
});
Output:
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.
Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.
This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.
The simplest sink—outputs to the console:
import { getConsoleSink } from "@logtape/logtape";
const consoleSink = getConsoleSink();
For writing logs to files, install the @logtape/file package:
npm add @logtape/file
import { getFileSink, getRotatingFileSink } from "@logtape/file";
// Simple file sink
const fileSink = getFileSink("app.log");
// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
maxSize: 10 * 1024 * 1024, // 10MB
maxFiles: 5,
});
Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.
For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";
// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";
// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.
Here's where things get interesting. You can send different logs to different destinations based on their level or category:
await configure({
sinks: {
console: getConsoleSink(),
file: getFileSink("app.log"),
errors: getSentrySink(),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file
{ category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry
],
});
Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.
Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.
A sink is just a function that takes a LogRecord. That's it:
import type { Sink } from "@logtape/logtape";
const slackSink: Sink = (record) => {
// Only send errors and fatals to Slack
if (record.level === "error" || record.level === "fatal") {
fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
}),
});
}
};
The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.
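Here's another tiny one, a sketch of my own rather than something from the docs: an in-memory sink that just collects records, which is handy for asserting on logs in tests.
import type { LogRecord, Sink } from "@logtape/logtape";
const records: LogRecord[] = [];
const memorySink: Sink = (record) => {
  records.push(record); // keep every record around for later inspection
};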
Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.
This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.
LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.
The simplest approach is to create a logger with attached properties using .with():
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
const logger = getLogger(["my-app", "http"]).with({ requestId });
logger.info`Request received`; // Includes requestId automatically
processRequest(req, logger);
logger.info`Request completed`; // Also includes requestId
}
This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).
First, enable implicit contexts in your configuration:
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
contextLocalStorage: new AsyncLocalStorage(),
});
Then use withContext() in your request handler:
import { withContext, getLogger } from "@logtape/logtape";
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
return withContext({ requestId }, async () => {
// Every log message in this callback includes requestId—automatically
const logger = getLogger(["my-app"]);
logger.info`Processing request`;
await validateInput(req); // Logs here include requestId
await processBusinessLogic(req); // Logs here too
await saveToDatabase(req); // And here
logger.info`Request complete`;
});
}
The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.
This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.
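To make that concrete, here's roughly what one of those nested modules might look like (a hypothetical validateInput sketched by me, not taken from the post):
import { getLogger } from "@logtape/logtape";
export async function validateInput(req: Request): Promise<void> {
  const logger = getLogger(["my-app", "validation"]);
  // No request ID in sight, yet the emitted record still carries
  // { requestId } thanks to the surrounding withContext() call.
  logger.debug`Validating ${req.method} ${req.url}`;
}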
Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());
// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());
// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.
If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?
LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.
The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";
const logger = getLogger(["my-library", "database"]);
export function connect(url: string) {
logger.debug`Connecting to ${url}`;
// ... connection logic ...
logger.info`Connected successfully`;
}
What happens when someone uses your library?
If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.
If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.
This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.
You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose
{ category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet
{ category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent
],
});
This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.
If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:
import { install } from "@logtape/adaptor-winston";
import winston from "winston";
install(winston.createLogger({ /* your existing config */ }));
This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.
Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.
By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.
Non-blocking mode buffers log messages and writes them in the background:
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
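If you do enable non-blocking sinks, it's worth flushing on shutdown so the buffered tail isn't dropped. A minimal sketch of mine, assuming a Node-style process and the same dispose() helper that shows up in the edge-function section below:
import { dispose } from "@logtape/logtape";
process.on("SIGTERM", async () => {
  await dispose(); // flush buffered log records and clean up sinks
  process.exit(0);
});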
Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.
LogTape's @logtape/redaction package helps you catch these before they become a problem:
import {
redactByPattern,
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
const BEARER_TOKEN_PATTERN: RedactionPattern = {
pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
replacement: "[REDACTED]",
};
const formatter = redactByPattern(defaultConsoleFormatter, [
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
BEARER_TOKEN_PATTERN,
]);
await configure({
sinks: {
console: getConsoleSink({ formatter }),
},
// ...
});
With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.
See the redaction documentation for more patterns and field-based redaction.
Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.
The solution is to explicitly flush logs before returning:
import { configure, dispose } from "@logtape/logtape";
export default {
async fetch(request, env, ctx) {
await configure({ /* ... */ });
// ... handle request ...
ctx.waitUntil(dispose()); // Flush logs before worker terminates
return new Response("OK");
},
};
The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.
Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.
LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.
If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.
Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.

@hongminhee@hollo.social
Wrote about designing type-safe sync/async mode support in TypeScript. Making object({ sync: syncParser, async: asyncParser }) automatically infer as async turned out to be trickier than expected.
https://hackers.pub/@hongminhee/2026/typescript-sync-async-type-safety
@hongminhee@hackers.pub
I recently added sync/async mode support to Optique, a type-safe CLI parser
for TypeScript. It turned out to be one of the trickier features I've
implemented—the object() combinator alone needed to compute a combined mode
from all its child parsers, and TypeScript's inference kept hitting edge cases.
Optique is a type-safe, combinatorial CLI parser for TypeScript, inspired by Haskell's optparse-applicative. Instead of decorators or builder patterns, you compose small parsers into larger ones using combinators, and TypeScript infers the result types.
Here's a quick taste:
import { object } from "@optique/core/constructs";
import { argument, option } from "@optique/core/primitives";
import { string, integer } from "@optique/core/valueparser";
import { run } from "@optique/run";
const cli = object({
name: argument(string()),
count: option("-n", "--count", integer()),
});
// TypeScript infers: { name: string; count: number | undefined }
const result = run(cli); // sync by default
The type inference works through arbitrarily deep compositions—in most cases, you don't need explicit type annotations.
Lucas Garron (@lgarron) opened an issue requesting
async support for shell completions. He wanted to provide
Tab-completion suggestions by running shell commands like
git for-each-ref to list branches and tags.
// Lucas's example: fetching Git branches and tags in parallel
const [branches, tags] = await Promise.all([
$`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
$`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
At first, I didn't like the idea. Optique's entire API was synchronous, which made it simpler to reason about and avoided the “async infection” problem where one async function forces everything upstream to become async. I argued that shell completion should be near-instantaneous, and if you need async data, you should cache it at startup.
But Lucas pushed back. The filesystem is a database, and many useful completions inherently require async work—Git refs change constantly, and pre-caching everything at startup doesn't scale for large repos. Fair point.
So, how do you support both sync and async execution modes in a composable parser library while maintaining type safety?
The key requirements were:
parse() returns T for sync parsers and Promise<T> for async parsers.
complete() returns T or Promise<T>, likewise depending on the mode.
suggest() returns Iterable<T> or AsyncIterable<T>.
Combining parsers must compute a combined mode: if any child parser is async, the combination is async.
The fourth requirement is the tricky one. Consider this:
const syncParser = flag("--verbose");
const asyncParser = option("--branch", asyncValueParser);
// What's the type of this?
const combined = object({ verbose: syncParser, branch: asyncParser });
The combined parser should be async because one of its fields is async. This means we need type-level logic to compute the combined mode.
I explored five different approaches, each with its own trade-offs.
Add a mode type parameter to Parser and use conditional types:
type Mode = "sync" | "async";
type ModeValue<M extends Mode, T> = M extends "async" ? Promise<T> : T;
interface Parser<M extends Mode, TValue, TState> {
parse(context: ParserContext<TState>): ModeValue<M, ParserResult<TState>>;
// ...
}
The challenge is computing combined modes:
type CombineModes<T extends Record<string, Parser<any, any, any>>> =
T[keyof T] extends Parser<infer M, any, any>
? M extends "async" ? "async" : "sync"
: never;
Written this way, a mixed record makes the inferred M a union ("sync" | "async"), and the inner conditional distributes over that union, so you get "sync" | "async" back instead of a single combined "async".
A variant of Option A, but place the mode parameter first with a default
of "sync":
interface Parser<M extends Mode = "sync", TValue = unknown, TState = unknown> {
readonly $mode: M;
// ...
}
The default value maintains backward compatibility—existing user code keeps working without changes.
Define completely separate Parser and AsyncParser interfaces with
explicit conversion:
interface Parser<TValue, TState> { /* sync methods */ }
interface AsyncParser<TValue, TState> { /* async methods */ }
function toAsync<T, S>(parser: Parser<T, S>): AsyncParser<T, S>;
Simpler to understand, but requires code duplication and explicit conversions.
The minimal approach. Only allow suggest() to be async:
interface Parser<TValue, TState> {
parse(context: ParserContext<TState>): ParserResult<TState>; // always sync
suggest(context: ParserContext<TState>, prefix: string):
Iterable<Suggestion> | AsyncIterable<Suggestion>; // can be either
}
This addresses the original use case but doesn't help if async parse() is
ever needed.
Use the technique from fp-ts to simulate Higher-Kinded Types:
interface URItoKind<A> {
Identity: A;
Promise: Promise<A>;
}
type Kind<F extends keyof URItoKind<any>, A> = URItoKind<A>[F];
interface Parser<F extends keyof URItoKind<any>, TValue, TState> {
parse(context: ParserContext<TState>): Kind<F, ParserResult<TState>>;
}
The most flexible approach, but with a steep learning curve.
Rather than commit to an approach based on theoretical analysis, I created a prototype to test how well TypeScript handles the type inference in practice. I published my findings in the GitHub issue:
Both approaches correctly handle the “any async → all async” rule at the type level. (…) Complex conditional types like
ModeValue<CombineParserModes<T>, ParserResult<TState>> sometimes require explicit type casting in the implementation. This only affects library internals. The user-facing API remains clean.
The prototype validated that Option B (explicit mode parameter with default) would work. I chose it for these reasons:
"sync" keeps existing code working$mode
property)CombineModes works The CombineModes type computes whether a combined parser should be sync or
async:
type CombineModes<T extends readonly Mode[]> = "async" extends T[number]
? "async"
: "sync";
This type checks if "async" is present anywhere in the tuple of modes.
If so, the result is "async"; otherwise, it's "sync".
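A quick illustration of how this resolves, relying on the Mode and CombineModes definitions above:
type AllSync = CombineModes<["sync", "sync"]>;            // resolves to "sync"
type OneAsync = CombineModes<["sync", "async", "sync"]>;  // resolves to "async"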
For combinators like object(), I needed to extract modes from parser
objects and combine them:
// Extract the mode from a single parser
type ParserMode<T> = T extends Parser<infer M, unknown, unknown> ? M : never;
// Combine modes from all values in a record of parsers
type CombineObjectModes<T extends Record<string, Parser<Mode, unknown, unknown>>> =
CombineModes<{ [K in keyof T]: ParserMode<T[K]> }[keyof T][]>;
The type system handles compile-time safety, but the implementation also needs
runtime logic. Each parser has a $mode property that indicates its execution
mode:
const syncParser = option("-n", "--name", string());
console.log(syncParser.$mode); // "sync"
const asyncParser = option("-b", "--branch", asyncValueParser);
console.log(asyncParser.$mode); // "async"
Combinators compute their mode at construction time:
function object<T extends Record<string, Parser<Mode, unknown, unknown>>>(
parsers: T
): Parser<CombineObjectModes<T>, ObjectValue<T>, ObjectState<T>> {
const parserKeys = Reflect.ownKeys(parsers);
const combinedMode: Mode = parserKeys.some(
(k) => parsers[k as keyof T].$mode === "async"
) ? "async" : "sync";
// ... implementation
}
Lucas suggested an important refinement during our
discussion. Instead of having run() automatically choose between sync and
async based on the parser mode, he proposed separate functions:
Perhaps run(…) could be automatic, and runSync(…) and runAsync(…) could enforce that the inferred type matches what is expected.
So we ended up with:
run(): automatic based on parser mode
runSync(): enforces sync mode at compile time
runAsync(): enforces async mode at compile time
// Automatic: returns T for sync parsers, Promise<T> for async
const result1 = run(syncParser); // string
const result2 = run(asyncParser); // Promise<string>
// Explicit: compile-time enforcement
const result3 = runSync(syncParser); // string
const result4 = runAsync(asyncParser); // Promise<string>
// Compile error: can't use runSync with async parser
const result5 = runSync(asyncParser); // Type error!
I applied the same pattern to parse()/parseSync()/parseAsync() and
suggest()/suggestSync()/suggestAsync() in the facade functions.
With the new API, creating an async value parser for Git branches looks like this:
import type { Suggestion } from "@optique/core/parser";
import type { ValueParser, ValueParserResult } from "@optique/core/valueparser";
function gitRef(): ValueParser<"async", string> {
return {
$mode: "async",
metavar: "REF",
parse(input: string): Promise<ValueParserResult<string>> {
return Promise.resolve({ success: true, value: input });
},
format(value: string): string {
return value;
},
async *suggest(prefix: string): AsyncIterable<Suggestion> {
const { $ } = await import("bun");
const [branches, tags] = await Promise.all([
$`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
$`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
for (const ref of [...branches.split("\n"), ...tags.split("\n")]) {
const trimmed = ref.trim();
if (trimmed && trimmed.startsWith(prefix)) {
yield { kind: "literal", text: trimmed };
}
}
},
};
}
Notice that parse() returns Promise.resolve() even though it's synchronous.
This is because the ValueParser<"async", T> type requires all methods to use
async signatures. Lucas pointed out this is a minor ergonomic issue. If only
suggest() needs to be async, you still have to wrap parse() in a Promise.
I considered per-method mode granularity (e.g., ValueParser<ParseMode, SuggestMode, T>), but the implementation complexity would multiply
substantially. For now, the workaround is simple enough:
// Option 1: Use Promise.resolve()
parse(input) {
return Promise.resolve({ success: true, value: input });
}
// Option 2: Mark as async and suppress the linter
// biome-ignore lint/suspicious/useAwait: sync implementation in async ValueParser
async parse(input) {
return { success: true, value: input };
}
Supporting dual modes added significant complexity to Optique's internals. Every combinator needed updates.
For example, the object() combinator went from around 100 lines to around
250 lines. The internal implementation uses conditional logic based on the
combined mode:
if (combinedMode === "async") {
return {
$mode: "async" as M,
// ... async implementation with Promise chains
async parse(context) {
// ... await each field's parse result
},
};
} else {
return {
$mode: "sync" as M,
// ... sync implementation
parse(context) {
// ... directly call each field's parse
},
};
}
This duplication is the cost of supporting both modes without runtime overhead for sync-only use cases.
My initial instinct was to resist async support. Lucas's persistence and concrete examples changed my mind, but I validated the approach with a prototype before committing. The prototype revealed practical issues (like TypeScript inference limits) that pure design analysis would have missed.
Making "sync" the default mode meant existing code continued to work
unchanged. This was a deliberate choice. Breaking changes should require
user action, not break silently.
I chose unified mode (all methods share the same sync/async mode) over
per-method granularity. This means users occasionally write
Promise.resolve() for methods that don't actually need async, but the
alternative was multiplicative complexity in the type system.
The entire design process happened in a public GitHub issue. Lucas, Giuseppe,
and others contributed ideas that shaped the final API. The
runSync()/runAsync() distinction came directly from Lucas's feedback.
This was one of the more challenging features I've implemented in Optique. TypeScript's type system is powerful enough to encode the “any async means all async” rule at compile time, but getting there required careful design work and prototyping.
What made it work: conditional types like ModeValue<M, T> can bridge the gap
between sync and async worlds. You pay for it with implementation complexity,
but the user-facing API stays clean and type-safe.
Optique 0.9.0 with async support is currently in pre-release testing. If you'd like to try it, check out PR #70 or install the pre-release:
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
Feedback is welcome!
@lobsters@mastodon.social
Designing type-safe sync/async mode support in TypeScript https://lobste.rs/s/844jrt #api #javascript #plt
https://hackers.pub/@hongminhee/2026/typescript-sync-async-type-safety
@chomu.dev@bsky.brid.gy
An acquaintance posted that the meal they'd bought had arrived, then deleted the photo and everything and re-uploaded it, so I asked Claude to write an SCP document based on that, and it cooked up a tasty one.

@hongminhee@hollo.social
Wrote about designing type-safe sync/async mode support in TypeScript. Making object({ sync: syncParser, async: asyncParser }) automatically infer as async turned out to be trickier than expected.
https://hackers.pub/@hongminhee/2026/typescript-sync-async-type-safety
@hongminhee@hackers.pub
I recently added sync/async mode support to Optique, a type-safe CLI parser
for TypeScript. It turned out to be one of the trickier features I've
implemented—the object() combinator alone needed to compute a combined mode
from all its child parsers, and TypeScript's inference kept hitting edge cases.
Optique is a type-safe, combinatorial CLI parser for TypeScript, inspired by Haskell's optparse-applicative. Instead of decorators or builder patterns, you compose small parsers into larger ones using combinators, and TypeScript infers the result types.
Here's a quick taste:
import { object } from "@optique/core/constructs";
import { argument, option } from "@optique/core/primitives";
import { string, integer } from "@optique/core/valueparser";
import { run } from "@optique/run";
const cli = object({
name: argument(string()),
count: option("-n", "--count", integer()),
});
// TypeScript infers: { name: string; count: number | undefined }
const result = run(cli); // sync by default
The type inference works through arbitrarily deep compositions—in most cases, you don't need explicit type annotations.
Lucas Garron (@lgarron) opened an issue requesting
async support for shell completions. He wanted to provide
Tab-completion suggestions by running shell commands like
git for-each-ref to list branches and tags.
// Lucas's example: fetching Git branches and tags in parallel
const [branches, tags] = await Promise.all([
$`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
$`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
At first, I didn't like the idea. Optique's entire API was synchronous, which made it simpler to reason about and avoided the “async infection” problem where one async function forces everything upstream to become async. I argued that shell completion should be near-instantaneous, and if you need async data, you should cache it at startup.
But Lucas pushed back. The filesystem is a database, and many useful completions inherently require async work—Git refs change constantly, and pre-caching everything at startup doesn't scale for large repos. Fair point.
So, how do you support both sync and async execution modes in a composable parser library while maintaining type safety?
The key requirements were:
parse() returns T or Promise<T>complete() returns T or Promise<T>suggest() returns Iterable<T> or AsyncIterable<T>The fourth requirement is the tricky one. Consider this:
const syncParser = flag("--verbose");
const asyncParser = option("--branch", asyncValueParser);
// What's the type of this?
const combined = object({ verbose: syncParser, branch: asyncParser });
The combined parser should be async because one of its fields is async. This means we need type-level logic to compute the combined mode.
I explored five different approaches, each with its own trade-offs.
Add a mode type parameter to Parser and use conditional types:
type Mode = "sync" | "async";
type ModeValue<M extends Mode, T> = M extends "async" ? Promise<T> : T;
interface Parser<M extends Mode, TValue, TState> {
parse(context: ParserContext<TState>): ModeValue<M, ParserResult<TState>>;
// ...
}
The challenge is computing combined modes:
type CombineModes<T extends Record<string, Parser<any, any, any>>> =
T[keyof T] extends Parser<infer M, any, any>
? M extends "async" ? "async" : "sync"
: never;
A variant of Option A, but place the mode parameter first with a default
of "sync":
interface Parser<M extends Mode = "sync", TValue, TState> {
readonly $mode: M;
// ...
}
The default value maintains backward compatibility—existing user code keeps working without changes.
Define completely separate Parser and AsyncParser interfaces with
explicit conversion:
interface Parser<TValue, TState> { /* sync methods */ }
interface AsyncParser<TValue, TState> { /* async methods */ }
function toAsync<T, S>(parser: Parser<T, S>): AsyncParser<T, S>;
Simpler to understand, but requires code duplication and explicit conversions.
The minimal approach. Only allow suggest() to be async:
interface Parser<TValue, TState> {
parse(context: ParserContext<TState>): ParserResult<TState>; // always sync
suggest(context: ParserContext<TState>, prefix: string):
Iterable<Suggestion> | AsyncIterable<Suggestion>; // can be either
}
This addresses the original use case but doesn't help if async parse() is
ever needed.
Use the technique from fp-ts to simulate Higher-Kinded Types:
interface URItoKind<A> {
Identity: A;
Promise: Promise<A>;
}
type Kind<F extends keyof URItoKind<any>, A> = URItoKind<A>[F];
interface Parser<F extends keyof URItoKind<any>, TValue, TState> {
parse(context: ParserContext<TState>): Kind<F, ParserResult<TState>>;
}
The most flexible approach, but with a steep learning curve.
Rather than commit to an approach based on theoretical analysis, I created a prototype to test how well TypeScript handles the type inference in practice. I published my findings in the GitHub issue:
Both approaches correctly handle the “any async → all async” rule at the type level. (…) Complex conditional types like
ModeValue<CombineParserModes<T>, ParserResult<TState>>sometimes require explicit type casting in the implementation. This only affects library internals. The user-facing API remains clean.
The prototype validated that Option B (explicit mode parameter with default) would work. I chose it for these reasons:
"sync" keeps existing code working$mode
property)CombineModes works The CombineModes type computes whether a combined parser should be sync or
async:
type CombineModes<T extends readonly Mode[]> = "async" extends T[number]
? "async"
: "sync";
This type checks if "async" is present anywhere in the tuple of modes.
If so, the result is "async"; otherwise, it's "sync".
For combinators like object(), I needed to extract modes from parser
objects and combine them:
// Extract the mode from a single parser
type ParserMode<T> = T extends Parser<infer M, unknown, unknown> ? M : never;
// Combine modes from all values in a record of parsers
type CombineObjectModes<T extends Record<string, Parser<Mode, unknown, unknown>>> =
CombineModes<{ [K in keyof T]: ParserMode<T[K]> }[keyof T][]>;
The type system handles compile-time safety, but the implementation also needs
runtime logic. Each parser has a $mode property that indicates its execution
mode:
const syncParser = option("-n", "--name", string());
console.log(syncParser.$mode); // "sync"
const asyncParser = option("-b", "--branch", asyncValueParser);
console.log(asyncParser.$mode); // "async"
Combinators compute their mode at construction time:
function object<T extends Record<string, Parser<Mode, unknown, unknown>>>(
parsers: T
): Parser<CombineObjectModes<T>, ObjectValue<T>, ObjectState<T>> {
const parserKeys = Reflect.ownKeys(parsers);
const combinedMode: Mode = parserKeys.some(
(k) => parsers[k as keyof T].$mode === "async"
) ? "async" : "sync";
// ... implementation
}
Lucas suggested an important refinement during our
discussion. Instead of having run() automatically choose between sync and
async based on the parser mode, he proposed separate functions:
Perhaps
run(…)could be automatic, andrunSync(…)andrunAsync(…)could enforce that the inferred type matches what is expected.
So we ended up with:
run(): automatic based on parser moderunSync(): enforces sync mode at compile timerunAsync(): enforces async mode at compile time// Automatic: returns T for sync parsers, Promise<T> for async
const result1 = run(syncParser); // string
const result2 = run(asyncParser); // Promise<string>
// Explicit: compile-time enforcement
const result3 = runSync(syncParser); // string
const result4 = runAsync(asyncParser); // Promise<string>
// Compile error: can't use runSync with async parser
const result5 = runSync(asyncParser); // Type error!
I applied the same pattern to parse()/parseSync()/parseAsync() and
suggest()/suggestSync()/suggestAsync() in the facade functions.
With the new API, creating an async value parser for Git branches looks like this:
import type { Suggestion } from "@optique/core/parser";
import type { ValueParser, ValueParserResult } from "@optique/core/valueparser";
function gitRef(): ValueParser<"async", string> {
return {
$mode: "async",
metavar: "REF",
parse(input: string): Promise<ValueParserResult<string>> {
return Promise.resolve({ success: true, value: input });
},
format(value: string): string {
return value;
},
async *suggest(prefix: string): AsyncIterable<Suggestion> {
const { $ } = await import("bun");
const [branches, tags] = await Promise.all([
$`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
$`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
for (const ref of [...branches.split("\n"), ...tags.split("\n")]) {
const trimmed = ref.trim();
if (trimmed && trimmed.startsWith(prefix)) {
yield { kind: "literal", text: trimmed };
}
}
},
};
}
Notice that parse() returns Promise.resolve() even though it's synchronous.
This is because the ValueParser<"async", T> type requires all methods to use
async signatures. Lucas pointed out this is a minor ergonomic issue. If only
suggest() needs to be async, you still have to wrap parse() in a Promise.
I considered per-method mode granularity (e.g., ValueParser<ParseMode, SuggestMode, T>), but the implementation complexity would multiply
substantially. For now, the workaround is simple enough:
// Option 1: Use Promise.resolve()
parse(input) {
return Promise.resolve({ success: true, value: input });
}
// Option 2: Mark as async and suppress the linter
// biome-ignore lint/suspicious/useAwait: sync implementation in async ValueParser
async parse(input) {
return { success: true, value: input };
}
Supporting dual modes added significant complexity to Optique's internals. Every combinator needed updates:
For example, the object() combinator went from around 100 lines to around
250 lines. The internal implementation uses conditional logic based on the
combined mode:
if (combinedMode === "async") {
return {
$mode: "async" as M,
// ... async implementation with Promise chains
async parse(context) {
// ... await each field's parse result
},
};
} else {
return {
$mode: "sync" as M,
// ... sync implementation
parse(context) {
// ... directly call each field's parse
},
};
}
This duplication is the cost of supporting both modes without runtime overhead for sync-only use cases.
My initial instinct was to resist async support. Lucas's persistence and concrete examples changed my mind, but I validated the approach with a prototype before committing. The prototype revealed practical issues (like TypeScript inference limits) that pure design analysis would have missed.
Making "sync" the default mode meant existing code continued to work
unchanged. This was a deliberate choice. Breaking changes should require
user action, not break silently.
I chose unified mode (all methods share the same sync/async mode) over
per-method granularity. This means users occasionally write
Promise.resolve() for methods that don't actually need async, but the
alternative was multiplicative complexity in the type system.
The entire design process happened in a public GitHub issue. Lucas, Giuseppe,
and others contributed ideas that shaped the final API. The
runSync()/runAsync() distinction came directly from Lucas's feedback.
This was one of the more challenging features I've implemented in Optique. TypeScript's type system is powerful enough to encode the “any async means all async” rule at compile time, but getting there required careful design work and prototyping.
What made it work: conditional types like ModeValue<M, T> can bridge the gap
between sync and async worlds. You pay for it with implementation complexity,
but the user-facing API stays clean and type-safe.
Optique 0.9.0 with async support is currently in pre-release testing. If you'd like to try it, check out PR #70 or install the pre-release:
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
Feedback is welcome!
@hongminhee@hackers.pub
I recently added sync/async mode support to Optique, a type-safe CLI parser
for TypeScript. It turned out to be one of the trickier features I've
implemented—the object() combinator alone needed to compute a combined mode
from all its child parsers, and TypeScript's inference kept hitting edge cases.
Optique is a type-safe, combinatorial CLI parser for TypeScript, inspired by Haskell's optparse-applicative. Instead of decorators or builder patterns, you compose small parsers into larger ones using combinators, and TypeScript infers the result types.
Here's a quick taste:
import { object } from "@optique/core/constructs";
import { argument, option } from "@optique/core/primitives";
import { string, integer } from "@optique/core/valueparser";
import { run } from "@optique/run";
const cli = object({
name: argument(string()),
count: option("-n", "--count", integer()),
});
// TypeScript infers: { name: string; count: number | undefined }
const result = run(cli); // sync by default
The type inference works through arbitrarily deep compositions—in most cases, you don't need explicit type annotations.
Lucas Garron (@lgarron) opened an issue requesting
async support for shell completions. He wanted to provide
Tab-completion suggestions by running shell commands like
git for-each-ref to list branches and tags.
// Lucas's example: fetching Git branches and tags in parallel
const [branches, tags] = await Promise.all([
$`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
$`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
At first, I didn't like the idea. Optique's entire API was synchronous, which made it simpler to reason about and avoided the “async infection” problem where one async function forces everything upstream to become async. I argued that shell completion should be near-instantaneous, and if you need async data, you should cache it at startup.
But Lucas pushed back. The filesystem is a database, and many useful completions inherently require async work—Git refs change constantly, and pre-caching everything at startup doesn't scale for large repos. Fair point.
So, how do you support both sync and async execution modes in a composable parser library while maintaining type safety?
The key requirements were:
parse() returns T or Promise<T>complete() returns T or Promise<T>suggest() returns Iterable<T> or AsyncIterable<T>The fourth requirement is the tricky one. Consider this:
const syncParser = flag("--verbose");
const asyncParser = option("--branch", asyncValueParser);
// What's the type of this?
const combined = object({ verbose: syncParser, branch: asyncParser });
The combined parser should be async because one of its fields is async. This means we need type-level logic to compute the combined mode.
I explored five different approaches, each with its own trade-offs.
Add a mode type parameter to Parser and use conditional types:
type Mode = "sync" | "async";
type ModeValue<M extends Mode, T> = M extends "async" ? Promise<T> : T;
interface Parser<M extends Mode, TValue, TState> {
parse(context: ParserContext<TState>): ModeValue<M, ParserResult<TState>>;
// ...
}
The challenge is computing combined modes:
type CombineModes<T extends Record<string, Parser<any, any, any>>> =
T[keyof T] extends Parser<infer M, any, any>
? M extends "async" ? "async" : "sync"
: never;
A variant of Option A, but place the mode parameter first with a default
of "sync":
interface Parser<M extends Mode = "sync", TValue, TState> {
readonly $mode: M;
// ...
}
The default value maintains backward compatibility—existing user code keeps working without changes.
Define completely separate Parser and AsyncParser interfaces with
explicit conversion:
interface Parser<TValue, TState> { /* sync methods */ }
interface AsyncParser<TValue, TState> { /* async methods */ }
function toAsync<T, S>(parser: Parser<T, S>): AsyncParser<T, S>;
Simpler to understand, but requires code duplication and explicit conversions.
The minimal approach. Only allow suggest() to be async:
interface Parser<TValue, TState> {
parse(context: ParserContext<TState>): ParserResult<TState>; // always sync
suggest(context: ParserContext<TState>, prefix: string):
Iterable<Suggestion> | AsyncIterable<Suggestion>; // can be either
}
This addresses the original use case but doesn't help if async parse() is
ever needed.
Option E: use the technique from fp-ts to simulate Higher-Kinded Types:
interface URItoKind<A> {
Identity: A;
Promise: Promise<A>;
}
type Kind<F extends keyof URItoKind<any>, A> = URItoKind<A>[F];
interface Parser<F extends keyof URItoKind<any>, TValue, TState> {
parse(context: ParserContext<TState>): Kind<F, ParserResult<TState>>;
}
The most flexible approach, but with a steep learning curve.
Rather than commit to an approach based on theoretical analysis, I created a prototype to test how well TypeScript handles the type inference in practice. I published my findings in the GitHub issue:
Both approaches correctly handle the “any async → all async” rule at the type level. (…) Complex conditional types like
ModeValue<CombineParserModes<T>, ParserResult<TState>> sometimes require explicit type casting in the implementation. This only affects library internals. The user-facing API remains clean.
The prototype validated that Option B (explicit mode parameter with default) would work. I chose it for these reasons:
"sync" keeps existing code working$mode
property)CombineModes works The CombineModes type computes whether a combined parser should be sync or
async:
type CombineModes<T extends readonly Mode[]> = "async" extends T[number]
? "async"
: "sync";
This type checks if "async" is present anywhere in the tuple of modes.
If so, the result is "async"; otherwise, it's "sync".
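A quick type-level check of that behavior; these aliases just exercise the CombineModes definition above:
type AllSync = CombineModes<["sync", "sync"]>;           // "sync"
type OneAsync = CombineModes<["sync", "async", "sync"]>; // "async"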
For combinators like object(), I needed to extract modes from parser
objects and combine them:
// Extract the mode from a single parser
type ParserMode<T> = T extends Parser<infer M, unknown, unknown> ? M : never;
// Combine modes from all values in a record of parsers
type CombineObjectModes<T extends Record<string, Parser<Mode, unknown, unknown>>> =
CombineModes<{ [K in keyof T]: ParserMode<T[K]> }[keyof T][]>;
The type system handles compile-time safety, but the implementation also needs
runtime logic. Each parser has a $mode property that indicates its execution
mode:
const syncParser = option("-n", "--name", string());
console.log(syncParser.$mode); // "sync"
const asyncParser = option("-b", "--branch", asyncValueParser);
console.log(asyncParser.$mode); // "async"
Combinators compute their mode at construction time:
function object<T extends Record<string, Parser<Mode, unknown, unknown>>>(
parsers: T
): Parser<CombineObjectModes<T>, ObjectValue<T>, ObjectState<T>> {
const parserKeys = Reflect.ownKeys(parsers);
const combinedMode: Mode = parserKeys.some(
(k) => parsers[k as keyof T].$mode === "async"
) ? "async" : "sync";
// ... implementation
}
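Putting the runtime side together, reusing the same option() calls and the asyncValueParser stand-in from the earlier snippet:
const mixed = object({
  name: option("-n", "--name", string()),
  branch: option("-b", "--branch", asyncValueParser),
});
console.log(mixed.$mode); // "async": a single async field makes the whole object async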
Lucas suggested an important refinement during our
discussion. Instead of having run() automatically choose between sync and
async based on the parser mode, he proposed separate functions:
Perhaps run(…) could be automatic, and runSync(…) and runAsync(…) could enforce that the inferred type matches what is expected.
So we ended up with:
- run(): automatic based on parser mode
- runSync(): enforces sync mode at compile time
- runAsync(): enforces async mode at compile time
// Automatic: returns T for sync parsers, Promise<T> for async
const result1 = run(syncParser); // string
const result2 = run(asyncParser); // Promise<string>
// Explicit: compile-time enforcement
const result3 = runSync(syncParser); // string
const result4 = runAsync(asyncParser); // Promise<string>
// Compile error: can't use runSync with async parser
const result5 = runSync(asyncParser); // Type error!
I applied the same pattern to parse()/parseSync()/parseAsync() and
suggest()/suggestSync()/suggestAsync() in the facade functions.
With the new API, creating an async value parser for Git branches looks like this:
import type { Suggestion } from "@optique/core/parser";
import type { ValueParser, ValueParserResult } from "@optique/core/valueparser";
function gitRef(): ValueParser<"async", string> {
return {
$mode: "async",
metavar: "REF",
parse(input: string): Promise<ValueParserResult<string>> {
return Promise.resolve({ success: true, value: input });
},
format(value: string): string {
return value;
},
async *suggest(prefix: string): AsyncIterable<Suggestion> {
const { $ } = await import("bun");
const [branches, tags] = await Promise.all([
$`git for-each-ref --format='%(refname:short)' refs/heads/`.text(),
$`git for-each-ref --format='%(refname:short)' refs/tags/`.text(),
]);
for (const ref of [...branches.split("\n"), ...tags.split("\n")]) {
const trimmed = ref.trim();
if (trimmed && trimmed.startsWith(prefix)) {
yield { kind: "literal", text: trimmed };
}
}
},
};
}
Notice that parse() returns Promise.resolve() even though it's synchronous.
This is because the ValueParser<"async", T> type requires all methods to use
async signatures. Lucas pointed out this is a minor ergonomic issue. If only
suggest() needs to be async, you still have to wrap parse() in a Promise.
I considered per-method mode granularity (e.g., ValueParser<ParseMode, SuggestMode, T>), but the implementation complexity would multiply
substantially. For now, the workaround is simple enough:
// Option 1: Use Promise.resolve()
parse(input) {
return Promise.resolve({ success: true, value: input });
}
// Option 2: Mark as async and suppress the linter
// biome-ignore lint/suspicious/useAwait: sync implementation in async ValueParser
async parse(input) {
return { success: true, value: input };
}
Supporting dual modes added significant complexity to Optique's internals. Every combinator needed updates.
For example, the object() combinator went from around 100 lines to around
250 lines. The internal implementation uses conditional logic based on the
combined mode:
if (combinedMode === "async") {
return {
$mode: "async" as M,
// ... async implementation with Promise chains
async parse(context) {
// ... await each field's parse result
},
};
} else {
return {
$mode: "sync" as M,
// ... sync implementation
parse(context) {
// ... directly call each field's parse
},
};
}
This duplication is the cost of supporting both modes without runtime overhead for sync-only use cases.
My initial instinct was to resist async support. Lucas's persistence and concrete examples changed my mind, but I validated the approach with a prototype before committing. The prototype revealed practical issues (like TypeScript inference limits) that pure design analysis would have missed.
Making "sync" the default mode meant existing code continued to work
unchanged. This was a deliberate choice. Breaking changes should require
user action, not break silently.
I chose unified mode (all methods share the same sync/async mode) over
per-method granularity. This means users occasionally write
Promise.resolve() for methods that don't actually need async, but the
alternative was multiplicative complexity in the type system.
The entire design process happened in a public GitHub issue. Lucas, Giuseppe,
and others contributed ideas that shaped the final API. The
runSync()/runAsync() distinction came directly from Lucas's feedback.
This was one of the more challenging features I've implemented in Optique. TypeScript's type system is powerful enough to encode the “any async means all async” rule at compile time, but getting there required careful design work and prototyping.
What made it work: conditional types like ModeValue<M, T> can bridge the gap
between sync and async worlds. You pay for it with implementation complexity,
but the user-facing API stays clean and type-safe.
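As a tiny illustration of the ModeValue type defined earlier:
type SyncNumber = ModeValue<"sync", number>;   // number
type AsyncNumber = ModeValue<"async", number>; // Promise<number>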
Optique 0.9.0 with async support is currently in pre-release testing. If you'd like to try it, check out PR #70 or install the pre-release:
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
Feedback is welcome!

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
The headline feature this time is sync/async mode support. You can now build CLI parsers that support asynchronous value parsing and completions. It's a perfect fit for completions that need to run shell commands, such as listing Git branches and tags.
Async mode propagates automatically through combinators, so developers only have to decide between sync and async at the leaf parsers.
Install:
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
I'd really appreciate feedback before the merge! Things I'm especially curious about:
Docs:

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
The main new feature is sync/async mode support. You can now build CLI parsers that support asynchronous value parsing and completions. It's a great fit for completions that need to run shell commands, such as listing Git branches and tags.
Because async mode propagates automatically through combinators, developers only need to decide between sync and async at the leaf parsers.
Install:
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
I'd really appreciate feedback before the merge! I'm especially curious about:
Docs:

@hongminhee@hollo.social
The big new feature: sync/async mode support. You can now build CLI parsers with async value parsing and suggestions—perfect for shell completions that need to run commands (like listing Git branches/tags).
The API automatically propagates async mode through combinators, so you only decide sync vs async at the leaf level.
Try it:
npm add @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
deno add --jsr @optique/core@0.9.0-dev.212 @optique/run@0.9.0-dev.212
I'd love feedback before merging! Especially interested in:
Docs:
@SwiftOnSecurity@infosec.exchange
TCP/IP is a social construct
@ianthetechie@fosstodon.org
Modern optimizing compilers are truly amazing. Rust / LLVM just broke my brain by turning what I was SURE would be poorly optimized code due to indirection into a tight result with zero perceptible overhead.
Modern CPUs also probably help.

@hongminhee@hollo.social · Reply to Bart Louwers's post
@bart Thanks for sharing this! I hadn't seen this issue before—really interesting to learn that Node.js is exploring built-in structured logging.
Looking at the discussion, it seems like they're still in the early stages—lots of debate around API design and porting foundational pieces like SonicBoom. So it might be a while before anything lands, but exciting to see the progress.
Until then, LogTape is one option that tries to fill this gap. And if node:log eventually ships, hopefully the concepts are similar enough that migrating wouldn't be too painful!
@bart@floss.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
@hongminhee Nice article!
FYI #NodeJS will likely soon have a built-in structured logger. https://github.com/nodejs/node/issues/49296
@nixCraft@mastodon.social
Apple will allow alternative browser engines for iPhone and iPad users (iOS/iPadOS) in Japan.
https://developer.apple.com/support/alternative-browser-engines-jp/
Apple should allow alt engine for the rest of the world too. No point holding it back.

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
I wrote an article that looks for a good middle ground for people who find console.log() alone insufficient in production, but full-fledged logging too much configuration hassle.

@hongminhee@hollo.social
I wrote about setting up logging that's more useful than console.log() but doesn't require a Ph.D. in configuration. Covers categories, structured logging, request tracing, and production tips.
https://hackers.pub/@hongminhee/2026/logging-nodejs-deno-bun-2026
@hongminhee@hackers.pub
It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.
We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.
I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.
The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:
No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.
Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.
A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.
“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.
Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");
// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
Now you can search for all logs where userId === 123 or filter by IP address.
In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.
There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.
So why LogTape? A few reasons stood out to me:
Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”
Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.
Let's set it up:
npm add @logtape/logtape # npm
pnpm add @logtape/logtape # pnpm
yarn add @logtape/logtape # Yarn
deno add jsr:@logtape/logtape # Deno
bun add @logtape/logtape # Bun
Configuration happens once, at your application's entry point:
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink(), // Where logs go
},
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log
],
});
// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
Notice a few things:
- You configure both where logs go (sinks) and which logs to show (lowestLevel).
- Categories are hierarchical: ["my-app", "server"] inherits settings from ["my-app"].
Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.
Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.
await configure({
sinks: {
console: getConsoleSink(),
},
loggers: [
{ category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above
{ category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too
],
});
Now when you log from different parts of your app:
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up
// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
If you're using libraries that also use LogTape, you can control their logs separately:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
// Only show warnings and above from some-library
{ category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
],
});
Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Catch all logs at info level
{ category: [], lowestLevel: "info", sinks: ["console"] },
// But show debug for your app
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
});
LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
| Level | When to use it |
|---|---|
| trace | Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
| debug | Information useful during development. Variable values, state changes, flow control decisions. |
| info | Normal operational messages. “Server started,” “User logged in,” “Job completed.” |
| warning | Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
| error | Something failed. An operation couldn't complete, but the app is still running. |
| fatal | The app is about to crash or is in an unrecoverable state. |
const logger = getLogger(["my-app"]);
logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
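One common way to switch levels without code changes is to read the level from an environment variable. A minimal sketch, using only the configuration options shown above; LOG_LEVEL is just an assumed variable name:
import { configure, getConsoleSink } from "@logtape/logtape";

// Node-style environment access; adapt for your runtime.
const level = (process.env.LOG_LEVEL ?? "info") as
  "trace" | "debug" | "info" | "warning" | "error" | "fatal";

await configure({
  sinks: { console: getConsoleSink() },
  loggers: [{ category: ["my-app"], lowestLevel: level, sinks: ["console"] }],
});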
At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”
If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.
Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
logger.info("User performed action", {
userId: 123,
action: "login",
ip: "192.168.1.1",
timestamp: new Date().toISOString(),
});
You can reference properties in your message using placeholders:
logger.info("User {userId} logged in from {ip}", {
userId: 123,
ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
LogTape supports dot notation and array indexing in placeholders:
logger.info("Order {order.id} placed by {order.customer.name}", {
order: {
id: "ORD-001",
customer: { name: "Alice", email: "alice@example.com" },
},
});
logger.info("First item: {items[0].name}", {
items: [{ name: "Widget", price: 9.99 }],
});
For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink({ formatter: jsonLinesFormatter }),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console"] },
],
});
Output:
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.
Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.
This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.
The simplest sink—outputs to the console:
import { getConsoleSink } from "@logtape/logtape";
const consoleSink = getConsoleSink();
For writing logs to files, install the @logtape/file package:
npm add @logtape/file
import { getFileSink, getRotatingFileSink } from "@logtape/file";
// Simple file sink
const fileSink = getFileSink("app.log");
// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
maxSize: 10 * 1024 * 1024, // 10MB
maxFiles: 5,
});
Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.
For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";
// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";
// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.
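For instance, wiring the OpenTelemetry sink into the same configure() call might look like the sketch below. I'm assuming the no-argument form attaches to your existing OpenTelemetry SDK setup; check the sink's documentation for its actual options:
import { configure, getConsoleSink } from "@logtape/logtape";
import { getOpenTelemetrySink } from "@logtape/otel";

await configure({
  sinks: {
    console: getConsoleSink(),
    otel: getOpenTelemetrySink(), // assumption: defaults hook into the globally configured OTel SDK
  },
  loggers: [{ category: ["my-app"], lowestLevel: "info", sinks: ["console", "otel"] }],
});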
Here's where things get interesting. You can send different logs to different destinations based on their level or category:
await configure({
sinks: {
console: getConsoleSink(),
file: getFileSink("app.log"),
errors: getSentrySink(),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file
{ category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry
],
});
Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.
Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.
A sink is just a function that takes a LogRecord. That's it:
import type { Sink } from "@logtape/logtape";
const slackSink: Sink = (record) => {
// Only send errors and fatals to Slack
if (record.level === "error" || record.level === "fatal") {
fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
}),
});
}
};
The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.
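The same shape works for something as small as an in-memory sink, which can be handy in tests. A sketch, assuming LogRecord (the record type mentioned above) is exported alongside Sink:
import type { LogRecord, Sink } from "@logtape/logtape";

const captured: LogRecord[] = [];
const memorySink: Sink = (record) => {
  // Keep every record in memory so a test can assert on what was logged.
  captured.push(record);
};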
Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.
This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.
LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.
The simplest approach is to create a logger with attached properties using .with():
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
const logger = getLogger(["my-app", "http"]).with({ requestId });
logger.info`Request received`; // Includes requestId automatically
processRequest(req, logger);
logger.info`Request completed`; // Also includes requestId
}
This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).
First, enable implicit contexts in your configuration:
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
contextLocalStorage: new AsyncLocalStorage(),
});
Then use withContext() in your request handler:
import { withContext, getLogger } from "@logtape/logtape";
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
return withContext({ requestId }, async () => {
// Every log message in this callback includes requestId—automatically
const logger = getLogger(["my-app"]);
logger.info`Processing request`;
await validateInput(req); // Logs here include requestId
await processBusinessLogic(req); // Logs here too
await saveToDatabase(req); // And here
logger.info`Request complete`;
});
}
The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.
This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.
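For instance, a nested module like validateInput (a hypothetical module sketched here) needs no knowledge of the request ID at all:
// Hypothetical validateInput module: it only calls getLogger(),
// yet its records pick up the requestId set by withContext() upstream.
import { getLogger } from "@logtape/logtape";

export async function validateInput(req: Request): Promise<void> {
  const logger = getLogger(["my-app", "validation"]);
  logger.debug`Validating ${req.method} request body`; // requestId is attached automatically
}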
Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());
// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());
// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.
If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?
LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.
The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";
const logger = getLogger(["my-library", "database"]);
export function connect(url: string) {
logger.debug`Connecting to ${url}`;
// ... connection logic ...
logger.info`Connected successfully`;
}
What happens when someone uses your library?
If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.
If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.
This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.
You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose
{ category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet
{ category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent
],
});
This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.
If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:
import { install } from "@logtape/adaptor-winston";
import winston from "winston";
install(winston.createLogger({ /* your existing config */ }));
This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.
Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.
By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.
Non-blocking mode buffers log messages and writes them in the background:
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
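If you use non-blocking sinks in a long-running server, it's worth flushing on shutdown. A minimal sketch using dispose(), the same function the edge-function section below uses to flush buffered logs:
import { dispose } from "@logtape/logtape";

// Flush buffered log records before the process exits (e.g., on SIGTERM from your orchestrator).
process.on("SIGTERM", async () => {
  await dispose();
  process.exit(0);
});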
Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.
LogTape's @logtape/redaction package helps you catch these before they become a problem:
import {
redactByPattern,
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
const BEARER_TOKEN_PATTERN: RedactionPattern = {
pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
replacement: "[REDACTED]",
};
const formatter = redactByPattern(defaultConsoleFormatter, [
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
BEARER_TOKEN_PATTERN,
]);
await configure({
sinks: {
console: getConsoleSink({ formatter }),
},
// ...
});
With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.
See the redaction documentation for more patterns and field-based redaction.
Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.
The solution is to explicitly flush logs before returning:
import { configure, dispose } from "@logtape/logtape";
export default {
async fetch(request, env, ctx) {
await configure({ /* ... */ });
// ... handle request ...
ctx.waitUntil(dispose()); // Flush logs before worker terminates
return new Response("OK");
},
};
The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.
Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.
LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.
If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.
Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.
@hongminhee@hackers.pub
It's 2 AM. Something is wrong in production. Users are complaining, but you're not sure what's happening—your only clues are a handful of console.log statements you sprinkled around during development. Half of them say things like “here” or “this works.” The other half dump entire objects that scroll off the screen. Good luck.
We've all been there. And yet, setting up “proper” logging often feels like overkill. Traditional logging libraries like winston or Pino come with their own learning curves, configuration formats, and assumptions about how you'll deploy your app. If you're working with edge functions or trying to keep your bundle small, adding a logging library can feel like bringing a sledgehammer to hang a picture frame.
I'm a fan of the “just enough” approach—more than raw console.log, but without the weight of a full-blown logging framework. We'll start from console.log(), understand its real limitations (not the exaggerated ones), and work toward a setup that's actually useful. I'll be using LogTape for the examples—it's a zero-dependency logging library that works across Node.js, Deno, Bun, and edge functions, and stays out of your way when you don't need it.
The console object is JavaScript's great equalizer. It's built-in, it works everywhere, and it requires zero setup. You even get basic severity levels: console.debug(), console.info(), console.warn(), and console.error(). In browser DevTools and some terminal environments, these show up with different colors or icons.
console.debug("Connecting to database...");
console.info("Server started on port 3000");
console.warn("Cache miss for user 123");
console.error("Failed to process payment");
For small scripts or quick debugging, this is perfectly fine. But once your application grows beyond a few files, the cracks start to show:
No filtering without code changes. Want to hide debug messages in production? You'll need to wrap every console.debug() call in a conditional, or find-and-replace them all. There's no way to say “show me only warnings and above” at runtime.
Everything goes to the console. What if you want to write logs to a file? Send errors to Sentry? Stream logs to CloudWatch? You'd have to replace every console.* call with something else—and hope you didn't miss any.
No context about where logs come from. When your app has dozens of modules, a log message like “Connection failed” doesn't tell you much. Was it the database? The cache? A third-party API? You end up prefixing every message manually: console.error("[database] Connection failed").
No structured data. Modern log analysis tools work best with structured data (JSON). But console.log("User logged in", { userId: 123 }) just prints User logged in { userId: 123 } as a string—not very useful for querying later.
Libraries pollute your logs. If you're using a library that logs with console.*, those messages show up whether you want them or not. And if you're writing a library, your users might not appreciate unsolicited log messages.
Before diving into code, let's think about what would actually solve the problems above. Not a wish list of features, but the practical stuff that makes a difference when you're debugging at 2 AM or trying to understand why requests are slow.
A logging system should let you categorize messages by severity—trace, debug, info, warning, error, fatal—and then filter them based on what you need. During development, you want to see everything. In production, maybe just warnings and above. The key is being able to change this without touching your code.
When your app grows beyond a single file, you need to know where logs are coming from. A good logging system lets you tag logs with categories like ["my-app", "database"] or ["my-app", "auth", "oauth"]. Even better, it lets you set different log levels for different categories—maybe you want debug logs from the database module but only warnings from everything else.
“Sink” is just a fancy word for “where logs go.” You might want logs to go to the console during development, to files in production, and to an external service like Sentry or CloudWatch for errors. A good logging system lets you configure multiple sinks and route different logs to different destinations.
Instead of logging strings, you log objects with properties. This makes logs machine-readable and queryable:
// Instead of this:
logger.info("User 123 logged in from 192.168.1.1");
// You do this:
logger.info("User logged in", { userId: 123, ip: "192.168.1.1" });
Now you can search for all logs where userId === 123 or filter by IP address.
In a web server, you often want all logs from a single request to share a common identifier (like a request ID). This makes it possible to trace a request's journey through your entire system.
There are plenty of logging libraries out there. winston has been around forever and has a plugin for everything. Pino is fast and outputs JSON. bunyan, log4js, signale—the list goes on.
So why LogTape? A few reasons stood out to me:
Zero dependencies. Not “few dependencies”—actually zero. In an era where a single npm install can pull in hundreds of packages, this matters for security, bundle size, and not having to wonder why your lockfile just changed.
Works everywhere. The same code runs on Node.js, Deno, Bun, browsers, and edge functions like Cloudflare Workers. No polyfills, no conditional imports, no “this feature only works on Node.”
Doesn't force itself on users. If you're writing a library, you can add logging without your users ever knowing—unless they want to see the logs. This is a surprisingly rare feature.
Let's set it up:
npm add @logtape/logtape # npm
pnpm add @logtape/logtape # pnpm
yarn add @logtape/logtape # Yarn
deno add jsr:@logtape/logtape # Deno
bun add @logtape/logtape # Bun
Configuration happens once, at your application's entry point:
import { configure, getConsoleSink, getLogger } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink(), // Where logs go
},
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // What to log
],
});
// Now you can log from anywhere in your app:
const logger = getLogger(["my-app", "server"]);
logger.info`Server started on port 3000`;
logger.debug`Request received: ${{ method: "GET", path: "/api/users" }}`;
Notice a few things:
sinks) and which logs to show (lowestLevel).["my-app", "server"] inherits settings from ["my-app"].Here's a scenario: you're debugging a database issue. You want to see every query, every connection attempt, every retry. But you don't want to wade through thousands of HTTP request logs to find them.
Categories let you solve this. Instead of one global log level, you can set different verbosity for different parts of your application.
await configure({
sinks: {
console: getConsoleSink(),
},
loggers: [
{ category: ["my-app"], lowestLevel: "info", sinks: ["console"] }, // Default: info and above
{ category: ["my-app", "database"], lowestLevel: "debug", sinks: ["console"] }, // DB module: show debug too
],
});
Now when you log from different parts of your app:
// In your database module:
const dbLogger = getLogger(["my-app", "database"]);
dbLogger.debug`Executing query: ${sql}`; // This shows up
// In your HTTP module:
const httpLogger = getLogger(["my-app", "http"]);
httpLogger.debug`Received request`; // This is filtered out (below "info")
httpLogger.info`GET /api/users 200`; // This shows up
If you're using libraries that also use LogTape, you can control their logs separately:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
// Only show warnings and above from some-library
{ category: ["some-library"], lowestLevel: "warning", sinks: ["console"] },
],
});
Sometimes you want a catch-all configuration. The root logger (empty category []) catches everything:
await configure({
sinks: { console: getConsoleSink() },
loggers: [
// Catch all logs at info level
{ category: [], lowestLevel: "info", sinks: ["console"] },
// But show debug for your app
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
});
LogTape has six log levels. Choosing the right one isn't just about severity—it's about who needs to see the message and when.
| Level | When to use it |
|---|---|
trace |
Very detailed diagnostic info. Loop iterations, function entry/exit. Usually only enabled when hunting a specific bug. |
debug |
Information useful during development. Variable values, state changes, flow control decisions. |
info |
Normal operational messages. “Server started,” “User logged in,” “Job completed.” |
warning |
Something unexpected happened, but the app can continue. Deprecated API usage, retry attempts, missing optional config. |
error |
Something failed. An operation couldn't complete, but the app is still running. |
fatal |
The app is about to crash or is in an unrecoverable state. |
const logger = getLogger(["my-app"]);
logger.trace`Entering processUser function`;
logger.debug`Processing user ${{ userId: 123 }}`;
logger.info`User successfully created`;
logger.warn`Rate limit approaching: ${980}/1000 requests`;
logger.error`Failed to save user: ${error.message}`;
logger.fatal`Database connection lost, shutting down`;
A good rule of thumb: in production, you typically run at info or warning level. During development or when debugging, you drop down to debug or trace.
At some point, you'll want to search your logs. “Show me all errors from the payment service in the last hour.” “Find all requests from user 12345.” “What's the average response time for the /api/users endpoint?”
If your logs are plain text strings, these queries are painful. You end up writing regexes, hoping the log format is consistent, and cursing past-you for not thinking ahead.
Structured logging means attaching data to your logs as key-value pairs, not just embedding them in strings. This makes logs machine-readable and queryable.
LogTape supports two syntaxes for this:
const userId = 123;
const action = "login";
logger.info`User ${userId} performed ${action}`;
logger.info("User performed action", {
userId: 123,
action: "login",
ip: "192.168.1.1",
timestamp: new Date().toISOString(),
});
You can reference properties in your message using placeholders:
logger.info("User {userId} logged in from {ip}", {
userId: 123,
ip: "192.168.1.1",
});
// Output: User 123 logged in from 192.168.1.1
LogTape supports dot notation and array indexing in placeholders:
logger.info("Order {order.id} placed by {order.customer.name}", {
order: {
id: "ORD-001",
customer: { name: "Alice", email: "alice@example.com" },
},
});
logger.info("First item: {items[0].name}", {
items: [{ name: "Widget", price: 9.99 }],
});
For production, you often want logs as JSON (one object per line). LogTape has a built-in formatter for this:
import { configure, getConsoleSink, jsonLinesFormatter } from "@logtape/logtape";
await configure({
sinks: {
console: getConsoleSink({ formatter: jsonLinesFormatter }),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console"] },
],
});
Output:
{"@timestamp":"2026-01-15T10:30:00.000Z","level":"INFO","message":"User logged in","logger":"my-app","properties":{"userId":123}}
So far we've been sending everything to the console. That's fine for development, but in production you'll likely want logs to go elsewhere—or to multiple places at once.
Think about it: console output disappears when the process restarts. If your server crashes at 3 AM, you want those logs to be somewhere persistent. And when an error occurs, you might want it to show up in your error tracking service immediately, not just sit in a log file waiting for someone to grep through it.
This is where sinks come in. A sink is just a function that receives log records and does something with them. LogTape comes with several built-in sinks, and creating your own is trivial.
The simplest sink—outputs to the console:
import { getConsoleSink } from "@logtape/logtape";
const consoleSink = getConsoleSink();
For writing logs to files, install the @logtape/file package:
npm add @logtape/file
import { getFileSink, getRotatingFileSink } from "@logtape/file";
// Simple file sink
const fileSink = getFileSink("app.log");
// Rotating file sink (rotates when file reaches 10MB, keeps 5 old files)
const rotatingFileSink = getRotatingFileSink("app.log", {
maxSize: 10 * 1024 * 1024, // 10MB
maxFiles: 5,
});
Why rotating files? Without rotation, your log file grows indefinitely until it fills up the disk. With rotation, old logs are automatically archived and eventually deleted, keeping disk usage under control. This is especially important for long-running servers.
For production systems, you often want logs to go to specialized services that provide search, alerting, and visualization. LogTape has packages for popular services:
// OpenTelemetry (for observability platforms like Jaeger, Honeycomb, Datadog)
import { getOpenTelemetrySink } from "@logtape/otel";
// Sentry (for error tracking with stack traces and context)
import { getSentrySink } from "@logtape/sentry";
// AWS CloudWatch Logs (for AWS-native log aggregation)
import { getCloudWatchLogsSink } from "@logtape/cloudwatch-logs";
The OpenTelemetry sink is particularly useful if you're already using OpenTelemetry for tracing—your logs will automatically correlate with your traces, making debugging distributed systems much easier.
Here's where things get interesting. You can send different logs to different destinations based on their level or category:
await configure({
sinks: {
console: getConsoleSink(),
file: getFileSink("app.log"),
errors: getSentrySink(),
},
loggers: [
{ category: [], lowestLevel: "info", sinks: ["console", "file"] }, // Everything to console + file
{ category: [], lowestLevel: "error", sinks: ["errors"] }, // Errors also go to Sentry
],
});
Notice that a log record can go to multiple sinks. An error log in this configuration goes to the console, the file, and Sentry. This lets you have comprehensive local logs while also getting immediate alerts for critical issues.
Sometimes you need to send logs somewhere that doesn't have a pre-built sink. Maybe you have an internal logging service, or you want to send logs to a Slack channel, or store them in a database.
A sink is just a function that takes a LogRecord. That's it:
import type { Sink } from "@logtape/logtape";
const slackSink: Sink = (record) => {
// Only send errors and fatals to Slack
if (record.level === "error" || record.level === "fatal") {
fetch("https://hooks.slack.com/services/YOUR/WEBHOOK/URL", {
method: "POST",
headers: { "Content-Type": "application/json" },
body: JSON.stringify({
text: `[${record.level.toUpperCase()}] ${record.message.join("")}`,
}),
});
}
};
The simplicity of sink functions means you can integrate LogTape with virtually any logging backend in just a few lines of code.
Here's a scenario you've probably encountered: a user reports an error, you check the logs, and you find a sea of interleaved messages from dozens of concurrent requests. Which log lines belong to the user's request? Good luck figuring that out.
This is where request tracing comes in. The idea is simple: assign a unique identifier to each request, and include that identifier in every log message produced while handling that request. Now you can filter your logs by request ID and see exactly what happened, in order, for that specific request.
LogTape supports this through contexts—a way to attach properties to log messages without passing them around explicitly.
The simplest approach is to create a logger with attached properties using .with():
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
const logger = getLogger(["my-app", "http"]).with({ requestId });
logger.info`Request received`; // Includes requestId automatically
processRequest(req, logger);
logger.info`Request completed`; // Also includes requestId
}
This works well when you're passing the logger around explicitly. But what about code that's deeper in your call stack? What about code in libraries that don't know about your logger instance?
This is where implicit contexts shine. Using withContext(), you can set properties that automatically appear in all log messages within a callback—even in nested function calls, async operations, and third-party libraries (as long as they use LogTape).
First, enable implicit contexts in your configuration:
import { configure, getConsoleSink } from "@logtape/logtape";
import { AsyncLocalStorage } from "node:async_hooks";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] },
],
contextLocalStorage: new AsyncLocalStorage(),
});
Then use withContext() in your request handler:
import { withContext, getLogger } from "@logtape/logtape";
function handleRequest(req: Request) {
const requestId = crypto.randomUUID();
return withContext({ requestId }, async () => {
// Every log message in this callback includes requestId—automatically
const logger = getLogger(["my-app"]);
logger.info`Processing request`;
await validateInput(req); // Logs here include requestId
await processBusinessLogic(req); // Logs here too
await saveToDatabase(req); // And here
logger.info`Request complete`;
});
}
The magic is that validateInput, processBusinessLogic, and saveToDatabase don't need to know anything about the request ID. They just call getLogger() and log normally, and the request ID appears in their logs automatically. This works even across async boundaries—the context follows the execution flow, not the call stack.
This is incredibly powerful for debugging. When something goes wrong, you can search for the request ID and see every log message from every module that was involved in handling that request.
Setting up request tracing manually can be tedious. LogTape has dedicated packages for popular frameworks that handle this automatically:
// Express
import { expressLogger } from "@logtape/express";
app.use(expressLogger());
// Fastify
import { getLogTapeFastifyLogger } from "@logtape/fastify";
const app = Fastify({ loggerInstance: getLogTapeFastifyLogger() });
// Hono
import { honoLogger } from "@logtape/hono";
app.use(honoLogger());
// Koa
import { koaLogger } from "@logtape/koa";
app.use(koaLogger());
These middlewares automatically generate request IDs, set up implicit contexts, and log request/response information. You get comprehensive request logging with a single line of code.
If you've ever used a library that spams your console with unwanted log messages, you know how annoying it can be. And if you've ever tried to add logging to your own library, you've faced a dilemma: should you use console.log() and annoy your users? Require them to install and configure a specific logging library? Or just... not log anything?
LogTape solves this with its library-first design. Libraries can add as much logging as they want, and it costs their users nothing unless they explicitly opt in.
The rule is simple: use getLogger() to log, but never call configure(). Configuration is the application's responsibility, not the library's.
// my-library/src/database.ts
import { getLogger } from "@logtape/logtape";
const logger = getLogger(["my-library", "database"]);
export function connect(url: string) {
logger.debug`Connecting to ${url}`;
// ... connection logic ...
logger.info`Connected successfully`;
}
What happens when someone uses your library?
If they haven't configured LogTape, nothing happens. The log calls are essentially no-ops—no output, no errors, no performance impact. Your library works exactly as if the logging code wasn't there.
If they have configured LogTape, they get full control. They can see your library's debug logs if they're troubleshooting an issue, or silence them entirely if they're not interested. They decide, not you.
This is fundamentally different from using console.log() in a library. With console.log(), your users have no choice—they see your logs whether they want to or not. With LogTape, you give them the power to decide.
You configure LogTape once in your entry point. This single configuration controls logging for your entire application, including any libraries that use LogTape:
import { configure, getConsoleSink } from "@logtape/logtape";
await configure({
sinks: { console: getConsoleSink() },
loggers: [
{ category: ["my-app"], lowestLevel: "debug", sinks: ["console"] }, // Your app: verbose
{ category: ["my-library"], lowestLevel: "warning", sinks: ["console"] }, // Library: quiet
{ category: ["noisy-library"], lowestLevel: "fatal", sinks: [] }, // That one library: silent
],
});
This separation of concerns—libraries log, applications configure—makes for a much healthier ecosystem. Library authors can add detailed logging for debugging without worrying about annoying their users. Application developers can tune logging to their needs without digging through library code.
If your application already uses winston, Pino, or another logging library, you don't have to migrate everything at once. LogTape provides adapters that route LogTape logs to your existing logging setup:
import { install } from "@logtape/adaptor-winston";
import winston from "winston";
install(winston.createLogger({ /* your existing config */ }));
This is particularly useful when you want to use a library that uses LogTape, but you're not ready to switch your whole application over. The library's logs will flow through your existing winston (or Pino) configuration, and you can migrate gradually if you choose to.
Development and production have different needs. During development, you want verbose logs, pretty formatting, and immediate feedback. In production, you care about performance, reliability, and not leaking sensitive data. Here are some things to keep in mind.
By default, logging is synchronous—when you call logger.info(), the message is written to the sink before the function returns. This is fine for development, but in a high-throughput production environment, the I/O overhead of writing every log message can add up.
Non-blocking mode buffers log messages and writes them in the background:
const consoleSink = getConsoleSink({ nonBlocking: true });
const fileSink = getFileSink("app.log", { nonBlocking: true });
The tradeoff is that logs might be slightly delayed, and if your process crashes, some buffered logs might be lost. But for most production workloads, the performance benefit is worth it.
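For example, you can key both the sink options and the log level off an environment variable so development stays verbose while production stays lean. A minimal sketch, assuming a Node-style process.env and reusing only the options shown above (the NODE_ENV check and the "info" threshold are illustrative choices, not LogTape requirements):
import { configure, getConsoleSink } from "@logtape/logtape";
const isProduction = process.env.NODE_ENV === "production";
await configure({
  sinks: {
    // Buffer writes in production; stay synchronous while developing
    console: getConsoleSink({ nonBlocking: isProduction }),
  },
  loggers: [
    {
      category: ["my-app"],
      // Verbose locally, quieter in production
      lowestLevel: isProduction ? "info" : "debug",
      sinks: ["console"],
    },
  ],
});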
Logs have a way of ending up in unexpected places—log aggregation services, debugging sessions, support tickets. If you're logging request data, user information, or API responses, you might accidentally expose sensitive information like passwords, API keys, or personal data.
LogTape's @logtape/redaction package helps you catch these before they become a problem:
import {
redactByPattern,
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
type RedactionPattern,
} from "@logtape/redaction";
import { defaultConsoleFormatter, configure, getConsoleSink } from "@logtape/logtape";
const BEARER_TOKEN_PATTERN: RedactionPattern = {
pattern: /Bearer [A-Za-z0-9\-._~+\/]+=*/g,
replacement: "[REDACTED]",
};
const formatter = redactByPattern(defaultConsoleFormatter, [
EMAIL_ADDRESS_PATTERN,
CREDIT_CARD_NUMBER_PATTERN,
BEARER_TOKEN_PATTERN,
]);
await configure({
sinks: {
console: getConsoleSink({ formatter }),
},
// ...
});
With this configuration, email addresses, credit card numbers, and bearer tokens are automatically replaced with [REDACTED] in your log output. The @logtape/redaction package comes with built-in patterns for common sensitive data types, and you can define custom patterns for anything else. It's not foolproof—you should still be mindful of what you log—but it provides a safety net.
See the redaction documentation for more patterns and field-based redaction.
Edge functions (Cloudflare Workers, Vercel Edge Functions, etc.) have a unique constraint: they can be terminated immediately after returning a response. If you have buffered logs that haven't been flushed yet, they'll be lost.
The solution is to explicitly flush logs before returning:
import { configure, dispose } from "@logtape/logtape";
export default {
async fetch(request, env, ctx) {
await configure({ /* ... */ });
// ... handle request ...
ctx.waitUntil(dispose()); // Flush logs before worker terminates
return new Response("OK");
},
};
The dispose() function flushes all buffered logs and cleans up resources. By passing it to ctx.waitUntil(), you ensure the worker stays alive long enough to finish writing logs, even after the response has been sent.
Logging isn't glamorous, but it's one of those things that makes a huge difference when something goes wrong. The setup I've described here—categories for organization, structured data for queryability, contexts for request tracing—isn't complicated, but it's a significant step up from scattered console.log statements.
LogTape isn't the only way to achieve this, but I've found it hits a nice sweet spot: powerful enough for production use, simple enough that you're not fighting the framework, and light enough that you don't feel guilty adding it to a library.
If you want to dig deeper, the LogTape documentation covers advanced topics like custom filters, the “fingers crossed” pattern for buffering debug logs until an error occurs, and more sink options. The GitHub repository is also a good place to report issues or see what's coming next.
Now go add some proper logging to that side project you've been meaning to clean up. Your future 2 AM self will thank you.
@hongminhee@hackers.pub
So you need to send emails from your JavaScript application. Email remains one of the most essential features in web apps—welcome emails, password resets, notifications—but the ecosystem is fragmented. Nodemailer doesn't work on edge functions. Each provider has its own SDK. And if you're using Deno or Bun, good luck finding libraries that actually work.
This guide covers how to send emails across modern JavaScript runtimes using Upyo, a cross-runtime email library.
Disclosure
I'm the author of Upyo. This guide focuses on Upyo because I built it to solve problems I kept running into, but you should know that going in. If you're looking for alternatives: Nodemailer is the established choice for Node.js (though it doesn't work on Deno/Bun/edge), and most email providers offer their own official SDKs.
If you just want working code, here's the quickest path to sending an email:
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
const transport = new SmtpTransport({
host: "smtp.gmail.com",
port: 465,
secure: true,
auth: {
user: "your-email@gmail.com",
pass: "your-app-password", // Not your regular password!
},
});
const message = createMessage({
from: "your-email@gmail.com",
to: "recipient@example.com",
subject: "Hello from my app!",
content: { text: "This is my first email." },
});
const receipt = await transport.send(message);
if (receipt.successful) {
console.log("Sent:", receipt.messageId);
} else {
console.log("Failed:", receipt.errorMessages);
}
Install with:
npm add @upyo/core @upyo/smtp
That's it. This exact code works on Node.js, Deno, and Bun. But if you want to understand what's happening and explore more powerful options, read on.
Let's start with the most accessible option: Gmail's SMTP server. It's free, requires no additional accounts, and works great for development and low-volume production use.
Gmail doesn't allow you to use your regular password for SMTP. You need to create an app-specific password in your Google Account's security settings (two-step verification must be enabled first).
Choose your runtime and package manager:
Node.js
npm add @upyo/core @upyo/smtp
# or: pnpm add @upyo/core @upyo/smtp
# or: yarn add @upyo/core @upyo/smtp
Deno
deno add jsr:@upyo/core jsr:@upyo/smtp
Bun
bun add @upyo/core @upyo/smtp
The same code works across all three runtimes—that's the beauty of Upyo.
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
// Create the transport (reuse this for multiple emails)
const transport = new SmtpTransport({
host: "smtp.gmail.com",
port: 465,
secure: true,
auth: {
user: "your-email@gmail.com",
pass: "abcd efgh ijkl mnop", // Your app password
},
});
// Create and send a message
const message = createMessage({
from: "your-email@gmail.com",
to: "recipient@example.com",
subject: "Welcome to my app!",
content: {
text: "Thanks for signing up. We're excited to have you!",
html: "<h1>Welcome!</h1><p>Thanks for signing up. We're excited to have you!</p>",
},
});
const receipt = await transport.send(message);
if (receipt.successful) {
console.log("Email sent successfully! Message ID:", receipt.messageId);
} else {
console.error("Failed to send email:", receipt.errorMessages.join(", "));
}
// Don't forget to close connections when done
await transport.closeAllConnections();
Let me highlight a few important details:
- secure: true with port 465: This establishes a TLS-encrypted connection from the start. Gmail requires encryption, so this combination is essential.
- text and html content: Always provide both. Some email clients don't render HTML, and spam filters look more favorably on emails with plain text alternatives.
- The receipt pattern: Upyo uses discriminated unions for type-safe error handling. When receipt.successful is true, you get messageId. When it's false, you get errorMessages. This makes it impossible to forget error handling.
- Connection cleanup: Call closeAllConnections() when you're done, or use await using (shown next) to handle this automatically.
Managing resources manually is error-prone—what if an exception occurs before closeAllConnections() is called? Modern JavaScript (ES2024) solves this with explicit resource management: await using.
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
// Transport is automatically disposed when it goes out of scope
await using transport = new SmtpTransport({
host: "smtp.gmail.com",
port: 465,
secure: true,
auth: {
user: "your-email@gmail.com",
pass: "your-app-password",
},
});
const message = createMessage({
from: "your-email@gmail.com",
to: "recipient@example.com",
subject: "Hello!",
content: { text: "This email was sent with automatic cleanup!" },
});
await transport.send(message);
// No need to call `closeAllConnections()` - it happens automatically!
The await using keyword tells JavaScript to call the transport's cleanup method when execution leaves this scope—even if an error is thrown. This pattern is similar to Python's with statement or C#'s using block. It's supported in Node.js 22+, Deno, and Bun.
What if your environment doesn't support await using?
For older Node.js versions or environments without ES2024 support, use try/finally to ensure cleanup:
const transport = new SmtpTransport({
host: "smtp.gmail.com",
port: 465,
secure: true,
auth: { user: "your-email@gmail.com", pass: "your-app-password" },
});
try {
await transport.send(message);
} finally {
await transport.closeAllConnections();
}
This achieves the same result—cleanup happens whether the send succeeds or throws an error.
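If you end up repeating that boilerplate, you can fold it into a small helper. A sketch under the same assumptions as above; the withTransport name is my own, not part of Upyo:
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
// Run `fn` with a transport and always close its connections afterwards.
async function withTransport<T>(
  transport: SmtpTransport,
  fn: (t: SmtpTransport) => Promise<T>,
): Promise<T> {
  try {
    return await fn(transport);
  } finally {
    await transport.closeAllConnections();
  }
}
const receipt = await withTransport(
  new SmtpTransport({
    host: "smtp.gmail.com",
    port: 465,
    secure: true,
    auth: { user: "your-email@gmail.com", pass: "your-app-password" },
  }),
  (t) =>
    t.send(
      createMessage({
        from: "your-email@gmail.com",
        to: "recipient@example.com",
        subject: "Hello!",
        content: { text: "Sent with guaranteed cleanup." },
      }),
    ),
);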
Real-world emails often need more than plain text.
Inline images appear directly in the email body rather than as downloadable attachments. The trick is to reference them using a Content-ID (CID) URL scheme.
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
import { readFile } from "node:fs/promises";
await using transport = new SmtpTransport({
host: "smtp.gmail.com",
port: 465,
secure: true,
auth: { user: "your-email@gmail.com", pass: "your-app-password" },
});
// Read your logo file
const logoContent = await readFile("./assets/logo.png");
const message = createMessage({
from: "your-email@gmail.com",
to: "customer@example.com",
subject: "Your order confirmation",
content: {
html: `
<div style="font-family: sans-serif; max-width: 600px; margin: 0 auto;">
<img src="cid:company-logo" alt="Company Logo" style="width: 150px;">
<h1>Order Confirmed!</h1>
<p>Thank you for your purchase. Your order #12345 has been confirmed.</p>
</div>
`,
text: "Order Confirmed! Thank you for your purchase. Your order #12345 has been confirmed.",
},
attachments: [
{
filename: "logo.png",
content: logoContent,
contentType: "image/png",
contentId: "company-logo", // Referenced as cid:company-logo in HTML
inline: true,
},
],
});
await transport.send(message);
Key points about inline images:
- contentId: This is the identifier you use in the HTML's src="cid:..." attribute. It can be any unique string.
- inline: true: This tells the email client to display the image within the message body, not as a separate attachment.
- alt text: Some email clients block images by default, so the alt text ensures your message is still understandable.
For regular attachments that recipients can download, use the standard File API. This approach works across all JavaScript runtimes.
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
import { readFile } from "node:fs/promises";
await using transport = new SmtpTransport({
host: "smtp.gmail.com",
port: 465,
secure: true,
auth: { user: "your-email@gmail.com", pass: "your-app-password" },
});
// Read files to attach
const invoicePdf = await readFile("./invoices/invoice-2024-001.pdf");
const reportXlsx = await readFile("./reports/monthly-report.xlsx");
const message = createMessage({
from: "billing@yourcompany.com",
to: "client@example.com",
cc: "accounting@yourcompany.com",
subject: "Invoice #2024-001",
content: {
text: "Please find your invoice and monthly report attached.",
},
attachments: [
new File([invoicePdf], "invoice-2024-001.pdf", { type: "application/pdf" }),
new File([reportXlsx], "monthly-report.xlsx", {
type: "application/vnd.openxmlformats-officedocument.spreadsheetml.sheet",
}),
],
priority: "high", // Sets email priority headers
});
await transport.send(message);
A few notes on attachments:
- type: Helps email clients display the right icon and open the file with the appropriate application.
- priority: "high": This sets the X-Priority header, which some email clients use to highlight important messages. Use it sparingly—overuse can trigger spam filters.
Email supports several recipient types, each with different visibility rules:
import { createMessage } from "@upyo/core";
const message = createMessage({
from: { name: "Support Team", address: "support@yourcompany.com" },
to: [
"primary-recipient@example.com",
{ name: "John Smith", address: "john@example.com" },
],
cc: "manager@yourcompany.com",
bcc: ["archive@yourcompany.com", "compliance@yourcompany.com"],
replyTo: "no-reply@yourcompany.com",
subject: "Your support ticket has been updated",
content: { text: "We've responded to your ticket #5678." },
});
Understanding recipient types:
- to: Primary recipients. Everyone can see who else is in this field.
- cc (Carbon Copy): Secondary recipients. Visible to all recipients—use for people who should be informed but aren't the primary audience.
- bcc (Blind Carbon Copy): Hidden recipients. No one can see BCC addresses—useful for archiving or compliance without revealing internal processes.
- replyTo: Where replies should go. Useful when sending from a no-reply address but wanting responses to reach a real inbox.
You can specify addresses as simple strings ("email@example.com") or as objects with name and address properties for display names.
Gmail SMTP is great for getting started, but for production applications you'll want a dedicated email service provider: Gmail enforces low daily sending limits, and dedicated providers offer better deliverability, domain authentication, analytics, and higher throughput.
The best part? With Upyo, switching providers requires minimal code changes—just swap the transport.
Resend is a newer email service with an excellent developer experience.
npm add @upyo/resend
import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";
const transport = new ResendTransport({
apiKey: process.env.RESEND_API_KEY!,
});
const message = createMessage({
from: "hello@yourdomain.com", // Must be verified in Resend
to: "user@example.com",
subject: "Welcome aboard!",
content: {
text: "Thanks for joining us!",
html: "<h1>Welcome!</h1><p>Thanks for joining us!</p>",
},
tags: ["onboarding", "welcome"], // For analytics
});
const receipt = await transport.send(message);
if (receipt.successful) {
console.log("Sent via Resend:", receipt.messageId);
}
Notice how similar this looks to the SMTP example? The only differences are the import and the transport configuration. Your message creation and sending logic stays exactly the same—that's Upyo's transport abstraction at work.
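To make that concrete, here's one way to pick the provider at startup while keeping the sending code identical. A sketch; the environment-variable names (RESEND_API_KEY, GMAIL_USER, GMAIL_APP_PASSWORD) are just placeholders for however you manage configuration:
import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";
import { SmtpTransport } from "@upyo/smtp";
// Choose a provider once, at startup.
const transport = process.env.RESEND_API_KEY
  ? new ResendTransport({ apiKey: process.env.RESEND_API_KEY })
  : new SmtpTransport({
      host: "smtp.gmail.com",
      port: 465,
      secure: true,
      auth: {
        user: process.env.GMAIL_USER!,
        pass: process.env.GMAIL_APP_PASSWORD!,
      },
    });
// The sending code is the same either way.
const message = createMessage({
  from: "hello@yourdomain.com",
  to: "user@example.com",
  subject: "Same message, either provider",
  content: { text: "This code doesn't care which transport delivers it." },
});
const receipt = await transport.send(message);
if (!receipt.successful) {
  console.error("Failed:", receipt.errorMessages.join(", "));
}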
SendGrid is a popular choice for high-volume senders, offering advanced analytics, template management, and a generous free tier.
npm add @upyo/sendgrid
import { createMessage } from "@upyo/core";
import { SendGridTransport } from "@upyo/sendgrid";
const transport = new SendGridTransport({
apiKey: process.env.SENDGRID_API_KEY!,
clickTracking: true,
openTracking: true,
});
const message = createMessage({
from: "notifications@yourdomain.com",
to: "user@example.com",
subject: "Your weekly digest",
content: {
html: "<h1>This Week's Highlights</h1><p>Here's what you missed...</p>",
text: "This Week's Highlights\n\nHere's what you missed...",
},
tags: ["digest", "weekly"],
});
await transport.send(message);
Mailgun offers robust infrastructure with strong EU support—important if you need GDPR-compliant data residency.
npm add @upyo/mailgun
import { createMessage } from "@upyo/core";
import { MailgunTransport } from "@upyo/mailgun";
const transport = new MailgunTransport({
apiKey: process.env.MAILGUN_API_KEY!,
domain: "mg.yourdomain.com",
region: "eu", // or "us"
});
const message = createMessage({
from: "team@yourdomain.com",
to: "user@example.com",
subject: "Important update",
content: { text: "We have some news to share..." },
});
await transport.send(message);
Amazon SES is incredibly affordable—about $0.10 per 1,000 emails. If you're already in the AWS ecosystem, it integrates seamlessly with IAM, CloudWatch, and other services.
npm add @upyo/ses
import { createMessage } from "@upyo/core";
import { SesTransport } from "@upyo/ses";
const transport = new SesTransport({
authentication: {
type: "credentials",
accessKeyId: process.env.AWS_ACCESS_KEY_ID!,
secretAccessKey: process.env.AWS_SECRET_ACCESS_KEY!,
},
region: "us-east-1",
configurationSetName: "my-config-set", // Optional: for tracking
});
const message = createMessage({
from: "alerts@yourdomain.com",
to: "admin@example.com",
subject: "System alert",
content: { text: "CPU usage exceeded 90%" },
priority: "high",
});
await transport.send(message);
Here's where many email solutions fall short. Edge functions (Cloudflare Workers, Vercel Edge, Deno Deploy) run in a restricted environment—they can't open raw TCP connections, which means SMTP is not an option.
You must use an HTTP-based transport like Resend, SendGrid, Mailgun, or Amazon SES. The good news? Your code barely changes.
// src/index.ts
import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";
export default {
async fetch(request: Request, env: Env): Promise<Response> {
const transport = new ResendTransport({
apiKey: env.RESEND_API_KEY,
});
const message = createMessage({
from: "noreply@yourdomain.com",
to: "user@example.com",
subject: "Request received",
content: { text: "We got your request and are processing it." },
});
const receipt = await transport.send(message);
if (receipt.successful) {
return new Response(`Email sent: ${receipt.messageId}`);
} else {
return new Response(`Failed: ${receipt.errorMessages.join(", ")}`, {
status: 500,
});
}
},
};
interface Env {
RESEND_API_KEY: string;
}
// app/api/send-email/route.ts
import { createMessage } from "@upyo/core";
import { SendGridTransport } from "@upyo/sendgrid";
export const runtime = "edge";
export async function POST(request: Request) {
const { to, subject, body } = await request.json();
const transport = new SendGridTransport({
apiKey: process.env.SENDGRID_API_KEY!,
});
const message = createMessage({
from: "app@yourdomain.com",
to,
subject,
content: { text: body },
});
const receipt = await transport.send(message);
if (receipt.successful) {
return Response.json({ success: true, messageId: receipt.messageId });
} else {
return Response.json(
{ success: false, errors: receipt.errorMessages },
{ status: 500 }
);
}
}
// main.ts
import { createMessage } from "jsr:@upyo/core";
import { MailgunTransport } from "jsr:@upyo/mailgun";
Deno.serve(async (request: Request) => {
if (request.method !== "POST") {
return new Response("Method not allowed", { status: 405 });
}
const { to, subject, body } = await request.json();
const transport = new MailgunTransport({
apiKey: Deno.env.get("MAILGUN_API_KEY")!,
domain: Deno.env.get("MAILGUN_DOMAIN")!,
region: "us",
});
const message = createMessage({
from: "noreply@yourdomain.com",
to,
subject,
content: { text: body },
});
const receipt = await transport.send(message);
if (receipt.successful) {
return Response.json({ success: true, messageId: receipt.messageId });
} else {
return Response.json(
{ success: false, errors: receipt.errorMessages },
{ status: 500 }
);
}
});
Ever wonder why some emails land in spam while others don't? Email authentication plays a huge role. DKIM (DomainKeys Identified Mail) is one of the key mechanisms—it lets you digitally sign your emails so recipients can verify they actually came from your domain and weren't tampered with in transit.
Without DKIM, receiving servers have no way to verify that a message really came from your domain, so your emails are far more likely to be flagged as spam or rejected outright.
First, generate a DKIM key pair. You can use OpenSSL:
# Generate a 2048-bit RSA private key
openssl genrsa -out dkim-private.pem 2048
# Extract the public key
openssl rsa -in dkim-private.pem -pubout -out dkim-public.pem
Then configure your SMTP transport:
import { createMessage } from "@upyo/core";
import { SmtpTransport } from "@upyo/smtp";
import { readFileSync } from "node:fs";
const transport = new SmtpTransport({
host: "smtp.example.com",
port: 587,
secure: false,
auth: {
user: "user@yourdomain.com",
pass: "password",
},
dkim: {
signatures: [
{
signingDomain: "yourdomain.com",
selector: "mail", // Creates DNS record at mail._domainkey.yourdomain.com
privateKey: readFileSync("./dkim-private.pem", "utf8"),
algorithm: "rsa-sha256", // or "ed25519-sha256" for shorter keys
},
],
},
});
The key configuration options:
- signingDomain: Must match your email's "From" domain.
- selector: An arbitrary name that becomes part of your DNS record (e.g., mail creates a record at mail._domainkey.yourdomain.com).
- algorithm: RSA-SHA256 is widely supported; Ed25519-SHA256 offers shorter keys (see below).
Next, add a TXT record to your domain's DNS:
- Name: mail._domainkey (or mail._domainkey.yourdomain.com, depending on your DNS provider)
- Value: v=DKIM1; k=rsa; p=YOUR_PUBLIC_KEY_HERE
Extract the public key value (remove headers, footers, and newlines from the .pem file):
cat dkim-public.pem | grep -v "^-" | tr -d '\n'
RSA-2048 keys are long—about 400 characters for the public key. This can be problematic because DNS TXT records have size limits, and some DNS providers struggle with long records.
Ed25519 provides equivalent security with much shorter keys (around 44 characters). If your email infrastructure supports it, Ed25519 is the modern choice.
# Generate Ed25519 key pair
openssl genpkey -algorithm ed25519 -out dkim-ed25519-private.pem
openssl pkey -in dkim-ed25519-private.pem -pubout -out dkim-ed25519-public.pem
const transport = new SmtpTransport({
// ... other config
dkim: {
signatures: [
{
signingDomain: "yourdomain.com",
selector: "mail2025",
privateKey: readFileSync("./dkim-ed25519-private.pem", "utf8"),
algorithm: "ed25519-sha256",
},
],
},
});
When you need to send emails to many recipients—newsletters, notifications, marketing campaigns—you have two approaches:
The naive approach is to call send() in a loop:
// ❌ Don't do this for bulk sending
for (const subscriber of subscribers) {
await transport.send(createMessage({
from: "newsletter@example.com",
to: subscriber.email,
subject: "Weekly update",
content: { text: "..." },
}));
}
This works, but it's inefficient: each send() call waits for the previous one to complete before the next begins.
The better alternative is sendMany(), which is designed for bulk operations:
import { createMessage } from "@upyo/core";
import { ResendTransport } from "@upyo/resend";
const transport = new ResendTransport({
apiKey: process.env.RESEND_API_KEY!,
});
const subscribers = [
{ email: "alice@example.com", name: "Alice" },
{ email: "bob@example.com", name: "Bob" },
{ email: "charlie@example.com", name: "Charlie" },
// ... potentially thousands more
];
// Create personalized messages
const messages = subscribers.map((subscriber) =>
createMessage({
from: "newsletter@yourdomain.com",
to: subscriber.email,
subject: "Your weekly digest",
content: {
html: `<h1>Hi ${subscriber.name}!</h1><p>Here's what's new this week...</p>`,
text: `Hi ${subscriber.name}!\n\nHere's what's new this week...`,
},
tags: ["newsletter", "weekly"],
})
);
// Send all messages efficiently
let successCount = 0;
let failureCount = 0;
for await (const receipt of transport.sendMany(messages)) {
if (receipt.successful) {
successCount++;
} else {
failureCount++;
console.error("Failed:", receipt.errorMessages.join(", "));
}
}
console.log(`Sent: ${successCount}, Failed: ${failureCount}`);
Because sendMany() returns receipts as an async iterable, you can also track progress while a large batch is in flight:
const totalMessages = messages.length;
let processed = 0;
for await (const receipt of transport.sendMany(messages)) {
processed++;
if (processed % 100 === 0) {
console.log(`Progress: ${processed}/${totalMessages} (${Math.round((processed / totalMessages) * 100)}%)`);
}
if (!receipt.successful) {
console.error(`Message ${processed} failed:`, receipt.errorMessages);
}
}
console.log("Batch complete!");
When to use send() vs sendMany():
| Scenario | Use |
|---|---|
| Single transactional email (welcome, password reset) | send() |
| A few emails (under 10) | send() in a loop is fine |
| Newsletters, bulk notifications | sendMany() |
| Batch processing from a queue | sendMany() |
Upyo includes a MockTransport for testing:
import { createMessage } from "@upyo/core";
import { MockTransport } from "@upyo/mock";
import assert from "node:assert";
import { describe, it, beforeEach } from "node:test";
describe("Email functionality", () => {
let transport: MockTransport;
beforeEach(() => {
transport = new MockTransport();
});
it("should send welcome email after registration", async () => {
// Your application code would call this
const message = createMessage({
from: "welcome@yourapp.com",
to: "newuser@example.com",
subject: "Welcome to our app!",
content: { text: "Thanks for signing up!" },
});
const receipt = await transport.send(message);
// Assertions
assert.strictEqual(receipt.successful, true);
assert.strictEqual(transport.getSentMessagesCount(), 1);
const sentMessage = transport.getLastSentMessage();
assert.strictEqual(sentMessage?.subject, "Welcome to our app!");
assert.strictEqual(sentMessage?.recipients[0].address, "newuser@example.com");
});
it("should handle email failures gracefully", async () => {
// Simulate a failure
transport.setNextResponse({
successful: false,
errorMessages: ["Invalid recipient address"],
});
const message = createMessage({
from: "test@yourapp.com",
to: "invalid-email",
subject: "Test",
content: { text: "Test" },
});
const receipt = await transport.send(message);
assert.strictEqual(receipt.successful, false);
assert.ok(receipt.errorMessages.includes("Invalid recipient address"));
});
});
The key testing methods:
- getSentMessagesCount(): How many emails were "sent".
- getLastSentMessage(): The most recent message.
- getSentMessages(): All messages as an array.
- setNextResponse(): Force the next send to succeed or fail with specific errors.
MockTransport can also simulate real-world conditions such as latency and intermittent failures:
import { MockTransport } from "@upyo/mock";
// Simulate network delays
const slowTransport = new MockTransport({
delay: 500, // 500ms delay per email
});
// Simulate random failures (10% failure rate)
const unreliableTransport = new MockTransport({
failureRate: 0.1,
});
// Simulate variable latency
const realisticTransport = new MockTransport({
randomDelayRange: { min: 100, max: 500 },
});
You can also wait for emails that are sent asynchronously, which is useful when the send happens deep inside your application code:
import { MockTransport } from "@upyo/mock";
const transport = new MockTransport();
// Start your async operation that sends emails
startUserRegistration("newuser@example.com");
// Wait for the expected emails to be sent
await transport.waitForMessageCount(2, 5000); // Wait for 2 emails, 5s timeout
// Or wait for a specific email
const welcomeEmail = await transport.waitForMessage(
(msg) => msg.subject.includes("Welcome"),
3000
);
console.log("Welcome email was sent:", welcomeEmail.subject);
What happens if your email provider goes down? For mission-critical applications, you need redundancy. PoolTransport combines multiple providers with automatic failover—if one fails, it tries the next.
import { PoolTransport } from "@upyo/pool";
import { ResendTransport } from "@upyo/resend";
import { SendGridTransport } from "@upyo/sendgrid";
import { MailgunTransport } from "@upyo/mailgun";
import { createMessage } from "@upyo/core";
// Create multiple transports
const resend = new ResendTransport({ apiKey: process.env.RESEND_API_KEY! });
const sendgrid = new SendGridTransport({ apiKey: process.env.SENDGRID_API_KEY! });
const mailgun = new MailgunTransport({
apiKey: process.env.MAILGUN_API_KEY!,
domain: "mg.yourdomain.com",
});
// Combine them with priority-based failover
const transport = new PoolTransport({
strategy: "priority",
transports: [
{ transport: resend, priority: 100 }, // Try first
{ transport: sendgrid, priority: 50 }, // Fallback
{ transport: mailgun, priority: 10 }, // Last resort
],
maxRetries: 3,
});
const message = createMessage({
from: "critical@yourdomain.com",
to: "admin@example.com",
subject: "Critical alert",
content: { text: "This email will try multiple providers if needed." },
});
const receipt = await transport.send(message);
// Automatically tries Resend first, then SendGrid, then Mailgun if others fail
The priority values determine the order—higher numbers are tried first. If Resend fails (network error, rate limit, etc.), the pool automatically retries with SendGrid, then Mailgun.
For more advanced routing strategies (weighted distribution, content-based routing), see the pool transport documentation.
In production, you'll want to track email metrics: send rates, failure rates, latency. Upyo integrates with OpenTelemetry:
import { createOpenTelemetryTransport } from "@upyo/opentelemetry";
import { SmtpTransport } from "@upyo/smtp";
const baseTransport = new SmtpTransport({
host: "smtp.example.com",
port: 587,
auth: { user: "user", pass: "password" },
});
const transport = createOpenTelemetryTransport(baseTransport, {
serviceName: "email-service",
tracing: { enabled: true },
metrics: { enabled: true },
});
// Now all email operations generate traces and metrics automatically
await transport.send(message);
This gives you traces for each send operation and metrics such as send counts, failures, and latency, without adding any instrumentation to your sending code.
See the OpenTelemetry documentation for details.
| Scenario | Recommended Transport |
|---|---|
| Development/testing | Gmail SMTP or MockTransport |
| Small production app | Resend or SendGrid |
| High volume (100k+/month) | Amazon SES |
| Edge functions | Resend, SendGrid, or Mailgun |
| Self-hosted infrastructure | SMTP with DKIM |
| Mission-critical | PoolTransport with failover |
| EU data residency | Mailgun (EU region) or self-hosted |
This guide covered the most popular transports, but Upyo supports other providers beyond these.
And you can always create a custom transport for any email service not yet supported.
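As a rough illustration of what that involves, here's a hand-rolled transport that posts to a made-up HTTP API. Everything here is hypothetical: the endpoint, the payload, and the simplified message and receipt shapes are stand-ins, and a real implementation should follow the Transport interface described in the Upyo documentation.
// Simplified stand-ins for Upyo's Message and Receipt types, for illustration only.
interface SimpleMessage {
  from: string;
  to: string;
  subject: string;
  text: string;
}
type SimpleReceipt =
  | { successful: true; messageId: string }
  | { successful: false; errorMessages: string[] };
class ExampleHttpTransport {
  constructor(private readonly apiKey: string) {}
  async send(message: SimpleMessage): Promise<SimpleReceipt> {
    // Endpoint and payload are invented; substitute your provider's actual API.
    const response = await fetch("https://api.example-mail.invalid/v1/send", {
      method: "POST",
      headers: {
        Authorization: `Bearer ${this.apiKey}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify(message),
    });
    if (!response.ok) {
      return {
        successful: false,
        errorMessages: [`HTTP ${response.status}: ${await response.text()}`],
      };
    }
    const { id } = (await response.json()) as { id: string };
    return { successful: true, messageId: id };
  }
  // Naive bulk sending: yield one receipt per message, mirroring sendMany().
  async *sendMany(messages: Iterable<SimpleMessage>) {
    for (const message of messages) {
      yield await this.send(message);
    }
  }
}
const transport = new ExampleHttpTransport("api-key-here");
const receipt = await transport.send({
  from: "noreply@yourdomain.com",
  to: "user@example.com",
  subject: "Hello",
  text: "Sent through a custom transport.",
});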
Have questions or feedback? Feel free to open an issue.
What's been your biggest pain point when sending emails from JavaScript? Let me know in the comments—I'm curious what challenges others have run into.
Upyo (pronounced /oo-pyo/) comes from the Korean word 郵票, meaning “postage stamp.”

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
In the end, I greeted 2026 without managing to get the release out...! This month, for sure, I'm absolutely going to ship the new version...!

@hongminhee@hollo.social · Reply to jnkrtech's post
@jnkrtech True commitment to the Mac workflow: including the “Apple tax” equivalent. 😂

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
Happy New Year! I look forward to your continued support in 2026.

@hongminhee@hollo.social
Wishing you a happy new year in 2026 as well!

@hongminhee@hollo.social
If you want to use Linux but also want the “it just works” experience of a Mac, I recommend Fedora Linux. Out of all the Linux distros I've tried, it's the most low-maintenance one.
Of course, if what I just said rubs you the wrong way, then you should be using Arch Linux. No, wait, you probably already are.

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
I posted a blog entry to wrap up the year: My 2025 with the fediverse. I'm grateful that the fediverse has allowed me to connect with so many people. I look forward to our continued connection.

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
I posted an article for day 10 of the Fediverse Advent Calendar 2025 on my blog: "My 2025 with the Fediverse." As the title says, it looks back on the year I spent with the fediverse. Thanks to the fediverse, I've been blessed with many connections, and I'm grateful. I look forward to our continued connection.

@hongminhee@hollo.social
I wrote a post on my blog to wrap up the year: "My 2025 with the Fediverse" (the Hangul-only version is here). As the title says, it looks back on the year I spent with the fediverse. I'm grateful that the fediverse has connected me with so many people.

@hongminhee@hollo.social · Reply to bgl gwyng's post
@bgl Hmm, now that I think about it, you're right. But if you keep going down that path, it also seems like you'd end up getting close to something like LangGraph or Mastra...? 🤔