David Lord 
@davidism@mas.to
Windows special device names are cursed: https://chrisdenton.github.io/omnipath/Special%20Dos%20Device%20Names.html This is the subject of two recent Werkzeug CVEs. #windows #python #flask


@hongminhee@hollo.social · 1001 following · 1422 followers
An intersectionalist, feminist, and socialist living in Seoul (UTC+09:00). @tokolovesme's spouse. Who's behind @fedify, @hollo, and @botkit. Write some free software in #TypeScript, #Haskell, #Rust, & #Python. They/them.
서울에 사는 交叉女性主義者이자 社會主義者. 金剛兔(@tokolovesme)의 配偶者. @fedify, @hollo, @botkit 메인테이너. #TypeScript, #Haskell, #Rust, #Python 等으로 自由 소프트웨어 만듦.
| Website | GitHub | Blog | Hackers' Pub |
|---|---|---|---|

@fancysandwiches@neuromatch.social
One of the ways I'm dealing with AI slop at work is that when I'm giving feedback on the work, I make sure never to assign the responsibility for the bad code to the AI. I'm directly saying that "this change that YOU made needs to be corrected". I'm always assigning the output of the AI to the person who put me in the position of reviewing the work. It is their responsibility to read the code that they're asking to have reviewed; they are responsible for 100% of the code, so they also get 100% of the blame when it's bad. If a change is confusing or nonsensical I'll ask "why did YOU make this change?" I'll never ask why an AI made a change; that we cannot know. All we can know is why someone thought it was acceptable to ship garbage, and we can assign them the responsibility for the garbage that they're willing to ship.
@fediversereport@mastodon.social
New from me: X is A Power Problem, Not a Platform Problem
https://connectedplaces.online/reports/a-power-problem-not-a-platform-problem/

@hongminhee@hollo.social
I've always believed that structured logging shouldn't be complicated. Seeing Sentry echo this sentiment in their latest engineering blog post—and using LogTape to demonstrate it—is a massive validation for me.
They did a great job explaining why we need to move beyond console.log() in production. Really proud to see my work mentioned alongside such a standard-setting tool.
https://blog.sentry.io/trace-connected-structured-logging-with-logtape-and-sentry/
@silverpill@mitra.social
Mastodon got a working implementation of a thread collection (context). I'm adding it to the list of implementations in FEP-f228: https://codeberg.org/fediverse/fep/pulls/745
RE: https://mastodon.social/users/MastodonEngineering/statuses/115854312836282687
@MastodonEngineering@mastodon.social
We just released Mastodon 4.5.4, 4.4.11, 4.3.17 and 4.2.29.
These versions contain various bug fixes, including one high-severity and one moderate-severity security fix.
Mastodon v4.2.29 will be the last update for the Mastodon v4.2 branch; please update to newer versions as soon as you can.
Full release notes and update instructions are available on the GitHub releases page.
@deno_land@fosstodon.org
Deno v2.6.4 just shipped with a fix for Intel Macs and a big performance improvement to the `node:http` module.

@hongminhee@hollo.social
I finally gave in and wrote my own markdownlint rules to enforce my peculiar and stubborn Markdown style. Probably no one else will ever need these, but I've published them as open source anyway.
@jdv_jazz@mastodon.nl
Ryo Fukui - It Could Happen To You
#JazzDeVille #Jazz #NowPlaying #RyoFukui

@hongminhee@hollo.social
Today is one of those days that comes a few times a year when I just have to listen to Michael Jackson.

@hongminhee@hollo.social
Wrote a tutorial on building CLI apps with Optique, a TypeScript CLI parser I've been working on. If you've ever wanted discriminated unions from your argument parser, this might interest you.
@hongminhee@hackers.pub
We've all been there. You start a quick TypeScript CLI with process.argv.slice(2), add a couple of options, and before you know it you're drowning in if/else blocks and parseInt calls. It works, until it doesn't.
In this guide, we'll move from manual argument parsing to a fully type-safe CLI with subcommands, mutually exclusive options, and shell completion.
Let's start with the most basic approach: raw process.argv. Say we want a greeting program that takes a name and optionally repeats the greeting:
// greet.ts
const args = process.argv.slice(2);
let name: string | undefined;
let count = 1;
for (let i = 0; i < args.length; i++) {
if (args[i] === "--name" || args[i] === "-n") {
name = args[++i];
} else if (args[i] === "--count" || args[i] === "-c") {
count = parseInt(args[++i], 10);
}
}
if (!name) {
console.error("Error: --name is required");
process.exit(1);
}
for (let i = 0; i < count; i++) {
console.log(`Hello, ${name}!`);
}
Run node greet.js --name Alice --count 3 (after compiling greet.ts) and you'll get three greetings.
But this approach is fragile. count could be NaN if someone passes --count foo, and we'd silently proceed. There's no help text. If someone passes --name without a value, we'd read the next option as the name. And the boilerplate grows fast with each new option.
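To make the fragility concrete, here is a small illustration (hypothetical inputs, same parsing logic as above):
// "--count foo" slips through: parseInt returns NaN and the greeting loop prints nothing.
const badCount = parseInt("foo", 10);
console.log(badCount);                // NaN
console.log(Number.isNaN(badCount));  // true, but greet.ts never checks for this

// "--name" without a value swallows the next token:
//   node greet.js --name --count 3   =>  name === "--count", count stays 1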
You've probably heard of Commander.js and Yargs. They've been around for years and solve the basic problems:
// With Commander.js
import { program } from "commander";
program
.requiredOption("-n, --name <name>", "Name to greet")
.option("-c, --count <number>", "Number of times to greet", "1")
.parse();
const opts = program.opts();
These libraries handle help text, option parsing, and basic validation. But they were designed before TypeScript became mainstream, and the type safety is bolted on rather than built in.
The real problem shows up when you need mutually exclusive options. Say your CLI works either in "server mode" (with --port and --host) or "client mode" (with --url). With these libraries, you end up with a config object where all options are potentially present, and you're left writing runtime checks to ensure the user didn't mix incompatible flags. TypeScript can't help you because the types don't reflect the actual constraints.
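For comparison, here is a sketch (not from the original article; option names are illustrative) of the kind of hand-written check you end up with in a configuration-based library:
// With Commander.js, every option is potentially present, so the
// server/client exclusivity has to be enforced by hand at runtime.
import { program } from "commander";

program
  .option("--port <number>", "Server port")
  .option("--host <host>", "Server host")
  .option("--url <url>", "Client URL")
  .parse();

const opts = program.opts(); // conceptually { port?: string; host?: string; url?: string }

if (opts.url && (opts.port || opts.host)) {
  console.error("Error: --url cannot be combined with --port/--host");
  process.exit(1);
}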
Optique takes a different approach. Instead of configuring options declaratively, you build parsers by composing smaller parsers together. The types flow naturally from this composition, so TypeScript always knows exactly what shape your parsed result will have.
Optique works across JavaScript runtimes: Node.js, Deno, and Bun are all supported. The core parsing logic has no runtime-specific dependencies, so you can even use it in browsers if you need to parse CLI-like arguments in a web context.
Let's rebuild our greeting program:
import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { integer, string } from "@optique/core/valueparser";
import { withDefault } from "@optique/core/modifiers";
import { run } from "@optique/run";
const parser = object({
name: option("-n", "--name", string()),
count: withDefault(option("-c", "--count", integer({ min: 1 })), 1),
});
const config = run(parser);
// config is typed as { name: string; count: number }
for (let i = 0; i < config.count; i++) {
console.log(`Hello, ${config.name}!`);
}
Types are inferred automatically. config.name is string, not string | undefined. config.count is number, guaranteed to be at least 1. Validation is built in: integer({ min: 1 }) rejects non-integers and values below 1 with clear error messages. Help text is generated automatically, and the run() function handles errors and exits with appropriate codes.
Install it with your package manager of choice:
npm add @optique/core @optique/run
# or: pnpm add, yarn add, bun add, deno add jsr:@optique/core jsr:@optique/run
Let's build something more realistic: a file converter that reads from an input file, converts to a specified format, and writes to an output file.
import { object } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, option } from "@optique/core/primitives";
import { choice, string } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = object({
input: argument(string({ metavar: "INPUT" })),
output: option("-o", "--output", string({ metavar: "FILE" })),
format: withDefault(
option("-f", "--format", choice(["json", "yaml", "toml"])),
"json"
),
pretty: option("-p", "--pretty"),
verbose: option("-v", "--verbose"),
});
const config = run(parser, {
help: "both",
version: { mode: "both", value: "1.0.0" },
});
// config.input: string
// config.output: string
// config.format: "json" | "yaml" | "toml"
// config.pretty: boolean
// config.verbose: boolean
The type of config.format isn't just string. It's the union "json" | "yaml" | "toml". TypeScript will catch typos like config.format === "josn" at compile time.
The choice() parser is useful for any option with a fixed set of valid values: log levels, output formats, environment names, and so on. You get both runtime validation (invalid values are rejected with helpful error messages) and compile-time checking (TypeScript knows the exact set of possible values).
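As an illustration of how the literal union pays off downstream (a sketch, assuming config.format comes from the parser above), an exhaustive switch fails to compile if a new format is added to choice() but not handled:
function serialize(data: unknown, format: "json" | "yaml" | "toml"): string {
  switch (format) {
    case "json":
      return JSON.stringify(data, null, 2);
    case "yaml":
    case "toml":
      // A real implementation would call a yaml/toml library here.
      return String(data);
    default: {
      // Adding e.g. "xml" to choice() without handling it turns this line into a type error.
      const unhandled: never = format;
      return unhandled;
    }
  }
}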
Now let's tackle the case that trips up most CLI libraries: mutually exclusive options. Say our tool can either run as a server or connect as a client, but not both:
import { object, or } from "@optique/core/constructs";
import { withDefault } from "@optique/core/modifiers";
import { argument, constant, option } from "@optique/core/primitives";
import { integer, string, url } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = or(
// Server mode
object({
mode: constant("server"),
port: option("-p", "--port", integer({ min: 1, max: 65535 })),
host: withDefault(option("-h", "--host", string()), "0.0.0.0"),
}),
// Client mode
object({
mode: constant("client"),
url: argument(url()),
}),
);
const config = run(parser);
The or() combinator tries each alternative in order. The first one that successfully parses wins. The constant() parser adds a literal value to the result without consuming any input, which serves as a discriminator.
TypeScript infers a discriminated union:
type Config =
| { mode: "server"; port: number; host: string }
| { mode: "client"; url: URL };
Now you can write type-safe code that handles each mode:
if (config.mode === "server") {
console.log(`Starting server on ${config.host}:${config.port}`);
} else {
console.log(`Connecting to ${config.url.hostname}`);
}
Try accessing config.url in the server branch. TypeScript won't let you. The compiler knows that when mode is "server", only port and host exist.
This is the key difference from configuration-based libraries. With Commander or Yargs, you'd get a type like { port?: number; host?: string; url?: string } and have to check at runtime which combination of fields is actually present. With Optique, the types match the actual constraints of your CLI.
For larger tools, you'll want subcommands. Optique handles this with the command() parser:
import { object, or } from "@optique/core/constructs";
import { optional } from "@optique/core/modifiers";
import { argument, command, constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { run } from "@optique/run";
const parser = or(
command("add", object({
action: constant("add"),
key: argument(string({ metavar: "KEY" })),
value: argument(string({ metavar: "VALUE" })),
})),
command("remove", object({
action: constant("remove"),
key: argument(string({ metavar: "KEY" })),
})),
command("list", object({
action: constant("list"),
pattern: optional(option("-p", "--pattern", string())),
})),
);
const result = run(parser, { help: "both" });
switch (result.action) {
case "add":
console.log(`Adding ${result.key}=${result.value}`);
break;
case "remove":
console.log(`Removing ${result.key}`);
break;
case "list":
console.log(`Listing${result.pattern ? ` (filter: ${result.pattern})` : ""}`);
break;
}
Each subcommand gets its own help text. Run myapp add --help and you'll see only the options relevant to add. Run myapp --help and you'll see a summary of all available commands.
The pattern here is the same as mutually exclusive options: or() to combine alternatives, constant() to add a discriminator. This consistency is one of Optique's strengths. Once you understand the basic combinators, you can build arbitrarily complex CLI structures by composing them.
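As a sketch of that composition going one level deeper (illustrative only; it assumes command() accepts any composed parser, as the combinator design suggests), a single subcommand can itself contain mutually exclusive modes:
import { object, or } from "@optique/core/constructs";
import { argument, command, constant, option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";
import { run } from "@optique/run";

// "deploy" either targets staging from a branch, or production from a tag.
const deploy = command("deploy", or(
  object({
    action: constant("deploy"),
    target: constant("staging"),
    branch: option("-b", "--branch", string({ metavar: "BRANCH" })),
  }),
  object({
    action: constant("deploy"),
    target: constant("production"),
    tag: argument(string({ metavar: "TAG" })),
  }),
));

const result = run(deploy);
// result: { action: "deploy"; target: "staging"; branch: string }
//       | { action: "deploy"; target: "production"; tag: string }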
Optique has built-in shell completion for Bash, zsh, fish, PowerShell, and Nushell. Enable it by passing completion: "both" to run():
const config = run(parser, {
help: "both",
version: { mode: "both", value: "1.0.0" },
completion: "both",
});
Users can then generate completion scripts:
$ myapp --completion bash >> ~/.bashrc
$ myapp --completion zsh >> ~/.zshrc
$ myapp --completion fish > ~/.config/fish/completions/myapp.fish
The completions are context-aware. They know about your subcommands, option values, and choice() alternatives. Type myapp --format <TAB> and you'll see json, yaml, toml as suggestions. Type myapp a<TAB> and it'll complete to myapp add.
Completion support is often an afterthought in CLI tools, but it makes a real difference in user experience. With Optique, you get it essentially for free.
Already using Zod for validation in your project? The @optique/zod package lets you reuse those schemas as CLI value parsers:
import { z } from "zod";
import { zod } from "@optique/zod";
import { option } from "@optique/core/primitives";
const email = option("--email", zod(z.string().email()));
const port = option("--port", zod(z.coerce.number().int().min(1).max(65535)));
Your existing validation logic just works. The Zod error messages are passed through to the user, so you get the same helpful feedback you're used to.
Prefer Valibot? The @optique/valibot package works the same way:
import * as v from "valibot";
import { valibot } from "@optique/valibot";
import { option } from "@optique/core/primitives";
const email = option("--email", valibot(v.pipe(v.string(), v.email())));
Valibot's bundle size is significantly smaller than Zod's (~10KB vs ~52KB), which can matter for CLI tools where startup time is noticeable.
A few things I've learned building CLIs with Optique:
Start simple. Begin with object() and basic options. Add or() for mutually exclusive groups only when you need them. It's easy to over-engineer CLI parsers.
Use descriptive metavars. Instead of string(), write string({ metavar: "FILE" }) or string({ metavar: "URL" }). The metavar appears in help text and error messages, so it's worth the extra few characters.
Leverage withDefault(). It's better than making options optional and checking for undefined everywhere. Your code becomes cleaner when you can assume values are always present.
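A minimal sketch of the difference (field names are illustrative):
import { object } from "@optique/core/constructs";
import { optional, withDefault } from "@optique/core/modifiers";
import { option } from "@optique/core/primitives";
import { string } from "@optique/core/valueparser";

// optional() widens the type; every use site has to deal with undefined.
const loose = object({
  format: optional(option("-f", "--format", string())),
});
// inferred: { format: string | undefined }

// withDefault() keeps the type narrow; the default is applied during parsing.
const tight = object({
  format: withDefault(option("-f", "--format", string()), "json"),
});
// inferred: { format: string }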
Test your parser. Optique's core parsing functions work without process.argv, so you can unit test your parser logic:
import { parse } from "@optique/core/parser";
import assert from "node:assert/strict";
const result = parse(parser, ["--name", "Alice", "--count", "3"]);
if (result.success) {
assert.equal(result.value.name, "Alice");
assert.equal(result.value.count, 3);
}
This is especially valuable for complex parsers with many edge cases.
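Negative cases are worth covering too. Here is a sketch of one, assuming the result exposes the same success flag used above; the exact error payload is library-specific, so we only assert failure:
import { object } from "@optique/core/constructs";
import { option } from "@optique/core/primitives";
import { parse } from "@optique/core/parser";
import { string } from "@optique/core/valueparser";
import assert from "node:assert/strict";

const greeter = object({ name: option("-n", "--name", string()) });

// --name is required, so parsing an empty argument list must fail.
const missingName = parse(greeter, []);
assert.equal(missingName.success, false);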
We've covered the fundamentals, but Optique has more to offer:
path() for checking file existence, directory structure, and file extensions
merge() for sharing common options across subcommands
The @optique/temporal package for parsing dates and times using the Temporal API
Check out the documentation for the full picture. The tutorial walks through the concepts in more depth, and the cookbook has patterns for common scenarios.
Building CLIs in TypeScript doesn't have to mean fighting with types or writing endless runtime validation. Optique lets you express constraints in a way that TypeScript actually understands, so the compiler catches mistakes before they reach production.
The source is on GitHub, and packages are available on both npm and JSR.
Questions or feedback? Find me on the fediverse or open an issue on the GitHub repo.
@byulmaru@planet.moe
Hello! I'm running a survey, open to anyone who has used fediverse social networks such as Planet, to help develop better social networks and services for doujin creators. It runs until Sunday, January 11.
If you have any questions about the survey or suggestions for improving it, please feel free to reach out by mention or DM. Thank you.

@hongminhee@hollo.social
Why #Markdown's emphasis syntax (**) fails outside of Western languages: A deep dive into #CommonMark's “delimiter run” flaws and their impact on #CJK users.
A must-read for anyone interested in #internationalization and the future of Markdown:
https://hackers.pub/@yurume/019b912a-cc3b-7e45-9227-d08f0d1eafe8
@yurume@hackers.pub · Reply to 유루메 Yurume's post
As Markdown has become the standard for LLM outputs, we are now forced to witness a common and unsightly mess where Markdown emphasis markers (**) remain unrendered and exposed, as seen in the image. This is a chronic issue with the CommonMark specification---one that I once reported about ten years ago---but it has been left neglected without any solution to this day.
The technical details of the problem are as follows: In an effort to limit parsing complexity during the standardization process, CommonMark introduced the concept of "delimiter runs." These runs are assigned properties of being "left-flanking" or "right-flanking" (or both, or neither) depending on their position. According to these rules, a bolded segment must start with a left-flanking delimiter run and end with a right-flanking one. The crucial point is that whether a run is left- or right-flanking is determined solely by the immediate surrounding characters, without any consideration of the broader context. For instance, a left-flanking delimiter must be in the form of **<ordinary character>, <whitespace>**<punctuation>, or <punctuation>**<punctuation>. (Here, "ordinary character" refers to any character that is not whitespace or punctuation.) The first case is presumably intended to allow markers embedded within a word, like **마크다운**은, while the latter cases are meant to provide limited support for markers placed before punctuation, such as in 이 **"마크다운"** 형식은. The rules for right-flanking are identical, just in the opposite direction.
However, when you try to parse a string like **마크다운(Markdown)**은 using these rules, it fails because the closing ** is preceded by punctuation (a parenthesis) and it must be followed by whitespace or another punctuation mark to be considered right-flanking. Since it is followed by an ordinary letter (은), it is not recognized as right-flanking and thus fails to close the emphasis.
As explained in the CommonMark spec, the original intent of this rule was to support nested emphasis, like **this **way** of nesting**. Since users typically don't insert spaces inside emphasis markers (e.g., **word **), the spec attempts to resolve ambiguity by declaring that markers adjacent to whitespace can only function in a specific direction. However, in CJK (Chinese, Japanese, Korean) environments, spaces are either completely absent or (as in Korean) punctuation is commonly used within a word. Consequently, there are clear limits to inferring whether a delimiter is left- or right-flanking based on these rules. Even if we were to allow <ordinary character>**<punctuation> to be interpreted as left-flanking to accommodate cases like **마크다운(Markdown)**은, how would we handle something like このような**[状況](...)は**?
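A quick way to see this in practice is a sketch using the commonmark.js reference implementation (the npm package commonmark, assumed to be installed along with its type definitions):
import commonmark from "commonmark";

const parser = new commonmark.Parser();
const renderer = new commonmark.HtmlRenderer();

// The closing ** is preceded by ")" and followed by the letter 은, so it is not
// right-flanking and the emphasis never closes; the markers stay as literal text.
console.log(renderer.render(parser.parse("**마크다운(Markdown)**은")));
// <p>**마크다운(Markdown)**은</p>

// Drop the trailing letter and the same markers close as expected.
console.log(renderer.render(parser.parse("**마크다운(Markdown)**")));
// <p><strong>마크다운(Markdown)</strong></p>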
In my view, the utility of nested emphasis is marginal at best, while the frustration it causes in CJK environments is significant. Furthermore, because LLMs generate Markdown based on how people would actually use it---rather than strictly following the design intent of CommonMark---this latent inconvenience that users have long felt is now being brought directly to the surface.

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
When composing multiple parsers, if even one of them is asynchronous the combined result becomes asynchronous too. Expressing this at TypeScript's type level turned out to be surprisingly difficult. I wrote about the design process behind it in Optique.
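For illustration, a minimal sketch (not Optique's actual types) of the rule described above, where one asynchronous member makes the combined result a Promise:
// Combine two parser result types; if either is a Promise, the pair is a Promise.
type Combine<A, B> =
  [A] extends [Promise<unknown>] ? Promise<[Awaited<A>, Awaited<B>]> :
  [B] extends [Promise<unknown>] ? Promise<[Awaited<A>, Awaited<B>]> :
  [A, B];

type BothSync = Combine<number, string>;          // [number, string]
type OneAsync = Combine<number, Promise<string>>; // Promise<[number, string]>

// Type-level check: this compiles only because OneAsync is a Promise.
const example: OneAsync = Promise.resolve([1, "x"] as [number, string]);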
@mitsuhiko@hachyderm.io · Reply to FediThing :progress_pride:'s post
@FediThing @glyph LLMs are trained on human-created material in much the same way a person learns by reading books and then acting on what they've learned. They don't directly reproduce that material.
As I mentioned I strongly believe that broad sharing of knowledge is a net benefit to humanity. Questions of credit and attribution are a separate issue and to discuss them meaningfully, you first have to be clear about what you consider reasonable attribution in the first place.
You can take, for instance, the tankgame and then tell me which part should be attributed but isn't, and what you would be attributing it to.
On the "against the will": I want you to use the code I wrote, it's definitely not against my will that LLMs are trained on the code I wrote over the years.

@hongminhee@hollo.social
#Optique 0.9.0 is here!
This release brings #async/await support to #CLI parsers. Now you can validate input against external resources—databases, APIs, Git repositories—directly at parse time, with full #TypeScript type safety.
The new @optique/git package showcases this: validate branch names, tags, and commit SHAs against an actual Git repo, complete with shell completion suggestions.
Other highlights:
choice()
Fully backward compatible—your existing parsers work unchanged.

@hongminhee@hollo.social
My last salaried job was at a company that built blockchain technology. No, it wasn't for cryptocurrency. The goal was to use blockchain to create a fully peer-to-peer, decentralized game. I found it a technically interesting goal. I've always been fascinated by decentralized technologies, which is also why I'm drawn to ActivityPub. Another thing that attracted me was the promise that this technology would be implemented as 100% open source. I had always wanted to work on open source full-time, so I accepted the offer.
However, once I started working there, I found myself increasingly disappointed. The organization gradually filled up with so-called “crypto bros,” and the culture shifted toward prioritizing token price over technical achievement. I and a few close colleagues believed that introducing partial centralization to the fully decentralized system—whether to defend the token price or to rush a release—was not a “minor compromise” but a “major corruption.” The rest of the organization didn't see it that way.
One of the most painful things about being in that organization was the fact that the technology I was creating was not only unhelpful to society, but was actually harming the environment and society. At the time, I felt like I was working for a tobacco company—knowing that cigarettes harm people's health, yet turning a blind eye and doing the job anyway.
I'm no fan of cryptocurrency, but I still think blockchain has technically interesting aspects. However, blockchain has already become socially inseparable from cryptocurrency, and even if blockchain is technically interesting, there are very few domains where it's actually useful. Furthermore, the negative environmental impact of blockchain technology is a problem that must be solved for it to be taken seriously. In its current state, when I weigh the harm against the utility, I believe the harm overwhelmingly outweighs it.
Anyway, I have now completely said goodbye to blockchain technology. I feel at ease now that I don't have to live with that guilt anymore. I also came to realize that engineers must consider not only the technical interest of a technology but also its social impact. So for now, I want to focus on ActivityPub. I find it both technically interesting and socially meaningful!

@hongminhee@hollo.social · Reply to wakest ⁂'s post
@liaizon @thisismissem @2chanhaeng That sounds wonderful! I'd love to visit @offline. I'm happy to reprise the FOSDEM talk—having slides actually helps since my spoken English isn't perfect. 😅 I'm totally open to Q&A and casual chat afterwards, but I might be a bit slow in free-flowing conversation. As long as you're patient with me, I'd love to do it!

@hongminhee@hollo.social · Reply to wakest ⁂'s post
@liaizon @thisismissem Hi you two, I'm planning to stay in Berlin from the evening of February 2nd until the night of February 4th after FOSDEM 2026 is over! Would you be available to meet up? For your information, I'll be with ChanHaeng Lee (@2chanhaeng), one of the key contributors to the Fedify project.

@hongminhee@hollo.social · Reply to Elena Rossini on GoToSocial ⁂'s post
@elena Thanks!!

@hongminhee@hollo.social
Okay, I've finished the slides for my presentation at FOSDEM 2026. Of course, I'll probably keep fine-tuning them until the presentation day, but it's a weight off my shoulders. However, since I have to present in English, I need to practice delivering it in English every day from now until the event.
@mitchellh@hachyderm.io · Reply to Mitchell Hashimoto's post
I will repeat that I was not sitting back at all during those 6 hours. While agents were working, I was working, just on separate -- but related -- tasks. I know for a fact that I could not have completed this amount of work in 6 hours fully manually (based on the experience that I've written something like 30+ bindings to C libraries in the past decade, probably more).
@mitchellh@hachyderm.io
I wrote Zig bindings to quickjs-ng with 96% API coverage (~240 exported C decls) with unit tests, examples, and doc strings on all functions in less than 6 total hours with AI assistance. I never want to hear that AI isn't faster ever again. https://github.com/mitchellh/zig-quickjs-ng
This isn't slop. I worked for those 6 hours.
I was reviewing everything it outputted, updating my AGENTS.md to course correct future work, ensuring the output was idiomatic Zig, writing my own tests on the side to verify its work (while it worked), and more. My work was split across ~40 separate Amp threads (not one mega session, which doesn't work anyways unless you're orchestrating).
I have a ton of experience writing bindings to libraries for various languages, especially Zig. I have never achieved this much coverage in so little time with such high quality (e.g. test coverage). My usual approach is to bind just enough of the surface area to do my actual work and move on. This time I thought I'd draw the whole owl, because it's a new world. And I'm very happy with the result.
Anyone with experience writing bindings knows that you do some small surface area, then the rest of the coverage is annoying repetition. That's why I usually stopped. Well, LLMs/agents are really, really good at annoying repetition and pattern matching. So going from 5% API coverage to 95% is... cake.
There are probably some corners that are still kind of nasty, but I've been re-reviewing every line of code manually and there is nothing major. Definitely some areas that could just use a nicer Zig interface over the C API, but that's about it.
I plan on writing a longer form blog showcasing my threads, but you can at least see the final AGENTS.md I produced in the linked repo.
@parksb@silicon.moe
Of the two-part Modern Korea series on Korean art, "Part 2: Women-Minjung-Art" looks at the 1980s democratization movement, minjung art, and women artists. It's the story of people who fought against remarks like "No matter how well you do, it's all over once you're married off" and "If you want to join the movement, drop the music and art first." https://vod.kbs.co.kr/index.html?source=episode&sname=vod&stype=vod&program_code=T2025-0633&program_id=PS-2025239471-01-000

@hongminhee@hollo.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post
This time, I tried writing a prompt to draw an illustration of the mascots from the Mastodon, Lemmy, Fedify, Misskey, and Akkoma projects all getting along together.
@jdv_jazz@mastodon.nl
Art Blakey & The Jazz Messengers - Politely
#JazzDeVille #Jazz #NowPlaying #ArtBlakeyTheJazzMessengers

@hongminhee@hollo.social
Using Nano Banana Pro, I composited an image to make it look like the cute dinosaur from the Fedify logo was standing in front of the ULB (Université libre de Bruxelles) building in Brussels, where FOSDEM is held.