洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Damien de Lemeny's post

@ddelemeny @silverpill I relate to the immersion argument, and I think it's part of why I avoided machine translation for so long—not out of principle, but because the output wasn't worth learning from. Older MT between Korean and English produced something closer to a word-by-word skeleton than actual language. You couldn't look at it and think: oh, that's how a native speaker would put it. It was more like a scaffold you had to tear down before building anything.

LLMs are different enough that I've had to revise that instinct. The output is often genuinely idiomatic, and when I read a phrase that lands exactly right, there's a recognition that functions a lot like learning—the same feeling as encountering a sentence in a book and thinking: I'll remember that. I do find myself absorbing expressions that way, probably more than I would have expected.

That said, I think your point holds at the edges. For shorter writing I still work without assistance, partly for practical reasons and partly because I notice the difference when I don't. So I suspect I'm arriving at something similar to what you're describing, just from the other direction—using the tool for longer texts while trying to keep the muscle from atrophying entirely on shorter ones.

The dynamic you mention with German and Korean is interesting too. Your position with Korean mirrors my concern about English; I imagine the lack of immersion shapes the experience in ways that are hard to compensate for with tools alone.

Damien de Lemeny

@ddelemeny@mastodon.xyz · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post

@hongminhee @silverpill Thank you for replying with care; your POV is really interesting.
Have you read the Reg's article about semantic ablation that was shared around some time ago?

theregister.com/2026/02/16/sem

This may be part of why other commenters allegedly found a negative difference in your MT-assisted writing, and it's a concern I have about building fluent writing skills: MT (and autocomplete) collapse possibilities into an average that's probably correct enough, but also probably low-entropy.
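
A minimal sketch of what I mean by "low entropy", with made-up numbers (nothing from the article): a model's next-token distribution has some Shannon entropy, and conservative decoding (low temperature, or plain argmax like autocomplete) sharpens that distribution toward a single "average" choice.

```python
# Toy illustration: conservative decoding collapses a next-token
# distribution, reducing the variety (entropy) of possible phrasings.
import math

def entropy(p):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(x * math.log2(x) for x in p if x > 0)

def soften(p, temperature):
    """Rescale a distribution by a sampling temperature (softmax of log-probs / T)."""
    logits = [math.log(x) / temperature for x in p]
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

# Hypothetical next-word distribution for "The weather is ___":
# nice, fine, lovely, grim, mercurial, biblical
p = [0.40, 0.25, 0.15, 0.10, 0.06, 0.04]

print(f"T=1.0 (model as-is):   {entropy(p):.2f} bits")              # ~2.20 bits
print(f"T=0.5 (conservative):  {entropy(soften(p, 0.5)):.2f} bits")  # ~1.54 bits: fewer live options
print("T->0  (greedy/argmax):  0.00 bits")                           # one outcome, no variety
```

The rarer, more characterful phrasings are exactly the ones the collapse prunes first.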