洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Marcus Rohrmoser 🌻's post

@mro Hi, thanks for the sharp analogy! The X-ray/asbestos comparison is a classic way to view the risks of new tech.

However, my argument for “socialization” stems from the belief that LLMs are a significant productive force. If we view them as “asbestos,” the logical step is a total ban. But if we see them as a “utility” (like electricity), the current corporate monopoly is the real poison.

In a historical materialism framework, the “toxic” side effects we see today—like reckless resource consumption or data exploitation—are often driven by the capitalist mode of production (profit-first scaling). By “liberating” or socializing the material basis of AI, we gain the democratic power to regulate its use and minimize those downsides, turning it into a true public good rather than a corporate hazard.

Marcus Rohrmoser 🌻

@mro@digitalcourage.social · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post

Hi @hongminhee,
X-ray wasn't banned from the beginning, nor was asbestos. Time told. So it may with LLMs.
As to how productive they are: the data basis so far is too narrow to tell, IMO. Some say yes, others no. Recently a study claimed devs feel +20% but are in fact -20%.
I have the notion that the L in LLM fits the B in Big IT quite well.

We have to re-focus from the means to the ends: what goals we accomplish, not how much software we engage with.