洪 民憙 (Hong Minhee) :nonbinary:

@hongminhee@hollo.social · Reply to Hypolite Petovan's post

@hypolite The Jevons point is well taken, but I think the assumption that model size and model capability move in lockstep is already starting to break down. Smaller, more specialized models have been closing the gap with frontier models faster than most people expected a couple of years ago. That doesn't settle the question, but it does suggest the trajectory isn't as fixed as the current “bigger is better” trend implies.

On IP: I think that argument assumes the legal and social framework around training data is static. If a public foundation model existed, the question of how its training data was collected would be negotiated very differently—with public accountability, with legislative pressure, with the possibility of opt-in or compensated datasets. The current situation is partly so bad because private actors made unilateral decisions with no one to answer to. That changes when the entity doing the training is public.

But honestly, what strikes me most about your last message is that we may be closer in position than the argument suggests. You're saying rejection is symbolic, but useful as a social signal that could hasten the collapse of the current unsustainable model. I'm not sure I disagree with that. Where we differ, I think, is in what we expect to find in the ruins. You seem to expect something more modest and less harmful to emerge on its own. I'm less confident about that—I think what fills the vacuum depends heavily on what political and social structures we've built in the meantime. Which is, I suppose, exactly why I think the direction of reclamation matters now, even if the specific path is still unclear.

Hypolite Petovan

@hypolite@friendica.mrpetovan.com · Reply to 洪 民憙 (Hong Minhee) :nonbinary:'s post

@hongminhee I purposely didn't say what I expect to find in the ruins of the AI bubble, because whatever I predicted would probably be wrong. But I don't think anything more harmful could emerge from the ashes of the current LLMs, which were originally built to create God. Thankfully we've moved away from that hubris, and I struggle to think of a worse design requirement for any computer system.

The other issue I see is that the political and social structures you mentioned, the ones needed to bring ethical LLMs into being, can't be built right now because the current LLM systems are sucking all the oxygen out of the room. So the outcome of the bubble burst is up in the air, and the people who made billions peddling the current AI systems won't go quietly into the night.