@hongminhee @jnkrtech Yes! These are fantastic, and exactly why I've asked for a *left-wing* analysis of this question. The right-wing/liberal pro-AI takes are dogshit, either blindly following the hype or idealising the corporations and billionaires who own this tech. I wanted to see an analysis that starts off from a place of caution and looks at ways of reclaiming this tech, rather than accepting it as a social good a priori. And I'm not a Marxist, but I can appreciate their commitment to material analysis.
I really like your idea of evolving new F/OSS licenses specifically for training (and your historical analysis of how the previous licenses evolved); I hadn't considered that angle. I think it can be a good stopgap while legislation/regulation catches up. I'm also all-in on small, publicly owned models as an alternative to the frontier ones.
I think there's a lot of food for thought here, and from your second post I can see you've already gotten feedback from a more political/activist angle. So I'm going to dig into the aspect most conceptually interesting to me, which is the capability of the tech itself.
I'm a PL theorist and I work at an independent research lab that I co-founded with my friends. So I'm in the same boat as you when it comes to using these tools - I'm free to use them to replace the menial parts of my job, and the code I actually care about, I write myself. But until recently, these tools couldn't even produce the kind of abstract code that I work on. Now... they can. Which is fascinating in its own way.