It seems that most people on this site believe this is a good thing, but all this restriction would mean is that, for the foreseeable future, the only companies able to afford mass licensing would be in the S&P 500 - and that's assuming these companies wouldn't just flock to a nation outside of America's influence.
At some point, it becomes a national security issue. This technology is going to be leveraged in ways we can't even dream up today. Copyright law needs to be re-imagined in a way that won't restrict advancement in AI and AI-adjacent technology. It's not because we want to - it's because we have to.
It's not that hard. If you want to ask questions about or work with a Stephen King book, you rent it for the duration of your LLM session. OpenAI would take a small fee, the author would get the majority, and the user gets value. You don't have to be a billion-dollar company to set up a monetization structure like that. Startups could do it if they negotiated with authors.
For general questions, you can use the free wiki that's ingested into the LLM or pay a fee for general content like current events.
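To make that concrete, here's a minimal sketch of how such a per-session rental split could work. Every number here is an assumption - the comment above only says the platform takes a small fee and the author gets the majority:

    # Hypothetical per-session rental split. The shares and the price
    # are illustrative assumptions, not anything any platform actually does.
    AUTHOR_SHARE = 0.80    # author gets the majority (assumed)
    PLATFORM_SHARE = 0.20  # platform's small fee (assumed)

    def settle(session_price: float) -> dict[str, float]:
        """Divide one session's rental price between author and platform."""
        return {
            "author": round(session_price * AUTHOR_SHARE, 2),
            "platform": round(session_price * PLATFORM_SHARE, 2),
        }

    # Renting a book for one LLM session at an assumed $0.50:
    print(settle(0.50))  # {'author': 0.4, 'platform': 0.1}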
You keep the LLM free in the third world out of necessity. OpenAI, in the first world, cannot ask to be treated as if it were a third-world company, because we are too rich to be that ridiculous.
When Roger Bacon discovered what gunpowder was capable of, he kept it to himself - he thought that once the poor knew how to make gunpowder, they would make weapons to destroy those in power.
We cannot let that happen with AI technology, and it is a very difficult conversation when we're talking about technology that has likely already replaced hundreds of thousands of jobs by extending how much productivity a single individual can produce.
To you, this is a moral issue, and one I absolutely agree with at its core. But this technology, in my opinion, has the risk of eventually triggering a form of social stratification. The focus should be on keeping the technology ubiquitous, accessible, and unrestricted.
> But this technology, in my opinion, has the risk of eventually triggering a form of social stratification. The focus should be on keeping the technology ubiquitous, accessible, and unrestricted.
But this is exactly what proposals like the one you’re responding to are trying to do. Ignoring the morality, this is an economic issue. Massive economic value is potentially going to be created by stealing from individuals. Why can’t they get small kickbacks? Why must their contribution go completely without remuneration for us to stand a chance of “winning a war” or keeping this technology accessible?
You're right. If there are methods to get creators paid while ensuring unfettered access to all, it absolutely should happen. The legal system in America doesn't have a good track record of nuance, especially when nuance is necessary. My views come from the idea that the American legal system will either smite these companies into bankruptcy, or give them the precedent they need to exempt past violations and carry on as usual.
These comments made me realize my viewpoints on this issue are heavily based on the American legal system being very binary, with the majority of tech companies going all or nothing: appeal your way up to the Supreme Court, and pray for the "all."
In this case, it feels like the two most likely outcomes both hurt us.
It isn't in our nature at all - on the contrary. It is only when that knowledge is useful for a strategic purpose, like an economic advantage, and that is the exception.
At some point, we have to look at this pragmatically. To me, it's not about FAANGs getting over on the everyman; it's about making sure we maintain the opportunity to play on the same field, with the same resources.
But why must it be free? Immense amounts of money are being thrown around, and at the first suggestion that maybe the thing that underpins their work should be paid for, they say it’s infeasible. If you listen to Altman, the future is going to be infinite. Why can’t we pay authors for their books in that case?
You may not have found ways to make AI work for you in your workflows, but millions of others have. It's not perfect, but it's useful to everyone I know who has made a meaningful attempt to experiment, discarding the bad and integrating the good.
I call XY on this. The problem is inherent in LLMs, and the solution is something else altogether - not just allowing companies to ignore the law and lobby to change said law after the fact.