This is not a result of income inequality. This is a result of measures intended to reduce income inequality, which of course almost universally make things worse.
The other lending that the US (and Western governments in general) massively subsidizes is education. And look at what has happened to the cost of housing and education: both have massively exceeded general rates of inflation.
People really need to stop glossing over the very real differences between state-controlled media and media that you think is aligned with a certain political group.
You can believe Fox News is the worst entity in human history, but Fox News is not RT.
It technically works with enough data, but it's pretty inefficient compared to RAG for injecting knowledge. However, changing behavior via prompting/RAG is harder than changing it via finetuning; the two are useful for different purposes.
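For anyone who hasn't seen the RAG side spelled out, here's a minimal sketch. Everything in it is illustrative: `embed` is a toy stand-in for a real embedding model, and the corpus is hardcoded.

    # Minimal RAG sketch: retrieve the most relevant document and prepend it
    # to the prompt. `embed` is a toy stand-in for a real embedding model.
    import math

    def embed(text):
        # Hypothetical embedding: real systems call an embedding model here.
        return [text.lower().count(w) for w in ("refund", "shipping", "warranty")]

    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    corpus = [
        "Refunds are issued within 14 days of purchase.",
        "Standard shipping takes 3-5 business days.",
    ]

    def build_prompt(question):
        best = max(corpus, key=lambda doc: cosine(embed(doc), embed(question)))
        # The knowledge lives in the retrieved context, not in the weights, so
        # updating it is an index change rather than a training run.
        return f"Context: {best}\n\nQuestion: {question}"

    print(build_prompt("How long do refunds take?"))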
Because models are getting much better every couple months, I wonder if getting too attached to a process built around one in particular is a bad idea.
I would agree if Windows 2000 had the exact same APIs as the next version, but it doesn't. LLMs are text in -> text out, and you can drop in a new LLM as a replacement without changing anything else. If anything, newer LLMs will just have more capabilities.
> LLMs are text in -> text out, and you can drop in a new LLM as a replacement without changing anything else. If anything, newer LLMs will just have more capabilities.
I don't mean to be too pointed here, but it doesn't sound like you have built anything at scale with LLMs. They are absolutely not plug-and-play from a behavior perspective. Yes, there is API compatibility (text in, text out), but that is not what matters.
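To make the disagreement concrete: at the API level a model swap really is a one-line change, but that says nothing about behavior. A sketch assuming an OpenAI-style chat client (the model names below are hypothetical):

    # Assuming an OpenAI-style SDK; model names are hypothetical.
    from openai import OpenAI

    client = OpenAI()

    def ask(model, prompt):
        resp = client.chat.completions.create(
            model=model,  # the only thing that changes in a "drop-in" swap
            messages=[{"role": "user", "content": prompt}],
        )
        return resp.choices[0].message.content

    # API-compatible, yes -- but outputs can differ in tone, format, and
    # instruction-following, which is why swaps need behavioral regression tests.
    for model in ("old-model", "new-model"):
        print(model, ask(model, "Reply with exactly the word OK."))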
Even frontier SOTA models have their own quirks and specialties.
A simple example: when models get better at following instructions, the frantic and somewhat insane-sounding exhortations required to get the crappier model to comply can make the stronger model too literal and inflexible.
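Concretely (the prompts here are made up, but the pattern is common):

    # Illustrative only: a prompt tuned for a weaker model, full of emphatic
    # guardrails it needed in order to comply at all.
    legacy_prompt = (
        "IMPORTANT!!! You MUST ALWAYS answer in JSON. NEVER add prose. "
        "If you are unsure, STILL output JSON. DO NOT apologize."
    )
    # A newer, more literal model may follow this to a fault -- emitting JSON
    # even when the user explicitly asks for plain text -- so the prompt often
    # has to be re-tuned down to something plainer.
    tuned_prompt = "Answer in JSON unless the user asks for another format."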
A deterministic algorithm can still be unpredictable in a sense. In the extreme case, a procedural generator (like in Minecraft) is deterministic given a seed, but you will still have trouble predicting what you get if you change the seed, because internally it uses a (pseudo-)random number generator.
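A few lines of Python make the point: the output is a pure function of the seed, yet adjacent seeds look unrelated.

    import random

    def terrain(seed, width=8):
        rng = random.Random(seed)  # output is fully determined by the seed
        return "".join(rng.choice("~.^") for _ in range(width))

    print(terrain(42))  # identical on every run...
    print(terrain(43))  # ...but seed 42's output tells you nothing about 43's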
So there’s still the question of how controllable the LLM really is. If you change a prompt slightly, how unpredictable is the change? That can’t be tested with one prompt.
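One sketch of what such a test could look like: perturb the prompt slightly, run each variant, and see how much the output moves. `ask` is a stub so the sketch runs offline; in practice it would be a real model call, and the comparison would use semantic similarity rather than string equality.

    def ask(prompt):
        # Stand-in for any text-in -> text-out model call.
        return prompt.strip().upper()

    base = "Summarize the ticket in one sentence."
    variants = [
        base,
        base.replace("one sentence", "a single sentence"),
        base + " Be concise.",
    ]
    outputs = [ask(v) for v in variants]
    # Crude stability signal: how many distinct answers do near-identical
    # prompts produce?
    print(f"{len(set(outputs))} distinct outputs from {len(variants)} variants")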
There used to be an append-only mode; they've removed it and now suggest using a credential that has no 'delete' permission. The question asked here is whether that protects against data being overwritten rather than just deleted.
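For what it's worth, under S3-style semantics (assuming the service here behaves similarly, which the thread doesn't confirm), the answer would be no: an overwrite is a put, not a delete, so a credential without delete permission can still clobber data unless versioning retains prior copies.

    # Assuming S3-style semantics; boto3 shown purely for illustration, since
    # the actual service discussed above is not named. PutObject on an existing
    # key replaces its contents -- no delete permission required.
    import boto3

    s3 = boto3.client("s3")
    s3.put_object(Bucket="backups", Key="db.dump", Body=b"attacker data")
    # With bucket versioning ON, the old bytes survive as a prior version.
    # With it OFF, they're gone, even though this key can't delete anything.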