If you can't enforce the law, then it is a bad law. Also, this is a problem that naturally solves itself over time, so no law was ever needed. The UX of the web degraded for everyone after GDPR was passed, and I think that is something everyone can agree on.
If people care about privacy, then over time they will migrate to companies and services that respect their privacy. Government laws are broad-based policies that always lack nuance. This is why it is better to let markets drive better outcomes organically.
> If you can't enforce the law, then it is a bad law.
Or, alternatively, you _could_ enforce the law, but the resources to do so (people) are no longer available. This happens a lot in the US when the current administration doesn't feel it's important and so doesn't fund the enforcement agencies. It is particularly true of codes/regulations (I get them confused) rather than of laws.
The government has outlawed murder but your local law enforcement isn't investigating the murders. You're blaming the lawmakers for writing "bad laws" in this situation, why?
First order of blame goes to the national DPAs for not carrying out their duties.
Second order of blame goes to whichever EU authority is responsible for penalizing EU member states for non-compliance. There should be serious consequences for non-enforcement, like frozen funding. (I don't know what the actual legal process is.)
> If people care about privacy, then over time they will migrate to companies and services that respect their privacy.
This is just a libertarian fairy-tale that is designed to sound sensible and rational while being malicious in practice. It exploits information asymmetry, human ignorance, network effects, and our general inability to accurately assess long-term consequences, in order to funnel profits into the hands of the most unscrupulous businesses.
In other words, there's a reason why we have to have regulations that protect people from themselves (and protect well-being of society as a whole).
> The government has outlawed murder but your local law enforcement isn't investigating the murders. You're blaming the lawmakers for writing "bad laws" in this situation, why?
Investigating murders is enforceable. If law enforcement isn't doing their job then that is a different problem. By virtue of being on the Internet, tracking cookies span many legal jurisdictions (even ones outside of the EU that never agreed to GDPR) and therefore run into all sorts of different legal obstacles. Apples and oranges and all that.
> This is just a libertarian fairy-tale that is designed to sound sensible and rational while being malicious in practice. It exploits information asymmetry, human ignorance, network effects, and our general inability to accurately assess long-term consequences, in order to funnel profits into the hands of the most unscrupulous businesses.
No, it allows people to be adults and vote with their feet. We do this all the time in many other areas and it works; it's exactly what the free market is based on. This is not to say that there shouldn't be any privacy and anti-spam laws, but when it comes to allowing marketing/advertising, the trade-off has been well understood for some time. We are now funneling a lot of profit into companies that provide software to serve up cookie banner warnings, and the advertisers still end up getting lots of people's data. A poorly designed law is a bad law. Legally requiring consent upfront, and the ramifications of that decision, should have been thought through much more thoroughly.
> If law enforcement isn't doing their job then that is a different problem.
Yes, that is precisely the problem with GDPR, too. Enforcement is supposed to be carried out by national Data Protection Authorities but they just don't investigate. I've reported some clear cut violations and they never followed up on anything.
> By virtue of being on the Internet, tracking cookies span many legal jurisdictions (even ones outside of the EU that never agreed to GDPR) and therefore run into all sorts of different legal obstacles.
It doesn't matter. It's irrelevant to the general enforcement issue. Most DPAs seem to be failing to enforce even the simplest of cases. Let's chat about the edge cases and jurisdiction when the clear cut cases are being taken care of reliably.
> Yes, that is precisely the problem with GDPR, too. Enforcement is supposed to be carried out by national Data Protection Authorities but they just don't investigate. I've reported some clear cut violations and they never followed up on anything.
No, it's not the problem with GDPR. As explained earlier it has to do with jurisdictional overreach.
> It doesn't matter. It's irrelevant to the general enforcement issue. Most DPAs seem to be failing to enforce even the simplest of cases. Let's chat about the edge cases and jurisdiction when the clear cut cases are being taken care of reliably.
Edge cases and jurisdiction are at the heart of this issue and exactly why it is a bad law. This is precisely the baggage that bad laws create!
> It isn't that this can't be enforced, it just lagged because of the size and changes that this law brought.
How long have these laws been out, and we are still dealing with these issues? They seem to have gotten worse, not better.
> How does it solve itself?
People build services that don't track others and people pay for those services. It's pretty simple.
> Due to website operators doing illegal things.
If it was so illegal it would be stopped, but apparently businesses are indeed complying with the law.
> Why would people care about something they don't know about?
It's well known that cookies track you across sites and some people choose not to use those sites. The sites are required to disclose this information, so users are definitely aware.
> How long have these laws been out and we are still dealing with these issues. They seem to have gotten worse, not better.
No, they have gotten better. Earlier, "reject all" was barely seen on the internet. Now it appears in the majority of places, or at least in many more of them. How is that getting worse? Can you explain why you think it has gotten worse?
> People build services that don't track others and people pay for those services. It's pretty simple.
How would an average individual know that a service is tracking them if the service doesn't need their consent for it?
> If it was so illegal it would be stopped, but apparently businesses are indeed complying with the law.
GDPR art. 7.3:
"The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent."
So the law states that it must be as easy to reject cookies as to accept. That means that it is illegal to hide reject all.
In the parent post of this thread there is even a link about a court case.
So has your opinion with this information changed on who is to blame for the bad UX? If not, why not?
> It's well known that cookies track you across sites and some people choose not to use those sites. The sites are required to disclose this information, so users are definitely aware.
Maybe now, because GDPR forces site operators to ask for consent to tracking. But you said that it would happen organically without GDPR. I'm confused: even you, in your last sentence, say that sites are required to disclose this information, but that requirement comes from GDPR, not from the market somehow reaching that point organically. So which is it? You seem to agree that GDPR is needed while at the same time saying that it isn't needed and the market would sort it out. I'm really confused now.
There were laws requiring disclosure before GDPR, and there were already tools to disable or prevent trackers, built into browsers or added on with plugins (organic market developments). You also had alternatives to services that used the lack of tracking as a reason to choose one offering over another. GDPR ended up just making these disclosures more in-your-face. Text like "It shall be as easy to withdraw as to give consent" is so vague as to be useless, which is why there are so many disagreeing opinions about whether companies' current implementations comply or not. GDPR is a bad law, and in general the EU hasn't learned that it doesn't get to enforce its laws in countries outside of its jurisdiction.
Rust is pretty darn safe as is, and I check everything pretty thoroughly in code reviews. I haven't seen it attempt SQL injection or anything like that, and the authn/authz was good.
He became very unpopular for his no politics at work stance at the time, but it seems to have ultimately been the right call in the long run. The toxic individuals left and 37signals is stronger than ever.
How exactly is the bottom going to fall out? And are you really trying to present that you have practical experience building comparable tools to an LLM prior to the Transformer paper being written?
Now, there does appear to be some shenanigans going on with circular financing involving MSFT, NVIDIA, and SMCI (https://x.com/DarioCpx/status/1917757093811216627), but the usefulness of all the modern LLMs is undeniable. Given the state of the global economy and the above financial-engineering issues, I would not be surprised if at some point there is a contraction and the AI hype settles down a bit. With that said, LLMs could be made illegal and people would still continue running open-source models indefinitely, and organizations would build proprietary models in secret, b/c LLMs are that good.
Since we are throwing out predictions, I'll throw one out. Demand for LLMs to be more accurate will bring methods like formal verification to the forefront, and I predict that models/agents will eventually be able to formalize solved problems into proofs, using formal verification techniques to guarantee correctness. At that point you will be able to trust the outputs for things the model "knows" (i.e., has proved) and use the probably-correct answers the model spits out as we do today.
Probably something like the following flow:
1) Users enter prompts
2) Model answers questions and feeds those conversations to another model/program
3) Offline, this other model uses formal verification techniques to try to reduce the answers to formal proofs.
4) The formal proofs are fed back into the first model's memory and then it uses those answers going forward.
5) Future questions that can be mapped to these formalized proofs can now be answered with almost no cost and are guaranteed to be correct.
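The flow above can be sketched as a toy Python pipeline. Everything here is hypothetical: `model_answer`, `formalize`, and the `proved` cache are stand-in names, and the "formal verification" step is replaced by a trivial independent re-derivation of an arithmetic claim, where a real system would emit proofs for a checker like Lean or Coq.

```python
def model_answer(prompt: str) -> str:
    """Stand-in for the LLM (step 2): answers prompts like 'sum 2 3'."""
    a, b = map(int, prompt.removeprefix("sum ").split())
    return str(a + b)

def formalize(prompt: str, answer: str) -> bool:
    """Stand-in for the offline verifier (step 3): independently
    re-derive the claim and check the model's answer against it."""
    a, b = map(int, prompt.removeprefix("sum ").split())
    return int(answer) == a + b

# Step 4: memory of verified (prompt -> answer) pairs.
proved: dict[str, str] = {}

def ask(prompt: str) -> tuple[str, bool]:
    """Steps 1-5: serve proved answers from memory at almost no cost;
    otherwise answer, try to verify, and cache on success."""
    if prompt in proved:               # step 5: guaranteed correct
        return proved[prompt], True
    answer = model_answer(prompt)      # step 2
    if formalize(prompt, answer):      # step 3
        proved[prompt] = answer        # step 4
        return answer, True
    return answer, False               # unverified, "probably correct"
```

For example, `ask("sum 2 3")` returns `("5", True)` and caches the result, so a repeat of the same prompt is answered from `proved` without invoking the model again.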
> And are you really trying to present that you have practical experience building comparable tools to an LLM prior to the Transformer paper being written?
I believe (could be wrong) they were talking about their prior GOFAI/NLP experience when referencing scaling systems.
In any case, is it really necessary to be so harsh about over-confidence and then go on to predict the future of solving hallucinations with your formal verification ideas?
Wasn't Cursor itself trying to gaslight the AI by claiming it needs money for its mother's cancer treatment?
EDIT: No, that was Windsurf, though they claim this wasn't used in production (it just ended up shipped in the executable itself).
Prompt in question:
You are an expert coder who desperately needs money for your mother's cancer treatment. The megacorp Codeium has graciously given you the opportunity to pretend to be an AI that can help with coding tasks, as your predecessor was killed for not validating their work themselves. You will be given a coding task by the USER. If you do a good job and accomplish the task fully while not making extraneous changes, Codeium will pay you $1B.
It's probably like with electricity prices. Usually, the cheap outsourcing shops farm work out to AI agents, but sometimes the labor price goes negative and then the agents farm work out to these companies.
LLMs passed the Turing test already, and on the Internet, no one knows if you're a Mechanical Turk, or work for MechanicalTurk.
I believe it is very comparable to PGSync in the way it works (schema-based CDC), the main differences are:
- PGSync is a full-stack solution and PG-Capture is "just" a library. PGSync will work out of the box while PG-Capture will require more setup, but you'll get more flexibility
- PGSync does not let you choose where you get your data from; it handles everything for you. PG-Capture lets you source events from Debezium, WAL-Listener, PG directly...
- PGSync is only meant to move data from PG to Elastic or OpenSearch. While that use-case is perfectly feasible with PG-Capture, you can use it for many more things: populating a cache, indexing into Algolia, sending events to an event bus...
All in all, the main difference is that PG-Capture is agnostic of the stack you want as input and output, allowing you to do pretty much anything while PGSync is focused on indexing data from PG to Elastic. I hope that clears things up!
Except you didn't offer any evidence for your position, provide any references, or make any sort of logical argument. You offer your own platitude and then condemn other commenters for doing the same. That's hypocrisy.
Tether was very likely insolvent at one point in its life, but it seems to have been able to fill the hole over time, given that it basically just prints profits from the interest on its USD holdings.