I recently saw this: technical subjects that previously had zero books written about them now have entire pages filled with books. The titles sound good, and the pages look decently good, but there's something slightly off, and when you look closer the "author" has been writing a book a week...
It's disheartening because now I will lean much more on reputable publishers, and so filter out independent writers who have nothing to do with this.
I don't think that's necessarily true, but the big problem is that discoverability is almost impossible, and the investment needed to know how good a book might be is much higher than for other forms of media. It's also why you might get more out of books: you have to make some effort to ingest them. But that effort is a problem if you have no idea how good the book might be.
For games, Steam offers a full refund if you request it within two hours of playtime. It's a great feature for consumer protection imho.
I'd like to try a chapter or two of a book first, and if it doesn't grab me, get a full refund. That's how you prevent sinking time (and money, presumably) into a bad book.
I don't know where the divide comes from (cultural, generational, social class, or something else) but the idea of thinking "I want to get my money back" for something like a book, music, or a video game is strange to me.
Sometimes I make bad purchases, and that's just too bad.
The more that media becomes a product, the harder it is to feel like you're conning an artist by getting a refund on a purchase.
It's gotten incredibly easy to put media out there, and it's great that people are able to tell the stories they want through the medium they want. At the risk of sounding like I'm just bootlicking, traditional outlets used to be able to filter out some of the more low-effort content and it was easier to expect that you were at least getting mediocre stuff. At this point, a lot of really low effort and low quality junk is in the ecosystem and it's harder to just buy something that looks cool.
In all reality, I've eaten larger purchases as losses than some dumb $20 steam game or ebook or whatever. I just don't think that people are terribly unreasonable if they feel burnt badly enough to press for a refund. It's never been easier to do the old "if I can get x number of people to give me $5 each..." bit
Compounded by the fact that reviews, awards, and any institution that formerly served to find good and worthy books or movies seem to have become completely detached from genuine popular interest and quality.
A lot of people will dismiss this with the usual AI complaints. I suspect they never did real research. Getting into a paper can be a really long endeavor. The notation might not be entirely self-contained, or might be used in an alien or confusing way. Finally getting into it might reveal that the results in the paper are not applicable to your own work, a point that is often obscured intentionally to make it to publication.
Lowering the investment needed to understand a specific paper could really help you focus on the most relevant results, to which you can dedicate your full resources.
Although, as of now I tend to favor approaches that only summarize rather than produce "active systems" -- with the approximate nature of LLMs, every step should be properly human-reviewed. So it's not clear what signal you can take from such an AI approach to a paper.
Related, a few days ago: "Show HN: Asxiv.org – Ask ArXiv papers questions through chat"
In the last 6 months, I've had to buy a few things that "normal people" tend to buy (a coffee machine, fuel, ...), for which I didn't already have trusted sellers, and so I checked Google.
For fuel, Google results were 90% scams; for coffee machines, closer to 75%.
The scams are fairly elaborate: they clone legitimate-looking sites, then offer very competitive prices -- between 50% and 75% of market price -- that put them at the top of SEO. It's only by looking in detail at the contact information that you notice some things that look off (one common tell is that they may push bank transfers, since there's no buyer protection there, but that's not always the case).
75% of market rate is not a crazy "too good to be true" offer; it's in the realm of what a legitimate business could do, and with items priced in the thousands, any hooked victim is a good catch.
A particular example was a website copying the one for a massive discount appliance store chain in the Netherlands.
They had a close domain name, and even though the website looked different, any Google search associated it with the legitimate business.
You really have to apply a high level of scrutiny, or understand that Google is basically a scam registry.
Scammers can outbid real stores on the same products for advertising space simply because they have much better margins. And Google really doesn't care whether it's a scammer or a legitimate business paying them; they do zero due diligence on the targets of the advertising.
Parent says it's an outlandish claim that they can reliably tell whether ads are clickbait.
I believe that detecting whether an ad is clickbait is a similar problem -- not exactly the same, but it suffers from the same issues:
- it's not well defined at all
- any heuristic is constantly gamed by bad actors
- it requires a deeper, contextual analysis of the content that is served
- content analysis requires a notion of what is reputable or reasonable
If I ask an LLM for a definition of "clickbait", I get "sensationalized, misleading, or exaggerated headlines"; so scams would be a subset of it (misleading content that you need to click through). They do not provide their own definition, though.
So you have Google products (both the Products search and the general search) that recommend scams at an incredible rate, where the stakes are much higher. Is it reasonable to believe they're able to solve the general problem? How can anyone verify such a claim, or trust it?
I believe the main problem could be reframed as an improper use of analogies.
People are pushing the analogy between "artificial intelligence" and the "brain", etc., creating a confusion that leads to such "laws".
What we have is a situation similar to birds and planes: they do not operate under the same principles at all.
Looking at the original claim, we can take from birds a number of optimizations regarding airflow that are far beyond what any plane can do.
But the impact that could be transferred to planes would be minimal compared to a boost in engine technology.
Which is not surprising since the way both systems achieve "flight" are completely different.
I don't believe such discourse would happen at all if it were just considered a collection of techniques, of different categories with their own strengths and weaknesses, used to tackle problems.
Like all fake "laws", it is based on a general idea that is devoid of any time-frame prediction that would make it falsifiable.
In "the short term" is beaten by "in the long run". How far is "the long run"?
This is like the "mean reversion law", saying that prices will "eventually" go back to their equilibrium; will you survive until "eventually" without going bankrupt?
>> People are pushing the analogy between "artificial intelligence" and the "brain", etc., creating a confusion that leads to such "laws". What we have is a situation similar to birds and planes: they do not operate under the same principles at all.
But modern neural nets ARE based on biological neurons. It's not a perfect match by any means, but synaptic "weights" are very much equivalent to model weights. Model structure and size are bigger differences.
Serious question: I'm currently re-evaluating if Cursor can speed up my daily work. Currently it is not really the case because of the many subtle errors (like switching a ":" for a ","). But mostly the problem I face is that the code base is big, with entirely outdated parts and poorly coded ones. So the AI favors the most common patterns, which are the bad ones. Even with basic instructions like "take inspiration from <part of the code that is very similar and well-written>" it still mostly takes from the overall codebase (which, by the way, was worsened by a big chunk of vibe-coded output that was hastily merged). My understanding is that a rule should essentially do the same as if it is put in the prompt directly. Is there a solution to that?
I recently switched to Claude Code and much prefer it (I end up in fewer cycles of Cursor getting stuck on problems). Before that I used Cursor for some months.
> My understanding is that a rule should essentially do the same as if it is put in the prompt directly. Is there a solution to that?
Yes, from my understanding Cursor rule files are essentially an invisible prefix to every prompt. I had some issues in the past with Cursor not picking up rule files until I restarted it (some glitch, probably gone by now). So put something simple like a "version" in your rules file and ask what version of the rules you're following for this conversation, just to validate that the process is working.
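A minimal sketch of that version trick, assuming a hypothetical rule file (the path, frontmatter, and contents here are illustrative, not taken from Cursor's docs):

```markdown
---
description: Project-wide conventions
alwaysApply: true
---

Rules version: 3

- Prefer the patterns in the newer, well-written parts of the codebase.
- Never swap invalid parameters silently; raise an error instead.
```

Then in a fresh chat you ask "what version of the rules are we following?" and expect "3" back; if the model can't answer, the rules are not actually being injected into the prompt.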
For Cursor with larger projects I use a set of larger rule files that always apply. Recently I worked with Spotify's Backstage, for example, and I had it index online documentation on architecture, build instructions, design, development of certain components, and project layout. Easily 500+ lines worth of markdown. I tell Cursor where to look, i.e. online documentation of the libraries you use, reference implementations if you have any, good code examples and why they are good, and then it writes its own rule files -- I don't write them manually anymore. That has been working really well for me. If you have a common technology stack or way of working you can also try throwing in some examples from https://github.com/PatrickJS/awesome-cursorrules
For a codebase containing both good and bad code: maybe you can point it to a past change where code was refactored from bad to good, so it can write out why you prefer which style and how to manage the migration from bad to good. That said, the tools are not perfect. Even with rules the bad output can still happen, but larger rule files describing what you'd like to do and what to avoid make the chance significantly smaller and the tool more pleasant to work with. I recently switched to Claude Code because Cursor tended to get "stuck" on the same problem, which I don't really experience with Claude Code, but YMMV.
So, I was wondering when I would see that... from my experience, I would say it also turns mediocre developers into bad ones very fast. Part of the reason is a false sense of confidence, but mostly it's the sheer volume of code produced.
If we want to be more precise, I think the main issue is that AI-generated code lacks a clear architecture. It has no (or very little) respect for the overall information flow or the single-responsibility principle.
The AI wants you to have "safe" code, so it will catch exceptions and return non-results instead. In practice, that means the calling code has to inspect the result to see whether it's a placeholder, instead of being confident that it would have gotten an exception otherwise.
Similarly, to avoid problems the AI might tweak some parameter. If for example you were to design a program to process something with AI, you might do gather_parameters -> call -> process_results. The call step should not try to do funky things with parameters, because bad input should be fixed at the gathering step. But locally the AI will always suggest a bunch of "if this parameter is not good, swap it silently so the call can go through anyway".
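A minimal Python sketch of the contrast (every name here -- the functions, the "temperature" parameter -- is hypothetical, just to illustrate the shape of the problem):

```python
# Anti-pattern the AI tends to produce: "fix" bad parameters locally
# and swallow failures, so the call always appears to succeed.
def call_with_silent_fallback(params):
    temperature = params.get("temperature", 0.7)
    if not 0.0 <= temperature <= 2.0:
        temperature = 0.7  # silent swap: the caller never learns the input was bad
    try:
        result = f"result@{temperature}"  # stand-in for the real call
    except Exception:
        result = None  # placeholder result: every caller must now check for None
    return result


# Preferred fail-fast style: validation belongs to the gathering step,
# so the call step only asserts its preconditions and raises otherwise.
def call_fail_fast(params):
    temperature = params["temperature"]  # KeyError if gathering was broken
    if not 0.0 <= temperature <= 2.0:
        raise ValueError(f"temperature out of range: {temperature}")
    return f"result@{temperature}"
```

With the fallback version, a caller passing an out-of-range value gets a plausible-looking result computed from a value it never asked for; with the fail-fast version the bug surfaces immediately at the call site.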
Then tests are such a problem it would require an even longer explanation...
To echo the article, I don't want to know it was written with an AI. Just like I don't want to see that it was obviously copy-pasted from StackOverflow.
The developer can do whatever they want, but at the end, what I review is their code. If that code is bad, it is the developer's responsibility. No amount of "the agent did it" matters to me. If the code written by the agent requires heavy refactoring, then the developer has to do it, period.
However, you'll probably get an angry answer that it's management's fault, or something of the sort (because there isn't enough time). Responsibility would have to be taken earlier, by pushing back if some objectives truly are unreasonable.
>No one in this world ... has ever lost money by underestimating the intelligence of the great masses of the plain people...
I think you're disproving your own point.
If you look at the major flops across industries (video games, movies, ...), the general trend is contempt for the audience.
This generally results in some form of uproar from the most involved fans, which is disregarded because of the assumption that the general public won't pick up on it.
At the very least, I would say that for this to be true you need to have a very specific definition of intelligence that would exclude a lot of crowd behaviors.
I would suggest some shades of meaning on the Mencken quote. You're absolutely right that showing contempt for your audience will -absolutely- pave the road to losing money. In contrast if you -pander- to the lowest common denominator of intelligence required for engagement? Money printer go brrrrr.
The developers are also behind JellyCar Worlds, which I found to be a wonderfully creative set of physics-based "platforming" (there's a twist!) challenges/puzzles. It's a ton of fun to play with a kid, yet there are a lot of really complex setups to challenge yourself if you want to. A real gem!
This is a very sensible analysis of the problems.
On the one hand, people tend to ignore how many bands fail, and how much money and effort is spent on the process.
On the other hand, labels have a deathgrip on the industry, using payola and other practices that they can afford thanks to their financial (and accounting) abilities.
One thing that could help is transparency, but in a way the lack of transparency is a good part of what keeps the system going.
Most people would not agree if they knew how little they would keep if they were successful; "what do you mean I have to pay for the losers?".
They would just want to pay for what was necessary for their success, ignoring every expense that didn't work as a "stupid label decision".
The thing is that nobody has a true recipe for success: you can get reasonable estimates on your bets, but each bet will always be a biased coin flip.