I don't know why LLMs talk in a hybrid of corporate-speak and sales-speak, but they clearly do. On the one hand, that makes their default style stick out like a sore thumb outside LinkedIn; on the other, it's utterly enervating to read when suddenly every other project shared here speaks with the same grating voice.
It's even worse than that: TikTok & Instagram are labeled "social media" despite, I'd wager, most users never actually posting anything anymore. Nobody really socializes on short-form video platforms any more than they do on YouTube. It's just media. At least forums are social, sort of.
Right, which is why it's so strange to suddenly see every other readme and blog post that gets shared on this site speaking with the same tone of voice. Dead Internet theory finally came here.
Not only does it scream LLM output, I happen to find it almost always grating. It's fine enough when something is labeled as AI output, but when it's nominally a human-authored document it's maddening.
Claude tics appear to include the following:
- It's not just X, it's Y
- *The problem* / *The Solution*
- Think of it as a Z that Ws.
- Not X, not Y. Just Z.
- Bold the first sentence of each element of a list. If it's writing markdown, it does this constantly
- Unicode arrows → Claude
- Every subsection has a summary. Every document also has a summary. It's "what I'm going to tell you; What I'm telling you; What I just told you", in fractal form, adhered to very rigidly. Maybe it overindexed on Five Paragraph Essays
Oh no!! Yet another thing I've been doing for the past decade which will now make me look like a robot. I thought my penchant for em-dashes was bad enough.
I have a keyboard shortcut to make the arrows. I think they look nice.
Oh, I think they look nice too, but unfortunately they are a Claude thing now :( Though if you use them judiciously it won't make the whole document look generated; it's really when they're deployed the way Claude does it that it's noticeable.
> We use a lenient parser like ast.literal_eval instead of the standard json.loads(). It will handle outputs that deviate from strict JSON format (single quotes, trailing commas, etc.).
A nitpick: that's probably a good idea and I've used it before, but ast.literal_eval isn't really a lenient JSON parser. It's a Python literal parser, and the two formats happen to be close enough that it's useful.
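To make the distinction concrete, here's a quick sketch of where the two formats diverge (illustrative only, not code from the linked project):

```python
import ast
import json

# Model output with single quotes and trailing commas:
# invalid JSON, but a perfectly valid Python literal.
s = "{'model': 'foo', 'tags': ['a', 'b',],}"

try:
    json.loads(s)
except json.JSONDecodeError:
    print("json.loads rejects it")

print(ast.literal_eval(s))  # {'model': 'foo', 'tags': ['a', 'b']}

# The gap runs the other way too: valid JSON that isn't a Python literal.
try:
    ast.literal_eval('{"ok": true, "value": null}')
except ValueError:
    print("literal_eval chokes on true/null (Python wants True/None)")
```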
Appropriately, I think this was probably drafted by AI too:
> How does install.md work with my existing CLI or scripts?
> install.md doesn't replace your existing tools—it works with them. Your install.md can instruct the LLM to run your CLI, execute your scripts, or follow your existing setup process. Think of it as a layer that guides the LLM to use whatever tools you've already built.
(It doesn't X — it Ys. Think of it as a Z that Ws. This is LLM-speak! I don't know why they lean on these constructions to the exclusion of all else, but they demonstrably do. The repo README was also committed by Claude Code. As much as I like some of the code that Claude produces, its READMEs suck.)
Yeah, removing that line right now. Went too fast, and some of this copy is definitely low quality :( Incredibly ironic for me to say that AI needs more supervision while working at the company proposing this, haha.
Any other feedback you have about the general idea?
I think my preferred version of this would be a hybrid. Keep the regular installer, add a file filled with information that an LLM can use to assist a human if the install script fails for some reason.
If the installer was going to succeed in a particular environment anyway, you definitely want to use that instead of an LLM that might sporadically fail for no good reason in that same environment.
If the installer fails, then you have a "knowledge base" to help debug it, usable by humans or LLMs; and if that fails too, well, the regular installer failed anyway, so hopefully you're not worse off. If the user runs the helper LLM in yolo mode, the consequences are on them.
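As a rough sketch of that flow (the wrapper and file names here are hypothetical, purely illustrative):

```python
#!/usr/bin/env python3
# Hypothetical wrapper for the hybrid approach: the deterministic
# installer always runs first; the LLM-facing notes only matter on failure.
import subprocess
import sys

result = subprocess.run(["./install.sh"], capture_output=True, text=True)
if result.returncode == 0:
    sys.exit(0)  # installed normally, no tokens burned

# Installer failed: point a human, or an LLM, at the knowledge base.
sys.stderr.write(result.stderr)
print("install.sh failed. See INSTALL_NOTES.md for known failure modes,")
print("or hand it to an LLM agent to debug interactively.")
```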
The other problem is that without an LLM around, your install.md is suddenly not executable, which means you're effectively importing a massive dependency. Why should I have to burn some of my token quota or pay for extra tokens just to install something?
> Elastic has been working on this gap. The more recent ES|QL introduces a similar feature called lookup joins, and Elastic SQL provides a more familiar syntax (with no joins). But these are still bound by Lucene’s underlying index model. On top of that, developers now face a confusing sprawl of overlapping query syntaxes (currently: Query DSL, ES|QL, SQL, EQL, KQL), each suited to different use cases, and with different strengths and weaknesses.
I suppose we need a new rule, "Any sufficiently successful data store eventually sprouts at least one ad hoc, informally-specified, inconsistency-ridden, slow implementation of half of a relational database"
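For a taste of what that sprawl looks like in practice, here's the "same" filter in two of those syntaxes via the official Python client (a rough sketch from memory; treat the exact client signatures as an assumption):

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# 1. Classic Query DSL: a JSON tree.
es.search(index="logs", query={"match": {"message": "timeout"}})

# 2. ES|QL: a piped string language with its own parser.
es.esql.query(query='FROM logs | WHERE message LIKE "*timeout*" | LIMIT 10')

# ...and SQL, EQL, and KQL each bring their own grammars and endpoints.
```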
Funny argument on the query languages in hindsight, since the latest release (https://www.paradedb.com/blog/paradedb-0-20-0, which came out after this blog post) just completely changed the API. To be seen how many different API versions you get if you make it to 15 years ;)
PS: I've worked at Elastic for a long time, so it is fun to see the arguments for a young product.
You have that backwards. GFS was replaced by Colossus ca. 2010, and Colossus largely functions as blob storage with append-only semantics for modification. BigTable is a KV store, and its row size limits (256MB) make it unsuitable for blob storage. GCS is built on top of Spanner (metadata, small files) and Colossus (bulk data storage).
But that's beside the point. When people say "RDBMS" or "filesystem" they mean the full suite of SQL queries and POSIX semantics, neither of which you get with KV stores like BigTable or distributed storage like Colossus.
The simplest example of POSIX semantics that gets rapidly discarded is the "fast folder move" operation. This is difficult to impossible to achieve when your keys are the full paths of files, and is generally easier to implement with hierarchical directory entries. However, many applications are absolutely fine with the semantics of "write entire file, read file, delete file", which enables huge simplifications and optimizations!
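A sketch of why the flat-key version is slow (a toy model, not any particular system):

```python
# With full paths as keys, "mv /photos /archive" means rewriting every
# key under the prefix: O(number of objects), not O(1).
store = {
    "/photos/2023/a.jpg": b"...",
    "/photos/2023/b.jpg": b"...",
}

def rename_prefix(store: dict, old: str, new: str) -> None:
    for key in [k for k in store if k.startswith(old)]:
        store[new + key[len(old):]] = store.pop(key)

rename_prefix(store, "/photos/", "/archive/")

# A hierarchical filesystem does the same move as a single metadata
# update (repoint one directory entry). That's the "fast folder move"
# that flat key namespaces give up.
```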
Thank you, yes, my knowledge was very outdated, way before Spanner.
Spanner under GCS actually explains how public Google Cloud always had ACID object listing, while S3 only implemented it around 2020. I always suspected there must be some very hard piece to implement that AWS didn't have until 2020. Makes sense now that that piece was Spanner.
It explicitly doesn't, though they don't explain why not. It's not an on-device/off-device distinction, because it disables Firefox's automatic tab groups too.
A lot of anti-AI backlash seems to exempt machine translation, which as far as I can tell is just because it's been around for so long that people are comfortable with it and don't see it as new or AI-ish. IMHO that spells doom for a lot of this: in ten years, automatic tab groups will seem just as natural and non-intrusive as machine translation.
It's not mere familiarity. Machine translation is immediately useful to me. I was going to pull up google translate anyway; keeping it local to my device improves both convenience and privacy.
A local LLM that I explicitly bring up to ask a question and dismiss (i.e., no CPU or RAM usage) when I'm done consulting it is nice. A piece of software I'm using interrupting what I'm doing to ask me a useless and annoying question, or to make an unsolicited change to my workspace, leaves me thinking about permanently uninstalling it.
I will never want automatic tab groups or automatic anything else. I don't even want an "integrated" desktop environment - I use i3 to get away from that. I hate all the useless, half-baked bullshit features that are constantly shoved in my face.
If the modern web was compatible with it I'd use a text based browser for 90% of what I do online. And if that were the case I'd still welcome a built in machine translation feature because it's an incredibly useful tool.
Firefox's translation feature does, by default, pop up to interrupt and ask if you want to translate a page it has detected is in another language. We're just more used to that, and it's a more reliable signal than most that you probably want to run a tool.
It's still relatively new in FF and I don't think I've seen anyone complaining about it annoying them with popups, even though it absolutely does throw up an interrupting overlay, especially on mobile.
I definitely complain about this one. I can read a few languages, and rarely if ever browse a page in a language I don't understand, so popups with "do you want to translate this" are unwelcome here. It doesn't help that in the first iterations Firefox didn't offer a quick way to turn the whole thing off.
You can disable the popup but still invoke the tool manually from the main menu. I reaffirm my previously expressed dissatisfaction with modern software "features" and add that there are plenty of defaults in Firefox that I personally dislike. That includes anything that pops up unsolicited without good reason.
From a UI perspective, auto tab groups are just an extra button as far as I can tell, so it's not clear why it's getting the axe from this site, just from a pure "this is annoying" point of view.
The flow is: 1) you drag a tab over another tab and it suggests a name for the tab group, and 2) you click on a tab group and another button offers to suggest more of your tabs to add to the group. That's less intrusive than Firefox Translations is by default.
These are tiny purpose built models with simple and safe use cases.
Do you also want to remove the ML that puts the results you want at the top of the address bar autocomplete? That's been around for 15 years, and it's "AI", so we might as well get rid of it too, right?
This "all ML sucks" because generative AI LLMs suck has to end. It's entirely a garbage take.
Here's my list of current Claude (I assume) tics:
https://news.ycombinator.com/item?id=46663856