Hacker News | chucky_z's comments

There are 'best time to brush' guidelines around eating and drinking. Usually you want to either:

- Brush no less than 15 minutes before eating, or

- Wait 45+ minutes after eating before brushing

I don't fully understand the science, as I'm not a dentist, but it's something related to the way that things stick to/are absorbed by enamel and dentin.

I believe water is the exception here, you can drink water and then immediately brush. You should not brush and then immediately drink water though. You want the toothpaste to stick around and form a barrier.


Supposedly, after eating the pH in the mouth drops and becomes more acidic, which softens the enamel, so brushing will do more harm than good. That's my understanding.


Back-of-napkin math I've done previously says it breaks down around 2 million members with HashiCorp's defaults. The defaults are quite aggressive, though, and if you can tolerate seconds of latency (called out in the article) you could reach billions without a lot of trouble.
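To make the back-of-napkin shape concrete, here's a toy epidemic-broadcast model. The 200ms interval and fanout of 3 are Serf-like assumptions for illustration, not HashiCorp's published numbers, and real gossip (probes, suspicion timeouts) is messier than this:

```python
import math

def convergence_estimate(members, gossip_interval_s=0.2, fanout=3):
    """Idealized push-gossip convergence: each round, every informed node
    tells `fanout` peers, so coverage multiplies by roughly (fanout + 1).
    Rounds needed ~ log base (fanout + 1) of the member count."""
    rounds = math.log(members) / math.log(fanout + 1)
    return rounds * gossip_interval_s

# Convergence grows with log(N), so huge clusters are "only" a bit slower:
print(f"2M members: ~{convergence_estimate(2_000_000):.1f}s to converge")
print(f"1B members: ~{convergence_estimate(1_000_000_000):.1f}s")
# Relaxing the interval to 1s trades seconds of latency for far less chatter:
print(f"1B @ 1s:    ~{convergence_estimate(1_000_000_000, gossip_interval_s=1.0):.1f}s")
```

The point is the shape: propagation cost scales with log(N), so the knob that actually hurts at scale is how aggressively you gossip, not the member count itself.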


It's also about frequency of changes and granularity of state when sizing workloads. My understanding is that most Hashi shops would federate workloads of our size/global distribution; it would be weird to try to run one big cluster to capture everything.


From a conversation I'm literally having right now: 'try to run one big cluster to capture everything' is our active state. I've brought up federation a bunch of times and it's fallen on deaf ears. :)

We are probably past the size of the entirety of fly.io, for reference, and maintenance is very painful. It works because we are doing really strange things with Consul (batched cross-cluster txn updates of static entries) on really, really big servers (4Gbps+ filesystems, 1TB of memory, hundreds of big, fast cores, etc).
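For anyone curious what "batch txn updates of static entries" roughly looks like: Consul exposes a `/v1/txn` endpoint that takes a JSON array of KV operations, with values base64-encoded and a default cap of 64 operations per transaction. A sketch of building and chunking such a payload (key names are made up):

```python
import base64
import json

def build_kv_txn(entries):
    """Build a Consul /v1/txn payload: a JSON array of KV 'set' ops.
    Consul expects values base64-encoded."""
    return [
        {"KV": {"Verb": "set",
                "Key": key,
                "Value": base64.b64encode(value.encode()).decode()}}
        for key, value in entries.items()
    ]

def chunk(ops, size=64):
    """Consul's default txn limit is 64 ops, so large batches are split."""
    for i in range(0, len(ops), size):
        yield ops[i:i + size]

ops = build_kv_txn({"static/service-a": "10.0.0.1",
                    "static/service-b": "10.0.0.2"})
payload = json.dumps(ops)
# Each chunk would then be PUT to http://<agent>:8500/v1/txn
```

Each chunk commits atomically on its own, so cross-chunk consistency is on the caller, which is part of why this pattern gets painful at scale.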


“Who orchestrates the orchestrators?” is the question we’ve never answered at HashiCorp. We tried expanding Consul’s variety of tenancy features, but if anything it made the blast radius problem worse! Nomad has always kept its federation lightweight which is nice for avoiding correlated failures… but we also never built much cluster management into federated APIs. So handling cluster sprawl is an exercise left to the operator. “Just rub some terraform on it” would be more compelling if our own products were easier to deploy with terraform! Ah well, we’ll keep chipping away at it.


systemd-fleet, by the original CoreOS folks. https://github.com/coreos/fleet

I used this for a bit when it was brand new, and it was incredibly smooth; it solved the problem of controlling systemd units remotely really well. I'm pretty sure the only reason it never actually took off was Kubernetes and CoreOS's acquisition. It actively solves the 'other half' of the k8s problem, though, which is managing the state of the host itself.
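For anyone who never saw it: a fleet unit was just a systemd unit with an extra `[X-Fleet]` section carrying scheduling hints (the unit and metadata names here are illustrative):

```ini
# myapp@.service -- everything above [X-Fleet] is a plain systemd unit
[Unit]
Description=My app (instance %i)

[Service]
ExecStart=/usr/bin/myapp --port=%i

[X-Fleet]
# fleet-only scheduling hints: spread instances across machines,
# and only land on hosts whose metadata matches
Conflicts=myapp@*.service
MachineMetadata=region=us-east
```

You'd submit it with `fleetctl start myapp@8080.service` and fleet would pick a machine via etcd; the host-level simplicity is exactly what made it feel so smooth.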


I believe the context is that the CVE means this bypasses the sandbox entirely, so in this specific case it's a real, full-blown RCE. Your comment makes it seem at a glance like you're saying it's a DoS at worst.


Thanks for replying, but my comment is not saying that at all -- it's pushing back on someone making the claim that the new CVE is no worse than what could already be done, by pointing out that what could already be done was (presumably) only a DoS, while the new CVE is full RCE.

I've reread my comment and the parent comment, and I don't understand how this is not clear?


I've used FIM in the past to catch a CEO modifying files in real-time at a small business so I could ping him and ask him to kindly stop. It's not just about BS _processes_. :D


That means the CEO has access to make the changes. It's technically easier to remove that than to insert FIM into the deployment process (and it will stop other unauthorised employees too). I mean, you already need a working deployment pipeline for FIM anyway, so enforce that?


The CEO would've found it very easy to remove the blocker in that case (me). This is the life of small tech businesses. Also, they were modifying configuration files (php-fpm configurations iirc) and not code.

FIM is very useful for catching things like folks mucking about with users/groups because you typically watch things like /etc/shadow and /etc/passwd, or new directories created under /home, or contents of /var/spool/mail to find out if you're suddenly spamming everyone.
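The core of a FIM tool is tiny: a baseline of hashes plus a periodic diff. A minimal sketch (real tools like AIDE or Tripwire also track permissions, owners, and inode metadata, not just content; the watch list here is illustrative):

```python
import hashlib
from pathlib import Path

WATCHED = ["/etc/passwd", "/etc/shadow"]  # illustrative watch list

def snapshot(paths):
    """Hash each watched file; missing/unreadable files are recorded as None."""
    state = {}
    for p in paths:
        try:
            state[p] = hashlib.sha256(Path(p).read_bytes()).hexdigest()
        except OSError:
            state[p] = None
    return state

def diff(baseline, current):
    """Return the paths whose content changed since the baseline."""
    return [p for p in baseline if baseline[p] != current.get(p)]

# Typical loop: take a baseline once, re-snapshot on a timer, alert on diff()
```

Polling hashes is the crude version; production tools usually hook inotify/auditd so they catch the change (and who made it) in real time.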


That’s a great real-world story: exactly the kind of unexpected modification FIM can help surface, not only security incidents but also operational surprises.


If it's more than even like 10GB this is going to take _a while_.

I love the pg_* commands but they ain't exactly the speediest things around.
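One partial mitigation: pg_dump's directory output format is the only one that supports parallel workers, which helps a lot past the 10GB mark as long as the bottleneck isn't a single huge table (standard pg_dump/pg_restore flags, worker count illustrative):

```shell
# -Fd = directory format (required for -j), -j 8 = eight parallel workers
pg_dump -Fd -j 8 -f ./mydb.dump mydb

# Restore can also run in parallel from a directory-format dump
pg_restore -j 8 -d mydb_copy ./mydb.dump
```

Parallelism is per-table, so one giant table still dominates the wall-clock time.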


As someone who's lived in the Bay Area for a bit over 10 years now: when I first moved here, Google was very much the company you think it was. Now it is not. Every single friend who worked at Google in the Bay Area (and that was >50% of my friends when I moved here!) has since left. There is one left at Google at all, and they're only remaining due to physical location (near family outside the US). I have watched my friends get brutally and relentlessly pipped over the tiniest bullshit reasons. This is all entirely secondhand, so my perspective is very skewed, but even my friends at Facebook/Netflix/Apple weren't treated that way.


I'm aware of the many changes; including the cancellation of Google SoC. However, gp claimed neither Google nor Apple have a benevolent track record towards open source, and that doesn't ring true to me. The old Google was very benevolent, perhaps only rivalled by Red Hat and (old) IBM.


Hi, can you provide a few examples of 'tiniest bullshit reasons'? Kinda curious as to what is considered bullshit there; I'm from the EU with zero experience of anything like S.F.


One was pipped because they were placed on a moonshot, told how amazing their work was, and gave internal talks on it; then the moonshot was defunded... so they got pipped over their lack of business impact. Instead of, y'know, being placed back on a normal team, like the one they came from only a year or so before.


spell out 'development' with hammer emojis. bring ascii art back as emoji art.

(i actually do this in slack messages and folks find it funny and annoying, but more funny)
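A tiny sketch of the letter-emoji version, for the curious. It maps a-z to the regional indicator symbols Slack renders as big letters; the spaces between them matter, because adjacent pairs of regional indicators otherwise collapse into flag emoji:

```python
def emojify(word):
    """Spell a word with regional-indicator 'big letter' emoji (a-z only);
    anything else passes through unchanged."""
    base = 0x1F1E6  # regional indicator 'A'
    return " ".join(
        chr(base + ord(c) - ord("a")) if "a" <= c <= "z" else c
        for c in word.lower()
    )

print(emojify("development"))
```

Hammer emojis would just be a dict lookup instead, but the flag-collapsing gotcha is the fun part.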


Incredibly good. A lot of major GPU board partners are making LLM-focused boards. There were a few previews on Gamers Nexus from... Computex, I think? I wanna say lots of memory, dual-GPU boards that looked very well built for local LLM usage.


With unified memory?


https://www.servethehome.com/maxsun-intel-arc-pro-b60-dual-g...

It's two B60 GPUs on a single x16 PCIe card. Nothing unified.


Working with the VRAM of both GPUs on the same card is aeons faster than taking a round trip through system RAM.


Looks like they are independent x8 GPUs on a bifurcated x16 PCIe slot.


I've used `tuned` a lot. It's really extremely good for personal machines/workstations, and okay for servers. Professionally I'm almost 50/50 with it: 50% of the time I had a really good experience, and 50% of the time I turned it off and used startup scripts (like cloud-init per-boot scripts and whatnot).

Overall, I'd say give it a shot as it can be really powerful and I do actually like it. Don't be afraid to go 'no, I know how to do this better, myself' and turn it off though.
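For anyone wanting to kick the tires, the day-to-day surface is just `tuned-adm` (profile names vary by install):

```shell
tuned-adm list                            # show available profiles
tuned-adm active                          # which profile is applied now
tuned-adm profile throughput-performance  # switch profiles
tuned-adm off                             # the escape hatch: apply nothing
```

`tuned-adm off` is the "I know how to do this better, myself" button: the daemon keeps running but stops touching settings.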


I disable it whenever setting up a new system. It gets IRQ bindings for networking wrong every single time, and moves IRQs around in ways that defeat the whole point of having per-CPU queues. Not sure why that behaviour is enabled by default, as it makes no sense.
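For anyone who wants to check what it's doing to their box, the kernel exposes this directly (device name and IRQ number here are made up; pinning needs root, and irqbalance/tuned may move things right back unless stopped):

```shell
# See which CPUs a NIC's queue interrupts are actually landing on
grep eth0 /proc/interrupts

# Pin IRQ 42 to CPU 2 manually
echo 2 > /proc/irq/42/smp_affinity_list
```

If the IRQ for queue N isn't on the same CPU that processes queue N, you're paying cross-CPU cache traffic for every packet, which is the behaviour being complained about here.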


Please lodge a bug because I seem to think the same way but lack the larger deployment experience to explain how to do it more generally than my tiny use case.


It's easier to uninstall it. There's nothing good that tuned has ever done for me.

FYI: messing with IRQ bindings for the per-CPU queues of NICs has been a bug for at least 16 years, depending on the NIC (Intel launched the 82599 back in 2009).

Clueless software developers should not be messing with kernel settings like irq bindings. Software that does that is not worth my time.


Complaining here won't improve the situation, filing a bug might.


Again, I don't use it, so I'm not going to file bugs against it. My post is simply providing relevant context for other readers here to understand the limitations of the software referenced in this post. In no world are technically competent folks under an obligation to teach other people how to do things right. Doubly so when the software is paid for by a large multinational corporation that has the resources to do the job right.


No obligation, obviously, no one argues that. The issue is tone.

> Clueless software developers should not be messing with kernel settings like irq bindings. Software that does that is not worth my time.

Come on, man. If you don’t want to help, just don’t respond. If you want to warn someone against something, just be bare-minimum polite. It’s easy.


Shooting the messenger won't fix or prevent a code quality problem.

Edit: Let me explain why I am of this opinion. Of late my life is being made miserable by poor quality software. There seems to be an entire generation of programmers that are skipping the whole part of the design process where one explores the problem space a given piece of software is meant to fit into. In doing so, they are willfully ignoring how the user will experience their software.

This includes networking products that have no means of recovery when the cloud credentials are lost. When the owner of the product loses their credentials and no longer has access to the email address they originally signed up with, the only solution is a manual reset of every single device in the network. Have you ever had to spend hours taking a ladder into a building to rip down a dozen access points that are paperweights because there's no way to recover from this?

Take LLMs. They're great at filling in reams of boilerplate code where the structure is generally the same as everything else. So much of the software industry is about building CRUD apps for your favourite corporation, and there's not a lot of thinking throughout the process. But what happens when you're building a complex application that involves careful performance optimization on many-core CPUs and numerous race conditions with complex failure modes? Not so good. And the person driving the LLM isn't going to patch the security holes in the "vibe code" they submitted to the Linux kernel, because they don't even know how it works.

Or LLMs that skip off the guard rails and feed desperate individuals information on how to kill themselves?

What about the Full Self Driving vehicles that drive at full speed into emergency vehicles parked on a road with lights flashing that the most naive of drivers would instinctively slow down for while approaching?

What about search engines that have prematurely deployed "AI" features that hallucinate search results for straightforward queries?

How about the world's largest e-commerce website that can't perform a simple keyword search for an attribute of a product (like the size of an SSD)? When I specify 8TB, I mean products that are 8TB, not 512GB!!!

How about CPUs that lose 10-20% of their touted performance gains at launch because of bugs that are "fixed" by software and microcode updates after launch?

What about the email service that blocks emails that are virtually identical to every other email sent to a mailing list because it wasn't delivered using TLS? Oh, but the spammers that have SPF + DKIM + ARC + whatever validation get to have their messages delivered because they have put an Unsubscribe link in the headers.

How about the online advertising platforms that push scams on the elderly with ads that are ephemeral to prevent anyone from sharing a link to what they just saw and report it?

So if I say there is a problem with a software developer being clueless about features they have implemented, it is a valid criticism that is based in facts about the way their software was designed and how it functions.

There are still people who value their reputation enough to put in the effort to explore the problem space and anticipate the user's needs to avoid issues like this, but I fear that they are going to be pushed out of the industry because they're not fast enough in the race to foist the "next big thing" onto an unsuspecting public.

We need simple, reliable, functional software that meets the needs of its users. And we're losing that.

It's a sad state of affairs that we have to deal with in 2025. We have truly entered the age of "Fuck you" software that ignores what it does to its users and actively harms them.


Instead of writing a novel-length rant, you could have spent that time filing a bug.


For tuned, I don't need the bug fixed, as I simply don't want it changing any kernel settings at all. Uninstalling it achieves the result that I want.

The rest of the rant is valid and the issues are virtually impossible to get fixed.

Please enlighten me: how does one file a bug against spam filtering on gmail.com or get rid of broken AI summaries on google.com that will garner an acknowledgement and get the underlying issue fixed?


I thought this was about tuned.


We used to call them script kiddies. Running stuff without understanding what's happening and why.


What did you accomplish with it?

Another answer talks about saving 40W. Why not? But it's not much in a normal power-cost environment.


I did the reverse: I needed something to keep CPUs pegged at 100% power all the time, and for some reason the boxes I was using at the time kept going 'no no, it's ok, I need to save power,' which led to really inconsistent performance. Tuned, in that environment, 'just worked' after I wrote a custom profile.

There was another issue I was able to fix with it in AWS, but I legitimately can't recall what it was.
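A custom profile for that kind of "never save power" setup is short. Something like the following (profile name and exact option values are illustrative; `governor`, `energy_perf_bias`, and `force_latency` are options of tuned's cpu plugin):

```ini
# /etc/tuned/pegged/tuned.conf
[main]
summary=Keep CPUs at full performance, no power saving
include=throughput-performance

[cpu]
governor=performance
energy_perf_bias=performance
force_latency=1
```

Dropping that in its own directory under /etc/tuned and running `tuned-adm profile pegged` activates it; `force_latency` keeps the CPUs out of deep idle states, which is usually the source of the inconsistency.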

