Hacker News | modderation's comments

Ceph storage uses a consistent-hashing scheme called "CRUSH" to handle hierarchical data placement and replication across failure domains. Given an object ID, its location can be calculated and the expected service queried.

As a side effect, it's possible to define a logical topology that reflects the physical layout, spreading data across hosts, racks, or by other arbitrary criteria. Things are exactly where you expect them to be, and there's very little searching involved. Combined with a consistent view of the cluster state, this avoids the need for centralized lookups.
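The "calculate instead of look up" idea can be illustrated with plain rendezvous (highest-random-weight) hashing. This is a toy stand-in, not the actual CRUSH algorithm, which additionally walks a weighted hierarchy of buckets (hosts, racks, etc.); the node names are made up:

```python
import hashlib

def placement(object_id: str, nodes: list[str], replicas: int = 3) -> list[str]:
    """Toy rendezvous-hash placement: every client computes the same
    replica set from the object ID alone, so no central lookup table
    is needed. (Illustrative only; Ceph's CRUSH also respects bucket
    weights and failure-domain boundaries.)"""
    def score(node: str) -> int:
        # Deterministic per-(object, node) score; highest scores win.
        digest = hashlib.sha256(f"{object_id}/{node}".encode()).digest()
        return int.from_bytes(digest[:8], "big")
    return sorted(nodes, key=score, reverse=True)[:replicas]

nodes = ["host-a", "host-b", "host-c", "host-d"]
assert placement("obj-42", nodes) == placement("obj-42", nodes)  # deterministic
```

Any client with the same node list computes the same answer, which is the property that lets Ceph clients talk directly to the right OSDs.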

The original paper is a surprisingly short read: https://ceph.com/assets/pdfs/weil-crush-sc06.pdf DOI: 10.1109/SC.2006.19


Depends on the setup, but programmatic access to a Gmail account that's used for admin purposes would allow hijacking via exfiltration of any keys or passwords in the mailbox, sending unattended approvals, and carrying on autonomous conversations with third parties who aren't on the lookout for impersonation. In the average case, the address book would probably get scraped and the account would be used to blast spam to the rest of the internet.

Moving further, if the OAuth token confers access to the rest of a user's Google suite, any information in Drive can be compromised. If the token has broader access to a Google Workspace account, there's room for inspecting, modifying, and destroying important information belonging to multiple users. If it's got admin privileges, a third party can start making changes to the org's configuration at large, send spam from the domain to tank its reputation while earning a quick buck, or phish internal users.

The next step would be racking up bills in Google's Cloud, but that's hopefully locked behind a different token. All the same, a bit of lateral movement goes a long way ;)


This looks interesting! I've been building a similar tool that uses TreeSitter to follow changes to AST contents across git commits, with the addition of tying the node state to items in another codebase. In short, if something changes upstream, the corresponding downstream functionality can be flagged for review.

The ultimate goal is to simplify the building and maintenance of a port of an actively-maintained codebase or specification by avoiding the need to know how every last upstream change corresponds to the downstream.
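The core trick can be sketched with Python's stdlib `ast` module standing in for TreeSitter (the real tool would use TreeSitter to stay language-agnostic): fingerprint each upstream function, and flag the downstream counterpart when a fingerprint changes.

```python
import ast
import hashlib

def function_fingerprints(source: str) -> dict[str, str]:
    """Map each top-level function name to a hash of its AST dump.
    ast.dump() omits line/column info by default, so pure formatting
    moves don't change the fingerprint, but semantic edits do."""
    fps = {}
    for node in ast.parse(source).body:
        if isinstance(node, ast.FunctionDef):
            fps[node.name] = hashlib.sha256(ast.dump(node).encode()).hexdigest()[:12]
    return fps

old = function_fingerprints("def f(x):\n    return x + 1\n")
new = function_fingerprints("def f(x):\n    return x + 2\n")
changed = [name for name in old if old.get(name) != new.get(name)]
assert changed == ["f"]  # upstream f changed: flag the downstream port of f
```

In the actual tool the fingerprints would be recorded per upstream commit and diffed against the annotations in the downstream codebase.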

Just from an initial peek at the repo, I might have to take a look at how the author is processing their TreeSitter grammars -- writing the queries by hand is a bit of a slow process. I'm sure there are other good ideas in there too, and Diffsitter looks like it'd be perfect for displaying the actual semantic changes.

Early prototype, heavily relies on manual annotations in the downstream: https://github.com/NTmatter/rawr

(yes, it's admittedly a "Rewrite it in Rust" tool at the moment, but I'd like it to be a generic "Rewrite it in $LANG" in the future)


It's even more fun when you extend it to negative integers, reals, and the complex plane!

Matt Parker (Stand-up Maths) delves into this in a very approachable manner: https://www.youtube.com/watch?v=ghxQA3vvhsk


I'm guessing it'd look something like this on a 1-dimensional number line:

    --- >   | > >> . << < |   < ---
The dot in the middle would be the singularity, the pipes the event horizon, and the contents would be increasingly warped spacetime that may or may not exist, depending on your interpretation of things.


I think it's an interesting thought experiment. What would happen if the stock market were quantized to a blind one trade per-minute granularity?

I suspect this would put everyone on more even footing, with less focus on beating causality and light lag, and more focus on using the acquired information to make longer-term decisions. This would open things up to anyone with a computer and disposable income, though it would disappoint anyone in the high-frequency trading field.


> What would happen if the stock market were quantized to a blind one trade per-minute granularity?

Like one share of stock trades each minute in each name? Or one trade randomly executes?

If the former, you stop trading the stock and start trading something pointing at it. If the latter, the rich get to trade.

> less focus on beating causality and light lag

You’d have to ban cancelling orders, otherwise you bid and offer and then cancel at the last minute. Either way, you’d be constantly calculating the “true” price while the market lags and settling economic transactions on that basis. (My guess is the street would settle on a convention for the interauction model price.)

If you’re upset about stock markets looking like casinos, the problem isn’t the fast trading. It’s the transparency. Just don’t report trades until the end of the day.

If you aesthetically don’t like HFT, that’s a tougher problem as the price of the stock points at something tied to reality, and reality runs real time.

Both ideas sort of look like the private markets.


He means that every minute, a single "opening trade"-style auction happens and clears the overlapping sections of the order book.

This has the advantage that every trader gets the same price every minute, and racing against the clock has marginal utility.
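A uniform-price batch clearing of that kind can be sketched as follows. This is one reading of the proposal, not any exchange's actual matching algorithm: pick the single price that maximizes matched volume, and everyone who trades in that interval trades at it.

```python
def batch_auction(bids, asks):
    """Uniform-price batch clearing sketch: every matched trade in the
    interval executes at one price, so arrival order within the minute
    confers no advantage. bids and asks are (price, qty) tuples."""
    prices = sorted({p for p, _ in bids} | {p for p, _ in asks})
    best_price, best_volume = None, 0
    for p in prices:
        demand = sum(q for bp, q in bids if bp >= p)  # buyers willing to pay p
        supply = sum(q for ap, q in asks if ap <= p)  # sellers willing to take p
        volume = min(demand, supply)
        if volume > best_volume:
            best_price, best_volume = p, volume
    return best_price, best_volume

price, volume = batch_auction(
    bids=[(101, 10), (100, 5), (99, 8)],
    asks=[(98, 6), (100, 10), (102, 4)])
assert (price, volume) == (100, 15)  # 15 units clear at a single price of 100
```

Ties between equally good prices would need a convention (e.g. the midpoint), which is exactly the sort of interauction model price the sibling comment mentions.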


> racing against the clock has marginal utility

It has the same utility as in the opening cross, the most algorithmically-trafficked moments of trading after the closing cross. The last order can incorporate more information than an earlier one. Given the book is assembled transparently, that means an order submitted close to the deadline can “see” other orders in a way they couldn’t “see” it.


> blind one trade per-minute granularity

"Blind" meaning that no orders can "see" each other.


You would change the rules, but I think the result would largely remain the same. As a market participant with the fastest access to data from other markets, news, and similar sources, as well as low order entry latency, you would still be able to profit from information asymmetry.

Imagine that a company announces the approval of its new vaccine a few milliseconds before the periodic trade occurs. As an HFT firm, you have the technology to enter, cancel, or modify your orders before the periodic auction takes place, while less sophisticated players remain oblivious to what just happened. The same applies to price movements on venues trading the same instrument, its derivatives, or even correlated assets in different parts of the world.

On the other hand, you risk increasing price volatility (especially in cases where there is an imbalance between buyers and sellers during the periodic auction) and making markets less liquid.


How about "Vidja" -- the .fr domain seems to be available, the top google hit is for an IKEA floor lamp, and it is generally a silly English mispronunciation of "video" (you kids and yer vidja games...) :)


WORM prevents after-the-fact modification, but it isn't very helpful in the case of persistent threats.

The concern is that the tampering has already been committed to the backups. When was the "Break Glass" password last rotated? Is it protected by one or more Yubikeys that were manufactured before they fixed that nasty exploit? What other attack vectors are baked in through malfeasance or human error?


My comment was not in reply to passwords, "yubikeys" or anything else you mentioned, so your techsplaining about those things was a bit misplaced. MY point was that if the backups are on WORM tapes, and we still have those backups, then there's nothing to fear being compromised from those backups. Everything other than WORM tapes you wrote about is outside the scope of my comment.


Perhaps you are the Sheriff? My baby shot me down, but they did not shoot the Deputy.


Ground control, this is Major Tom, we've got 99 Luftballons up here!


That's a useful step, but the options are still Full Cloud Dependency or DIY with Zero Security.

Why haven't they implemented rudimentary access control with printer-side Basic Auth (or the equivalent auth for MQTT and FTP)? Add optional SSL support to prevent tampering/MITM on a potentially hostile network, and the unauthenticated access concerns listed in [1] should disappear.

Any problems related to potentially damaging instructions should be best-effort mitigated by the firmware and otherwise indemnified by a "your own fault for using a third-party slicer" clause in the EULA.

Bambu Labs shouldn't need to be in the authentication/authorization path, unless we're actively using their cloud environment.
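The printer-side check being asked for really is this small. A minimal sketch of an HTTP Basic Auth (RFC 7617) verifier, with a hypothetical credential that would be configured on the device itself:

```python
import base64
import hmac

# Hypothetical printer-side credential, set once via the device's own UI.
PRINTER_USER = "maker"
PRINTER_PASS = "correct-horse-battery"

def authorized(header) -> bool:
    """Minimal HTTP Basic Auth check (RFC 7617): reject any request
    that lacks the shared credential. hmac.compare_digest gives a
    constant-time comparison to avoid leaking the password via timing."""
    if not header or not header.startswith("Basic "):
        return False
    try:
        user, _, password = base64.b64decode(header[6:]).decode().partition(":")
    except Exception:
        return False
    return hmac.compare_digest(user, PRINTER_USER) and \
           hmac.compare_digest(password, PRINTER_PASS)

token = base64.b64encode(b"maker:correct-horse-battery").decode()
assert authorized(f"Basic {token}")
assert not authorized("Basic bm9wZTpub3Bl")  # "nope:nope" is rejected
```

None of this requires a round trip to anyone's cloud; the credential lives on the printer and on the local slicer.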

