mjevans's comments | Hacker News

I forget the term, it might be Dependency Graph.

Hypothetically, let's say there's a synchronized quantum every 60 seconds. Order of operations might not matter if transactions within that window do not touch any account referenced by other transactions.

However, every withdrawal is also a deposit. If Z withdraws from Y, Y withdraws from X, and X also withdraws from Z, there's a related path.

Order also matters if any account along the chain would reach an 'overdraft' state. The profitable thing for banks to do would be to deduct all the withdrawals first, then apply the deposits, maximizing overdraft fees. A kinder approach would be the inverse: assume all payments succeed, apply the deposits first, and then go after the sources. Specifying the order of applied operations, including aborts in the case of failures, is important.
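
A minimal Rust sketch of that ordering effect (hypothetical types, not any real banking API): the same ring of transfers ends with identical balances either way, but applying withdrawals first racks up overdraft events that a deposits-first order avoids.

    use std::collections::HashMap;

    #[derive(Clone, Copy)]
    struct Transfer { from: char, to: char, amount: i64 }

    // Apply the deposit and withdrawal halves in the given order, counting each
    // withdrawal that pushes an account below zero as one overdraft event.
    fn overdraft_events(start: &HashMap<char, i64>, transfers: &[Transfer], deposits_first: bool) -> usize {
        let mut balances = start.clone();
        let mut events = 0;
        let passes = if deposits_first { [true, false] } else { [false, true] };
        for do_deposits in passes {
            for t in transfers {
                if do_deposits {
                    *balances.entry(t.to).or_insert(0) += t.amount;
                } else {
                    let b = balances.entry(t.from).or_insert(0);
                    *b -= t.amount;
                    if *b < 0 { events += 1; } // a fee would be charged here
                }
            }
        }
        events
    }

    fn main() {
        let start = HashMap::from([('x', 10), ('y', 10), ('z', 10)]);
        // The ring from above: Z pulls from Y, Y pulls from X, X pulls from Z.
        let ring = [
            Transfer { from: 'y', to: 'z', amount: 50 },
            Transfer { from: 'x', to: 'y', amount: 50 },
            Transfer { from: 'z', to: 'x', amount: 50 },
        ];
        println!("withdrawals first: {} overdraft events", overdraft_events(&start, &ring, false)); // 3
        println!("deposits first:    {} overdraft events", overdraft_events(&start, &ring, true));  // 0
    }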


Those transfers would be represented as having dependencies on both accounts they touch, and so would be forced to be ordered.

Transfer(a, b, $50)

And

Transfer(b, c, $50)

Are conflicting operations. They don't commute because of the possibility that b could overdraft. So the programmer would need to list (a, b) as the dependencies of the first transaction and (b, c) as those of the second. Doing so would prevent concurrent submissions of these transactions from being executed on the fast path.
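
A hedged sketch of how a system could make that check (hypothetical types, not any particular system's API): two submitted transactions conflict exactly when their declared account sets overlap, and only non-overlapping pairs get the fast path.

    use std::collections::HashSet;

    struct Txn {
        name: &'static str,
        accounts: HashSet<&'static str>, // declared dependencies: every account touched
    }

    // Any shared account makes the order of application observable (b could
    // overdraft), so the pair has to be ordered instead of run concurrently.
    fn conflicts(a: &Txn, b: &Txn) -> bool {
        !a.accounts.is_disjoint(&b.accounts)
    }

    fn main() {
        let t1 = Txn { name: "Transfer(a, b, $50)", accounts: HashSet::from(["a", "b"]) };
        let t2 = Txn { name: "Transfer(b, c, $50)", accounts: HashSet::from(["b", "c"]) };
        let t3 = Txn { name: "Transfer(d, e, $10)", accounts: HashSet::from(["d", "e"]) };

        println!("t1 vs t2 conflict: {}", conflicts(&t1, &t2)); // true: both touch b
        println!("t1 vs t3 conflict: {}", conflicts(&t1, &t3)); // false: fast path is fine
    }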


As I was implying with chains after the toy example, the issue of ordering matters when there's a long sequence of operations that touches many accounts. How easy is it to track all of the buckets touched when every source could itself have a source upstream in some other transaction?

A temporary table could hold that sort of data in a format that makes sense for rolling back the related transactions and then replaying them 'on the slow path' (if order matters).
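
One hedged way to answer the "how easy is it to track" question (a toy sketch, not any production design): union-find over the touched accounts groups every transaction that's chained through a shared account, which gives the unit that would have to be rolled back and replayed together on the slow path.

    use std::collections::HashMap;

    // Union-find over account indices.
    struct DisjointSet { parent: Vec<usize> }

    impl DisjointSet {
        fn new(n: usize) -> Self { Self { parent: (0..n).collect() } }
        fn find(&mut self, x: usize) -> usize {
            let p = self.parent[x];
            if p == x { return x; }
            let root = self.find(p);
            self.parent[x] = root; // path compression
            root
        }
        fn union(&mut self, a: usize, b: usize) {
            let (ra, rb) = (self.find(a), self.find(b));
            self.parent[ra] = rb;
        }
    }

    fn main() {
        // Each transaction lists the account indices it touches.
        let txns: Vec<Vec<usize>> = vec![
            vec![0, 1], // Transfer(a, b)
            vec![1, 2], // Transfer(b, c): chained to the first through b
            vec![3, 4], // Transfer(d, e): independent, stays on the fast path
        ];
        let mut ds = DisjointSet::new(5);
        for touched in &txns {
            for pair in touched.windows(2) {
                ds.union(pair[0], pair[1]);
            }
        }
        // Group transactions by the root of the first account they touch.
        let mut groups: HashMap<usize, Vec<usize>> = HashMap::new();
        for (i, touched) in txns.iter().enumerate() {
            groups.entry(ds.find(touched[0])).or_default().push(i);
        }
        println!("replay groups (by txn index): {:?}", groups.values().collect::<Vec<_>>());
    }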


I don't need AI for that automation, I need _good robotics that are cheap_.

This is the approach of 1X NEO Home Robot, and of Starship Technologies delivery robots.

They do the robotics part, and then remotely operate them (though on paper it is officially "hybrid").


I agree the AI play is dumb. But lots of people see unsolved problems in robotics and think "I'm sure an LLM can do that". I think Sweetgreen is doing the same.

For what, actually?

X X Windows L.E. => (e)X X Windows (Wayland) L.E. (Linux Edition)

Even the one major 'windows' app that my mom needs to use is going web-only... so I figure if I install Debian Stable + Widevine, that'll cover 99.9% of the use case, and I gain an OS that just works correctly.


Problems I face WRT Traffic / Commutes / Related factors:

* Drivers who can't just drive at at _least_ the speed limit. Flow is mentioned several times in the article, but flow is also a major part of the traffic issues I face daily. Every time drivers refuse to merge right to allow others to pass (state law here), every time drivers slow down instead of speeding up because they're unsure, and every time there's traffic enforcement for revenue rather than enforcement of the laws that would promote a smooth and steady commute, the rate of flow decreases. It lets other slower drivers merge into the gaps opened in front (which pushes the stack of cars further back and further slows the flow, compared to just going down the road). The only way to clear a log jam is to get the logs out of the river, or down the road in the case of traffic. After the blockage clears up, traffic should go slightly _faster_ to pull the flow forward, removing the pressure and restoring safety and expediency for drivers behind.

* Freeways built to hub-and-spoke, main-city designs, when I need to get around major geographic features (lakes, 'very big hills' with a couple of mounts along the most obvious paths).

* Nowhere NEAR enough housing built in the last 40+ years anywhere near jobs. (Solution: have good building codes and auto-approval if code conditions are met, and build, build, build.)

* Family with roots in an area far from where jobs are today... the suburbia of my childhood is not a center of well-paying white-collar jobs. (That's what hub and spoke to the big city used to be, before businesses escaped to other outlying areas.)


> Drivers who can't just drive at at _least_ the speed limit.

I don't mind people who drive under the speed limit (it is, after all, meant to be a limit and not a minimum speed), but they need to not hang out in the passing lane. Nobody should be hanging out in the passing lane in general, but you especially don't get to do it if you aren't even driving the speed limit.


Food -> 'basic needs'... so yeah, Shelter, food, etc. That's why most of us drive. You are also correct to separate Philia and Eros ( https://en.wikipedia.org/wiki/Greek_words_for_love ).

A job is better if your coworkers are of a caliber that they become a secondary family.


30 ms for a website is a tough bar to clear considering the speed of light (or rather, the speed of electrons in copper / light in fiber).

https://en.wikipedia.org/wiki/Speed_of_light

Just as an example, the round-trip delay from where I rent to the local backbone is about 14 ms alone, and the average to the web server is 53 ms, just for a simple echo reply. (I picked it because I'd hoped that it was in Redmond or some nearby datacenter, but it looks more likely to be in a cheaper labor area.)

However, it's only the bloated ECMAScript (JavaScript) trash web of today that makes a website take longer than ~1 second to load on a modern PC. Plain old HTML, images on a reasonable diet, and some script elements only for interactive things can scream.

    mtr -bzw microsoft.com
    6. AS7922        be-36131-cs03.seattle.wa.ibone.comcast.net (2001:558:3:942::1)         0.0%    10   12.9  13.9  11.5  18.7   2.6
    7. AS7922        be-2311-pe11.seattle.wa.ibone.comcast.net (2001:558:3:3a::2)           0.0%    10   11.8  13.3  10.6  17.2   2.4
    8. AS7922        2001:559:0:80::101e                                                    0.0%    10   15.2  20.7  10.7  60.0  17.3
    9. AS8075        ae25-0.icr02.mwh01.ntwk.msn.net (2a01:111:2000:2:8000::b9a)            0.0%    10   41.1  23.7  14.8  41.9  10.4
    10. AS8075        be140.ibr03.mwh01.ntwk.msn.net (2603:1060:0:12::f18e)                  0.0%    10   53.1  53.1  50.2  57.4   2.1
    11. AS8075        2603:1060:0:10::f536                                                   0.0%    10   82.1  55.7  50.5  82.1   9.7
    12. AS8075        2603:1060:0:10::f3b1                                                   0.0%    10   54.4  96.6  50.4 147.4  32.5
    13. AS8075        2603:1060:0:10::f51a                                                   0.0%    10   49.7  55.3  49.7  78.4   8.3
    14. AS8075        2a01:111:201:f200::d9d                                                 0.0%    10   52.7  53.2  50.2  58.1   2.7
    15. AS8075        2a01:111:2000:6::4a51                                                  0.0%    10   49.4  51.6  49.4  54.1   1.7
    20. AS8075        2603:1030:b:3::152                                                     0.0%    10   50.7  53.4  49.2  60.7   4.2
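
A quick back-of-the-envelope in Rust with assumed numbers (a ~1,000 km one-way path and light in fiber at roughly 2/3 c): the physics floor alone takes a real bite out of a 30 ms budget before routing, queuing, or any server-side work.

    fn main() {
        let c = 299_792_458.0_f64;       // m/s, speed of light in vacuum
        let v_fiber = c * 2.0 / 3.0;     // light in fiber travels at roughly 2/3 c
        let one_way_m = 1_000_000.0;     // hypothetical 1,000 km path to the datacenter
        let rtt_ms = 2.0 * one_way_m / v_fiber * 1000.0;
        println!("theoretical RTT floor: {rtt_ms:.1} ms"); // ~10 ms before any real work
    }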

In the cloud era this gets a bit better, but at my last job I removed a single service that was adding 30 ms to response time and replaced it with a Consul lookup with a watch on it. It wasn't even a big service. Same DC, very simple graph query with a very small response. You can burn through 30 ms without half trying.

Hell, even force threads to be allocated from a bucket of N threads defined at compile time. Surely that'd work for embedded / GPU space?
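
A minimal sketch of that shape with plain std threads (an actual embedded or GPU target would use its own static task table instead): the bucket size is a const, so the pool is fixed at compile time.

    use std::thread;

    const N_WORKERS: usize = 4; // the compile-time bucket size

    fn main() {
        // A fixed-size array of handles: no dynamic growth, the pool is baked in.
        let handles: [_; N_WORKERS] =
            std::array::from_fn(|id| thread::spawn(move || format!("worker {id} done")));
        for h in handles {
            println!("{}", h.join().expect("worker panicked"));
        }
    }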

The difference in Go is that you've _expressly_ constructed a dependency ring. Should Go or any runtime go out of its way to detect a dependency ring?

This is the programming equivalent of using welding (locks) to make a chain loop; you've just done it with the two-link case that's impossible in 3D space.

As with the sin of .await(no deadline), the sin here is not adding a deadline.


In my view, the major design sin was not _forcing_ failure into the outcome list.

.await(DEADLINE) (where DEADLINE is any non-zero unit, and 0 is 'reference defined' but a real number) should have been the easy interface. Either it yields a value or it doesn't, and then the programmer has to expressly handle failure.

Deadline would only be the minimum duration after which the language, when evaluating the future / task, would return the empty set/result.
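
Rust itself has no `.await(DEADLINE)`, but an executor can layer the same shape on top. A hedged sketch using the tokio crate's timeout wrapper (slow_operation is a made-up stand-in), where the Err arm is the forced "handle the failure" path:

    use std::time::Duration;
    use tokio::time::{sleep, timeout};

    // Hypothetical stand-in for real work that may take too long.
    async fn slow_operation() -> u32 {
        sleep(Duration::from_secs(10)).await;
        42
    }

    #[tokio::main]
    async fn main() {
        // Roughly the proposed shape: either a value arrives before the deadline,
        // or the caller is forced to handle the "no result" branch.
        match timeout(Duration::from_millis(250), slow_operation()).await {
            Ok(value) => println!("got {value}"),
            Err(_elapsed) => println!("deadline passed, no result"),
        }
    }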


> Deadline would only be the minimum duration after which the language, when evaluating the future / task, would return the empty set/result.

This appears to be a misunderstanding of how futures work in Rust. The language doesn't evaluate futures or tasks. A future is just a struct with a poll method, sort of like how a closure in Rust is just a struct with a call method. The await keyword just inserts yield points into the state machine that the language generates for you. If you want to actually run a future, you need an executor. The executor could implement timeouts, but it's not something that the language could possibly have any way to enforce or require.
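
To make that concrete, a minimal hand-written future (the block_on here is from the futures crate, an assumed dependency): constructing the struct does nothing, and only an executor polling it produces a value.

    use std::future::Future;
    use std::pin::Pin;
    use std::task::{Context, Poll};

    // "A future is just a struct with a poll method."
    struct Ready(u32);

    impl Future for Ready {
        type Output = u32;
        fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<u32> {
            Poll::Ready(self.0) // no timeouts, no scheduler: just this method
        }
    }

    fn main() {
        let fut = Ready(7);                           // nothing runs yet
        let value = futures::executor::block_on(fut); // an executor must drive it
        println!("{value}");
    }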


Rust's no_std does not have access to a reliable clock.

Does that imply a lot of syscalls to get the monotonic clock value? Or is there another way to do that?

On Linux there is the VDSO, which on all mainstream architectures allows you to do `clock_gettime` without going through a syscall. It should take on the order of (double digit) nanoseconds.
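
A rough way to see that cost from Rust (numbers vary by machine): `std::time::Instant` on Linux bottoms out in `clock_gettime(CLOCK_MONOTONIC)`, so timing a tight loop of clock reads approximates the per-call overhead.

    use std::time::Instant;

    fn main() {
        const CALLS: u32 = 1_000_000;
        let start = Instant::now();
        let mut last = start;
        for _ in 0..CALLS {
            last = Instant::now(); // vDSO-backed clock_gettime on mainstream Linux
        }
        let elapsed = last.duration_since(start);
        println!("~{} ns per clock read", elapsed.as_nanos() / CALLS as u128);
    }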

If the scheduler is doing _any_ sort of accounting to figure out even a rough sort of fairness balancing, then whatever resolution that accounting uses probably works.

At least for Linux, offhand, popular task scheduler frequencies used to be 100 and 1000 Hz.

Looks like the kernel is tracking that for tasks:

https://www.kernel.org/doc/html/latest/scheduler/sched-desig...

"In CFS the virtual runtime is expressed and tracked via the per-task p->se.vruntime (nanosec-unit) value."

I imagine the .vruntime struct field is still maintained with the newer "EEVDF Scheduler".

...

A userspace task scheduler could similarly compare the DEADLINE against that runtime value. The task would still reach that deadline after the minimum wait has passed, and could thus be 'background GCed' at a time of the language's choice.
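
A toy sketch of that idea (hypothetical types, not any real executor's API): the runtime already maintains a vruntime-like counter per task, and on each accounting tick it reaps tasks whose counter has passed the programmer-supplied deadline.

    // Toy per-task accounting a userspace scheduler might keep.
    struct TaskAccounting {
        name: &'static str,
        vruntime_ns: u64, // accumulated runtime, updated by the scheduler
        deadline_ns: u64, // the .await(DEADLINE) budget the programmer supplied
    }

    fn reap_expired(tasks: &mut Vec<TaskAccounting>) {
        tasks.retain(|t| {
            let expired = t.vruntime_ns >= t.deadline_ns;
            if expired {
                // In a real runtime this would resolve the await with the
                // "no result" outcome and drop the future.
                println!("{}: deadline exceeded, background-GC", t.name);
            }
            !expired
        });
    }

    fn main() {
        let mut tasks = vec![
            TaskAccounting { name: "fetch",  vruntime_ns: 40_000_000, deadline_ns: 30_000_000 },
            TaskAccounting { name: "render", vruntime_ns:  5_000_000, deadline_ns: 30_000_000 },
        ];
        reap_expired(&mut tasks); // would run once per scheduler accounting tick
        println!("{} task(s) still live", tasks.len());
    }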


The issue is that no scheduler manages futures. The scheduler sees tasks; futures are just structs. See the discussion of embedded above: there is no "kernel-esque" parallel thread.

Thank you. Every time I've tried to approach the concept of Rust's parallelism this is what rubs me the wrong way.

I haven't yet read a way to prove it's correct, or even to reasonably prove a given program's use is not going to block.

With more traditional threads, my mental model is that _everything_ always has to be interruptible, have some form of engineer-chosen timeout for a parallel operation, and address failure of the operation in the design.

I never see any of that in the toy examples that are presented as educational material. Maybe Rust's async also requires such careful design to be safely utilized.


Guess Rust is built more for memory safety than concurrency? Erlang maybe? Why can't we just have a language that is memory safe and built for concurrency? Like OCaml and Erlang combined?

Are you looking for Gleam? Simple but powerful typed functional language for BEAM and JavaScript. It's a bit high-level compared to OCaml in terms of needing a thick runtime and being somewhat far from machine code.

Really beautiful language design imo. Does a great job avoiding the typelevel brainfuck problem I have with Haskell.

https://gleam.run/


Rust is absolutely built for concurrency, even more so than for memory safety; it just so happens that memory safety is a prerequisite for thread safety. You're going to have a hard time finding any other industrial-strength language that statically prevents data races. If you can use Erlang, then sure, use Erlang. But if you can't use Erlang and you need concurrency, you're not going to find a better candidate than Rust.
