
That's almost certainly a bad idea in a web server context like this article is talking about. You improve best-case latency when the server's not loaded, but now you're using 3 threads per request to get a less than 2x speedup (and in a bigger example it would be worse), so your scaling behaviour will get worse.


Always spawning a thread is of course the naive implementation. You can put an upper bound on the number of threads and fall back to synchronous execution of async operations in the worst case (for example, inside the wait call).
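A minimal sketch of that bound-plus-fallback idea (names like `spawn_or_run` are illustrative, not from any real library): take a slot from a semaphore to spawn a helper thread, and if no slot is free, just run the task synchronously in the caller.

```python
import threading

# Hypothetical sketch: cap the number of helper threads with a semaphore
# and fall back to synchronous execution when the cap is reached.
MAX_THREADS = 4
_slots = threading.BoundedSemaphore(MAX_THREADS)

def spawn_or_run(task, *args):
    """Run `task` on a new thread if a slot is free, else run it inline."""
    if _slots.acquire(blocking=False):
        def wrapper():
            try:
                task(*args)
            finally:
                _slots.release()
        t = threading.Thread(target=wrapper)
        t.start()
        return t        # caller may join() this handle
    task(*args)         # worst case: synchronous execution in the caller
    return None
```

The caller never blocks waiting for a thread to become available; it simply loses the parallelism when the pool is saturated, which matches the "degrade to synchronous" behaviour described above.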

If your threads are a bit more than dumb OS threads (say, a hybrid M:N scheduler) you can do smarter scheduling, including work stealing, of course.
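The work-stealing part can be sketched very roughly (this is illustrative only, not an M:N scheduler): each worker pops from its own deque LIFO for locality, and an idle worker steals from the opposite end of a peer's deque.

```python
import collections
import threading

# Toy work-stealing queue set. A single lock guards everything for
# simplicity; real schedulers use per-deque synchronization.
class StealingPool:
    def __init__(self, n_workers):
        self.queues = [collections.deque() for _ in range(n_workers)]
        self.lock = threading.Lock()

    def push(self, worker, task):
        self.queues[worker].append(task)

    def pop(self, worker):
        with self.lock:
            q = self.queues[worker]
            if q:
                return q.pop()              # LIFO from own queue
            for other in self.queues:
                if other is not q and other:
                    return other.popleft()  # FIFO steal from a victim
        return None
```

Stealing from the opposite end than the owner reduces contention and tends to steal the oldest (often largest) pending task.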


Well, as your threads become less like threads and more like a future/async runtime, you come closer to the advantages and disadvantages of a future/async runtime, yes.


The underlying thread model has always been 'async' in some form under the hood, i.e. at some point there is always a multiplexer/scheduler that schedules continuations. Normally this lives inside the kernel, but M:N and purely userspace thread models have been used for decades.

Really the only difference between the modern async model and other 'threaded' models is its 'stackless' nature. This is both a major problem (due to the green/red function issue and the inability to abstract away asynchronicity) and an advantage (due to the guaranteed fixed stack size and the, IMHO overrated, ability to identify yield points).
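The stackless point can be illustrated with a generator-based coroutine: its suspended state is a single fixed-size frame (locals plus instruction pointer), not a full call stack, and every place it can suspend is syntactically visible as a `yield`.

```python
# Illustrative only: a Python generator is a stackless coroutine.
def fetch_pipeline():
    request = yield "connect"           # visible suspension point 1
    response = yield f"send:{request}"  # visible suspension point 2
    return f"done:{response}"

co = fetch_pipeline()
steps = [next(co)]              # run to the first yield
steps.append(co.send("GET /"))  # resume with a value, run to next yield
try:
    co.send("200 OK")           # final resume; generator returns
except StopIteration as e:
    steps.append(e.value)
```

This also shows the "red function" issue: `fetch_pipeline` can only be driven by a caller that knows it is a generator; the asynchronicity cannot be hidden behind an ordinary function call.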

At the end of the day it's continuations all the way down.
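That claim can be made concrete with a continuation-passing sketch: instead of returning, each step hands its result to the next step `k`, which is essentially what callback- or await-based async code desugars to under the hood.

```python
# Continuation-passing style: each step receives "the rest of the
# computation" as a function k and invokes it with its result.
def read(k):
    k("data")          # pretend I/O completed; invoke the continuation

def parse(raw, k):
    k(raw.upper())

out = []
read(lambda raw: parse(raw, out.append))
```

Whether the continuation is a kernel thread's saved context, a green thread's stack, or an async frame, the scheduler's job is the same: decide which continuation to invoke next.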




