Here's a great article somebody posted on HN a while back, "What Color is Your Function?"

-- http://journal.stuffwithstuff.com/2015/02/01/what-color-is-y...

The proper abstraction is threads--i.e. a stack structure used to store temporary values, shared by nested function invocations as well as similar structured programming constructs like if/else conditionals, which together represent a single thread of control through the program as it processes data and events. "Thread of control" is precisely what you're cobbling together with promises, async/await, etc., using clunky compiler constructs bolted onto the language model and implementation. The problem is that we conflate the implementation with the abstraction, sometimes by necessity (the legacy of the contiguous, so-called "C stack") but usually unnecessarily.

It pays to remember that some operating systems, such as some of IBM's mainframe platforms, IIRC, implement a thread stack as a linked list of invocation frames rather than a contiguous array. But it's completely transparent, as it should be. Async/await does the exact same thing, except it isn't transparent the way it should be. So now you have _two_ distinct, mutually incompatible kinds of functions solely because the implementation is unable to properly hide the gory details, which is ludicrous on its face.



I feel the conclusions of that article were pretty spot on. Having done a bunch of async programming in Go, and now picking up Node on the side, I can't help but agree that Go got this right. You have the simplest conceptual model (a synchronous one), with the runtime responsible for making it all run async. JS certainly seems to have come a long way, but it feels like the next step will be the most difficult one, requiring a serious redesign of the runtime itself.


I'm not fluent in Go, but I read this blog post about asynchronicity in Go: https://www.golang-book.com/books/intro/10. Is the idea that calls made with the go keyword only behave synchronously when a channel puts some barrier in place?


Only if you create a channel with no buffer: then every send or receive blocks. Go channels block when they have nothing to do--either no room to send anything (in the case of a default channel with no buffer) or nothing to receive (when the channel has no messages).
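
A minimal sketch of that rendezvous behavior (the channel name and value are made up for illustration):

    package main

    import "fmt"

    func main() {
        ch := make(chan int) // unbuffered: every send/receive is a rendezvous

        go func() {
            ch <- 42 // blocks until main() is ready to receive below
        }()

        fmt.Println(<-ch) // blocks until the goroutine above sends; prints 42
    }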

It's not uncommon to use a select statement to allow work to continue (it may act like a loop) while waiting to receive a message on a channel. This is the common pattern for handling timeouts: create a timer goroutine that wakes at a set time and sends a message on a timeout channel, keep checking whether the work is done, and if the timer fires first, cancel with an appropriate message (a function call or an error return value).
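
A rough sketch of that pattern, with time.After standing in for the hand-rolled timer goroutine (doWork is a made-up placeholder for the slow operation):

    package main

    import (
        "fmt"
        "time"
    )

    // doWork is a hypothetical slow operation that delivers its result on a channel.
    func doWork() <-chan string {
        done := make(chan string)
        go func() {
            time.Sleep(2 * time.Second)
            done <- "finished"
        }()
        return done
    }

    func main() {
        select {
        case result := <-doWork():
            fmt.Println(result)
        case <-time.After(1 * time.Second):
            // the timer fired first: give up and report a timeout
            fmt.Println("timed out waiting for work")
        }
    }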


With two distinct kinds of function call, you know exactly which one gives up control and which one doesn't. That allows for a synchronization-free programming model.

And even if your green-threads implementation multiplexes them onto a single thread, where technically you don't have to synchronize, identical-looking function calls make it very hard to know where your code yields, so you still end up using synchronization primitives, with all the problems that come with them.


Lightweight multithreading that enters a scheduler loop when I/O would block is great, and I haven't seen any advantages of async/await over it other than ease of implementation (you make the programmer or library author implement the multithreading rather than implementing it in your runtime).

Promises are at least slightly more interesting because they can compose functionally in various ways (but most of the time it's just a dozen then() calls in a row, where threading would have been better).


There are definitely patterns where a Promise API is more readable. You can implement a Promise API with or without separate threads, but if you execute it on a separate thread, the Promise function isn't limited in which library calls it can make.

FWIW, I'd be careful about conflating issues such as how to handle I/O. Threads don't imply preemptive scheduling. Lua coroutines are implemented as threads, but that's the extent of things--there's no built-in scheduler or event loop, nor features specifically directed toward implementing those things[1], just coroutine.resume and coroutine.yield. (yield can be called from any function at any depth, as long as the call stack is on a coroutine thread, which in Lua is every stack except the main one.) And that's fine by me. The most fundamental issue from the perspective of language design is how to represent a thread of control in a way that's consistent with the other structured programming constructs, like function calls. A batteries-included event loop or a preemptive scheduler is nice to have, but notably both are much easier to implement (internally or by a third party) and use if you have a thread construct.

[1] Debugging hooks notwithstanding.
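
Go has no first-class yield, but the resume/yield handoff can be loosely emulated with an unbuffered channel, where each send is a suspension point. A sketch only (squares is a made-up example, and unlike coroutine.yield, the channel has to be passed explicitly to wherever the "yield" happens):

    package main

    import "fmt"

    // squares plays the role of a coroutine body: each send on out is a
    // suspension point, and the goroutine stays parked there until the
    // consumer "resumes" it by receiving.
    func squares(n int) <-chan int {
        out := make(chan int) // unbuffered: send blocks until received
        go func() {
            defer close(out)
            for i := 1; i <= n; i++ {
                out <- i * i // "yield" i*i
            }
        }()
        return out
    }

    func main() {
        for v := range squares(5) { // each receive "resumes" the producer
            fmt.Println(v)
        }
    }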


I'm sure you can agree that not having to worry about all kinds of races, contention, deadlocks, time wasted on shotgun debugging while still never really feeling like your program is reliable enough, and the other "nice" things that come with shared-memory multithreading is a huge advantage for any concurrent program.


Multithreading is not synonymous with preemptive scheduling. That's an entirely different issue. A chain of promises or async/await functions is no better in this respect than so-called green threading, and in many cases is worse than a cooperatively scheduled framework that gives more explicit control over the points at which thread execution can switch--for example, a system built on stackful coroutines where execution switches only at explicit resume points, or a simple message-passing model where execution changes hands only at the point a message is transferred. These are basically continuation-passing style, except that, importantly, the _state_ of recursive function invocations is completely transparent to intermediate functions, without having to explicitly pass around state or annotate your function definitions. In other words, no different from how you'd write any other function.
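
A sketch of that message-passing handoff in Go (names are made up, and Go's scheduler is actually preemptive, so this only illustrates the handoff structure, not a cooperative guarantee):

    package main

    import "fmt"

    func main() {
        token := make(chan int) // the single, explicit handoff point
        done := make(chan struct{})

        go func() {
            for i := 0; i < 3; i++ {
                n := <-token // parked until the other side hands off
                fmt.Println("pong", n)
                token <- n + 1 // hand control back
            }
            close(done)
        }()

        for i := 0; i < 3; i++ {
            token <- i // hand the token (and logical control) over
            fmt.Println("ping", <-token)
        }
        <-done
    }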

That's my point. The better _abstraction_ for all these things is a thread, no matter how you choose to schedule chunks of work or how you choose to implement things under the hood. A thread is just a construct that encompasses nested function invocations, and that construct is what promises and async/await emulate, except that they leak implementation details and restrict the normal things functions do, like call other functions.


Here's the thing: if you can call functions that can themselves yield, you're in a shared-memory multithreading model where you can never guarantee that any given function won't yield, so you have to use synchronization to get that guarantee, with all the same issues.


In a cooperative multithreaded model you can only have data races at function-call boundaries. For example, `x += 1` can never data-race.
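
For contrast, under a preemptive scheduler the same increment does race. A sketch in Go (run with go run -race to see it reported; the counts are illustrative):

    package main

    import (
        "fmt"
        "sync"
        "sync/atomic"
    )

    func main() {
        var racy int64        // plain increment: load/add/store can interleave
        var safe atomic.Int64 // atomic increment: safe under preemption
        var wg sync.WaitGroup

        for i := 0; i < 1000; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                racy++      // data race: may lose updates
                safe.Add(1) // always ends at 1000
            }()
        }
        wg.Wait()
        fmt.Println(racy, safe.Load())
    }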


Except if your programming language allows you to override the += operator.


You're proposing a situation where someone overrides += specifically to call a blocking function and to do it incorrectly. I'm not saying it doesn't happen, but bad code is bad code regardless of your paradigm.

Though I must admit I've not done cooperative multitasking in a language with operator overloading, so I can't say whether or not this is a problem in practice.


It can happen, for example, in Python with gevent.



