Swift Evolution: Actors (github.com/apple)
264 points by mpweiher on March 16, 2021 | 154 comments



Chris's review is really impressive. Anyone doing code review should take note.


I agree: well-written, well-presented, well-considered.

(Of course I'm heavily biased against inheritance, but I think it's a very good review regardless.)


Technical aspects aside, I really enjoyed the empathy he showed: he spent considerable time praising the author for what he liked about the proposal, how well it was written, and how excited he was overall, and then skillfully transitioned into a critique of the proposal and not the person.


I love actors, but baking them into the language like this seems like the wrong approach to me.

The language should provide all the necessary abstractions to create an isolated context. Actors can then "just" be a default abstraction provided by the standard library.

Side note: has the Swift cross platform support improved? I was very much looking forward to using Swift as a general purpose language, but Apple didn't seem very interested in making it a true first class cross platform language. I believe IBM also dropped their "Swift for the server" efforts a while ago due to similar concerns.


But... the "provide all the necessary abstractions to create an isolated context" part is the hard part. It trickles into every part of execution. How do you do isolated contexts without saying any process can be preempted at any moment? How do you isolate things when there's shared memory? To do this right you end up with something like Erlang, which makes a lot of sacrifices to get there, though you get a fighting chance at dealing with concurrency. And if you bolt it onto the runtime without those pieces in place, you end up with something like Akka, which has tons of gotchas and ends up being way more confusing than just programming the way the JVM wants you to.

It seems like these fundamentals are in tension and tradeoffs need to be made, which means there won't be a one-size-fits-all paradigm. Maybe someone will create something that is truly magic and does everything, but more likely they're trying to sell you something.


There are also the various static checks that prevent you from accessing actor-isolated state outside the actor, or passing non-Sendable state across actor boundaries. It's not clear how a general language extension mechanism could implement those.
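
For illustration, a minimal sketch of what those checks look like under the proposal (syntax as drafted; the commented-out line is the kind of thing the compiler rejects):

    actor Counter {
        var value = 0                       // actor-isolated mutable state

        func increment() -> Int {           // runs on the actor; may touch `value` freely
            value += 1
            return value
        }
    }

    func use(_ counter: Counter) async {
        // let v = counter.value            // error: `value` is actor-isolated
        let v = await counter.increment()   // OK: serialized through the actor's mailbox
        print(v)
    }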


I wonder if Rust could do that, by making data lifetimes incompatible between actors, thus preventing data from being passed around except by sending.

I also don't see why a namespace, like that of a function, wouldn't be enough isolation to prevent peeking between actors.

Somehow Scala got Akka, the actor framework, without having to tailor the language for that.


> How do you isolate things when there's shared memory?

That's precisely what actors aim to solve. Actors pass messages, not memory.

> Akka

If you look at an actor framework built in a feature-rich language (Orleans), that problem isn't apparent at all. Even Rust, with all its ownership complexity, manages to handle actors elegantly, without an explicit plan to do so. Your critique is more relevant to Java than to actor libraries.


It seems like the problems of memory ownership and the first simple solution to it -- borrowing -- are fundamental to computing architecture.


I have some personal Swift-on-the-server projects deployed in "production" (as in, I use them every day, but I'm the only user) and it works great for me. They're running in Docker containers deployed to Heroku; there might be better ways to deploy it, but I'm not a backend engineer anymore so this is as much as I knew how to do.

I haven't tried it on Windows yet, but I know that it is officially supported now.


> The language should provide all the necessary abstractions to create an isolated context. Actors can then "just" be a default abstraction provided by the standard library.

A stated goal is static, compile-time prevention of concurrency errors. You can't do that with a concurrency library unless you restrict yourself to purely deterministic code.


While it takes some work I have successfully run Swift code on the usual suspects (iOS/Mac) as well as Windows, Android and Linux.

If you ditch Foundation and rely on C libs you are able to use it pretty much anywhere.


What are you using for GUI libraries? Or are these mostly accessed from the command line?

In my own experimentation, I’ve seen some success with Vapor 4 and PlotHTML


I've actually been developing my own UI library backed by SDL. It currently uses a DSL-like system similar to SwiftUI. I just spent some late nights working on a "hot-reloading" system, though it only works on Mac/Linux currently.

In theory any C based UI libs that support those platforms should work as long as they can be called from Swift.
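
As a rough sketch of that wiring (package and target names are hypothetical; `.systemLibrary` with pkg-config is standard SwiftPM), assuming SDL2 is installed:

    // swift-tools-version:5.3
    import PackageDescription

    let package = Package(
        name: "MyUI",    // hypothetical package name
        targets: [
            // Sources/CSDL2/module.modulemap would expose the C headers to Swift
            .systemLibrary(name: "CSDL2", pkgConfig: "sdl2"),
            .target(name: "MyUI", dependencies: ["CSDL2"]),
        ]
    )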


That’s pretty dang cool! I’d love to contribute, if you’re open to it.

The most difficult thing I’ve done with Swift/C was packaging a Python interpreter inside a Mac app with a few lxml dependencies called with PythonKit. Might not begin to scratch the surface of what you need though...


Could always use some help. It's still in its infancy right now, very much a PoC to see if it was worth attempting. Once it gets to a good enough state I will make the repo public and let you know.

Props to you for embedding Python in Swift, nice job. I haven't done Python, but Wren and Gravity were pretty challenging to get working right. I was going to use a scripting layer for the UI, but eventually came up with a solution that allows Swift to be used. I sort of mimic the system Apple uses for Playgrounds/Live Preview in Xcode.


> What are you using for GUI libraries? Or are these mostly accessed from the command line?

I'm not a Swift user, but the approach I've seen is people using it to run a localhost HTTP server for an HTML+JS frontend.


I'd be EXTREMELY interested in knowing more about that. I'm currently developing an iOS app in Swift, in the hope of porting the model layer to other platforms such as Android (I don't mind redeveloping the UI).

I have been carefully avoiding any Objective-C code so far, but I must admit I haven't been too careful about Foundation dependencies.

Do you have a blog post explaining what you did and what limitations you encountered?


I don't have any write-ups of my own. I was planning on eventually posting something; I have a lot of notes to sort through.

You can find some info by searching; there have been a few attempts to get it running on Android. Searching around GitHub will also net some other attempts.

The foundation dependency is the biggest problem. You can check out https://github.com/apple/swift-corelibs-foundation which gets rid of the objc runtime dependency for other platforms, though you will find yourself writing implementations for each eventually.
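
For illustration, the kind of platform-conditional import this leads to in practice (FoundationNetworking is where corelibs-foundation keeps URLSession on Linux):

    import Foundation
    #if canImport(FoundationNetworking)
    import FoundationNetworking   // on Linux, URLSession and friends live in this module
    #endif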

I would say if you are going to stay with just Android/iOS to look into maybe moving to Kotlin or C for your common layer and then call it from Swift. Kotlin multi-platform also has been improving recently.

EDIT: Also look at the Swift github, there are some docs about different platforms. https://github.com/apple/swift/blob/main/docs/Android.md


I was going to agree but looking at the proposal I don't.

"Actor" in this case is more like a new access modifier than another type.

I don't see how you could really do this in a standard library as cleanly.


> Side note: has the Swift cross platform support improved? I was very much looking forward to using Swift as a general purpose language, but Apple didn't seem very interested in making it a true first class cross platform language. I believe IBM also dropped their "Swift for the server" efforts a while ago due to similar concerns.

The reality is that the ability for this to happen _always_ depends on the motivation of parties to make it happen. With a proprietary language/compiler, you are at the mercy of the owner while with open source you can contribute the changes - but in either case you can't will other parties to see the business value and do the work for you.

There are still first-party projects to help with Swift on the Server (which is necessarily cross-platform), Swift has official containers on Docker Hub, and Apple does contribute to these efforts. x86 Ubuntu, CentOS, and Amazon Linux are part of Apple's official CI, while the community contributes CI for various other chip architectures, Android, and Windows.

However, the bulk of Apple's investment in Swift is obviously in their own interests - just like any other open source contributor. IBM and Google both stopped contributing officially, but from what I've seen it appears that has more slowed community work than stopped it.


I think part of the issue here is that Swift is radically more difficult to compile compared with other languages. At least the last time I checked, which was several years ago, Swift was using a number of slightly-modified versions of libraries compared to what you would get from a standard package manager, so setting up the environment to compile swiftc on an arbitrary Linux distro was no easy task.


“has the Swift cross platform support improved?”

That’s hard to answer without a “since when”, but the answer probably is “yes”. See https://swift.org/platform-support. There are Windows binaries now, for example.

Many of Apple’s libraries aren’t available on non-Apple platforms, though, so many would say it hasn’t improved enough to be competitive with other languages.


The "official" Swift LSP was begun in 2018 but there doesn't seem to be much going on there these days. The readme still says "SourceKit-LSP is still in early development, so you may run into rough edges with any of the features." That hasn't changed since the first commit. I can't help but feel that part of the problem is that the language is too complicated.


sourcekit-lsp ships with Xcode and has gotten much better. The first release would hang for minutes the moment you imported Foundation; these days it doesn't really hurt performance at all. (I use it in Sublime Text, FWIW.)


sourcekit-lsp is actually a pretty lightweight implementation, as it shares the same sourcekit code with Xcode. VSCode's sourcekit-lsp integration is OK to me.

The only issue for me is that I don't use Swift Package Manager, hence, the build system (Bazel) integration is simply not there (the BSP support from sourcekit-lsp is dead).


In addition to broader abstractions making code more readable and giving developers fewer footguns for getting the isolation wrong, it also gives the compiler more opportunity both to default to safety (as with the structured concurrency proposal) and to optimize protection and object allocation/retention patterns.


> baking them into the language like this seems like the wrong approach to me

Wasn’t it baked into Erlang (well, OTP, but like there’s a distinction), where the model originated?


The actor model didn't originate in Erlang; AFAIK the developers of Erlang implemented it accidentally and only learned of it later. Wikipedia[1] traces the history of the actor model back to the 70s, while Erlang was developed in the 80s.

[1] https://en.wikipedia.org/wiki/History_of_the_Actor_model


Thanks for the correction, I didn’t know that!


Yeah. I would say the Swift ecosystem is too detached from platforms other than iOS/macOS.

It's a nice language, but useless if you want to use it as a general-purpose tool.


I think "useless" is way too strong here. Swift doesn't have the same community support outside of Apple platforms as something like Rust and Go, but it's a very capable general purpose language with plenty of platform-agnostic packages available if you look for them. Also the C interop is great, so you have access to the last 50 years of prior art.

I think the argument is vastly overstated that Swift is not useful without Apple frameworks. It's similar to C# - you have the language, and then you have Windows APIs. Nobody would argue that C# is "useless" outside of Windows.


With the small detail that nowadays Microsoft is heavily committed to making C# and all the libraries that aren't using Windows-specific APIs work with the same level of support across all platforms.

Compare Swift on Windows versus .NET (VB, C#, F#) on macOS.


Oh I totally agree that C# has a much better cross-platform story than Swift.

My only point is that the argument I often see, that Swift is not useful outside of macOS/iOS because you don't have SwiftUI/CoreData/whatever, is not a good argument.


Kind of. Swift doesn't exist in isolation; there is competition to think about, and unless we are speaking about iOS devs using Linux servers, there is very little business reason not to choose some other technology with stronger cross-platform support.


Yep, it sure seems like professional adoption of Swift on alternative platforms is microscopic outside of people who were already iOS/macOS developers. I don't think this is what they had in mind when Lattner somewhat-seriously talked about the "World Domination Plan" in the earlier days.


Jony Ive - Introducing Swift Actors .......


Then it's lucky that 1 billion people use those OSes. The wealthiest 1 billion, at that...


It's OK-ish on Ubuntu and WSL, but Windows still lags hard. The worst thing is the very narrow set of libraries that work only on Apple devices; even the coolest open source projects are locked in and only target the Apple ecosystem (don't get me wrong, it's huge enough that ignoring other ecosystems is viable at the moment, sadly).


Why would anyone want Swift on other platforms?

From the start it seemed to be mostly a product of NIH and the necessity to move away from Objective-C while keeping some compatibility. Other platforms tend to allow you much better choices.


Personally I find it to be a nice blend of language features that I really enjoy. It's expressive and readable.

If you can find a language that you actually like to work in then why not use it on other platforms?

The dependencies really come down to the Apple frameworks. Ditch those and it's no different than any other general purpose language.


I have not seen any "much better choice". Swift is one of the best languages I have ever used on any platform.


> Why would anyone want Swift on other platforms?

Myriad reasons. If—for whatever reason—someone likes to program in Swift, having high-quality implementations available on various platforms is a win.

> Other platforms tend to allow you much better choices.

This assumes an objective measure of programming language goodness. Whether a language is better than Swift is an opinion. Opinions vary.


> we can implement a correct version of transfer(amount:to:)

    // Safe: this operation is the only one that has access to the actor's isolated
    // state right now, and there have not been any suspension points between
    // the place where we checked for sufficient funds and here.
    balance = balance - amount
    
    // Safe: the deposit operation is placed in the `other` actor's mailbox; when
    // that actor retrieves the operation from its mailbox to execute it, the
    // other account's balance will get updated.
    await other.deposit(amount: amount)
Am I correct in thinking that the system temporarily destroys money here (and hopes to recreate it if the receiver doesn't crash)?

If I have a closed system of actors constantly transferring money among themselves, and I poll to see the total amount of money in the system, will it fluctuate?


I was thinking the same thing as I read this. Keep in mind, these examples are meant to illustrate how Actors work, not how to write good banking code.


I think that you are correct. There is a paragraph on actor reentrancy: https://github.com/apple/swift-evolution/blob/main/proposals... which points out that another message can read the balance while a transfer is suspended at the await, and therefore the polled total can fluctuate.
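
A sketch of the window (simplified from the proposal's example, not verbatim): while `transfer` is suspended at the `await`, a reentrant actor may service a balance query and expose the money "in flight":

    actor Account {
        var balance: Double = 100

        func transfer(amount: Double, to other: Account) async {
            balance -= amount             // money has left this account...
            await other.deposit(amount)   // ...suspension point: the reentrancy window
        }

        func deposit(_ amount: Double) {
            balance += amount
        }

        func currentBalance() -> Double { balance }
    }
    // A concurrent poll of a.currentBalance() + b.currentBalance() taken during
    // the suspension comes up short by `amount` until the deposit is processed.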


There is also Swift Atomics[1] that is probably needed here

[1] https://swift.org/blog/swift-atomics/


This is an especially poor choice of example given that it's the canonical example for STM, and STM actually solves the problem.


They say that the `deposit` call is placed in `other`'s mailbox. As long as the mailbox is preserved between restarts, there is no danger of the money disappearing. The money will temporarily be unavailable, that is, in neither account's balance. But the money is still somewhere: in the mailbox.


A crash between updating balance and putting the message in the mailbox can still result in an inconsistent state.


A crash would destroy both balance and the mailbox, since they're in memory in the same process, so it's OK. The code that updates persistent storage isn't shown.


right, somehow I was thinking of persistent memory.


In the code above it certainly looks like that - is this not idiomatic swift-finance code though?


TL;DR: account balances are not a good example for simply explaining actors.

The actor model in general doesn't guarantee message ordering.

There is a possibility that all the withdrawal messages arrive first and all the deposits later.

A balance poll result makes sense only under the assumption that "all messages before this point in time have been processed".


Every time I read about new Swift features, I'm reminded of this paragraph from Aaron Hillegass's book "Cocoa Programming for Mac OS X". I don't know why.

    Once upon a time, there was a company called Taligent, which was created by IBM and Apple to develop a set of
    tools and libraries like Cocoa. About the time Taligent reached the peak of its mindshare, I met one of its
    engineers at a trade show. I asked him to create a simple application for me: A window would appear with a
    button, and when the button was clicked, the words "Hello, World!" would appear in a text field. The engineer
    created a project and started subclassing madly: subclassing the window and the button and the event handler.
    Then he started generating code: dozens of lines to get the button and the text field onto the window. After 45
    minutes, I had to leave. The app still did not work. That day, I knew that the company was doomed. A couple of
    years later, Taligent quietly closed its doors forever.


You can literally create the app described in his quote in about 2 minutes flat with the following lines of code...

  import SwiftUI

  @main
  struct FooApp: App {
      @State private var isClicked: Bool = false

      var body: some Scene {
          WindowGroup {
              VStack {
                  Button("Click me!") { isClicked.toggle() }
                  if isClicked { Text("Hello, World!") }
              }
              .frame(width: 640, height: 480)
          }
      }
  }


Even in the objc-Cocoa days, you could create a .xib, drag a few connections, and have this working with 10 lines of code (8 of which were generated automatically by Xcode).


It’s a funny excerpt, but I don’t see how it’s relevant to this. It seems to me that the whole point of introducing actors is to reduce complexity.


The main moral of this story is that people like to claim hindsight for some sort of psychic power.


As I said, I really can't put my finger on it, but maybe it is this (tongue in cheek).

    Taligent: The whole point of adding all those abstractions is to reduce complexity.


One of the funny things about our industry is that when our smartest really give their best to do The Right Thing™, we get Taligent and Swift.

And when people do everything "wrong", we get amazing things like Smalltalk and Cocoa.

Hmm...


What's wrong with Swift? It's a massively popular language


Remember, Swift is also used on the backend and in other areas that are not UI; not every new feature in Swift will be oriented toward making UI apps easier. For that you have the SwiftUI framework.

We don't get this kind of public proposal for SwiftUI. I wish SwiftUI were open source, because its documentation hasn't been great in my experience, and at least proposals like this get publicly documented and discussed before being implemented.


Funnily enough, I recalled this quote recently when casually looking over this page about WinUI 3: https://docs.microsoft.com/en-us/windows/apps/winui/winui3/x...

The whole XAML/WPF thing just seems to me a bit of a dog's dinner.


I like Actors. I have played with them in both Pony and Elixir. They were always an obvious next step from Smalltalk style everything-is-a-fricking-message oriented programming/thinking.

Is it just me, or is Swift getting really complex? As I read through Lattner's evaluation, I found myself wondering, "hadn't heard of that @Thing yet. I wonder how many there are that I don't know about."

While I was impressed by some of the nuance that goes into the combinatorial complexity of the many design patterns Swift wants to capture under its broad umbrella to make it a language for everyone, I have to accept that I will probably never have the bandwidth to acquire the High Priest level of knowledge needed to really get the thing. Which means the odds of ending up with unexpected and surprising behavior actually rise. I feel more and more this way about Kotlin, Rust, and even Python these days.

Reading/writing code in these increasingly multi-paradigm ecosystems is starting to feel like reading epic poetry (e.g. Dante or Beowulf), but where different sections are written in different languages, with different prose and rhyming styles, some pieces a mix of two left for the reader to figure out.

I used to stress about getting these systems enough to code idiomatically in them, to use them "right." But more and more, I'm finding myself developing personal dietary menus in each, where I pick a nutritious subset that I can wrangle into some sort of expressive consistency, and just write code in that subset, idiomatic-fling-of-the-season be damned.


My experience with Swift was that there was a section of the language API that I used heavily, then I had features I would use every now and then, and others I would use almost never (like keypaths). I think one of the advantages of a multi-paradigm language is that you have the freedom to carve out a sub-language which is appropriate for a given problem or for your personal taste.

That said, I think Swift does sometimes feel like it's missing a metaprogramming system. Like property wrappers and function builders could probably be generated by a sufficiently powerful macro system, but instead they exist as one-off features with significant complexity.
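
For a sense of the machinery involved, a minimal property wrapper (a toy of my own, not from any proposal): the compiler rewrites accesses to `health` to go through `wrappedValue`:

    @propertyWrapper
    struct Clamped {
        private var value: Int
        let range: ClosedRange<Int>

        var wrappedValue: Int {
            get { value }
            set { value = min(max(newValue, range.lowerBound), range.upperBound) }
        }

        init(wrappedValue: Int, _ range: ClosedRange<Int>) {
            self.range = range
            self.value = min(max(wrappedValue, range.lowerBound), range.upperBound)
        }
    }

    struct PlayerStats {
        @Clamped(0...100) var health = 150   // stored as 100
    }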


> Is it just me, or is Swift getting really complex? As I read through Lattners evaluation, I found myself wondering "hadn't heard of that @Thing yet. I wonder how many there are that I don't know about."

Probably a fair few, although I wouldn't blame you for not knowing about something like @Sendable - since that is closely associated with the actor proposal.


Oh! Actors are one of my favorite paradigms as of late. I have been reading plenty about Akka as well.

When I look into the subject further, I fall into a rabbit hole of how actors are sometimes treated as a database that is hard to query and provides no transactionability across actors. There is some argument that actors could be reimplemented using Postgres' "SELECT ... FOR UPDATE", since you can lock the row for the duration of the transaction.

Does anyone else with experience managing large amounts of data inside actors have a say about this?


You have to walk a fine line with Akka IMHO. Having inherited and supported an Akka Cluster for 4+ years, I'd strongly recommend that you evaluate whether you really need an Actor based system or if a plain-old set of services talking to each other via a message-broker would suffice.

The particular system my team inherited gave us nothing but issues: cluster coordination issues, so quorum wasn't met and things wouldn't start cleanly, network partitioning issues so nodes would randomly be considered dead, actors "becoming" (the term used IIRC) other types of actors based on messages received... the whole thing was just.. too ephemeral and organic for our tastes. I used to get excited thinking about systems like that, but I've since grown more conservative in my technology / architectural preferences (i.e. choose boring technology). We actually ended up replacing it with exactly what I suggested: a plain-old set of services talking via a message-broker. It's stupid simple and we've all slept better.

Edit: I will say that the use of actors constrained to a single service to handle concurrency is probably way more supportable than in a clustered mode.


With something like Elixir/Erlang, the distributed system is quite robust and reliable from my experience. A bit rigid and somewhat difficult to configure for custom topologies, but dependable overall.

>a plain-old set of services talking via a message-broker

That said, I think you're absolutely right with distributed Akka. I'm very hesitant to fully embrace it, and we use simple service-level APIs to communicate between nodes. I understand the developers have made a lot of progress on the functionality of remote Akka over the years, but it's just not as tried and true as I would like. Using Aeron for message transport for example, is something that may be the best tool available, but is really hard to sell to my org when simple services are more approachable and maintainable.

>actors "becoming" (the term used IIRC) other types of actors based on messages received

Yeah... I didn't understand why my org was using the Classic Akka instead of the new and improved Typed Akka for a long time. But cool-looking things like this just aren't worth it sometimes. Especially when Classic Akka "just works".


I don't have experience with managing large amounts of data with actors. But I can say from the actor experience I do have that you can't think about it in the mindset of "how can I replace my relational database with actors".

Actors can be a near-silver bullet for some things, but you have to think about solutions differently. I always thought a bank was a good example. One actor per account: how do you think about money in flight and such? Probably in a fundamentally different way than you would in Postgres.

> hard to query

That one is easy though, CQRS


I use Akka extensively and have not heard of anyone using (and would never use) actors in that capacity.


Once you have actors it helps to have a priority mechanism, latency guarantees, a scheduling abstraction so work can be partitioned, some sort of boost mechanism to prevent starvation, a timer abstraction, and a state machine abstraction to carry out work flows.


Swift concurrency covers those in other proposals, although not as "guarantees", since it doesn't define a realtime system.


The idea of the "mailbox" reminds me of Dispatcher.BeginInvoke in C# on Windows. It is very powerful, but can also lead to messy code where many threads are mutating shared state and failing to fully reason about all the other mutations that can happen.

Systems with explicit communication, i.e. based on queues, encourage the programmer to limit the number of possible ways in which worker threads can interact with the thread that manages mutable state. They also require something like a big switch statement where all possible interactions are listed in one place. It seems like in actor-based code, all the possible interactions with the mutable state could be spread across the code base.

On the other hand, these hand-written message loops and switch statements are essentially the same code that the compiler would generate for an actor's mailbox processing loop. One could argue that writing them by hand is a waste of effort. Programmers simply need to use discipline. (Similar to virtual dispatch in C++, for example.)

I am curious about the opinions of programmers more familiar with concurrent architectures. Does the actor model make it easy to write concurrent spaghetti code?


>Does the actor model make it easy to write concurrent spaghetti code?

I think you could write concurrent spaghetti code in any environment just as easily, to be honest. I also think it really depends on which actor-model implementation you're using. Each offers their own unique experience, as I've found that there's a noticeable difference in the Scala+Akka approach vs Elixir+OTP, for example.

The actor model is just one way of reasoning about concurrency. Your mailbox serializes/linearizes your interactions so you don't necessarily need locks. Because you get that for free, you can write "single-threaded" code to handle each message. Messages can be sent across a network, so now you can concurrently interact with remote actors too. The simplicity of the model stops there.

Concurrent code gets complicated very quickly by nature. If you architect your application to have messages causing ripple effects in your system, your application behavior is going to be very difficult to reason about. But that's nothing new, either. Instead of sending messages, concurrent function calls could produce the same issue.

Is reasoning about message flows and behaviors hard? Yes, but that's just a byproduct of concurrency. Like you said, "programmers simply need to use discipline", but everyone will disagree on what discipline looks like. Each actor model implementation will have their "best practices" to mitigate the complexity of concurrent interactions, as will each organization using said implementation for their project. At my work, the way we use Scala+Akka was very structured and not at all the way I expected, having been used to Elixir+OTP.

As I think the issues of the actor model are actually issues with reasoning about concurrency in general, I would pick actors over coroutines just for the up-front structure and simplicity it provides.


> Is reasoning about message flows and behaviors hard? Yes, but that's just a byproduct of concurrency.

Maybe it's just me, but message flows and behavior is a lot easier to think about than concurrent access to shared state.

In an Erlang world, it's fairly easy to look at a process's code and see what it does for each message it might get, and think about if it's correct or not. Of course, it's sometimes hard to see which messages it might get from other code; also the system behavior can be a challenge to understand at times, although generally going back to making sure each actor does the right thing, and making sure the right number are running gets you most of the way there.

With the re-entrant Swift actors proposed in the link, I'm not so sure it will be easy. It looks like if your actor A's method needs to call another actor and read the response, that opens actor A to running other methods between the call and response; that makes it hard to think about again.


>Maybe it's just me, but message flows and behavior is a lot easier to think about than concurrent access to shared state.

In general, I agree with you. I think the actor model makes concurrent code readable, understandable, and potentially easier to maintain due to that readability. People inexperienced with concurrency might find it a lot more approachable than the alternatives; an actor feels so similar to an object at times and are easier to use than threads or green-threads. I've personally found that actors also give me performance right out of the box, when I could have really screwed something up with locking.

But other than that, doesn't it all boil down to the same thing?

Actors serialize access/modification of state through ordered messages. Locks serialize access/modification of state through acquiring the lock. And when you have complex access or modification patterns in your code, the actor model doesn't help all that much. To me it's like each message acquires the lock to the actor state.
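
A side-by-side sketch of that analogy (a toy counter; Swift 5.5-era actor syntax assumed):

    import Foundation

    final class LockedCounter {
        private let lock = NSLock()
        private var value = 0

        func increment() -> Int {
            lock.lock(); defer { lock.unlock() }
            value += 1
            return value
        }
    }

    actor ActorCounter {
        private var value = 0

        func increment() -> Int {   // callers queue through the mailbox instead
            value += 1
            return value
        }
    }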

Sure, you can see what messages cause what behavior or state change for an actor A, but the rest of the system can still be complex. Which actors send a given message to actor A, and why or when do they do this? It's not obvious, and the same complexity can be found in event-driven systems, lock-y threaded systems, etc.

>With the re-entrant Swift actors proposed in the link, I'm not so sure it will be easy.

I'm personally not convinced about re-entrant actors. I don't see why you'd give up the guarantee of blocking until a message is fully handled.

They state:

"Moreover, it helps ensure that actors can make timely progress within a concurrent system, and that a particular actor does not end up unnecessarily blocked on a long-running asynchronous operation (say, downloading a file)."

This example is particularly weak in my opinion. An actor could block on the long-running async operation, of course. But isn't that the brute-force/naive approach? I can't see anyone doing this in practice.

You could create a long-running actor whose only job is to sequentially handle requests to download files and send the file descriptors back as a message response. You could create a worker pool to do that as well. You could decompose your one actor into a staged pipeline of actors, so any requests that block can go to one pipeline, and other swiftly handled messages can go to another pipeline. Of course, none of these are simple solutions, but neither is designing an actor with re-entrancy in mind.

>that makes it hard to think about again

I agree. However, they do seem to be pushing the envelope and not just reinventing the wheel. I'm curious to see how this plays out.


> But other than that, doesn't it all boil down to the same thing?

> Actors serialize access/modification of state through ordered messages. Locks serialize access/modification of state through acquiring the lock. And when you have complex access or modification patterns in your code, the actor model doesn't help all that much. To me it's like each message acquires the lock to the actor state.

I think yes and no. The big thing for me is it's easy to mess up locks. I've locked one thing and modified another, or read outside the lock, then locked to write, and the value changed. I also just did something similar except there's a mix of locks and atomics (kernel side of pthread umutexes needs to use atomics to mark the mutex as contested).

Actors serialization of work via mailbox (and its implicit locking) makes it hard to make basic mistakes. Of course, complex flows are still complex.

We're definitely in agreement on re-entrant actors. On the plus side, if it doesn't work well, I expect they'll change course.


(some thoughts)

The "actor properties can only be accessed via self." constraint seems a very elegant and simple static expression that is understandable in a program. Love it.

On top of that, the rest of the proposal appears to stick to "message passing is method invocation" way of thinking about objects and therefore syntactically both get mixed up. I have a hunch that the language would be better off being explicit about message passing and keep it separate from "invoke an async method". (The proposal requires automatic conversion to message passing when it sees an async method invocation on an actor.)

The "access immutable properties directly" facility should reduce a lot of boilerplate that would otherwise crop up, but actor usage itself should stay away from styles that require such cross-actor access (opinion).


>I have a hunch that the language would be better off being explicit about message passing and keep it separate from "invoke an async method".

What would be the benefit of this? (I haven't read the proposal in detail, but) If you can invoke an async method of an actor, it still has to be run in serial with the execution of messages to avoid races.

Besides, in the implementation I doubt that "messages" will be anything more than delayed async methods (and the implementation notes seem to support this). Actual actor systems rarely treat messages as first-class serializable entities - unless they hope to execute across multiple machines.


You're right on both counts. The mechanism of implementation will be very close .. which is what is in the proposal too.

Having programmed in Erlang, I feel that the notation (i.e. syntax) influences moment to moment thinking. So if there is overlapping notation between method invocation and message passing, I expect increased opportunity to code and design incorrectly, resulting in subtle bugs.

Making "async message passing" syntactically explicit and obivous clarifies a lot and prevents expectations of calling other methods, facing compiler errors, or worse having to deal with specified behaviours that may not be what I had in mind.

Edit: one example of this confusion in the spec is the whole part about inheritance in actors ... which looks like a terrible idea to me.


Investment in STM (Software Transactional Memory), through implementation of supporting data structures, could be a more effective approach to this problem space.


Clojure’s STM is basically unused: from what I can tell, STM turns out to be better in theory than in practice.


The problem IMHO is that there aren't a lot of problems that are particularly suited to STM.

I can't think of many instances where I would use STM over CAS (e.g. an atom in Clojure); and the times I have needed something akin to STM (e.g. building an incremental dataflow engine with transactions) I wanted more control over the runtime behavior than Clojure's STM provided.

Plenty of business systems need transactional guarantees and either durability or distribution (or both), which makes an external store like Postgres or Redis a ton more attractive than STM within a single process.


Some manner of pluggable STM protocol to define a standard programming model on top of conflict detection would be pretty nice tho, imo.

Features like Postgres's serializable transaction isolation, which as a non-expert I think of as basically database-implemented STM, are really flexible, but most ecosystems' programming models don't really expose this capability in a foolproof and natural way. You have to work too hard to take full advantage of the capability...


I don't think image computing is a good idea and so I don't think complicated memory updating abstractions like this are necessary.

The other problem is that Intel tried to add STM to their hardware, but it has so many bugs that it's been disabled in multiple CPU generations. I only know one program that uses it and it's a PS3 emulator, not a database.


I really like how Swift is shaping up. They keep making choices that imho give you good balance of ergonomics and performance.


I can't help but feel the opposite, as if Swift is a dog chasing cars. Function builders are a really awkward, unnecessary feature that was implemented so that SwiftUI could have a decent-looking DSL. 'some' types showed that the whole protocol-oriented programming model breaks down in some serious ways. And now they are looking at doing actors as whole new built-in functionality rather than just a concurrency library. It looks like the Swift language is just too rigid and unable to evolve in useful ways without the blessing of the language developers themselves.


> Function builders are a really awkward unnecessary feature that was implemented so that SwiftUI could have a decent looking DSL

I totally agree. It's actually a big part of the reason I pulled back from Swift development for personal projects, since it appears that the core team has no problem shoe-horning in half-baked features if it's required to make the necessary impact at the keynote of WWDC.

However I think that example (along with property wrappers) shouldn't be used to impugn the language design decisions as a whole. Protocol-oriented programming is extremely powerful in Swift, and since I have switched to mostly using Rust over the past year for personal projects, there are a lot of things about Swift I really miss.


> ...since it appears that the core team has no problem shoe-horning in half-baked features if it's required to make the necessary impact at the keynote of WWDC.

To be fair, that 'shoe-horning' took nearly a year and a half. Apple was willing to ship the feature but took their time and got lots of community feedback before a version of it became officially part of the language. I believe SwiftUI has already migrated over to the standardized version of it.

SwiftUI is in some ways amazing - because you (rightfully) get this feeling that they had several ridiculously-overqualified compiler experts working in tandem with UX experts and low-level framework engineers on it.

But the flip side is that you don't have a lot of parallels to draw experience from when using it - you must gain experience by working with SwiftUI itself.


> Apple was willing to ship the feature but took their time and got lots of community feedback before a version of it became officially part of the language.

So let me tell you, as someone who was a very active member of the community when these features were announced, this is not what that felt like. I, like the rest of the community, found out about function builders by noticing code presented on a slide at WWDC which would not compile under the then-current version of Swift. There was then a large debate within the community, where members of the core team presented post-hoc rationalizations about how "Swift was always intended as a language which would support DSL's and declarative programming". But the message was clear - this feature was going into the language, with zero prior community review or input, because it was required for the business goals of Apple.

The thing that made this particularly jarring for many in the community is that it was so far out of character for the general language evolution process. If anything Swift had been known for being slow at adopting new features. Every addition or change had to meet a very high standard of 1. feeling cohesive with the language, 2. not limiting the future design space, and 3. not presenting risks with respect to the ABI.

Highly necessary features like variadic generics with obvious utility languished for years, cross-platform support and tooling remained in this grey zone of working in some select cases but not others, and here were radical changes to the language being made which were not known to the community at all. For me and a lot of others it clarified things about the direction and governance philosophy of the language.

> But the flip side is that you don't have a lot of parallels to draw experience from when using it - you must gain experience by working with SwiftUI itself.

What would be the big differences you would see with other hot-reloading FRP-style frameworks like React and Flutter?


> So let me tell you, as someone who was a very active member of the community when these features were announced, this is not what that felt like. I, like the rest of the community, found out about function builders by noticing code presented on a slide at WWDC which would not compile under the then-current version of Swift. There was then a large debate within the community, where members of the core team presented post-hoc rationalizations about how "Swift was always intended as a language which would support DSL's and declarative programming". But the message was clear - this feature was going into the language, with zero prior community review or input, because it was required for the business goals of Apple.

Yes, and this is one of the reasons that having well defined walls (and having names for those) is really important for commercially-backed open source projects.

It is perhaps clearer to say there is Swift, the language and open source implementation which has an open process for contributing changes. Then there is Apple iSwift, a vendor fork of that language, still open source, that Apple uses to leverage Swift for their platforms. This is similar to the relationship Apple has with clang and used to have with gcc (writing their own precompiled headers and blocks features back in the day, plus objective C itself at one time).

Function Builders were added to iSwift, and then Apple spent over a year getting it into the Swift language proper. Changes which affected the syntax of a mainline Swift feature were not off the table, although it would have affected the timeline of Apple being able to migrate developers to it.


Realistically, Apple is doing 95% of the lift, and while this might have been handled better, given that Apple bet the house on Swift it is unrealistic that they would not push features they critically need for their products. Pragmatically speaking, of all the choices in this space Swift is shaping up to be a very strong option, and compared to, say, Go, it is way more open in the way it is being developed.


> Function builders are a really awkward unnecessary feature that was implemented so that SwiftUI could have a decent looking DSL.

Ugh. I agree. I haven't actually used SwiftUI in anger yet, so I try to reserve my judgement, but the whole thing seems like a lot of magic and complexity to me (not to mention reports of some performance issues/gotchas).

Didn't they also add some weird dynamic JavaScripty property getters or something?



> 'some' types showed that the whole protocol-oriented programming model breaks down in some serious ways.

How so? Rust has an equivalent feature (impl Trait) for example. It solves a legitimate problem.


I really enjoy working with Swift but after looking at this Actor proposal I think I agree with you. It seems like you'd always want to use actors? Unless you specifically don't want things to be concurrency safe. It feels like something the language should have had from the start or should have as an external library.


> It seems like you'd always want to use actors?

My impression is that when working with Swift your first tool of choice should be a value type, so a struct or an enum. Actors would only be desirable when you need class-like behaviors and need concurrency. Maybe I'm missing something, but I suspect many of us will never use actors directly at all. I certainly don't see any parts of my code which demand this kind of structure.


You'll want actors if you have shared mutable state. But the most common usage of concurrent programming will be covered with simple async / await calls.


I can't help but get the feeling that the plan with Swift is to be the only language you can effectively use on Apple platforms, but also that it will probably be a language used only on Apple platforms, because of Apple's whims like the function builders.


Realistically if choosing between C++, Rust, Go and Swift for things that are not highly performance sensitive I would happily go with Swift.


agree.


The point of doing concurrency as a language feature is that it can stop you from writing incorrect programs in a way that a library can't. Besides, Swift already has concurrency libraries because Objective-C did - dispatch, NSOperation, NSRunLoop and so on. The problem is it has too many of them.


Agree. Either choose to use lisp syntax and lisp macros or add all the language "features" ad-hoc one by one over the years.


That is stretching the quote fairly thin. Programming in Swift is nothing like programming in Lisp on many levels.


Eh, it's kind of an interesting question. Where is the line between what should be a language feature and what should be a macro? At the extreme, arguing a feature should not be part of the language but rather a library does translate to advocating Lisp for everything.

But concurrency is a really interesting case. Concurrency solutions are highly specific. I think we'd all agree there is no "answer" for concurrency, which puts it square in the library category. And there are consequences to getting your language's concurrency solution wrong. For example, I'd point to Clojure's built-in STM as a huge swing-and-a-miss. Otoh Erlang is the poster child for why doing concurrency right really does require language-level support.

What's the way out? Imo either build your language around concurrency from day 1, ie Erlang, Go, Pony, or accept that concurrency will always be a best effort good-enough solution in your language.


>> Programming in Swift is nothing like programming in Lisp on many levels.

Precisely. "Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp." https://en.wikipedia.org/wiki/Greenspun%27s_tenth_rule


I feel like it's really lame that it's Swift version 157 or whatever and they still don't have a stable async/concurrency story besides callbacks. What's more, they actually chose an error handling mechanism that doesn't even work with callbacks/closures! So sometimes you have a throwing function and sometimes you return a Result.

As much as I actually do enjoy working with Swift (real type classes!!), there are some parts that are a big WTF.


> they still don't have a stable async/concurrency story besides callbacks. What's more is that they actually chose an error handling mechanism that doesn't even work with callbacks/closures! So sometimes you have a throwing function and sometimes you return a Result.

This is exactly what the new async/await feature solves though. Actors are built on top of that.
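
For illustration, a sketch of how the feature recombines the two styles (Swift 5.5-era syntax): checked `throws` now carries across suspension points, with no Result wrapper needed:

    enum FetchError: Error { case offline }

    func fetch() async throws -> String {
        throw FetchError.offline          // a checked error, even though async
    }

    func caller() async {
        do {
            let value = try await fetch() // one suspension point, one error channel
            print(value)
        } catch {
            print("failed: \(error)")
        }
    }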


Yeah, I know. I'm just lamenting how long it took to get to async/await. It's weird for the official language of a mobile app ecosystem to not have a pretty strong async story pretty early on, IMO.


They already had GCD and Operation provided by the platform, which worked perfectly with Swift, and dealt with concurrency very nicely.

They weren’t under any pressure at all to have any other ‘async’ story early on and so intentionally chose to focus on other fundamentals and take their time to build something that would be well designed and additive.
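
For reference, a minimal sketch of the GCD style being described:

    import Dispatch

    func expensiveWork() -> Int {
        (1...1_000_000).reduce(0, +)
    }

    let queue = DispatchQueue(label: "com.example.worker")
    queue.async {
        let result = expensiveWork()
        DispatchQueue.main.async {
            print("result: \(result)")    // hop back to the main queue, e.g. for UI
        }
    }
    dispatchMain()   // command-line only: keeps the process alive for the callbacks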


I argue that GCD, etc., don't actually work perfectly with Swift, though. Swift functions use a flavor of "checked errors" in the function signature via the `throws` keyword. When you're passing around callbacks or implementing Operation, there's nowhere to allow a `throws` function. Very similar problem to Java with checked exceptions and closures.

The early versions of Swift didn't even have a standard Result type, IIRC.

So even today, you will see some Swift functions that have `throws` in the signature and some that return `Result<T>`.

It was always awkward.
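
A toy example of the mismatch (names are hypothetical):

    enum FetchError: Error { case offline }

    // Synchronous code gets checked `throws`...
    func fetchSync() throws -> String {
        throw FetchError.offline
    }

    // ...but a completion handler has nowhere to put `throws`, so
    // callback-based APIs grew `Result` instead:
    func fetchAsync(completion: @escaping (Result<String, FetchError>) -> Void) {
        completion(.failure(.offline))
    }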


They work perfectly in that they are not buggy, and they are efficient and reasonably straightforward to use and work as designed with Swift.

I agree that they are not idiomatic, which is the point I’m making.

There was a perfectly working solution that had no barriers to its use with swift, so it made sense not to rush the replacement.


If you agree that it's not idiomatic (and maybe I can get you to agree that it's more than just non-idiomatic when it comes to async operations that can fail), then are you actually refuting my original comment: "It's weird for the official language of a mobile app ecosystem to not have a pretty strong async story pretty early on, IMO."?

Maybe it was the right call in the end (I'm of the opinion that it literally doesn't matter. Apple could ask us to develop in COBOL or even something as crazy as Objective-C and we'd all still do it), but I still think it's fair to say it was weird/surprising to have awkward/difficult async tools for an official mobile OS language.


Having developed shipping apps in swift 1.0, using GCD, it wasn’t any more awkward than using GCD with objective C, indeed generally it was easier because Swift is more concise.

I guess in relation to your original statement, I am refuting it.

They had a strong solution which was pretty easy to use, and as good as the language they were replacing, and well integrated with the platform.

The idea that they ‘didn’t have a strong solution’ doesn’t really hold from my point of view.

It was strong, and working which gave them the luxury of time to develop an idiomatic solution as the language matured.

If there really hadn’t been a strong solution in place, I would have agreed with you.


> I feel like it's really lame that its Swift version 157 or whatever and they still don't have a stable async/concurrency story besides callbacks.

It is frustrating, but it seems like they are finally getting there. It’s frustrating since a language like Go had a good concurrency model almost from the start. But Go had different constraints and arguably has less surface area to cover.


Go is arguably a simplistic language built _around_ a simplistic concurrency model.

Go is twice as old (since initial public release) and still doesn't have decent support for generalized algorithms, for example. They have struggled just as long to come up with a more usable error model.


Over a long enough timeline, all languages converge on the Erlang feature set!


Erlang has an incredibly compelling value proposition! The problem is that the tooling and ecosystem are not really there. I tried to dip my toe into it maybe a year ago, and the error messages have not held up according to modern standards.


Have you tried Elixir? It's a thin language wrapped over Erlang, filling in the holes of the areas you mentioned, and was designed for exactly that purpose (programmer ergonomics).


No I haven't but that sounds like a great concept for a project


I really don’t like that we have the overhead of a new keyword called actor


What kind of overhead do you think a keyword could bring?


It is called language verbosity... over time it adds up and creates a language that is difficult to work with, and hard for beginners to start with.


I actually think one of Swift's strengths is that it takes progressive disclosure of complexity seriously. It's possible to write a hello world which is no more complex than the python version, and gradually learn about the more esoteric concepts as you need them.

By contrast, a language like Rust is far more parsimonious with introducing keywords, but I think I encountered 10 new concepts in the first month I was working with the language, all of which I had to understand at some level in order to progress with my project.


This seems to be a way to tackle concurrency, without addressing distributed programming.

Why?

As far as I can tell, both Erlang and Akka (the two cited examples that implement actors) do both concurrent and distributed systems.


How would you want to address it?

There is already a distributed programming library on the system called XPC, so people already have experience with it, but you certainly can't program as if every method call might become remote. Mainly the problem is every call can fail, and sometimes retrying is correct and sometimes it isn't, but also the costs of passing a large function parameter become different cross-process and then more different cross-machine.

Note ObjC already had some language features for an older library (DO) like 'inout' parameters, but they were actually removed because XPC instead only uses callbacks.


That was my first reaction as well when I saw the way the Swift team was laying out the roadmap to actors. They broke everything we consider an "actor system" into pieces, and implemented every part in different, hopefully orthogonal, proposals.

If that works it's going to be quite interesting. But I feel that it goes against what I've learned from Go's language design decisions (concurrency is such a deep concern that you have to build the whole language around it), as well as from Erlang, where they basically designed the virtual machine around the actor requirements.

It's going to be some interesting times.


Because the distributed part is a library, not a language feature?


Actors aren't just about local concurrency, and if a language is attempting to bake them in, they should leave some room for the distributed case in my opinion.

One of the biggest benefits of using actors is that whether an actor exists on the same machine or across the network can be abstracted away from you. You're just sending messages, so you can send a message over the wire and it would be as simple to the developer as sending it locally. Not considering this use case would make it a very limiting language feature.


> You're just sending messages, so you can send a message over the wire and it would be as simple to the developer as sending it locally

If you are expecting a reply, or some side-effect in another system, as a result of the message you sent (and usually you would expect that, otherwise you wouldn't send the message in the first place) then it's not that simple. If the actor is in the same OS process on the same machine, then message delivery is reliable, and you know you'll either get a reply OR a signal that the other actor died. If, on the other hand, the actor is on another machine across the network then the semantics are different. You cannot always differentiate between the remote actor dying and an intermittent network connection error. So you need to take that into account in your protocol design - for example by making operations idempotent.

I've many times seen Erlang code where the developers didn't make this distinction - because the message passing operation looks the same, remote or not - and as a result the system is not resilient to network failures.


That's an important point, and I didn't mean to make the problem of distributed actors appear trivial. In my mind it's all the more reason to consider the distributed case when baking actors into your language. For example, which distributed primitives would the language support, versus leaving them modular for libraries?

>I've many times seen Erlang code where the developers didn't make this distinction - because the message passing operation looks the same, remote or not - and as a result the system is not resilient to network failures.

It's not about the message passing operation looking the same. The developers erroneously assumed that message delivery was guaranteed. The core issue here is not unique to actor systems.

Erlang/Elixir and Akka do not guarantee message delivery (even in the local case) from what I understand; guaranteed message delivery may mean different things in different contexts (like a message being queued in the mailbox vs. received from the mailbox). In my opinion, developers should always program defensively when writing networked applications, using something like the circuit-breaker pattern or making operations idempotent as you mentioned. When using actor systems, developers should not assume guaranteed message delivery unless the tool explicitly provides it.

I'm not familiar with other actor systems so I can't speak on their guarantees, but Erlang has outlined the reasoning why messages shouldn't be considered to be guaranteed here [1].

[1] If I send a message, is it guaranteed to reach the receiver?: https://erlang.org/faq/academic.html#idp32844816


That link is talking specifically about the bare send (!) in a distributed setting. Erlang guarantees delivery locally (in the same VM) and also guarantees ordering locally: if a single process sends msg1 followed by msg2, it's guaranteed that msg1 will be "received" first, if the receiver is alive. Outside the same VM, even on the same host, it can fail (someone closed the socket the other VM is on, for example).

Erlang also bakes in OTP, a library of messaging semantics and process behaviours (which processes have to implement to be OTP), and it introduces the concept of a "call": a unique reference is created for the message being sent, and only when the receiver processes the message and "replies" (with the answer and the reference) is the "call" considered complete. That lets you be sure the message was processed at least to the point of sending that reply. This is the "ack" solution the linked doc refers to. It's not inherent in send, because send is async; the only way to know a message was processed is to wait for an ack.

(You can implement the call semantics with plain processes, but it's such a common need that OTP bakes it in at a lower level for the process behaviours it includes, mostly all the gen_* behaviours.)
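For Swift readers, the call pattern is roughly this shape. Every name below is illustrative, not OTP's (or any real) API:

    import Foundation

    actor CallChannel {
        private var pending: [UUID: CheckedContinuation<String, Never>] = [:]

        // Fire-and-forget send, like Erlang's bare `!`: no acknowledgement.
        func send(_ message: String) {
            // enqueue `message` for the receiver and return immediately
        }

        // "call": suspend the sender until a reply carrying this exact
        // reference comes back from the receiver.
        func call(_ request: String) async -> String {
            let ref = UUID()
            return await withCheckedContinuation { continuation in
                pending[ref] = continuation
                // deliver (request, ref) to the receiver here...
            }
        }

        // The receiver invokes this with the original reference when done.
        func reply(to ref: UUID, with answer: String) {
            pending.removeValue(forKey: ref)?.resume(returning: answer)
        }
    }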

All of this breaks down in distributed settings because it's physically impossible to guarantee: your message may be received but the answer back may not, because the network glitched or the hardware blew up before the response was sent. These are problems of distributed systems generally. You can be sure that if you get a reply from a call, the message was received; you still need to handle (or not) the remaining failure modes according to your requirements (idempotency, retry logic, nodes behaving as queue processors, etc.). Some failure modes encode the reason as well: for instance, a failed call to a non-existent pid on a functioning, reachable node is different from a failed call to an unreachable node, but a node that went down versus a node that is alive yet unreachable is impossible to discern without additional information.


Have they tried this and found it to be the optimal approach? If so, those findings should be part of the proposal, if it were up to me.

Distributed programming is not a little add-on you can slap on top of a non-distributed system if history has taught us anything.


This is not the only proposal, it's covered elsewhere.

In the original manifesto: https://gist.github.com/lattner/31ed37682ef1576b16bca1432ea9...


Erlang, Axum, Agent Tcl, Active Oberon are just a few examples where it is a language feature.


Awaits in actor-isolated functions being reentrant seems like a huge opportunity for bugs, because the reentrancy is not obvious at all:

https://github.com/apple/swift-evolution/blob/main/proposals...
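A concrete sketch of the pitfall (names hypothetical): any invariant you checked before an await may not hold after it, because the actor can process other messages during the suspension.

    // Hypothetical async call standing in for any real work (network, disk, ...).
    func auditLog(_ amount: Int) async { /* ... */ }

    actor Account {
        var balance = 100

        func withdraw(_ amount: Int) async {
            guard balance >= amount else { return }
            await auditLog(amount)  // suspension point: the actor is reentrant
                                    // here, so other withdraw() calls can interleave
            balance -= amount       // the guard above may no longer hold!
        }
    }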


I find it hard to accept that a real actor model can be implemented at compile time!


What do you mean by that?


At first I assumed this would lock all touched state across all touched actors, such that you get the same kind of cross-table-row locking guarantees a SQL call would give you

...but it seems like it's still an open question in the RFC?


Actor state is meant to be read and modified only by the actor itself, which operates off of its own message mailbox (i.e. an event queue).

I would expect actors to operate on their own state and on immutable data, and things such as transactional data coordination to be separate.
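In the proposed Swift syntax, that isolation looks roughly like this (a minimal sketch; Counter is made up):

    actor Counter {
        var value = 0                    // actor-isolated state
        func increment() { value += 1 }  // runs on the actor, one message at a time
    }

    let counter = Counter()
    Task {
        // counter.value += 1            // compile error: cross-actor mutation rejected
        await counter.increment()        // OK: goes through the actor's "mailbox"
    }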

That said, I don't have nearly the experience with actors that many on HN do; are you thinking of a particular actor-based system w.r.t. touched state?


Will it go the way of AD in Swift or TensorFlow for Swift? Maybe not, since it might fit in the Swift ecosystem.


From what I can tell, UIKit and friends will adopt actors eventually via bridging (just like they adopted Swift 3's error handling).


No, because that was a fork, and this is mainline development.


I thought the proper way to implement something like this was:

- fixed-point instead of floating-point numbers for currency.

- have a journal with transactions such as withdrawals and deposits, plus a "checkpoint" at one point in time where you store the computed balance.

e.g.: your balance = starting balance for today + today's deposits - today's withdrawals
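A minimal sketch of that scheme (types hypothetical), with amounts kept as integer cents rather than floating point:

    enum Entry {
        case deposit(Int)          // amounts in integer cents (fixed point)
        case withdrawal(Int)
    }

    struct Ledger {
        var checkpoint: Int        // balance computed as of the checkpoint, in cents
        var journal: [Entry] = []  // transactions since the checkpoint

        var balance: Int {
            journal.reduce(checkpoint) { acc, entry in
                switch entry {
                case .deposit(let cents):    return acc + cents
                case .withdrawal(let cents): return acc - cents
                }
            }
        }
    }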



This post is a somewhat inaccurate rewording of official advice from the dispatch team at Apple, but that's one of the teams working on Swift concurrency so, like, it'll be fine.


libdispatch is a concrete implementation of a scheduler on top of OS threads. Actors (and the new async model in Swift) are abstractions that are independent of the underlying concurrency implementation. So the presumption that the new asynchronous model will inescapably have thread overhead is wrong.


No idea why you got downvoted. The fact that the first implementation of the actor system is almost certainly going to sit on top of libdispatch makes this a very relevant concern. libdispatch is really not known for being able to spawn thousands of "light threads". And so there is indeed a potential pitfall in making asynchronous + concurrent code easier.

However, after having read a lot of the Swift proposals related to concurrency, I am still unsure whether the end result in production code is going to be lots of actors running in parallel, or just lots of async calls multiplexed onto a few OS threads.


You can easily have thousands of tasks (blocks) in dispatch, and thousands of queues as long as they're targeted to the same base queue.

The issue with dispatch is that it wasn't communicated enough that you want to have a few base queues and then a tree of other serial queues on top of that, so people end up running too many CPU threads at once, on a device that often has less than one core free at a time.
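Concretely, the recommended shape is something like this (labels illustrative): many serial queues funneled into a few base queues, rather than one independent queue per subsystem:

    import Dispatch

    // One base queue; subsystem queues target it, so each subsystem's work
    // stays serialized without each queue claiming its own thread.
    let base = DispatchQueue(label: "com.example.base")
    let network = DispatchQueue(label: "com.example.network", target: base)
    let database = DispatchQueue(label: "com.example.database", target: base)

    network.async { /* network work, serialized among itself */ }
    database.async { /* database work, serialized among itself */ }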


Another thing that’s about to be bolted onto Swift at the language level (instead of making it a library). Sad!


You'll need to explain how the static isolation checking could be feasibly implemented in a library.



