You may be correct, but the rate of change in baseball is glacially slow compared to other sports. One of baseball's intrinsic values is its legacy, tradition, and history. Some may scoff at that, and I think there are good arguments against legacy/tradition as a reason to withhold change, but there are a lot of people out there who believe this. The MLB Commissioners have largely been tasked with protecting that tradition and history.
I think the div sent a generation down the wrong track. It's weakly semantic and omnipresent in every 101 tutorial, which makes the semantics overall seem weak/insufficient.
It's just the default block-level containing element, so it has its place, but these tutorials rarely explain that (just as spans are the default inline element).
In my 25 years of experience writing HTML and CSS, most engineers don't understand semantic HTML, nor do they take the time to learn it, largely because companies don't value it unless they're heavily SEO-focused.
I once worked at a company that ran an HTML5 validation test in our CI/CD pipeline. That was very helpful: it caught invalidly nested elements and taught proper semantic HTML.
I tried this a few weeks ago: building a NIF around an existing C lib. I was using Claude Opus and burned over $300 on tokens (I didn't have Pro) with no usable results.
Get Pro. Claude 4 is quite good at Elixir now, but you have to stay on it. 3.5 was not, so I imagine the next version of Claude will be able to handle the more esoteric things like NIFs, etc.
I've completely refactored my Elixir codebase with Claude 4, expanded the test suite by 1,000 more tests, and released a few more features to actual paying customers faster than I ever have. Tidewave MCP is helpful, as are some custom commands and a well-tuned CLAUDE.md. But you do you.
It's not perfect - you often have to remind it not to write imperative-style code and to lean on Elixir conventions like `with` statements, function head matching, not reassigning vars, etc.
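Here's a minimal sketch of the idioms I mean (hypothetical `fetch_user/1` and `authorize/1`, not code from my app):

```elixir
defmodule Example do
  # `with` chains the happy path instead of nested if/case and
  # falls through to the error tuple on the first non-match.
  def handle(id) do
    with {:ok, user} <- fetch_user(id),
         {:ok, user} <- authorize(user) do
      {:ok, user}
    end
  end

  # Function-head matching replaces an imperative flag check.
  defp authorize(%{active?: true} = user), do: {:ok, user}
  defp authorize(_user), do: {:error, :unauthorized}

  defp fetch_user(id) when is_integer(id), do: {:ok, %{id: id, active?: true}}
  defp fetch_user(_id), do: {:error, :not_found}
end
```

Left to its own devices, Claude tends to write the same logic as a pile of nested conditionals with a result variable it keeps rebinding.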
Here's one Claude-vibed project that makes me money, which I run in addition to my SaaS (which is Elixir). I'm not strong in TypeScript and this is an Astro static site, so Claude has been really helpful. The backend is Supabase (Postgres) plus a few background jobs via https://pgflow.dev (pgmq) that fetch and populate job openings, with some AI steps to filter and then classify them into the correct categories (among other things; there's also a job application flow and an automated email newsletter): https://jobsinappraisal.com
This is clearly low quality, non-idiomatic AI-generated Elixir code. So the likely answer is that "you" did not use this at all; AI did.
I review this kind of AI-generated Elixir code on a daily basis. And it makes me want to go back to ~2022, when code in pull requests actually made sense.
Apologies for the rant; this is just a burnt-out developer tired of reviewing this kind of code.
PS: companies should definitely highlight "No low-quality AI code" in job listings as a valid perk.
tl;dr: we're building a headless browser in Elixir that will embed on device, communicate with the native first-class rendering engine (i.e. SwiftUI, Jetpack Compose, WinUI3, etc.) via disterl, and allow for web-like ergonomics to build truly native SSR applications.
The README for Elixir Pack is a bit focused on the LiveView Native project but that documentation will soon be updated to remove mention of it.
I don't understand why this headless browser should be in Elixir, or why it should communicate via disterl/BERT. Although disterl is native to Erlang/OTP/the BEAM VM, it would have to be implemented in the native rendering engines.
Don't get me wrong, I prefer writing in Elixir to JS/TS or native (Swift/Kotlin etc.)
> to build truly native SSR applications
Why do you still call it SSR if it is rendered on the client device?
Is there a long-form article about this project, preferably with visuals/diagrams?
Can you explain a bit of the workflow you expect for offline support?
Like would I have one set of LiveViews that run on device and a database wrapper that handles online vs offline queries? Do you envision all view code running on device then?
I didn't find them useful when I wrote my entries. LLMs get confused with code that "looks like" other code, and that intentional misdirection is half the fun of a good IOCCC entry. Plus, the morality filters get annoying once you obfuscate the code. Plugging an unsubmitted entry into Gemini, it refuses to even explain it because it thinks it's malware.
LLMs can help you analyze the code, but not write it. Their ability to obfuscate is quite limited and uninspired. The last IOCCC was in 2020, so we've had plenty of time to work on it.
I would go further and say that the fine-tuning on code (mostly LLMs generating for other LLMs, plus human sweatshops writing example code to train on) actually teaches the LLMs the opposite of clever, obfuscated code. LLMs try to create readable, documented code (with varying levels of success). When I make them generate terse/obfuscated code, they cannot help themselves and put too much readable material in there. I asked Claude to do the moon-phase one and it got the calculation correct but could not figure out how to draw the ASCII moon, so it just printed the values, used emojis next to the ASCII, etc. But when you ask it to do the same with normal code, it does figure it out.
Hello. As the person who produced the show's live event, graphics, and whatnot, I can confirm that I had a chance to see whether any of the available LLMs could understand the code, and beyond some very superficial stuff they more or less completely failed to understand any of the entries this year. Hope you enjoyed the presentation. There will be more to come on the our favorite universe channel in the future that should be fun.
The two projects have different use cases, so they can't be directly compared. Sledgehammer bindgen makes calling JavaScript from Rust faster in the browser. Wasmtime is a native runtime for WASM outside the browser.
I hate to say this, but usually when I hear that people have problems making Erlang/Elixir fast, it comes down to a skill issue. Too often devs coming from another language implement code in Elixir as they would in that other language, and then see that it's not performant. When we've dug into these issues we usually find misunderstandings about how to properly architect Elixir apps to avoid blocking and to make as much use of distribution as possible.
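A minimal sketch of the most common version of this (module and supervisor names are hypothetical): slow work run inline in a GenServer blocks every caller, while handing it to a supervised task keeps the server responsive.

```elixir
defmodule Report.Server do
  use GenServer

  def start_link(opts), do: GenServer.start_link(__MODULE__, opts, name: __MODULE__)
  def init(opts), do: {:ok, opts}

  # Ported-from-another-language style: the slow work runs inside
  # the server process, so every other caller queues behind it.
  def handle_call({:render_blocking, id}, _from, state) do
    {:reply, slow_render(id), state}
  end

  # Idiomatic: offload to a supervised task and reply from there,
  # leaving the GenServer free to handle other messages. Assumes
  # a Task.Supervisor named Report.TaskSup in the supervision tree.
  def handle_call({:render, id}, from, state) do
    Task.Supervisor.start_child(Report.TaskSup, fn ->
      GenServer.reply(from, slow_render(id))
    end)

    {:noreply, state}
  end

  defp slow_render(id) do
    Process.sleep(1_000) # stand-in for real work
    {:ok, id}
  end
end
```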
You'd have to refer to all of the applications running on the BEAM that are distributed across multiple datacenters. Fly.io's entire business model is predicated on globally distributing your application using the BEAM. I'm not sure what that book said exactly; perhaps the original intent was local distribution, but Erlang has been around for over 30 years at this point. What it's evolved into today is architecturally unique compared to any other language stack, and it's built for global distribution with performance at scale.
> Even though Erlang’s asynchronous message-passing model allows it to handle network latency effectively, a process does not need to wait for a response after sending a message, allowing it to continue executing other tasks. It is still discouraged to use Erlang distribution in a geographically distributed system. The Erlang distribution was designed for communication within a data center or preferably within the same rack in a data center. For geographically distributed systems other asynchronous communication patterns are suggested.
Not clear why they make this claim, but I think it refers to how Erlang/OTP handles distribution out of the box. Tools like Partisan seem to provide better defaults: https://github.com/lasp-lang/partisan
I've run dist cross datacenters. Dist works, but you need to have excellent networking or you will have exciting times.
It's pretty clear, IMHO, that dist was designed for local networking scenarios. Mnesia in particular was designed for a cluster of two nodes that live in the same chassis. The use case was a telephone switch that could recover from failures and have its software updated while in use.
That said, although OTP was designed for a small use case, it still works in use cases way outside of that. I've run dist clusters with thousands of nodes, spread across the US, with nodes on east coast, west coast and Texas. I've had net_adm:ping() response times measured in minutes ... not because the underlying latency was that high, but because there was congestion between data centers and the mnesia replication backlog was very long (but not beyond the dist and socket buffers) ... everything still worked, but it was pretty weird.
Re Partisan, I don't know that I'd trust a tool that says things like this in their README:
> Due to this heartbeating and other issues in the way Erlang handles certain internal data structures, Erlang systems present a limit to the number of connected nodes that depending on the application goes between 60 and 200 nodes.
The amount of traffic used by heartbeats is small. If managing connections and heartbeats to 200 other nodes is not a small load for your nodes, your nodes must be very small ... you might ease your operations burden by running fewer but larger nodes.
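For scale, the dist heartbeat cadence is the kernel's net_ticktime (default 60 seconds, with a small tick sent roughly every quarter of that per connection), and you can inspect or raise it at runtime:

```elixir
# Default is 60: a peer is declared down after roughly 45-75s of
# silence, and a tiny tick goes out about every 15s per connection.
:net_kernel.get_net_ticktime()
#=> 60

# Raise it for even less chatter (the change phases in gradually).
:net_kernel.set_net_ticktime(120)
```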
I had thought I favorited a comment, but I can't find it again; someone had linked to a presentation from WhatsApp after I left, and they have some absurd number of nodes in clusters now. I want to say on the order of hundreds of thousands. While I was at WhatsApp, we were having issues with things like pg2 that used the global module to do cluster wide locking. If those locks weren't acquired very carefully, it was easy to get into livelock when you had a large cluster startup and every node was racing to take the same lock to do something. That sort of thing is dangerous, but after you hit it once, if you hit it again, you know what to hammer on, and it doesn't take too long to fix it.
Either way, someone who says you can't run a 200-node dist cluster is parroting old wives' tales, and I don't trust them to tell you about scalability. Head-of-line blocking can be an issue in dist, but one has to be very careful to avoid breaking causality if you process messages out of order. Personally, I would focus on making your TCP networking rock solid, and then you don't have to worry about head-of-line blocking very often.
That said, to answer this from earlier in the thread:
> I have read the erlang/OTP doesn’t work well in high latency environments (for example on a mobile device), is that true? Are there special considerations for running OTP across a WAN?
OTP dist is built upon the expectation that a TCP connection between two nodes can be maintained as long as both nodes are running. If that expectation isn't realistic for your network, you'll probably need to use something else, whether that's a custom dist transport, or some other application protocol.
For mobile ... I've seen TCP connections from mobile devices stay connected upwards of 60 days, but it's not very common; iOS and Android aren't built for it. But that's not really the issue, because the bigger issue is that dist has no security barriers. If someone is on your dist, they control all of the nodes in your cluster. There is no way that's a good idea for a phone to be connected into, especially if it's a phone you don't control, running an app you wrote to connect to your service --- there's no way to prevent someone from taking your app, injecting dist messages, and spawning whatever they want on your server... that's what you're inviting if you use dist.
This application is running dist between BEAM on the phone and Swift on the phone, so lack of a security barrier is not a big issue, and there shouldn't be any connectivity issues between the two sides (other than if it's hard to arrange for dist to run on a unix socket or something)
That said, I think Erlang is great, and if you wanted to run OTP on your phone, it could make sense. You'd need to tune runtime/startup, and you'd need to figure out some way to do UX, and you'd need to be OK with figuring out everything yourself, because I don't think there's a lot of people with experience running BEAM on Android. And you'd need to be ok with hiring people and training them on your stack.
I'm involved with this project and wanted to provide some context. This is an extraction from a much larger effort where we're building a web browser that can render native UI. Think instead of:
`<div>Hello, world!</div>`
we can do:
`<Text>Hello, world!</Text>`
I want to be clear: this is not a web renderer. We are not rendering HTML. We're rendering actual native UI. So the above in SwiftUI becomes:
`Text("Hello, world!")`
And yes, we support modifiers via a stylesheet system, events, custom view registration, and really everything you would normally be doing in Swift.
Where this library comes into play: the headless browser is being built in Elixir to run on device. We communicate with the SwiftUI renderer via disterl. We've built a virtual DOM where each node in the vDOM has its own Erlang process. (I can get into process limits for DOMs if people want.) The Document lets each process communicate directly with the corresponding SwiftUI view.
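To make the process-per-node idea concrete, here is a minimal sketch; the module name and API are hypothetical, not the project's actual code:

```elixir
defmodule VDOM.Node do
  use GenServer

  # One lightweight BEAM process per vDOM node, holding the
  # element's tag, attributes, and child pids.
  def start_link(tag, attrs \\ %{}) do
    GenServer.start_link(__MODULE__, {tag, attrs})
  end

  def init({tag, attrs}), do: {:ok, %{tag: tag, attrs: attrs, children: []}}

  def append_child(parent, child), do: GenServer.cast(parent, {:append, child})
  def set_attr(node, key, value), do: GenServer.cast(node, {:set_attr, key, value})

  def handle_cast({:append, child}, state) do
    {:noreply, %{state | children: state.children ++ [child]}}
  end

  def handle_cast({:set_attr, key, value}, state) do
    # In the real system, a mutation like this is where a patch
    # would go out over disterl to the paired SwiftUI view.
    {:noreply, put_in(state.attrs[key], value)}
  end
end

# Building <Text>Hello</Text> inside a VStack:
{:ok, root} = VDOM.Node.start_link("VStack")
{:ok, text} = VDOM.Node.start_link("Text", %{content: "Hello"})
VDOM.Node.append_child(root, text)
```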
We've taken this a step further by actually compiling client-side JS libs to WASM and running them in our headless browser, bridging back to Elixir with Wasmex. If this works, we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework. So think of actual native targets for Hotwire, Livewire, etc...
We can currently build for nearly all SwiftUI targets: macOS, iPhone, iPad, Apple Vision Pro, Apple TV. Watch is the odd one out because it lacks the on-device networking that we require for this library.
This originally started as the LiveView Native project but due to some difficulties collaborating with the upstream project we've decided to broaden our scope.
Swift's portability means we should be able to bring this to other languages as well.
We're nearing the point of integration where we can benchmark and validate this effort.
> If this works we'll be able to bring the development ergonomics of the Web to every native platform that has a composable UI framework.
You appear to be saying this with a straight face. I must be missing something here. What is beneficial about the web model that native is lacking?
I hope I'm not being an old curmudgeon, but I'm genuinely confused here. To me, web dev is a Lovecraftian horror and I'm thankful every day I don't have to deal with that.
Native dev is needlessly fragmented and I've longed for a simple (not Qt) framework for doing cross-platform native app dev with actual native widgets, so thanks for working on that. But I am a bit mystified at the idea of making it purposefully like web dev.
Sounds like things are converging more or less where I thought they would: "websites" turning into live applications, interfacing with the native UI, frameworks, etc. using a standardized API. Mainframes maybe weren't the worst idea, as this sort of sounds like a modern re-imagining of them.
The writing was more or less on the wall with WASM. I don't know if this project is really The Answer that will solve all of the problems but it sounds like a step in that direction and I like it a lot, despite using neither Swift nor Erlang.
Firefox used XUL, not XAML. Still does, for some things that are not available in HTML. (By the way, you can enable devtools for the browser UI itself and take a look!)
XAML will be a target, as we intend to build a WinUI3 client. Of the big three native targets (Apple, Android, Windows), the last may be the easiest, as from what I've seen nearly everything is in the template already.
It's going to be really hard to resist the urge to put a programming language in there. It always starts innocently: 'let's do some validation'. Before you know it, you're Turing complete.
I believe SwiftUI, unlike UIKit, doesn't give access to the UI tree elements. So I assume you're not allowing the XML-like code to be in control of the UI?
Is it rather just an alternative way to write SwiftUI code?
How do you handle state? Isomorphically to what is available in SwiftUI?
Is your vDOM in fact an alternate syntax for an (abstract) syntax tree?
Is it an IR used to write SwiftUI code differently?
How is it different from Lynx? React Native? (It probably is, besides the XML-like syntax; again, state management?)
That's correct, but we can make changes to the views at runtime and these merge into the SwiftUI view tree. That part has been working for years. As for how we take the document and convert it to SwiftUI views: there is no reflection in Swift and no runtime eval. The solution is pretty simple: a dictionary. We just have the tag name of an element mapped to the View struct. Same with modifiers.
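Conceptually it's just a lookup table; here's the shape of it sketched in Elixir for illustration (the real registry is that Swift dictionary on the client, and these module names are made up):

```elixir
# Tag name -> renderer mapping, mirroring the Swift-side
# dictionary from element names to View structs.
registry = %{
  "Text"   => MyClient.Text,
  "VStack" => MyClient.VStack,
  "Button" => MyClient.Button
}

# Resolving a parsed element to its renderer:
renderer = Map.fetch!(registry, "Text")
```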
As for how it is different from React Native: that's a good question, and one where it's worth recognizing the irony that, as I understand it, without React Native our project probably wouldn't exist. From what I understand, RN proved that composable UI was the desired UX even on native. Prior to RN we had UIKit and whatever Android had. RN came along, and now we have SwiftUI and Jetpack Compose, both composable UI frameworks. We can represent any composable UI framework as markup; not so much with the prior UI frameworks on native, at least not without defining our own abstraction above them.
As for the differentiator: backend. If you're sold on client-side development then I don't think our solution is for you. If, however, you value SSR and want a balance between front end and back end, that's our market. So for a Hotwire app you could have a Rails app deployed that accepts `Accept: application/swiftui` and we can send the proper template to the client. Just like the browser, we parse and build the DOM and instantiate the views in the native client. There are already countless examples of SSR native apps in the App Store. As long as we aren't shipping code it's OK, which we're not; just markup that represents UI state. The state would be managed on the server.
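The example above is Rails, but the same content negotiation is easy to picture in Phoenix; a hedged sketch, where the `swiftui` format and module names are hypothetical:

```elixir
# config/config.exs: register the MIME type so Plug can negotiate it.
#   config :mime, :types, %{"application/swiftui" => ["swiftui"]}
#
# router.ex: accept both formats in the browser pipeline.
#   plug :accepts, ["html", "swiftui"]

defmodule MyAppWeb.JobController do
  use MyAppWeb, :controller

  def show(conn, %{"id" => id}) do
    # Phoenix renders an HTML template or a SwiftUI-flavored one
    # based on the negotiated Accept header; the assigns are the
    # same, only the markup dialect sent over the wire differs.
    render(conn, :show, job: %{id: id})
  end
end
```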
Another area where we differ is that we target the native UI framework; we don't have a unified UI framework. So you will need to know HTML for the web, SwiftUI for iOS, and Jetpack Compose for Android. This is necessary to establish the primitives that we can hopefully build on top of to create a unified UI framework (or maybe someone solves that for us?).
With our WASM compilation, we may even be able to compile React itself and have it emit native templates. No idea if that would work or not. The limits come when the JS library itself enforces HTML constraints that we don't observe, like case-sensitive tag names and attributes.
What about offline mode? Well, for use cases that don't require it, you're all set. We have lifecycle templates that ship on device for different app states, like being offline. If you want offline, we have a concept that we haven't implemented yet: for Elixir, we can ship a version of the LV server on device that works locally and then does a data sync.
You don't need JIT to hot load code. That's irrelevant.
And yes you can hot load code to modify the application. As long as you don't alter the purpose or scope of features under review. There is a specific callout as well that you can dynamically load in "casual games" from a community of contributing creators.
You're repeating outdated nonsense from over a decade ago! Understanding the current App Store guidelines can be key to finding a competitive edge when there are so many people like yourself scaring devs off doing things that Apple now allows.
> We can currently build for nearly all SwiftUI targets: macOS, iPhone, iPad, Apple Vision Pro, Apple TV. Watch is the odd one out because it lacks the on-device networking that we require for this library.
Could you please elaborate on the statement about Apple Watch? Apple Watch can connect to WiFi directly with Bluetooth off on its paired iPhone. Specific variants also support cellular networks directly without depending on the paired iPhone. So is it something more nuanced than the networking part that’s missing in Apple Watch?
Third-party apps can't use the network, though. IIRC there's an async message queue with eventual delivery that each app gets, which it can use to send messages back and forth with a paired phone app.
That was once the case, but no longer. Third-party watchOS apps can work without a phone present, up to being installed directly from the watch's App Store. They can definitely do independent networking, but there are still some restrictions, e.g. they can't do it when backgrounded, and WebSockets are pretty locked down (audio streaming only, per Apple policy).
I reckon the lack of general-purpose WebSockets is probably the issue for a system based on Phoenix LiveView.
With how complexity-happy webdevs like to get with their DOM structure, would this actually be performant compared to an equivalent webview in practice? Especially since you're using SwiftUI, which has a lot more performance footguns compared to UIKit.
How does elixir_pack work? Is it bundling the BEAM to run on iOS devices? Does Apple allow that?
Years ago I worked at Xamarin, and our C# compiler compiled C# to native iOS code but there were some features that we could not support on iOS due to Apple's restrictions. Just curious if Apple still has those restrictions or if you're doing something different?
I haven't been following BeamAsm that closely, because I'm not working in Erlang at work.... But it strikes me that there's not really a reason that the JIT has to run at runtime, although I understand why it is built that way. If performance becomes a big issue, and BeamAsm provides a benefit for your application (it might not!), I think it would be worth trying to figure out how to assemble the beam files into native code you can ship onto restrictive platforms without shipping the JIT assembler.
Not sure, as I haven't done any work with it. On a cursory glance it could have some overlap, but it appears not to target the first-class UI frameworks; it looks to be a UI framework unto itself. So more of a Flutter than what we're doing, is my very quick guess. We get major benefits from targeting the first-class UI frameworks, primarily that we let them do the work. Developing a native UI framework is, I think, way more effort than what we've done, so we let Apple, Google, and Microsoft decide what the desired user experience is on their devices, and we just allow our composable markup to represent those frameworks. A recent example of this is the new "glass" iOS 26 UI update. We had our client updated for the iOS 26 beta on day 1 of its release. Flutter has to rewrite their entire UI framework if they want to adapt to this experience.
Hyperview creator here. Yes, it sounds like the difference is that your project is directly rendering platform-native UI widgets, while Hyperview is built on top of React Native for the cross-platform layer.
Curious how you will handle the differences between platforms. For example, Android prefers top tab bars, while on iOS the convention is to put tab bars below the content.
This is one of the fundamental differences for what we're doing. We are not building a write-once-run-everywhere solution. SwiftUI will have its own templates, Jetpack (Android) will have its templates, WinUI3 will have its templates.
We're delivering LVN, as I've promised the Elixir community this for years; from LVN's perspective nothing really changes. We hit real issues when trying to support live components and nested LiveViews; if you were to look at the liveview.js client code, those two features make significant use of the DOM API, as they're doing significant tree manipulation. For the duration of this project we've been circling the drain on building a browser, and about three months ago I decided that we just had to go all the way.
I hope I'm not reading into this too cynically, but your phrasing makes it sound like the project is not going as well as originally hoped.
It's pretty well-established at this time that cross-platform development frameworks are hard for pretty much any team to pull off... Is work winding down on the LiveView Native project, or do you expect to see an increase in development?
The LVN Elixir libraries are pretty much done, and those really shouldn't change outside of perhaps additional documentation. I have been back and forth on the 2-arity function components that we introduced. I may change that back to 1-arity and move over to annotating the function, similar to what function components already support. That 2-arity change was introduced in the current Release Candidate, so we're not locked in on the API yet.
What is changing is how the client libraries are built. I mentioned in another comment that we're building a headless web browser; if you haven't read it, I'd recommend it, as it gives a lot of detail on what we're attempting to do. Right now we've more or less validated every part with the exception of the overall render performance. This effort replaces LVN Core, which was built in Rust. The Rust effort used UniFFI to message-pass to the SwiftUI client, and boot time was almost instant. With the Elixir browser we will have more overhead: boot time is slower, and I believe disterl could carry more overhead than UniFFI bindings. However, the question will come down to whether that overhead is significant or not. I know it will be slower, but if the overall render time is still performant then we're good.
The other issue we ran into was when we started implementing more complex LiveView things like live components. While LVN Core has worked very well, I believe its implementation was incorrect. It had passed through four developers and was originally only intended to be a template parser. It grew as we figured out what the best path forward should be, and sometimes that path meant backing up and ditching some tech we built that was a dead end for us. Refactoring LVN Core into a browser, I felt, was going to take more time than doing it in Elixir. I built the first implementation in about a week, but the past few months have been spent on building GenDOM. That may still take over a year, but we're prioritizing the DOM API that LiveView, Hotwire, and Livewire will require. Then the other 99% of the DOM API will be a grind.
But to your original point: going the route of the browser implementation means we are no longer locked into LiveView, as we should be able to support any web client that does similar server/client-side interactivity. This means our focus will no longer be on LiveView Native individually but on ensuring that the browser itself is stable and can run the API necessary for any JS-built client to run on.
I don't think we'd get to 100% compatibility with LiveView itself without doing this.