Turbo frames and streams are a wonderful pattern that I wish were used more than they are. We see people resort to whole React frontends for their web applications just to add simple features that could be done nearly for free with frames and streams.
Yeah, we did Vue pretty heavily and decided it wasn't the right solution, except for a subset of things that require advanced UI complexity HTML plus simple JS can't offer (aka "islands of functionality"), since half our team is backend devs who just want to write regular, simple Rails views. We're planning a big shift to Turbo for all new feature development.
So much complexity in JS is scary and doesn't scale well when most of the serious app logic is in Rails. And I say that as one of the first adopters of Backbone.js looong ago, someone who's been doing JS-heavy frontends most of my career.
If we had a more consistent SSR-type solution to mix with Rails, like Fresh or Remix or whatever, I might be less tempted to switch, but SSR is very immature unless you're using a full JS framework, which I have little interest in doing, even though I write TypeScript for a living.
Backend devs doing "just simple Rails views" only works when those views are really simple, and mostly for internal tools. Otherwise it becomes a total nightmare to maintain.
I agree that the "popular" JavaScript solutions are a complexity, over-engineering, and maintenance nightmare (I'm talking mostly about Next.js here).
But if you use something like Inertia, which (TL;DR) basically just turns React (or Vue) into a templating library for your backend framework, it becomes super simple and easy to maintain. I find Inertia great because it's the perfect bridge that can make both backend and frontend devs happy, as it gives you the best of both worlds.
Hotwire is great, but you do need to invest in its way of doing things, and believe me... backend devs who hate JS doing frontends leads to disaster, in the same way that JS devs who hate PHP/Ruby/etc. doing backend also leads to disaster.
Last year we replaced Hotwire with Inertia in a medium-sized Rails app because, even though it was quite fast for building new features, it became a total nightmare to maintain. Simple things such as a modal were split across 4 or 5 different files... some logic was in controllers, some in templates, then there was the Stimulus controller for this and that, and the frames, and... nobody knew what to change.
The problem with Hotwire is that you need to be super "creative" with your solutions and, at every single step, make decisions like: should I use a Stimulus controller, a frame, a stream, or what? A single piece of functionality ends up split across multiple different places, while with React or Vue it would just be a single component with an isOpen=true/false and done.
I insist... if you look at the "modern frontend" situation thinking of Next.js, Redux, CSS-in-JS, etc... yes... it's a f*ng over-engineered marketing-hype bullshit hell.
But React/Vue on their own do provide some real benefits and real solutions to actual problems.
I think Inertia brings to backend frameworks all the benefits of these tools, with minimal (and, in my opinion, worth it) trade-offs.
If you end up using Hotwire, just be careful and be warned... it's very likely to become a mess sooner or later, especially in the hands of JS haters.
fwiw, I disagree with your take on Hotwire: that's really all of Rails. Rails is built for expert users and you simply must be disciplined about how you build it or you will regret it; I don't think that's unique to Hotwire at all.
I'm commenting to thank you for sharing Inertia though. Very interesting approach; I'm going to do some reading of the code.
Apologies, my only disagreement was with the idea that it's Hotwire specifically that requires care, when I really think it's the entirety of Rails. Thanks again for the suggestion on Inertia.js!
The only concern I have is around making things more efficient server side.
For example, if you have an expensive DB query that's not really cacheable, you can use streams and frames to have that query execute only once on the first page load, and then navigate between pages that swap content in a different frame.
A good example is something like a video course platform with a video frame, a table of contents area, and another frame below the video that loads specific things like questions, references, show notes, etc. As you click links in the table of contents, new content loads in the video frame, but the questions don't change and the table of contents doesn't need to re-render either. You get to skip querying and rendering views for all of those expensive things.
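A rough sketch of that layout with frames might look something like this (the frame ids, paths, and the lazy-loaded extras frame are all made up for illustration):

```html
<!-- Links in the ToC target only the "lesson" frame, so the ToC itself
     and the extras frame below the video are never re-queried or
     re-rendered on navigation. -->
<aside id="toc">
  <a href="/courses/1/lessons/2" data-turbo-frame="lesson">Lesson 2</a>
  <a href="/courses/1/lessons/3" data-turbo-frame="lesson">Lesson 3</a>
</aside>

<turbo-frame id="lesson">
  <video src="/videos/lesson-1.mp4" controls></video>
</turbo-frame>

<!-- Questions, references, show notes, etc. load once, independently. -->
<turbo-frame id="lesson_extras" src="/courses/1/lessons/1/extras" loading="lazy">
</turbo-frame>
```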
With Turbo 8, if you used the new approach you would be re-running very expensive, non-cacheable queries and re-rendering their views on every page transition. I want to use Turbo 8's features so much, but for the biggest and most complex aspect of my app I don't realistically see how I can. That said, the bookkeeping pain of manually updating 5+ areas of the page and writing a bunch of stream hooks, which multiplies as you add more actions, is real.
I'm hoping as Turbo 8 gets more mature new patterns will emerge to let us get the best of both worlds (performance and complexity compression).
This scenario does seem like something that would be cacheable to me though. Really it's a matter of where you're doing the caching -- on the server or in the client. It's easier to manage cache invalidation (and state in general) on the server, so that seems like the preferable option to me for most things.
In the context of Rails specifically, something like Solid Cache helps a lot with this too. There are other similar approaches for other languages/databases as well, although most implementations are based on Postgres instead of MySQL.
If the specificity & complexity of the query is the barrier to caching, it's also possible that breaking the query up a bit could still net better performance if it makes it easier to cache.
> This scenario does seem like something that would be cacheable to me though
The table of contents has a course with sections, lessons, and downloads/videos, with sums of video lengths that roll up to each section and back to the whole course. It rarely changes, but the monkey wrench is that there's a checkbox next to each lesson where a user can toggle a lesson as complete, so now it's really personalized for each user, with a model storing this state. I didn't want to save this client side in local storage in case they watch things on multiple devices.
Having a separate cache for each user seemed like it would be a big mess, especially since these "lesson completions" move very fast, in the sense that a user can toggle a handful of them per session. This table is also pretty big. Imagine a 200-lesson course that 10,000 people have taken. Now we're at 2 million rows. Russian doll caching seemed pretty dangerous here; I'd have to put touch: true on something like 5 cascading models, where editing 1 thing would bust components of the cache for everyone.
The queries themselves aren't too complicated. It's pretty much 3 queries, with "includes" used at times to avoid N+1 queries. It gets the course, then loops over its sections, then loops over its lessons, and in that lesson loop it gets any downloads. Then there's the lesson completion state for the current user too, to see whether each checkbox should be checked or not.
It doesn't seem very friendly to being cached.
I really wanted to avoid having a single cache for the whole TOC and making a separate Ajax request to load the user's lesson completions after the page has loaded. The UI gets jittery with the checkboxes being populated afterwards, and it opens a few cans of worms. Even if this were to happen in the end with Turbo 8, I'd still have to query the per-user lesson completions plus all of the resources (questions, show notes, etc.) presented under the video on every page transition.
I don't know all the details of course, but I think I'd approach this by splitting the user specific info into a short lived user specific cache + globally caching the non user specific stuff for a bit longer. Ideally that way you only have to query the user specific part at most once every 5 minutes or something like that.
Or even if you didn't cache the user specific bits, you could cache the rest and just query for that every time.
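A toy sketch of that split in plain Ruby (the cache class, keys, TTLs, and the stand-in query methods are all made up; in a real Rails app you'd reach for Rails.cache.fetch with expires_in instead of hand-rolling this):

```ruby
# Minimal TTL cache to illustrate the two-tier idea: the shared,
# slow-changing ToC gets a long TTL, while the per-user completion
# state gets its own short-lived entry.
class TtlCache
  def initialize(clock: -> { Time.now.to_f })
    @store = {}
    @clock = clock
  end

  # Returns the cached value for key, or runs the block and caches
  # the result for ttl seconds.
  def fetch(key, ttl:)
    entry = @store[key]
    return entry[:value] if entry && @clock.call < entry[:expires_at]

    value = yield
    @store[key] = { value: value, expires_at: @clock.call + ttl }
    value
  end
end

CACHE = TtlCache.new

# Stand-ins for the real ActiveRecord queries (made up for this sketch):
def load_toc(course_id)
  "toc for course #{course_id}" # the expensive rollup queries would live here
end

def load_completions(course_id, user_id)
  [1, 2, 3] # completed lesson ids for this user
end

def table_of_contents(course_id)
  # Shared by every user and slow-changing -> cache it for a while.
  CACHE.fetch("toc/#{course_id}", ttl: 3600) { load_toc(course_id) }
end

def completed_lesson_ids(course_id, user_id)
  # User-specific and fast-moving -> short-lived, per-user entry.
  CACHE.fetch("completions/#{course_id}/#{user_id}", ttl: 300) do
    load_completions(course_id, user_id)
  end
end
```

The expensive rollup only re-runs once an hour no matter how many users hit it, while the cheap per-user query re-runs at most every five minutes per user.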
Other DOM morphing setups have the concept of "ignore this sub-tree", which means that on first load you can render the ToC and checkboxes for the user, but then omit that section of the page entirely from the AJAX requests Turbo makes. This lets you cache and share those responses, while the user still gets the latest ToC when they refresh the page.
It adds complexity but then if you need caching, what doesn't?
I'm not familiar with that omission process. What would that look like in practice?
It sounds like I'd still get a poor UX on the first page load (checkbox jitter), and on every page transition I'd need to run the user-specific lesson completions query produced by the Ajax request, plus all of the other heavy content under the video (questions, show notes, etc.) would need to be rendered. In the end that's much more expensive than using frames to avoid any of that processing.
I'm not trying to make the case for sticking with frames + streams either; I think it's been kind of painful to use them with everything that's going on within the page. I'd love to be able to use the morph approach. I think I'd be able to delete tons of both server-side and client-side code.
Turbo morphing will let you do exactly that -- exclude a section from morphing -- by wrapping it in:
<div data-turbo-permanent>...</div>
I think it's not yet documented since morphing hasn't landed in a production release of Turbo yet, but it's been merged into main; you can see it mentioned in the PR: https://github.com/hotwired/turbo/pull/1019 (look for the "Exclude sections from morphing" heading in the PR description).
Author here. Yes! Turbo 8 changes are very exciting.
It hasn't been released yet, but it was merged into main and I was trying it out today. I plan to dig into it more and write a breakdown. I'm really interested in how exactly it works, to see what corner cases we need to watch out for. As is usual with "magical" features, they're great until you run into a corner case.
Probably the best way to think about morphing is as something that gets you up and running really quickly. Then, as your app matures, you can replace high-traffic pages with Turbo frames and streams or something else. In a typical app, even when it's mature, you might find that 80% of the pages have low enough performance requirements that you can just use morphing, and then spend the development time savings really optimising the other 20%.
They are in the same wheelhouse. HTMX is more generalized, Turbo/Hotwire is more opinionated. HTMX can easily be run anywhere in any language; Turbo is easiest to run in Rails/Ruby.
Similar goals, different philosophies and approaches.
A few differences: HTMX makes it possible to enhance any element with hypertext capabilities, but I don't believe it works with JavaScript disabled. Hotwire still relies on <a> and <form> to make HTTP requests, as its focus is geared more towards progressive enhancement; the Hotwire portions of the app will still work with JS disabled.
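A minimal sketch of that difference (the endpoints and ids are made up):

```html
<!-- HTMX: any element can issue a request, but with JS disabled
     this button does nothing. -->
<button hx-get="/notifications" hx-target="#list">Load notifications</button>

<!-- Hotwire: a plain link inside a frame. With JS enabled, only the
     frame updates; with JS disabled, it degrades to a normal
     full-page navigation. -->
<turbo-frame id="list">
  <a href="/notifications">Load notifications</a>
</turbo-frame>
```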