Where he encouraged people to lean into intentionality and finding purpose rather than using therapy as a replacement?
I have a diagnosed anxiety disorder and I’ve benefited GREATLY from talk therapy in numerous ways. I’m an advocate for therapy. I simultaneously stand behind his post as a healthy nudge for many.
I don't think people in CS should be giving broad, overly simplistic mental health advice. Let's leave that to the pros. I'm grateful for DHH's contributions to society, but his hot takes are marketing ploys. Mental health should not be leveraged as a marketing ploy.
Fwiw, I struggled with anxiety and depression for nearly 20 years. Commitment to therapy and my modalities brought me out of that. My _therapist_ guided me through that work, including finding purpose and building meaningful relationships.
I’ve never felt like “throw everything into a queue” was a mindset within the Ruby community, nor have we done that at my companies. And multi-region is a business decision.
Resque was a staple for a long, long time. In the JVM world, "throw everything into Kafka" is also a staple at a lot of "enterprise" shops. Or SQS at the AWS places I've worked. I think it is not a Ruby language thing, but a certain kind of architecture thing.
True that it is not uncommon to use Sidekiq or Resque, but Rails 8, due later this year, is going to be the first version to ship with a queueing system (SolidQueue). So queueing has been an add-on for 20 years. I don't think it is quite a staple.
Rails 8 came out in November, and `rails new` generates an app with the Solid trio (Solid Queue, Solid Cache, Solid Cable) in the Gemfile. Been fun playing around with it for new side projects :)
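If you haven't poked at it yet, the ergonomics are just Active Job; here's a minimal sketch (the job class and arguments are made up, and I believe the generated production config already points the adapter at Solid Queue):

```ruby
# config/environments/production.rb (generated by Rails 8)
config.active_job.queue_adapter = :solid_queue

# app/jobs/welcome_email_job.rb (hypothetical job name)
class WelcomeEmailJob < ApplicationJob
  queue_as :default

  def perform(user_id)
    # look up the user and send the email...
  end
end

# Enqueue from anywhere; Solid Queue persists the job in the database:
WelcomeEmailJob.perform_later(42)
```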
It does have a GIL. You’re not wrong, but by that same logic, there are pitfalls when using multi-threading as well, even in languages where it’s native (e.g., Elixir).
Regardless, in my experience, when you run into scenarios that need queueing, multi-threading, etc., you need to know what you’re doing.
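As a concrete example of the GIL point, here’s a rough benchmark sketch you can run under MRI (timings are illustrative and machine-dependent):

```ruby
require "benchmark"

def cpu_work
  2_000_000.times { Math.sqrt(rand) }
end

serial = Benchmark.realtime { 4.times { cpu_work } }

threaded = Benchmark.realtime do
  4.times.map { Thread.new { cpu_work } }.each(&:join)
end

# Under MRI's GIL, CPU-bound threads don't run in parallel,
# so both timings come out roughly the same.
puts format("serial: %.2fs, threaded: %.2fs", serial, threaded)
```

I/O-bound work is a different story, since the GIL is released while waiting on I/O.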
Look, you can insist that a 1440p monitor can only show blurry text all you like, but the problem people are talking about is that the text is even blurrier than it needs to be.
I have native 1440p at 120Hz on my main screen, which is more than 30 inches across (ultrawide). I can see pixels if I look closely enough, but I do not see any pixels at a usual reading distance.
I have used Retina displays of various sizes, but after a while I usually just set them down to half their resolution (i.e., I do not use the 200% scaling from the OS; rather, I set them to 1440p, or lower on 13-inch laptops). I have not seen an advantage to Retina displays.
Text on 1440p looks great with full hinting and subpixel rendering. Unfortunately, macOS does neither, so the jump to Retina feels more significant than it is.
Biggest draw for me with a 32" 1440p is that it is the same DPI as a 24" 1080p. I like to have one big monitor flanked by two small vertical monitors, and having them all be the same DPI just makes headaches go away on every operating system I use them with.
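The math checks out, for anyone curious:

```ruby
# PPI = diagonal resolution in pixels / diagonal size in inches
def ppi(width_px, height_px, diagonal_in)
  Math.sqrt(width_px**2 + height_px**2) / diagonal_in
end

puts ppi(2560, 1440, 32).round(1) # => 91.8 (32" 1440p)
puts ppi(1920, 1080, 24).round(1) # => 91.8 (24" 1080p)
```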
Maybe a comparison to AWS Global Accelerator would be helpful to understand the "global" aspect. Having instances in multiple regions is just a starting point.
Ahhh got it, this is focusing primarily on load balancing at a lower layer of routing than what I'm referring to, then. While not wrong, "global load balancing" threw me off a bit.
EDIT: see the other reply; it appears it handles both, given it leverages Fly's Anycast setup.
Global state is a tool that will almost always lead to bad architecture in an app where architecture matters. I'm sure you can point to a counterexample or two where a set of devs managed to stay disciplined indefinitely, but that doesn't change the fact that allowing people to reach into a mutable variable from anywhere in the system enables trivially accessible spooky action at a distance, and spooky action at a distance is a recipe for disaster in a medium to large code base.
In a project with more than a few people on it, your architecture will decay if it can decay. Avoiding global state removes one major source of potential decay.
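A contrived Ruby sketch of the failure mode (names are made up):

```ruby
# Some module establishes a global that behavior quietly depends on:
$retries = 3

def fetch_report
  $retries.times do
    # ...attempt the fetch...
  end
end

# Months later, a distant corner of the codebase "temporarily" tweaks it:
$retries = 0

# Now this silently does nothing, and nothing at the call site explains why.
fetch_report
```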
“Almost” is key there. I respect your position, but it’s an always/never take, and the longer I am in this industry, the more I find myself leaning into “it depends.” Here’s a post that articulates, better than I can in a short comment, how this can be done well on a large codebase: https://dev.37signals.com/globals-callbacks-and-other-sacril...
No, it isn't—I'm the one who inserted the word "almost" into that sentence! Where did you get the idea that I meant always/never?
Like I said, you can point to exceptions but that doesn't change the rule. It's better to teach the rule and break it when you really know what you're doing—when you understand that you're breaking a rule and can articulate why you need to and why it's okay this time—than it is to spread the idea that globals are really just fine and you need to weigh the trade-offs. The odds are strongly against you being the exception, and you should act accordingly, not treat globals as just another tool.
Sometimes amputation is the right move to save someone's life, but you certainly should not default to it for every paper cut. It's a tool that comes out only in extreme circumstances, when a surgeon can thoroughly justify it.
Respectfully, your response further illustrates what I meant by your take being an always/never. I’m aware you’re the one who put “almost” in there, and I didn’t mean to imply you were being stubborn with that take; that’s why I said (and genuinely meant) that I respect it.
But I’m also aware that you’re comparing using global state to amputating a human limb. I don’t think it’s nearly that extreme. I certainly wouldn’t say global state “almost always leads to bad architecture,” as evidenced by my aligning with a framework that has a whole construct for globals baked into it (Rails’ Current singleton), which I happen to enjoy using.
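For anyone unfamiliar, a minimal sketch of what that looks like (the attribute names and auth helper are illustrative):

```ruby
# app/models/current.rb
class Current < ActiveSupport::CurrentAttributes
  attribute :user, :request_id
end

# Set once per request, e.g. in a controller:
class ApplicationController < ActionController::Base
  before_action { Current.user = authenticate_user } # authenticate_user is a stand-in
end

# Read from anywhere (models, jobs, views) without threading it through arguments:
Current.user

# Attributes are reset automatically at the end of each request,
# which is what keeps this flavor of global tame.
```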
Sure, global state is a sharp knife, as I already said. It can inflict pain, but it’s also a very useful tool in certain scenarios (more than would equate to “almost [never]” IMO).
So your response aligns with how I took your original post, and with what I inferred “almost” really meant: basically never. My point is that I don’t agree with your take being a “rule.” While I understand your perspective, instead of saying basically never, I would say, “it depends.”