> The merge workflow is not inherently complicated or convoluted. It's just that git is.
What makes merging in git complicated? And what's better about darcs and mercurial?
(PS: Not disagreeing, just curious. I've worked in both Mercurial and git, and personally I've never noticed a difference, but that doesn't mean there isn't one.)
Darcs is a special case because it coevolved a predecessor/fork/alternative to CRDTs [0] (called "Patch Theory"). Darcs was slow because it supported a lot of auto-merging operations that git and mercurial can't, because they don't have the data structures for it. Darcs had a lot of smarts in its patch-oriented data structures, but sadly, in worst cases (which were too common), a lot of those smarts led to exponential blowups in performance. The lovely thing was that often, when Darcs came out of that slowdown, it had a great, smart answer. But a lot of people's source control workflows don't have time to wait on their source control system to reason through an O(n^2) or worse O(n^n) problem space just to find a CRDT-like "no conflict" solution, or even a minimal conflict that is a smaller diff than a cheap three-way diff.
[0] Where CRDTs spent most of a couple of decades shooting for the stars and assuming "Conflict-Free" was manifest destiny/fate rather than a dream in a cruel, pragmatic world of conflicts, Darcs was built for source control and so knew emphatically that conflicts weren't avoidable. We're finally at the point where CRDTs are starting to take seriously that conflicts are unavoidable in real-life data and are trying new pragmatic approaches to "Conflict-Infrequent" rather than "Conflict-Free".
At the end of the day, all of these have the user start with state A, turn that into state B, and then commit that. How that operation is stored internally (as a snapshot of the state or as a patch generated at commit time) is really irrelevant to the options that are available for resolving conflicts at merge time.
Auto-merging code is also a double-edged sword: just because you can merge something at the VCS level does not mean that the result is sensible at the format (programming language) or conceptual (user expectation) levels.
Having used darcs for a while, and still being a fan of it despite having followed everyone to git, I can say the data storage is not irrelevant: it does affect the number of conflicts you have to resolve and the information available to resolve them.
It wasn't just "auto-merging" that was darcs' superpower; it's how many things that today in git would need to be handled as merges darcs wouldn't even consider a merge, because its data structures don't treat them as one.
Darcs is much better than git at cherry-picking, for instance, where you take just one patch (commit) from the middle of another branch. Darcs could do that without "history rewriting", in that the patch (commit) would stay the same even though its "place in line" was drastically moved. That patch's ID would stay the same, any signatures it might have would stay the same, etc; just its order in the "commit log" would be different. If you later pulled the rest of that branch, that also wouldn't be a "merge", as darcs would already understand the relative order of those patches and "just" reorder them (if necessary), again without changing any of the patch contents (ID, signatures, etc).
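A toy Python sketch of the commutation idea behind that (file-level edits standing in for darcs' real patch types; nothing like the actual implementation): two patches that touch independent content apply cleanly in either order, so reordering them is pure bookkeeping and neither patch's identity has to change.

    # Toy "patches": (file, old_text, new_text) edits over a dict of file -> contents.
    def apply(tree, patch):
        fname, old, new = patch
        assert tree[fname] == old, "patch does not apply cleanly"
        updated = dict(tree)
        updated[fname] = new
        return updated

    tree = {"a.txt": "hello", "b.txt": "world"}
    p1 = ("a.txt", "hello", "hello!")
    p2 = ("b.txt", "world", "world?")

    # Applying p1 then p2 gives the same tree as p2 then p1: the patches commute.
    assert apply(apply(tree, p1), p2) == apply(apply(tree, p2), p1)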
Darcs also has a few higher-level patch concepts than just "line-by-line diffs", such as one that tracks variable renames. If you changed files in another branch making use of an older name of a variable and eventually merged it into a branch with the variable rename, the combination of the two patches (commits) would use the new name consistently, without a manual merge of the conflicting lines changed between the two, because darcs understands the higher-level intent a little better there (sort of) and encodes it in its data structures as a different thing.
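A toy Python sketch of that rename idea (darcs exposes it as the token-replace patch behind `darcs replace`; this simple find-and-replace stands in for the real data structure): the rename patch rewrites whatever text it is applied over, including a hunk from another branch that still used the old name.

    import re

    # Toy token-replace "patch": rename a whole identifier wherever it appears.
    def replace_token(text, old, new):
        return re.sub(rf"\b{re.escape(old)}\b", new, text)

    # Branch A renamed `count` -> `total`; branch B independently added a line using `count`.
    branch_b_hunk = "count += len(items)\n"
    merged = replace_token("count = 0\n" + branch_b_hunk, "count", "total")
    print(merged)   # both lines come out using `total`; no line-level conflict to resolve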
Darcs absolutely won't (and knows that it can't) save you from conflicts and manual merge resolution; there are still plenty of opportunities for those in any normal, healthy codebase, but it gives you tools to focus on the ones that matter most. Also, yes, a merge tool can't always verify that the final output is correct or builds (the high-level rename tool, for instance, is still basically a find-and-replace and can over-correct with false positives and miss false negatives). But the data storage is still quite relevant to the types of merges you need to resolve, how often they occur, and what qualifies as a merge operation in the first place.
Though maybe you're also trying to argue the semantics of what constitutes a "merge", a "conflict", and an "integration"? Darcs won't save you from "continuous integration" tools either, but it will work to save your continuous integration tools from certain types of history rewriting.
"At the end of the day" the state-of-the-art of VCS on-disk representation and integration models and merge algorithms isn't a solved problem and there are lots of data structures and higher level constructs that tools like git haven't applied yet and/or that have yet to be invented. Innovation is still possible. Darcs does some cool things. Pijul does some cool things. git was somewhat intentionally designed to be the "dumb" in comparison to darcs' "smart", it is even encoded in the self-deprecating name (from Britishisms such as "you stupid git"). It's nice to remind ourselves that while git is a welcome status quo (it is better than a lot of things it replaced like CVS and SVN), it is not the final form of VCS nor some some sort of ur-VCS which all future others will derive and resembles all its predecessors (Darcs predates git and was an influence in several ways, though most of those ways are convenience flags that are easy to miss like `git add -p` or tools that do similar jobs in an underwhelming fashion by comparison like `git cherry-pick`).
> Most apps are buggy & sloppy, including Finder, Calendar, Mail, Music, and Clock.
Relative to what? E.g., which GUI apps are better than these? I'd list all of those apps (except Music, and maybe Clock, which I don't use enough to judge) as some of the strongest GUI apps I use today (although Notes, Logic Pro, and Final Cut would be my top three apps Apple makes today, in that order). Note that this doesn't mean those apps are without flaws, but I'd be hard-pressed to name anything definitively better. Ableton Live/MaxMSP is probably the only non-Apple-ecosystem GUI app I can think of that I'd consider first rate (I might add Sublime Text/Sublime Merge, but I haven't used those enough to say definitively). Acorn, OmniGraffle, OmniOutliner, NetNewsWire, Transmit, Things, and BBEdit are all Apple-ecosystem apps I use regularly that I'd consider great, but I don't think any of those are definitively better than Apple's first-party apps. So I'm curious what software you're comparing Apple's apps to that you'd consider definitively better than them?
(Regarding Mail and Calendar, curious if you're using those with Gmail or Exchange. Mail/Calendar only work ok with those services.)
> Relative to what? E.g., which GUI apps are better than these?
Relative to the same apps 5+ years ago. I'm not claiming there are better GUI apps. I'm saying that the quality of the native apps has decayed, with prominent bugs or poor designs that have been around for years.
I do feel that one of the interesting things to happen to software in recent years is how most super-popular native applications (most of those developed by Apple) have nosedived in quality, while web applications have done a tremendous job maintaining their quality. Many web experiences are now superior to native experiences, certainly due to nosediving native quality, but also I suspect because the web has always standardized on one stack, HTML/CSS/JS, and we get to reap the benefits of 30+ years of startlingly stable infrastructural consistency.
This is what happens when the same hyper-smart people get to chip away at n% annual performance gains in V8 for 20 years. Apple, on the other hand, pushes major UI system refactors every ~10 years, disrupting all the hard-fought stability and optimizations that have been made to that point. Microsoft pushes new ways to build UIs, it seems, even more often.
> I do feel that one of the interesting things to happen to software in recent years is how most super-popular native applications (most of those developed by Apple) have nosedived in quality, while web applications have done a tremendous job maintaining their quality. Many web experiences are now superior to native experiences, certainly due to nosediving native quality, but also I suspect because the web has always standardized on one stack, HTML/CSS/JS, and we get to reap the benefits of 30+ years of startlingly stable infrastructural consistency.
I'd be curious which apps you'd consider the best examples of this high-quality experience? The only web app I even think is worth commenting on is Figma, which is easily the best web app I've ever used, but an app I'd only rank as mid-tier overall. VS Code is the closest analogy: VS Code is clearly a great app overall, in that it solves its need very effectively, but it's not an app that exudes quality the way the apps listed here do https://news.ycombinator.com/item?id=45252567 (as an example of how VS Code doesn't exude quality, note how when VS Code loads its UI elements for the first time, each element pops in separately, instead of the entire UI displaying instantly and simultaneously; this creates the impression of the app struggling to display its textual UI). I think Figma is slightly worse than VS Code, mainly because it's a web app, which presents all sorts of problems inherent to the platform, e.g.:
- Conflicts between keyboard shortcuts with the browser/web app split
- Bizarre tacked-on native-app affordances (e.g., breaking the back button and hijacking the right-click contextual menu, both to essentially disguise the fact that it's a web app?)
- Poor fit with the URL overall as a UI element (e.g., what does the URL mean when you're knee-deep in a single component in a larger document?)
In summary, the web's core UI elements just don't seem to fit well with desktop use cases. I can understand web apps being a nice compromise, e.g., collaborating on Google Docs/Figma is a good practical fit (the web helps with a lot of the challenge of collaborating on the same doc). But they never feel pleasant or high quality to me.
I do think web apps have become much better indeed.
I believe there are a few reasons:
- the reliance on clouds for many people, so it becomes more convenient to load your data in a browser window than to fetch it locally
- the mixing of UI elements and data makes for more flexible software, because the boundaries are not strong; for many applications, data can be UI as well
- the inherent lock-in and easier licence management for the developer. Web software can't be pirated in any meaningful way, so it's easier to require people to pay
- the large improvements in computing power that makes lower performance of the web software almost a non-issue for an increasingly large amount of applications
- and of course the major optimisations the stack has received, enabling better software over time
Apple is still pushing local-first software with native UIs in the name of privacy, but at the same time they are also pushing cloud stuff and are not very competitive cost-wise.
There are still some use cases where local-first is necessary, like video editing with its large files, which Apple targets quite well. But it's not clear how long that is going to last. Even residential fiber connections have increased in speed quite a lot, and if you can ingest into a remote server fast enough, it won't matter that much whether it's local. The UI just has to stream the data view in real time fast enough, which is already fine with most fiber connections.
The major issue is latency, but that is becoming much better with router upgrades and data centers placed at key geographical locations to serve most people well.
I would root for the "Apple way" if they kept the "personal computer that you fully control" philosophy, but instead they are pushing "services" stuff for revenue just like the others. You end up paying more for not many benefits.
This is the problem with the iPhone as well: in theory you could use it without iCloud, but in practice many of the features rely heavily on it, and they have made zero investment in the local-first use case. This can be seen in the absolutely abysmal transfer speed of a wired iPhone (still USB 2 for most models, like WTF) and the generally terrible syncing if you don't use iCloud. We are very far from the iLife promise of Steve Jobs, and it just doesn't make any sense to overpay for them to hold all the control.
There's some truth to the software getting worse, but it's also a different world (e.g., I see the decline as being a result of supporting different devices [and especially syncing between them]). But my point is more that still making the best GUI software around is pretty good for a company of Apple's size and complex priorities! (I don't think Apple makes software that's better than everyone else, but I do think most of their core apps are on par with the best.)
> Delight is overblown, in my opinion. I think most of the people truly delighted by fancy animation are just other designers.
Appreciating delight (for its own sake) in software design I'd consider a core trait of (old-school?) Apple fans. E.g., lamenting the decline of whimsy in the post-Jobs era.
I think there's truth to it being relatively niche, appreciating delight that is, but it's certainly not confined just to designers. E.g., like I'm saying here, a core trait of Apple fans is appreciating these kinds of details.
I should've been more specific. I was more so referring to the trend of designers claiming they're adding delight, when in reality they're just muddying the experience with visual effects that might be striking but lack any depth or improvement to the core experience.
I absolutely do believe that software can be delightful. Linear comes to mind as an example - there are lots of little nuances to their interactions and it just feels so good to use.
This is such a funny example, because language is the main way that we communicate with LLMs. Which means you can tie both of your points together in the same example: if you take a scene and describe it in words, then have an LLM reconstruct the scene from the description, you'd likely get a scene that looks very different from the original source. This simultaneously makes both your point and the point of the person you're responding to:
1. Language is an abstraction and it's not deterministic (it's really lossy)
2. LLMs behave differently than the abstractions involved in building software, where normally if you gave the same input, you'd expect the same output.
Yes, most abstractions are not as clean as leak free functional abstractions. Most abstractions in the world are leaky and lossy. Abstraction was around long before computers were invented.
One argument for abstraction being different from delegation: when a programmer uses an abstraction, I'd expect the programmer to be able to work without the abstraction, if necessary, and also to be able to build their own abstractions. I wouldn't have that expectation with delegation.
> The vast majority of programmers don't know assembly, so can in fact not work without all the abstractions they rely on.
The problem with this analogy is obvious when you imagine an assembler generating machine code that doesn't work half of the time and a human trying to correct that.
I mean, it's more like 0.1% of the time, but I've definitely had to do this in embedded programming on ARM Cortex M0-M3. Sometimes things just didn't compile the way I expected. My favorite was when I smashed the stack and overflowed ADC readings into the PC and SP, leading to the MCU jumping completely randomly all over the codebase. Other times it was more subtle things, like the compiler optimizing away some operation that I needed to not be optimized away.
> Do you therefore argue programming languages aren't abstractions?
Yes, and no.
They’re abstractions in the sense of hiding the implementation details of the underlying assembly. Similarly, assembly hides the implementation details of the cpu, memory, and other hw components.
However, with programming languages you don't need to know the details of the underlying layers except in very rare cases. The abstraction that programming languages provide is simple, deterministic, and well documented. So, in 99.999% of cases, you can reason based on the guarantees of the language, regardless of how those guarantees are provided.
With LLMs, the relation between input and output is much more loose. The output is non-deterministic, and tiny changes to the input can create enormous changes in the output seemingly without reason. It’s much shakier ground to build on.
I do not think determinism of behaviour is the only thing that matters for evaluating the value of an abstraction - exposure to the output is also a consideration.
The behaviour of assignment in Python is certainly deterministic and well-documented, but depending on context you can end up with either a copy (2x memory consumption) or a shared reference (+64 bits of memory). Values that were previously shared references can also suddenly become copies after later manipulation. Do you think this through every time you use =? The consequences of this can be significant (e.g. operating on a large file in memory); I have seen SWEs make errors in FastAPI multipart upload pipelines that have increased memory consumption by 2x or 3x in this manner.
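For what it's worth, a minimal Python sketch (toy names, a bytearray standing in for a large request body) of the two outcomes being described, a binding that only adds another reference versus an operation that silently duplicates the whole buffer:

    import sys

    a = bytearray(100 * 1024 * 1024)   # ~100 MB buffer, stand-in for a large upload

    b = a            # plain assignment: another name for the same object, ~no extra memory
    print(b is a)    # True

    c = bytes(a)     # an explicit copy: a second ~100 MB allocation
    print(c is a)    # False
    print(sys.getsizeof(a), sys.getsizeof(c))   # two buffers of roughly the same size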
Meanwhile I can ask an LLM to generate me Rust code, and it is clearly obvious what impact the generated code has on memory consumption. If it is a reassignment (b = a) it will be a move, and future attempts to access the value of a would refuse to compile and be highlighted immediately in an IDE linter. If the LLM does b = &a, it is clearly borrowing, which has the size of a pointer (+64bits). If the LLM did b = a.clone(), I would clearly be able to see that we are duplicating this data structure in memory (2x consumption).
The LLM code certainly is non-deterministic; it will be different depending on the questions I asked (unlike a compiler). However, in this particular example, the chosen output format/language (Rust) directly exposes me to the underlying behaviour in a way that is both lower-level than Python (what I might choose for writing quick code myself) yet also much, much more interpretable as a human than, say, a binary that GCC produces. I think this has significant value.
Unrelated to the gp post, but aren't LLMs more like a deterministic chaotic system than a "non-deterministic" one? "Tiny changes to the input can change the output quite a lot" is similar to the "extreme sensitivity to initial conditions" property of a chaotic system.
I guess that could be a problematic behavior if you want reproducibility à la (relatively) reproducible abstractions like compilers. With LLMs, there are too many uncontrollable variables to precisely reproduce a result from the same input.
The vast majority of programmers could learn assembly, most of it in a day. They don’t need to, because the abstractions that generate it are deterministic.
This is a tautology. At some level, nobody can work at a lower level of abstraction. A programmer who knows assembly probably could not physically build the machine it runs on. A programmer who could do that probably could not smelt the metals required to make that machine. etc.
However, the specific discussion here is about delegating the work of writing to an LLM, vs abstracting the work of writing via deterministic systems like libraries, frameworks, modules, etc. It is specifically not about abstracting the work of compiling, constructing, or smelting.
This is meaningless. An LLM is also deterministic if configured to be so, and any library, framework, module can be non-deterministic if built to be. It's not a distinguishing factor.
They are probabilistic. Even running them on different hardware yields different results. And the deltas compound the longer your context is and the more tokens you're using (like when writing code).
But more importantly, always selecting the most likely token traps the LLM in loops, reduces overall quality, and is infeasible at scale.
There are reasons that literally no LLM that you use runs deterministically.
With temperature set to zero, they are deterministic if inference is implemented with deterministic calculations.
Only when you turn the temperature up do they become probabilistic for a given input. If you take shortcuts in implementing the inference, then sure, rounding errors may accumulate and prevent that, but that is not an issue with the models; it is an issue with your choice of how to implement the inference.
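A toy Python sketch of what temperature-zero decoding means, greedy argmax over the next-token scores, setting aside the floating-point and batching nondeterminism of real inference stacks (the logits here are made up):

    import numpy as np

    rng = np.random.default_rng(0)
    logits = np.array([2.0, 1.0, 0.5])   # toy next-token scores

    def sample(logits, temperature):
        if temperature == 0:
            return int(np.argmax(logits))          # greedy: always the same token
        probs = np.exp(logits / temperature)       # softmax with temperature
        probs /= probs.sum()
        return int(rng.choice(len(logits), p=probs))

    print([sample(logits, 0) for _ in range(5)])    # deterministic: [0, 0, 0, 0, 0]
    print([sample(logits, 1.0) for _ in range(5)])  # sampled: can differ call to call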
To address your specific point in the same way: when we're talking about programmers using abstractions, we're usually not talking about the programming language they're using; we're talking about the UI framework, networking libraries, etc. they're using. Those are the APIs they're calling with their code, and those are all abstractions implemented at (roughly) the same level of abstraction as the programmer's day-to-day work. I'd expect a programmer to be able to re-implement those if necessary.
> I wouldn't have that expectation with delegation.
Managers tend to hire sub-managers to manage their people. You can see this with LLMs as well: people see "Oh, this prompting is a lot of work, let's make the LLM prompt the LLM."
Note, I'm not saying there are never situations where you'd delegate something that you can do yourself (the whole concept of apprenticeship is based on doing just that). Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.
I guess I'm not 100% sure I agree with my original point though: should a programmer working on JavaScript for a website's frontend be able to implement a browser engine? Probably not. But the original point I was trying to make is that I would expect a programmer working on a browser engine to be able to re-implement any abstractions they're using in their day-to-day work if necessary.
The advice I've seen with delegation is the exact opposite. Specifically: you can't delegate what you can't do.
Partially because if all else fails, you'll need to step in and do the thing. Partially because if you can't do it, you can't evaluate whether it's being done properly.
That's not to say you need to be _as good_ at the task as the delegee, but you need to be competent.
For example, this HBR article [1]. Pervasive in all advice about delegation is the assumption that you can do the task being delegated, but that you shouldn't.
> Just that it's not an expectation, e.g., you don't expect a CEO to be able to do the CTO's job.
I think the CEO role is actually the outlier here.
I can only speak to engineering, but my understanding has always been that VPs need to be able to manage individual teams, and engineering managers need to be somewhat competent if there's some dev work that needs to be done.
This only happens as necessary, and it obviously should be rare. But you get in trouble real quickly if you try to delegate things you cannot accomplish yourself.
I think what you're trying to reference is APIs or libraries, most of which I wouldn't consider abstractions. I would hope most senior front-end developers are capable of developing a date library for their use case, but in almost all cases it's better to use the built in Date class, moment, etc. But that's not an abstraction.
Summary of the original author's (Alex Russell) background from his about page (https://infrequently.org/about-me/) (it seemed odd to me to just focus on his most recent role):
> Microsoft Partner Product Architect on the Edge team and Blink API OWNER. It is my professional mission to build a web that works for everyone.
> From 2008-2021, I was a software engineer at Google working on Chrome, Blink, and the web platform. I served as the first Web Standards Tech Lead for Chrome (2015-2021) and was a three-time elected member of the W3C Technical Architecture Group (2013-2019) and a representative to TC39 for a decade.
On the IDE marketshare question, the Stackoverflow Developer Survey asks questions like this and I always jump to that section. Here's my comment on HN summarizing the most recent survey https://news.ycombinator.com/item?id=44725015
I also think your observation about VS Code's rise forcing JetBrains into a corner is spot on.
On a side tangent, I find it odd that the whole VS Code phenomenon is under-analyzed. Before VS Code, text editors and IDEs were one of the healthiest software categories around, with the market leader hovering around 35%, which is great for competition enforcing quality (DAWs are still like this today). Now text editors have become more like the Adobe suite, where there's an 800 lb gorilla doing whatever it wants and everyone else is competing over scraps (if you say VS Code is actually good though, Photoshop was amazing when it made its rise too). Developers just let this happen to their core tool without anyone really talking about it?
I honestly don't understand the popularity of VS Code at all. If I wanted to cobble together a development environment from scratch, I'd just go use Emacs. Hell, I'd end up with a better product than a bunch of buggy VS Code plugins that constantly act up and regularly break.
While I wouldn't consider my employer a '.Net shop' anymore, it's a fact that it still remains the most used language across the organization. Many of my coworkers have ditched Visual Studio, jumped to VS Code, gotten pissed off at it after a while, tried Rider, and eventually switched.
If anything, I think VS Code is in an incredibly unhealthy state. Sure, Microsoft initially managed to pull a Chrome with it and eat the lunch of a lot of more basic text editors, but many people are getting frustrated with the ecosystem. Really, between Spacemacs and Neovim, actual community-driven projects are coming out with much more polished and better integrated tools than Microsoft - partly thanks to them pushing everyone and their dog to build language servers. I'm sticking to proper IDEs for basically everything but the occasional itch I have to do something in lisp, but hot damn does what you get out of the box thanks to LSP support and tree-sitter make VS Code less appealing to people willing to make the switch.
Don't know the context for that snippet you're sharing, but the issue for OSC 52 support for clipboard syncing is marked as `Open` in the Mosh repo https://github.com/mobile-shell/mosh/issues/637
To be fair, I haven't tested mosh without tmux; I always have tmux running. So I guess the situation is that it's possible to make it work with tmux, but maybe mosh on its own doesn't support OSC 52. Which for me is good enough.
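For anyone curious, the sequence being discussed is just an escape code the remote side prints; whether it reaches your local clipboard depends on every hop (terminal, tmux's set-clipboard option, and, per the linked issue, mosh) passing it through. A minimal Python sketch of an OSC 52 clipboard write:

    import base64
    import sys

    def osc52_copy(text: str) -> None:
        # OSC 52: ESC ] 52 ; c ; <base64 payload> BEL -- asks the terminal to set the clipboard
        payload = base64.b64encode(text.encode()).decode()
        sys.stdout.write(f"\033]52;c;{payload}\a")
        sys.stdout.flush()

    osc52_copy("hello from the remote host")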
Doesn't SSH drop if the connection drops? I keep my connections with Mosh/Eternal Terminal open for weeks or even months at a time, even when devices are going in and out of connectivity many times.