I have Luddite feelings reading about alternatives to Git.
As an industry we have soooo many languages, frameworks, tools, distros etc. It's like we're back before the metric system or standardized screw thread sizes.
I am really happy that at least for VCS, we have a nearly universal solution in Git, except for the big tech folks.
Sure, jj might solve some issues, but once it gets serious traction, all the tooling that does things like repo analysis will need to start supporting both git and jj. More docs will need to be created, and junior developers will need to learn both systems (as git is not going anywhere).
Given all the downstream effects, I do not think introducing another VCS is a net positive.
In my case, the shoddiness and thoughtlessness of Git's user interface piss me off so much that I just want it to be replaced. A good tool similar to Git may even explain Git's concepts better than Git or its documentation, which likes to discuss "some tree-ish refs".
I started using git when my employer still used svn. This was possible because `git-svn` was so good: it seamlessly let me use branches for development while still committing to svn's trunk. I think jj is trying to do something similar: jj-unaware tools work reasonably well in colocated jj repositories, and jj-unaware peers (CI tools, etc) work exactly as they did before.
I do agree that you can't really use jj without also knowing "a fair amount" about git, but notably you never need to use the git CLI to be an effective contributor to a github project, which is basically the same situation as with `git-svn` back before git got popular.
> More docs will need to be created, and junior developers will need to learn both systems (as git is not going anywhere).
Not true. Using jj is a choice made by a single developer. Nothing jj-specific escapes from a jj clone of a git repo. Junior devs can use vanilla git if they really want to.
> All the tooling that does things like repo analysis will need to start supporting both git and jj
Also not true, unless I misunderstand what you mean by repo analysis. Colocated jj repos are more or less supersets of git repos.
I see what you mean, and having looked a bit more deeply today, it's clear that jj is very compatible.
However, my point is that having two ways of doing this within a team is already confusing. What if one person writes a "this is how we work" document on the wiki and mentions some git, and the next person rewrites it for jj? It's extra things to think about. It's like supporting a team of developers split across Windows and Linux (Debian, Arch, Ubuntu, etc). Teams do it, and it's all possible, but it was nice that at least everyone used git.
If a product is 10x better than what's currently available, it will see rapid adoption. There was obviously something about git that made it MUCH better than the precursors and that's why it obliterated everything else.
I highly doubt that new tools will be 10x better than git. Maybe 20%?
One way I compare the git-to-jj transition (if it happens, or for whom it happens) to the svn-to-git transition is this: branching in svn was awful. It was heavyweight, and you were signing up for pain later down the road. Git made branching easy and normal, almost something you barely need to think about. jj does a similar thing for rebasing. For someone whose familiarity with git is clone, pull, push, merge, and creating branches (so basic, working, practical familiarity, where even `rebase -i` might be pushing the limits), what jj offers is a similar lift of a feature (rebase) from "scary" to "normal", much like what git did for branching compared to svn.
That's just one aspect of the whole thing, and of course if you're a git rebase wizard (or have tools that make you that) then this won't seem relevant. But I think for a lot of people this might be a salient point.
I'm happy to give it a try when I have some time, but I have 0 problems with git right now so it's not top of my list. My critique is also really not towards jj specifically, I'm just discussing the idea that git has extremely wide adoption now and that this has benefits :)
Git absolutely is a productivity drain and should be replaced, particularly as agentic coding takes over, since its footgun elements get magnified when you have a lot of agents working on one codebase at once. I dislike jj as a way forward because I don't think it goes far enough to justify the amount of friction that moving to it as an industry would entail.
The next generation of VCS should be atomic, with a proper database tracking atoms, and "plans" to construct repo states from atoms. A VCS built around these principles would eliminate branching issues (no branches, just atoms + plans), and you could construct relationships from plan edit distances and timestamps without forcing developers to screw with a graph. This would also allow macros to run on plans and transform atoms, enable cleaner "diffs" and make it easy to swap in and out functionality from the atom database instead of having to hunt through the commit graph and create a patch.
The downside of an atomic design like this is that you have to parse everything that goes into the VCS to get the benefits, but you can fall back to line-based parsing for text files, and you can store pointers to blobs that aren't parseable. I think the tradeoff in terms of DX and features is worth it, but getting people off git is going to be an epic lift.
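To give that some shape, here's a minimal sketch of what I mean by atoms and plans; every name is hypothetical and this is an illustration of the idea, not a working design:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Atom:
    """A parsed unit of code (function, class, config block) or, as a
    fallback, a line of text or a pointer to an unparseable blob."""
    atom_id: str   # content hash
    kind: str      # "function", "line", "blob", ...
    content: bytes

@dataclass
class Plan:
    """A recipe that assembles atoms into a full repo state."""
    plan_id: str
    created_at: float            # timestamp, used later for relationships
    atom_ids: list[str] = field(default_factory=list)

def materialize(plan: Plan, db: dict[str, Atom]) -> bytes:
    """Construct a repo state from the atom database. No branches anywhere:
    'checking out' is just picking a plan and materializing it."""
    return b"".join(db[a].content for a in plan.atom_ids)
```

Note that a "branch" never appears: two diverging lines of work are just two plans that happen to share most of their atoms.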
No, but the lift of replacing git is huge, so we shouldn't do it for a Python2->Python3 situation; we should have a replacement that really brings big wins.
I have no idea what problem this is supposed to solve. Where is the V in VCS here? How do you track the provenance/history of changes?
You may not be "forcing" developers to "screw with a graph" (what?) but you are forcing them to screw with macros (we're adding a built-in scripting layer to the VCS?) and these unfamiliar new concepts called atoms and plans.
> A VCS built around these principles would eliminate branching issues (no branches, just atoms + plans)
And it would introduce zero new confusing issues of its own?
> This would also [...] make it easy to swap in and out functionality from the atom database instead of having to hunt through the commit graph and create a patch.
This is a weird use case. Version control systems aren't typically used for storing and swapping around bits of functionality as a first-class, ongoing concern.
Not to mention you still need to figure out how atoms get stitched together. How do you do it without diff-based patches? No superior solution exists, AFAIK.
If you have a database of atoms and plans, the V is a row in a plan table, and you reconstruct history using plan edit distance, which is more robust than manually assigned provenance anyhow (it will retain some history for cherry-picked changes, for instance).
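As a toy illustration of reconstructing that history (again hypothetical, and greedy nearest-neighbor is just the simplest possible choice):

```python
import difflib

def plan_distance(a: list[str], b: list[str]) -> float:
    """0.0 means identical atom sequences, 1.0 means nothing shared."""
    return 1.0 - difflib.SequenceMatcher(None, a, b).ratio()

def reconstruct_history(plans: dict[str, list[str]], root: str) -> list[str]:
    """Order plans by greedily hopping to the nearest unvisited plan,
    instead of following manually assigned parent pointers. A cherry-picked
    change still sits close to its source, so some history is retained."""
    order, remaining = [root], set(plans) - {root}
    while remaining:
        nxt = min(remaining, key=lambda p: plan_distance(plans[order[-1]], plans[p]))
        order.append(nxt)
        remaining.remove(nxt)
    return order
```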
I'm sure there would be new issues, but I think they'd be at the management/ops level rather than the individual dev level, which is a win since you can concentrate specialization and let your average devs have better DX.
Is it a weird use case? Imagine you refactor some code, but then you realize that a function was being called in a slightly incorrect way after an earlier change (prior to the refactor, so the revert isn't trivial), and you have to go back and revert that change, let's say across 100 files to be fun, and let's say the code isn't perfectly identical. With git you probably have to do surgery to create a patch; with an atomic system you can easily macro this change, or you could even expose a UI to browse different revisions of a piece of code cleanly (which would blow up with git).
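Continuing the hypothetical sketch from above, the "macro" for that revert could be a trivial map over plans, no patch surgery required:

```python
def swap_atom(plan_atoms: list[str], bad_call: str, fixed_call: str) -> list[str]:
    """Replace every occurrence of the incorrect call-site atom with its
    corrected revision, across however many files reference it."""
    return [fixed_call if a == bad_call else a for a in plan_atoms]
```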
If I make a plan which causes the project to be identical to its state 5 years ago, the edit distance is zero, but in no way can you call that a measure of history.
You're still thinking in graphs. That plan would already exist in the database, you would just be making it a build target instead of whatever new plan was targeted before.
It seems as though you've come up with a model for representing source code repos in terms of a data model of your own design, solving problems of your own choosing. But what you describe is not a version control system in the generally agreed upon sense of the word.
Having used jj for the last year and a half, I hear you on the tooling issues. From IDEs, to plugins, to docs, to workflows, to LLMs, jj is not as well supported as git.
Despite that, it's still a net time-saver for me, and I suspect the same will be true for others. Git imposes constant overhead, mostly with its poor UI, but also by some of its unnecessary models (e.g., staging as a separate concept).
As far as I can tell, jj intends to be more like a cross-VCS frontend (with their own native backend coming at some point). If tooling supports jj, it would automatically support git, jj's native backend, Google's hybrid backend and any other backend a user could add.
I see it says Corsair, so I can't tell what exact model it is, but I did a relatively similar thing.
The reason being that if you keep the load under a certain wattage, the PSU will run in passive cooling mode. My rig will never reach 50% of what the Corsair SF750 Platinum can deliver, let alone under normal light-load circumstances. It spins up its fans only when the load reaches ~300 W or so.
Some people are very anal about any kind of noise coming out of their rigs. I personally undervolt everything to keep the fans at bay/minimum, and having extra headroom in the PSU department helps a lot.
Same. Part of the background info on this is that PSUs are usually most efficient at (or just under) 50% load. They are also more efficient at 240 V than 120 V, if you have the circuit. So the real efficiency can end up varying significantly, depending on how you use it. The efficiency is not usually much to write home about in terms of your electric bill, but it does help drive the ability for the PSU to cool itself passively.
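Quick back-of-the-envelope, using the 80 Plus Platinum 50%-load thresholds from memory (92% on 115 V, 94% on 230 V; treat the exact percentages as approximate), for a ~300 W DC load:

```python
load_w = 300  # DC load, roughly 40% of an SF750
for volts, eff in ((115, 0.92), (230, 0.94)):
    wall = load_w / eff
    print(f"{volts} V: ~{wall:.0f} W at the wall, ~{wall - load_w:.0f} W dissipated inside the PSU")
# 115 V: ~326 W at the wall, ~26 W dissipated inside the PSU
# 230 V: ~319 W at the wall, ~19 W dissipated inside the PSU
```

A handful of watts on the bill, but those dissipated watts are exactly the heat the PSU has to shed, which is what decides whether it can stay fanless.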
One of my best friends is a corneal surgeon at a university hospital, and does cataract surgery when there are other special complications. This is in Western Europe.
He told me about a work trip he took to India and how amazed he was by the routine, the efficiency, and the lack of waste in regular cataract surgery over there. Literally one doctor would handle 5x to 10x as many surgeries per day as their western counterparts. Where each surgery here requires full sterilization from scratch, there they kept their wrapping etc between surgeries, and had two beds side by side. The surgeon would do one surgery while the next patient was prepared on the other bed. Then he would turn around and do the next cataract surgery.
This sounds a bit like the Sunrise Hospital Emergency Room treating patients from the Mandalay Bay shooting in 2017.
> I said, “Bring all your patients together.” They brought them all towards me, and I was at the head of multiple beds, spiraling out like flower petals around its center. We pushed drugs on all of them, and they all got intubated, transfused, chest tubed, and then shuffled to Station 1.
> the respiratory therapist, said, “Menes, we don’t have any more ventilators.” I said, “It’s fine,” and requested some Y tubing. Dr. Greg Neyman, a resident a year ahead of me in residency, had done a study on the use of ventilators in a mass casualty situation. What he came up with was that if you have two people who are roughly the same size and tidal volume, you can just double the tidal volume and stick them on Y tubing on one ventilator.
Groovy in an emergency scenario, but I, a humble non-doctor, like the idea of fewer compromises on sterility as the de rigueur way of doing things for non-emergency surgeries.
That's a really awesome article. It shows how just thinking through the what-if scenarios comes in very handy, because you begin to note information that might be useful.
I have written a few disaster/emergency plans and they are often forgotten about almost immediately, to the point that people forget they even had them after a year.
At my old gig the on-call rota involved doing a randomized fire drill at the start of your on call week. Think "the site is slow" or "users are seeing 500 errors." This included a TUI program to walk you through the fire drill with your partner. That same TUI was the tool to walk you through debugging a real fire, including alerting the right slack channels. It was very much inspired by how commercial pilots handle failures using checklists, allowing you to do as much of the problem solving as possible before the problem.
It feels gross to compare it to ER docs saving lives in a mass casualty scenario but, well, I have a boring life but can relate to one aspect of theirs.
It's not the same, but when your business is dead because one of the sysadmins accidentally deleted all the users from the system, it's not all that different: many people's livelihoods are on the line. The choices are different, but some can still become fatal.
>> What he came up with was that if you have two people who are roughly the same size and tidal volume, you can just double the tidal volume and stick them on Y tubing on one ventilator.
As a patient, I'm not sure if I'd be comfortable with the doctor operating on me doing a speedrun.
Full sterilization before each surgery is a good thing. Better safe than sorry. Same for only having one patient in the operating room - reduced risk of contamination and human error.
It's not a binary choice between a Civil War surgeon's saw and an immaculate cleanroom; safety exists on a spectrum. Better safe enough than so incomprehensibly safe that the procedure is unaffordable and the doctor has 1/10th the experience and 100x as much paperwork!
Imagine, for a minute, that there's a physical lever at the FDA that controls the amount of cost, bureaucracy, and triple-checks that occur in hospitals. Think one of those steam engine throttles, with the big pawl release lever, it's set in front of an angle gauge with colors from red to green. One side is marked "Anarchy" and the other is marked "Better safe than sorry". Right now, that lever (or the metaphorical regulatory lever, the physical lever doesn't actually exist AFAIK) is as far over to the "safe" side as I can imagine it possibly being. The US lags behind on many modern medications and procedures, health care is unaffordable for somewhere between many and most, it's so miserably difficult to enter the field that we're not educating and training enough people, and the people that we do have trained are spending too much of their time doing paperwork and fighting the insurance system to take care of people. If you or a loved one have ever gotten a refusal of treatment or needed to wait for your disease to get worse before you can get care, you know how real this lever is.
If you ever get access to that lever, please, bring it at least one click back off the limiter. Maybe two. The potential harms that you imagine could be caused by contamination and human error are, at the moment, less than the actual harms happening right now due to lack of affordable access.
People are going blind, in pain, or dying right now because it's too far towards the "better safe than sorry" side. If you were on a fixed income and found yourself unable to afford a $8000 cataract surgery as the world slowly grew dim, you'd wish you could visit an efficient practice and get it done for $150, even if that meant there was another patient on the other side of the OR.
That's fair, except that sometimes there aren't enough resources (qualified surgeons, facilities, etc) for everyone to get that kind of care. I'd rather have cheap care that is 95% good enough than none at all. (For things I really need - I think a majority of what the healthcare industry does is counterproductive, but there is also plenty of stuff that's good, like cataract surgery for example.)
This one is a tough sell: in that regime the doctors get a significantly higher amount of practice, which might translate into mastery. On the other hand, I would expect post-procedure tracking and reporting to be significantly better in the West.
One person's waste is another person's more comfortable routine; one person's efficiency is another person's grueling day. (Even setting aside possible complications from not re-sterilizing, in this specific example.)
Our goal should be to have a comfortable amount of capacity in the system so that we don't need to sweat the details, not to hyper-optimize everyone into human machines.
> Aravind uses a two-pronged approach to addressing the lack of ophthalmologists: First, it enhances the efficiency of the existing staff. The hospital has an innovative “assembly line” operating theatre that allows a single surgeon to alternate between two fully prepared tables, each supported by dedicated instrument sets and nursing teams. This approach enables six to eight cataract operations per hour compared to an industry norm of one, while delivering clinical outcomes that even surpass those achieved in the UK’s National Health Service.
6 an hour isn’t unusual at a dedicated center in the US.
I had early cataract surgery at a “mill” here in NJ. There are similar centers all over. In talking both with my eye doctor and my cousin, who is an eye surgeon on the other side of the country, I was told it was better to go with a doctor who specialized in this surgery at a dedicated center (commonly called a mill). The rate of complications is lower because they have really dialed in the procedure and have seen everything. The first day I saw him, I was literally the last patient. He said he had operated on 80 eyeballs that day. I think it was a long day, with more than eight hours, but he does a few of those days a week at different centers. He has a large crew of support staff and multiple rooms to achieve this throughput. He did a good job. It was not inexpensive. He was driving a nice Porsche. He didn’t have time for a pleasant bedside chat.
I still don’t know why I had to get the surgery at 50. I haven’t had any other weird health issues like that. The one odd thing is that my grandfather was the first person to do cataract surgery in Lithuania, back in the 1920s. I always wonder if there was a link.
> Where each surgery here requires full sterilization from scratch, there they kept their wrapping etc between surgeries, and had two beds side by side.
I imagine the pace in India was also borne out of necessity - there are just so many more cases to go through there, that the surgeons had no choice but to adapt.
I only know it through the story my friend told me, so I really don't know any details.
That being said, these people are being treated for a health issue, most of them successfully. There might be more risks, but the benefits might outweigh them.
Really cool! I built the website for an antique maps dealer (Dat Narrenschip) when I was 15 or so and fell in love with antique maps. It's still up and running but now on Shopify.
Over the years I experimented a bit with leaflet.js and thought of overlaying maps too so you can navigate maps through time, but quickly realized it was super difficult. Kudos for setting this up!
If you want to expand to other regions, or chat, or get access to high-res scans, let me know. I think plenty of old maps sellers would love to sell their maps this way.
I'm creating Comper, an infinite canvas that has all your organization's code and documentation on it. If you zoom in, you can see the code; if you zoom out, you see the big picture. By giving everything a place on the map, it becomes easier to find your way through the landscape and understand the systems. Different modes can show you different things: code age, authorship (bus factor, is the person still with the company, etc), languages used, security issues. There's time-travel, think Gource for all software in your company, and maybe the most fun: a GeoGuessr for code. Select the repos for your team (or, if you feel confident, of the entire org), you get a snippet and have to guess where it is. The plan is for LLMs + tree-sitter to analyze all the code and show relations to other systems, databases etc.
My initial announcement got the top spot in "What are you working on? (February 2025)" https://news.ycombinator.com/item?id=43157056 but now I'm a lot further along: there's a website at https://comper.io and the company is getting incorporated within two weeks.
Last week I showed it off at the Feeling of Computing Meetup (fka Future of Coding). The recording is here and the reactions were extremely positive: https://www.youtube.com/watch?v=3-rg-FPZJtk
I'm opening the private beta soon, where I mix using the product with consultancy, to get better customer feedback. Not sure if that will work, but I don't have all the features yet for bottom-up adoption.
Uncaught (in promise) DOMException: The fetching process for the media resource was aborted by the user agent at the user's request.
Uncaught (in promise) DOMException: The media resource indicated by the src attribute or assigned media provider object was not suitable.
No video with supported format and MIME type found.
"Progressive compilation" would be more fun: The compiler has a candidate output ready at all times, starting from a random program that progressively gets refined into what the source code says. Like progressive JPEG.
Great! 80-20, Pareto principle, we're gonna use that! We are as good as done with the task. Everyone take phinnaeus as an example. This is how you get things done. We move quickly and break things. Remember our motto.
Yes, the way I described it is actually a sensible approach to some problems.
"Almost-in-time compilation" is mostly an extremely funny name I came up with, and I've trying to figure out the funniest "explanation" for it for years. So far the "it prints a random answer" is the most catchy one, but I have the feeling there are better ones out there.
Indeed, I often get the impression that (young) academics want to model the entire world in RDF. This can't work because the world is very ambiguous.
Using it to solve specific problems is good. A company I work with tries to do context engineering / adding guard rails to LLMs by modeling the knowledge in organizations, and that seems very promising.
The big question I still have is whether RDF offers any significant benefits for these way more limited scopes. Is it really that much faster, simpler or better to do queries on knowledge graphs rather than something like SQL?
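To make that concrete, here's the flavor of comparison I mean: the same two-hop question asked of an RDF graph (via rdflib) and of a plain relational store (stdlib sqlite3). The data and schema are made up for illustration:

```python
import sqlite3
from rdflib import Graph, Namespace

EX = Namespace("http://example.org/")
g = Graph()
g.add((EX.alice, EX.worksOn, EX.search))
g.add((EX.search, EX.partOf, EX.platform))

# SPARQL: the two hops are just two triple patterns
rows = g.query("""
    PREFIX ex: <http://example.org/>
    SELECT ?person WHERE { ?person ex:worksOn ?proj . ?proj ex:partOf ex:platform . }
""")
print([str(r.person) for r in rows])

# SQL: the same two hops as an explicit join
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE works_on (person TEXT, project TEXT);
    CREATE TABLE part_of (project TEXT, parent TEXT);
    INSERT INTO works_on VALUES ('alice', 'search');
    INSERT INTO part_of VALUES ('search', 'platform');
""")
print(db.execute("""
    SELECT w.person FROM works_on w
    JOIN part_of p ON p.project = w.project
    WHERE p.parent = 'platform'
""").fetchall())
```

The SPARQL keeps the same shape as the graph grows new relation types, while the SQL side needs a table and a join per relation; whether that flexibility justifies the operational cost is exactly what I'm unsure about.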
I think it's a journey a lot of us have gone on: it's an appealing idea until you hit a variety of really annoying cases, and where you end up depends on how you try to solve them. I'm maybe being unfair to the academic side, but this is how I've seen it (exaggerated, hopefully, to show what I mean).
The more academic side will add more complexity to the modelling, trying to model it all.
The more business side will add more shortcuts to simplify the modelling, trying to just get something done.
Neither is wrong as such but I prefer the tendency to focus on solving an actual problem because it forces you to make real decisions about how you do things.
I think being able to build up knowledge in a searchable way is really useful and having LLMs means we finally have technology that understands ambiguity pretty well. There's likely an excellent place for this now that we can model some parts precisely and then add more fuzzy knowledge as well.
> The big question I still have is whether RDF offers any significant benefits for these way more limited scopes. Is it really that much faster, simpler or better to do queries on knowledge graphs rather than something like SQL?
I'm very interested in this too; I think we've not figured it out yet. My guess is probably not, in that it may be easier to add the missing parts to non-RDF systems. I have a rough feeling that something like a well-linked wiki backed by data sources for tables etc. would be great for an LLM to use (ignoring cost, which for predictions across a year or more seems pretty reasonable).
LLMs can follow links around topics across arbitrary sites well; typically you only need more programmatic access for aggregations, or for rare links.
The academic / business divide is a great example of the correct model depending on what you want to do. The academic side wants to understand, the business side wants to take action.
For example, the Viable System Model[1] can capture a huge amount of nuance about how a team functions, but when you need to reorganize a dysfunctional team, a simple org chart and concise role descriptions are much more effective.