
You're not the only ones, but I can't understand this approach. Do people then never read the version history? It must be impossible to understand a commit's diff with all the changes squashed together.


Not the OP, but I think the point of squashing every PR is that the reviewers and CI exercise the whole PR, not the individual commits. If you have a PR with 5 commits, 4 of which break the build and the last one fixes it, then merging that will be a problem if you need to git bisect later.

So the idea is really "what's the point of having a history full of broken state?".
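To make the bisect concern concrete, here's a sketch (not from the thread; the tag name is hypothetical) of what happens when bisect lands on one of those broken commits:

    git bisect start
    git bisect bad HEAD        # the current state has the regression
    git bisect good v1.2.0     # hypothetical last known-good tag
    # if bisect checks out one of the 4 commits that don't build,
    # all you can do is skip it, which widens the suspect range:
    git bisect skip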

> It must be impossible to understand commits' diffs with the changes all squashed together.

This would be a hint that your PR was too big and addressing more than one thing.


> So the idea is really "what's the point of having a history full of broken state?".

I rebase commits so they don't break the build but the history remains clean and incremental. Selective fixups and so on aren't the same as squashing everything into a single commit.
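For instance, a selective fixup might look like this (the commit id abc123 is hypothetical):

    # record a fix aimed at the specific commit that broke the build
    git commit --fixup=abc123
    # fold it back into that commit; the rest of the history stays intact
    git rebase -i --autosquash abc123^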

> This would be a hint that your PR was too big and addressing more than one thing.

I don't think so. Sure, that can be true, but squashes can also simply lose vital history. Suppose you remove a file and then replace it with code copied and modified from another file. If you then squash that, all Git will say is you made a massive edit to the file.


> I rebase commits so they don't break the build but the history remains clean and incremental.

Sure, and that's fine. The idea of the squash workflow is that they don't expect that. It's just different, and that's the rationale behind it :-).

> all Git will say is you made a massive edit to the file.

Which IMO is exactly what happened in this case xD. But again... whatever floats your boat, I was just talking from the point of view of a squash workflow.


> If you have a PR with 5 commits, 4 of which break the build and the last one fixes it, then merging that will be a problem if you need to git bisect later.

And the answer is that you don't; each commit is individually testable and reviewable. Changes requested by reviewers are squashed into the commits and then merged into the project. Unfortunately, while the git command line has "range-diff" to ease review with this workflow, neither GitHub nor GitLab has an equivalent in their UI.
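For reference, a typical invocation compares the branch before and after the requested changes were applied (branch name hypothetical):

    # old tip (from the reflog) vs. the rewritten tip
    git range-diff origin/main my-feature@{1} my-feature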


Well, I obviously meant that "workflows that squash the commits in a PR are workflows where each individual commit is not tested/reviewed separately".

Of course, if your workflow is different, then... well it is different. Doesn't make the "squash workflows" irrational.

Disclaimer: I don't squash PRs.


> And the answer is that you don't; each commit is individually testable and reviewable.

How does this work in practice? Is every single atomic commit reviewed by someone? When do they review each of those commits? How many commits typically go into a PR?

> Changes requested by reviewers are squashed into the commits and then merged into the project.

So a reviewer finds the appropriate commit that their comment applies to, and then changes the actual commit itself? Who is the author of the commit at that point?

I'm trying to understand what you're talking about, because you seem to have something figured out, for a problem that every team I've worked on struggles with.


> Is every single atomic commit reviewed by someone? When do they review each of those commits? How many commits typically go into a PR?

1) Yes. 2) When a PR is submitted. 3) It can be a lot for a huge project-wide refactoring, but generally I would say 1 to 5 is typical and up to 20 is not strange.

> So a reviewer finds the appropriate commit that their comment applies to, and then changes the actual commit itself?

No, the author applies the requested changes and force-pushes once they have all been applied.
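In git terms, that's roughly (remote and base branch hypothetical):

    # rework the commits in place to address the review comments...
    git rebase -i origin/main
    # ...then replace the branch on the remote with the new version
    git push --force-with-lease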

> because you seem to have something figured out

Thanks! But it's not me: it's how Linux has used git from the beginning, for example. In fact it's the only workflow used by projects that still use email instead of GitHub/GitLab PRs, but (trading some old pain for new pain) it is possible to use it even with the latter. The harder part is matching the review comments to the new version of the patch, which is actually pretty easy to do with emails.

It's quite some work and there's some learning curve. But depending on the project it can be invaluable when debugging. It depends a lot on how much the code can be covered by tests, in particular.


Yeah, I think that the email workflow (which I love) is more adapted to this!


I can only conclude that people who think squashing a work item into a single commit is great have never had to do serious bug hunting relying on commit history for context, nor have they ever moved forges and lost all the context that was supposedly "in the PR anyway".


I think I've done all those things except forges. I don't know what that one is. I still like squashing. I've even become the git expert on my team. With squashed PR resolutions, I can more reliably use bisect. Many individual commits were never actually meaningful in the first place.


GitLab, Bitbucket, GitHub, etc are forges


I think it's a bit of a limited conclusion. Maybe they really just make small PRs that make sense, and maybe they rewrite the commit message into something useful when squashing.


My employer has all PRs merged by a bot once they're approved. The bot takes the PR description and uses that as the commit message. The PR is the unit of change that gets reviewed, not the commit. This makes for a nice linear bisectable history of commits (one per PR) with descriptions, references to issues on our tracker, etc. And no need to worry about force pushing, rebasing, etc, unless you want to do so.

Of course it's got the same end result as doing an interactive rebase & combining all the in-progress commits into a single reviewable unit of change with a good commit message, but it's a bit more automatic.
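The manual version alluded to here would be something like (branch and file names hypothetical):

    git checkout main
    git merge --squash my-feature
    # a single commit lands on main, with the PR description as its message
    git commit -F pr-description.txt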


It's not by the same person — sta.li was by Anselm R Garbe. It's more like a spiritual successor.


Ah, thank you for the clarification. I read the comment I linked to too quickly and/or without thinking enough! Cheers!


Thanks for sharing!

  ; these produce an error, since `b` isn't defined when the body of `a` is compiled
  let a = \x -> (b x * 3),
      b = \x -> (a x / 2)
It surprised me when this was called out, given that both a and b are defined in the one 'let'. Was there a specific reason you decided not to treat it as a 'letrec'?


Yeah I went back and forth on this a little bit. If all variables are defined at the beginning of the `let` expression, you can't rebind a variable like this (assuming some previous `x`):

    let x = x + 1
because the new `x` shadows the old one before `x + 1` is compiled. But if you're defining a recursive function, like this:

    let foo = \n -> foo(n + 1)
you need `foo` to be defined before you compile the lambda body, or else the compiler will think you're referencing an undefined variable.

At one point I had an exception where, if the value of a variable being defined is a lambda, it defines the variable first to allow recursion, but not otherwise. But this felt inconsistent and kind of gross. Instead, I decided to have `def` expressions behave like that, and disallow recursion in `let`. `def` is always a function definition, so you'd almost always want that behavior, and I felt that since it has a different syntax, slightly different assignment semantics wouldn't be so bad.

For mutual recursion you have to go a little further, and find all the definitions in the block before you start compiling anything. So `def` expressions are also hoisted to the top of the block and pre-defined at the beginning. This felt ok to me since `def` expressions look like big, formal, static definitions anyway, and it didn't seem surprising that they would have whole-block scope.


For my hobby language, I figured let rec should be the default and let nonrec the marked case, for exactly the rebinding application. However, it's been over a year since I came to that conclusion, and I still haven't gotten around to implementing the nonrecursive path. (But: mine is very no-moving-parts immutable, so YMMV.)


By the way, why do you need a backslash to define a lambda? Apparently it doesn't give any additional information. All you need to know it's a lambda is the presence of the -> operator. Is that a way to make the compiler faster?


That was a pretty late change to the syntax actually — I really, really wanted Javascript-style lambdas but with a skinny arrow, like `(x, y) -> x + y`. But it made parsing and compiling really finicky, so I settled on the backslash syntax, which I've seen in a couple other languages. It almost looks like a "λ"!


Alternatively, we can go the other way, and dispense with the arrow:

  \x \y x + y
rather closer to the lambda notation in maths, and works well for compact expressions such as S from the SKI combinator calculus:

  \x \y \z x z (y z)
This looks much better with syntax highlighting (hard to demonstrate here of course), being both trivial to implement and informative: just have the backslashed tokens in blue or whatever.

Cassette looks really nice - great intro page, and a great name too! Making something simple is much harder and more valuable than making something complicated.


Lurrus into Dead Weight — that's a nice start.


I think this is the one thing I feel BitKeeper does better than Git. Git can get confused about where a file came from, for moves but especially for copies, and then the version history just ends, even if you ask it to try to follow along. BitKeeper, on the other hand, keeps the moves and copies as part of the history, so you can always trace a file back to its origin, no matter how circuitous the route.


git log has --follow but unfortunately it only works when specifying a single file and not e.g. a whole directory.
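For example (paths hypothetical):

    # fine: follow a single file across renames
    git log --follow -- src/lexer.c

    # but not a whole directory: git rejects multiple pathspecs
    # (or more than one file) together with --follow
    git log --follow -- src/

    # copy detection in blame can also help trace copied code
    git blame -C -C -C src/lexer.c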


I looked around and found an AMA [1].

> Python, using the Twisted framework for networking.

> Omegle runs on just one server: a Linode 2880. It used to be on a 720, which was very close to sufficient. No database at the moment, but if it ever needs one, I'll most likely use PostgreSQL.

[1]: https://www.reddit.com/r/IAmA/comments/9vbd7/i_made_omegleco...


The link to the proof is dead :(



When I was looking for work back in 2016 there was a company who advertised their way of working as "infrared", if I remember correctly. It was all about total openness and autonomy. For example, they asked one employee to decide how many hours everyone should work per day, and they decided on something, which was then applied company-wide. This was apparently "democracy in action". Quite odd. I think they've since dropped that whole thing.


You know another project with many of its source files in the top-level directory? https://github.com/git/git


It's in bash(1) under "Process Substitution" — https://manpages.debian.org/bookworm/bash/bash.1.en.html#Pro...
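A classic example (file names hypothetical): bash replaces each <(...) with a /dev/fd/N path connected to the inner command's output, so you can compare two outputs without temp files:

    diff <(sort file1.txt) <(sort file2.txt)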


> Phispher - use javascript to show a phisphing page

Do you mean phishing? I searched for "Phispher" thinking it might be a particular tool or something, but nothing turned up.


I’m guessing it’s the noun meaning “a person who phishes” or “a person who uses phishing”?


So why not phisherman, and extend the metaphor further from phishing and phisher tools?

