At the end of the day, all I really care about is consistency. It's annoying to switch between projects which use different branch names to describe the same thing.
That being said, this is a dumb reason to introduce inconsistency.
I had this problem too until I forced myself to use the `gcm` alias from oh-my-zsh's Git plugin. IIRC it checks out master if it exists, otherwise main. It has almost entirely removed this distinction from my life.
It's kinda funny how uv is written in Rust and many Python libraries where performance is expected to matter (NumPy, Pandas, PyTorch, re, etc.) are implemented in C. Even if you call into fast code from Python you still have to contend with the GIL which I find very limiting for anything resembling performance.
Python's strong native story has always been one of its biggest draws: people find it ironic that so much of the Python ecosystem is native code, but it plays to Python's strength (native code where performance matters, Python for developer joy/ergonomics/velocity).
> Even if you call into fast code from Python you still have to contend with the GIL which I find very limiting for anything resembling performance.
It depends. A lot of native extension code can run without the GIL; the normal trick is to "detach" from the GIL for critical sections and only reconnect to it once Python needs to see your work. PyO3 has a nice collection of APIs for holding/releasing the GIL and for detaching from it entirely[1].
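This isn't PyO3, but the same mechanism is visible from pure Python. CPython's `hashlib` is native code that drops the GIL while hashing buffers larger than ~2 KiB, so a sketch like this genuinely runs its threads in parallel even on a standard GIL build (buffer size and thread count here are arbitrary demo choices):

```python
import hashlib
import threading

data = b"x" * (32 * 1024 * 1024)  # 32 MiB hashed by each thread
results = [None] * 4

def work(i):
    # the C implementation releases the GIL for large inputs,
    # so the hashing itself runs concurrently across threads
    results[i] = hashlib.sha256(data).hexdigest()

threads = [threading.Thread(target=work, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# all four threads hashed the same buffer and agree
print(len(set(results)))  # 1
```

Pure-Python bytecode still serializes on the GIL; it's only the time spent inside the native hash loop that overlaps.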
I didn't know about detaching from the GIL... I'll look into that.
> native code where performance matters, Python for developer joy/ergonomics/velocity
Makes sense, but I guess I just feel like you can eat your cake and have it too by using another language. Maybe in the past there was a serious argument to be made for the productivity benefits of Python, but I feel like that is becoming less and less the case. People may slow down (a lot) writing Rust for the first time, but writing JavaScript or Groovy or something should be just as simple while being more performant, doing multi-threading out of the box, and generally not requiring you to reach for other languages for performance-critical sections as often. The primary advantage Python has in my eyes is that there are a lot of libraries. And why are there so many libraries written in Python? I think it's because Python is the number 1 language taught to people who aren't specifically pursuing computer science/engineering or something in a closely related field.
Yes, I think Python is excellent evidence that developer ecosystems (libraries, etc.) are paramount. Developer ergonomics are important, but I think one of the most interesting lessons from the last decade is that popular languages/ecosystems will converge onto desirable ergonomics.
Python is the ultimate (for now) glue language. I'd much rather write a Python script to glue together a CLI utility & a C library with a remote database than try to do that all in C or Rust or BASH.
In my analysis, the lion's share of uv's performance improvement over pip is not due to being written in Rust. Pip just has horrible internal architecture that can't be readily fixed because of all the legacy cruft.
And for numerical stuff it's absolutely possible to completely trash performance by naively assuming that C/Rust/Fortran etc. will magically improve everything. I saw an example in a talk once where it superficially seemed obvious that the Rust code would implement a much more efficient (IIRC) binary search (at any rate, some sub-linear algorithm on an array), but making the data available to Rust as a native Rust data structure required O(N) serialization work.
> So they should be able to get similar results in Python then?
I'm making PAPER (https://github.com/zahlman/paper) which is intended to prove as much, while also filling some under-served niches (and ignoring or at least postponing some legacy features to stay small and simple). Although I procrastinated on it for a while and have recently been distracted with factoring out a dependency... I don't want to give too much detail until I have a reasonable Show HN ready.
But yeah, a big deal with uv is the caching it does. It can look up wheels by name and find already-unpacked data, which it hard-links into the target environment. Pip unpacks from the wheel each time (which also entails copying the data rather than doing fast filesystem operations), and its cache is an HTTP cache, which just intercepts the attempt to contact PyPI (or whatever other index is specified).
Python offers access to hard links (on systems that support them) in the standard library. All the filesystem-related stuff is already implemented in C under the hood, and a lot of the remaining slowness of I/O is due to unavoidable system calls.
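A minimal sketch of the hard-link trick in stdlib Python (all the paths and file names here are made up for the demo): "installing" a cached file into a target directory without copying any data, the way uv links unpacked wheel contents into an environment.

```python
import os
import pathlib
import tempfile

tmp = pathlib.Path(tempfile.mkdtemp())

# pretend this is an already-unpacked file sitting in the package cache
cached = tmp / "cache" / "module.py"
cached.parent.mkdir()
cached.write_text("print('hello')\n")

# "install" it into the environment via a hard link: no data is copied,
# both names point at the same inode on disk
site = tmp / "site-packages"
site.mkdir()
installed = site / "module.py"
os.link(cached, installed)

print(os.stat(installed).st_ino == os.stat(cached).st_ino)  # True (POSIX)
print(os.stat(installed).st_nlink)  # 2
```

The fallback story matters in practice: hard links fail across filesystems (the cache and the environment must be on the same volume), so a real installer needs a copy path as well.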
Another big deal is that when uv is asked to precompile .pyc files for the installation, it uses multiple cores. The standard library also has support for this (and, of course, all of the creation of .pyc files in CPython is done at the C level); it's somewhat naive, but can still get most of the benefit. Plus, for the most part the precompiled files are also eligible for caching, and last time I checked even uv didn't do that. (I would not be at all surprised to hear that it does now!)
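The stdlib version of that parallel precompilation is `compileall`, which fans files out across worker processes via its `workers` parameter. A small sketch (the throwaway tree and worker count are demo choices; uv would use every core):

```python
import compileall
import pathlib
import tempfile

def precompile_tree(n_files=8, workers=2):
    """Byte-compile a small throwaway package tree with a process pool."""
    tmp = pathlib.Path(tempfile.mkdtemp())
    for i in range(n_files):
        (tmp / f"mod{i}.py").write_text(f"X = {i}\n")
    ok = compileall.compile_dir(tmp, quiet=1, workers=workers)
    pyc_count = len(list((tmp / "__pycache__").glob("*.pyc")))
    return ok, pyc_count

if __name__ == "__main__":
    ok, count = precompile_tree()
    print(bool(ok), count)  # True 8
```

(`compile_dir` spins up a process pool, hence the `__main__` guard; on a spawn-based platform running it at module import would misbehave.)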
> It totally depends on the problem that you're trying to solve.
My point was more that even when you have a reasonable problem, you have to be careful about how you interface to the compiled code. It's better to avoid "crossing the boundary" any more than absolutely necessary, which often means designing an API explicitly around batch requests. And even then your users will mess it up. See: explicit iteration over Numpy/Pandas data in a Python loop, iterative `putpixel` with PIL, any number of bad ways to use OpenGL bindings....
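A toy pure-Python illustration of that boundary cost: one batch call into C-implemented code versus millions of tiny calls from a Python loop. `bytes.translate` does the whole pass in C, while the per-byte loop pays interpreter overhead on every element (sizes here are arbitrary demo choices):

```python
import time

data = bytes(range(256)) * 20_000           # ~5 MiB of input
table = bytes((b + 1) % 256 for b in range(256))  # map each byte b -> b+1 mod 256

t0 = time.perf_counter()
batch = data.translate(table)               # one call; the loop runs in C
t_batch = time.perf_counter() - t0

t0 = time.perf_counter()
looped = bytes(table[b] for b in data)      # millions of boundary crossings
t_loop = time.perf_counter() - t0

print(batch == looped)   # True: identical result
print(t_loop > t_batch)  # True: the batch call wins by a wide margin
```

Same answer either way; only the number of Python-to-C transitions differs, which is exactly why batch-oriented APIs get designed, and exactly what users defeat when they iterate element-by-element.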
> explicit iteration over Numpy/Pandas data in a Python loop
Yeah, I get it. I see the same thing pretty often... The loop itself is slow in Python so you have APIs that do batch processing all in C. Eventually I think to myself, "All this glue code is really slowing down my C." haha
What about using PyPy? You'll probably see a significant improvement in these benchmarks. You should also give it a shot in Node, which I expect to be about on par with PyPy but without the GIL.
If anyone wants to be surprised by optimization, a great way to do it is to look at all the cases where, even though Python is slower than C, the Python interpreter written in Python is faster than the Python interpreter written in C.
Also, if we're going to suggest 'write it in another language' approaches, rewrite it in Golang. I detest writing in Golang but once you get the hang of things you can get to the point where your code only takes twice the time to write and 2% of the time (and memory) to run.
Totally, I'm a big fan of statically typed, compiled languages; especially when the codebase is large and/or there are a lot of contributors. I chose the Node example because I feel like it offers the same "ease-of-use" that draws people to Python.
> get to the point where your code only takes twice the time to write and 2% of the time (and memory) to run.
100%. Sometimes this matters, sometimes it doesn't, but if we're talking about "smart performance hacks" this is definitely a top contender.
I work on a Python project and I really wish that it supported multi-threading. If I rewrote it, I would prioritize that feature in the target language.
Totally agree. But the parent's point was formulated as though rules are generally bad, and I wanted to point out that some rules are necessary, and sometimes you may even need to add some (and obviously sometimes also remove some).
I also avoid apps. I tell everyone that I meet to avoid apps because the general population is going to drive us right into a future where there are no more web-based options and almost everything must be accessed through a separate app. People are simply not aware of what they're giving up by using apps that would work perfectly fine as websites.
See, you're already assuming war is coming and that it's a fact of life, as inevitable as winter. With that mindset there are no alternatives from the get-go.
If you walk around with a gun in your pocket drooling about self defence scenarios you might end up escalating a situation that could have been avoided altogether.
And you're assuming war is not coming? That there will never be another war? Don't get me wrong, I'm anti-war, but that sounds like an unwise assumption to make given history.
> If you walk around with a gun in your pocket drooling about self defence scenarios you might end up escalating a situation that could have been avoided altogether.
True, but this is another big assumption about the character of the person carrying the gun. The statistics on permit holders and how they use their weapons tell a different story than the one you present: they are much more likely to de-escalate, most defensive gun uses require no shots fired, and license-to-carry holders are much less likely than the general population to commit or be convicted of a crime.