Joky's comments | Hacker News

> I'm still waiting on a Python project that compiles to pure C

In case you haven't tried it yet, Pythran is an interesting one to play with: https://pythran.readthedocs.io

Also, not compiling to C but to native code still would be Mojo: https://www.modular.com/max/mojo


Does it really matter for performance? I see Python in these kinds of setups as an orchestrator of computing APIs/engines. For example, from Python you instruct the engine to compute the following list, etc. No heavy computing happens in Python itself, so performance is not much of an issue.


Marshaling is an issue, as is concurrency.

Simply copying a chunk of data between two libraries through Python is already painful. There is a so-called "buffer API" in Python, but it's very rare that Python users can actually take advantage of it. If anything in Python so much as looks at the data, that's not going to work, etc.
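For what it's worth, here is a minimal sketch of the buffer API in action via `memoryview` (stdlib only): a slice of the view aliases the original bytes instead of copying them.

```python
# Sketch: zero-copy access through Python's buffer protocol.
# A bytearray exposes a writable buffer; memoryview lets another
# consumer read or write it without copying the underlying bytes.
data = bytearray(b"abcdefgh")
view = memoryview(data)

half = view[4:]          # a view over the last 4 bytes, not a copy
half[0] = ord("X")       # writes through to the original buffer

print(bytes(data))       # b'abcdXfgh'
```

The catch the parent describes: the moment a library hands you a plain `bytes` or a Python-level object instead of something exporting the buffer protocol, this zero-copy path is gone.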

Similarly for concurrency. A lot of native libraries for Python are written with the expectation that nothing in Python really runs concurrently. Then you are left with two bad options: run in different threads (so that you don't have to copy data), where things will probably break because of races and unsynchronized data access; or run in different processes, and spend most of the time copying data between them. Your interface to something like MPI is, again, only at the native level, or you copy so much that the benefits of distributed computation might not outweigh the cost of copying.
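As a sketch of how the process-based option can avoid some of that copying: `multiprocessing.shared_memory` (Python 3.8+) lets two processes map the same pages instead of pickling data back and forth. Shown here within one process for brevity, attaching by name the way a second process would.

```python
# Sketch: sharing a buffer between processes without copying.
# A second process would attach with SharedMemory(name=...) exactly
# as "peer" does here.
from multiprocessing import shared_memory

shm = shared_memory.SharedMemory(create=True, size=8)
try:
    shm.buf[:4] = b"data"                              # writer side
    peer = shared_memory.SharedMemory(name=shm.name)   # reader side attaches
    print(bytes(peer.buf[:4]))                         # b'data', same pages
    peer.close()
finally:
    shm.close()
    shm.unlink()   # free the segment once everyone is done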


Python is a bad joke that went too far.


wow, someone thinks python is bad? what year is it? 2003?


Do you think in a decade or so, most popular Python dependencies will work well enough no-GIL for multithreading to be a bit less terrible?


I think we will get there in the end but it will be slow.

When I was doing performance work: Intel was our main platform, and memory consistency is stronger there. We would try to write platform-agnostic multithreading code (in our case, typically spin-locks), but without testing properly on the other platforms we would make mistakes and end up with race conditions, accesses to unsynchronized data, etc.

I think Python will be the same deal. With Python having been GIL'd through most of its life cycle, bits and pieces won't work properly multi-threaded until we fix them.


there’s already a PEP to address the GIL. python is going through its JVM optimization phase. it’s so popular and ubiquitous that improving the GIL and things like it is inevitable


There's not just a PEP, CPython already can be compiled in No-GIL mode. All we need is support from modules.


I believe it matters for startup time and memory usage. Once you've fully initialized the library and set it off, the entire operation happens without the Python interpreter's involvement, but that initial setup can still be important sometimes.


nuitka already does this


You can configure "squash&merge" to use the PR title and description for the commit message now, which makes it reviewable!


> You could make the argument that this well-understood process could be broken out into its own class/package/module and tested with its own public interface, but if there really is only one consumer then that's kind of a strange trade-off to make in many cases.

That's how I develop in general: a "component" does not exist because it has multiple clients, but because it is a conceptual piece of logic that makes sense to document and test in isolation. It lets you define what the public API of the component is and what isn't. This is how software scales and stays maintainable over time, IMO.
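A toy sketch of what I mean (the names here are hypothetical): the component exposes one documented entry point and keeps its helpers private, so it can be tested in isolation even with a single consumer.

```python
# Sketch: a small "component" with an explicit public API.
# Only `normalize` is public; helpers are free to change.

__all__ = ["normalize"]          # the public surface of the component

def _strip_comments(line: str) -> str:
    # Internal helper: drop everything after a '#'.
    return line.split("#", 1)[0]

def normalize(text: str) -> list[str]:
    """Public entry point: documented, tested, stable."""
    lines = (_strip_comments(l).strip() for l in text.splitlines())
    return [l for l in lines if l]

print(normalize("a  # comment\n\n b"))  # ['a', 'b']
```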


I don't disagree, I'm just saying it's situational. The trade-off doesn't always make sense. But I typically develop in much the same way as you do.


There is something to be said about individual productivity (whatever that means in a very innovative/creative environment) vs. team/company output; just today I saw this in my feed: https://flocrivello.com/changing-my-mind-on-remote-about-bei... And that's coming from someone who actually tried to build a business out of remote work (TeamFlow was the product).

I can be much more productive at home when it is about my individual contribution (me coding to deliver something unambiguous), but xxx individuals doing this does not necessarily align into a great product: that does not scale.


A while back I was slammed on here for essentially stating that the members of my remote team individually claim/feel more productive, but that I felt the net team productivity decreased as we transitioned from in-office to remote.


I think this observation is spot on, and you don't have to look far to understand why individual productivity != systems productivity. 100% individual utilization in a system is a negative; manufacturing companies learned this years ago, and it's where the principles of the Toyota Production System/kanban/lean manufacturing/etc. arose. The only resource that should be 100% utilized in a process is the bottleneck, and anytime anyone is interrupted to help the bottleneck, that is a net win for the company's output, even if individually it feels annoying.

It's really unfortunate that it seems so many people are in the "you can pry remote work from my cold dead hands" camp that it's hard to even have a conversation that doesn't devolve into "I feel more productive remote, so you shouldn't care where I work".


Definitely seems like it’s a strange hill that people are willing to die on these days.

I’m pragmatic. I’d rather have a job than look for one. So if my company decides to RTO, I am going to RTO.


For some, it's a pay cut, in money and free time. That's a valid reason they'd rather change jobs.


I have zero problem with that. If your current gig is not a good fit for any reason, change it. I guess I mean it’s strange to me how some folks feel entitled enough to think that their individual preference should be important enough to demand their employers accommodate them, i.e. that they shouldn’t have to find another job.

It’s simply not the same thing as safe working conditions, ADA accommodations, etc… it’s a preference…and one mostly born out of the pandemic.


In my experience, the quality of the office building makes a difference. We moved to a new building a few months back with more space, fewer people crammed into one room, and generally quieter conditions. People who used to work from home now prefer to be in the office as much as possible.


They claim they are writing the actual kernel code (as in the implementation of a matmul) with it, and it was presented as a "system programming language": this goes far beyond "high-level tasks" it seems.


You can write kernels in a language with a GC. You just write kernels that don't allocate.


Alternatively have the GC as an OS service.


It depends what you mean by "new subsystem" and "transitioning to": what seems like a given is that the "one size fits all" notion of LLVM IR is behind us and the need for multi-level IRs is embraced. LLVM IR is evolving to accommodate this better, within reason (that is, it stays organized around a pretty well-defined core instruction set and type system), and MLIR is just the fully extensible framework beyond that. It remains to be seen whether anyone would have the appetite to port LLVM IR (and the LLVM framework) to be a dialect; I think there are challenges there.


TensorFlow is also a runtime, yet we model its dataflow graph (the input to the runtime) as a dialect, same for ONNX. TensorRT isn't that different actually.


All of Google TPU is powered by the XLA compiler, so any MLPerf benchmark result from Google comes powered by XLA. Anything JAX is also built on top of XLA, so you can take JAX performance as a point of comparison as well if you'd like.


The movement of paddling has a natural rotation of the shaft when you raise the fixed hand for a stroke on the other side; it's quite straightforward to figure out by sitting and mimicking the movement.

During this movement, if the blades aren't feathered at all, you have to compensate with some bending of the wrist. The amount of shaft rotation induced depends on how much you raise the hand/elbow, and so is fairly dependent on your style of stroke. This is the main way I think feathering should be approached: how vertically do you intend to paddle? From there the angle should follow, optimized for the least amount of wrist twisting.

In general, paddling very vertically will come with more angle between the blades. I practice slalom and used to have a 70-80 degree offset, but I tend to paddle less vertically now (aging? lack of training?) and I'm comfortably down to 60 degrees.


I have never believed this argument. In particular, I, and my bones, joints, etc., are approximately symmetric under left-right reflection. This means that I am not chiral, and I would expect an optimized paddle that I hold, symmetrically, in two hands, to be similarly non-chiral. (It’s a paddle, not an oar!)

More concretely, if some biomechanical factor made it a good idea to rotate the top of the left paddle forward θ degrees at the start of the left-side stroke, I would want to rotate the top of the right paddle forward θ degrees at the start of the right-side stroke. But with a feathered paddle, one of those thetas is positive and one is negative.

This is wrong in a way that used to bother me deeply whenever I kayaked. I would unfeather any adjustable paddle I used, to restore proper symmetry.

I suppose what’s really happening is that people feather to reduce wind resistance or because that’s how they learned, and once they’ve learned it, it feels natural.

I would be willing to believe that feathering either direction is somehow biomechanically superior to not feathering at all and that the symmetry is broken arbitrarily, but I would want to see evidence :)


The asymmetry is compensated thusly: when you feather your paddle, you should grip it only with your dominant hand. It should rotate freely in your non-dominant hand.

Your wrist movements can then be symmetric -- your non-dominant wrist is free to move as you please.

And because you are free to set the feather as desired, you can set it in such a way as to minimize the rotation required of your dominant hand (and, per the above, your non-dominant hand) to transition from one half of the stroke to the other.


Can you elaborate on the signed integer overflow and the implicit initialization differences you're referring to?



But these are options; it's not a big deal to me that a compiler offers special options for special use cases. It's not clear to me whether you are saying that the *default* for Clang and GCC differs: aren't they both using `-fno-wrapv` by default?


These options provide a different interpretation of UB, such as signed integer overflow and implicit initialization. It's in reference to Ralf's blog post:

>I honestly think trying to write a highly optimizing compiler based on a different interpretation of UB would be a worthwhile experiment. We sorely lack data on how big the performance gain of exploiting UB actually is. However, I strongly doubt that the result would even come close to the most widely used compilers today—and programmers that can accept such a big performance hit would probably not use C to begin with. Certainly, any proposal for requiring compilers to curtail their exploitation of UB must come with evidence that this would even be possible while keeping C a viable language for performance-sensitive code.

He doesn't seem to know that such interpretations are already implemented by widely used C compilers.

