Hacker News | past | comments | ask | show | jobs | submit | mbauman's comments

Forget compilers, SSA is an immensely valuable readability improvement for humans, too.


Why have

    while (c < 10) { c *= 3; }
when you could have

  %2 = alloca i32, align 4
  %3 = alloca i32, align 4
  store i32 %0, ptr %3, align 4
  br label %4, !dbg !18

  4:
  %5 = load i32, ptr %3, align 4, !dbg !19
  %6 = icmp slt i32 %5, 10, !dbg !20
  br i1 %6, label %7, label %10, !dbg !18

  7:
  %8 = load i32, ptr %3, align 4, !dbg !21
  %9 = mul nsw i32 %8, 3, !dbg !21
  store i32 %9, ptr %3, align 4, !dbg !21
  br label %4, !dbg !18


The second code snippet doesn't use SSA. It just translates the first loop into IR and mangles the variable names. Here is an SSA version of that in the Scheme language.

  (let loop ((c c)) (if (< c 10) (loop (* c 3)) c))
Notice that this is stateless and also returns the final value of “c” from the loop. People who use this style tend to find it much easier to reason about for more complicated looping structures.


I intended it to be mostly a joke. Many people equate LLVM IR with SSA.

I might even argue that easy to read and easy to reason about are opposites.

For the most part, languages like Python and Ruby are very easy to read, but it can be difficult to understand precisely what actually happens at runtime.

Things like LLVM IR are much more explicit, making it easier to reason about but extremely difficult to read.

Maybe somewhere in between are "pure" side-effect-free functional programs.


Try mem2reg on that to get rid of the loads and stores.
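
For the curious, here's roughly the shape mem2reg produces on that snippet: the allocas, loads, and stores go away, and the loop variable becomes a phi node. (A sketch with illustrative label names; the actual pass emits numbered values.)

  br label %loop

  loop:
  %c = phi i32 [ %0, %entry ], [ %c.next, %body ]
  %cmp = icmp slt i32 %c, 10
  br i1 %cmp, label %body, label %exit

  body:
  %c.next = mul nsw i32 %c, 3
  br label %loop

That's the SSA form the joke was winking at: one definition per value, with the phi selecting between the incoming and updated %c.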


That's quite the interesting perspective, but I'd say it gives "them" more organization and unified focus than is real. It's an open source language and ecosystem. Folks use it — and gripe about it and contribute to it and improve it — if they like it and find it valuable.

All I can say is that many of "us" live in that tension between high level and low level every day. It's actually going to become more pronounced with `--trim` and the efforts on static compilation in the near term. The fact that Julia can span both is why I'm a part of it.


Willison's razor: Never dismiss behaviors as either malice or stupidity when there's a much more interesting option that can be explored.


I side with Occam's razor here, and with another commenter in this thread. People are constructing entire conspiracy theories to explain fake replies when asked for the system prompt, lying in GitHub repos, etc.


OP (and I) can definitely distinguish the two. The trouble is that I can no longer find the humans who are actually posting valuable information.


A highly relevant must-watch here is Bret Victor's Computational Public Space:

https://www.youtube.com/watch?v=PixPSNRDNMU


Yes, this really seems like an argument between two contrived straw people at the absolute extremes.


The even bigger challenge is determining _what_ you need to observe in the first place.

As a simplistic analogy, evolutionary designs of FPGA boards can end up relying upon idiosyncratic properties of the board(s) and create circuits that "shouldn't work" based on an idealized electrical circuit model. And they may not be transferable to other boards. In other words, to "understand" some evolutionary FPGA circuits, you need to "observe" more than just the gate configurations and idealized schematic.

Brains are not FPGAs or even circuits, but I think the analogy holds. They're not _just_ idealized representations of spiking neural networks.


I remember reading an article about this years ago - a non-transferable FPGA configuration that included logic that should have been unreachable, yet the circuit didn't work without it. Very fascinating, but I've not been able to find it since.


I think you want the paper "An Evolved Circuit, Intrinsic in Silicon, Entwined with Physics", available here: https://cgi.cse.unsw.edu.au/~cs4601/refs/papers/es97thompson...


The key for me — as someone who has been around for a long time and is at JuliaHub — is that Julia excels most at problems that don't already have an efficient library implementation.

If your work is well-served by existing libraries, great! There's no need to compete against something that's already working well. But that's frequently not the case for modeling, simulation, differential equations, and SciML.


The ODE stuff in Julia is nice, but I think diffusers/JAX is a reasonable backbone to copy over whatever you need from there. I do think Julia is doing well in stats and has gotten some mindshare from R in that regard.

But I think a reasonably competent Python/JAX programmer can roll out whatever they need relatively easily (especially if you want to use the GPU). I do miss Tullio, though.


But how does one get the composability of multiple dispatch? A problem with the Python world IME is the need to reinvent everything in Jax ecosystem, in the Pytorch ecosystem, etc -- because of fundamental language limitations. This is tedious, and feels like a lot of wasted effort.
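
To make the limitation concrete: Python's standard library offers only single dispatch, which specializes on the first argument's type alone, so behavior that should depend on a combination of types falls back to the generic version or ad hoc isinstance chains. (A minimal sketch, nothing JAX-specific.)

```python
from functools import singledispatch

# Python's standard-library dispatch is *single*: the specialization
# is chosen from the first argument's type alone, so behavior that
# depends on a combination of types cannot be expressed directly.
@singledispatch
def combine(a, b):
    return f"generic({a!r}, {b!r})"

@combine.register
def _(a: int, b):
    return f"int-first({a}, {b!r})"

print(combine(1, "x"))   # specializes on the int first argument
print(combine("x", 1))   # int in second position: generic fallback
```

Julia's multiple dispatch selects on all argument types at once, which is what lets independently written libraries compose without each reimplementing the other's specializations.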

Another example: It's frustrating that Flax had to implement its own "lifted" transformations instead of being able to just use JAX transformations -- which makes it impossible to just slot a Flax model into a JAX library that integrates ODEs. Equinox might be better on this front, but that means all the models would need to be re-implemented in Equinox. The fragmentation and churn in the Python ecosystem is outrageous -- the only reason it doesn't collapse under its own weight is how much funding and manpower ML stakeholders are able to pour into the ecosystem.

Given how much the ecosystem depends on that sponsored effort, the popular frameworks will likely prioritize ML applications, and corollary use cases will be second class citizens in case of design tradeoffs. Eg: framework overheads matter less when one is trying to use large NN models -vs- when one is trying to use small models, or other parametric approaches.


So, I’ll admit that I’m not a fan of multiple dispatch a la Julia. I much prefer typeclasses and explicit union types. I also found Julia to be the worst of both worlds between garbage collection and manual memory management: it ostensibly has a garbage collector, but you find yourself preallocating memory or faffing around with StaticArrays, trying to work out why memory is still being allocated (often it comes down to some type-instability nonsense, because the type system built around multiple dispatch can’t correctly type the program). At this point I’d rather just use C++ or Rust than Julia; I’m getting annoyed just thinking about the nonsense I used to deal with.

Also, IIRC, it’s not terribly difficult to use Flax with Equinox. It’s just a matter of storing the weight dict and model function in an Equinox module; filter_jit will correctly recognize the weights as a dynamic variable and the Flax model as a static variable.


> But I think a reasonably competent Python/JAX programmer can roll out whatever they need relatively easily

You mean in terms of the ODE stuff Julia provides?


Diffusers is pretty well done (I think the author was basically rewriting some Julia libraries and adapting them to JAX). I can’t imagine it being too hard to adapt most SciML ODE solvers.

For simulations, JAX will choke on very “branchy” computations. But, honestly I’ve had very little success differentiating through those computations in the first place and they don’t run well on the GPU. Thus, I’m generally inclined to use wrappers around C++ (or ideally Rust) for those purposes (my use-case is usually some rigid-body dynamics style simulation).


> We know an optimally dense packing for 2D spheres

We do? I think this is the latest state of the research here — only 14 values of N are proven optimal, and it's certainly nowhere close to a general algorithm: http://hydra.nat.uni-magdeburg.de/packing/cci/

There's something to be said for a dozen lines of numpy that gets "close enough". That said, I do think there should be something better.
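
That "dozen lines of numpy" might look something like this hexagonal-lattice heuristic (a sketch, not any particular poster's code): it counts how many unit circles fit inside a disk of radius R, with no claim of optimality.

```python
import numpy as np

def hex_pack_count(R, r=1.0):
    """Count radius-r circles fitting in a radius-R disk on a hexagonal
    lattice -- a 'close enough' heuristic, not an optimal packing."""
    dx = 2 * r               # horizontal spacing between centers
    dy = np.sqrt(3) * r      # vertical spacing between rows
    count = 0
    for i, y in enumerate(np.arange(-R, R + dy, dy)):
        offset = r if i % 2 else 0.0          # stagger alternate rows
        xs = np.arange(-R + offset, R + dx, dx)
        # keep centers whose small circle lies entirely inside the disk
        count += int(np.sum(xs**2 + y**2 <= (R - r)**2))
    return count
```

Nudging the rows or rotating the lattice buys a few more circles for a given R, which is roughly where the hard research linked above begins.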


If they're 2D and packed on a plane, though, I think we do. That's reasonable if we're talking about "close enough." I wouldn't argue with the brevity of the numpy, though. Choosing how to place the disk over a plane of circles in an optimal way could be complicated.


$20/hr fulltime is ~40k/yr or ~$3300/mo. As just one benchmark: can you find housing in your area for $1100/mo?
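
The arithmetic, spelled out (assuming a 40-hour week, 52 weeks, and the common one-third-of-income rent guideline):

```python
hourly = 20
yearly = hourly * 40 * 52    # 41_600, i.e. roughly $40k/yr
monthly = yearly / 12        # about $3,466/mo
max_rent = monthly / 3       # one-third guideline: about $1,155/mo
```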

