Pickling and unpickling the object is a neat trick to update objects to point to the new methods, but it's even more straightforward to just patch `obj.__class__ = reloaded_module.NewClass`. This is what IPython's autoreload extension used to do (and still does in some circumstances, along with other tricks to patch up old references), though it has since gained some improvements over this approach: https://github.com/ipython/ipython/pull/14500
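A minimal sketch of that approach (the module and class names here are hypothetical, just for illustration):

    import importlib
    import mymodule  # hypothetical module you're editing

    obj = mymodule.Thing()

    # ... edit mymodule.py, then reload it:
    reloaded = importlib.reload(mymodule)

    # Repoint the existing instance at the reloaded class; it now
    # resolves methods against the new definitions, while its
    # __dict__ (instance state) is left untouched.
    obj.__class__ = reloaded.Thing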
Oh nice, thank you for that tip. I was doing the opposite, `new_obj = mod.Class(...)` and then assigning the dicts from the old object (which was when I realized the pickle save/load was easier).
I developed a tool (https://github.com/smacke/ffsubsync) which can sync subtitles against each other (or even against an audio track), and this can be used in conjunction with other tools such as https://pypi.org/project/srt/ to combine multiple subtitle streams into a single stream. I've used this strategy to good effect to get both English and Chinese subtitles up at once.
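For the combining step, a rough sketch using the srt library (the file names are placeholders, and this assumes the two streams are already synced, e.g. with ffsubsync):

    import srt

    with open("movie.en.srt") as f:
        english = list(srt.parse(f.read()))
    with open("movie.zh.srt") as f:
        chinese = list(srt.parse(f.read()))

    # srt.compose() sorts and reindexes the combined events by
    # default, so both streams display at their original times.
    with open("movie.both.srt", "w") as f:
        f.write(srt.compose(english + chinese))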
Interesting, the proof seems to have originated as a submission to The Art of Problem Solving. I wish I had had these sites when I was in high school. I would have pored over them.
Close. I sent it to rrusczyk around the time it got accepted by the Monthly. I wanted him to see it because AoPS was playing a huge role in my mathematical education at the time, and was happy when he wanted to post it on the website.
nbgather used static slicing to get all the code necessary to reconstruct some cell. I actually worked with Andrew Head (original nbgather author) and Shreya Shankar to implement something similar in ipyflow (but with dynamic slicing and a not-as-nice interface): https://github.com/ipyflow/ipyflow?tab=readme-ov-file#state-...
I have no doubt something like this will make its way into marimo's roadmap at some point :)
Function-level caching is the best match for how I'd use it. Often the reason for bothering to cache is that the underlying process is slow, so some kind of future-with-progress wrapper could also be interesting. For example, it could wrap a file transfer so the cell shows progress and then, when the result is ready, unwraps the value for use in other cells. Another example would be training in PyTorch: yield progress or stats during the run, and then the final run data when complete.
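A rough sketch of the shape I mean, as a plain Python generator (nothing marimo-specific; the names are made up):

    import os

    def transfer_with_progress(src, dst, chunk_size=1 << 20):
        # Yield progress fractions while copying, then return the
        # destination path as the generator's final value -- a
        # future-with-progress wrapper could render the yields and
        # hand the returned value to other cells when complete.
        total = os.path.getsize(src)
        copied = 0
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            while chunk := fin.read(chunk_size):
                fout.write(chunk)
                copied += len(chunk)
                yield copied / total  # progress in [0, 1]
        return dst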
I'm a big fan of Marimo (and of Akshay and Myles in particular); it's great to finally see a viable competitor to Jupyter as it can only mean good things for the ecosystem of scientific tooling as a whole.
Fortran can be faster than C because it has a more restrictive memory model (the compiler can make more optimizations when it knows nothing else is aliasing some memory).
Fortran can’t be faster than C; you can write inline assembly kernels in C, you can add keywords to promise no aliasing, etc etc.
Fortran just has better syntax and defaults than C for this stuff. A devoted, expert C tuner with unlimited time on their hands can do anything. A grad student with like a year of experience can write a Fortran code that is almost as good, and finish their thesis. Or, a numerics expert can write a numerical code for their experiments and be reasonably sure that they are operating within a good approximation of the actual capabilities of their machine (if you are an expert on numerical computing and C+assembly, you can write a library like BLIS or gotoBLAS and become famous, but you have to be better than everybody else in two pretty hard fields).
IMO this is important to point out because somebody can bring a microbenchmark toy problem to show C beating Fortran easily. As long as they spend way more effort on the problem than it deserves.
I don't think anything here is in conflict with the statement that Fortran can be faster than C in some cases, but yes, you're right that technically it should be qualified as "naive Fortran can be faster than naive C".
You can inline assembly in a lot of languages. However, I'd argue that doing so means you're no longer writing code in that host language.
To put it another way, your comment is a little like saying Bash is just as fast as C because you can write shell scripts that inline assembled executables.
In principle yes, in practice, no. This is one of those places where our developer intuition fails us, and I fully include mine. It feels like it ought to be feasible, even now, but there are just so many ways to screw up that optimization without realizing it, with responsibility scattered between compilation units (which is to say it isn't even necessarily clearly one thing's fault; these problems can arise in interactions), and once it creeps in just a little bit it tends to spread so quickly that in practice it is not feasible to exclude the problems. You can't help but write some tiny crack for it to sneak into.
This is part of why I'm in favor of things like Rust moving borrow checking into the language. In principle you can statically analyze all of that in C++ with a good static analyzer; in practice, it's a constant process of sticking fingers in broken dikes[1] and fighting the ocean. Sometimes you just need a stronger wall, not more fingers.
The main issue is that C doesn't have a way to pass arrays of any size to functions while preserving their type (the size is part of an array's type in C); by convention, one generally passes a pointer to its first element and a separate length parameter. A compiler cannot know that any two pointers do not point to the same location. Hence, it's harder to vectorize code like this, because each store to result[i] may affect arr1[i] or arr2[i]; in other words, the pointers might alias. In C89:
    /* They're all the same length. */
    void add(float *result, float *arr1, float *arr2, size_t len)
    {
        size_t i;

        /* The compiler must assume result may alias arr1 or arr2,
           so it cannot safely vectorize this loop. */
        for (i = 0; i < len; ++i)
            result[i] = arr1[i] + arr2[i];
    }
The solution is supposed to be the "restrict" keyword, which informs the compiler that other pointers do not alias this one. It was added in C99. You declare a pointer that doesn't alias like this:
    float *restrict float_ptr;
If a restrict pointer is aliased by another pointer, the behavior is undefined.
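Applied to the function above, a C99 sketch looks like this (the const qualifiers are an extra touch, not required by restrict):

    /* C99: restrict promises the compiler that the three arrays
       never overlap, so the loop can be safely vectorized. */
    void add(float *restrict result, const float *restrict arr1,
             const float *restrict arr2, size_t len)
    {
        size_t i;
        for (i = 0; i < len; ++i)
            result[i] = arr1[i] + arr2[i];
    }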
It's hard to judge the extent to which this helps. Apparently, when Rust annotated all its mutable references and Box<T> types with the LLVM equivalent of the "restrict" annotation, they exposed a lot of bugs in LLVM's implementation of it, because most C code doesn't use "restrict" pointers as extensively as Rust code uses mutable references and Boxes.
Theoretically with `restrict`, but that doesn't really help.
But the biggest advantage of Fortran is its support for multidimensional arrays, slices, and so on.
> Another giveaway is the ratio of stars to watchers / forks. I remember one project with thousands of stars but only 10 users "watching" it. They went on to raise a sizable seed round too.
Is the recursion limit an implementation detail of the compiler, though, or something statically built into the language? If the former, there's nothing to prevent the language itself from being Turing complete.
(Similar to how every Turing complete programming language today runs on something technically weaker than a Turing machine due to finite memory.)
Perhaps unrelated, but I've been meaning to ask and couldn't find an answer: what's the deal with python-lsp-server and virtualenvs? It seems like it has to be installed in the virtualenv to provide completions, etc. Is that right?