I've standardized on having GitHub Actions create/pull a Docker image and run build/test inside it. So if something goes wrong I have a decent live debug environment that's very similar to what GitHub Actions is running. For what it's worth.
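A rough sketch of that pattern, for reference (image name and make targets are placeholders):

    name: ci
    on: [push, pull_request]
    jobs:
      build-and-test:
        runs-on: ubuntu-latest
        # Every step runs inside the same image you can pull locally to debug.
        container:
          image: ghcr.io/your-org/build-env:latest   # placeholder image
        steps:
          - uses: actions/checkout@v4
          - run: make build   # placeholder build command
          - run: make test    # placeholder test command

When a job fails, running 'docker run -it ghcr.io/your-org/build-env:latest' locally gets you essentially the same environment to poke at.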
I do the same with Nix as it works for macOS builds as well
It has the massive benefit of solving the lock-in problem. Your workflow is generally very short, so it is easy to move to an alternative CI if (for example) GitHub were to jack up their prices for self-hosted runners...
That said, when using it in this way, I personally love GitHub Actions.
Nix is so nice that you can put almost your entire workflow into a check or package. Like your code-coverage report step(s) become a package that you build (I'm not brave enough to do this)
I run my own Jenkins for personal stuff on top of NixOS. All jobs run inside a devenv shell, devenv handles whatever background services are required (e.g. a database), /nix/store is shared between workers, and there's an attic cache on the local network.
Oh, and there is also a nixosModule that is tested in a VM, which also smoke-tests the service.
The first build might take some time, but all future jobs run fast. The same can be done on GHA, but on GitHub-hosted runners you can't get a shared /nix/store.
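For what it's worth, the closest GHA equivalent I know of swaps the shared store for a binary cache; a rough sketch (cache name, token secret, and action versions are assumptions):

    jobs:
      checks:
        runs-on: ubuntu-latest
        steps:
          - uses: actions/checkout@v4
          - uses: cachix/install-nix-action@v27
          - uses: cachix/cachix-action@v15
            with:
              name: my-cache                               # placeholder Cachix cache
              authToken: ${{ secrets.CACHIX_AUTH_TOKEN }}
          - run: nix flake check
          - run: nix build .#default

The Cachix action sets the cache up as a substituter and, with a token, pushes newly built paths, so later runs pull instead of rebuilding; still never as fast as a warm local /nix/store.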
same here. though, i think bazel is better for DAGs. i wish i could use it for my personal project (in conjunction with, and bootstrapped by, nix), but that's a pretty serious tooling investment that i feel is just going to be a rabbit hole.
package.lock is JSON only; Nix covers the entire system, similar to a Dockerfile.
Nix specifies dependencies declaratively, and more precisely than Docker does by default, so the resulting environment is reproducibly the same. It caches really well and doubles as a package manager.
Despite the initial learning curve, I now personally prefer Nix's declarative style to a Dockerfile
I was doing something similar when moving from Earthly, but I have since moved to Nix to manage the environment. It is a much better developer experience, and faster! I would check out an environment manager like Nix/Mise, etc., so you can have the same tools locally and on CI.
I tend to have most of my workflows set up as scripts that can run locally in a _scripts directory. I've also started to lean on Deno if I need anything more complex than I'm comfortable with in bash (even bash on Windows) or PowerShell, since it executes .ts directly and can refer to modules/repos without a separate install step.
This may also leverage Docker (Compose) to build/run different services depending on the stage of the action. Sometimes that means creating "builder" containers with mount points for the source and output directories, to build and emit the project for different OSes, etc. Docker + QEMU allows for some nice cross-compile options.
The less I rely on the GitHub Actions environment, the happier I am... the main points of use are checkout, the Deno runtime, release-please, and uploading assets to a release (roughly the sketch below).
It sucks that the process is less connected and slow, but ensuring that as much as reasonably possible can run locally goes a very long way.
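Something like this sketch (script path, artifact paths, and action versions are assumptions, not my exact setup):

    name: release
    on:
      push:
        branches: [main]
    permissions:
      contents: write
      pull-requests: write
    jobs:
      release:
        runs-on: ubuntu-latest
        steps:
          - uses: googleapis/release-please-action@v4
            id: release
            with:
              release-type: simple
          - uses: actions/checkout@v4
          - uses: denoland/setup-deno@v2
            with:
              deno-version: v2.x
          # The real work lives in a script that also runs locally.
          - if: ${{ steps.release.outputs.release_created }}
            run: deno run -A _scripts/build.ts   # hypothetical local script
          - if: ${{ steps.release.outputs.release_created }}
            run: gh release upload ${{ steps.release.outputs.tag_name }} dist/*   # hypothetical artifacts
            env:
              GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}

Everything GitHub-specific is those few lines; the build itself is just deno run on the same script used locally.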
Yeah, images seem to work very well as an abstraction layer for most CI/CD users. It's kind of unfortunate that they don't (can't) fully generalize to Windows and macOS runners as well, though, since in practice that's where a lot of people get snagged into doing things in GitHub Actions rather than using GitHub Actions as an execution layer.
Masterclass in turning a goodbye email into a "hire me after my next gig ends" pitch. I'm not being sarcastic; this is a great example of highlighting the value they added.
I wonder how hard it is to remove that SynthID watermark...
Looks like: "When tested on images marked with Google’s SynthID, the technique used in the example images above, Kassis says that UnMarker successfully removed 79 percent of watermarks." From https://spectrum.ieee.org/ai-watermark-remover
Berkeley National Lab did a great study on this recently [0]. Short answer for what's raised prices over the last 5 years (slide 22 in the linked doc): supply chain disruptions increasing hardware prices, wildfires, and renewable policies (ahem, net metering) that over-reimburse asset owners.
I'd love to be able to point at something that implicates data centers, but first I'd need to see the data. So far, no evidence. Hint: it would show up in bulk system prices not consumer rates, which are dominated by wires costs.
I can live with the different visual style, but iOS 26 has cost me about 30% of my battery life, even running all day in Low Power Mode on an iPhone 14. It's horrendous. Hard to even get through one day on a charge now.
Yeah, I've never understood this for lithium-ion systems. Maybe some wire the cells in parallel or series differently to get different total max power outputs? But I don't expect that would affect cost either way.
With flow batteries there are definitely differences, since the power and energy components of the system can each be scaled independently of each other. I.e., if you need more total energy, just expand the amount of liquid electrolyte storage you have.
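To put rough, purely illustrative numbers on the lithium-ion point (not any particular product): sixteen 3.2 V, 100 Ah cells store the same energy however they're wired; the topology only changes the voltage/current split:

    \begin{align*}
    E_{\text{pack}} &= 16 \times 3.2\,\mathrm{V} \times 100\,\mathrm{Ah} = 5.12\,\mathrm{kWh} \\
    \text{16s1p:}\quad & 51.2\,\mathrm{V} \text{ at } 100\,\mathrm{Ah} \\
    \text{16p1s:}\quad & 3.2\,\mathrm{V} \text{ at } 1600\,\mathrm{Ah}
    \end{align*}

Either way it's 5.12 kWh; the wiring changes what the inverter/charger sees, not the stored energy.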
Sigh. This is a point in favor of not allowing free access to ChatGPT at all, given that people are getting mad at GPT-4o-mini, which is complete garbage for anything remotely complex... and garbage for most other things, too.
Just give 5 free queries of 4o/o3 or whatever and call it good.
Or a non-normie. Even while logged in, I had no idea what ChatGPT model it was using, since it doesn't label it. All the label says is "great for everyday tasks".
And as a non-normie, I obviously didn't take its analysis seriously, and compared it to Grok and Gemini 2.5. The latter was the best.
This is awesome, and as a bonus I learned about a mature reactive notebook for Python. Great stuff.
The data sharing is awesome. I previously used Google Colab to share runnable code with non-dev coworkers, but their file support requires some kludges to get it working decently.
I know I should just RTFM, but are you all working on tools to embed/cross-compile/emulate non-Python binaries in here? I know this is not a good approach, but as a researcher I would love to shut down my server infrastructure and just use the 3-4 crusty old binaries I rely on directly in the browser.
Now that there are great GPU-accelerated remote desktop options, I mostly just remote into more powerful machines. Even from a country away, the on-screen performance is almost like sitting at the machine, and as a bonus I don't hear every fan on my laptop going crazy. I've been a happy Parsec.app user for a while, but there are many other options (e.g. RustDesk has this).