Hacker News

That exists on every dev’s machine.


In theory, the code is there, but putting a project back together in a hurry after trouble at GitHub is non-trivial, especially if the build process uses proprietary stuff such as GitHub Actions. The issues and discussions are all GitHub-only, too.


It’s crazy that there is no common CI spec. Every single platform is completely different.

Last time I tried Act it had some pretty severe limitations. Perhaps I should take it for another spin.


The common CI spec is sh & make, but everybody hates those I guess.
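For what it's worth, the "sh as the CI spec" idea can be sketched in a few lines: the pipeline is an ordinary script, each stage is a command, and the script stops at the first failure. The stage names and echo bodies here are stand-ins, not a real pipeline.

```shell
#!/bin/sh
# Minimal sketch: a CI pipeline as a plain POSIX sh script.
# `set -eu` makes the script stop on the first failing stage.
set -eu

stage() {            # announce a stage, then run its command
  name=$1; shift
  echo "==> $name"
  "$@"
}

stage lint  echo "linting sources..."       # stand-in for `make lint` etc.
stage test  echo "running tests..."
stage build echo "building artifacts..."
echo "pipeline ok"
```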

Git hook management's really awkward. Even with tools to synchronize them (... all of them? You may want some that are just for your own use) it's a pain. "I want these hooks to run, in order, but only when a merge commit happens on a machine with such-and-such designation, and I want it to run the task on a different machine, but we need to make sure that runner's a Windows box because..." that just sucks to self-manage, and yeah, there's no standard for expressing that, you're bound to incompatible solutions.
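To illustrate the kind of conditional logic being described: a hook like the following might live in .git/hooks/post-merge. MACHINE_ROLE is a hypothetical convention for tagging machines; nothing like it is standardized, which is exactly the complaint.

```shell
#!/bin/sh
# Hypothetical post-merge hook: only run merge-time tasks on machines
# tagged with a (made-up) MACHINE_ROLE environment variable.

should_run() {
  # run the tasks only on machines tagged as build runners
  [ "${1:-}" = "build-runner" ]
}

if should_run "${MACHINE_ROLE:-}"; then
  echo "post-merge: running build tasks on this runner"
else
  echo "post-merge: nothing to do here"
fi
```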

Secret management's a hellscape and everyone's always glad when someone else solves the problem for you. That alone is like 50% of the value of Github Actions.


> The common CI spec is sh & make, but everybody hates those I guess.

These aren't very good at some of the things you actually want to use a CI for, other than the literal "build" step (which probably is using them anyway, or the per-language equivalent).

Coordinating _and visualising_ multiple builds where some parts run in parallel (e.g. multiplatform), or parts that might be different or skipped depending on how the flow was triggered. Managing caches and artifacts between stages and between runs. Managing running in different contexts, including generic shell and across multiple machine types. A consistent way to manage and inject secrets. Reusing common parts across different CI workflows.

I suppose you could have a parent job taking up a full slot to run a Makefile that launches and manages jobs on other nodes, but I imagine you'd have to step into some toolset that abstracts parts of it, and hope that toolset is developed and shared in the open, or you end up with a convoluted in-house nightmare of shell scripts.
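A toy version of that parent-job idea, in plain sh: fan two platform builds out in parallel and fail if either child fails. The build function here is a stand-in for dispatching real work to another node.

```shell
#!/bin/sh
# Sketch of a "parent job" coordinating parallel builds with plain sh.
# build() is a placeholder; a real version would dispatch to a runner.

build() { echo "building for $1"; }

build linux  & pid_linux=$!
build darwin & pid_darwin=$!

status=0
wait "$pid_linux"  || status=1    # collect each child's exit status
wait "$pid_darwin" || status=1

if [ "$status" -eq 0 ]; then
  echo "all builds ok"
else
  echo "a build failed" >&2
fi
```

Even this toy already shows the gap: there is no visualisation, no caching, no retry, and no secret handling, which is what the hosted CI systems layer on top.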

"Something DAG and yaml shaped" is about the closest convergence we have gotten, and the closest that it looks like we'll get.


> sh & make

Are we talking GNU Make, nmake, BSD Make? (Isn't this why autotools exist in the first place -- to make something allegedly cross-platform for different flavors of Make)?

I get bitten repeatedly by sh being a symlink to different shells, even though I've been using it for many years. The most recent piece of insanity: in one shell, "cd foo bar" results in an error, while in some other version it simply changes to the foo directory, silently ignoring the extra argument.

Also, error reporting. It's way too broken to consider sh a reliable tool. I wish things could be done with very simple tools and all this complexity around them were unnecessary. Unfortunately, here, this complexity, while not unavoidable, is indeed warranted, due to the abysmal quality of the simple tools.
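Two of the error-reporting problems being alluded to can be shown in a few lines: by default sh ignores failures entirely, and even with "set -e" a failure on the left side of a pipe is invisible, because POSIX sh has no pipefail.

```shell
#!/bin/sh
# Two classic sh error-reporting pitfalls.

# 1. By default, a failing command does not stop the script:
false
echo "still running after a failure"

# 2. Even with `set -e`, a failure on the left side of a pipe is
# swallowed: the pipeline's status is that of its last command.
set -e
false | cat
echo "the pipeline failure went unnoticed"
```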


I understand the sentiment, but I think it's phrased incorrectly.

What's needed is a formally defined CI spec (or several). A merely common spec is bad for the same reason any monopoly is bad. A formally defined spec solves the same problem a common one does, namely protecting you from random failures of a sole provider, while also making it, at least theoretically, easier to have multiple providers.

This is similar to how C is different from Make: C is a standard that anyone can implement, while Make is a weird language defined by its implementation. Some have tried to reimplement it, but by doing so only increased the insanity of compatibility issues.

Of course, there have been multiple attempts to create common / standard definitions for general-purpose automation tools. Make is one of those, Ant is another, and there are plenty more. I'm not sure why none has really stuck around to the point of becoming a universally accepted tool / standard.

Some reasons I can think of: languages build special-purpose automation tools around their implementation, and these tools are often a selling point of the language, an attempt to sell it to developers, so the authors are disincentivized from making them general-purpose. There also isn't a consensus on what such tools should do and what kind of guarantees they need to offer. Some such guarantees come at a price, sometimes a very steep price, and so would be hard to sell. E.g. something like tup offers highly reliable, reproducible, and isolated builds, but at a cost in complexity and resources, whereas something like Make offers no guarantees but is easier to get started with and to be productive in.

Maybe it could be possible to extract just the component of CI that deals with the "skeleton" of automation, defining abstract tasks, dependencies between them etc... but then immediately there'd be plenty of systems that'd try to complement this automation with their own (proprietary) extensions which would circle us back to the initial problem...


The business case for CI vendors is at odds with a common spec.

Otherwise any customer who gets a bill for more than a few dollars will replace them with a VM and copy of Jenkins.


My approach is to write bash scripts that do the heavy lifting, and to use pipes and output redirection to coordinate the individual "steps". For example, one script runs the test suite, and another processes the code coverage reports.

In CI, it now just needs to run the scripts in the correct order.
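That approach can be sketched as follows; the two scripts here are trivial stand-ins written on the fly, in place of a real test runner and coverage tool.

```shell
#!/bin/sh
# Sketch: each "step" is its own script, coordinated with redirection,
# so CI only has to run them in order. Both scripts are placeholders.
set -eu

cat > run-tests.sh <<'EOF'
#!/bin/sh
echo "tests: 12 passed, 0 failed"
EOF

cat > coverage.sh <<'EOF'
#!/bin/sh
echo "coverage summary for: $(cat)"
EOF

chmod +x run-tests.sh coverage.sh

./run-tests.sh > test.log          # step 1: run the test suite
./coverage.sh  < test.log          # step 2: process the results
```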


The source code of the actions is in your repo. This is more the general problem of relying on proprietary software: if the company that makes it dies, you die too!



