Tangential since it's not PG related, but I'm more and more moving away from cron and I prefer using systemd timers (I'm on RHEL at work). I just find the interface for listing and managing timers better, and I can handle everything like a systemd service anyway.
Maybe you could make a target unit file like “jobs.target” and in your timer unit files do “WantedBy=jobs.target”. Then you could do “systemctl start/stop jobs.target”
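A rough sketch of that layout, with made-up names ("nightly-report", the /tmp path) just for illustration. One addition to the idea above: `PartOf=jobs.target` in the timer's `[Unit]` section, since `WantedBy=` alone only propagates *start*; `PartOf=` is what makes `systemctl stop jobs.target` stop the timers too.

```bash
# Illustrative unit files; real ones would live in /etc/systemd/system/.
mkdir -p /tmp/demo-units

# The grouping target:
cat > /tmp/demo-units/jobs.target <<'EOF'
[Unit]
Description=All scheduled job timers
EOF

# A timer that joins the group. WantedBy=jobs.target pulls it in on
# "systemctl start jobs.target"; PartOf=jobs.target makes
# "systemctl stop jobs.target" stop it as well.
cat > /tmp/demo-units/nightly-report.timer <<'EOF'
[Unit]
Description=Nightly report job
PartOf=jobs.target

[Timer]
OnCalendar=daily
Persistent=true

[Install]
WantedBy=jobs.target
EOF
```

After installing the files, `systemctl daemon-reload` plus `systemctl enable nightly-report.timer`, and then `systemctl start/stop jobs.target` toggles the whole group.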
First, list and save the currently active timers:
```bash
# UNIT (the .timer) is the second-to-last column; the last column (ACTIVATES)
# is the service it triggers, so $NF would save the wrong unit names.
systemctl list-timers --state=active --no-legend | awk '{print $(NF-1)}' > /tmp/active_timers.txt
```
Stop all active timers:
```bash
sudo systemctl stop $(cat /tmp/active_timers.txt)
```
Later, restart the previously active timers:
```bash
sudo systemctl start $(cat /tmp/active_timers.txt)
```
> As a rule of thumb, if you're processing less than 1000 jobs per day or your jobs are mostly lightweight operations (like sending emails or updating records), you can stick with this solution.
This seems... excessively low? Chancy is on the heavier side and happily does many millions of jobs per day. Postgres has no issue with such low throughput, even on resource constrained systems (think a $5 vps). Maybe they meant 1000 per second?
I recently used pg-boss to set up jobs that refresh auth tokens in the background. Very easy to use; would recommend taking a look. The docs are a bit minimal, but there's not that much to it either. (https://timgit.github.io/pg-boss/#/)
You don't need Wasp for any of this, and it's certainly not worth learning their custom DSL for it. Two of their selling points are moot: setting queue names (one line of code) and type safety (you should be using TS already). I've not seen the value in their abstractions and indirection.
Or the aptly named pg_cron which is in RDS for example. TFA is just a marketing piece for Wasp, presumably to improve its SEO since 'postgres cron' more obviously gets you to pg_cron otherwise.
pg_cron is for PG-specific cron tasks. You use pg_cron to truncate a table, refresh views, compute aggregates, etc. Basically just running PG queries on a cron schedule.
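For example, that kind of in-database task looks like this (a sketch only: the job name, table, and schedule are made up, and it assumes a database with the pg_cron extension installed):

```bash
# Illustrative only: schedule a nightly TRUNCATE inside a pg_cron-enabled database.
NIGHTLY_SQL="SELECT cron.schedule('nightly-truncate', '0 3 * * *', 'TRUNCATE staging_events');"
echo "$NIGHTLY_SQL"   # apply with: psql "$DATABASE_URL" -c "$NIGHTLY_SQL"
```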
pg_cron itself won't run an external script for you; you can't, say, schedule a shell command with it.
You can use pg_cron to insert a job row into a jobs table that some consumer polls with a `select * from jobs where status = 'pending' limit 1;`. Then you're on the hook for the PG updates yourself: dispatching, job status, and so on. You could even call that implementation pg-boss, if the name weren't taken.
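A rough sketch of those two halves, with made-up table and column names (`jobs`, `status`): pg_cron enqueues rows entirely inside Postgres, and an external worker claims them. One tweak to the naive `select ... limit 1`: claiming via `FOR UPDATE SKIP LOCKED` lets several workers poll concurrently without grabbing the same row.

```bash
# Illustrative SQL; assumes a "jobs" table and a pg_cron-enabled database.
# Producer: pg_cron inserts a pending row every minute, inside Postgres.
ENQUEUE_SQL="SELECT cron.schedule('enqueue-report', '* * * * *',
  \$\$INSERT INTO jobs (task, status) VALUES ('report', 'pending')\$\$);"

# Consumer: an external worker claims one pending job; FOR UPDATE SKIP LOCKED
# keeps multiple workers from claiming the same row.
CLAIM_SQL="UPDATE jobs SET status = 'running'
WHERE id = (SELECT id FROM jobs
            WHERE status = 'pending'
            ORDER BY id LIMIT 1
            FOR UPDATE SKIP LOCKED)
RETURNING id, task;"

# Run from your worker loop (or, fittingly, a systemd timer):
# psql "$DATABASE_URL" -c "$CLAIM_SQL"
```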
The Postgres COPY FROM PROGRAM will run external scripts, as the postgres user. Not necessarily a good architecture, of course. I did one day manage to fix a broken sshd with it by passing it su commands (rate that experience 0 stars, would not recommend).
I have a node app that has one-off scheduled tasks. Between node-cron and real Linux cron, I went with real cron because node-cron just polls every second, which is extremely inefficient and I'm on a free tier.
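For comparison, the real-cron version is just an entry that fires the script at the scheduled moment, with nothing awake in between; the path, schedule, and script name here are all illustrative:

```bash
# Illustrative crontab line: cron itself wakes at 03:15, runs the script, exits.
CRON_LINE='15 3 * * * cd /path/to/app && /usr/bin/node run-task.js >> /var/log/run-task.log 2>&1'
echo "$CRON_LINE"   # install with: (crontab -l 2>/dev/null; echo "$CRON_LINE") | crontab -
```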
How does your library work in this regard? If my node server is down, will my scheduled tasks still execute? I notice you have a .start() method, what does that do? Is it polling periodically?
There are many ways to skin this cat. Personally, I invested all my knowledge and focus into systemd timers. No doubt you have your own ways that make sense for you.