this post was submitted on 27 Mar 2026
Yeah, I always advocate for automating as much as possible offline. Ideally, I'd like the CI/CD job to trigger just one command, the same one you'd run locally.
In practice, that doesn't always work out. Because the runners aren't insanely beefy, you need to split your tasks into multiple jobs so that they can be distributed across multiple runners.
And that means you need to trigger multiple partial commands and add extra logic to the CI/CD to download any previous artifacts and upload the results.
On the upside, it also means you can restart individual intermediate jobs without re-running everything.
But yeah, I do often wonder whether that's really worth the added complexity...
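A minimal sketch of what that job-splitting looks like in GitHub Actions, assuming hypothetical `build.sh`/`test.sh` scripts; the artifact upload/download steps are exactly the extra wiring that doesn't exist when you run the same scripts locally:

```yaml
# Hypothetical workflow: one build split into two jobs so they can
# land on separate runners, with artifacts passed between them.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./scripts/build.sh            # same command you'd run locally
      - uses: actions/upload-artifact@v4   # CI-only plumbing
        with:
          name: build-output
          path: dist/
  test:
    needs: build                           # more CI-only plumbing
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/download-artifact@v4
        with:
          name: build-output
          path: dist/
      - run: ./scripts/test.sh             # same command you'd run locally
```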
What I usually push for is that every CI task either sets up the environment or executes that one command™ for that task. For example, that command can be `uv run ruff check` or `cargo fmt --all -- --check` or whatever.

Where the CI-runs-one-script-only (or no-CI) approach falls apart for me is when you want to have a deployment pipeline. It's usually best not to have deployment secrets stored on any dev machine, so a good place to keep them is in your CI config (and all major platforms support secrets stored with an environment, variable groups, etc.). Of course, I'm referring here to work on a larger team, where permission to deploy needs to be transferable, but you don't really want to be rotating deployment secrets all the time either. This means you're running code in the pipeline that you can't run locally in order to deploy it.
It also doesn't work well when you build for multiple platforms. For example, I have Rust projects that build and test on Windows, macOS, and Linux, which is only possible by running them on multiple runners (each on a different OS and, in macOS's case, CPU architecture).
The compromise of one script per task usually works even in these situations, in my experience. You still get to use things like GitHub's matrix to run multiple runners in parallel. It just means you have different commands for different things now.
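A matrix sketch for the multi-platform Rust case above, assuming GitHub Actions; each matrix entry gets its own runner, but every one of them still runs the same single command:

```yaml
# Hypothetical matrix job: three OSes in parallel, one command per task.
jobs:
  test:
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest, macos-latest]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/checkout@v4
      - run: cargo test --all          # the one command™ for this task
```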