If complex CI becomes indistinguishable from build systems, simple CI becomes indistinguishable from workflow engines. In an ideal world you would not need a CI product at all. The problem is that there is neither a great build system nor a great workflow engine.
Any universal build system is complex. You can either make the system simple and delegate the complexity to the user, like the early tools, e.g. Buildbot. Or you can hide the complexity to the best of your ability, like GitHub Actions. Or you can expose all the complexity, like Jenkins. I'm personally happy with the complexity being hidden and can deal with a few leaky abstractions if I need something non-standard.
Yeah I think this is totally true. The trouble is there are loads of build systems and loads of platforms that want to provide CI with different features and capabilities. It's difficult to connect them.
So you can have your build system construct its DAG and then convert that into a `.gitlab-ci.yml` to run the actual commands (which may be on different platforms, machines, etc.). Haven't tried it though.
I've used dynamic pipelines. They work quite well, with two caveats: now your build process is two-step and slower, and there are implementation bugs on GitLab's side: https://gitlab.com/groups/gitlab-org/-/epics/8205
FWIW Github also allows creating CI definitions dynamically.
The author has a point about CI being a build system and I saw it used and abused in various ways (like the CI containing only one big Makefile with the justification that we can easily migrate from one CI system to another).
However, with time, you can have a very good feel of these CI systems, their strong and weak points, and basically learn how to use them in the simplest way possible in a given situation. Many problems I saw IRL are just a result of an overly complex design.
I have built many CI/build-servers over the decades for various projects, and after using pretty much everything else out there, I've simply reverted, time and again - and, very productively - to using Plain Old Bash Scripts.
(Of course, this is only possible because I can build software in a bash shell. Basically: if you're using bash already, you don't need a foreign CI service - you just need to replace yourself with a bash script.)
I've got one for updating repos and dealing with issues, I've got one for setting up resources and assets required prior to builds, I've got one for doing the build - then another one for packaging, another for signing and notarization, and finally one more for delivering the signed, packaged, built software to the right places for testing purposes, as well as running automated tests, reporting issues, logging the results, and informing the right folks through the PM system.
And this all integrates with our project management software (some projects use Jira, some use Redmine), since CLI interfaces to the PM systems are easily attainable and set up. If a dev wants to ignore one stage in the build pipeline, they can - all of this can be wrapped up very nicely into a Makefile/CMakeLists.txt rig, or even just a 'build-dev.sh vs. build-prod.sh' mentality.
And the build server will always run the build/integration workflow according to the modules, and we can always be sure we'll have the latest and greatest builds available to us whenever a dev goes on vacation or whatever.
And all this with cross-platform, multiple-architecture targets - the same bash scripts, incidentally, run on Linux, MacOS and Windows, and all produce the same artefacts for the relevant platform: MacOS=.pkg, Windows=.exe, Linux=.deb(.tar)
It's a truly wonderful thing to onboard a developer, and they don't need a Jenkins login or to set up GitHub accounts to monitor actions, and so on. They just use the same build scripts, which are a key part of the repo already, and then they can just push to the repo when they're ready and let the build servers spit out the product on a network share for distribution within the group.
This works with both Debug and Release configs, and each dev can have their own configuration (by modifying the bash scripts, or rather the env.sh module..) and build target settings - even if they use an IDE for their front-end to development. (Edit: /bin/hostname is your friend, devs. Use it to identify yourself properly!)
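A minimal sketch of how that kind of script-per-stage pipeline chains together (all of the file names below are invented for illustration; the real scripts are obviously project-specific):

    #!/usr/bin/env bash
    set -euo pipefail
    source ./env.sh                 # per-dev/per-host settings, keyed off $(hostname)
    ./10-update-repos.sh
    ./20-fetch-assets.sh
    ./30-build.sh
    ./40-package.sh
    ./50-sign-notarize.sh
    ./60-deliver-and-report.sh      # network share, automated tests, PM notifications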
Of course, this all lives on well-maintained and secure hardware - not the cloud, although theoretically it could be moved to the cloud, there's just no need for it.
I'm convinced that the CI industry is mostly snake-oil being sold to technically incompetent managers. Of course, I feel that way about a lot of software services these days - but really, to do CI properly you have to have some tooling and methodology that just doesn't seem to be being taught any more, these days. Proper tooling seems to have been replaced with the ideal of 'just pay someone else to solve the problem and leave management alone'.
But, with adequate methods, you can probably build your own CI system and be very productive with it, without much fuss - and I say this with a view on a wide vista of different stacks in mind. The key thing is to force yourself to have a 'developer workstation + build server' mentality from the very beginning - and NEVER let yourself ship software from your dev machine.
(EDIT: call me a grey-beard, but get off my lawn: if you're shipping your code off to someone else [github actions, grrr...] to build artefacts for your end users, you probably haven't read Ken Thompson's "Reflections on Trusting Trust" deeply or seriously enough. Pin it to your forehead until you do!)
How much of this is a result of poorly thought-out build systems, which require layer after layer of duct tape? How much is related to chasing "cloud everything" narratives and vendor-specific pipelines? Even with the sanest tooling, some individuals will manage to create unhygienic slop. How much of the remainder is a futile effort to defend against these bad actors?
Disagree - using the one built into your hosting platform is the way to go, and if that doesn't work for whatever reason, TeamCity is better in every way.
The fact that maintaining any Jenkins instance makes you want to shoot yourself and yet it's the least worst option is an indictment of the whole CI universe.
I have never seen a system with documentation as awful as Jenkins, with plugins as broken as Jenkins, with behaviors as broken as Jenkins. Groovy is a cancer, and the pipelines are half assed, unfinished and incompatible with most things.
This is pretty much my experience too. Working with Jenkins is always a complete pain, but at the same time I can't identify any really solid alternatives either. So far sourcehut builds looks the most promising, but I haven't had a chance to use it seriously. While it's nominally part of the rest of the sourcehut ecosystem, I believe it could, with minor tweaks, also be run standalone if needed.
I am working on this problem and while I agree with the author, there is room for improvement for the current status quo:
> So going beyond the section title: CI systems aren't too complex: they shouldn't need to exist. Your CI functionality should be an extension of the build system.
True. In the sense that if you are running a test/build, you probably want to start local first (dockerize) and then run that container remotely. However, the need for CI stems from the fact that you need certain variables (e.g. you might want to run this when that commit lands, or when a pull request is opened, etc.). In a sense, a CI system goes beyond the state of your code to the state of your repo and the stuff connected to your repo (e.g. Slack).
> There is a GitHub Actions API that allows you to interact with the service. But the critical feature it doesn't let me do is define ad-hoc units of work: the actual remote execute as a service. Rather, the only way to define units of work is via workflow YAML files checked into your repository. That's so constraining!
I agree. Which is why most people will try to use the container or build system to do these complex tasks.
> Taskcluster's model and capabilities are vastly beyond anything in GitHub Actions or GitLab Pipelines today. There's a lot of great ideas worth copying.
You still need to run these tasks as containers. So, say if you want to compare two variables, that's a lot of compute for a relatively simple task. Which is why the status quo has settled with GitHub Actions.
> it should offer something like YAML configuration files like CI systems do today. That's fine: many (most?) users will stick to using the simplified YAML interface.
It should offer a basic programming/interpreted language like JavaScript.
This is an area where WebAssembly can be useful. At its core, WASM is a unit of execution. It is small, universal, cheap and has a very fast startup time compared to a full OS container. You can also run arbitrarily complex code in WASM while ensuring isolation.
My idea here is that CI becomes a collection of executable tasks that the CI architect can orchestrate while the build/test systems remain a simple build/test command that run on a traditional container.
> Take Mozilla's Taskcluster and its best-in-class specialized remote execute as a service platform.
That would be a mistake, in my opinion. There is a reason Taskcluster has failed to get any traction. Most people are not interested in engineering their CI but in getting tasks executed on certain conditions. Most companies don't have people/teams dedicated for this and it is something developers do alongside their build/test process.
> Will this dream become a reality any time soon? Probably not. But I can dream. And maybe I'll have convinced a reader to pursue it.
I am :) I do agree with your previous statement that it is a hard market to crack.
You can roll your own barebones DAG engine in any language that has promises/futures and the ability to wait for multiple promises to resolve (like JS's Promise.all()):
    For each task t in topological order:
        Promise.all(all in-edges to t).then(t)
Want to run tasks on remote machines? Simply *waves hands* make a task that runs ssh.
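A rough shell equivalent of the same idea, with background jobs and `wait` standing in for promises (task names and the remote host are made up):

    run() { echo ">> $1"; sleep 1; }        # stand-in for real work
    run lint &  lint=$!
    run unit &  unit=$!
    wait "$lint" "$unit"                    # both in-edges of "package" have resolved
    run package
    ssh build-box ./integration-tests.sh    # the hand-waved "task that runs ssh"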
I've investigated this idea in the past. It's an obvious one but still good to have an article about it, and I'd not heard of Taskcluster so that's cool.
My conclusion was that this is near 100% a design taste and business model problem. That is, to make progress here will require a Steve Jobs of build systems. There's no technical breakthroughs required but a lot of stuff has to gel together in a way that really makes people fall in love with it. Nothing else can break through the inertia of existing practice.
Here are some of the technical problems. They're all solvable.
• Unifying local/remote execution is hard. Local execution is super fast. The bandwidth, latency and CPU speed issues are real. Users have a machine on their desk that compared to a cloud offers vastly higher bandwidth, lower latency to storage, lower latency to input devices and if they're Mac users, the fastest single-threaded performance on the market by far. It's dedicated hardware with no other users and offers totally consistent execution times. RCE can easily slow down a build instead of speeding it up and simulation is tough due to constantly varying conditions.
• As Gregory observes, you can't just do RCE as a service. CI is expected to run tasks devs aren't trusted to do, which means there has to be a way to prove that a set of tasks executed in a certain way even if the local tool driving the remote execution is untrusted, along with a way to prove that to others. As Gregory explores the problem he ends up concluding there's no way to get rid of CI and the best you can do is reduce the overlap a bit, which is hardly a compelling enough value prop. I think you can get rid of conventional CI entirely with a cleverly designed build system, but it's not easy.
• In some big ecosystems like JS/Python there aren't really build systems, just a pile of ad-hoc scripts that run linters, unit tests and Docker builds. Such devs are often happy with existing CI because the task DAG just isn't complex enough to be worth automating to begin with.
• In others like Java the ecosystem depends heavily on a constellation of build system plugins, which yields huge levels of lock-in.
• A build system task can traditionally do anything. Making tasks safe to execute remotely is therefore quite hard. Tasks may depend on platform specific tooling that doesn't exist on Linux, or that only exists on Linux. Installed programs don't helpfully offer their dependency graphs up to you, and containerizing everything is slow/resource intensive (also doesn't help for non-Linux stuff). Bazel has a sandbox that makes it easier to iterate on mapping out dependency graphs, but Bazel comes from Blaze which was designed for a Linux-only world inside Google, not the real world where many devs run on Windows or macOS, and kernel sandboxing is a mess everywhere. Plus a sandbox doesn't solve the problem, only offers better errors as you try to solve it. LLMs might do a good job here.
But the business model problems are much harder to solve. Developers don't buy tools, only SaaS, but they also want to be able to do development fully locally. Because throwing a CI system up on top of a cloud is so easy it's a competitive space and the possible margins involved just don't seem that big. Plus, there is no way to market to devs that has a reasonable cost. They block ads, don't take sales calls, and some just hate the idea of running proprietary software locally on principle (none hate it in the cloud), so the only thing that works is making clients open source, then trying to saturate the open source space with free credits in the hope of gaining attention for a SaaS. But giving compute away for free comes at a staggering cost that can eat all your margins. The whole dev tools market has this problem far worse than other markets do, so why would you write software for devs at all? If you want to sell software to artists or accountants it's much easier.
The issue that I see is that "Continuous integration" is the practice of frequently merging to main.
Continuous: do it often, daily or more often
Integration: merging changes to main
He's talking about build tools, which are a _support system_ for actual CI, but are not a substitute for it. These systems allow you to continuously integrate, quickly and safely. But they aren't the thing itself. Using them without frequent merges to main is common, but isn't CI. It's branch maintenance.
Yes, semantic drift is a thing, but you won't get the actual benefits of the actual practice if you do something else.
If you want to talk "misdirected CI", start there.
I remember a Rich Hickey talk where he described Datomic, his database. He said "the problem with a database is that it's over there." By modeling data with immutable "facts" (a la Prolog), much of the database logic can be moved closer to the application. In his case, with Clojure's data structures.
Maybe the problem with CI is that it's over there. As soon as it stops being something that I could set up and run quickly on my laptop over and over, the frog is already boiled.
The comparison to build systems is apt. I can and occasionally do build the database that I work on locally on my laptop without any remote caching. It takes a very long time, but not too long, and it doesn't fail with the error "people who maintain this system haven't tried this."
The CI system, forget it.
Part of the problem, maybe the whole problem, is that we could get it all working and portable and optimized for non-blessed environments, but it still will only be expected to work over there, and so the frog keeps boiling.
I bet it's not an easy problem to solve. Today's grand unified solution might be tomorrow's legacy tar pit. But that's just software.
Your build should be this:
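Something along these lines, say (the script name here is just a placeholder):

    ./build.sh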
and that's it (and that can even trigger a container build).
I've spent far too much time debugging CI builds that work differently to a local build, and it's always because of extra nonsense added to the CI server somehow. I've yet to find a build in my industry that doesn't yield to this 'pattern'.
Your environment setup should work equally on a local machine or a CI/CD server, or your devops team has set it up identically on bare metal using Ansible or something.
Agreed with this sentiment, but with one minor modification: use a Makefile instead. Recipes are still chunks of shell, and they don’t need to produce or consume any files if you want to keep it all task-based. You get tab-completion, parallelism, a DAG, and the ability to start anywhere on the task graph that you want.
It’s possible to do all of this with a pure shell script, but then you’re probably reimplementing some or all of the list above.
> Part of the problem, maybe the whole problem, is that we could get it all working and portable and optimized for non-blessed environments, but it still will only be expected to work over there, and so the frog keeps boiling.
Build the software inside of containers (or VMs, I guess): a fresh environment for every build, any caches or previous build artefacts explicitly mounted.
Then, have something like this, so those builds can also be done locally: https://docs.drone.io/quickstart/cli/
Then you can stack as many turtles as you need - such as having build scripts that get executed as a part of your container build, having Maven or whatever else you need inside of there.
It can be surprisingly sane: your CI server doing the equivalent of "docker build -t my_image ..." and then doing something with it, whereas during build time there's just a build.sh script inside.
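A sketch of that shape, with the image name and output path as assumptions; everything interesting happens inside build.sh during the image build:

    docker build -t my_image .              # Dockerfile just runs build.sh
    id=$(docker create my_image)
    docker cp "$id":/app/dist ./artifacts   # pull the build output back out
    docker rm "$id"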
Transactions and a single consistent source of truth, with stuff like observability and temporal ordering, are centralized and therefore "over there" from almost every place you could be.
As long as communications have bounded speed (speed of light or whatever else) there will be event horizons.
The point of a database is to track changes and therefore time centrally. Not because we want to, but because everything else has failed miserably. Even conflicting CRDT change merges and git merges can get really hairy really quickly.
People reinvent databases about every 10 years. Hardware gets faster. Just enjoy the show.
It’s why I’ve started making CI simply a script that I can run locally or on GitHub Actions etc.
Then the CI just becomes a bit of yaml that runs my script.
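For example, something like this (the script and step names are invented), with the workflow YAML reduced to a single step that runs the script:

    #!/usr/bin/env bash
    # ci.sh - the only thing the GitHub Actions workflow (or a laptop) invokes
    set -euo pipefail
    ./scripts/lint.sh
    ./scripts/test.sh
    ./scripts/build.sh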
How does that script handle pushing to ghcr, or pulling an artifact from a previous stage for testing?
In my experience these are the bits that fail all the time, and are the most important parts of CI once you go beyond it taking 20/30 seconds to build.
A clean build in an ephemeral VM of my project would take about 6 hours on a 16 core machine with 64GB RAM.
Sheesh. I've got a multimillion-line modern C++ project that consists of a large number of dylibs and a few hundred delivered apps. A completely cache-free build is only a few minutes. Incremental and clean (cached) builds are seconds, or hundreds of milliseconds.
It sounds like you've got hundreds of millions of lines of code! (Maybe a billion!?) How do you manage that?
It’s a few million lines of c++ combined with content pipelines. Shader compilation is expensive and the tooling is horrible.
Our cached builds on CI are 20 minutes from submit to running on steam which is ok. We also build with MSVC so none of the normal ccache stuff works for us, which is super frustrating
Fuck. I write shader compilers.
Are you not worried about parallelisation in your case? Or have you solved that in another way (one big beefy build machine maybe?)
Honestly not really… sure it might not be as fast but the ability to know I can debug it and build it exactly the same way locally is worth the performance hit.
I want my build system to be totally declarative
Oh the DSL doesn't support what I need it to do.
Can I just have some templating, or a few places to put in custom scripts?
Congratulations! You now have a turing complete system. And yes, per the article that means you can cryptocurrency mine.
Ansible, Terraform, Maven, Gradle.
The unfortunate fact is that these IT domains (builds and CI) sit at the junction of two famously slippery slopes:
1) configuration
2) workflows
These two slippery slopes are famous for their demos of how clean and simple they are and how easy it is to do anything you need them to do.
In the demo.
And sure it might stay like that for a little bit.
But inevitably.... Script soup
Alternative take: CI is the successful monetization of Make-as-a-Service.
The most concerning part about modern CI to me is how most of it is running on GitHub Actions, and how GitHub itself has been deprioritizing GitHub Actions maintenance and improvements over AI features.
Seriously, take a look at their pinned repo: https://github.com/actions/starter-workflows
> Thank you for your interest in this GitHub repo, however, right now we are not taking contributions.
> We continue to focus our resources on strategic areas that help our customers be successful while making developers' lives easier. While GitHub Actions remains a key part of this vision, we are allocating resources towards other areas of Actions and are not taking contributions to this repository at this time.
They are instead focusing on Agentic Workflows, which use natural language instead of YAML.
https://github.com/githubnext/gh-aw
IMO development is too complex and misdirected in general since we cargo cult FAANG.
Need AWS, Azure or GCP deployment? Ever thought about putting it on bare metal yourself? If not, why not? Because it's not best practice? Nonsense. The answer with these things is: it depends, and if your app has not that many users, you can get away with it, especially if it's a B2B or internal app.
It's also too US centric. The idea of scalability applies less to most other countries.
many ppl also underestimate how capable modern hardware is: for ~10usd you could handle like a million concurrent connections with a redis cluster on a handful of VPSs...
Relevant: Program Your Own Computer in Python (https://www.youtube.com/watch?v=ucWdfZoxsYo) from this year's PyCon, emphasizing how much you can accomplish with local execution and how much overhead can be involved in doing it remotely.
One Beelink in a closet runs our entire ops cluster.
This
Requirements are complex too. Even if you don't need to scale at all, you likely do need zero-downtime deployment, easy rollbacks, server fault tolerance, service isolation... If you put your apps into containers and throw them onto Kubernetes, you get a lot of that "for free" and in a well-known and well-tested way. Hand-rolling even one of those things, let alone all of them together, would take far too much effort.
> you likely do need zero-downtime deployment
I know SaaS businesses that don't as they operate in a single country, within a single timezone and the availability needs to be during business days and business hours.
> easy rollbacks
Yea, I haven't seen exceptions at all on this. So yea.
> server fault tolerance
That really depends. Many B2B or internal apps are fine with a few hours, or even a day, of downtime.
> service isolation
Many companies just have one app and if it's a monolith, then perhaps not.
> Hand-rolling even one of those things
Wow, I see what you're trying to say and I agree. But it really comes across as "if you don't use something like Kubernetes you need to handroll these things yourself." And that's definitely not true. But yea, I don't think that's what you meant to say.
Again, it depends
I'm definitely curious about alternatives for getting these features without k8s. Frankly, I don't like it, but I use it because it's the easiest way I've found to get all of these features. Every deployment I've seen that didn't use containers and something like k8s either didn't have a lot of these features, implemented them with a bespoke pile of shell scripts, or a mix of both.
For context, I work in exactly that kind of "everyone in one time zone" situation and none of our customers would be losing thousands by the minute if something went down for a few hours or even a day. But I still like all the benefits of a "modern devops" approach because they don't really cost much at all and it means if I screw something up, I don't have to spend too much time unscrewing it. It took a bit more time to set up compared to a basic debian server, but then again, I was only learning it at the time and I've seen friends spin up fully production-grade Kubernetes clusters in minutes. The compute costs are also negligible in the grand scheme of things.
Unix filesystem inodes and file descriptors stick around until they are closed, even if the inode has been unlinked from a directory. The latter is usually called "deleting the file".
All the stuff Erlang does.
Static linking and chroot.
The problems and the concepts and solutions have been around for a long time.
Piles and piles of untold complexity, missing injectivity on data in the name of (leaky) abstractions, and cargo-culting have been with us on the human side of things for even longer.
And as always: technical and social problems may not always benefit from the same solutions.
Ok so let's say you statically link your entire project. There are many reasons you shouldn't or couldn't, but let's say you do. How do you deploy it to the server? Rsync, sure. How do you run it? Let's say a service manager like systemd. Can you start a new instance while the old one is running? Not really, you'll need to add some bash script glue. Then you need a loadbalancer to poll the readiness of the new one and shift the load. What if the new instance doesn't work right? You need to watch for that, presumably with another bash script, stop it and keep the old one as "primary". Also, you'll need to write some selinux rules to make it so if someone exploits one service, they can't pivot to others.
Congrats, you've just rewritten half of kubernetes in bash. This isn't reducing complexity, it's NIH syndrome. You've recreated it, but in a way that nobody else can understand or maintain.
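Concretely, that glue ends up looking something like this (unit names, ports, and the load-balancer helper are all hypothetical), and it is exactly the part that quietly grows into half of Kubernetes:

    rsync ./myapp deploy@server:/opt/myapp/releases/v2/
    ssh deploy@server 'systemctl start myapp@8081'                   # new instance on a spare port
    until curl -fsS http://server:8081/healthz; do sleep 1; done    # poor man's readiness probe
    ssh deploy@server 'update-lb-backend 8081 && systemctl stop myapp@8080'   # update-lb-backend is hand-waved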
Holy shit you don't get anything for _free_ as a result of adopting Kubernetes dude. The cost is in fact quite high in many cases - you adopt Kubernetes and all of the associated idiosyncrasies, which can be a lot more than what you left behind.
For free as in "don't have to do anything to make those features, they're included".
What costs are you talking about? Packaging your app in a container is already quite common so if you already do that all you need to do is replace your existing yaml with a slightly different yaml.
If you don't do that already, it's not really that difficult. Just copy-paste your install script or rewrite your Ansible playbooks into a Dockerfile. Enjoy the free security boost as well.
What are the other costs? Maintaining something like Talos is actually less work than a normal Linux distro. You already hopefully have a git repo and CI for testing and QA, so adding a "build and push a container" step is a simple one-time change. What am I missing here?
I'm not sure why no one has mentioned it yet, but the CI tool of sourcehut (https://man.sr.ht/builds.sr.ht/) simplifies all of this. It just spins up a Linux distro of your choice and executes a very bare-bones yml that essentially contains a lot of shell commands, so it's also easy to replicate locally.
There are 12 yml keywords in total that cover everything.
Other cool things are the ability to ssh into a build if it failed (for debugging), and to run a one-time build with a custom yml without committing it (for testing).
I believe it can check out any repository, not just the one in sourcehut that triggers a build, and it also has a GraphQL API.
A big reason people use Actions is that they need to run things on macOS and Windows.
The article resonates a lot with me. I've been seeing the transition from Jenkins to Azure DevOps / GitHub Actions (same thing more or less) in the company I'm working at and came to very similar conclusions. The single big Jenkins machine shared by 10+ teams, mixing UI configuration from 20 plugins with build systems and custom scripts, wasn't great, so it was the right decision to move away from it. However, the current workflow of write->commit->wait->fail->write, while figuring out the correct YAML syntax of some third-party GitHub Action that is required to do something very basic like finding files in a nested folder by pattern, isn't great either.
Take a look at Prefect - https://www.prefect.io/ - as far as I can see, it ticks a lot of the boxes that the author mentions (if you can live with the fact that the API is a Python SDK; albeit a very good one that gives you all the scripting power of Python). Don't be scared away by the buzzwords on the landing page, browsing the extensive documentation is totally worthwhile to learn about all of the features Prefect offers. Execution can either happen on their paid cloud offering or self-hosted on your own physical or cloud premises at no extra cost. The Python SDK is open source.
Disclaimer: I am not affiliated with Prefect in any way.
I agree on build systems and CI being closely related, and could (in an ideal world) benefit from far tighter integration. But..
> So here's a thought experiment: if I define a build system in Bazel and then define a server-side Git push hook so the remote server triggers Bazel to build, run tests, and post the results somewhere, is that a CI system? I think it is! A crude one. But I think that qualifies as a CI system.
Yes, the composition of hooks, build, and result posting can be thought of as a CI system. But then the author goes on to say
> Because build systems are more generic than CI systems (I think a sufficiently advanced build system can do a superset of the things that a sufficiently complex CI system can do)
Which ignores the thing that makes CI useful: the continuous part of continuous integration. Build systems are explicitly invoked to do something; CI systems continuously observe events and trigger actions.
In the conclusion section, the author mentions this for their idealized system:
> Throw a polished web UI for platform interaction, result reporting, etc on top.
I believe that platform integrations, result management, etc. should be pretty central to a CI system, and not a side-note that is just thrown on top.
This speaks to me. Lately, I've encountered more and more anti-patterns where the project's build system was bucked in favor of something else. Like having a Maven project and, instead of following the declarative convention of defining profiles and goals, everything was a hodgepodge of shell scripts that only the Jenkins pipeline knew how to stitch together. Or a more recent case where the offending project had essential build functionality embedded in a Jenkins pipeline, so you have to reverse engineer what it's doing just so you can execute the build steps from your local machine. A particularly heinous predicament, as the project depends on the execution of the pipeline to provide basic feedback.
Putting too much responsibility in the CI environment makes life as a developer (or anyone responsible for maintaining the CI process) more difficult. It's far better to have a consistent use of the build system that can be executed the same way on your local machine as in your CI environment. I suppose this is the mess you find yourself in when you have other teams building your pipelines for you?
These online / paid CI systems are a dime a dozen and who knows what will happen to them in the future…
I'm still rocking my good old Jenkins machine, which to be fair took me a long time to set up, but it has been rock solid ever since, will never cost me much, and will never be shut down.
But I can definitely see the appeal of GitHub Actions, etc…
until you have to debug a GH action, especially if it only runs on main or is one of the handful of tasks that are only picked up when committed to main.
god help you, and don’t even bother with the local emulators / mocks.
But debugging Jenkins jobs is absolute pain too, in varying ways depending on how the job was defined (clicking through the UI, generated by something, Groovy, pipelines, etc.).
Yea, that's really a pain and could be improved.
Are there any Jenkins Gurus out there who can give some tips?
What are the good local emulators for gh actions? The #1 reason we don’t use them is because the development loop is appallingly slow.
nektos/act was considered good enough to be adopted as the CI solution for Gitea and Forgejo. The latter uses it for all their development, seems to work out fine for them.
I've never been a fan of GitHub Actions (too locked-in/proprietary for my taste), so no idea if it lives up to expectations.
none of them are good ime, stopped using actions for the same reason
Sourcehut builds is so much better. You can actually ssh into the machine and debug it directly.
There is a community action for doing so in Github too, but god knows if it's secure or works as well as Sourcehut.
https://github.com/marketplace/actions/debugging-with-ssh
I've had a great experience using `act` to debug github actions containers. I guess your mileage, as usual, will vary depending on what you are doing in CI.
I tried act a couple of years ago and ran into a lot of issues when running actions that have external dependencies.
+1 for Jenkins.
At $dayjob they recently set up git runners. The effort I'm currently working on has the OS dictated to us; long story, don't ask. The OS is CentOS 7.
The runners do not support this. There is an effort to move to Ubuntu 22.04. The runners also don’t support this.
I’m setting up a Jenkins instance.
I've been able to effectively skip the entire CI/CD conversation by preferring modern .NET and SQLite.
I recently spent a day trying to get a GH Actions build going but got frustrated and just wrote my own console app to do it. Polling git, tracking a commit hash and running dotnet build is not rocket science. Putting this agent on the actual deployment target skips about 3 boss fights.
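The same poll-track-build loop, sketched as shell rather than the commenter's .NET console app (paths, branch, and polling interval are assumptions):

    last=""
    while sleep 60; do
      git -C /srv/app fetch origin main
      head=$(git -C /srv/app rev-parse origin/main)
      if [ "$head" != "$last" ]; then
        git -C /srv/app checkout --force "$head"                  # move the working tree to the new commit
        (cd /srv/app && dotnet build -c Release) && last="$head"
      fi
    done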
I wrote Linci to tackle this issue a few years back
Https://linci.tp23.org
CI systems are too complicated and are basically about locking you in. But what you (should) do is run CLI commands on dedicated boxes in remote locations.
In Linci, everything done remotely is the same locally. Just pick a box for the job.
There is almost no code, and what there is could be rewritten in any language if you prefer. Storage is git/VCS + filesystem.
Filesystems are not fashionable because they are a problem for the big boys, but not for you or me. Filesystem storage makes things easy and hackable.
That is Unix bread and butter. Microsoft needs a CI in YAML. Linux does not.
Been using it for a while on a small scale and it's never made me want anything else.
Scripting: bash
Remoting: ssh
Auth: pam
Notification: irc/ii (or mail, stomp, etc.)
Scheduling: crond
Webhooks: not needed if the repo is on the same container; use bash for most hooks, and a nodejs server that calls the cli for GitHub
Each and every plug-in is a bash script and some env variables.
I've read about other similar setups hacked up with make, but I don't like make's env var handling and syntax. Bash is great if what you do is simple, and as the original article points out so clearly, if your CI is complicated you should probably rethink it.
Oh, and debugging builds is a charm: ssh into the remote box and run the same commands the tool is running, as the same user, in a bash shell (the same language).
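In practice that looks roughly like this (host names and job paths are placeholders): cron kicks off a script on the build box, and debugging means running the very same script by hand:

    # on the build box, via crontab:
    #   */10 * * * * /var/linci/jobs/myproject/build >> /var/linci/logs/myproject 2>&1
    ssh ci@build-box
    /var/linci/jobs/myproject/build       # same user, same shell, same commands the tool runs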
CI debugging at my day job is literally impossible. Read logs, try the whole flow again from the beginning.
With Linci, I can fix any stage in the flow if I want to, or check in and run again if I am 99% sure it will work.
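To illustrate the idea (Linci itself is bash; this is just a sketch of the concept with made-up stage and host names), "remote" is nothing more than the same command string prefixed with ssh:

```typescript
// The core idea: each stage is one plain command, and running it "remotely"
// just means prefixing the very same command with ssh. Debugging is then
// "ssh in and type the same thing". Stages and hosts are placeholders.
import { execSync } from "node:child_process";

const stages: Record<string, string> = {
  build: "./build.sh",
  test: "./test.sh",
};

function run(stage: string, host?: string): void {
  const cmd = stages[stage];
  const full = host ? `ssh ${host} '${cmd}'` : cmd;
  execSync(full, { stdio: "inherit" });
}

run("build", "build-box-01"); // remote
// run("build");              // identical command, local
```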
Drone was absolutely perfect back when it was Free Software. Literally "run these commands in this docker container on these events" and basically nothing more. We ran the last fully open source version much longer than we probably should have.
When they went commercial, GitHub Actions became the obvious choice, but it's just married to so much weirdness and unpredictability.
The whole thing with Drone opened my eyes at least; I'll never sign a CLA again.
It lives on as Woodpecker, the fork of the last truly free version. As simple as it gets, no CLAs required to contribute.
Sometimes I feel we Amazonians are in a parallel world when it comes to building and deploying.
Wait, a CI isn't supposed to be a build system that also runs tests?
But you see - it's efficient if we add _our_ configuration layer with custom syntax to spawn a test-container-spawner with the right control port so that it can orchestrate the spawning of the environment and log the result to production-test-telemetry, and we NEED to have a dns-retry & dns-timeout parameter so our test-dns resolver has time to run its warm-up procedure.
And I want it all as a SaaS!
In my view, the CI system is supposed to run builds and tests in a standardized/reproducible environment (and to store logs/build artifacts).
This is useful because you get a single source of truth for "does that commit break the build" and eliminate implicit dependencies that might make builds work on one machine but not another.
But specifying dependencies between your build targets and/or source files turns that runner into a bad, incomplete reimplementation of make, which is what this post is complaining about AFAICT.
A CI system is more like a scheduler.
To make things simple: make is a build system, running make in a cron task is CI.
There is nothing special about tests; they're just a step in the build process that you may or may not have.
A CI is really just a "serverless" application.
You're 100% right IMHO about the convergence of powerful CI pipelines and full build systems. I'm very curious what you'll think if you try Dagger, which is my tool of choice for programming the convergence of CI and build systems. (Not affiliated, just a happy customer)
https://dagger.io/
I absolutely don't understand what it does from the website. (And there is way too much focus on "agents" on the front page for my tastes, but I guess it's 2025)
edit: all the docs are about "agents"; I don't want AI agents, is this for me at all?
So, it sounded interesting but they have bet too hard on the "developer marketing" playbook of "just give the minimum amount of explanation to get people to try the stuff".
For example, there is a quick start, so I skip that and click on "core concepts". That just redirects to quick start. There's no obvious reference or background theory.
If I was going to trust something like this I want to know the underlying theory and what guarantees it is trying to make. For example, what is included in a cache key, so that I know which changes will cause a new invocation and which ones will not.
Ideally, CI would just invoke the build system. With Nix, this is trivial.
I’ve been using Pulumi automation in our CI and it’s been really nice. There’s definitely a learning curve with the asynchronous Outputs but it’s really nice for building docker containers and separating pieces of my infra that may have different deployment needs.
If complex CI becomes indistinguishable from build systems, simple CI becomes indistinguishable from workflow engines. In an ideal world you would not need a CI product at all. The problem is that there is neither a great build system nor a great workflow engine.
Local-first, CI-second.
CI, being a framework, is easy to get locked into, preventing local-first dev.
I find justfiles can help unify commands, making it easier to keep logic from accruing in CI.
Any universal build system is complex. You can either make the system simple and delegate the complexity to the user, like the early tools, e.g. buildbot. Or you can hide the complexity to the best of your ability, like GitHub Actions. Or you can expose all the complexity, like Jenkins. I'm personally happy for the complexity to be hidden, and I can deal with a few leaky abstractions if I need something non-standard.
Yeah I think this is totally true. The trouble is there are loads of build systems and loads of platforms that want to provide CI with different features and capabilities. It's difficult to connect them.
One workaround that I have briefly played with but haven't tried in anger: GitLab lets you dynamically create its `.gitlab-ci.yaml` file: https://docs.gitlab.com/ci/pipelines/downstream_pipelines/#d...
So you can have your build system construct its DAG and then convert that into a `.gitlab-ci.yaml` to run the actual commands (which may be on different platforms, machines, etc.). Haven't tried it though.
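For what it's worth, the conversion itself is straightforward. A sketch, with a made-up task graph, of emitting a child pipeline that uses `needs:` for the DAG edges and is then handed back to GitLab as described in the docs linked above:

```typescript
// Sketch: turn a build-system DAG into a generated child pipeline.
// The task graph below is made up; a real build system would provide it.
type Node = { cmd: string; deps: string[] };

const dag: Record<string, Node> = {
  compile: { cmd: "make compile", deps: [] },
  test: { cmd: "make test", deps: ["compile"] },
  package: { cmd: "make package", deps: ["test"] },
};

function toGitlabCi(dag: Record<string, Node>): string {
  const lines: string[] = [];
  for (const [name, node] of Object.entries(dag)) {
    lines.push(`${name}:`);
    lines.push(`  script:`);
    lines.push(`    - ${node.cmd}`);
    if (node.deps.length > 0) {
      // `needs:` lets GitLab schedule jobs by dependency rather than by stage.
      lines.push(`  needs: [${node.deps.join(", ")}]`);
    }
  }
  return lines.join("\n") + "\n";
}

// Write this out and let the trigger job pick it up as the dynamic child pipeline.
console.log(toGitlabCi(dag));
```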
I've used dynamic pipelines. They work quite well, with two caveats: your build process is now two-step and slower, and there are implementation bugs on GitLab's side: https://gitlab.com/groups/gitlab-org/-/epics/8205
FWIW GitHub also allows creating CI definitions dynamically.
If there's something worse than a gitlab-ci.yaml file, it's a dynamically generated gitlab-ci.yaml file.
That's why God created Jenkins. My favourite application ever
The author has a point about CI being a build system and I saw it used and abused in various ways (like the CI containing only one big Makefile with the justification that we can easily migrate from one CI system to another).
However, with time, you can have a very good feel of these CI systems, their strong and weak points, and basically learn how to use them in the simplest way possible in a given situation. Many problems I saw IRL are just a result of an overly complex design.
CI = Continuous Integration
'Continuous Integration' in case anyone is wondering. Not spelled out anywhere in the article.
I have built many CI/build-servers over the decades for various projects, and after using pretty much everything else out there, I've simply reverted, time and again - and, very productively - to using Plain Old Bash Scripts.
(Of course, this is only possible because I can build software in a bash shell. Basically: if you're using bash already, you don't need a foreign CI service - you just need to replace yourself with a bash script.)
I've got one for updating repos and dealing with issues, I've got one for setting up resources and assets required prior to builds, I've got one for doing the build - then another one for packaging, another for signing and notarization, and finally one more for delivering the signed, packaged, built software to the right places for testing purposes, as well as running automated tests, reporting issues, logging the results, and informing the right folks through the PM system.
And this all integrates with our project management software (some projects use Jira, some use Redmine), since CLI interfaces to the PM systems are easily attainable and set up. If a dev wants to ignore one stage in the build pipeline, they can - all of this can be wrapped up very nicely into a Makefile/CMakeLists.txt rig, or even just a 'build-dev.sh vs. build-prod.sh' mentality.
And the build server will always run the build/integration workflow according to the modules, and we can always be sure we'll have the latest and greatest builds available to us whenever a dev goes on vacation or whatever.
And all this with cross-platform, multiple-architecture targets - the same bash scripts, incidentally, run on Linux, macOS and Windows, and all produce the same artefacts for the relevant platform: macOS=.pkg, Windows=.exe, Linux=.deb(.tar)
It's a truly wonderful thing to onboard a developer, and they don't need a Jenkins login or to set up GitHub accounts to monitor actions, and so on. They just use the same build scripts, which are a key part of the repo already, and then they can just push to the repo when they're ready and let the build servers spit out the product on a network share for distribution within the group.
This works with both Debug and Release configs, and each dev can have their own configuration (by modifying the bash scripts, or rather the env.sh module..) and build target settings - even if they use an IDE for their front-end to development. (Edit: /bin/hostname is your friend, devs. Use it to identify yourself properly!)
Of course, this all lives on well-maintained and secure hardware - not the cloud, although theoretically it could be moved to the cloud, there's just no need for it.
I'm convinced that the CI industry is mostly snake-oil being sold to technically incompetent managers. Of course, I feel that way about a lot of software services these days - but really, to do CI properly you have to have some tooling and methodology that just doesn't seem to be taught any more. Proper tooling seems to have been replaced with the ideal of 'just pay someone else to solve the problem and leave management alone'.
But, with adequate methods, you can probably build your own CI system and be very productive with it, without much fuss - and I say this with a view on a wide vista of different stacks in mind. The key thing is to force yourself to have a 'developer workstation + build server' mentality from the very beginning - and NEVER let yourself ship software from your dev machine.
(EDIT: call me a grey-beard, but get off my lawn: if you're shipping your code off to someone else [GitHub Actions, grrr...] to build artefacts for your end users, you probably haven't read Ken Thompson's "Reflections on Trusting Trust" deeply or seriously enough. Pin it to your forehead until you do!)
How much of this is a result of poorly thought out build systems, which require layer after layer of duct tape? How much is related to chasing "cloud everything" narratives and vendor-specific pipelines? Even with the sanest tooling, some individuals will manage to create unhygienic slop. How much of the remainder is a futile effort to defend against these bad actors?
2025 and Jenkins still the way to go
Disagree - using the one built into your hosting platform is the way to go, and if that doesn't work for whatever reason, TeamCity is better in every way.
The fact that maintaining any Jenkins instance makes you want to shoot yourself and yet it's the least worst option is an indictment of the whole CI universe.
I have never seen a system with documentation as awful as Jenkins, with plugins as broken as Jenkins, with behaviors as broken as Jenkins. Groovy is a cancer, and the pipelines are half assed, unfinished and incompatible with most things.
This is pretty much my experience too. Working with Jenkins is always complete pain, but at the same time I can't identify any really solid alternatives either. So far sourcehut builds is looking the most promising, but I haven't had a chance to use it seriously. While it's nominally part of the rest of the sourcehut ecosystem, I believe it could also be run standalone with minor tweaks if needed.
"Jenkins is the worst form of CI except for all those other forms that have been tried" - Winston Churchill, probably
Not a single definition of CI in the posting at all.
A tale as old as time I suppose…
> But if your configuration files devolve into DSL, just use a real programming language already.
This times a million.
Use a real programming language with a debugger. YAML is awful and Starlark isn’t much better.
> Use a real programming language with a debugger. YAML is awful and Starlark isn’t much better.
I was with you until you said "Starlark". Starlark is a million times better than YAML in my experience; why do you think it isn't?
Bonus points when you start embedding code in your YAML-ified DSL.
Since the article came out in 2021, has anyone ever built the product of his dreams described in the conclusion?
I am working on this problem and while I agree with the author, there is room for improvement for the current status quo:
> So going beyond the section title: CI systems aren't too complex: they shouldn't need to exist. Your CI functionality should be an extension of the build system.
True, in the sense that if you are running a test/build, you probably want to start local first (dockerize) and then run that container remotely. However, the need for CI stems from the fact that you need certain variables (i.e. you might want to run this when that commit lands, or on this or that pull request, etc.). In a sense, a CI system goes beyond the state of your code to the state of your repo and the stuff connected to your repo (e.g. Slack).
> There is a GitHub Actions API that allows you to interact with the service. But the critical feature it doesn't let me do is define ad-hoc units of work: the actual remote execute as a service. Rather, the only way to define units of work is via workflow YAML files checked into your repository. That's so constraining!
I agree. Which is why most people will try to use the container or build system to do these complex tasks.
> Taskcluster's model and capabilities are vastly beyond anything in GitHub Actions or GitLab Pipelines today. There's a lot of great ideas worth copying.
You still need to run these tasks as containers. So if you want to, say, compare two variables, that's a lot of compute for a relatively simple task, which is why the status quo has settled on GitHub Actions.
> it should offer something like YAML configuration files like CI systems do today. That's fine: many (most?) users will stick to using the simplified YAML interface.
It should offer a basic programming/interpreted language like JavaScript.
This is an area where WebAssembly can be useful. At its core, WASM is a unit of execution. It is small, universal, cheap and has a very fast startup time compared to a full OS container. You can also run arbitrarily complex code in WASM while ensuring isolation.
My idea here is that CI becomes a collection of executable tasks that the CI architect can orchestrate while the build/test systems remain a simple build/test command that run on a traditional container.
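To make that concrete, here is a loose sketch of what orchestration in a real language could look like, with the heavy lifting still delegated to containerized build/test commands (the event shape, task names, and runTask helper are all invented for illustration):

```typescript
// Sketch: CI as plain code reacting to repo events. Each task would be an
// isolated unit of execution (a container for builds/tests, or something as
// light as a WASM module for small glue steps). Everything here is hypothetical.
type RepoEvent = { kind: "push" | "pull_request"; branch: string };

async function runTask(name: string): Promise<void> {
  // Placeholder: in a real system this would dispatch the named task
  // to a runner and await its result.
  console.log(`running ${name}`);
}

async function onEvent(ev: RepoEvent): Promise<void> {
  if (ev.kind === "pull_request") {
    await runTask("build");
    await runTask("test");
  } else if (ev.kind === "push" && ev.branch === "main") {
    await runTask("build");
    await runTask("test");
    await runTask("publish");
    await runTask("notify-slack");
  }
}

onEvent({ kind: "push", branch: "main" });
```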
> Take Mozilla's Taskcluster and its best-in-class specialized remote execute as a service platform.
That would be a mistake, in my opinion. There is a reason Taskcluster has failed to get any traction. Most people are not interested in engineering their CI but in getting tasks executed on certain conditions. Most companies don't have people/teams dedicated to this, and it is something developers do alongside their build/test process.
> Will this dream become a reality any time soon? Probably not. But I can dream. And maybe I'll have convinced a reader to pursue it.
I am :) I do agree with your previous statement that it is a hard market to crack.
You can roll your own barebones DAG engine in any language that has promises/futures and the ability to wait for multiple promises to resolve (like JS's Promise.all()).
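Something like this rough sketch, where Promise.all waits on each task's dependencies and a memo map makes shared dependencies run only once (task names and bodies are placeholders):

```typescript
// Barebones DAG engine: each task declares its dependencies, and a memoized
// runTask() awaits them all before executing the task body.
type Task = { deps: string[]; run: () => Promise<void> };

const tasks: Record<string, Task> = {
  compile: { deps: [], run: async () => console.log("compile") },
  lint:    { deps: [], run: async () => console.log("lint") },
  test:    { deps: ["compile"], run: async () => console.log("test") },
  package: { deps: ["compile", "lint", "test"], run: async () => console.log("package") },
};

const started = new Map<string, Promise<void>>();

function runTask(name: string): Promise<void> {
  if (!started.has(name)) {
    const task = tasks[name];
    // Promise.all gives us the "wait for every dependency" primitive;
    // memoizing in `started` makes shared dependencies run only once.
    started.set(
      name,
      Promise.all(task.deps.map((dep) => runTask(dep))).then(() => task.run())
    );
  }
  return started.get(name)!;
}

runTask("package").then(() => console.log("done"));
```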
Want to run tasks on remote machines? Simply *waves hands* make a task that runs ssh.

I've investigated this idea in the past. It's an obvious one but still good to have an article about it, and I'd not heard of Taskcluster so that's cool.
My conclusion was that this is near 100% a design taste and business model problem. That is, to make progress here will require a Steve Jobs of build systems. There's no technical breakthroughs required but a lot of stuff has to gel together in a way that really makes people fall in love with it. Nothing else can break through the inertia of existing practice.
Here are some of the technical problems. They're all solvable.
• Unifying local/remote execution is hard. Local execution is super fast. The bandwidth, latency and CPU speed issues are real. Users have a machine on their desk that compared to a cloud offers vastly higher bandwidth, lower latency to storage, lower latency to input devices and if they're Mac users, the fastest single-threaded performance on the market by far. It's dedicated hardware with no other users and offers totally consistent execution times. RCE can easily slow down a build instead of speeding it up and simulation is tough due to constantly varying conditions.
• As Gregory observes, you can't just do RCE as a service. CI is expected to run tasks devs aren't trusted to do, which means there has to be a way to prove that a set of tasks executed in a certain way even if the local tool driving the remote execution is untrusted, along with a way to prove that to others. As Gregory explores the problem he ends up concluding there's no way to get rid of CI and the best you can do is reduce the overlap a bit, which is hardly a compelling enough value prop. I think you can get rid of conventional CI entirely with a cleverly designed build system, but it's not easy.
• In some big ecosystems like JS/Python there aren't really build systems, just a pile of ad-hoc scripts that run linters, unit tests and Docker builds. Such devs are often happy with existing CI because the task DAG just isn't complex enough to be worth automating to begin with.
• In others like Java the ecosystem depends heavily on a constellation of build system plugins, which yields huge levels of lock-in.
• A build system task can traditionally do anything. Making tasks safe to execute remotely is therefore quite hard. Tasks may depend on platform specific tooling that doesn't exist on Linux, or that only exists on Linux. Installed programs don't helpfully offer their dependency graphs up to you, and containerizing everything is slow/resource intensive (also doesn't help for non-Linux stuff). Bazel has a sandbox that makes it easier to iterate on mapping out dependency graphs, but Bazel comes from Blaze which was designed for a Linux-only world inside Google, not the real world where many devs run on Windows or macOS, and kernel sandboxing is a mess everywhere. Plus a sandbox doesn't solve the problem, only offers better errors as you try to solve it. LLMs might do a good job here.
But the business model problems are much harder to solve. Developers don't buy tools, only SaaS, but they also want to be able to do development fully locally. Because throwing a CI system up on top of a cloud is so easy it's a competitive space and the possible margins involved just don't seem that big. Plus, there is no way to market to devs that has a reasonable cost. They block ads, don't take sales calls, and some just hate the idea of running proprietary software locally on principle (none hate it in the cloud), so the only thing that works is making clients open source, then trying to saturate the open source space with free credits in the hope of gaining attention for a SaaS. But giving compute away for free comes at staggering cost that can eat all your margins. The whole dev tools market has this problem far worse than other markets do, so why would you write software for devs at all? If you want to sell software to artists or accountants it's much easier.
Fiefdoms. Old as programming. Always be on the lookout for people who want to be essential rather than useful.
(2021)
The issue that I see is that "Continuous integration" is the practice of frequently merging to main.
Continuous: do it often, daily or more often
Integration: merging changes to main
He's talking about build tools, which are a _support system_ for actual CI, but are not a substitute for it. These systems allow you to Continuously integrate, quickly and safely. But they aren't the thing itself. Using them without frequent merges to main is common, but isn't CI. It's branch maintenance.
Yes, semantic drift is a thing, but you won't get the actual benefits of the actual practice if you do something else.
If you want to talk "misdirected CI", start there.