Build system
Dec. 12th, 2024 05:46 pm
Stupid tech questions.
At work the source tree has a bunch of different components. A couple are "big": the FPGA image, and the whole device image which includes it, plus a linux distro which includes some of the other components as well. The rest are mostly "small": individual C/C++ programs or python packages.
The small components are mostly managed by the Conan package manager. This is ever so useful for ensuring that each one explicitly lists its dependencies, rather than just pulling in lots of header and source files from elsewhere in the source tree. But we are not using most of the functionality of a package manager -- everything in the source tree is assumed to be compatible, and nothing ever depends on an earlier version of anything else.
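(For context, each small component has a Conan recipe along these lines -- a rough sketch with made-up names, not our actual recipe:)

    from conan import ConanFile

    class FrobnicatorConan(ConanFile):
        # Hypothetical component; names invented for illustration.
        name = "frobnicator"
        version = "0.1"
        settings = "os", "compiler", "build_type", "arch"

        def requirements(self):
            # Every dependency has to be declared here; nothing can
            # quietly include headers from elsewhere in the source tree.
            self.requires("libwidget/0.1")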
In effect the package manager is (a) an easy way to install the latest build on a different computer, as is done during the automated tests which interface with real hardware, and (b) a convenient way of caching results between builds. This doesn't matter much for most of the components, but it does matter for the FPGA image, which takes 40 minutes to build and is usually downloaded from Jenkins rather than built fresh.
What I don't like is that there's no way to say "build everything necessary". You need to know all the individual components to build, or the sequence of Jenkins jobs to trigger. And as more components have been added, it's become less convenient to rebuild all the ones which could have been affected by a change, and to be sure you haven't missed any.
The other concern is versioning. At the moment, packages built locally or from main are usually identified by a "release" number, but that number is not always distinct between builds. Other builds identify packages by a git hash, which we would like to use in more places, but a hash alone doesn't make it clear which build is "the latest".
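(One half-formed idea, assuming nothing about our actual tooling: derive the identifier from git describe, which gives both an ordering and a hash -- e.g. v1.2-14-gabc1234 means 14 commits past tag v1.2, at commit abc1234.)

    import subprocess

    def package_version() -> str:
        # e.g. "v1.2-14-gabc1234", or "...-dirty" with uncommitted
        # changes: ordered by tag distance, pinned to an exact commit.
        return subprocess.run(
            ["git", "describe", "--tags", "--always", "--dirty"],
            check=True, capture_output=True, text=True,
        ).stdout.strip()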
This is not at all how we want it to be! It was put together for valid reasons, but not planned in advance.
What I want is... a way of building all the different components that need to be built, with some (but not necessarily complete) ability to figure out prerequisites, or "which things have changed". Which is effectively... a build system. It feels like it should be obvious what build system to use, but I'm not finding it obvious yet!
Like, that could be Make. But one requirement is that it's easy to say "the linux/windows version of this python package depends on the linux/windows version of this C++ package" and "the fragle configuration of the device image depends on the blah configuration of the fpga", and Make doesn't easily support that sort of parameterised target. I think?
I could write a simple script to "build everything". I quite like that. But... surely writing a NEW build system isn't correct?
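(A minimal sketch of what I mean, with invented component names and a hand-maintained dependency table -- note the targets are (component, variant) pairs, which is exactly the parameterisation that's awkward in Make:)

    import subprocess
    from graphlib import TopologicalSorter  # stdlib since Python 3.9

    # Hand-maintained dependency table; each key depends on its values.
    # "fragle" and "blah" as above; the other names are made up.
    DEPS = {
        ("cpp-lib", "linux"): set(),
        ("cpp-lib", "windows"): set(),
        ("py-tool", "linux"): {("cpp-lib", "linux")},
        ("py-tool", "windows"): {("cpp-lib", "windows")},
        ("fpga", "blah"): set(),
        ("device-image", "fragle"): {("fpga", "blah")},
    }

    def build(component, variant):
        # Placeholder: each component keeps its existing command line.
        subprocess.run(["./build.sh", component, variant], check=True)

    if __name__ == "__main__":
        # static_order() yields dependencies before the things needing them.
        for component, variant in TopologicalSorter(DEPS).static_order():
            build(component, variant)

That's a few dozen lines before it grows any "has this changed?" logic, which is exactly the point where it starts being a real build system and I get nervous.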
We could use one of the systems we already have, like Conan. But Conan seems optimised for being a package manager that installs prerequisites, not for "rebuilding these directories". Bitbake seems very well designed for this sort of thing, but it's aimed at building embedded linux systems and is probably more heavyweight than we want. And at the moment each component is built from its own command line; we don't really want one Conan build invoking other Conan builds.
It might be simpler if we moved away from packaging altogether and used the old school approach: "The outputs live HERE in the source tree. You use the ones in the source tree, and build them if necessary." But I'm not sure.
There are a bunch of approaches which would be fine but maybe not great. It feels like there must be some standard answer, but I'm not sure what it is. Thoughts?