@MartinK, I was thinking along the same lines and wanted to try. Last week I got the maps stack compiled for Fedora on OBS (https://build.opensuse.org/project/show/home:rinigus:maps) using the same RPM SPECs as for Sailfish. So it is just a matter of adding an SFOS target to test.
From a legal POV, it seems that SUSE expects it to be used by open-source projects. As to whether we can compile against closed-source bits, I don't know. I would have guessed that we can compile our OSS against SFOS bits, but see https://en.opensuse.org/openSUSE:Build_Service_application_blacklist .
There is a difference in the use of sb2, which SFOS OBS relies on but SUSE does not. Still, we have a large fraction of the Nemo stack compiled at SUSE against Fedora 32 (for ARM as well). Let's see how it works on devices.
From the last month's experience of using SUSE OBS, I can see that it is significantly loaded with many projects. In particular, ARM 32-bit does not have many worker nodes and you can end up waiting for a while. Aarch64 is OK; x86_64 sometimes has longer delays with OBS repository state recalculations. All in all, it is noticeably slower than SFOS OBS.
I just did a Docker-based CI/CD evaluation using the public Drone.io build service to create a build pipeline for @rinigus' OSM Scout Server components.
My experiences from a two-day run are:
OBS SPEC files need to be slightly modified for a build pipeline using the SailfishOS Docker SDK by @coderus. The relevant parts are downloading the git repository instead of using tarballs, plus occasional SDK-specific fixes for the compiler toolchain, e.g. one package tried to use CC instead of GCC (the mapnik Python helper script) and needed to be patched, and so on. You usually have to verify the build process SPEC by SPEC (see the pipeline sketch after this list).
The public Drone.io build service (cloud.drone.io) aborts a pipeline after one hour of usage. Long build processes like mapnik will fail because of this. I examined the SPEC file and modified the build and install sections to speed up compilation (scons.py was invoked with -j1 instead of a higher degree of parallelism). So again the SPEC file had to be modified, this time to bring build times down.
Serial builds for applications with many packages must be parallelized because of the one-hour-per-pipeline build time limit on the standard Drone.io setup.
There is a lot of redundancy, as the SailfishOS SDK must be set up with the required packages for every pipeline. This can, however, be sped up by providing a custom SDK base Docker image with all the needed packages pre-installed.
Artifacts must be stored externally, as there is no shared storage between pipelines. Steps within a pipeline do share the same storage (workspace), but keep the one-hour build time limit in mind.
Drone.io's plugins are a bit wonky, especially the build cache plugins for SFTP or S3 cache/sync. I ended up using the rclone Docker image to copy artifacts back and forth to S3 storage.
Parallelism must be hand-crafted. This is very cumbersome when building a large project with a lot of dependencies that must be built and satisfied.
You will end up with a huge YAML file, as build pipelines cannot be evaluated at runtime the way Concourse CI allows.
Compilation of the 11 packages required for OSM Scout Server took 25 minutes on cloud.drone.io, including the release upload to GitHub.
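For illustration, here is a minimal .drone.yml sketch of the kind of pipeline described above. This is a hedged sketch, not the exact POC configuration: the SDK image tag, the repository URL, the mb2 target name and the S3 bucket are all assumptions.

```yaml
kind: pipeline
type: docker
name: build-armv7hl

steps:
  - name: build
    # Platform SDK Docker image by @coderus; the exact image name/tag is assumed.
    image: coderus/sailfishos-platform-sdk:3.3.0.16
    commands:
      # Clone the sources instead of using a release tarball, as the modified SPEC expects.
      - git clone --depth 1 https://github.com/example/some-package.git src
      # mb2 drives rpmbuild against the cross-compilation target shipped in the SDK image.
      - cd src && mb2 -t SailfishOS-3.3.0.16-armv7hl build

  - name: upload-artifacts
    # Pipelines do not share storage, so the resulting RPMs are pushed to external
    # S3 storage with the rclone image (remote configured via environment variables).
    image: rclone/rclone
    environment:
      RCLONE_CONFIG_S3_TYPE: s3
      RCLONE_CONFIG_S3_ACCESS_KEY_ID:
        from_secret: s3_access_key
      RCLONE_CONFIG_S3_SECRET_ACCESS_KEY:
        from_secret: s3_secret_key
    commands:
      - rclone copy src/RPMS s3:build-cache/some-package
```

Multiply a variation of this by every package that has to be built, and the YAML grows quickly.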
Drone.io is a nice public build service for smaller SailfishOS applications if there are not too many dependencies that need to be built. Otherwise you will end up in YAML hell and face build timeouts unless you parallelize massively. On the other hand, Drone.io is about as simple as it gets for putting CI/CD in place for your SailfishOS applications. For building a Linux distribution and a lot of dependent RPMs, it is not.
So, if I understand correctly, you have to hand-craft the sequence in which the compilation is done for each of the projects if you want to use that CI service? Something that is done for you when you use OBS. And, as you demonstrate, it is far from trivial in the case of OSM Scout Server. A similar problem will occur with the ports and with anything complicated that we would like to build for SFOS.
That's it. There are a lot of nuts and bolts missing when it comes to dependency solving and RPM repository deployment. To the best of my knowledge, there is simply no tool on the market that takes RPM SPEC files, queries public RPM repositories to analyse dependencies (which packages are already in a repository and can be skipped, which need to be built, and which are missing) and converts everything into a big .drone.yml pipeline, keeping things highly parallelized using some dependency graph magic.
In an ideal world you would simply put your SPECs into a build dir and launch a Drone.io converter that uses the public Jolla repos, checks the available packages against each SPEC, and knows which packages need to be installed or, if missing, built from another SPEC file. In the end everything would be uploaded into a cache repo so that zypper can use its own package resolving logic, instead of the missing RPMs being installed manually as I did in the POC.
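To make the idea concrete, here is a hedged sketch of the kind of multi-pipeline output such a converter could emit, using Drone's pipeline-level depends_on to encode the dependency graph. The helper scripts and the image tag are hypothetical; libpostal and geocoder-nlp just stand in for a dependent pair of packages.

```yaml
# Hypothetical converter output: independent packages build in parallel,
# dependent ones are ordered via pipeline-level depends_on.
kind: pipeline
type: docker
name: libpostal

steps:
  - name: build
    image: coderus/sailfishos-platform-sdk:3.3.0.16   # assumed SDK image
    commands:
      - ./build-from-spec.sh rpm/libpostal.spec        # hypothetical helper

---
kind: pipeline
type: docker
name: geocoder-nlp

# Scheduled only after the libpostal pipeline has finished successfully.
depends_on:
  - libpostal

steps:
  - name: build
    image: coderus/sailfishos-platform-sdk:3.3.0.16
    commands:
      # Install the libpostal RPMs from the cache repo filled by the previous
      # pipeline, then build this package from its SPEC.
      - ./install-from-cache.sh libpostal              # hypothetical helper
      - ./build-from-spec.sh rpm/geocoder-nlp.spec
```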
But wait! Isn’t there a thing called OBS for that?
Another fun fact about rebuilding qmapboxgl using the OBS SPEC file (albeit slightly modified to clone the GitHub repository and check out the sfos branch) is that the OBS build is much smaller than the Docker-based build:
4.7 MB
[nemo@Sailfish lib]$ ls libqmap* -lah
-rwxr-xr-x 1 root root 4.7M Jun 7 23:00 libqmapboxgl.so
[nemo@Sailfish lib]$
We got a lot of good insights from the community in the forum on how OBS is used and why it is important. We have taken this input into account while doing our roadmaps, and will try to improve, for example, the SDK.
The plan is not yet ready to be shared, but we will share it well before making any major changes to the public infra. I expect that for the next couple of months OBS will run as usual, and any changes will be announced well in advance.
Please remember to consider the common: and native-common: repos used for community ports. I'd like to suggest that it could decrease the maintenance burden if the server were a replica (without repos) of whatever version of OBS is used internally.
@veskuh, thank you very much for the update. With the adoption of aarch64 and OBS running for a few more months, do you plan to fix the OBS aarch64 targets? As it is, we cannot build software for 4.0.1.x targeting aarch64, nor use OBS for porting to that arch. Which is a pity, as with more developers working on this adoption we could help iron out issues and move forward faster.
If we are going to keep the community OBS running for a longer time, then obviously we do need to spend some time on the necessary updates. The need for aarch64 support is noted, but it likely requires some extra effort compared to just keeping the current OBS setup running. I think our planning is starting to take shape, and we should be able to open it up for community comments in the near future.