OBS shut down and next steps

@nekron: thanks for the summary. A few small remarks regarding the pros:

Pros:

Docker-based CI guarantees clean builds

I am not sure in which cases OBS builds are not clean. Maybe you could provide an example?

Supports different resources for input and output (Github, Gist, S3 …) of source code and builds

In practice, OBS provides the _service facility, which lets you pull sources from git and other systems. Thus we don’t upload the sources ourselves; they are fetched on request from the source repository.
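For illustration, a minimal _service file could look like the sketch below (the repository URL and branch are placeholders, and the exact service names, e.g. tar_git, may differ between OBS instances):

cat > _service <<'EOF'
<services>
  <service name="tar_git">
    <param name="url">https://github.com/example/harbour-myapp.git</param>
    <param name="branch">master</param>
  </service>
</services>
EOF
# register the service file in the OBS package
osc add _service
osc commit -m "fetch sources from git via tar_git"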

CI scripts can be hosted inside the git repo (one source of truth also for the source code as you don’t need to clone a local copy into OBS)

As explained above, the OBS solution ends up doing the same if you use _service. Almost all packages I have seen use such automatic source fetching. A notable exception is the droid packages required for ports.

Build processes for independent RPMs can be parallelized if there are multiple worker nodes

This is done automatically by OBS.

Pipelines allow dependencies, i.e. if some staging job like a test case fails, the complete pipeline will stop

The same can be done with OBS by refusing to compose the RPM if the tests fail. In this case, all dependent packages will either use the older version (if one exists) or will not be compiled because the dependency cannot be resolved.
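As a hedged illustration (not taken from any particular package): a failing %check section in the SPEC aborts the whole build, so the RPM is never composed and dependents cannot resolve it.

%check
# any non-zero exit status here fails the OBS build,
# so no RPM is produced for dependent packages to pick up
make test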

5 Likes

I mean that pulling a Docker image and running the build within the container lets you start compiling on a green field, with no artifacts left on the build host from a previous build. OBS workers do the same, but launch QEMU VMs instead of an LXC container, which possibly consumes more resources.
Both ways (fresh VM or fresh container) generate a clean build, so this point is neutral in both cases.
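A rough sketch of the container variant, assuming @coderus' SailfishOS Platform SDK Docker image and the mb2 build tool (image name, mount path and build target are only examples):

# every run starts from the pristine image; --rm discards the container afterwards
docker run --rm \
    -v "$PWD:/home/nemo/src" -w /home/nemo/src \
    coderus/sailfishos-platform-sdk:latest \
    mb2 -t SailfishOS-latest-armv7hl build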

OK, I didn’t know that, as the OBS repos I investigated had the source code as local tarballs.

Yep, modern CI/CD does this, too (Concourse-CI).

So basically OBS == CI/CD++ :wink:

Let’s see what solution Jolla will offer as an OBS replacement for complex builds, if anything equal to OBS is even possible.

2 Likes

Huge thanks for these examples. Automatic changelog and release scripts were one of my bigger gripes about SFOS app development. I think this could be a separate topic on its own? Any idea if OpenRepos supports some kind of release API like GitHub?

It would be nice to have a script that automatically creates a release on GitHub with a changelog (combining what I already have with your example should be sufficient) and then creates a release on OpenRepos.
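The GitHub half is already scriptable against the release API; a hedged sketch (OWNER/REPO, $VERSION and $GITHUB_TOKEN are placeholders, and it assumes the changelog has already been generated into changelog.md):

# create a GitHub release whose body is the generated changelog
jq -n --arg tag "$VERSION" --arg body "$(cat changelog.md)" \
   '{tag_name: $tag, name: $tag, body: $body}' |
curl -s -X POST \
     -H "Authorization: token $GITHUB_TOKEN" \
     -H "Accept: application/vnd.github.v3+json" \
     -d @- "https://api.github.com/repos/OWNER/REPO/releases"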

Some kind of API was available for Harmattan applications during the appsformeego migration.
No automation API for creating packages is available at this time.
There was a client, but it is outdated by now.

4 Likes

What’s OBS? Could someone please spell out the acronym, and if possible also provide a link to the announcement referenced, or provide some other context making it possible to understand exactly what this thread is about?

My best hypothesis is that O stands for Openrepos, but I have no candidates for the other two letters.

Reading this should answer all your questions. The announcement topic is linked in the first post above, btw. (although it’s not that discoverable…)

2 Likes

https://sailfishos.org/wiki/Open_Build_Service

3 Likes

I would just chime in to suggest a slight variation on the “Docker running on a CI/CD” solution discussed above.

Background: I have experience in building packages: old-school RPMs (bare-bones rpmbuild, chroot containers with mock), older RPM build systems (Koji), more modern build systems (to me, OBS is what Koji should have been, but I never managed to get my then employer to switch), and end-user packaging systems (such as conda: I am mostly contributing to bioconda at work nowadays).

Conda has an interesting approach that works with channels: semi-centralized repositories of multiple recipes.

That would be a way to address the complaints of @nekron:

In the end it comes to the situation that community will have a fragmented build and repository space for non-harbour compliant applications.

Instead, we could imagine that some communities, e.g. openrepos, maintain a giant collection of SPEC recipes, and the CI/CD scripts of that repository handle the recursive compilation of all recipes (or at least all the new ones in a commit/pull request, etc.). When a project necessitates multiple packages, they would just be seen as multiple SPECs in the same channel.
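A very rough sketch of that per-commit logic (the layout and build command are placeholders; a real channel would also need the dependency ordering discussed further down in this thread):

# rebuild only the recipes touched since the previous commit
for spec in $(git diff --name-only HEAD~1 -- 'recipes/*/*.spec'); do
    echo "building $spec"
    rpmbuild -ba "$spec"   # or osc/mb2, depending on the toolchain
done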

Another interesting effect is that most packages will have their build-from-source recipe centralized in a few places. So if Jolla decides to introduce AArch64, it will be seen as just yet another target for which to build the recipes in a channel.

  • It’s not unlike AArch64 showing up as another possible target build arch in OBS.
  • It’s much better than the current “hunt every developer on openrepos, hope that they still pay attention to requests/messages, and pester them until they do yet another build on their local SDK/Docker and manually upload it” into which we are heading with the disappearance of OBS.

(I know that F-Droid also has this concept of “the repo is in charge of building the software, not each individual developer”, but I don’t have any dev experience there.)

4 Likes

Looks like there are many non-OpenSUSE targets on the OpenSUSE build service: https://en.opensuse.org/openSUSE:Build_Service_supported_build_targets

Fedora, even RHEL - I wonder if the Sailfish OS build target could somehow be included as well?
Or are there reasons this would not be possible? Proprietary bits in the SFOS SDK come to mind, as well as possibly builder resources, I guess?

5 Likes

@MartinK, I was thinking along the same lines and wanted to try. Last week I got the maps stack compiled for Fedora on OBS (https://build.opensuse.org/project/show/home:rinigus:maps) using the same RPM SPECs as for Sailfish. So, it is a matter of adding an SFOS target to test.

Looking at the Fedora target, the META (https://build.opensuse.org/projects/Fedora:32/meta) seems to be doable for SFOS as well. The project config is a bit more complicated (https://build.opensuse.org/projects/Fedora:32/prjconf). The corresponding SFOS variant has a significant amount of settings at https://build.merproject.org/project/prjconf/sailfishos:3.3.0.16
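For reference, both pieces can be edited per project with osc (project name is a placeholder):

osc meta prj -e home:USER:maps        # repositories/architectures, i.e. the META above
osc meta prjconf -e home:USER:maps    # build macros, preinstall lists, etc.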

From a legal POV, it seems that SUSE expects it to be used by open-source projects. As to whether we can compile against closed-source bits, I don’t know. I would have guessed that we can compile our OSS against SFOS bits, but see https://en.opensuse.org/openSUSE:Build_Service_application_blacklist .

There is a difference in the use of sb2 (SFOS OBS), which is not used by SUSE. Still, we have a large fraction of the Nemo stack compiled at SUSE against Fedora 32 (for ARM as well). Let’s see how it works on devices.

From the last month’s experience of using SUSE OBS, I can see that it is significantly loaded with many projects. In particular, ARM 32-bit does not have many worker nodes and you could end up waiting for a while there. AArch64 is OK; x86_64 sometimes has longer delays with OBS repository state recalculations. All in all, it is slower than SFOS OBS, and you do notice it.

5 Likes

Using GitLab Pages, you can already roll and host your own repositories pretty easily on GitLab. I have done it for Debian and Arch packages.

Here are my CI template configurations: https://gitlab.com/nobodyinperson/ci-templates (especially reprepro and pacman-repo)

Here is an example of a single project that builds and hosts my personal Arch packages on GitLab: https://gitlab.com/nobodyinperson/repository

Here is an example of a single project that builds and hosts a Debian repository: https://gitlab.com/nobodyinperson/sshtunnel

This should be quickly adaptable to RPM repos as well.
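For RPM, the Pages job would boil down to something like this sketch (directory names are examples):

# collect the RPMs built in earlier jobs and generate the repo metadata
mkdir -p public
cp output/*.rpm public/
createrepo_c public/
# the resulting Pages URL can then be added on-device as a zypper/ssu repository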

And BTW, for (Python/)QML-only SailfishOS apps, you don’t even need the SDK and can build the RPM in GitLab CI yourself, as for my Hasher app: https://gitlab.com/nobodyinperson/harbour-hasher
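A hedged sketch of that SDK-less path for a noarch app (package name, version and spec path are placeholders):

# any stock Linux CI image with rpm-build is enough for a pure QML/Python app
mkdir -p ~/rpmbuild/SOURCES
git archive --prefix=harbour-myapp-1.0/ \
    -o ~/rpmbuild/SOURCES/harbour-myapp-1.0.tar.gz HEAD
rpmbuild -bb rpm/harbour-myapp.spec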

3 Likes

I just did a Docker-based CI/CD evaluation using the public Drone.io build service to create a build pipeline for @rinigus’ OSM Scout Server components.

My experiences from a two-day run are:

  • OBS SPEC files need to be slightly modified for a build pipeline using the SailfishOS Docker SDK by @coderus. The relevant parts are downloading the git repository instead of using tarballs, plus sometimes SDK-specific fixes for the compiler toolchain, e.g. one package tried to use CC instead of GCC (the mapnik Python helper script) and needed to be patched, and so on. You usually have to verify the build process SPEC by SPEC.
  • The public Drone.io build service (cloud.drone.io) aborts a pipeline after one hour of usage. Long build processes like mapnik will fail because of this. I examined the SPEC file and modified build and install to speed up compilation (scons.py will use -j1 instead of a higher parallel build). Again the SPEC file had to be modified, this time because of build times.
  • Serial builds for applications with a lot of packages must be parallelized because of the one-hour-per-pipeline build time limit on the standard Drone.io setup.
  • There is a lot of redundancy, as for every pipeline the SailfishOS SDK must be set up with the required packages. This can, however, be sped up by providing a custom SDK base Docker image with all the needed packages pre-installed.
  • Artifacts must be stored externally, as there is no shared storage between pipelines. Steps do share the same storage (workspace), but keep the one-hour build time limit in mind.
  • Drone.io’s plugins are a bit wonky, especially the build cache plugins for SFTP or S3 cache/sync. I ended up using an rclone Docker image to copy artifacts back and forth to S3 storage.
  • Parallelism must be hand-crafted. This is very cumbersome when building a large project with a lot of dependencies that must be built and fulfilled.
  • You will end up with a huge YAML file, as build pipelines cannot be evaluated at runtime like Concourse CI does.
  • Compiling the 11 packages required for OSM Scout Server took 25 minutes on cloud.drone.io, including the release upload to GitHub.

Drone.io is a nice public build service for smaller SailfishOS applications if there are not too many dependencies that need to be built. Otherwise you will end up in YAML hell and face build timeouts unless you parallelize massively. On the other hand, Drone.io is as simple as it can be for putting CI/CD in place for your SailfishOS applications. For building a Linux distribution and a lot of dependent RPMs, it is not.

If you want to check for yourself, look here:

3 Likes

Here are the exact build times on cloud.drone.io:

Compared to OBS, the build is slightly faster, even with all the zypper install commands.

E.g. OSM Scout Server: 227 s (cloud.drone.io) vs. 426 s (OBS); valhalla: 848 s (cloud.drone.io) vs. 1193 s (OBS).

1 Like

So, if I understand correctly, you have to hand-craft the sequence in which the compilation is done for each of the projects if you want to use that CI service? Something that is done for you when you use OBS. And, as you demonstrate, it is far from trivial in the case of OSM Scout Server. A similar problem will occur with the ports and anything complicated that we would like to build for SFOS.

2 Likes

That’s it. There are a lot of nuts and bolts missing when it comes to dependency solving and RPM repository deployment. To the best of my knowledge, there is simply no tool on the market that takes RPM SPEC files, accesses public RPM repositories to analyse the dependencies (which packages are already found in a repository and can be skipped, which need to be built, and which are missing), and converts everything into a big .drone.yaml pipeline, keeping things highly parallelized using some dependency-graph magic.

In an ideal world, you would simply put your SPECs into a build dir and launch a Drone.io converter that uses the public Jolla repos, checks the available packages against the SPECs, and knows which packages need to be installed or, if missing, built from another SPEC file. In the end, everything is uploaded into a cache repo so zypper can use its own package-resolving logic instead of the missing RPMs being installed manually like I did in the POC.
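The raw ingredients for such a converter do exist, though; for instance, the build-time dependencies of every SPEC can be dumped with rpmspec and fed into a dependency graph (sketch only, paths are placeholders):

for spec in specs/*.spec; do
    echo "== $spec"
    rpmspec -q --buildrequires "$spec"   # one BuildRequires per line
done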

But wait! Isn’t there a thing called OBS for that?

Cheers,
Nek

6 Likes

Another funny fact about rebuilding qmapboxgl using the OBS SPEC file (albeit slightly modified to clone the GitHub repository and its sfos branch) is that the OBS build is much smaller than the Docker-based build:

4.7 MB :grinning:

[nemo@Sailfish lib]$ ls libqmap* -lah
-rwxr-xr-x    1 root     root        4.7M Jun  7 23:00 libqmapboxgl.so
[nemo@Sailfish lib]$

vs.

305 MB :thinking:

[nemo@Sailfish ~]$ rpm -qlv qmapbox.rpm
-rwxr-xr-x    1 root    root                305907968 Nov 11 20:37 /usr/lib/libqmapboxgl.so
[nemo@Sailfish ~]$

Looks like the whole lib was compiled statically using the SDK.

UPDATE:
OK, it looks like OBS is stripping the lib before packing the RPM. If I manually strip the .so file, it shrinks to 4.7 MB.
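For reference, roughly what that manual step looks like (path is an example):

strip --strip-unneeded usr/lib/libqmapboxgl.so
ls -lah usr/lib/libqmapboxgl.so    # back down to ~4.7M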

UPDATE 2:
Mhh… brp-strip is called and the SDK uses this script:

# Strip ELF binaries
for f in `find $RPM_BUILD_ROOT -type f \( -perm -0100 -o -perm -0010 -o -perm -0001 \) -exec file {} \; | \
        grep -v "^${RPM_BUILD_ROOT}/\?usr/lib/debug"  | \
        grep -v ' shared object,' | \
        sed -n -e 's/^\(.*\):[  ]*ELF.*, not stripped/\1/p'`; do
        $STRIP -g "$f" || :
done

That means that binaries will be stripped only if they are not shared objects. However, libqmapboxgl.so is a shared object file, so stripping would never happen.

On the other hand the OBS RPM contains a stripped .so:

[nemo@Sailfish ~]$ file /usr/lib/libqmapboxgl.so
/usr/lib/libqmapboxgl.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=53a35a42e1488c25544bb05bbd8dfa0f08b49577, stripped
[nemo@Sailfish ~]$

Is there some post-processing on OBS before the RPM is built, or does OBS use some patched brp-strip script that removes the grep -v “shared object”?

1 Like

@veskuh Any update on the plans for OBS? thx

7 Likes

We got a lot of good insights from the community in the forum on how OBS is used and why it is important. We’ve considered this input while doing our roadmaps, and will try to improve, for example, the SDK.

The plan is not yet ready to be shared, but we will be sharing it well in advance of making major changes to the public infra. I expect that for the next couple of months OBS will run as usual, and any changes would be announced well in advance.

12 Likes

Please remember to consider the common: and native-common: repos used for community ports. I’d like to suggest that it could decrease the maintenance burden if the server was a replica (without repos) of whatever version of OBS is used internally.

2 Likes

@veskuh, thank you very much for the update. With the adoption of aarch64 and OBS running for a few more months, do you plan to fix the OBS aarch64 targets? As it is, we cannot build software for 4.0.1.x targeting aarch64, nor use OBS for porting to that arch. Which is a pity, as with more developers working on this adoption, we could help iron out issues and move forward faster.

7 Likes