OBS shut down and next steps

What’s OBS? Could someone please spell out the acronym, and if possible also provide a link to the announcement referenced, or provide some other context making it possible to understand exactly what this thread is about?

My best hypothesis is that O stands for Openrepos, but I have no candidates for the other two letters.

Reading this should answer all your questions. The announcement topic is linked in the first post above, btw (although it’s not that discoverable…).

2 Likes

https://sailfishos.org/wiki/Open_Build_Service

3 Likes

I’d just like to chime in to suggest a slight variation on the “Docker running on a CI/CD” solution discussed above.

Background: I have experience in building packages: old-school RPMs (bare rpmbuild, and mock for chroot builds), older RPM build systems (Koji), more modern build systems (to me, OBS is what Koji should have been, but I never managed to get my then employer to switch), and end-user packaging systems (such as conda: these days I mostly contribute to bioconda at work).

Conda has an interesting approach based on channels: semi-centralized repositories of multiple recipes.

That would be a way to address the complaint of @nekron:

In the end it comes to the situation that the community will have a fragmented build and repository space for non-harbour-compliant applications.

Instead we could imagine that some communities, e.g. openrepos, maintain a giant collection of SPEC recipes, and the CI/CD scripts of that repository handle the recursive compilation of all recipes (or at least the new ones in a commit/pull request, etc.). When a project requires multiple packages, they are simply seen as multiple SPECs in the same channel.

Another interesting effect is that most packages will have their build-from-source recipes centralized in a few places. So if Jolla decides to introduce AArch64, it is seen as just yet another target for which to build the recipes in a channel.

  • It’s not unlike AArch64 showing up as another possible target build arch in OBS.
  • It’s much better than the current “hunt down every developer on openrepos, hope that they still pay attention to requests/messages, and pester them until they do yet another build on their local SDK/Docker and manually upload it”, which is where we are heading with the disappearance of OBS.

(I know that F-Droid also has this concept of “the repo is in charge of building the software, not each individual developer”, but I don’t have any dev experience there.)

4 Likes

Looks like there are many non-openSUSE targets on the openSUSE Build Service: https://en.opensuse.org/openSUSE:Build_Service_supported_build_targets

Fedora, even RHEL - I wonder if a Sailfish OS build target could somehow be included as well?
Or are there reasons this would not be possible? Proprietary bits in the SFOS SDK come to mind, as well as, I guess, builder resources.

5 Likes

@MartinK, I was thinking along the same lines and wanted to try. Last week I got the maps stack compiled for Fedora on OBS (https://build.opensuse.org/project/show/home:rinigus:maps) using the same RPM SPECs as for Sailfish. So, it is a matter of adding an SFOS target and testing.

Looking at the Fedora target, its META (https://build.opensuse.org/projects/Fedora:32/meta) seems doable for SFOS as well. The project config is a bit more complicated (https://build.opensuse.org/projects/Fedora:32/prjconf). The corresponding SFOS variant has a significant amount of settings at https://build.merproject.org/project/prjconf/sailfishos:3.3.0.16
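
For anyone wanting to compare, both configs can be dumped with the osc command-line client. A minimal sketch, assuming osc is installed and pointed at the openSUSE instance (the output file names are just placeholders):

# Dump an existing target's configs for comparison; -A selects the API server
osc -A https://api.opensuse.org meta prj Fedora:32 > fedora-meta.xml
osc -A https://api.opensuse.org meta prjconf Fedora:32 > fedora-prjconf.txt
# A hypothetical SFOS target would need equivalents of both files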

From a legal POV, it seems that SUSE expects the service to be used by open-source projects. Whether we can compile against closed-source bits, I don’t know. I would have guessed that we can compile our OSS against SFOS bits, but see https://en.opensuse.org/openSUSE:Build_Service_application_blacklist .

There is a difference in the use of sb2, which SFOS OBS relies on but SUSE does not use. Still, we have a large fraction of the Nemo stack compiled at SUSE against Fedora 32 (for ARM as well). Let’s see how it works on devices.

From the last month’s experience of using SUSE OBS, I can see that it is significantly loaded with many projects. In particular, ARM 32-bit does not have many worker nodes and you can end up waiting for a while. AArch64 is OK; x86_64 sometimes has longer delays with OBS repository state recalculations. All in all, it is slower than SFOS OBS, and you do notice it.

5 Likes

Using GitLab Pages you can already roll and host your own repositories pretty easily on GitLab. I have done it for Debian and Arch packages.

Here are my CI template configurations: https://gitlab.com/nobodyinperson/ci-templates (especially reprepro and pacman-repo)

Here is an example of a single project that builds and hosts my personal Arch packages on GitLab: https://gitlab.com/nobodyinperson/repository

Here is an example of a single project that builds and hosts a Debian repository: https://gitlab.com/nobodyinperson/sshtunnel

It should be quickly adaptable to RPM repos as well; a sketch of the core steps follows.
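
For RPM, the gist of such a Pages job would be to collect the built RPMs and generate repo metadata. A minimal sketch, assuming createrepo_c is available in the CI image and that GitLab Pages serves the public/ directory (paths are placeholders):

# Collect RPMs and generate the repodata/ metadata that zypper/dnf consume
mkdir -p public
cp rpmbuild/RPMS/*/*.rpm public/
createrepo_c public/
# On a device, the repo could then be added with e.g.:
#   ssu ar myrepo https://<user>.gitlab.io/<project>/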

And BTW, for (Python/)QML-only SailfishOS apps you don’t even need the SDK and can build the RPM in GitLab CI yourself (rough sketch below), like for my Hasher app: https://gitlab.com/nobodyinperson/harbour-hasher
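
For such noarch packages, the build step can be plain rpmbuild in any RPM-based CI image. A rough sketch, assuming a stock Fedora image and a hypothetical spec file at rpm/harbour-myapp.spec:

# No cross-compilation is needed for QML/Python-only apps, so plain rpmbuild works
dnf install -y rpm-build
rpmbuild -bb rpm/harbour-myapp.spec \
    --define "_sourcedir $PWD" --define "_rpmdir $PWD/RPMS"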

3 Likes

I just did a Docker-based CI/CD evaluation using the public Drone.io build service to create a build pipeline for @rinigus’ OSM Scout Server components.

My experiences from a two-day run are:

  • OBS SPEC files need to be slightly modified for a build pipeline using the SailfishOS Docker SDK by @coderus (see the sketch after this list). The relevant parts are downloading the git repository instead of using tarballs, plus occasional SDK-specific fixes for the compiler toolchain, e.g. one package (the mapnik python helper script) tried to use CC instead of GCC and needed to be patched, and so on. You usually have to verify the build process SPEC by SPEC.
  • The public Drone.io build service (cloud.drone.io) aborts a pipeline after one hour. Long build processes like mapnik will fail because of this. I examined the SPEC file and modified %build and %install to speed up compilation (scons.py was using -j1 instead of a higher level of parallelism). Once again, the SPEC file had to be modified, this time to cut build time.
  • Serial builds for applications with a lot of packages must be parallelized because of the one-hour build time limit per pipeline on the standard Drone.io setup.
  • There is a lot of redundancy, as the SailfishOS SDK must be set up with the required packages for every pipeline. This can however be sped up by providing a custom SDK base Docker image with all the needed packages pre-installed.
  • Artifacts must be stored externally, as there is no shared storage between pipelines. Steps within a pipeline do share the same storage (workspace), but keep the one-hour build time limit in mind.
  • Drone.io’s plugins are a bit wonky, especially the build cache plugins for SFTP or S3 cache/sync. I ended up using the rclone Docker image to copy artifacts to and from S3 storage.
  • Parallelism must be hand-crafted. This is very cumbersome when building a large project with a lot of dependencies that must be built and satisfied.
  • You will end up with a huge YAML file, as build pipelines cannot be evaluated at runtime the way Concourse CI does.
  • Compilation of the 11 packages required for OSM Scout Server took 25 minutes on cloud.drone.io, including the release upload to GitHub.
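
To give an idea, a single build step in such a pipeline boils down to something like the following. This is only a sketch: the image name, tag, and mb2 flags are quoted from memory and should be verified against the Docker SDK documentation.

# One hand-crafted build step: run mb2 inside the community SDK image
docker run --rm -v "$PWD:/src" -w /src \
    coderus/sailfishos-platform-sdk:3.3.0.16 \
    mb2 -t SailfishOS-3.3.0.16-armv7hl -s rpm/harbour-app.spec build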

Drone.io is a nice public build service for smaller SailfishOS applications, as long as there are not too many dependencies that need to be built. Otherwise you will end up in YAML hell and face build timeouts unless you parallelize massively. On the other hand, Drone.io is about as simple as CI/CD for your SailfishOS applications can get. For building a Linux distribution and a lot of dependent RPMs, it is not the right tool.

If you want to check for yourself, look here:

3 Likes

Here are the exact build times on cloud.drone.io:

Compared to OBS, the cloud.drone.io build is slightly faster, even with all the zypper install commands included.

E.g. OSM Scout Server: 227 s on cloud.drone.io vs. 426 s on OBS; valhalla: 848 s on cloud.drone.io vs. 1193 s on OBS.

1 Like

So, if I understand correctly, you have to hand-craft the sequence in which the compilation is done for each project if you want to use that CI service? Something that is done for you when you use OBS. And, as you demonstrate, it is far from trivial in the case of OSM Scout Server. A similar problem will occur with the ports and anything complicated that we would like to build for SFOS.

2 Likes

That’s it. There are a lot of nuts and bolts missing when it comes to dependency solving and RPM repository deployment. To the best of my knowledge, there is simply no tool on the market that takes RPM SPEC files, queries public RPM repositories to analyse dependencies (which packages are already in a repository and can be skipped, which need to be built, and which are missing), and converts everything into a big .drone.yml pipeline, keeping things highly parallelized using some dependency-graph magic.

In an ideal world you would simply put your SPECs into a build dir and launch a Drone.io converter that uses the public Jolla repos, checks the available packages against the SPECs, and knows which packages need to be installed or, if missing, built from another SPEC file. In the end everything is uploaded into a cache repo so that zypper can use its own package-resolving logic instead of the missing RPMs being installed manually like I did in the POC.
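
The build-order half of that “dependency graph magic” is at least old Unix territory. A toy sketch using tsort(1), under the simplifying assumption that BuildRequires entries match the SPEC basenames (real SPECs use pkgconfig(...)-style provides, so a real converter would have to map provides to packages):

# Emit "dependency package" pairs from the SPECs; tsort prints a build order
# in which every package comes after its build dependencies.
for spec in *.spec; do
    pkg=${spec%.spec}
    echo "$pkg $pkg"    # identical pair registers the node without an edge
    grep '^BuildRequires:' "$spec" | awk -v p="$pkg" '{print $2, p}'
done | tsort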

But wait! Isn’t there a thing called OBS for that?

Cheers,
Nek

6 Likes

Another funny fact about rebuilding qmapboxgl using the OBS SPEC file (albeit slightly modified to clone the GitHub repository and check out the sfos branch): the OBS build is much smaller than the Docker-based build:

4.7 MB :grinning:

[nemo@Sailfish lib]$ ls libqmap* -lah
-rwxr-xr-x    1 root     root        4.7M Jun  7 23:00 libqmapboxgl.so
[nemo@Sailfish lib]$

vs.

305 MB :thinking:

[nemo@Sailfish ~]$ rpm -qlv qmapbox.rpm
-rwxr-xr-x    1 root    root                305907968 Nov 11 20:37 /usr/lib/libqmapboxgl.so
[nemo@Sailfish ~]$

Looks like the whole lib was compiled statically using the SDK.

UPDATE:
OK, it looks like OBS strips the lib before packing the RPM. If I manually strip the .so file, it shrinks to 4.7 MB.
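
The manual experiment was roughly the following (a sketch from memory):

file libqmapboxgl.so                     # reports "not stripped" for the Docker build
strip --strip-unneeded libqmapboxgl.so   # drop symbols not needed at runtime
ls -lah libqmapboxgl.so                  # back down to ~4.7M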

UPDATE 2:
Hmm… brp-strip is called, and the SDK uses this script:

# Strip ELF binaries
for f in `find $RPM_BUILD_ROOT -type f \( -perm -0100 -o -perm -0010 -o -perm -0001 \) -exec file {} \; | \
        grep -v "^${RPM_BUILD_ROOT}/\?usr/lib/debug"  | \
        grep -v ' shared object,' | \
        sed -n -e 's/^\(.*\):[  ]*ELF.*, not stripped/\1/p'`; do
        $STRIP -g "$f" || :
done

That means that binaries will be stripped unless they are shared objects. However, libqmapboxgl.so is a shared object file, so stripping should never happen.

On the other hand, the OBS RPM contains a stripped .so:

[nemo@Sailfish ~]$ file /usr/lib/libqmapboxgl.so
/usr/lib/libqmapboxgl.so: ELF 32-bit LSB shared object, ARM, EABI5 version 1 (GNU/Linux), dynamically linked, BuildID[sha1]=53a35a42e1488c25544bb05bbd8dfa0f08b49577, stripped
[nemo@Sailfish ~]$

Is there some post-processing on OBS before the RPM is built, or does OBS use a patched brp-strip script with the grep -v “shared object” filter removed?

1 Like

@veskuh Any update on the plans for OBS? thx

7 Likes

We got a lot of good insights from the community in this forum on how OBS is used and why it is important. We have considered this input while doing our roadmaps, and will try to improve, for example, the SDK.

The plan is not yet ready to be shared, but we will share it well in advance of any major changes to the public infra. I expect that for the next couple of months OBS will run as usual, and any changes will be announced well in advance.

12 Likes

Please remember to consider the common: and native-common: repos used for community ports. I’d like to suggest that it could decrease the maintenance burden if the server were a replica (without repos) of whatever version of OBS is used internally.

2 Likes

@veskuh, thank you very much for the update. With the adoption of aarch64 and OBS running for a few more months, do you plan to fix the OBS aarch64 targets? As it is, we can neither build software for 4.0.1.x targeting aarch64 nor use OBS for porting to that arch. Which is a pity: if more developers were working on this adoption, we could help iron out issues and move forward faster.

7 Likes

@veskuh, ping regarding aarch64 support on OBS. See the question above.

1 Like

If we are going to keep the community OBS running for a longer time, then obviously we need to spend time on some necessary updates. The need for aarch64 support is noted, but it likely takes some extra effort compared to just keeping the OBS setup running. I think our planning is starting to take shape, and we should be able to open it up for community comments in the near future.

2 Likes

@veskuh, thank you very much for sharing it! I am sure we all look forward to your plan and will be happy to discuss it constructively.

4 Likes

Community OBS - Refurbished and re-floated :sunglasses:

8 Likes