OBS shutdown and next steps

Yeah, I have personally always used gitlab-ci for my apps, but from what I know, porters benefited from OBS a lot more, and for them gitlab-ci/GitHub Actions is probably not an option. (You can also simplify the gitlab-ci example a bit more if you don’t build both architectures in one go, although that may use more CI minutes.)
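
For illustration, a minimal sketch of what splitting the two architectures into separate jobs could look like (untested; the image name, tag and mb2 invocation are assumptions rather than a copy of my actual setup):

```yaml
# .gitlab-ci.yml sketch: one job per architecture instead of one combined job.
# Image name/tag and target names are assumptions, adjust to your setup.
.build-template: &build
  image: coderus/sailfishos-platform-sdk:3.3.0.16
  script:
    - mkdir -p output
    - mb2 -t "$SFOS_TARGET" build
    - cp RPMS/*.rpm output/
  artifacts:
    paths:
      - output/

build-armv7hl:
  <<: *build
  variables:
    SFOS_TARGET: SailfishOS-3.3.0.16-armv7hl

build-i486:
  <<: *build
  variables:
    SFOS_TARGET: SailfishOS-3.3.0.16-i486
```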

1 Like

As described above by @r0kk3rz, @piggz, and myself: this is not about simpler apps (in terms of dependencies), where Docker or the current SDK would work fine. Closing OBS will hit complex projects such as:

  • further development of the platform itself
  • ports
  • complex apps relying on many libraries

These will not be helped much by Docker, as we will lose the tracking of dependencies and the publishing of everything in repos.

9 Likes

One needs to create local CI scripts. Repos can be added to ssu/zypper via the file:// scheme from a local folder. This adds a lot of hassle, but it is doable. If you decide to keep SFOS development and not give up on this platform :slight_smile:
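
To make that concrete, a rough sketch of the file:// approach (paths and repo name are examples; createrepo_c has to be available wherever the index files are generated):

```sh
# on the build host (or device): generate repo metadata for the locally built RPMs
createrepo_c /home/nemo/rpms          # creates /home/nemo/rpms/repodata/

# on the device: register the folder as a repository and install from it
ssu addrepo local-rpms file:///home/nemo/rpms
ssu enablerepo local-rpms
pkcon refresh                         # or: zypper ref && zypper install <package>
```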

3 Likes

Yes, that is true, complex projects and ports may require some other solution.

Edit: I hope the GitLab package registry will some day gain RPM support: https://gitlab.com/gitlab-org/gitlab/-/issues/5932

The same applies to the GitHub package registry: https://github.com/features/packages

One needs to create local CI scripts. Repos can be added to ssu/zypper via the file:// scheme from a local folder. This adds a lot of hassle, but it is doable.

So, in principle, that means reproducing OBS with those scripts.

If you decide to keep SFOS development and not give up on this platform

Time will tell. Right now the situation does not inspire me, as an open source developer, to invest time into it. So, for now I prefer to work on other projects and see how the OBS situation gets resolved.

@skvark: let’s see whether that GitLab RPM repo will ‘some day’ match our needs.

3 Likes

To sum up the problem:

  1. OBS can be seen as a CI system
  2. OBS provides an RPM package repository (as CD)
  3. Building of RPMs is done via RPM SPEC files that optionally reference dependencies from the custom-built RPM repository; rebuilding a package will trigger a rebuild of all packages that depend on it.
  4. You can choose different build targets (ARM, amd64)
  5. You can choose different releases (SFOS 3.3.x, 3.2.x)

Replacing this with an alternative solution would mean providing two components: a CI/CD system and a repository service.

RE 1:
There are many CI/CD tools around (Jenkins, Travis, Drone, Concourse CI). The build pipeline per package could be scripted into CI files, i.e. downloading the coderus Docker image, pulling the source code from GitHub as a resource, building within the Docker environment and pushing the final RPMs into a repository resource.
However, the build job order must be resolved by the developers themselves, i.e. root packages first, then child packages, and so on.
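
As a rough illustration of that idea in Concourse terms (untested sketch; the repository URL, image name/tag and build command are placeholders):

```yaml
# Concourse pipeline sketch: git resource in, build inside the SDK container,
# RPMs out as a task output. A final "put" step (not shown) would push the
# rpms/ output to a repository resource (S3, rsync, ...).
resources:
  - name: app-source
    type: git
    source:
      uri: https://github.com/example/harbour-myapp.git   # placeholder

jobs:
  - name: build-rpm
    plan:
      - get: app-source
        trigger: true
      - task: build
        config:
          platform: linux
          image_resource:
            type: registry-image
            source:
              repository: coderus/sailfishos-platform-sdk   # assumed image
              tag: "3.3.0.16"
          inputs:
            - name: app-source
          outputs:
            - name: rpms
          run:
            path: sh
            args: ["-exc", "cd app-source && mb2 build && cp RPMS/*.rpm ../rpms/"]
```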

That being said, someone must provide the CI/CD infrastructure to solve this part of the problem.

RE 2:
That part of the story must either be provided by some service provider (GitHub, GitLab) or set up in the build pipeline, i.e. the creation or update of the required repository metadata files is triggered as a pipeline job on the repository. A plain HTTP or S3 server will not solve this on its own, as the “createrepo” command must be run where the RPMs live to build the index files for all provided RPMs. A custom solution with some kind of API must be provided so that CD can trigger this repository creation step.
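
In shell terms, the missing repository-side step is essentially this (paths are examples):

```sh
# run wherever the repository is hosted, after new RPMs have been uploaded
cd /srv/repo/sailfishos_3.3.0.16/armv7hl
cp /incoming/*.rpm .
createrepo_c --update .   # regenerates repodata/ so zypper/ssu clients see the new RPMs
# the directory can then be served as-is over plain HTTP/S3
```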

RE 3:
If 1 and 2 are in place, this should work (depending on the defined pipelines).

RE 4:
The Docker build environment provides both targets, so ARM and amd64 are supported.

RE 5:
CI scripts can define different target pipelines pulling the tagged Docker image for the required SFOS release.
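
E.g. something along these lines (tag names are assumptions, they depend on how the SDK images are published):

```yaml
# sketch: one job per supported SFOS release, differing only in the image tag
build-3.3.0.16:
  image: coderus/sailfishos-platform-sdk:3.3.0.16
  script: ["mb2 build"]

build-3.2.1.20:
  image: coderus/sailfishos-platform-sdk:3.2.1.20
  script: ["mb2 build"]
```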

Pros:

  • Docker-based CI guarantees clean builds
  • CI scripts can be hosted inside the git repo (one source of truth, also for the source code, as you don’t need to clone a local copy into OBS)
  • The build process for independent RPMs can be parallelized if there are multiple worker nodes
  • Pipelines allow dependencies, i.e. if some staging job like a test case fails, the complete pipeline will stop
  • Supports different resources for input and output (GitHub, Gist, S3 …) of source code and builds
  • More control for developers, as they define the pipelines and what should be done
  • Use a CI/CD tool that fits the job (CI/CD tool maintenance should be much lower than a full OBS setup)

Cons:

  • more scripting (pipelines, shell scripts) needed
  • it will take some effort to migrate packages to CI/CD pipelines
  • needs an external repository with some createrepo magic that can be triggered via CI/CD, or a service provider offering this feature
  • decentralized (devs will host their own CI/CD + repository)

Verdict:

In the end we arrive at a situation where the community will have a fragmented build and repository space for non-Harbour-compliant applications.

Jolla should consider whether:

  • hosting a community repository with a createrepo task / API is feasible
  • providing a Docker-based CI/CD for automatic build pipelines is feasible (much lower maintenance than a full OBS setup, and the community gets a centralized CI/CD and repository for their work). I would personally suggest Concourse CI here.
2 Likes

@nekron: thanks for the summary. A few small remarks regarding the pros:

Pros:

Docker-based CI guarantees clean builds

I am not sure in which case OBS builds are not clean. Maybe you could provide an example?

Supports different resources for input and output (Github, Gist, S3 …) of source code and builds

In practice, OBS provides the _service facility that allows you to pull sources from git and other systems. Thus, we don’t upload sources; they are fetched on request from the source repository.

CI scripts can be hosted inside the git repo (one source of truth also for the source code as you don’t need to clone a local copy into OBS)

As explained above, the OBS solution ends up doing the same if you use _service. Almost all packages I have seen use such automatic source fetching. A notable exception is the droid packages required for ports.
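
For reference, such a _service file typically looks roughly like this (URL, branch and revision are placeholders):

```xml
<services>
  <!-- tar_git fetches the sources from git and creates the tarball on OBS itself,
       so no source upload is needed -->
  <service name="tar_git">
    <param name="url">https://github.com/example/harbour-myapp.git</param>
    <param name="branch">master</param>
    <param name="revision">1.0.0</param>
  </service>
</services>
```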

Build process for independant RPMs can be parallized if there are multiple worker nodes

This is done automatically by OBS.

Pipelines allow dependencies, i.e. some staging job like a testcase fails so the complete pipeline will stop

The same can be done with OBS by refusing to compose the RPM if the tests fail. In this case, all dependent packages will either use an older version (if one exists) or not be compiled due to the inability to resolve the dependency.
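
A minimal sketch of how that is wired up in a SPEC file (the build and test commands are placeholders): if the %check section returns non-zero, no RPM is produced at all.

```spec
%build
%qmake5
make %{?_smp_mflags}

%check
# a failing test suite aborts the build here, so the RPM is never composed
# and dependent packages keep resolving against the previous version
make check

%install
make INSTALL_ROOT=%{buildroot} install
```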

5 Likes

I mean that pulling a Docker image and running the build within the container lets you start compiling on a green field, with no artifacts left on the build host from a previous build. OBS workers do the same, but launch QEMU VMs instead of an LXC container, which possibly consumes more resources.
Both ways (fresh VM or fresh container) produce a clean build, so this point is neutral in both cases.

OK, I didn’t know that, as the OBS repos I investigated had the source code as local tarballs.

Yep, modern CI/CD does this, too (Concourse-CI).

So basically OBS == CI/CD++ :wink:

Let’s see what solution Jolla will offer as an OBS replacement for complex builds, if anything equal to OBS is even possible.

2 Likes

Huge thanks for these examples. Automatic changelog and release scripts were one of my bigger gripes about SFOS app development. I think this could be a separate topic on its own? Any idea if OpenRepos supports some kind of release API like GitHub does?

It would be nice to have a script that automatically creates a release on GitHub with a changelog (combining what I already have with your example should be sufficient) and then creates a release on OpenRepos.

Some kind of API was available for Harmattan applications during the appsformeego migration.
No automation API for creating packages is available at this time.
There was a client, but it is outdated by now.

4 Likes

What’s OBS? Could someone please spell out the acronym, and if possible also provide a link to the announcement referenced, or provide some other context making it possible to understand exactly what this thread is about?

My best hypothesis is that O stands for Openrepos, but I have no candidates for the other two letters.

Reading this should answer all your questions. The announcement topic is linked in the first post above, btw. (although it’s not that discoverable…)

2 Likes

https://sailfishos.org/wiki/Open_Build_Service

3 Likes

I would just chime in to suggest a slight variation on the “Docker running on a CI/CD” solution discussed above.

Background: I have experience in building packages - old-school RPMs (bare-bones rpmbuild, mock in chroot containers), older RPM build systems (Koji), more modern build systems (to me, OBS is what Koji should have been, but I never managed to get my then employer to switch), and end-user packaging systems (such as conda: I mostly contribute to bioconda nowadays at work).

Conda has an interesting approach that works with channels: semi-centralized repositories of multiple recipes.

That would be a way to address the complaint of @nekron:

In the end it comes to the situation that community will have a fragmented build and repository space for non-harbour compliant applications.

Instead, we could imagine that some communities, e.g. OpenRepos, maintain a giant collection of SPEC recipes, and the CI/CD scripts of that repository handle the recursive compilation of all recipes (or at least all the new ones in a commit/pull request, etc.). When a project requires multiple packages, they would simply appear as multiple SPECs in the same channel.

Another interesting effect is that most packages would have their build-from-source recipes centralized in a few places. So if Jolla decides to introduce AArch64, it would be seen as just yet another target for which to build the recipes in a channel.

  • It’s not unlike AArch64 showing up as another possible target build arch in OBS.
  • It’s much better than the current “hunt down every developer on OpenRepos, hope that they still pay attention to requests/messages, and pester them until they do yet another build on their local SDK/Docker and manually upload it” into which we are heading with the disappearance of OBS.

(I know that F-Droid also has such a concept of “the repo is in charge of building the software, not each individual developer”, but I don’t have any dev experience there.)

4 Likes

Looks like there are many non-OpenSUSE targets on the OpenSUSE build service: https://en.opensuse.org/openSUSE:Build_Service_supported_build_targets

Fedora, even RHEL - I wonder if a Sailfish OS build target could somehow be included as well?
Or are there reasons this would not be possible? Proprietary bits in the SFOS SDK come to mind, as well as possibly builder resources, I guess?

5 Likes

@MartinK, I was thinking along the same lines and wanted to try it. Last week I got the maps stack compiled for Fedora on OBS (https://build.opensuse.org/project/show/home:rinigus:maps) using the same RPM SPECs as for Sailfish. So, it is a matter of adding an SFOS target to test.

Looking at the Fedora target, the META (https://build.opensuse.org/projects/Fedora:32/meta) seems to be doable for SFOS as well. The project config is a bit more complicated (https://build.opensuse.org/projects/Fedora:32/prjconf). The corresponding SFOS variant has a significant amount of settings at https://build.merproject.org/project/prjconf/sailfishos:3.3.0.16

From a legal POV, it seems that SUSE expects it to be used by open source projects. As to whether we can compile against closed-source bits, I don’t know. I would have guessed that we can compile our OSS against SFOS bits, but see https://en.opensuse.org/openSUSE:Build_Service_application_blacklist .

There is a difference in the use of sb2 (used by the SFOS OBS), which SUSE does not use. Still, we have a large fraction of the Nemo stack compiled at SUSE against Fedora 32 (for ARM as well). Let’s see how it works on devices.

From the last month of experience using the SUSE OBS, I can see that it is significantly loaded with many projects. In particular, ARM 32-bit does not have many worker nodes and you can end up waiting for a while. AArch64 is OK; x86_64 sometimes has longer delays with OBS repository state recalculations. All in all, it is noticeably slower than the SFOS OBS.

5 Likes

Using GitLab Pages you can already roll and host your own repositories pretty easily on GitLab. I have done it for Debian and Arch packages.

Here are my CI template configurations: https://gitlab.com/nobodyinperson/ci-templates (especially reprepro and pacman-repo)

Here is an example of a single project that builds and hosts my personal Arch packages on GitLab: https://gitlab.com/nobodyinperson/repository

Here is an example of a single project that builds and hosts a Debian repository: https://gitlab.com/nobodyinperson/sshtunnel

This should be quickly adaptable to RPM repos as well.
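
A rough sketch of the RPM variant (untested; the job producing rpms/ and the paths are placeholders):

```yaml
# .gitlab-ci.yml sketch: publish built RPMs as a zypper-compatible repo on GitLab Pages
pages:
  image: fedora:latest
  stage: deploy
  script:
    - dnf install -y createrepo_c
    - mkdir -p public
    - cp rpms/*.rpm public/        # rpms/ is assumed to come from an earlier build job
    - createrepo_c public          # generates public/repodata/
  artifacts:
    paths:
      - public
```

On the device the result could then presumably be added with something like ssu ar myrepo https://<user>.gitlab.io/<project>/ (names are placeholders).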

And BTW, for (Python/)QML-only SailfishOS apps you don’t even need the SDK and can build the RPM in GitLab CI yourself, as for my Hasher app: https://gitlab.com/nobodyinperson/harbour-hasher

3 Likes

I just did a Docker-based CI/CD evaluation using the public Drone.io build service to create a build pipeline for @rinigus’ OSM Scout Server components.

My experiences from a two-day run are:

  • The OBS SPEC files need to be slightly modified for a build pipeline using the SailfishOS Docker SDK by @coderus. The relevant parts are downloading the git repository instead of using tarballs, plus sometimes SDK-specific fixes for the compiler toolchain, e.g. one package tried to use CC instead of GCC (the mapnik Python helper script) and needed to be patched, and so on. You usually have to verify the build process SPEC by SPEC.
  • The public Drone.io build service (cloud.drone.io) aborts a pipeline after one hour of build time. A long build process like mapnik will fail because of this. I examined the SPEC file and modified the build and install sections to speed up compilation (scons.py was using -j1 instead of a higher level of parallelism). Once again, the SPEC file had to be modified, this time to improve build times.
  • Serial builds for applications with a lot of packages must be parallelized because of the one-hour build time limit per pipeline on the standard Drone.io setup.
  • There is a lot of redundancy, as the SailfishOS SDK must be set up with the required packages for every pipeline. This can, however, be sped up by providing a custom SDK base Docker image with all the needed packages pre-installed.
  • Artifacts must be stored externally as there is no shared storage between pipelines. Steps do share the same storage (workspace), but keep the one-hour build time limit in mind.
  • Drone.io’s plugins are a bit wonky, especially the build cache plugins for SFTP or S3 cache/sync. I ended up using the rclone Docker image to copy artifacts to and from S3 storage.
  • Parallelism must be hand-crafted. This is very cumbersome when building a large project with a lot of dependencies that must be built and satisfied.
  • You will end up with a huge YAML file, as build pipelines cannot be evaluated at runtime like Concourse CI does.
  • Compiling the 11 packages required for OSM Scout Server took 25 minutes on cloud.drone.io, including the release upload to GitHub.

Drone.io is a nice public build service for smaller SailfishOS applications if there are not too many dependencies that need to be built. Otherwise you will end up in YAML hell and face build timeouts unless you parallelize massively. On the other hand, Drone.io is about as simple as it gets for putting CI/CD in place for your SailfishOS applications. For building a Linux distribution and a lot of dependent RPMs, it is not suitable.
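
For reference, a stripped-down sketch of what one of those Drone pipelines looks like (simplified from my real configuration; image names and the S3 remote are placeholders):

```yaml
kind: pipeline
type: docker
name: build-rpm

steps:
  - name: build
    image: coderus/sailfishos-platform-sdk:3.3.0.16   # assumed SDK image/tag
    commands:
      - mb2 build
  - name: upload
    image: rclone/rclone
    commands:
      # rclone remote configuration (credentials) omitted; "s3-remote" is a placeholder
      - rclone copy RPMS/ s3-remote:my-bucket/rpms
```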

If you want to check for yourself, look here:

3 Likes

Here are the exact build times on cloud.drone.io:

Compared to OBS, the build is slightly faster, even with all the zypper install commands included.

E.g. OSM Scout Server: 227 s on cloud.drone.io vs 426 s on OBS; Valhalla: 848 s on cloud.drone.io vs 1193 s on OBS.

1 Like

So, if I understand correctly, you have to hand-craft the sequence in which the compilation is done for each of the projects if you want to use that CI service? Something that is done for you when you use OBS. And, as you demonstrate, it is far from trivial in the case of OSM Scout Server. A similar problem will occur with the ports and anything complicated that we would like to build for SFOS.

2 Likes