OBS shut down and next steps

Example of a GitHub Actions CI for building a Sailfish OS RPM and uploading it to GitHub Releases: https://github.com/coderus/screencast/actions

Example of a GitLab CI for building a Sailfish OS RPM and uploading it to GitLab Releases:
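(The link for this one appears to be missing. As a rough illustration only, a minimal .gitlab-ci.yml could look like the sketch below; the image name, tag, and mb2 invocation are assumptions based on the coderus Docker images discussed in this thread.)

```yaml
# Hypothetical minimal .gitlab-ci.yml; the image name/tag and the mb2
# invocation are assumptions, adjust to the build image you actually use.
build-armv7hl:
  image: coderus/sailfishos-platform-sdk:3.3.0.16
  script:
    - mb2 -t SailfishOS-3.3.0.16-armv7hl build
  artifacts:
    paths:
      - RPMS/*.rpm
```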

12 Likes

Wow, this is amazing! (And deserving of its own topic maybe?)

I hope Jolla can start publishing their Docker images so this can be made more official.
Another improvement would be to wrap it in a named action that we can reuse (and not have to keep as much logic in our individual YAML files): https://docs.github.com/en/actions/creating-actions/creating-a-docker-container-action

3 Likes

The Docker images @coderus is providing are a very good alternative to OBS, and they can be used with pretty much any free CI/CD system out there (GitHub Actions, Travis, etc.). I personally moved away from OBS right away when I found out about them. Of course, they do not provide the same level of infrastructure as OBS, but they are far more modern and easier to use. I understand why Jolla is closing OBS, and it’s okay from my point of view. However, there are devs who have relied on it heavily, so some kind of migration guide may be needed.

5 Likes

Yeah, I have personally always used gitlab-ci for my apps, but from what I know, porters benefited from OBS a lot more and for them gitlab-ci/github actions is probably not an option. (You can also simplify the gitlab-ci example a bit more, if you don’t build both architectures in one go, although that may use more CI minutes.)

1 Like

As described above by @r0kk3rz, @piggz, and myself: this is not about simpler app development (in terms of dependencies), where Docker or the current SDK would work fine. Closing OBS will hit complex projects that

  • develop the platform further
  • maintain ports
  • build complex apps relying on many libraries

These will not be helped much by Docker, as we will be missing dependency tracking and the publishing of everything in repos.

8 Likes

One needs to create local CI scripts. Repos can be added to ssu/zypper via the file:// scheme from a local folder. This adds a lot of hassle, but it is doable. If you decide to keep up SFOS development and not give up on this platform :slight_smile:
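For illustration, the local-repo part could look roughly like this (paths and the repo alias are made up; depending on what is installed, createrepo_c may be plain createrepo):

```shell
# Sketch only: paths and the repo alias are hypothetical.
# 1. Generate repository metadata for a folder of locally built RPMs:
createrepo_c /home/nemo/rpms

# 2. Register the folder as a repo, either via ssu ...
ssu addrepo local-builds file:///home/nemo/rpms

# ... or directly via zypper:
zypper addrepo file:///home/nemo/rpms local-builds
zypper refresh
```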

3 Likes

Yes, that is true, complex projects and ports may require some other solution.

Edit: I hope the GitLab package registry will one day have RPM support: https://gitlab.com/gitlab-org/gitlab/-/issues/5932

The same applies to the GitHub package registry: https://github.com/features/packages

One needs to create local CI scripts. Repos can be added to ssu/zypper via the file:// scheme from a local folder. This adds a lot of hassle, but it is doable.

So, in principle, those scripts would reproduce OBS.

If you decide to keep up SFOS development and not give up on this platform

Time will tell. Right now it does not inspire me to invest time into it as an open source developer. So, for now I prefer to work on other projects and see how the OBS situation gets resolved.

@skvark: let’s see how that Gitlab RPM repo ‘some day’ will match our needs

3 Likes

To sum up the problem:

  1. OBS can be seen as a CI
  2. OBS provides an RPM package repository (as CD)
  3. RPMs are built from RPM SPEC files that can optionally reference dependencies from the custom-built RPM repository; rebuilding a package triggers a rebuild of all dependent packages
  4. you can choose different build targets (ARM, amd64)
  5. you can choose different releases (SFOS 3.3.x, 3.2.x)

Replacing this with an alternative solution would mean providing two components: a CI/CD system and a repository service.

RE 1:
There are many CI/CD tools around (Jenkins, Travis, Drone, Concourse-CI). The build pipeline per package could be scripted into CI files, i.e. downloading the coderus Docker image, pulling the source code from GitHub as a resource, building within the Docker environment, and pushing the final RPMs into a repository resource.
However, the build job order must be resolved by the developers themselves, i.e. root packages first, then child packages, and so on.

That being said, someone must provide the CI/CD infrastructure to solve this part of the problem.

RE 2:
That part of the story must be provided by some service provider (GitHub, GitLab) or set up in the build pipeline, i.e. the creation or update of the required repository metadata files is triggered as a pipeline job on the repository. A basic HTTP or S3 file server will not solve this problem, as the createrepo command must be run locally to build the index files for all provided RPMs. A custom solution with some kind of API must be provided so that CD can trigger this repository creation step.
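For example, if the RPMs end up in a folder the pipeline controls, the repository-creation step is essentially just (a sketch, assuming createrepo_c is available on the worker and the repo lives in ./repo):

```shell
# Hypothetical CD step: refresh the repo index after adding new RPMs.
cp RPMS/*.rpm repo/
createrepo_c --update repo/   # (re)generates repo/repodata/
# afterwards, publish repo/ through any static file host
```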

RE 3:
If 1 and 2 are in place, this should work (depending on the defined pipelines).

RE 4:
The Docker build environment provides both targets, so ARM and amd64 are supported.

RE 5:
CI scripts can define different target pipelines, pulling the tagged Docker image for the required SFOS release.
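E.g., something along these lines (the image name, tag scheme, and mb2 call are assumptions based on the coderus images mentioned earlier):

```shell
# One pipeline per SFOS release, parameterized by the image tag (assumed):
RELEASE=3.3.0.16
docker run --rm -v "$PWD:/src" -w /src \
  "coderus/sailfishos-platform-sdk:$RELEASE" \
  mb2 -t "SailfishOS-$RELEASE-armv7hl" build
```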

Pros:

  • Docker-based CI guarantees clean builds
  • CI scripts can be hosted inside the git repo (one source of truth also for the source code, as you don’t need to clone a local copy into OBS)
  • The build process for independent RPMs can be parallelized if there are multiple worker nodes
  • Pipelines allow dependencies, i.e. if some staging job like a test case fails, the complete pipeline will stop
  • Supports different resources for input and output (GitHub, Gist, S3 …) of source code and builds
  • More control for developers, as they define the pipelines and what should be done
  • Use a CI/CD tool that fits the job (CI/CD tool maintenance should be much lower than a full OBS setup)

Cons:

  • more scripting (pipelines, shell scripts) needed
  • migrating packages to CI/CD pipelines will take some effort
  • needs an external repository with some createrepo magic that can be triggered via CI/CD, or a service provider offering this feature
  • decentralized (devs will host their own CI/CD + repository)

Verdict:

In the end, the community will be left with a fragmented build and repository space for non-Harbour-compliant applications.

Jolla should consider if:

  • hosting a community repository with a createrepo task/API is feasible
  • providing a Docker-based CI/CD for automatic build pipelines is feasible (much lower maintenance than a full OBS, and the community gets centralized CI/CD and a repository for their work). I personally would suggest Concourse-CI here.

2 Likes

@nekron: thanks for the summary. A few small remarks regarding the pros:

Pros:

Docker-based CI guarantees clean builds

I am not sure in which case OBS builds are not clean. Maybe you could provide an example?

Supports different resources for input and output (Github, Gist, S3 …) of source code and builds

In practice, OBS provides the _service facility, which allows you to pull sources from git and other systems. Thus, we don’t upload the sources; they are fetched on request from the source repository.

CI scripts can be hosted inside the git repo (one source of truth also for the source code, as you don’t need to clone a local copy into OBS)

As explained above, the OBS solution ends up doing the same if you use _service. Almost all packages I have seen use such automatic upload. A notable exception is the droid packages required for ports.
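For reference, such a _service file typically looks something like this on the Sailfish/Mer OBS (the URL and parameter values below are placeholders; tar_git is the source service commonly used there):

```xml
<services>
  <service name="tar_git">
    <param name="url">https://github.com/example/harbour-myapp.git</param>
    <param name="branch">master</param>
    <param name="revision">master</param>
  </service>
</services>
```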

The build process for independent RPMs can be parallelized if there are multiple worker nodes

This is automatically done by OBS

Pipelines allow dependencies, i.e. if some staging job like a test case fails, the complete pipeline will stop

The same can be done with OBS by refusing to compose the RPM if the tests fail. In that case, all dependent packages will either use an older version (if one exists) or will not be compiled due to the inability to resolve the dependency.

5 Likes

I mean that pulling a Docker image and running the build within the container lets you start compiling on a green field, with no artifacts left on the build host from a previous build. OBS workers do the same, but launch QEMU VMs instead of LXC containers, which possibly consumes more resources.
Both ways (fresh VM or fresh container) generate a clean build, so this point is neutral in both cases.

Ok, I didn’t know that, as the OBS repos I investigated had the source code as local tarballs.

Yep, modern CI/CD does this, too (Concourse-CI).

So basically OBS == CI/CD++ :wink:

Let’s see what solution Jolla will offer as an OBS replacement for complex builds, if anything equal to OBS is even possible.

2 Likes

Huge thanks for these examples. Automatic changelog and release scripts were one of my bigger gripes about SFOS app development. I think this could be a separate topic on its own? Any idea if OpenRepos supports some kind of release API like GitHub?

It would be nice to have a script that automatically creates a release on GitHub with a changelog (combining what I already have with your example should be sufficient) and then creates a release on OpenRepos.

Some kind of API was available for Harmattan applications during the appsformeego migration.
No automation API for creating packages is available at this time.
There was a client, but it is outdated by now.

4 Likes

What’s OBS? Could someone please spell out the acronym, and if possible also provide a link to the announcement referenced, or provide some other context making it possible to understand exactly what this thread is about?

My best hypothesis is that O stands for Openrepos, but I have no candidates for the other two letters.

Reading this should answer all your questions. The announcement topic is linked in the first post above, btw. (although it’s not that discoverable…)

2 Likes

https://sailfishos.org/wiki/Open_Build_Service

3 Likes

I would just like to chime in and suggest a slight variation on the “Docker running on a CI/CD” solution discussed above.

Background: I have experience in building packages: old-school RPMs (barebones rpmbuild, mock in chroot containers), older RPM build systems (Koji), more modern build systems (to me, OBS is what Koji should have been, but I never managed to get my then-employer to switch), and end-user packaging systems (such as Conda: I mostly contribute to Bioconda nowadays at work).

Conda has an interesting approach based on channels: semi-centralized repositories of multiple recipes.

That would be a way to address this complaint of @nekron’s:

In the end, the community will be left with a fragmented build and repository space for non-Harbour-compliant applications.

Instead, we could imagine that some communities, e.g. OpenRepos, maintain a giant collection of SPEC recipes, and the CI/CD scripts of that repository handle the recursive compilation of all recipes (or at least all the new ones in a commit/pull request, etc.). When a project requires multiple packages, they will just be seen as multiple SPECs in the same channel.

Another interesting effect is that most packages would have their build-from-source recipe centralized in a few places. So if Jolla decides to introduce AArch64, it will be seen as just yet another target for which to build the recipes in a channel.

  • It’s not unlike AArch64 showing up as another possible target build arch in OBS.
  • It’s much better than the current “hunt down every developer on OpenRepos, hope that they still pay attention to requests/messages, and pester them until they do yet another build in their local SDK/Docker and manually upload it” situation into which we are heading with the disappearance of OBS.

(I know that F-Droid also has this concept of “the repo is in charge of building the software, not each individual developer”, but I don’t have any dev experience there.)

4 Likes

Looks like there are many non-openSUSE targets on the openSUSE Build Service: https://en.opensuse.org/openSUSE:Build_Service_supported_build_targets

Fedora, even RHEL. I wonder if a Sailfish OS build target could somehow be included as well?
Or are there reasons this would not be possible? Proprietary bits in the SFOS SDK come to mind, as well as builder resources, I guess.

4 Likes

@MartinK, I was thinking along the same lines and wanted to try it. Last week I got the maps stack compiled for Fedora on OBS (https://build.opensuse.org/project/show/home:rinigus:maps) using the same RPM SPECs as for Sailfish. So, it is a matter of adding an SFOS target to test.

Looking at the Fedora target, the META (https://build.opensuse.org/projects/Fedora:32/meta) seems doable for SFOS as well. The project config is a bit more complicated (https://build.opensuse.org/projects/Fedora:32/prjconf). The corresponding SFOS variant has a significant amount of settings at https://build.merproject.org/project/prjconf/sailfishos:3.3.0.16

From a legal POV, it seems that SUSE expects it to be used by open source projects. As to whether we can compile against closed-source bits, I don’t know. I would have guessed that we can compile our OSS against the SFOS bits, but see https://en.opensuse.org/openSUSE:Build_Service_application_blacklist .

There is a difference in the use of sb2 (SFOS OBS), which is not used by SUSE. Still, we have a large fraction of the Nemo stack compiled at SUSE against Fedora 32 (for ARM as well). Let’s see how it works on devices.

From the last month’s experience of using the SUSE OBS, I can see that it is significantly loaded with many projects. In particular, 32-bit ARM does not have many worker nodes and you can end up waiting for them for a while. AArch64 is OK; x86_64 sometimes has longer delays due to OBS repository state recalculations. All in all, it is noticeably slower than the SFOS OBS.

4 Likes

Using GitLab Pages, you can already roll and host your own repositories pretty easily on GitLab. I have done it for Debian and Arch packages.

Here are my CI template configurations: https://gitlab.com/nobodyinperson/ci-templates (especially reprepro and pacman-repo)

Here is an example of a single project that builds and hosts my personal Arch packages on GitLab: https://gitlab.com/nobodyinperson/repository

And here is an example of a single project that builds and hosts a Debian repository: https://gitlab.com/nobodyinperson/sshtunnel

It should be quickly adaptable to RPM repos as well.
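An RPM variant of such a Pages job might look roughly like this (an untested sketch; the image choice and the rpms/ path are assumptions):

```yaml
# Hypothetical .gitlab-ci.yml "pages" job publishing built RPMs as a repo:
pages:
  image: fedora:latest
  script:
    - dnf install -y createrepo_c
    - mkdir -p public
    - cp rpms/*.rpm public/
    - createrepo_c public/       # generates public/repodata/
  artifacts:
    paths:
      - public
```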

And BTW, for (Python/)QML-only Sailfish OS apps, you don’t even need the SDK and can build the RPM in GitLab CI yourself, like for my Hasher app: https://gitlab.com/nobodyinperson/harbour-hasher

3 Likes