Great work, and great initiative. I see you pulled off what I did not, and broke out the actual build step into its own named workflow step.
Well, it basically worked for me too, but I tried using a GitHub Docker step instead. And since it is still impossible to set build args there, I had to choose whether to use versioning to version the pipeline itself or to point out the SDK version. Your generic pipeline step of course works around that.
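Roughly what I mean by a generic step, as a sketch (the image name, tag and build arg value are just placeholders I made up):

```yaml
# Plain run step: build args work here, unlike in the Docker container step.
# Image name and SDK_VERSION value are placeholders.
- name: Build the SFOS build image
  run: |
    docker build \
      --build-arg SDK_VERSION=4.4.0.58 \
      -t sfos-build-image .
```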
Now I just wish Jolla would start releasing official Docker images. There is nothing wrong with yours, but if this is to become the new de facto way of CI/CD-ing SFOS apps, that seal of approval would be really nice to have. However, @martyone stated at the latest community meeting that it is not something they want to do; instead they have something else planned. I’m curious what that might be, and will of course be absolutely thrilled if it is something even better… but I’d like to be annoying and ask Jolla to please reconsider nonetheless. Docker is an excellent way of distributing toolchains. As usual in community work, having access to tools in reusable chunks (as opposed to behind an install wizard) allows for experimentation and more individually adapted solutions.
I’m not an app developer or an experienced user of workflows, so take this with a grain of salt, because it might not make any sense to you at all and I might be making things up:
Wouldn’t it be easier if some highly skilled individual or the community created a do-it-all script to handle the building, which you clone in your workflow? That way you would always have a proper and up-to-date way of building your apps with advanced features, even if you aren’t that skilled a developer. The script could also have all kinds of additional features you could call with arguments to add more advanced functionality to the build process if you need it.
To start with, the script could for example handle the building, packaging and releasing, so the workflow would need only two steps. Something like this (I added args to show how they would be passed if needed; just drop the whole with: part if you don’t need them):
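(All names below are made up; there is no such action as far as I know, it’s just to show the shape of it.)

```yaml
# Hypothetical two-step workflow: checkout plus one do-it-all action.
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Build, package and release
        uses: sfos-community/do-it-all-action@v1   # made-up action name
        with:                                      # drop this whole block if you don't need args
          arch: armv7hl
          release: 4.4.0.58
          publish: true
```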
Okay, that’s probably as good as unofficial stuff gets, and definitely good enough. Could perhaps do with some branding and making it more “community” than individual. But that is certainly not something that is high priority.
Let’s see what comes out of the OBS shutdown discussion and if any sailors chime in here.
BTW, I wonder if startup is any quicker if the Docker images are published on GitHub. (Obviously Docker Hub is better in the general case; just curious.)
I don’t like GitHub Actions CI; I like how GitLab CI works.
But who cares how fast Docker image pulling is between remote servers on XXX-Gbps connections anyway?
I mean testing not in the %check phase of the spec/rpmbuild, but after the RPMs are generated: install whatever they contain and run it.
For example, if my app ships a systemd unit or timer, or a CLI program or script, see if it works correctly. E.g. I have a build of ImageMagick, and I also have a script which uses a set of test image data and runs the convert tool from IM to check that all supported file types work correctly.
So I would like to create a (GitLab) pipeline which does build → install → run test script. But I have not been able to get zypper to install my self-built RPMs, because it expects packages to reside in a proper repo. And plain rpm won’t work because it doesn’t resolve runtime dependencies.
So I guess my question boils down to:

1. How do I transform a set of self-built RPMs into a “repo” that the local zypper on the build machine can use, while still pulling dependencies from the official repos? (See the sketch after this list for roughly what I’m after.)
2. Given that such a repo is available, how do I transport it from one (GitLab CI) “task” to the next, since new Docker instances are booted for each task? (Something with the cache and dependencies keywords in GitLab CI, I suppose.)
3. Is it even smart to run applications on the build image? Are tests there representative of the environment on “real” SFOS devices?
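For (1) and (2), this is roughly the shape of what I have in mind; createrepo_c, the RPMS/ output path, and whether zypper can be pointed at a file:// repo like this inside the build image are all assumptions on my part, since I haven’t gotten it to work yet:

```yaml
# Sketch only: hand the built RPMs to the next job as artifacts,
# then turn the directory into a repo so zypper can resolve against it
# together with the official repos.
stages:
  - build
  - test

build-rpms:
  stage: build
  script:
    - mb2 build                  # build as usual
  artifacts:
    paths:
      - RPMS/                    # assuming the RPMs land here

install-and-test:
  stage: test
  script:
    - createrepo_c RPMS/                                   # make the directory a "proper repo"
    - zypper ar --no-gpgcheck "file://$PWD/RPMS" local     # add it next to the official repos
    - zypper --non-interactive install my-package          # deps still come from official repos
    - ./run-tests.sh                                       # my own test script
```

Whether that zypper call has to go through sb2/the build target instead of the host is exactly the part I’m unsure about.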
I should look into GitLab at some point, but seeing as Jolla seems to be moving back to GitHub, and it being much bigger, it’s nice that it works there too.
Hmm, you are probably right… But something is taking a whole lot of time already before the build starts. And at least in theory GitHub only gives you a limited number of minutes each month, so if that could have been improved, it would have been nice. Not to mention that humans are impatient by nature.
I just got enough bits together to resurrect my old computer as a build server, and it builds SeaPrint in 36 seconds, quite a bit better than GitHub CI, with most of the difference in the startup part.
Not doubting you, but calling mb2 for building will install build dependencies. Calling zypper in a script will also install things. Why can’t I install self-built packages as well?
Mind you, I’m not talking about actual full-fledged apps requiring GUI/lipstick/Silica/QML and interactive user input, just running regular binaries and scripts in an SFOS environment.
I tried enabling a GitHub action for telepathy-tank to test this out. Unfortunately it needs a dependency to be built first; how would I do that? Add another GitHub action for the dependency and then somehow fetch the artifact?
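Something like this is what I’m imagining, but I don’t know if it’s the right approach; the job names, paths and the dependency repo below are placeholders, though the upload/download-artifact actions themselves do exist:

```yaml
# Sketch: build the dependency in one job, pass its RPMs as an artifact,
# then pull them into the telepathy-tank build. All names are guesses.
jobs:
  build-dependency:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          repository: owner/the-dependency   # placeholder for the dependency's repo
      - name: Build dependency RPMs
        run: ./build.sh                      # placeholder build step
      - uses: actions/upload-artifact@v2
        with:
          name: dependency-rpms
          path: RPMS/

  build-telepathy-tank:
    needs: build-dependency
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/download-artifact@v2
        with:
          name: dependency-rpms
          path: RPMS/
      - name: Install dependency and build telepathy-tank
        run: ./build.sh                      # placeholder; would install RPMS/* first
```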