Issues installing the Sailfish SDK (Docker) on Debian Bullseye (11)

I’m encountering several issues with the Sailfish SDK on Debian Bullseye (docker.io 20.10.2):

  • the ssh daemon does not seem to be started automatically in the build engine container, so ssh fails with “connection refused”
  • probably related to this: to enable ssh access again, I had to go through the following steps:
    • execute a shell inside the container
    • alter /etc/ssh/sshd_config to add mersdk to the “AllowGroups” directive
    • manually create the /home/mersdk/.ssh directory, add the key to authorized_keys
    • do the same for /root/.ssh
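
The manual recovery steps above can be sketched as a small shell function. This is a hedged sketch, not the SDK’s official procedure: `ENGINE` and `PUBKEY` are placeholders (find the real container name with `docker ps -a`, and point `PUBKEY` at the public key your SDK installation uses), and the commands only run when you call the function.

```shell
#!/bin/sh
# Sketch of the manual ssh recovery steps, wrapped in a function so
# nothing runs until you call it. ENGINE and PUBKEY are placeholders.
ENGINE="${ENGINE:-sailfish-build-engine}"   # hypothetical container name
PUBKEY="${PUBKEY:-/path/to/sdk_key.pub}"    # placeholder key path

recover_ssh() {
  # 1. let the mersdk group log in via ssh (append to AllowGroups once)
  docker exec "$ENGINE" sh -c \
    'grep -q "^AllowGroups.*mersdk" /etc/ssh/sshd_config ||
     sed -i "s/^AllowGroups.*/& mersdk/" /etc/ssh/sshd_config'

  # 2. install the key for mersdk and root
  for home in /home/mersdk /root; do
    docker exec "$ENGINE" mkdir -p "$home/.ssh"
    docker cp "$PUBKEY" "$ENGINE:$home/.ssh/authorized_keys"
  done
  docker exec "$ENGINE" chown -R mersdk:mersdk /home/mersdk/.ssh

  # 3. start sshd by hand
  docker exec "$ENGINE" /usr/sbin/sshd
}
```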

After doing all this and starting sshd manually, it seems to work. I’m wondering what the root cause is (I’ve done a complete reinstall of the SDK, after cleaning both the SailfishSDK directory and the relevant .config directory, without success). I had no such issues with Debian Buster (the current stable; Bullseye is now in the freeze stage and likely to become stable in a few months).

Unfortunately, I still get a lot of issues. Updates, for example, are not possible.

I’m not familiar with Docker. Does anyone have any clue why the init process in the Docker container does not seem to launch sshd (and other processes as well)? I’ve tried reinstalling the SDK multiple times, with different Docker versions, always with the same result. Nothing shows up in the logs, so it’s hard to diagnose.
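
Since nothing shows up in the logs, it may help to look directly at what is (and isn’t) running inside the container. A sketch, assuming you look up the real container name with `docker ps -a` first:

```shell
# Inspect a running engine container: list its processes (is sshd there
# at all? is an init process running?) and show the tail of its logs.
inspect_engine() {
  docker exec "$1" ps -ef
  docker logs "$1" 2>&1 | tail -n 50
}
# usage (hypothetical name): inspect_engine sailfish-build-engine
```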

On my Debian SID I had to add vsyscall=emulate to the kernel command line to run some docker containers. I have no idea if that also affects the SDK image though.

klick me

Tried it, without success. I’m back to the VirtualBox VM for now. I have the same issue (a broken Docker SDK that was previously working) on a Debian Buster machine with a backported 5.9 kernel, so I guess a change between the 5.8 and 5.9 kernels broke something. I don’t know enough about Docker or the Linux kernel to tell what, though.

Just to be sure: the Docker build engine works for you on Debian Buster with a 5.8 kernel, but no longer with a 5.9 kernel?

I can’t be completely sure, but that seems to be the case. It was working, and then it was broken; the kernel update is one noticeable change that could explain it. I’d be happy to do more testing/debugging on this, but I’m not sure where to start.

The problem seems to be systemd-related. Old versions of systemd (like version 225 in the SDK) expect cgroup v1, while newer versions provide the cgroup v2 (aka unified) interface. You will see related error messages by running dmesg as root: systemd in the image fails to mount /sys/fs/cgroup/systemd from the host. Running Docker with a terminal also helps to debug the problem, i.e. docker run -it ....
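
A quick way to confirm this diagnosis on the host (a sketch assuming a Linux host; unprivileged dmesg may be restricted, hence the fallbacks):

```shell
# Check the kernel log for cgroup complaints, and whether the v1
# 'systemd' hierarchy that the old systemd inside the image expects
# is mounted on the host at all.
debug_cgroups() {
  dmesg 2>/dev/null | grep -i cgroup | tail -n 20 || true
  ls -d /sys/fs/cgroup/systemd 2>/dev/null \
    || echo "/sys/fs/cgroup/systemd missing: host runs the unified (v2) hierarchy"
}
```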

To switch back to the cgroup v1 behaviour, you can add systemd.unified_cgroup_hierarchy=false to the kernel command line. After a reboot, the directory /sys/fs/cgroup/systemd should exist.
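
On Debian, the usual place to make that kernel command line change persistent is GRUB. An excerpt sketch of the config change (keep whatever options your line already carries and append the new switch):

```shell
# /etc/default/grub (excerpt) -- append the switch to your existing options
GRUB_CMDLINE_LINUX="systemd.unified_cgroup_hierarchy=false"
```

Then run `sudo update-grub` and reboot; afterwards /sys/fs/cgroup/systemd should be back.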


@JulienBlanc could you try that systemd.unified_cgroup_hierarchy=false thing?

Thanks, I will try that this weekend; I can’t do it now. But that’s probably a good catch: the Buster machine where the SDK is failing also has a backported systemd (247).

Regards,

So, I set up a clean image in virtual box. Here are the results:

  • Debian Buster: works fine, as expected (this was used for the installation)
  • migration to kernel 5.9 from backports: still works fine
  • migration to systemd 247 from backports: fails, the SDK no longer works <-- here is the culprit
  • added systemd.unified_cgroup_hierarchy=false to the boot options: works again
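
The before/after state in the results above can be checked with a small helper (a sketch: the file system type of /sys/fs/cgroup distinguishes the two modes):

```shell
# Report which cgroup hierarchy a host is running: /sys/fs/cgroup is
# cgroup2fs in unified (v2) mode and tmpfs in the v1/hybrid layout that
# systemd.unified_cgroup_hierarchy=false restores.
cgroup_mode() {
  case "$(stat -fc %T "${1:-/sys/fs/cgroup}" 2>/dev/null)" in
    cgroup2fs) echo v2 ;;
    tmpfs)     echo v1 ;;
    *)         echo unknown ;;
  esac
}
```

After booting with the workaround, `cgroup_mode` should print v1.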

Will try that on the main host, but it seems the workaround is okay. Thanks a lot!

Edit: checked that on Bullseye as well, works fine. Thanks!


Thanks! Adding systemd.unified_cgroup_hierarchy=false to GRUB_CMDLINE_LINUX works! :partying_face: