Problems with build engine (Docker) on a custom-configured host kernel

Hi sailors!
I use Gentoo with many kernel configuration options disabled, so I am not sure exactly which of them caused the behavior described below.

The SSH daemon in the Docker container kept rejecting incoming connections from the SDK. The container produces no logs in the usual places, so I attached a makeshift log socket to it (via its /proc/<pid>/root path) to capture what sshd was saying:

[root@host ~]# socat unix-listen:/proc/$(pgrep -f sshd_config_engine)/root/dev/log,fork -
<86>Jun 16 12:55:05 sshd[120]: Accepted publickey for mersdk from 172.19.0.1 port 42130 ssh2: RSA SHA256:VoxluqaHrQICYwyJbOGAEPpaLthHT2NyNTmoT9yrjkA
<86>Jun 16 12:55:05 sshd[120]: pam_unix(sshd:session): session opened for user mersdk by (uid=0)
<83>Jun 16 12:55:05 sshd[120]: pam_loginuid(sshd:session): Error writing /proc/self/loginuid: Operation not permitted
<83>Jun 16 12:55:05 sshd[120]: pam_loginuid(sshd:session): set_loginuid failed
<83>Jun 16 12:55:05 sshd[120]: error: PAM: pam_open_session(): Cannot make/remove an entry for the specified session

Searching online hinted at enabling the CONFIG_AUDIT and CONFIG_AUDITSYSCALL kernel config options, but those were already enabled in my kernel. Besides, the error is EPERM (Operation not permitted), not ENOENT (No such file or directory), and as far as I can tell, writing /proc/self/loginuid works outside Docker. The same thread also suggested disabling the pam_loginuid module, and that is how I finally managed to get the Build Engine to work, by executing:

[root@host ~]# docker exec sailfish-sdk-build-engine_arusekk sed -i /loginuid/d /etc/pam.d/sshd
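A quick check that the module is really gone from the container's PAM stack (same container name assumed); if the sed worked, this prints nothing and exits non-zero:

[root@host ~]# docker exec sailfish-sdk-build-engine_arusekk grep loginuid /etc/pam.d/sshd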

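For reference, this is roughly how the audit-related kernel options mentioned above can be double-checked on the host. The first command assumes the kernel exposes its configuration via CONFIG_IKCONFIG_PROC (otherwise grep the .config under /usr/src/linux); the loginuid write is only a smoke test, and its exact behavior depends on the kernel version and on whether loginuid has already been set:

$ zgrep -E '^CONFIG_AUDIT(SYSCALL)?=' /proc/config.gz
$ cat /proc/self/loginuid
$ echo 1000 | sudo tee /proc/self/loginuid
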
Hope this workaround helps someone! :slight_smile:
I also hope this can help the SDK developers if they want to make the Docker version of the build engine more portable.


You probably have cgroups v2, too? It won’t work with that: [SDK 3.6.6] Unable to install with docker on Debian 11

Yes, I do use cgroups v2 (actually a mixed/hybrid setup, I guess; see below), but it does not seem to be an issue: I run OpenRC rather than systemd on the host, so the containerized systemd has no problem with it:

$ mount |grep cgroup
cgroup_root on /sys/fs/cgroup type tmpfs (rw,nosuid,nodev,noexec,relatime,seclabel,size=10240k,mode=755)
openrc on /sys/fs/cgroup/openrc type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,release_agent=/lib/rc/sh/cgroup-release-agent.sh,name=openrc)
none on /sys/fs/cgroup/unified type cgroup2 (rw,nosuid,nodev,noexec,relatime,seclabel,nsdelegate)
cpuset on /sys/fs/cgroup/cpuset type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuset)
cpu on /sys/fs/cgroup/cpu type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpu)
cpuacct on /sys/fs/cgroup/cpuacct type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,cpuacct)
blkio on /sys/fs/cgroup/blkio type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,blkio)
memory on /sys/fs/cgroup/memory type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,memory)
devices on /sys/fs/cgroup/devices type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,devices)
freezer on /sys/fs/cgroup/freezer type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,freezer)
pids on /sys/fs/cgroup/pids type cgroup (rw,nosuid,nodev,noexec,relatime,seclabel,pids)
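
For what it is worth, recent Docker releases also report which cgroup driver and cgroup version the daemon itself ended up using:

$ docker info --format '{{.CgroupDriver}} {{.CgroupVersion}}'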

I think it is overkill to run a full init system inside the container instead of a simple foreground SSH daemon, or even no SSH daemon at all, with a container started per command (see the sketch below).
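
To illustrate the per-command idea, I mean something roughly like the following. The image name, mount point and build target are purely hypothetical here (as is the mb2 invocation), so this is only a sketch, not how the SDK actually drives its engine:

$ docker run --rm -v "$PWD:/home/mersdk/project" -w /home/mersdk/project \
      sailfish-sdk-build-image mb2 -t SailfishOS-armv7hl build

Every build would then run in a fresh, short-lived container, with no sshd or systemd involved at all.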

Anyway, applying the loginuid workaround turned out to be enough for me, and I have successfully built my new app. :blush:


Note that cgroups v2 has not been an issue since SDK release 3.8. The Docker-based build engine closely follows the architecture of the original VirtualBox-based build engine in order to limit the maintenance cost, hence some design choices that may seem suboptimal compared to regular Docker usage patterns.