RFC: adding shell scripting capabilities will bring PM2 to the next level

vi? You disappoint me.
ed is the Standard Unix text editor.

2 Likes

Ah here we agree: documentation of PM is in need of improvement, both for end-users and patch developers.

1 Like

sed, everything else is for boys/girls… :rofl:

About changing the sailing route, it would not be as hard as you might think:

Check the update #2, please.

sed ‘s/ed/sed/’ ? (insert evil laugh here).

1 Like

It’s not hidden. It’s a well known forum bug/problem.

1 Like

SYSTEM PM2 PATCHES, IMPLEMENTATION EXAMPLE

In applying this Patch Manager patch, two cases arise:

case #1: cacerts_gps folder exists

[root@sfos ~]# ls -al /system/etc/security/cacerts_gps
isrgrootx1.pem -> /tmp/patchmanager/.../isrgrootx1.pem
roots.pem -> /tmp/patchmanager/.../roots.pem

case #2: cacerts_gps folder does not exist

[root@sfos ~]# ls -al /system/etc/security/cacerts_gps
cacerts_gps -> /tmp/patchmanager/system/etc/security/cacerts_gps

Both cases are wrong, because Patch Manager creates symlinks where it was supposed to create directories and files. In fact, symlinks are not the same story: some tools that operate on the filesystem require a specific option to follow symlinks, like old versions of tar, and tar is one of the most widely used tools for backups. The approach can easily be changed following these examples:

mkdir /tmp/test
cd /tmp/test
repourl="coderus.openrepos.net/media/documents"
tarball="robang74-x10ii-iii-agps-config-emea-0.2.0.tar.gz"
curl https://$repourl/$tarball | tar xvzf -

The following shell code will work with -p0 and with -p1 because the sed regex deals with /var, new/var and ./new/var indifferently, and every relative filepath is converted into an absolute /filepath.

# extract the list of target files from the +++ headers, as absolute paths
files=$(sed -ne "s,^+++ *\.*[^/]*\([^ ]*\).*,/\\1,p" unified_diff.patch)
# strip level: 0 if the +++ headers are already absolute paths, 1 otherwise
grep -qe "^+++ */" unified_diff.patch || false
plvl=$?

Now it is time to do a backup for a future restore, but remember that the overlay tricks old versions of tar, therefore a recent tar or busybox tar is needed:

busybox tar czf /$store_dirpath/patch-$project_name.tar.gz $files
echo "$files" > /$store_dirpath/patch-$project_name.list

This determines the uninstall procedure, defined here as a global function without parameters:

patch_uninstall() { 
	rm -f $(cat /$store_dirpath/patch-$project_name.list)
	busybox tar xzf /$store_dirpath/patch-$project_name.tar.gz -C /
}

This is useful before applying the diff patch in order to create directories and files (not symlinks), so the symlink engine can be ignored:

patch_apply() {
	ret=0
	# pre-create the target directories and empty files, so patch will
	# write real files instead of relying on the symlink engine
	for i in $files; do mkdir -p $(dirname $i); touch $i; done
	if ! patch -d / $pargs -p$plvl < unified_diff.patch; then
		patch_uninstall
		ret=1
	fi
	return $ret
}

Is it a system patch, for example system_diff.patch instead of unified_diff.patch? If not, then uninstall it at shutdown time. But why uninstall a patch at shutdown time, with the risk that the shutdown procedure and the related uninstall procedure can be interrupted by a switch-off triggered with the physical keys? Because the filesystem for UI patches is volatile by default? Is the root filesystem also volatile across a reboot? Anyway:

num=$(printf "%05d" $(ls -1 /$store_dirpath/[0-9]*_applied.tar.gz 2>/dev/null | wc -l))
busybox tar czf /$store_dirpath/$num-${project_name}_applied.tar.gz $files

At boot time, the restore procedure will be inserted into something functionally equivalent to /etc/rc.local, after all system devices have been mounted (not just those in /etc/fstab) but before any systemd service starts; it will apply all the patches in their correct sequence:

files=$(ls -1 /$store_dirpath/[0-9]*_applied.tar.gz)
test "$files" = "" && exit 0
for i in $files; do busybox tar xzf $i -C /; done

That’s all, unless I forgot or overlooked something essential or important.


UPDATE #1

This seems promising for installing system updates, also for those that do not require a reboot:

   system-update.target, system-update-pre.target,
   system-update-cleanup.service
       A special target unit that is used for offline system
       updates.  systemd-system-update-generator(8) will redirect
       the boot process to this target if /system-update or
       /etc/system-update exists. For more information see
       systemd.offline-updates(7).

It seems a general solution that requires, by documentation, a reboot; but the reboot is managed by the configuration and not automatically enforced. However, due to its specific nature and delicacy, it may be a better option to add a service ordering relative to system-update.target, or even better system-update-pre.target, in such a way that the patches which might conflict with package updates are applied before them, making the conflicting updates fail (as they are expected to) instead of being silently overwritten.
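A minimal sketch of that idea, assuming a hypothetical patch-restore.service and restore script (the names and paths are illustrative only, nothing like this ships with PM2):

cat > /etc/systemd/system/patch-restore.service <<'EOF'
[Unit]
Description=Re-apply persistent system patches before an offline update
Before=system-update-pre.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/patch-restore.sh

[Install]
WantedBy=system-update-pre.target
EOF
systemctl daemon-reload
systemctl enable patch-restore.service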

UPDATE #2

The Patch Manager patches can be applied and unapplied many times during a user session and this is a great feature :heart_eyes:. However, changing the way in which PM2 works, some of them might not fall into this category anymore.

For example, DNS Alternative is delivered as an RPM and that is probably the best way to have it. Before it was an RPM package, a developer might have developed it as a patch. In general, the way of providing a change could be exemplified in three main stages:

  1. system patch → 2. optional RPM package → 3. default RPM package

Which also implies three different levels of integration with SFOS: unsupported (community only), supported in terms of repository consistency (community + Jolla) and Jolla supported (commercial support). Which in turn means three different levels of SLA and QoS in terms of supporting the end-users.

IMHO, the main difference between a system patch and an RPM package is bringing new binaries into the system rather than modifying the system configuration. In this second case, having a system patch seems more reasonable, especially for the end-users, who can choose to reconfigure the system as they wish, in the same manner they are doing with the UI.

While UI patches may require an application restart, and a fingerprint reader patch is functionally identical to a patch for e.g. the Settings, those patches that have an impact on the network and rely on the installation of a 3rd-party package (e.g. dnsmasq) need a little more attention. In fact, restarting the network via Sailfish Utilities does not consider the case in which dnsmasq is installed and configured (to be fixed). For all the others, a reboot is almost necessary.
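A hedged sketch of what a dnsmasq-aware network restart could look like, using plain systemctl calls (this is not the actual Sailfish Utilities implementation):

systemctl restart connman
# restart dnsmasq too, but only when it is installed and enabled
if systemctl is-enabled dnsmasq.service >/dev/null 2>&1; then
    systemctl restart dnsmasq.service
fi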

This brings us to the conclusion that there are three patch classes:

  1. those changing the UI, apps or stand-alone services: easy to restart
  2. those changing complex services like network/d-bus: might or might not be restartable
  3. those changing the system at such a level that a reboot is needed: a restart is useless

Cases #1 and #3 are almost straightforward to deal with. The second is a matter of policy: either dnsmasq is optional but supported by the network restart, because it is an important feature that users usually want to enable, or it is an unsupported 3rd-party service and then the end-user needs to reboot his/her smartphone.

Maybe you misunderstand; pull requests are how you constructively ask for adding new features to existing software on git projects, if not all VCS.

2 Likes

In the very best case, you’ve fixed a bug (or more), diligently documented that, and/or added a well thought out feature that you’ve clearly explained. Well, one could go on, but, I just made a merge request which disguises a support request. It’s complicated enough, and the person on the other side knows me, so I can probably get away with it, but, it is borderline. I’m really lucky to get PRs which 99.99% of the time simply improve stuff I’m responsible for. I’m very thankful for that.

2 Likes

It is a matter of role separations. Not about feasibility or lack of will to help.

If you don’t want to create pull requests, just say so, nobody is forcing you. But we do not have fixed roles in this user community, it is all voluntary cherry-picking. If you want to help with documentation, you are welcome. The same applies to pull requests or creating apps. And your “role” (whatever you envision as your role) is already ambiguous btw, posting howtos for beginners and asking for features, but also submitting patches for patchmanager.

2 Likes

Whoever wrote the software should take care of documenting and maintaining it, because in the same time that I spend understanding, documenting and fixing other people’s code, I could deliver a better solution or simply switch to a better product. Which most probably is what many did before me.

UPDATE #1

First of all, I wish to engage with this

We always play roles: we are the father/mother, the friend, the boy/girlfriend, the son/daughter, the manager or the employee. Our lives are about roles, but these roles are something like a jail - a box in which society forces us to stay. It is wonderful that a community of free people does not feel the need to fit into a box-schema.

When we approach a situation or a product or a software, we can play many roles: nerd hacker, cyber-security expert, marketing & sales, business owner, product manager, project manager, end-user, advanced user, social engineer, social manager, troll, etc.

These two ways of playing roles are completely different.

The first way is about constraints and expectations that OTHERS put on us, and we need to conform to them one way or another. The second way is to put ourselves in the shoes of someone else, someone different from us, without being or becoming someone else. Playing Dungeons and Dragons never made anyone be or become an elf or a wizard. However, to play a role we need to know in depth what that role (set of values) is about and what it is not.

A bit of philosophy - Aristotle wrote, “It is the mark of an educated mind to be able to entertain a thought without accepting it.” Being able to look at & evaluate different values without necessarily adopting them is perhaps the central skill required in changing one’s own life in a meaningful way. (Source: Aristotle cited and commented on Twitter)

Few people can play multiple roles at the same time, because few people are knowledgeable enough about those roles to play them. In fact, companies usually tend to create multi-cultural heterogeneous teams, because the complexity of modern-world products cannot be faced with a single-PoV approach or by segmenting the design-to-delivery among separate in-line departments.

This is the reason why community-based products are usually the winners in the long term: in the Bazaar there are many PoVs. Unfortunately, this is not always true. Some company teams are better than some community bazaars, and vice versa.

end of philosophy

Now it is my time to explain the quick answer I wrote here, before going to take care of my real life and then coming back.

Settings:System → Patchmanager:Settings → Activate enabled Patches when booting

Why activate patches at boot time instead of making them permanent with that option?

This question is extremely relevant - not only for the consequences it brings in terms of constraints - but also because it is a design choice. This design choice should be explained in detail in the documentation, otherwise this software will have to be re-designed from scratch.

No, I cannot make a pull request to fix this, because this is information that WHOEVER designed the software should explain. It could be a very good reason/decision, or it was a good reason/decision at that time, e.g. for SFOS 2.x, but not anymore.

This is the reason why these design choices should be documented: they age, and they still have an impact despite aging. The temporary workaround becomes the product and the product becomes the legacy. We need to stop this before it even begins, by documenting it.

What does this have to do with roles? Unless someone plays the product and project manager role in the community, those PoVs are missing, and in fact - AFAIK - the part of the documentation which refers to this design choice is missing.

end of theory

We are evaluating a change between “activate at boot time” and “keep persistent”.

We can make a comparison:

  • persistence is easier because we “patch & forget”
  • forgetting is not a good practice, therefore we keep checking
  • checking, but when?
  • every time the Patch Manager page is shown
  • how can we implement this check?
  • patch -Rp0 --dry-run can fail or not (a sketch follows this list)
  • is it quick enough?
  • yes → done
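A minimal sketch of that check, assuming the patch files live under /usr/share/patchmanager/patches (the path and the loop are illustrative, not the actual PM code):

# a patch is considered still applied if it can be cleanly reverted in a dry run
is_applied() {
    patch -R -p0 -d / --dry-run < "$1" >/dev/null 2>&1
}

for p in /usr/share/patchmanager/patches/*/unified_diff.patch; do
    if is_applied "$p"; then echo "applied: $p"; else echo "not applied: $p"; fi
done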

Here we are: we can have a persistent Patch Manager with few changes. Now it is your turn to play roles or simply express yourself. What good can persistence provide and what problems can it cause?

end of doing better

UPDATE #2

Why am I pedantically collecting these suggestions in the Quick Start Guide? Because one day someone will arrive and, with social engineering that will please you, will convince you to do these things or equivalent ones - in the lucky scenario; otherwise not. However, the main way is the hard way: the community bazaar should learn how to play roles.

Vague philosophy lessons we can do without : check ✓
Very pedantic tone towards others : check ✓
Walls of text changed over and over again : check ✓
Redirect your “answer” to another subject in the same post : check ✓

Despite similar remarks by others I fail to see any change, and I am giving up hope on any self-reflection. All I can do is give you no more ammunition, so this will be the last reaction you will get from me.

6 Likes

Yes, please go ahead with either of these two options; pick the one which suits you best.

This design choice should be explained in details into the documentation otherwise this software have to be re-design from scratch. […] this is an information that WHO designed the software should explain.

Neither you nor anyone else is in the position to tell somebody here what should or has to be done. You may make suggestions and try to convince people (though a staccato of half-baked ideas is not very convincing), but basically it is “DIY or leave it”.

We need to stop this before even it begins, …

I wonder who comprises the “we” you keep recurring to: Do you have multiple personalities? If so, this is fine, because then each “we” means “I”; otherwise, I can only reiterate: “Neither you nor anyone else is in the position to tell somebody here what should or has to be done.”

5 Likes

That is just what I did, moving forward on both of them. About the better solution:

  1. I showed that it is easy and feasible to patch the filesystem (files and directories) without creating links to a temporary directory
  2. the Patch Manager can move easily from “apply at boot time” to “persistent mode”, with a check via the --dry-run option, which is probably already implemented because currently the Patch Manager is able to detect when a patched file has changed
  3. avoid that Patch Manager removes patches when the system is asked to go down for shutdown or reboot

In particular, about point #3, I have tested with success and satisfaction a killall -9 patchmanager. Obviously this alone would not provide persistence, because /tmp/patchmanager is volatile. Now I have to make another test based on the information collected with find /tmp/patchmanager -type f.

The test will be similar to the shell script code I presented here (a rough sketch follows the list):

  1. collect the list of files using find
  2. backup all the system files while all patches are disabled (original versions), which is probably not necessary because it is reasonable to assume they are stored somewhere
  3. kill the patchmanager
  4. use the list of files to remove the links and replace them with real files
  5. start the patchmanager again to check how it is going to behave
  6. do a system reboot instead of point #4
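A rough sketch of steps #1, #3 and #4, assuming step #2 (the backup of the original files) has already been done with all patches disabled:

# 1. the files under /tmp/patchmanager mirror the patched system paths
files=$(find /tmp/patchmanager -type f | sed 's,^/tmp/patchmanager,,')

# 3. stop the patchmanager daemon (its symlinks survive until reboot)
killall -9 patchmanager

# 4. replace every symlink with a real copy of the patched content
for f in $files; do
    if [ -L "$f" ]; then
        rm "$f"
        cp -a "/tmp/patchmanager$f" "$f"
    fi
done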

Some tests, just before going to edit the two scripts that apply patches: one in Perl and the other in shell script.

After that, I will probably discover the SFOS ill-designed choice that constrains the Patch Manager to act in a volatile way instead of providing persistence. Or, in a lucky scenario, I will simply discover that being volatile is not a constraint for the Patch Manager (or not anymore).

In both cases the result will be a lot of fun. :blush:

UPDATE

About point #2: checking /tmp/patchmanager3/patchmanager.log, I found that the check with patch -Rp0 --dry-run is exactly what Patch Manager does to verify that each enabled patch is applied correctly.

If I remember correctly, patchmanager in the old days did modify the original files directly. Back then a patch could break your device, for example lipstick would not start anymore after a reboot. Or not unapplying all patches before a system upgrade could break the system. Then patchmanager switched to the currently used solution.

Please stop whining about your dnsmasq patch. Only a few of the people who use SFOS are using PM, and a lot fewer are using dnsmasq. Make an RPM, make a script, make a patchmanager fork or something else, but please stop spamming the forum with totally off-topic posts about your view of the world.

6 Likes

This can be a real issue, thanks for having highlighted it.

It can be solved in another way without renouncing having a system configuration manager. This is the reason why, since the first comment in this thread in which I included some code, I have suggested making a system backup for each patch.

How to revert to the backup in case of system failure? It can be done with a userland watchdog (which is missing), and dsmetool seems a useful piece of software to deal with a controlled reboot after the watchdog expires. Smarter solutions might also arise.
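A very rough sketch of such a userland watchdog, assuming a plain reboot instead of dsmetool and purely illustrative paths and timeout:

arm_watchdog() {
    # arm before applying a risky change: a background timer restores the
    # backup and reboots unless the change is confirmed in time
    touch /run/patch-watchdog.armed
    (
        sleep 120
        if [ -f /run/patch-watchdog.armed ]; then
            busybox tar xzf /root/pm-backup/originals.tar.gz -C /
            reboot
        fi
    ) &
}

disarm_watchdog() {
    # call this after verifying that the system still works as expected
    rm -f /run/patch-watchdog.armed
}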

This too can be a real issue, thanks for highlighting it.

This means that the upgrade procedure should be better implemented. After all, there is a command immediately available to uninstall all the patches: patchmanager --unapply-all, which deactivates and disables (unapplies) all patches.

Despite the PM patches, the SFOS upgrade can fail because the user made some kind of change, including installing RPM packages. Before the upgrade, a procedure can be created to do a backup of the system, and the recovery procedure above will also cover a failed system upgrade by reverting the system to its previous state.
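A sketch of such a pre-upgrade step, combining what is already available (the backup scope and destination paths are assumptions, not an official procedure):

# unapply every Patch Manager patch before touching the system
patchmanager --unapply-all

# snapshot the parts that user changes usually affect
rpm -qa | sort > /root/pre-upgrade-packages.list
busybox tar czf /root/pre-upgrade-etc.tar.gz /etc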

Sorry, here you missed the whole point, but I can understand you because I wrote a lot about the PM next generation. Therefore a little recap is necessary:

  • The dnsmasq connman integration is a test (about a different PoV), because DNS Alternative is considered the way to go and it contains dnsmasq.

  • The correct approach is to fix the dnsmasq and connman RPMs, and this is clearly stated in the patch description. This information should also be considered acquired and accepted.

  • The Quick Start Guide diverged from its original aim and started to collect parts that IMHO need to be fixed. Progressively over time, it moved from being an “end-user” guide to a “product-manager” guide. Thus, it still reports my patch because it is a corner case.

  • There is a huge gap between patch applications (even with a little scripting support) and RPM packaging, because the RPM repository - the sum of all repositories of all contributors - should NOT make an SFOS upgrade procedure fail. As you can read in the forum, the upgrade failure seems normal instead, or at least not such a rare incident.

  • SFOS completely lacks a system configuration manager, which is an indispensable tool for developing a fleet management tool. The two together should completely solve the problem of upgrade failures, otherwise they are not good enough for that role. But they can do much more than this.

BTW the main question is: why should a community care about modding SFOS in such a manner that it can support a system configuration manager and a fleet management tool?

The first and straightforward answer: a safe and friendly relationship with upgrades. But there is much more related to these two tools, which are missing because SFOS is not designed to support them. That “much more” is also about Jolla profitability, therefore Jolla should be even more interested in these design changes than the community.

After all, unless people here wish to follow a strict policy about RPM repositories like Debian, Ubuntu, RedHat, SuSE, etc. are doing, which requires a lot of top-down organised work (Debian is a non-profit foundation, in fact), then those two tools are the only way to obtain something equivalent or, at least, a way to restore the system to the last working configuration.

By the way, ZFS has filesystem snapshots for this purpose, but it is not the right approach for dealing with a fleet of IoT devices. It is tailored for servers; even desktops can leverage it, with some important constraints (user data, for example).

Finally: am I going to change the Patch Manager in order to make it a system configuration manager? I do not think so. Since the beginning, I have been thinking about another, completely different and much more flexible solution. But it would be a shame not to learn from what has been done, and learning by doing is the best way. Doing strange things from your PoV, but they seem strange exactly because they are challenging the current system constraints.

UPDATE #1

This is a patch which probably will not work, because I saw that Patch Manager runs in a jail and reasonably with user privileges, not root privileges (update: it works, also after a reboot). Despite the privileges, it is still a proof-of-concept rather than a definitive solution.

--- /usr/libexec/pm_unapply
+++ /usr/libexec/pm_unapply
@@ -25,7 +25,8 @@
 PATCH_EDITED_NAME="unified_diff_${SYS_BITNESS}bit.patch"
 PATCH_EDITED_BACKUP="$PM_PATCH_BACKUP_DIR"/"$PATCH_EDITED_NAME"
 
-ROOT_DIR="/tmp/patchmanager"
+ROOT_DIR="/"
+TMP_ROOT_DIR="/tmp/patchmanager"
 
 # Applications
 PATCH_EXEC="/usr/bin/patch"
@@ -66,6 +67,7 @@
   exit 0
 }
 
+files=""
 verify_text_patch() {
   if [ -f "$PATCH_FILE" ]; then
     log
@@ -74,6 +76,12 @@
     log "----------------------------------"
     log
 
+    files=$(/bin/sed -ne "s,^+++ *\.*[^/]*\([^ ]*\).*,/\\1,p" "$PATCH_PATH")
+    for i in $files; do
+      [ -L "$i" ] && /bin/rm "$i"
+      [ -f "$TMP_ROOT_DIR/$i" ] && /bin/cp -arf "$TMP_ROOT_DIR/$i" "$i"
+    done
+
     $PATCH_EXEC -R -p 1 -d "$ROOT_DIR" --dry-run < "$PATCH_FILE" 2>&1 | tee -a "$PM_LOG_FILE"
   fi
 }
@@ -87,6 +95,7 @@
     log
 
     $PATCH_EXEC -R -p 1 -d "$ROOT_DIR" --no-backup-if-mismatch < "$PATCH_FILE" 2>&1 | tee -a "$PM_LOG_FILE"
+    for i in $files; do [ -s "$i" ] || /bin/rm -f "$i"; done
   fi
 }
 
--- /usr/libexec/pm_apply
+++ /usr/libexec/pm_apply
@@ -29,7 +29,7 @@
     source /etc/patchmanager/manglelist.conf
 fi
 
-ROOT_DIR="/tmp/patchmanager"
+ROOT_DIR="/"
 
 # Applications
 PATCH_EXEC="/usr/bin/patch"
@@ -69,6 +69,13 @@
     log "Test if already applied patch"
     log "----------------------------------"
     log
+
+    files=$(/bin/sed -ne "s,^+++ *\.*[^/]*\([^ ]*\).*,/\\1,p" "$PATCH_PATH")
+    for i in $files; do
+      if [ -L "$i" ]; then
+        /bin/rm "$i" && /bin/touch "$i"
+      fi
+    done
 
     $PATCH_EXEC -R -p 1 -d "$ROOT_DIR" --dry-run < "$PATCH_PATH" 2>&1 | tee -a "$PM_LOG_FILE"

UPDATE #2

Instead of patching the current installed version of Patch Manager, I forked it from its GitHub repository. Today, with seven patches:

I have unified the pm_apply and pm_unapply shell scripts into a single one, pm_patch.env, because most of the code was redundant.

  • pm_apply does source pm_patch.env apply "$@"
  • pm_unapply does source pm_patch.env unapply "$@"

This would help to maintain such shell script code in the future.
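A hedged sketch of how such a dispatcher could look; the real pm_patch.env in the fork is certainly more complete, and the defaults below are only illustrative:

# pm_patch.env - shared code sourced by pm_apply and pm_unapply
ROOT_DIR="${ROOT_DIR:-/tmp/patchmanager}"
PATCH_EXEC="/usr/bin/patch"

do_apply()   { "$PATCH_EXEC" -p 1 -d "$ROOT_DIR" --no-backup-if-mismatch < "$1"; }
do_unapply() { "$PATCH_EXEC" -R -p 1 -d "$ROOT_DIR" --no-backup-if-mismatch < "$1"; }

mode="$1"; shift
case "$mode" in
    apply)   do_apply   "$@" ;;
    unapply) do_unapply "$@" ;;
    *)       echo "usage: . pm_patch.env apply|unapply <patch>" >&2 ;;
esac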

Yes, it is hard to watch years of work by several people being wasted and the wheel being reinvented once again.

2 Likes

About 10 years of work, but there are still areas of improvement regarding code redundancy reduction, for example:

Possibly I missed something: does SFOS have a system configuration manager, or does Jolla have a fleet management system that I am not aware of? Don’t be shy: a yes or a no might be enough.

Assuming you mean MDM, which is, let’s say, the more common term than “Fleet Management”, then yes.

2 Likes

You are devolving Patchmanager with this plan.

This is how it has worked historically (PM2.x), which resulted in bad, bad times at times.

I recommend looking into src/preload/src/preloadpatchmanager.c and try to understand what this does and why, and its connection to the perceived shortcoming of the current implementation.
You seem to have so far missed this core piece of Patchmanager functionality.

If you want to go back in time though, there is a fork called “pm2-forever” in this repo which as far as I understand is dedicated to the old ways.

2 Likes