[Guide/Hack] Automatic Nextcloud file sync when directory contents change

This is a small guide on hacking together on-access sync for Nextcloud/Owncloud (and, in theory, other) network file stores.

What you’ll need

  1. Shell access and a little skill editing files
  2. Basic shell scripting knowledge
  3. Very basic systemd unit file knowledge
  4. A working Nextcloud/Owncloud setup (server and account)
  5. Sailsync Owncloud from Openrepos (which provides the upload tool)

That last one can in theory be replaced by something else, e.g. your own oc/nc/webdav sync tool, but this guide doesn’t deal with that. Also, what then remains is just systemd .path stuff really.

What you don’t need

  1. Any part of the SFOS native NC integration (sadly)

1. Set up Sailsync folders

This assumes you have already set up the folders you want to sync automatically in Sailsync, and that a manual sync from that app works.

Doing this is beyond the scope of this guide, but using that app is pretty straightforward. Hint: if you know a little SQL, you can add folder configuration pairs via sqlite, which may be preferable if you’re logged into a shell anyway; a sketch follows.
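A sketch (the syncpaths table with its localpath and ocpath columns is what the wrapper script below queries; verify the actual schema first, and substitute values matching your setup):

# inspect the schema and the existing pairs first
sqlite3 ~/.config/harbour-io.edin.projects.sailsync-owncloud/config.db '.schema syncpaths'
sqlite3 ~/.config/harbour-io.edin.projects.sailsync-owncloud/config.db 'SELECT localpath, ocpath FROM syncpaths;'
# add a new local/remote pair (hypothetical values)
sqlite3 ~/.config/harbour-io.edin.projects.sailsync-owncloud/config.db "INSERT INTO syncpaths (localpath, ocpath) VALUES ('/home/nemo/Documents/shared', '/Documents/shared');"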

2. Create a ~/.netrc file

To store password information, we create a .netrc file (note the dot!). .netrc is an ancient UNIX way to (relatively insecurely) store remote network auth info.

We do this because owncloudcmd supports it, and because it’s a little more secure than putting password information in a script.

cd $HOME
touch .netrc
chmod 0600 .netrc
$EDITOR .netrc

The file should have entries per Nextcloud account like this:

machine nextcloud.example.com login myuser password mysecretpassword
machine other.example.org/nextcloud login myotheruser password myothermoresecretpassword

NB: it should be obvious there is a security issue here, but this is about non-interactive unattended scripts requiring authentication, so…

3. Create a wrapper script

The following script will:

  1. take one argument, a local path
  2. query the Sailsync configuration to find out the configured remote path
  3. sync the directory

The remoteuri variable is important to get right. In the case of a Nextcloud setup under a subdirectory it should read: https://server.example.org:443/nextcloud/remote.php/webdav$remote
If you have a subdomain for NC, it will look like:
https://nextcloud.example.org:443/remote.php/webdav$remote
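Before wiring everything up, you can sanity-check the URL and your .netrc entry with curl; a WebDAV PROPFIND should return HTTP 207 (Multi-Status) if credentials and path are correct:

# expect 207 if auth and path are right
curl --netrc --request PROPFIND --header "Depth: 0" \
  --silent --output /dev/null --write-out '%{http_code}\n' \
  https://nextcloud.example.org:443/remote.php/webdav/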

Place the script in e.g. ~/.local/bin, where systemd (see below) can find it. I call mine ~/.local/bin/sailsync-owncloud_cron.sh.

#!/usr/bin/env sh

# strip a trailing slash from the given local path
here=$(printf '%s' "$1" | sed 's#/$##')

sqlbin=/usr/bin/sqlite3
sqldb=~/.config/harbour-io.edin.projects.sailsync-owncloud/config.db
# ask the Sailsync config which remote path belongs to this local path
remote=$(printf ".mode list\n.separator ,\nselect ocpath from syncpaths where localpath like '%s';" "$here" | $sqlbin "$sqldb")
# urlencode spaces
remote=$(printf '%s' "$remote" | sed 's# #%20#g')
remoteuri="https://server.example.org:443/nextcloud/remote.php/webdav$remote"

if [ -z "$remote" ]; then
  printf "ERROR: could not find db entry for %s\n" "$here"
  exit 1
fi

printf "arg is '%s'\n" "$1"
printf "local is '%s'\n" "$here"
printf "remote is '%s'\n" "$remote"
printf "remoteuri is '%s'\n" "$remoteuri"

# run owncloudcmd from the Sailsync package; -n makes it use ~/.netrc
QT_LOGGING_RULES='*.info=false' LD_LIBRARY_PATH="${LD_LIBRARY_PATH}":/usr/share/harbour-io.edin.projects.sailsync-owncloud/lib/owncloud/lib /usr/share/harbour-io.edin.projects.sailsync-owncloud/lib/owncloud/bin/owncloudcmd --non-interactive -n "$here" "$remoteuri"
echo "sync returned $?"

Test that the script works by calling it with a path you have configured in Sailsync:

chmod +x ~/.local/bin/sailsync-owncloud_cron.sh
~/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Documents/shared

NB: do not use a trailing slash for the path; give it exactly as configured in Sailsync. Also, use a "real" path, e.g. /home/nemo, not the ~ shorthand.
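If in doubt, let realpath produce the canonical form for you; it expands ~ and strips trailing slashes (assuming realpath is available, e.g. from busybox or coreutils):

~/.local/bin/sailsync-owncloud_cron.sh "$(realpath ~/Documents/shared)"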

4. Create systemd watchers

Now we make systemd watch our files and call the script when something changes:

Go to ~/.config/systemd/user/ and create a file there; I will call it sync-if-changed-notes.path:

[Unit]
Description="Watch directory and sync if changed"

[Path]
PathChanged=/home/nemo/Notes

[Install]
WantedBy=post-user-session.target

Again, be careful to give the correct home directory, nemo in my case.

The .path file expects a corresponding .service file to call, so create that too: sync-if-changed-notes.service

[Unit]
Description="NextCloud syncer"

[Service]
Environment="QT_LOGGING_RULES='*.info=false'"
ExecStart=/home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Notes

You’re almost done; the last part is to tell systemd to activate the watcher and sync service:

 # make systemd pick up the new files:
 systemctl --user daemon-reload
 # enable both:
 systemctl --user enable sync-if-changed-notes.path
 systemctl --user enable sync-if-changed-notes.service
 # start the file watcher:
 systemctl --user start sync-if-changed-notes.path

You can now check when/if something happened:

touch /home/nemo/Notes/testfile

systemctl --user status sync-if-changed-notes.path
● sync-if-changed-notes.path - "Watch directory and sync if changed"
   Loaded: loaded (/home/nemo/.config/systemd/user/sync-if-changed-notes.path; enabled; vendor preset: enabled)
   Active: active (running) since Wed 2021-03-31 16:26:51 CEST; 5 days ago

Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
nemo:~ $ systemctl --user status sync-if-changed-notes.service
● sync-if-changed-notes.service - "NextCloud syncer"
   Loaded: loaded (/home/nemo/.config/systemd/user/sync-if-changed-notes.service; static; vendor preset: enabled)
   Active: inactive (dead) since Tue 2021-04-06 09:09:53 CEST; 4s ago
  Process: 25660 ExecStart=/home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Notes (code=exited, status=0/SUCCESS)
 Main PID: 25660 (code=exited, status=0/SUCCESS)

Note that inactive (dead) is OKAY here; it just means the sync is not currently running. What matters is the 4s ago part, and the last line, which should show SUCCESS if the script ran successfully.

Final Notes

You can create more systemd path and service files to watch other dirs for changes; ~/Pictures/Camera would be an obvious candidate, for example (see the sketch below).
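For example, a pair for the camera folder could look like this (the same pattern as above, only the paths change):

sync-if-changed-camera.path:

[Unit]
Description="Watch camera folder and sync if changed"

[Path]
PathChanged=/home/nemo/Pictures/Camera

[Install]
WantedBy=post-user-session.target

sync-if-changed-camera.service:

[Unit]
Description="NextCloud camera syncer"

[Service]
ExecStart=/home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Pictures/Camera

Remember to repeat the daemon-reload/enable/start steps for each new pair.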

Note that while this method relies on Sailsync, its configuration and the owncloudcmd tool, you can use whatever tools and configs you want (e.g. rsync, scp…) by swapping out the wrapper script to do something else.

Oh, and this should be obvious, but: this will not detect changes on the remote side. It will, however, sync changes from there when a local change triggers the sync.

UNTESTED: I have not tested what happens if you add a lot of files to a watched folder. It could be that systemd wreaks havoc by firing many script calls at once, or owncloudcmd may be smart enough to run only one sync.
If someone with more systemd unit file writing knowledge could chip in and tell us how to guard against such a situation, that would be great. One possible guard is sketched below.
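One cheap guard, untested, is to serialize the wrapper script with flock (from util-linux) so a second invocation bails out instead of racing the first; as far as I know systemd also won’t re-trigger the service while it is still running, which coalesces bursts somewhat. A sketch, to be added near the top of the wrapper:

# allow only one sync per watched directory at a time
lockfile="/tmp/sailsync-$(basename "$1").lock"
exec 9>"$lockfile"
flock -n 9 || { echo "another sync is already running, skipping"; exit 0; }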


Appendix:

If you just want a simple upload instead of a sync, without using the Sailsync tooling, you can use curl to PUT files into Nextcloud. This again uses ~/.netrc to pick up the password.

ncup.sh

#!/usr/bin/env bash

user=myusername

localfile=$1
remotepath=$2
# strip the directory part, keeping only the file name
remotefile=${localfile##*/}
remote=${remotepath}${remotefile}

remoteuri="https://nextcloud.example.com/remote.php/dav/files/$user/$remote"

curl --progress-bar --upload-file "$localfile" --netrc "$remoteuri"
if [[ $? -eq 0 ]]; then echo 'success!'; fi
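
Usage would be, for example (note the trailing slash on the remote directory; the script simply concatenates the two parts):

chmod +x ncup.sh
./ncup.sh /home/nemo/Pictures/Camera/somephoto.jpg Photos/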

This is very cool, @nephros :+1:
It took me a while to realize, though, that if you edit e.g. /home/nemo/Notes/SomeCategory/testfile the change isn’t recognized until you change/create a file in /home/nemo/Notes.

Yeah, that is a limitation/design decision of systemd .path units.

Of course, one could probably make a .path unit which creates more .path units :wink:

Or, one might use incrond (the package is currently not widely available, though I have a build on OBS), which predates systemd and can do similar things when file locations change; I think incrond can even watch directories recursively.
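For reference, an incrontab entry would look roughly like this ($@ expands to the watched path; syntax per incrontab(5), untested on SFOS):

/home/nemo/Notes IN_CLOSE_WRITE,IN_CREATE,IN_MOVED_TO /home/nemo/.local/bin/sailsync-owncloud_cron.sh $@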

I’ve got it running for photos and video now; that’s the most important for me :slight_smile: Thanks for the solution.


Hi @nephros,

thanks for sharing your work.
Is this script still working?

On my XA2, I’m on SFOS 4.4.0.58, with SailSync 0.9.2 working like a charm.

But the directory ~/.config/harbour-io.edin.projects.sailsync-owncloud/ doesn’t exist. Instead, I can find a ~/.config/io.edin/harbour-io.edin.projects.sailsync-owncloud/ directory, but unfortunately it is empty, with no config.db file.
Can you help?

Have you successfully configured account and folders in sailsync itself?

Yes, for sure. I have 4 paths configured in Sailsync, and it works.

Hmmm, just checked, and both folders exist here (4.4.0.64/aarch64), the former having the .db file as usual…

$ ls -l /home/nemo/.config/harbour-io.edin.projects.sailsync-owncloud
total 12
-rw-r--r--    1 nemo     nemo          8192 Jun 23 22:27 config.db
-rw-r--r--    1 nemo     nemo           127 Jun 23 22:58 config.ini

You are totally right, maybe I made a typo…
Sorry for that.

Addendum: the incredibly powerful tool rclone supports many cloud storages, including WebDAV/Nextcloud, and can be used as a replacement for Sailsync Owncloud or any other tool.
It also handles authentication much more securely, which makes the .netrc stuff obsolete.

Set up the cloud target using rclone config, add a local or alias configuration for your local dirs, and call rclone from either a wrapper or directly from the systemd service.

rclone is not available from any repo (except from some ancient, dangerous Openrepos ones like Schtuhrman’s), but you can download and install the binary directly from the rclone web site. It’s just a single binary, so rather safe to install.


Here is an example config. (Do not copy-paste this; use the command rclone config to set it up!)

[home]
type = alias
remote = /home

[my-home]
type = alias
remote = home:/nemo

[my-pics]
type = alias
remote = my-home:/Pictures

[nextcloud]
type = webdav
url = https://nextcloud.example.org/remote.php/webdav/
vendor = nextcloud
user = nextclouduser
pass = mYVeryS3cr3TP@assKeY

[nc-pics]
type = alias
remote = nextcloud:/Sailfish-Pictures

You would call this as:

rclone -q sync my-pics: nc-pics:  

Or interactively (recommended for testing the first few tries):

rclone -i sync my-pics: nc-pics:  
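To wire this into the watcher from step 4, you could also point the .service file straight at rclone instead of the wrapper script. A sketch; the binary path assumes you unpacked rclone into ~/.local/bin:

[Unit]
Description="rclone NextCloud syncer"

[Service]
ExecStart=/home/nemo/.local/bin/rclone -q sync my-pics: nc-pics: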

There is also an experimental ‘bisync’ mode which may or may not be what you want to try.

Do read the extensive rclone documentation before implementing any of this though.


I’m using the old standard inotify + curl approach. But systemd is already there, and user units, as I understand it, live in the user’s home, so I don’t recommend my method :slight_smile: I just had recipes on hand for inotify (which is a sign of my age).
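For reference, such an inotify recipe might look roughly like this (a sketch, assuming inotifywait from inotify-tools plus the ncup.sh uploader from the appendix above):

#!/usr/bin/env sh
# upload each file once it is fully written (close_write), forever (-m)
inotifywait -m -e close_write --format '%w%f' /home/nemo/Pictures/Camera |
while read -r f; do
  /home/nemo/.local/bin/ncup.sh "$f" Photos/
done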

Would it be possible to advise me on how to install rclone from binaries? This will help with other programs too.

Thank you

I might have done it like this: download the package for arm64 and click install.

https://downloads.rclone.org/v1.60.0/rclone-v1.60.0-linux-arm64.zip

What would be the command in terminal?

Thanx

You mean how to install from terminal? Something like

curl -LO <url>
unzip !$:t   # !$:t is history expansion: the last argument of the previous command, reduced to its basename ;)

If it’s the zip file. For an RPM it’s either

pkcon install-local <filename>

or

 xdg-open <filename>

If you mean how to use rclone, a rough guide is about three comments up.


I thank you @nephros, indeed, these commands are what I was looking for, and you added more… much appreciated…

So is mounting a WebDAV resource with rclone better than davfs2 in terms of configuration ease and performance?

Just set up rclone. It’s really nice. Thanks a lot!

@nephros, you didn’t by any chance set up automatic syncing of contacts and calendars? I’ve gotten as far as being able to dump contacts or calendars, split them, and sync them via vdirsyncer. Or maybe that works for you with the given tools.

Contacts and calendar work with just the built-in account.