This is a small guide to hacking together on-access sync for Nextcloud/Owncloud (and, in theory, other) network file stores.
What you’ll need
- Shell access and a little skill editing files
- Basic shell scripting knowledge
- Very basic systemd unit file knowledge
- A working Nextcloud/Owncloud setup/account/server
- Sailsync Owncloud from Openrepos (which provides the upload tool)
That last one can in theory be replaced by something else, e.g. your own oc/nc/webdav sync tool, but this guide doesn’t deal with that. What then remains is really just the systemd .path machinery.
What you don’t need
- Any part of the SFOS native NC integration (sadly)
1. Set up Sailsync folders
This assumes you have already set up the folders you want to sync automatically in Sailsync, and that a manual sync from that app works.
Doing this is beyond the scope of this guide, but using that app is pretty straightforward. Hint: if you know a little SQL, you can add folder configuration pairs via sqlite, which may be preferable if you’re logged into a shell anyway.
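As a sketch of that hint: the database location and the syncpaths table with its localpath/ocpath columns are the ones queried by the wrapper script in step 3, but the example paths here are illustrative and your Sailsync version may use additional columns, so inspect the schema first with `.schema syncpaths`:

```shell
# Sailsync config database, as queried by the wrapper script below:
db=~/.config/harbour-io.edin.projects.sailsync-owncloud/config.db
# Add a local/remote folder pair (example paths, adjust to your setup):
sqlite3 "$db" \
  "insert into syncpaths (localpath, ocpath) values ('/home/nemo/Notes', '/Notes');"
# Verify what is configured:
sqlite3 "$db" "select localpath, ocpath from syncpaths;"
```

Afterwards, check in the Sailsync app that the new pair shows up and that a manual sync of it works.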
2. Create a ~/.netrc file
To store password information, we create a .netrc file (note the dot!). .netrc is an ancient UNIX way to (relatively insecurely) store remote network auth info; see the netrc documentation for details. We use it because owncloudcmd supports it, and because it’s a little more secure than putting password information directly in a script.
cd $HOME
touch .netrc
chmod 0600 .netrc
$EDITOR .netrc
The file should have entries per Nextcloud account like this:
machine nextcloud.example.com login myuser password mysecretpassword
machine other.example.org/nextcloud login myotheruser password myothermoresecretpassword
NB: it should be obvious there is a security issue here, but this is about non-interactive, unattended scripts requiring authentication, so…
3. Create a wrapper script
The following script will:
- take one option, a local path
- query the Sailsync configuration to find out the configured remote path
- sync the directory
The remoteuri variable is important to get right. In the case of a Nextcloud setup in a subdirectory it should read: https://server.example.org:443/nextcloud/remote.php/webdav$remote
If you have a subdomain for NC it will look like
https://nextcloud.example.org:443/remote.php/webdav$remote
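A quick way to sanity-check the remote URI and your .netrc credentials before wiring up the script is curl, which also reads ~/.netrc when given -n. The hostname below is a placeholder; substitute your own webdav base URL:

```shell
# Hypothetical server name; substitute your own remote.php/webdav base URL.
# -n tells curl to take credentials from ~/.netrc.
curl -n -s -o /dev/null -w '%{http_code}\n' -X PROPFIND \
  "https://nextcloud.example.org:443/remote.php/webdav/"
# If the URL and credentials are correct, this should print 207 (Multi-Status).
```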
Place it in e.g. ~/.local/bin where systemd (see below) can find it. I call it ~/.local/bin/sailsync-owncloud_cron.sh
#!/usr/bin/env sh
# strip a trailing slash from the given local path:
here=$(printf '%s' "$1" | sed 's#/$##')
sqlbin=/usr/bin/sqlite3
sqldb=~/.config/harbour-io.edin.projects.sailsync-owncloud/config.db
# look up the configured remote path for this local path:
remote=$(printf ".mode list\n.separator ,\nselect ocpath from syncpaths where localpath like '%s';" "$here" | $sqlbin $sqldb)
if [ -z "$remote" ]; then
    printf "ERROR: could not find db entry for %s\n" "$here"
    exit 1
fi
# urlencode spaces
remote=$(printf '%s' "$remote" | sed 's# #%20#g')
remoteuri="https://server.example.org:443/nextcloud/remote.php/webdav$remote"
printf "arg is '%s'\n" "$1"
printf "local is '%s'\n" "$here"
printf "remote is '%s'\n" "$remote"
printf "remoteuri is '%s'\n" "$remoteuri"
#
#QT_LOGGING_RULES='*.warning=false,*.debug=false' LD_LIBRARY_PATH="${LD_LIBRARY_PATH}":/usr/share/harbour-io.edin.projects.sailsync-owncloud/lib/owncloud/lib /usr/share/harbour-io.edin.projects.sailsync-owncloud/lib/owncloud/bin/owncloudcmd --non-interactive -n "$here" "$remoteuri"
QT_LOGGING_RULES='*.info=false' LD_LIBRARY_PATH="${LD_LIBRARY_PATH}":/usr/share/harbour-io.edin.projects.sailsync-owncloud/lib/owncloud/lib /usr/share/harbour-io.edin.projects.sailsync-owncloud/lib/owncloud/bin/owncloudcmd --non-interactive -n "$here" "$remoteuri"
echo sync returned $?
Test that the script works by calling it with a path that you have configured in Sailsync:
chmod +x ~/.local/bin/sailsync-owncloud_cron.sh
~/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Documents/shared
NB: do not use a trailing slash for the path; give it exactly as it appears in the Sailsync config. Also, use a “real” path, e.g. /home/nemo, not the ~ shorthand.
4. Create systemd watchers
Now we make systemd watch our files and call the script when something changes:
Go to ~/.config/systemd/user/ and create a file there; I will call it sync-if-changed-notes.path:
[Unit]
Description="Watch directory and sync if changed"
[Path]
PathChanged=/home/nemo/Notes
[Install]
WantedBy=post-user-session.target
Again, be careful to give the correct home directory, nemo in my case.
The .path file expects a corresponding .service file to call, so create that too: sync-if-changed-notes.service
[Unit]
Description="NextCloud syncer"
[Service]
Environment="QT_LOGGING_RULES='*.info=false'"
ExecStart=/home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Notes
You’re almost done, the last part is to tell systemd to activate the watcher and sync service:
# make systemd pick up the new files:
systemctl --user daemon-reload
# enable the watcher; the .service has no [Install] section (note the
# "static" in its status output below) and is started by the .path unit,
# so only the .path needs enabling:
systemctl --user enable sync-if-changed-notes.path
# start the file watcher:
systemctl --user start sync-if-changed-notes.path
You can now check when/if something happened:
touch /home/nemo/Notes/testfile
systemctl --user status sync-if-changed-notes.path
● sync-if-changed-notes.path - "Watch directory and sync if changed"
Loaded: loaded (/home/nemo/.config/systemd/user/sync-if-changed-notes.path; enabled; vendor preset: enabled)
Active: active (running) since Wed 2021-03-31 16:26:51 CEST; 5 days ago
Warning: Journal has been rotated since unit was started. Log output is incomplete or unavailable.
nemo:~ $ systemctl --user status sync-if-changed-notes.service
● sync-if-changed-notes.service - "NextCloud syncer"
Loaded: loaded (/home/nemo/.config/systemd/user/sync-if-changed-notes.service; static; vendor preset: enabled)
Active: inactive (dead) since Tue 2021-04-06 09:09:53 CEST; 4s ago
Process: 25660 ExecStart=/home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Notes (code=exited, status=0/SUCCESS)
Main PID: 25660 (code=exited, status=0/SUCCESS)
Note that inactive (dead) is OKAY here; it just means the sync is not currently running. What matters is the 4s ago part, and the last line, which should show SUCCESS if the script ran successfully.
Final Notes
You can create more systemd path and service files to watch other dirs for changes; ~/Pictures/Camera would be an obvious example.
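As an untested sketch of such a pair, following exactly the pattern above (the camera folder must of course also be configured in Sailsync), sync-if-changed-camera.path:

```
[Unit]
Description="Watch camera directory and sync if changed"

[Path]
PathChanged=/home/nemo/Pictures/Camera

[Install]
WantedBy=post-user-session.target
```

and the corresponding sync-if-changed-camera.service:

```
[Unit]
Description="NextCloud camera syncer"

[Service]
Environment="QT_LOGGING_RULES='*.info=false'"
ExecStart=/home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Pictures/Camera
```

Then daemon-reload, enable the .path, and start it as in step 4.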
Note that while this method relies on Sailsync, its configuration and the owncloudcmd tool, you can use whatever tools and configs you want (e.g. rsync, scp…) by swapping out the wrapper script to do something else.
Oh, and this should be obvious, but: this will not detect changes on the remote side. It will, however, pull in remote changes whenever a local change triggers a sync.
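If you also want to pick up remote-only changes, one workaround (an untested sketch, not part of the original setup) is a systemd timer that starts the same sync service periodically, e.g. sync-if-changed-notes.timer next to the other unit files:

```
[Unit]
Description="Periodic NextCloud sync"

[Timer]
# run roughly once an hour; Unit= is redundant here since the timer
# name matches the service name, but makes the intent explicit
OnCalendar=hourly
Unit=sync-if-changed-notes.service

[Install]
WantedBy=timers.target
```

Enable it with systemctl --user enable --now sync-if-changed-notes.timer.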
UNTESTED: I have not tested what happens if you add a lot of files to a watched folder at once. It could be that systemd wreaks havoc with many script invocations, or owncloudcmd may be smart enough to run only one sync at a time. If someone with more systemd unit file writing knowledge could chip in and tell us how to guard against such a situation…
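One simple guard against overlapping runs (again an untested sketch, using flock(1) from util-linux, which may need installing separately on SFOS) is to serialize the sync calls with a lock file. The lock path below is arbitrary; use one lock per watched folder. You could put this in the .service's ExecStart in place of the bare script call:

```shell
# Take an exclusive lock before syncing; if another sync already holds
# the lock, wait up to 60 seconds for it to finish, then give up.
flock -w 60 /tmp/sailsync-notes.lock \
  /home/nemo/.local/bin/sailsync-owncloud_cron.sh /home/nemo/Notes
```

With -n instead of -w 60, a triggered run would bail out immediately if a sync is already in progress.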