Speed isn’t great: parsing the input took about 40 seconds and the reply took around 3 minutes, but it works.
git clone https://github.com/zanussbaum/gpt4all.cpp
cd gpt4all.cpp/
make chat
wget https://the-eye.eu/public/AI/models/nomic-ai/gpt4all/gpt4all-lora-quantized.bin
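With the model downloaded, the chat binary can be started from the build directory. I'd expect `-m` to point it at the model file (that's the convention in the llama.cpp-family tools this is based on), but flag names can differ between gpt4all.cpp revisions, so check `./chat --help` for your checkout:

```shell
# Hypothetical invocation; flag names may differ in your revision (see ./chat --help).
if [ -x ./chat ]; then
    ./chat -m gpt4all-lora-quantized.bin   # starts the interactive prompt
else
    echo "build the chat binary first (make chat)"
fi
```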
The model is 4 GB, so I had to increase swap. I overshot by a lot, adding 4 GB extra; in the end it seems only about 400 MB extra was needed, as it ended up using 1.35 GB of swap.
How to increase swap: Low memory, apps crashing - change zram settings or add swapfile? [4.x] - #22 by Schrdlu
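For reference, adding a swapfile on any Linux box looks roughly like this. The path and size below are placeholders (my post suggests ~400 MB extra would have been enough), and on some filesystems you may need `dd` instead of `fallocate`:

```shell
# Sketch: create and format a swapfile (path/size are placeholders, not from the post).
set -eu
SWAPFILE=${SWAPFILE:-./swapfile}
SIZE=${SIZE:-1024M}                 # ~400 MB extra was enough in my case; 1 GiB for headroom
fallocate -l "$SIZE" "$SWAPFILE"    # reserve the space without writing zeros
chmod 600 "$SWAPFILE"               # swap areas must not be world-readable
mkswap "$SWAPFILE" > /dev/null      # write the swap signature
echo "created $SWAPFILE; enable it (as root) with: swapon $SWAPFILE"
# To keep it across reboots, add a line to /etc/fstab, e.g.:
#   /swapfile none swap sw 0 0
```

`swapon` itself needs root, which is why it's left as a printed instruction rather than run directly.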