Low memory, apps crashing - change zram settings or add swapfile? [4.x]

A couple of notes on going for 4G now:

  • rootfs is limited in the default partitioning scheme, there’s only about 2.3G free

  • /home has all the space, but is LUKS encrypted: better security against leaking anything, but how much slower is swapping through it?

  • auto-mounting swap from /home probably needs a special configuration

But I set up a 4G /home/swapfile for a test run now.
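
For reference, a minimal sketch of how such a 4G swapfile can be created (standard tools; adjust size and path as needed):

devel-su

# 4 GiB zero-filled file; dd keeps the blocks fully allocated, which swapon requires
dd if=/dev/zero of=/home/swapfile bs=1M count=4096
chmod 600 /home/swapfile
mkswap /home/swapfile
swapon /home/swapfile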

I just don't understand why you'd go through all that trouble. Why not use zramctl?

devel-su

swapoff /dev/zram0

zramctl -s 6196888121 /dev/zram0

mkswap /dev/zram0
Setting up swapspace version 1, size = 5,8 GiB (6196887552 bytes)
no label, UUID=16e79d4e-8f13-4128-b5c0-32b438769ce6

swapon /dev/zram0

free -m
              total        used        free      shared  buff/cache   available
Mem:           5508        3042        1067          40        1399        2889
Swap:          5910           0        5910

3 Likes

Without knowing zram details, I figured there would be a limit to how much is useful to allocate.
Why would Jolla ship an underpowered configuration for so long? But indeed, Drop swap for zram on Linux | Opensource.com seems to support the idea of systems with < 8G RAM getting a 95% zram allocation :thinking:

I did initially also double zram, but it also ran out fairly quickly. Perhaps increasing both zram and adding a swapfile is the secret sauce? Going to test this next.
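
If anyone wants to try that ~95% figure, here is a rough sketch for computing the target from MemTotal and feeding it into the same swapoff/zramctl/mkswap/swapon steps as above (untested, adjust to taste):

devel-su

# 95% of physical RAM, in bytes
zram_bytes=$(awk '/MemTotal/ { printf "%d", $2 * 1024 * 0.95 }' /proc/meminfo)

swapoff /dev/zram0
zramctl -s "$zram_bytes" /dev/zram0
mkswap /dev/zram0
swapon /dev/zram0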

1 Like

It wasn't shown on my chart, and my swap config is a bit different. Aside from the "built-in" swap, I also use a swapfile on a fast microSD card, but only a small amount (512 MB), because although the card is really fast (Samsung PRO Plus V30), more swap on it sometimes causes the system to "jam". It's only barely noticeable, but it is there. With swap kept to 512 MB it doesn't happen, or is reduced so much that you can't see any difference.

Both of my swap spaces have equal priorities.

Maybe this 512 MB of external swap is what you need? An additional swap on a good microSD performs really well, and I've been running with it for a long time now without any issues.
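
For completeness, equal priorities can be set explicitly when activating the swap spaces; a sketch (the swapfile path is only an example, adjust it to wherever your card is mounted):

devel-su

# re-add zram with an explicit priority, then add the microSD swapfile at the same one,
# so the kernel round-robins between the two areas
swapoff /dev/zram0
swapon -p 5 /dev/zram0
swapon -p 5 /run/media/defaultuser/SDCARD/swapfile   # example path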

(It’s not my test but it’s done on the same card)
[microSD card benchmark screenshot]

Not so sure, especially if you use equal priorities. The system will be writing to two different sections of the same memory. Even if you use different priorities, when the system needs to read something from both places it will slow down noticeably. This is not confirmed, it's only my theory.

I didn't do much more testing with different configs, as internal swap + swap on microSD turns out to be a really stable and fast configuration.

1 Like

If I understand correctly, by using a zram device as swap you are trading processing power for memory (for compressing and decompressing the data that is actually swapped). Especially devices with less RAM but many CPU cores could really benefit.
My Xperia 10 II with its 4 GB of RAM is nearly unusable when running Android apps, due to the OOM killer terminating random apps all the time. Using zram swap could make the device feel somewhat slower, but at least apps could run without being killed. Have to try that intensively…

1 Like

Jolla already ships a 1G zram configuration out of the box, so they are very aware of our RAM limitations.

AFAICT the primary strategy has really boiled down to

Today’s daily driver experience has been really good so far

  • I have 13 apps open, incl. Browser
  • Lighthouse shows 1G memory free, while 1G/7G swap is used
  • phone is still super responsive to every move I make in all apps

Swap stats probably signal "overkill", but let's see how this plays out after a few days of a heavy workweek. Stay tuned for updates.
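
For anyone curious how much RAM the compressed swap is actually occupying, zramctl and the raw sysfs counters show it directly (observation only, nothing here changes the config):

# DATA = original data, COMPR = compressed size, TOTAL = memory used by the device
zramctl

# raw counters: orig_data_size, compr_data_size, mem_used_total, ...
cat /sys/block/zram0/mm_stat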

4 Likes

Yep, I just posted about swapon -p -3 at Low memory, apps crashing - change zram settings or add swapfile? [4.x] - #52 by lkraav

3 Likes

@lkraav were you able to somehow implement your settings so they would be a default, or get loaded as system configuration later during boot?

I need to manually execute a few commands from the terminal each time I reboot. I have already asked this question a few times, but nobody seems to know how to help.

zram expansion is easy to persist, exactly as described in this thread.

For the added swapfile on /home, I just run swapon manually, with the help of Terminal history, as I aim not to reboot often.
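
If someone wants the resize itself to happen automatically at boot, a small systemd oneshot along these lines should also work (an untested sketch: the unit name and the 6G figure are mine, and the ordering against whatever activates the stock zram swap may need adjusting):

# /etc/systemd/system/zram-resize.service  (hypothetical unit)
[Unit]
Description=Enlarge zram swap after boot
# ordering guess; adjust if the stock zram swap comes up later than this
After=swap.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'swapoff /dev/zram0; zramctl -s 6G /dev/zram0; mkswap /dev/zram0; swapon /dev/zram0'

[Install]
WantedBy=multi-user.target

# enable it once as root: systemctl enable zram-resize.service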

I increased the zram swap size as described in this thread. Now my 10 II does not kill applications any more.
Also, the fingerprint reader appears to keep working for longer, but that might just be wishful thinking.

After 3 weeks of daily driving additional swap on X10II, my conclusion: it is hopeless.

UX is great at first, for maybe a day, then massively degrades as more and more key components end up in the super-slow /home/swapfile. zram, even boosted, gets exhausted quickly.

The Browser, as implemented, is just way too heavy an app for 4G or less RAM. It is possible to daily drive most other apps pretty smoothly, but with every hyperlink opened, a massive elephant destroys the porcelain shop.

I like @jojo taking the initiative with some Browser memory profiling work ([SFOS Browser] Solving the browser memory issue - #41 by jojo), but I think upgrading to a 6G or preferably min. 8G RAM device is the only real solution.

Isn't that a bit much?
I mean, other OSes that supposedly aren't as lightweight as SFOS can run up-to-date, full-fledged browsers with a couple of GB of RAM, and they run blazing fast with no issues at all.
From what I understand, the browser needs a lot of work; it's not that 6 GB isn't enough for a great browsing experience.

1 Like

Certainly, but I predict hardware to solve this problem significantly faster than Jolla will be able to noticeably optimize Gecko. Would love to be proven wrong. Based on the overall SFOS progress velocity, not holding my breath.

I can tell you from my 10 III that 6G of RAM is still not enough, but I don't have a II to tell you the actual difference.
Based on that, I believe that even with 8 (which is overkill, in my opinion) the experience won't be that much different.
It is sad to have to open an Android browser (and deal with the whole choppy Android app experience) to be able to browse without random closes.

Angelfish + Qt Runner does the job, no Android needed.

Performance was really bad when I tried it 2 weeks ago. And I was wondering at that point: isn't the SFOS browser open source? Because if it is, I can't understand why people would port basically anything to SFOS instead of working on the stock browser.
Unless I didn't understand something, of course.

Yes, that's right. Angelfish is slow, but its stability is better than the SF Browser's; that is, it never crashes.

Also, I prefer the SF Browser because it's a Firefox derivative while Angelfish is a Chrom(ium) derivative, and to my taste SFB/FF has a much better UI.

I strongly suspect the browser crashes are somehow related to power-saving settings: sending the CPU or some other parts of the phone to sleep and waking them up later.

I observed that the browser never crashes if DeadBeef Silica is running in the background playing music while surfing, which keeps the CPU awake and prevents power save and sleep mode. In this case, Browser UI interactions (taps, clicks) are also noticeably faster, with no delays.

On the other hand, if the Browser runs alone, UI interactions are generally sluggish. For example, try to write something in a dialog once the keyboard opens:
touch a letter key too briefly: it is ignored;
touch it for a medium time: visual feedback appears on the keyboard but the letter is not written into the text;
touch it long enough: visual feedback appears on the keyboard and the letter is inserted into the text field.

@lkraav Thanks for the Lipstick hint. I agree generally. There are times when the Browser crashes are insufferable and everything on the phone is slow for unknown reasons. Other times the UI is as fast as it should be and the Browser works quite well.
So maybe it is not the Browser itself that is at fault but some Lipstick bug.

The 'insufferable' situations can often be solved by leaving the phone in peace for half an hour; after that it works again. Lipstick running into a mess?

Next time this happens I’ll do no other tricks but immediately reset UI with SF utilities, and report here.

…to be continued…

1 Like

Same here.

My latest "overall UX" discovery is that a complete "Lipstick + apps" restart, whether it crashes on its own or you trigger it via Settings > Utilities, does wonders for restoring cold-boot-like smoothness, without a reboot.

Feeling like if I set up a systemd timer unit to trigger this nightly, I might have a solid daily driver in X10II again.
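
Something like this pair of user units is what I have in mind (an untested sketch; the unit names are hypothetical, and I'd still want to verify that restarting lipstick.service matches what the Utilities toggle does):

# ~/.config/systemd/user/lipstick-restart.service  (hypothetical)
[Unit]
Description=Nightly homescreen restart

[Service]
Type=oneshot
ExecStart=/usr/bin/systemctl --user restart lipstick.service

# ~/.config/systemd/user/lipstick-restart.timer  (hypothetical)
[Unit]
Description=Trigger the nightly homescreen restart

[Timer]
OnCalendar=*-*-* 04:00:00
Persistent=true

[Install]
WantedBy=timers.target

# enable as the regular user: systemctl --user enable --now lipstick-restart.timer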

I’ve been talking about memory leaks in this thread before, and this reinforces my belief that probably many of these complicated base components (GPU driver, Lipstick, Alien Dalvik, Browser) are simply leaking resources over the session and degrade in performance.

1 Like

This does not work for me, even when doing this first:

In fact, the result is the following:

[root@sfos ~]# zramctl -s $((2*1024*1024*1024)) /dev/zram0 
zramctl: /dev/zram0: failed to reset: Device or resource busy
[root@sfos ~]# echo 1 > /proc/sys/vm/drop_caches; swapoff -av; free
swapoff /dev/zram0
              total        used        free      shared  buff/cache   available
Mem:        3643472     1100024     2353380       12844      190068     2485908
Swap:       1048572      289588      758984

In particular this is the error:

[root@sfos ~]# echo 1 > /proc/sys/vm/drop_caches; swapoff /dev/block/zram0
swapoff: /dev/block/zram0: swapoff failed: Interrupted system call

and checking with dmesg -Hw there is a constant flux of these messages:

[  +0.000071] Freezing of tasks aborted after 0.030 seconds
[  +0.000083] OOM killer enabled.
[  +0.000002] Restarting tasks ... done.
[  +0.015447] PM: PM: suspend exit 2023-06-29 09:30:30.916913689 UTC
[  +0.000004] PM: suspend exit
[  +0.004559] binder: 2749:2749 transaction failed 29189/-22, size 32-0 line 3096
[  +0.045540] ## mmc1: mmc_gpio_set_uim2_en: gpio=101 value=1
[  +0.070757] PM: PM: suspend entry 2023-06-29 09:30:31.037763337 UTC
[  +0.000016] PM: suspend entry (deep)
[  +0.000009] PM: Syncing filesystems ... done.
[  +0.008866] Freezing user space processes ... 
[  +0.035409] PM: Wakeup pending, aborting suspend

UPDATE #1

I think it was this line that created the problem:

echo 1 > /proc/sys/vm/drop_caches

which starts a cache drop that takes longer than it is supposed to and interferes with the swapoff command, which probably does not need it anyway, because it should have its own specific internal calls for dropping caches.

UPDATE #2

Anyway, changing the zram swap size on a running system does not seem to be an immediate procedure (not that immediate, at least). In fact, this function fails in that aim even though each command succeeds:

zram_set_size() {
	declare -i mb=${1:-1024}
	swapusage() { free -m | grep -i swap | tr -s ' ' | sed "s,0 0 0,off,"; }
	swapusage; swapoff -v /dev/zram0; swapusage
	echo
	echo "The zram size at boot is set in $(ls -1 /vendor/etc/fstab.pdx20?):"
	# rewrite the zram size option in the vendor fstab (takes effect at the next boot)
	sed -i "s|\(^/dev/block/zram0.*size\)=[0-9]*,max|\\1=$((mb*1024*1024)),max|" \
		/vendor/etc/fstab.pdx20?
	grep zram /vendor/etc/fstab.pdx20? | tr -s ' '
	echo
	swapusage; swapon -v /dev/zram0; swapusage
}
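
For what it's worth, the runtime path that should work in principle is a full device reset before setting the new size, per the generic zram sysfs interface (a sketch, not Sailfish-specific, and it only helps once swapoff has actually succeeded):

swapoff /dev/block/zram0                        # must complete without the EINTR error above
echo 1 > /sys/block/zram0/reset                 # allowed only while the device is unused
echo $((2*1024*1024*1024)) > /sys/block/zram0/disksize
mkswap /dev/block/zram0
swapon /dev/block/zram0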

What device are you using?
Did you try rebooting and doing swapoff after a fresh system restart?
To be honest, I've never faced problems with the swapoff command.
Maybe you can also try disabling all swap spaces:

swapoff -a