
Sizes of separated mountpoints (GNOME)

We wrote about separated mountpoints in our Debian install guide, and some readers may wonder whether those sizes are appropriate.

We wondered about that too, so we ran a small survey.

Rev3

Buster status notes. Minor tweaks.

What is important

In short, the peak usage of each mountpoint is the crucial figure.

df -h tells us how much space they currently occupy, but that alone is not enough.

Method

To obtain I/O statistics per mountpoint:

user$ cat /sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes

This shows Read, Write, Sync and Async byte counters per block device.
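
For reference, each line of that file has the form "major:minor operation bytes". The numbers below are placeholders that only illustrate the format, not real measurements:

  253:2 Read 4096
  253:2 Write 8192
  253:2 Sync 12288
  253:2 Async 0
  253:2 Total 12288
  Total 12288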

To map the major:minor numbers to the actual mountpoint names:

user$ lsblk -l

We wrote some bash scripts to collect the results into CSV and ReStructuredText tables.
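
The original scripts are not shown here, but purely as an illustration, a minimal bash sketch of the idea could look like the following (the CSV layout and the awk field handling are our assumptions, not the actual scripts):

  #!/bin/bash
  # Sketch only: join blkio Read/Write counters with mountpoint names and
  # print one CSV line per mountpoint: date,mountpoint,read_bytes,write_bytes.
  # Assumes cgroup v1, where lsblk's MAJ:MIN matches the counter lines.
  stats=/sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes

  awk -v day="$(date +%F)" '
      NR == FNR { if ($2 != "") mp[$1] = $2; next }   # lsblk: MAJ:MIN -> mountpoint
      $2 == "Read"  { r[$1] = $3 }                    # blkio: per-device Read bytes
      $2 == "Write" { w[$1] = $3 }                    # blkio: per-device Write bytes
      END {
          for (dev in mp)
              printf "%s,%s,%s,%s\n", day, mp[dev], r[dev], w[dev]
      }
  ' <(lsblk -ln -o MAJ:MIN,MOUNTPOINT) "$stats"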

Attention!

The collected data contains only the Read and Write values, not Sync and Async.

Other conditions

  • GNOME desktop machine, Debian Stretch (now Buster).
  • Single-user workstation.
  • We use noatime and tmpfs to reduce write I/O (example fstab lines follow this list).
  • Daily usage: web browsing, scripting, commits to local VCS repositories, and daily backups.
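
For reference, the relevant fstab entries look roughly like the lines below (the device name is just a placeholder, not our exact configuration):

  # /etc/fstab excerpt (illustrative only)
  /dev/mapper/vg0-home  /home  ext4   defaults,noatime            0  2
  tmpfs                 /tmp   tmpfs  defaults,noatime,mode=1777  0  0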

Result

We processed the results with Python + pandas; the raw statistics below are describe()-style summaries.

Attention!

An additional one-month survey showed very little difference.

We "utilized XDG tmpfs more" on our scripts to reduce writes; such as backups temporary directory, script outputs, logs, etc.

ext4 data=writeback on /home does not seem to make a significant difference.
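
As a tiny, hypothetical sketch of the "use XDG tmpfs more" idea (the directory and command names are placeholders):

  # Keep scratch output on the tmpfs-backed XDG runtime dir when available,
  # falling back to /tmp, so intermediate data stays in RAM where possible.
  workdir="${XDG_RUNTIME_DIR:-/tmp}/scratch"
  mkdir -p "$workdir"
  some_backup_script > "$workdir/output.log"   # placeholder command
  rm -rf "$workdir"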

Rev3 comment

A cleanly installed Debian Buster machine shows a very significant reduction in write operations; the write amounts are almost what we would expect.

We need more time and data to evaluate again.

Raw stats

Reads (bytes):

mountpoint             /         /home          /usr    /usr/share  \
count       3.100000e+01  3.100000e+01  3.100000e+01  3.100000e+01
mean        5.545155e+07  1.004920e+09  4.538749e+08  9.897872e+07
std         2.596340e+06  3.489151e+09  2.254861e+08  5.592428e+07
min         5.000909e+07  3.712000e+06  1.890847e+08  4.237824e+07
25%         5.336166e+07  1.107251e+07  2.823854e+08  5.606093e+07
50%         5.575373e+07  6.061773e+07  4.361308e+08  8.756122e+07
75%         5.736755e+07  8.877568e+07  6.311004e+08  1.357015e+08
max         6.071194e+07  1.425558e+10  8.861788e+08  2.766162e+08

mountpoint          /var    /var/cache      /var/log     /var/mail  \
count       3.100000e+01  3.100000e+01  3.100000e+01  3.100000e+01
mean        3.186039e+08  2.748591e+07  4.634211e+06  3.383048e+06
std         3.716843e+07  1.180817e+07  3.528061e+06  7.430375e+04
min         2.890762e+08  6.112256e+06  2.831360e+06  3.343360e+06
25%         2.937754e+08  2.717901e+07  3.630080e+06  3.343360e+06
50%         2.958469e+08  3.344896e+07  3.867648e+06  3.351552e+06
75%         3.539016e+08  3.378893e+07  4.293632e+06  3.386368e+06
max         4.271237e+08  3.607040e+07  2.320896e+07  3.675136e+06

mountpoint    /var/spool      /var/tmp          NVMe
count       3.100000e+01  3.100000e+01  3.100000e+01
mean        3.217540e+06  4.163402e+06  1.997444e+09
std         4.397872e+05  2.089349e+05  3.648750e+09
min         2.360320e+06  3.269632e+06  6.179169e+08
25%         3.412992e+06  4.068352e+06  7.596790e+08
50%         3.421184e+06  4.191232e+06  1.010303e+09
75%         3.429376e+06  4.332544e+06  1.289692e+09
max         3.638272e+06  4.359168e+06  1.576072e+10

Writes (bytes):

mountpoint             /         /home          /usr    /usr/share  \
count       3.100000e+01  3.100000e+01  3.100000e+01  3.100000e+01
mean        1.112989e+06  6.388108e+08  5.060542e+05  1.055249e+06
std         1.682694e+06  1.325640e+09  2.233945e+06  4.429413e+06
min         2.867200e+04  6.635520e+05  2.867200e+04  2.867200e+04
25%         1.228800e+05  2.523341e+07  3.276800e+04  3.276800e+04
50%         5.201920e+05  2.849341e+08  3.276800e+04  3.276800e+04
75%         1.013760e+06  6.725038e+08  3.276800e+04  3.276800e+04
max         7.503872e+06  7.334855e+09  1.246413e+07  2.466816e+07

mountpoint          /var    /var/cache      /var/log     /var/mail  \
count       3.100000e+01  3.100000e+01  3.100000e+01  3.100000e+01
mean        1.037924e+08  4.776801e+07  1.012425e+07  1.764385e+06
std         1.375454e+08  5.875282e+07  1.784250e+07  4.124028e+06
min         4.423680e+05  1.228800e+04  4.915200e+05  1.433600e+05
25%         1.148928e+06  3.276800e+04  1.292288e+06  2.314240e+05
50%         3.911680e+06  2.756198e+07  3.264512e+06  5.570560e+05
75%         1.711329e+08  6.883430e+07  8.062976e+06  1.046528e+06
max         5.027267e+08  1.904763e+08  7.138918e+07  1.939866e+07

mountpoint    /var/spool      /var/tmp          NVMe
count       3.100000e+01  3.100000e+01  3.100000e+01
mean        2.104089e+06  4.970496e+06  3.291994e+09
std         4.210203e+06  2.593516e+07  6.079978e+09
min         2.293760e+05  1.228800e+04  2.614272e+06
25%         3.133440e+05  2.334720e+05  8.743619e+08
50%         7.946240e+05  2.969600e+05  1.903450e+09
75%         1.552384e+06  4.546560e+05  3.662546e+09
max         1.966899e+07  1.447076e+08  3.416394e+10

Example cases

Apologies for the hard-to-read raw data.

Here are some example tables.

Control (Stretch)

Taken just after a cold boot, logging in, and opening a GNOME terminal.

MOUNTPOINT READ(bytes) WRITE(bytes)
/ 52,208,640 28,672
/usr 290,841,600 28,672
/usr/share 56,030,208 28,672
/var 293,769,216 548,864
/var/cache 33,731,584 12,288
/var/log 3,912,704 499,712
/var/mail 3,351,552 208,896
/var/spool 2,360,320 294,912
/var/tmp 4,068,352 12,288
/home 18,056,192 1,769,472
NVMe 774,396,416 3,814,400

The cases below, on the other hand, were taken just before shutting down the box.

Case 1 (Stretch)

Some AppArmor Profile tasks, mostly aa-logprof.

MOUNTPOINT READ(bytes) WRITE(bytes)
/ 57,185,280 4,988,928
/usr 189,084,672 32,768
/usr/share 42,378,240 32,768
/var 289,080,320 2,125,824
/var/cache 6,120,448 49,152
/var/log 3,650,560 57,974,784
/var/mail 3,351,552 14,348,288
/var/spool 3,421,184 15,138,816
/var/tmp 3,920,896 434,176
/home 4,084,736 1,341,747,200
NVMe 618,556,928 5,503,820,800

Case 2 (Stretch)

Many tiny VCS commits.

MOUNTPOINT READ(bytes) WRITE(bytes)
/ 56,939,520 921,600
/usr 885,986,304 77,824
/usr/share 178,807,808 577,536
/var 427,123,712 350,003,200
/var/cache 36,029,440 165,748,736
/var/log 4,473,856 12,099,584
/var/mail 3,351,552 1,728,512
/var/spool 3,433,472 3,305,472
/var/tmp 4,359,168 144,707,584
/home 725,128,192 7,334,854,656
NVMe 2,380,356,608 34,163,943,424

Note the amounts of /home and NVMe writes: they reach several gigabytes.

  • What we did was mostly writing scripts and committing them to local bzr repositories.
  • Some web browsing (memory cache only, no disk cache).
  • No media creation.
  • No software compilation.

The NVMe figure seems too large at a glance. However, all of those mountpoints are journaled ext4 over LVM.

Rev2 comment

Our machines all run journaled ext4 on LVM over LUKS.

Hence VCS commits in particular can more than double the actual storage writes.

Each VCS atomic operation sits on top of ext4 journal operations, which in turn sit on top of LVM (and LUKS) operations, so a single logical write can be amplified at every layer.

/var/log shows some difference between the cases. This is mostly due to AppArmor audit logs.

Note

Without them, /var/log sees only about 2MB of writes per day under GNOME desktop usage.

Its disk usage is also small, staying below roughly 100MB with Debian's default logrotate settings.

Considering the sizes

So, how small could each mountpoint be?

Attention!

This section used to be rather messy.

Please do not take the pseudo-math expressions below too seriously; they are rough.

However, our conclusion is the same: use a large SSD/NVMe for /home.

That seems to be the simple and effective approach.

Considering peak writes and the not-yet-TRIMmed area, our hypothesis is as follows:

  • Take a mountpoint M.
  • a(M): the allocated size of M.
  • u(M): the used amount of M.
  • t(M): the not-yet-discarded area of M, i.e. the amount still waiting for TRIM.

Minimum size of a mountpoint on SSD/NVMe:

a(M) = u(M) + t(M) + C (where C > 1GB)

Let us assume u(/home) is around 4GB.

Our statistics showed that /home tends to receive more writes than expected.

  1. On average it is below 300MB per day, but the peak is about 7GB in a day.
  2. If you TRIM every day, t(/home) would stay under 10GB.
  3. The pandas 75% line tells us /home tends to see around 600MB of writes per day.

So, roughly, t(/home) would be (the daily 75% figure times the TRIM interval, plus some headroom for peak days):

  • around 6GB if you TRIM weekly;
  • maybe 30GB if you TRIM monthly.

Hence a(/home) should be more than 10GB, even without the ext4 discard option.

  • If you TRIM manually once a month, it should be above 32GB (the command itself is shown below).
  • The ideal size might be around 64GB or more.
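
The manual TRIM itself is a single command; for example, for /home:

user$ sudo fstrim -v /home

The -v flag makes fstrim report how many bytes were discarded.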

Conclusion

A large SSD/NVMe, or a /home separated onto its own storage, is the better option.

Buy a drive above 500GB, or move /home onto another storage device.

The other mountpoints do not matter that much on a desktop, but on a server, /var and the mountpoints under it should be separated with appropriate sizes.

Rev2 comment

A larger /home makes TRIM via fstrim take a bit longer.

If you do not store much audio, video, or pictures, we think 64GB is not a bad initial value.

You can simply extend the LVM LV and the ext4 filesystem on demand, provided you have spare physical extents left in your volume group.

See also: How to extend LVM LV and ext4
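
For reference, the on-demand extension is roughly a one-liner (the VG/LV name and the size below are placeholders):

user$ sudo lvextend -r -L +16G /dev/vg0/home

The -r option grows the ext4 filesystem together with the LV; see the linked article for the details and caveats.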

Note

Workstations may not need that much space around /var.

Thanks for reading this long, messy article.

Have a nice day.
