Disk Usage - Location? Custom Partitions

Hello - CyberPanel is fantastic - I'm on year 3 now. Thank you!

I have a new server with a custom-partitioned Ubuntu install:
2 × 4TB NVMe
backup_disk = 1 full disk
/boot = 1GB
/home = 3.1TB
/var = 1GB
SWAP = 256GB

Disk Usage ‘/’ shows 92%

root@gpl:/# df -ih
Filesystem Inodes IUsed IFree IUse% Mounted on
udev 16M 561 16M 1% /dev
tmpfs 16M 936 16M 1% /run
/dev/nvme1n1p4 640K 183K 458K 29% /
tmpfs 16M 30 16M 1% /dev/shm
tmpfs 16M 3 16M 1% /run/lock
tmpfs 16M 18 16M 1% /sys/fs/cgroup
/dev/nvme1n1p3 64K 304 64K 1% /boot
/dev/nvme1n1p2 6.3M 87K 6.2M 2% /var
/dev/nvme1n1p6 199M 443K 198M 1% /home
/dev/nvme1n1p1 0 0 0 - /boot/efi
/dev/loop0 94K 2.5K 92K 3% /tmp
/dev/nvme0n1p1 218M 140K 218M 1% /backup_disk
/dev/loop1 12K 12K 0 100% /snap/core20/1587
/dev/loop2 480 480 0 100% /snap/snapd/14978
/dev/loop4 802 802 0 100% /snap/lxd/22753
/dev/loop3 12K 12K 0 100% /snap/core20/1581
/dev/loop5 486 486 0 100% /snap/snapd/16292
/dev/loop6 796 796 0 100% /snap/lxd/21835
tmpfs 16M 22 16M 1% /run/user/0

Question - where is the folder that CyberPanel is showing as 92% full?

Not sure why you posted df -ih inode counts? What's the purpose? df -h gives you disk space. Also not sure why you have 256GB of swap; if I filled up that much swap on a webserver I'd be seriously concerned.

In any case, you can debug the issue by running the very command CyberPanel uses for that count (which isn't df or du):

python3
>>> import psutil
>>> psutil.disk_usage('/')
>>> exit()

In any case, just looking at the inode count of /, which is 640k, versus the 6.3M inodes on /var, I'm not sure about the reasoning behind your partitioning. psutil isn't exactly super precise and doesn't always recognize weird mounts and such, but it's usually not THAT far off.
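
If you want to sanity-check the figure CyberPanel shows against what df reports, here's a minimal sketch (assuming python3 with psutil, which a CyberPanel box already has since the panel itself uses it) comparing psutil with shutil.disk_usage, which reads the same statvfs data df relies on:

import psutil
import shutil

# psutil.disk_usage() is what CyberPanel reads for its dashboard figure;
# shutil.disk_usage() is the stdlib equivalent, backed by the same statvfs call as df.
p = psutil.disk_usage('/')   # sdiskusage(total, used, free, percent)
s = shutil.disk_usage('/')   # usage(total, used, free)

gib = 1024 ** 3
print(f"psutil : {p.used / gib:.1f} GiB used of {p.total / gib:.1f} GiB ({p.percent}%)")
print(f"shutil : {s.used / gib:.1f} GiB used of {s.total / gib:.1f} GiB")

If both agree with what df -h says for /, the 92% really is the root partition filling up and not a panel glitch.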

Hello - thank you for the fast reply, and yes, df -h is what I should have posted:
Filesystem Size Used Avail Use% Mounted on
udev 63G 0 63G 0% /dev
tmpfs 13G 1.5M 13G 1% /run
/dev/nvme1n1p4 9.8G 9.0G 275M 98% /
tmpfs 63G 108M 63G 1% /dev/shm
tmpfs 5.0M 0 5.0M 0% /run/lock
tmpfs 63G 0 63G 0% /sys/fs/cgroup
/dev/nvme1n1p2 98G 7.8G 86G 9% /var
/dev/loop0 1.5G 361M 989M 27% /tmp
/dev/nvme1n1p3 974M 206M 701M 23% /boot
/dev/nvme0n1p1 3.4T 416G 2.8T 13% /backup_disk
/dev/nvme1n1p1 1.1G 5.3M 1.1G 1% /boot/efi
/dev/nvme1n1p6 3.1T 100G 2.8T 4% /home
/dev/loop1 68M 68M 0 100% /snap/lxd/22753
/dev/loop3 68M 68M 0 100% /snap/lxd/21835
/dev/loop4 44M 44M 0 100% /snap/snapd/14978
/dev/loop5 62M 62M 0 100% /snap/core20/1587
/dev/loop6 47M 47M 0 100% /snap/snapd/16292
tmpfs 13G 0 13G 0% /run/user/0
/dev/loop7 62M 62M 0 100% /snap/core20/1593

The swap - I sized it with the rule of doubling the RAM.

The server I am using is the AMD 5950x
https://www.hetzner.com/dedicated-rootserver/ax101/configurator#/

Can I resize anything to stop the 97% usage from climbing higher?

Or should I reinstall Ubuntu and use default values for everything?
Many thanks.

Well, that's the Ubuntu desktop partitioning. The "swap = double the RAM" rule is for laptops that hibernate (see SwapFaq - Community Help Wiki); the swap for a 128 GB server that doesn't hibernate is about 11 GB. Does your server hibernate? With a lid you close and everything? I'm genuinely curious.
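
For what it's worth, the arithmetic from that SwapFaq page (the no-hibernation rule of thumb is roughly the square root of the RAM) is trivial to check - a throwaway sketch, assuming the AX101's 128 GB of RAM:

import math

# SwapFaq rule of thumb for machines that never hibernate: swap ≈ sqrt(RAM in GB).
ram_gb = 128
print(f"suggested swap ≈ {math.sqrt(ram_gb):.0f} GB")   # ~11 GB, versus the 256 GB reserved now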

Anyway, the whole 63 GB of /sys/fs/cgroup, /dev and all that /snap nonsense is Ubuntu's desktop SNAP app-store system (https://snapcraft.io/), which sucks and needs to go from any server, along with all the desktop stuff: policykit, multipath, udisks2, etc. (there's a ton - Ubuntu desktop is very bloated).

There's basically no easy way to run a server on the 2% of disk left on / unless you start mounting every log folder to somewhere with space, and just 1.5 GB on /tmp is unworkable for almost any server. Be very wary of Ubuntu desktop images on a server because of the whole SNAP thing; it's very different from a default Debian desktop.
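
Before touching fstab, it's worth finding out what is actually eating the ~9 GB on /. A rough sketch of a du-style walk in Python (nothing beyond python3 needed; it deliberately stays on the root device, so /var, /home, /backup_disk and the snap loop mounts are not counted):

import os

ROOT_DEV = os.stat('/').st_dev

def dir_size(path):
    # Sum file sizes under path, skipping anything mounted from another device.
    # Hard links get counted more than once, but it's close enough to spot the culprit.
    total = 0
    for dirpath, dirnames, filenames in os.walk(path, onerror=lambda err: None):
        try:
            if os.lstat(dirpath).st_dev != ROOT_DEV:
                dirnames[:] = []   # don't descend into other mounts
                continue
        except OSError:
            continue
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass
    return total

results = [(dir_size(e.path), e.path) for e in os.scandir('/') if e.is_dir(follow_symlinks=False)]
for size, path in sorted(results, reverse=True)[:10]:
    print(f"{size / 1024**3:8.2f} GiB  {path}")

Whatever floats to the top is the thing to relocate, or to point somewhere with space via fstab.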

You can fix almost all of this in /etc/fstab

Show me the df output.

Ahh, that is my mistake.
It is server-only - no desktop GUI installed. It does not hibernate.

root@gpl:/# df
Filesystem 1K-blocks Used Available Use% Mounted on
udev 65868272 0 65868272 0% /dev
tmpfs 13189284 1488 13187796 1% /run
/dev/nvme1n1p4 10218772 9213024 465076 96% /
tmpfs 65946412 109600 65836812 1% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 65946412 0 65946412 0% /sys/fs/cgroup
/dev/nvme1n1p2 102626232 8986864 88380104 10% /var
/dev/loop0 1474600 401844 979572 30% /tmp
/dev/nvme1n1p3 996780 210292 717676 23% /boot
/dev/nvme0n1p1 3592279904 536901416 2872825964 16% /backup_disk
/dev/nvme1n1p1 1098632 5356 1093276 1% /boot/efi
/dev/nvme1n1p6 3275212176 112733616 2996032176 4% /home
/dev/loop1 69504 69504 0 100% /snap/lxd/22753
/dev/loop3 68864 68864 0 100% /snap/lxd/21835
/dev/loop4 44672 44672 0 100% /snap/snapd/14978
/dev/loop5 63488 63488 0 100% /snap/core20/1587
/dev/loop6 48128 48128 0 100% /snap/snapd/16292
tmpfs 13189280 0 13189280 0% /run/user/0
/dev/loop7 63488 63488 0 100% /snap/core20/1593

Here is fstab:

# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
# / was on /dev/nvme1n1p4 during curtin installation
/dev/disk/by-uuid/b0745115-d688-456f-849b-cc0164865fb3 / ext4 defaults 0 1
/dev/disk/by-uuid/3091aeae-58a1-4366-89e3-4d334a649b5d none swap sw 0 0
# /home was on /dev/nvme1n1p6 during curtin installation
/dev/disk/by-uuid/062288f2-f689-4dd0-8216-af27948be57d /home ext4 defaults 0 1
# /var was on /dev/nvme1n1p2 during curtin installation
/dev/disk/by-uuid/0f685e88-a3d5-4f83-b01e-863c0a155875 /var ext4 defaults 0 1
# /boot was on /dev/nvme1n1p3 during curtin installation
/dev/disk/by-uuid/29276026-aac0-4911-8b08-2882b1c8073f /boot ext4 defaults 0 1
# /boot/efi was on /dev/nvme1n1p1 during curtin installation
/dev/disk/by-uuid/6388-CCBB /boot/efi vfat defaults 0 1
# /backup_disk was on /dev/nvme0n1p1 during curtin installation
/dev/disk/by-uuid/977ec893-aca5-4d18-9e42-fa6082438cf5 /backup_disk ext4 defaults 0 1
/swap.img none swap sw 0 0
proc /proc proc defaults,hidepid=2 0 0
/usr/.tempdisk /tmp ext4 loop,rw,noexec,nosuid,nodev,nofail 0 0
/tmp /var/tmp none bind 0 0
proc /proc proc defaults,hidepid=2 0 0
proc /proc proc defaults,hidepid=2 0 0
proc /proc proc defaults,hidepid=2 0 0
proc /proc proc defaults,hidepid=2 0 0
proc /proc proc defaults,hidepid=2 0 0

Ah, I can't help you - my posts get downvoted and eventually disappear on this community thanks to dreamer and master. I'm jumping ship.

Good luck - maybe if you post the df output one more time like shoaibkk suggested, it'll fix everything!

The server seems to run fine, but the drive says it's nearly full. Here is df:
Filesystem 1K-blocks Used Available Use% Mounted on
udev 65868272 0 65868272 0% /dev
tmpfs 13189284 1504 13187780 1% /run
/dev/nvme1n1p4 10218772 9267480 410620 96% /
tmpfs 65946412 108848 65837564 1% /dev/shm
tmpfs 5120 0 5120 0% /run/lock
tmpfs 65946412 0 65946412 0% /sys/fs/cgroup
/dev/nvme1n1p3 996780 210292 717676 23% /boot
/dev/nvme1n1p2 102626232 11204580 86162388 12% /var
/dev/nvme1n1p1 1098632 5356 1093276 1% /boot/efi
/dev/nvme0n1p1 3592279904 876924880 2532802500 26% /backup_disk
/dev/nvme1n1p6 3275212176 115405864 2993359928 4% /home
/dev/loop1 63488 63488 0 100% /snap/core20/1593
/dev/loop0 1474600 119576 1261840 9% /tmp
/dev/loop2 44672 44672 0 100% /snap/snapd/14978
/dev/loop4 69504 69504 0 100% /snap/lxd/22753
/dev/loop5 48128 48128 0 100% /snap/snapd/16292
/dev/loop6 68864 68864 0 100% /snap/lxd/21835
/dev/loop7 63488 63488 0 100% /snap/core20/1611
tmpfs 13189280 0 13189280 0% /run/user/0