is ballooning a bad sign?
Hello guys again!
In a typical VPS, it shows:
free -h
               total        used        free      shared  buff/cache   available
Mem:            8.0G        4.5G         40M        145M        3.5G        3.4G
Swap:           8.0G        218M        7.8G
is it a bad sign?
Comments
YES
they all --- OVERSELLLLL seeeellllllll sellllll -----
sale
https://www.linuxatemyram.com/
buff/cache is available to you, the OS just accelerates stuff with it.
Free NAT KVM | Free NAT LXC
On Linux, when RAM is not actually being used by applications, instead of leaving it idle the kernel uses it for buffering and caching. If an application needs that memory, Linux will release it.
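A quick way to watch this happen (the file path below is just an example; any large readable file works):
free -h
cat /var/log/syslog > /dev/null
free -h
The buff/cache column grows by roughly the size of what you read, while "available" stays about the same, because that cache is reclaimable.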
Anyone have friends who try to clear their mobile RAM cache?
Action and Reaction in history
Samsung smartphones are at ~70% RAM usage out of the box.
I mean this with a sense of humour and no intent to wound, but the only thing that is a sign of is that you need to do more reading on how Linux handles RAM. That is not ballooning.
https://inceptionhosting.com
Please do not use the PM system here for Inception Hosting support issues.
did this happen on one of your
OOuhh
kidding with you man.
@seenu
Here's something you could try if you haven't noticed it before. On a Linux machine, reboot, then launch a big application, maybe Firefox. Notice how long the application takes to load. Then close it and reopen it. The second time it will be way faster! Why? Because, when there's enough memory, Linux often keeps the application in RAM even after it's been closed, so on reopening it is reloaded straight from RAM instead of being read in again from the filesystem. Apparently the kernel has sophisticated ways of keeping track of which applications are likely to be reopened, how much instantly available memory needs to be preserved, etc.
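You can reproduce the same effect without Firefox, using any large file (the path below is a placeholder, and writing to drop_caches needs root):
sync; echo 3 > /proc/sys/vm/drop_caches
time cat /path/to/some-large-file > /dev/null
time cat /path/to/some-large-file > /dev/null
The first read comes from disk; the second is served from the page cache and finishes much faster.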
I can see the memory "ballooning" effect on my server. If I run a service on a 2 GB VM, the system grabs almost all the memory. If I say, "Oh my, it needs more memory," and double the memory to 4 GB, the system persists in grabbing almost all of the memory. Depending on what services are running, increasing the memory and the swap can make a huge difference, or not much difference. But the percentage of memory in use doesn't necessarily tell you much all by itself.
At least, this seems more or less how it was explained to me and what I think I might be seeing on my server.
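For what it's worth, the numbers I've learned to trust more than the "used" column (standard procps tools, nothing exotic):
free -h   (the "available" column is the kernel's estimate of how much memory applications can still claim without swapping)
ps -eo rss,comm --sort=-rss | head   (actual per-process resident memory, biggest first)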
Best wishes from Sonora!
MetalVPS
Why isn't this system using more RAM for buffering?
PinePhone, just booted: [memory usage screenshot]
After opening Firefox: [memory usage screenshot]
Webhosting24 aff best VPS; ServerFactory aff best VDS; Cloudie best ASN; Huel aff best brotein.
You're really asking why a server with 256G is not using more than 13G for buffers/cache? Is there even enough data in use to be cached? ;-)
Does it perhaps use ZFS, where the ARC might not be shown as buffers/cache but instead counted towards the occupied memory?
This server is compiling large programs. The Go linker is reading many files. I'm hoping for even faster compilation.
It's an ext4 filesystem on an Intel NVMe drive.
I'll consider ZFS when it's due for reinstall.
Occupied memory is 128GB of hugepages.
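If anyone wants to see where that memory actually sits, /proc/meminfo breaks it down; hugepages are reported separately from Buffers/Cached (the field names below are the standard ones):
grep -E 'HugePages|Buffers|^Cached|MemAvailable' /proc/meminfo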
1. To clear PageCache only, run:
sync; echo 1 > /proc/sys/vm/drop_caches
2. To clear dentries (also called the directory cache) and inodes, run:
sync; echo 2 > /proc/sys/vm/drop_caches
3. To clear PageCache, dentries, and inodes, run:
sync; echo 3 > /proc/sys/vm/drop_caches
Running sync writes out dirty pages to disk. Dirty pages count as memory in use, so they are not available for freeing; running sync first helps the ensuing drop operations free more memory.
Page cache is memory held after reading files. The Linux kernel prefers to keep unused page cache, assuming files read once are likely to be read again in the near future, hence avoiding the performance impact of disk IO.
dentry and inode caches are memory held after reading directory and file attributes, for example via open() and stat(). The dentry cache is common across all filesystems, but the inode cache is per-filesystem. The kernel likewise keeps this information assuming it will be needed again in the near future, hence avoiding disk IO.
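If you want to see how much each of those actually frees, bracket the drop with free (root needed, same as the commands above):
free -h
sync; echo 3 > /proc/sys/vm/drop_caches
free -h
Just note this only throws away cache the kernel would have reclaimed on demand anyway, so it's a diagnostic, not a speedup.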
TL;DR:
https://www.thegeekdiary.com/how-to-clear-the-buffer-pagecache-disk-cache-under-linux/
Time for PowerPacker
You need more CPU power for faster compilation. In your case, 256GB of RAM vs 100TB of RAM will perform about the same.
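A quick way to check whether the build really is CPU-bound rather than IO-bound (vmstat ships with procps):
vmstat 1
While compiling, if the us+sy CPU columns are pegged and bi (blocks read in) sits near zero, the sources are already coming from cache and only more or faster cores will help.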