I would like to tweak some settings on my server to better use the resources I have.
The Linux server in question has 32 GB of RAM and a 250 GB SSD, and runs general Docker containers of the self-hosted variety, no AI. Currently, the server has no swap, with swappiness set at 60.
Swap: 0 0
vm.swappiness = 60
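(For context, those two lines are just trimmed output from the usual tools, along the lines of:)
free -m | grep -i swap          # the Swap: line, all zeros here
sysctl vm.swappiness            # prints vm.swappiness = 60
cat /proc/sys/vm/swappiness     # same number straight from procfs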
Generally, only one app is being engaged at a time and everything seems to respond fairly well.
So, my question is: what would be a good swap size and swappiness level for this server? I realize that all of this depends on how the server is used and the load level during operation. Is there a rule of thumb to go by? I’m sure there will be some tweaking involved, but I was looking for a good starting point to deviate from. Where do you guys set swap and swappiness?
Thanks
Yeah I mean it really depends on the app.
If it just occasionally peaks, maybe a modest zram and low swappiness are all you need.
But sometimes I crank it up (and basically disable zram) because I’m paging like 4x my RAM pool size onto SSDs with big files.
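If you want to try the zram route, one manual way to stand it up looks roughly like this (a sketch, run as root; the 4 GB size is arbitrary, and most distros would rather you use zram-generator or zram-tools for a persistent setup):
modprobe zram
zramctl --find --size 4G            # allocates a device and prints its name, usually /dev/zram0 on a fresh boot
mkswap /dev/zram0                   # format that device as swap
swapon --priority 100 /dev/zram0    # enable it ahead of any disk-backed swap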
If you’re genuinely attempting to quantify this, you can create a swap file of any size right there on your drive. You could iterate and test every setting for every scenario. You could even change settings dynamically if you wanted to.
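For example, a throwaway test swap file plus a runtime swappiness change is only a handful of commands (a sketch, run as root; the 4 GB size is arbitrary, this assumes a regular ext4/xfs filesystem, and the sysctl change doesn’t survive a reboot):
fallocate -l 4G /swapfile    # allocate the file (dd works too if the filesystem doesn't like fallocate for swap)
chmod 600 /swapfile          # swap files must not be readable by other users
mkswap /swapfile             # write the swap signature
swapon /swapfile             # start using it
sysctl vm.swappiness=10      # change swappiness on the fly while you test
swapoff /swapfile            # and turn it back off when you're done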
That said, I leave it to the kernel to figure out and over the past 25 or so years that’s been fine.
past 25 or so years that’s been fine.
Well, two independent sources with 25+/- years of experience say leave it alone. It sounds to me like I should leave it alone.
About that.
Just because I’ve done it this way and haven’t had issues, doesn’t mean it’s the best or only way.
You dared to ask a question and the tools to explore answers are readily available.
This is how we as a society make progress.
Please don’t feel like my experience is the final answer to your question … my experience tells me that this is rarely … if ever … the case.
So … please … explore!
You dared to ask a question and the tools to explore answers are readily available.
Right, however, before I go ‘test’ and screw things up, why not dare to consult with more knowledgeable sources? Maybe I have not taken into account other things that could be negatively affected by said testing. I mean, it’s as if you came to me and said, ‘Hey bro, I’m thinking about learning how to play the guitar (something I’ve been doing for 65 years). What guidance could you offer a guy just starting out? What about equipment, type of strings, etc.?’ Sure, you could easily go out and buy a cheap, sub-$100 guitar only to have it wear your wrists and fingers out, and then quit because it’s too painful to practice. Or, you could ask the guy who has been playing the guitar and other stringed instruments for virtually all his life what guidance he could give. 😀
I appreciate your input greatly, and as I said, 25 years of experience does speak for itself.
Thank you
People always say to let the system manage memory and don’t interfere with it as it’ll always make the best decisions, but personally, on my systems, whenever it starts to move significant data into swap the system starts getting laggy, jittery, and slow to respond. Every time I try to use a system that’s been sitting idle for a bit and it feels sluggish, I go check the stats and find that, sure enough, it’s decided to move some of its memory into swap, and responsiveness doesn’t pick up until I manually empty the swap so it’s operating fully out of RAM again.
So, with that in mind, I always give systems plenty of RAM to work with and set vm.swappiness=0. Whenever I forget to do that, I will inevitably find the system is running sluggishly at some point, see that a bunch of data is sitting in swap for some reason, clear it out, set vm.swappiness=0, and then it never happens again. Other people will probably recommend differently, but that’s been my experience after ~25 years of using Linux daily.
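In concrete terms, the “clear it out and set swappiness” step I’m describing is just cycling swap and pinning the sysctl, roughly like this (as root; it needs enough free RAM to absorb whatever is sitting in swap, and the sysctl.d filename is just my own choice):
swapoff -a && swapon -a      # forces everything in swap back into RAM
sysctl vm.swappiness=0       # applies immediately
echo 'vm.swappiness = 0' > /etc/sysctl.d/99-swappiness.conf   # persists across reboots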
but that’s been my experience after ~25 years of using Linux daily.
Certainly, 25 years of experience speaks for itself. If I may ask a follow-up question:
I run Portainer, and in Portainer you can adjust Runtime & Resources per container. I am apparently too incompetent to grasp Dockge. Currently everything in Runtime & Resources is unchanged. Is there any benefit to tweaking those settings, or just let 'em eat when hungry?
I run all of my Docker containers in a VM (well, 4 different VMs, split according to the network/firewall needs of the containers each runs). That VM is given about double the RAM needed for everything it runs, and enough cores that it never (or very, very rarely) gets clipped. I then allow the containers to use whatever they need, unrestricted, while monitoring the overall resource utilization of the VM itself (cAdvisor + node_exporter + Prometheus + Grafana + Alertmanager). If I find that the VM is creeping up on its load or memory limits, I’ll investigate which container is driving the usage and then either bump the VM limits up or address the service itself and modify its settings to drop back down.
Theoretically I could implement per-container resource limits, but I’ve never found the need. I have heard some people complain about some containers leaking memory and creeping up over time, but I have an automated backup script which stops all containers and rsyncs their mapped volumes to an incremental backup system every night, so none of my containers stay running for longer than 24 hours continuous anyway.
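If you do want to experiment with limits outside Portainer, the plain Docker CLI equivalent is just a couple of flags on docker run; here’s a rough sketch with made-up names and values (as far as I know, Portainer’s Runtime & Resources fields set these same limits under the hood):
# container name, image, and numbers are hypothetical; equal --memory and --memory-swap means no swap for the container
docker run -d --name some-app \
  --memory 512m \
  --memory-swap 512m \
  --cpus 1.5 \
  some-image:latest
There’s also docker update --memory / --cpus if you want to change the limits on a container that’s already running.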
Rule of thumb is leave it alone unless you have an actual problem you need to solve.
You’re probably not even using all your RAM, so there’s no point in swapping to disk. It would only harm performance.
It really depends on how your memory gets allocated when everything is at peak utilization. If you have a process that needs 4 GB and you don’t want it to fall over when memory runs out, then you need at least that much swap, plus ~10% or so. If everything you’re running is critical, then you want swap at least the size of your RAM.