Can your process (or VM) allocate more memory than is physically available on the underlying host?
Kaushik Banerjee ( He/Him/His )
SVP | Autonomous & Accountable DevOps, APAC SRE Head for Trading Tech | Execution, Empathy & Unleashing Team's Potential | I help Organizations reduce TOIL, MTTR & MTTD while Improving Resiliency & Reliability
While trying to figure something out for Linux VMs on new hardware, I noticed that the cheapest tier had the following "overcommit".
CPU --> 5:1
memory --> More than 130%
I was very intrigued. I could wrap my head around CPU overcommitment, since I have always been aware of CPU cycles being shared between processes and users; CPU overcommitment is just a logical step in that direction.
To wrap my head around memory overcommitment, I had to do a bit of digging around.
tl;dr: Meaningful memory overcommitment is really only practical on 64-bit (x86-64) hardware, where the virtual address space dwarfs physical RAM. And on Linux machines (at least the Red Hat ones I checked), heuristic overcommitment is the default, with an overcommit_ratio of 50 that only comes into play if you switch to strict accounting.
So imagine your code needs to allocate 16 GB of memory on a machine with 8 GB of RAM.
On a 32-bit O/S, the allocation simply fails: the request cannot even fit inside the process's address space, so malloc returns NULL.
A 32-bit O/S gives each process only a 4 GB virtual address space, typically split on Linux into roughly 1 GB of kernel space and 3 GB of user space, no matter how much physical RAM is installed.
On a 64-bit O/S, the same 8 GB of physical RAM sits inside a vastly larger virtual address space (up to 128 TB per process with 48-bit addressing).
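To make the 64-bit point concrete, here is a minimal C sketch of my own (not from the original post; the name reserve_vas.c is just illustrative) that tries to reserve 16 GiB of virtual address space with a PROT_NONE mapping. On a 64-bit Linux build this normally succeeds, because nothing physical is committed; on a 32-bit build the size cannot even be represented.

```c
/* reserve_vas.c: illustrative sketch, assuming Linux and a 64-bit toolchain.
 * Build: gcc -o reserve_vas reserve_vas.c
 */
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    unsigned long long want = 16ULL << 30;   /* 16 GiB */

    printf("pointer size: %zu bits\n", sizeof(void *) * 8);

    if (want > SIZE_MAX) {
        /* On a 32-bit build, size_t cannot even hold 16 GiB. */
        fprintf(stderr, "size_t cannot represent 16 GiB on this build\n");
        return 1;
    }

    /* PROT_NONE mappings are pure address-space reservations: no RAM,
     * no swap, and no commit accounting until the pages are made
     * accessible (e.g. via mprotect) and actually touched. */
    void *p = mmap(NULL, (size_t)want, PROT_NONE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap");
        return 1;
    }
    printf("reserved 16 GiB of virtual address space at %p\n", p);
    munmap(p, (size_t)want);
    return 0;
}
```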
Being able to actually use that headroom comes down to memory overcommitment, which all modern O/Ses support, but I will use Linux as the example.
Linux memory overcommitment is a feature that allows processes to allocate more memory than is physically available in the system. The kernel achieves this by handing out virtual memory rather than physical memory; pages are only backed by real RAM when a process actually touches them.
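Here is a small C sketch of my own (assuming a 64-bit Linux box with more than 4 GB of RAM; lazy_alloc.c is a hypothetical name) that shows this lazy backing in action: VmSize jumps the moment malloc() returns, but VmRSS only grows for the pages that get touched.

```c
/* lazy_alloc.c: illustrative sketch, not from the original post.
 * Build: gcc -o lazy_alloc lazy_alloc.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Print the VmSize (virtual) and VmRSS (resident) lines from
 * /proc/self/status so we can watch the two numbers diverge. */
static void show_mem(const char *label) {
    char line[256];
    FILE *f = fopen("/proc/self/status", "r");
    if (!f) { perror("fopen"); return; }
    printf("--- %s\n", label);
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "VmSize:", 7) || !strncmp(line, "VmRSS:", 6))
            fputs(line, stdout);
    fclose(f);
}

int main(void) {
    size_t size = 4UL << 30;                /* "allocate" 4 GiB */

    show_mem("before malloc");

    char *buf = malloc(size);
    if (!buf) { perror("malloc"); return 1; }
    show_mem("after malloc (nothing touched)");   /* VmSize jumps, VmRSS barely moves */

    memset(buf, 1, 256UL << 20);            /* touch only the first 256 MiB */
    show_mem("after touching 256 MiB");     /* VmRSS grows by roughly 256 MiB only */

    free(buf);
    return 0;
}
```

If you watch the process in top while it runs, the VIRT column mirrors VmSize and RES mirrors VmRSS.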
The /proc/sys/vm/overcommit_* files contain the values that determine how the kernel hands out virtual memory: overcommit_memory (default 0), overcommit_kbytes (default 0) and overcommit_ratio (default 50).
A value of 0 (the default) in overcommit_memory tells the kernel to perform heuristic memory overcommitment: requests are granted unless they are obviously larger than the machine could ever back. The 50 in overcommit_ratio only comes into play in strict mode (overcommit_memory = 2), where the commit limit becomes swap plus 50% of physical RAM.
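A quick way to see these knobs on your own box is to read them straight out of /proc. The sketch below (mine, not from the post) also prints CommitLimit and Committed_AS from /proc/meminfo, which is where the kernel tracks how much memory has been promised versus the limit enforced in strict mode.

```c
/* show_overcommit.c: illustrative sketch, assuming Linux.
 * Build: gcc -o show_overcommit show_overcommit.c
 */
#include <stdio.h>
#include <string.h>

/* Print the single-line contents of a /proc sysctl file. */
static void print_file(const char *label, const char *path) {
    char buf[128] = "";
    FILE *f = fopen(path, "r");
    if (f) {
        if (fgets(buf, sizeof buf, f))
            buf[strcspn(buf, "\n")] = '\0';
        fclose(f);
    }
    printf("%-20s %s\n", label, buf);
}

int main(void) {
    /* 0 = heuristic, 1 = always overcommit, 2 = strict accounting */
    print_file("overcommit_memory:", "/proc/sys/vm/overcommit_memory");
    /* Only used in mode 2: CommitLimit = swap + RAM * ratio / 100 */
    print_file("overcommit_ratio:", "/proc/sys/vm/overcommit_ratio");

    /* CommitLimit is the ceiling used in strict mode; Committed_AS is
     * how much has already been promised to all processes. */
    char line[256];
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) { perror("fopen"); return 1; }
    while (fgets(line, sizeof line, f))
        if (!strncmp(line, "CommitLimit:", 12) ||
            !strncmp(line, "Committed_AS:", 13))
            fputs(line, stdout);
    fclose(f);
    return 0;
}
```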
The overcommitment is based on the (mostly true) assumption that when a process or a VM asks for, say, 12 GB of memory, it doesn't actually use all of it in one go, so a fair bit remains available for other processes.
So yes, depending on the overcommit settings, your code can ask for 12 GB of memory on a machine with only 6 GB of physical RAM.
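As a sketch of that claim (again my own code, with an illustrative 12 GiB figure, not something from the post), the program below asks for 12 GiB and then touches only 1 GiB of it. Whether the request is granted depends on the overcommit mode and available swap, so treat it as something to experiment with rather than a guarantee.

```c
/* big_ask.c: illustrative sketch asking for more memory than many
 * machines physically have. Whether it succeeds depends on
 * vm.overcommit_memory:
 *   0 (heuristic) - obviously oversized requests (bigger than RAM + swap) are refused
 *   1 (always)    - the request is granted regardless
 *   2 (strict)    - granted only while Committed_AS stays under CommitLimit
 * Build: gcc -o big_ask big_ask.c
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void) {
    size_t size = 12UL << 30;               /* ask for 12 GiB */

    char *buf = malloc(size);
    if (!buf) {
        perror("malloc refused the request");
        return 1;
    }
    printf("got %zu GiB of virtual memory\n", size >> 30);

    /* Touch only 1 GiB: this is all the physical memory the process
     * really consumes. Touching much more than the machine has would
     * eventually wake up the OOM killer. */
    memset(buf, 1, 1UL << 30);
    puts("touched 1 GiB of it; the rest stays purely virtual");

    free(buf);
    return 0;
}
```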
Overcommitment allows more processes and VMs to run in parallel. The downside is that the kernel has to keep servicing page faults, maintaining virtual-to-physical mappings and, under real memory pressure, swapping or even invoking the OOM killer, so heavily overcommitted workloads tend to run slower.
So for applications that have a need for speed, we always ask for dedicated/pinned CPUs and memory. No sharing with noisy neighbors :)