Quick Byte: How has Apple been able to use less RAM and why is that changing?
Apple has a bit of a reputation for being lighter on Random Access Memory (RAM) than competing platforms like Android and Windows, only recently putting 8GB of RAM in the iPhone and starting the Mac at 16GB, where the aforementioned platforms have had more memory for a decade or longer.
The answer is simple: they use the XNU kernel's copy-on-write memory mechanism. To simplify how this mechanism works: when memory needs to be shared between processes on an iPhone or Mac, instead of copying that memory from process A to process B (as is done in traditional memory mechanisms), the kernel maps an abstraction over the memory so both processes can refer to it without copying the physical pages it uses.
Here is an example of allocating a vmobject. The function mach_vm_allocate takes a reference to the process (Mach task), a pointer that the task can use to access the memory, the size of the memory desired, and flags to control how this memory is allocated.
It's important to note that when this memory is read after being shared, you'll need to use the vm_read API with the task's port to get a pointer to the data, as you don't read directly from the vmobject.
Here is a code snippet to show how the Mach vmobject API works.
#include <mach/mach.h>
#include <mach/mach_vm.h> // declares mach_vm_allocate / mach_vm_deallocate
#include <stdio.h>

int main(void) {
    mach_port_t task = mach_task_self();
    kern_return_t kr;
    mach_vm_address_t address = 0;
    mach_vm_size_t size = 4096; // 4KB request, rounded up to the page size

    // Allocate virtual memory anywhere in this task's address space
    kr = mach_vm_allocate(task, &address, size, VM_FLAGS_ANYWHERE);
    if (kr != KERN_SUCCESS) {
        printf("Error allocating memory: %d\n", kr);
        return -1;
    }
    printf("Allocated memory at: 0x%llx\n", (unsigned long long)address);

    // Release the allocation when finished with it
    mach_vm_deallocate(task, address, size);
    return 0;
}
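Building on the allocation above, here is a minimal sketch (my own, not taken from Apple's documentation) of reading that memory back through the task's port with mach_vm_read rather than dereferencing the vmobject directly. The string written into the page is purely illustrative; the point is that the kernel hands the data back as an out-of-line mapping rather than an eager byte-for-byte copy.

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    mach_port_t task = mach_task_self();
    mach_vm_address_t address = 0;
    mach_vm_size_t size = 4096;

    // Allocate a page and write something into it
    if (mach_vm_allocate(task, &address, size, VM_FLAGS_ANYWHERE) != KERN_SUCCESS)
        return -1;
    strcpy((char *)address, "hello from the vm object");

    // Read the region back through the task's port; the kernel returns the
    // pages as an out-of-line buffer instead of requiring a direct pointer
    // into the vmobject
    vm_offset_t data = 0;
    mach_msg_type_number_t dataCnt = 0;
    kern_return_t kr = mach_vm_read(task, address, size, &data, &dataCnt);
    if (kr != KERN_SUCCESS) {
        printf("Error reading memory: %d\n", kr);
        return -1;
    }
    printf("Read %u bytes: %s\n", dataCnt, (const char *)data);

    // Both the returned buffer and the original allocation must be released
    mach_vm_deallocate(task, (mach_vm_address_t)data, dataCnt);
    mach_vm_deallocate(task, address, size);
    return 0;
}

Note that the buffer returned by vm_read is itself a new mapping in the caller's address space and needs to be deallocated when you are done with it.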
To put it simply, each process refers to the same physical memory (through the vmobject abstraction), which makes sharing memory much faster, saving both the time it takes to copy memory and the space needed for duplicate copies, and reducing the amount of RAM required.
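To make that concrete, here is a minimal sketch of the underlying sharing pattern: wrap a region in a named memory entry with mach_make_memory_entry_64, then map that entry again with mach_vm_map so two mappings point at the same physical pages. For brevity both mappings live in one process here; in a real cross-process scenario the entry port would be sent to the other task over Mach IPC, and the variable names are illustrative.

#include <mach/mach.h>
#include <mach/mach_vm.h>
#include <stdio.h>

int main(void) {
    mach_port_t task = mach_task_self();
    mach_vm_size_t size = 4096;
    mach_vm_address_t original = 0;

    // Back a region with a vm object
    if (mach_vm_allocate(task, &original, size, VM_FLAGS_ANYWHERE) != KERN_SUCCESS)
        return -1;

    // Wrap that region in a named memory entry (a port referring to the vm object)
    memory_object_size_t entry_size = size;
    mach_port_t entry = MACH_PORT_NULL;
    kern_return_t kr = mach_make_memory_entry_64(task, &entry_size, original,
                                                 VM_PROT_READ | VM_PROT_WRITE,
                                                 &entry, MACH_PORT_NULL);
    if (kr != KERN_SUCCESS) {
        printf("mach_make_memory_entry_64 failed: %d\n", kr);
        return -1;
    }

    // Map the same entry a second time; in a real scenario this port would be
    // sent to another task, which would map it into its own address space
    mach_vm_address_t alias = 0;
    kr = mach_vm_map(task, &alias, size, 0, VM_FLAGS_ANYWHERE, entry, 0,
                     FALSE /* share the pages rather than copying them */,
                     VM_PROT_READ | VM_PROT_WRITE, VM_PROT_READ | VM_PROT_WRITE,
                     VM_INHERIT_NONE);
    if (kr != KERN_SUCCESS) {
        printf("mach_vm_map failed: %d\n", kr);
        return -1;
    }

    // A write through one mapping is visible through the other: same physical pages
    *(int *)original = 42;
    printf("Value seen through the second mapping: %d\n", *(int *)alias);
    return 0;
}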
This feature is enabled by the kernel itself through the Mach layer and is prevalent throughout the system. When you build a Mac or iOS app, shared frameworks like Core Foundation and SwiftUI are mapped into your process rather than copied into every app that needs them, saving greatly on system resources. These frameworks live in the dyld shared cache, keeping them readily available and performant.
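As a quick way to see this mapping in action, here is a small sketch that asks the dynamic linker which image provides printf; it uses a libc symbol instead of a full framework purely to keep the example dependency-free, and on macOS and iOS that image is mapped into the process from the shared system libraries rather than copied.

#include <dlfcn.h>
#include <stdio.h>

int main(void) {
    // Ask the dynamic linker which image provides printf; system libraries
    // and frameworks are mapped into the process, not duplicated per app
    Dl_info info;
    if (dladdr((const void *)&printf, &info) != 0) {
        printf("printf is provided by %s (image base %p)\n",
               info.dli_fname, info.dli_fbase);
    }
    return 0;
}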
In recent years Apple has started to include more RAM in its computers due to the increasing need for on-device AI. With AI inference there is a fundamental tradeoff: more RAM allows larger models to stay resident and deliver faster, more real-time responses. Adding RAM is expensive and has some implications for increased power consumption, but it's likely the right choice: in unified memory architectures such as Apple silicon, when the RAM isn't needed by AI it can be used for regular applications, the file cache, and whatever other use cases come along.