VMM / Page File / Swap Space

Someone posted this on a forum I frequent. It struck me as a pretty good explanation for most beginners, so I thought I'd re-post it here. (Even if the poster did later admit, when I asked if I could copy it over, that they wrote it on an iPad late at night!)

VMM

There's a difference between virtual memory and swap. I'm going to simplify, generalize, and maybe lie just a little for the purpose of making this comprehensible. The details of actual implementations are arcane, require a fair bit of backstory themselves, and really don't matter unless you're hacking on an OS kernel. The important part is that the model described here will correctly predict and describe how the computer behaves, even if the mechanism of action isn't 100% accurate (it'll still be about 95% correct).

On Virtual Memory Managers
You have a bunch of RAM. Each little piece gets an address, from 1 up to "a zillion". If you want to read or write data from/to memory, you supply the address (and, if you're writing, the data) and the memory stores or retrieves it as appropriate.

The thing that sucks about that is that only one program can ever be using memory at a time. If I write a word processor and decide to store a user's documents at address 100 and up, and you write a game and decide to store the player's health at address 101, then we're going to fight. My updates replace your data, your updates replace my data. The only solutions are to never let two programs run, or to have every single program agree never to touch the same memory address. Neither of these is workable.

Solution: give the operating system all of the memory! But how will programs get the memory they need to run? The OS can provide routines for that. Your software says "hey Windows, I need some memory" and the OS responds, "Sure, here's 10MB, it starts at address 1000".

When your program writes data to address 1000 it does so by going through the OS. It says "hey OS, at address 1000 put the letter 'B'". The OS says "sure thing" and then does its own thing. The OS actually looks through the entire block of physical memory, finds a place to store the value, then records it: "Program 1, address 1000 = physical memory, address 193493."

The same happens when my program runs – I ask for memory, the OS gives me an address things start at, and when I store data the OS puts it somewhere physical, makes a note of which program and virtual address it belongs to, and tells me everything is fine. The physical address might be 27, but my program addresses it as 1000 just like yours.

When your program reads data back from (what it thinks is) address 1000, the OS looks up what physical address that maps to, reads the value, and makes sure that's what your program sees when it looks at address 1000. The real address 1000 might have anything (or nothing) in it, but your program won't know, because the OS is forcing it to interact with imaginary (or virtual) memory. When my program requests the same address, it does a similar lookup. Neither of our programs needs to be concerned with what anybody else is doing, and it's impossible to accidentally interfere with somebody else's software.

This whole process of separating the addresses your programs interact with from the real physical addresses of memory is handled by a system called ‘the virtual memory manager’.
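
If it helps to see that bookkeeping as code, here's a toy sketch in Python. Nothing about it resembles a real kernel – real VMMs translate whole pages at a time with hardware help, not single bytes, and every name here is made up for illustration:

```python
# Toy model of the bookkeeping described above: each (program, virtual
# address) pair is mapped onto its own physical slot by a lookup table.

physical = bytearray(1_000_000)   # "the real RAM"
page_table = {}                   # (program, virtual address) -> physical address
next_free = 0                     # naive allocator: hand out slots in order

def write(program, virtual_addr, value):
    """Store one byte at a program's virtual address."""
    global next_free
    key = (program, virtual_addr)
    if key not in page_table:     # first touch: claim a physical slot
        page_table[key] = next_free
        next_free += 1
    physical[page_table[key]] = value

def read(program, virtual_addr):
    """Read the byte back through the same translation."""
    return physical[page_table[(program, virtual_addr)]]

# Two programs both use "address 1000" without fighting:
write("word processor", 1000, ord("B"))
write("game", 1000, 99)           # the player's health
assert read("word processor", 1000) == ord("B")
assert read("game", 1000) == 99
```

The lookup table is the whole trick: it's what keeps the two programs' "address 1000" from ever colliding.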

On paging
Now that your applications aren't addressing memory directly, why do we even need physical RAM at all? So long as we have 'free' memory we can always back every memory request with physical RAM. But suppose the virtual memory manager just hands out 1GB of memory to every application that spawns, or you run so many applications that they need more than the amount of physical RAM you have. At that point it can either crash with an out-of-memory error or start pulling some tricks.

Applications never really access RAM directly – so the VMM doesn't really need to back every request with physical memory. All it needs to do is ensure that whenever you ask for the data at some address it's able to return the right value, and whenever you write data it stores it correctly.

So, what I can do is write my VMM to monitor which RAM isn't allocated to backing an application's memory request. If it ever gets completely claimed, I can start to cheat. I can say "okay, take the first 100 megabytes of RAM, write that out to disk, and pretend I never allocated it in the first place". Great – now the VMM can continue handing out memory so long as the amount of memory allocated is less than RAM + disk space. The process of taking data in RAM and writing it to disk is called "paging out".

Paging out is slow, but it usually doesn't make your programs feel slow, because there are things you can do to respond to memory requests quickly. For example, you can write some rarely used portion of RAM out to disk before you actually run out of physical memory; then, when the VMM finally gets a request it can't back with physical memory, it just drops the in-RAM copy of the stuff it has already written to disk. No big deal. Operating systems will page out data "just in case" they need to later, in order to improve performance, and if that turns out to be wasted effort it's no great loss. If you need to read data that was pre-emptively written out this way, it still exists in RAM too, so there's no penalty for accessing it. It's not until that RAM gets used to back another allocation request that you'll have issues.

So, what happens when some program wants to read data that was stored in that first 100 megabytes of memory – the data that now lives on disk and has been pushed out of RAM? This situation is called a "page fault". When it happens, the VMM needs to read the data back into physical RAM and then respond to the request. Unfortunately, all of the physical RAM is spoken for (or it wouldn't have purged the data from RAM in the first place). So the VMM has to find 100 different megabytes of RAM to page out; then it can read that first 100 megabytes back in (a process called "paging in") and finally respond to the request. Paging in is slow, and if it happens in response to a page fault (and not just the VMM trying to keep things snappy) then you'll feel it, because your program is forced to wait on the disk before it can continue. Page faults suck balls, and they're the thing you really want to avoid. More physical memory is the best solution to page faults, but smarter VMMs can make them less likely.
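
Here's the whole cycle as a toy simulation – again in Python, with invented names and a comically small three-page "RAM". Real VMMs track pages with hardware help, but the accounting has the same shape:

```python
from collections import OrderedDict

RAM_CAPACITY = 3                  # pretend physical RAM only fits 3 pages
ram = OrderedDict()               # page id -> data, kept in LRU order
pagefile = {}                     # page id -> data that was "paged out"
page_faults = 0

def touch(page_id):
    """Access a page, paging in (and something else out) if needed."""
    global page_faults
    if page_id in ram:
        ram.move_to_end(page_id)  # mark as most recently used
        return ram[page_id]
    page_faults += 1              # not resident: this is a page fault
    if len(ram) >= RAM_CAPACITY:
        victim, data = ram.popitem(last=False)  # evict least recently used
        pagefile[victim] = data                 # "page out" to disk
    ram[page_id] = pagefile.pop(page_id, f"page {page_id}")  # "page in"
    return ram[page_id]

for p in [1, 2, 3, 1, 4, 2]:      # touching 4 evicts page 2; 2 faults back in
    touch(p)
print(page_faults)                # 5: three cold faults, then 4, then 2 again
```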

The data written out from RAM to disk is typically stored in something called a "pagefile", "swapfile" or just "swap". When you're dicking around in the system control panel in Windows, this is what you're actually messing with: you're adjusting how much disk space Windows is allowed to page RAM out into. In a way, you're limiting the total amount of memory that can ever be allocated to RAM size + pagefile size. You're not (directly) influencing how the swap file or physical memory is used to back requests – at least not in the way that setting a color depth or resolution for your screen alters how DirectX does its work. The only way to directly influence the process in a precise and meaningful way is to disable the pagefile entirely. This is widely considered a bad idea: some programs won't run, you prevent efficient caching of the file system, you increase the likelihood of crashes due to low memory, and you don't in any significant way improve performance for regular applications.

On optimizations
The thing about VMMs and paging is that they don't have to be the simple stuff we imagined here. For example, the VMM knows how frequently bits of memory are accessed and when they were last used, so it can make a pretty good guess as to what's safe to unload. A VMM might even choose to page stuff out (or at least write it to disk "in case" it needs to later) even when there's no stress on memory – that way, if a huge demand for memory does arrive, it can respond almost instantly. Apple goes a step further and actually starts "zipping up" data in memory before it needs to start paging: compressing memory frees up RAM, and decompressing data when it's needed is pretty darn fast, so the performance impact is tiny. Memory compression tends to work pretty well: you can get a 25–50% reduction for "almost free", which means the OS can back up to 24GB of virtual memory (remember, that's memory assigned to applications, not paged-out data in swap files) with only 16GB of physical RAM. And if it ever does need to page out, it's way easier to write 1GB worth of memory content to disk if it's already compressed down to 500MB – and faster to read that back in and decompress than it would be to read it back raw and uncompressed. Every VMM has tricks like this – it's a very mature field.
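
For a feel of the idea, here's a rough illustration using Python's zlib. Real OS memory compressors use much faster special-purpose algorithms, and real application memory compresses less dramatically than this artificially repetitive sample will – the 25–50% figure above is for realistic workloads:

```python
import zlib

# Fake "application memory": repetitive, the way real heaps tend to be
# (just far more so, since it's a single repeated string).
page = b"struct { int refcount; char name[24]; } " * 100

compressed = zlib.compress(page, 1)   # low compression level = fast
print(len(page), "->", len(compressed), "bytes")

# The payoff is twofold: compressed pages occupy less RAM while they sit
# there, and if one ever does get paged out, moving half as many bytes
# to and from the disk is correspondingly faster.
```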

Windows defaults and stuff
By default, Windows has a pretty simple algorithm for deciding how big the page file is allowed to be, and it will use that space "on demand" based on what the VMM thinks is best. In some sense, more pagefile space is good, because it means the VMM has more flexibility in how it backs requests for memory. Unfortunately, page files eat up space on disk, and if you've got a small drive (256GB isn't exactly huge) then giving up 10GB to page files might not be desirable. When you set the maximum page file size to 2GB you're saying "Hey Windows, I would rather have you start crashing after handing out ~18GB of memory than fill up my disk."
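
Back-of-envelope, assuming the 16GB machine from earlier (Windows calls this ceiling the "commit limit", and the real accounting has a few more wrinkles):

```python
# Roughly: total memory the OS can promise = physical RAM + max pagefile.
physical_ram_gb = 16
max_pagefile_gb = 2
print(physical_ram_gb + max_pagefile_gb)   # 18 - allocations fail past this
```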

Whether you're "only using RAM" or "using less of the SSD" for paging isn't something we can answer easily. Windows will page even when there's no memory pressure. Certainly, if you put a bucketload of stress on memory, then by limiting your pagefile size you'll force Windows to use less of the SSD – but you're also forcing applications to crash instead of simply backing their memory requests after paging something out. On the other hand, if there's only 2GB of data that can ever be paged out, then you force Windows to keep everything else in physical memory. If you had a really badly designed VMM that might be desirable, but Windows (and every other modern OS) doesn't have a shitty VMM. They're all going to try to avoid disk thrashing as much as possible, because HDDs and SSDs are very slow compared to RAM.

On RAPID
Everybody knows disks (any disks) are slow. Whenever possible you want to avoid accessing a drive, and when that's not possible you want to avoid waiting on one. One of the tricks for this is caching. When you request a file from a drive, Windows will read that file, maybe try to guess what other data you need and read that before you even request it, and then store that data in RAM. If you ever need to read that data again, it'll pull it from the stored copy in RAM and just pretend it came from the disk. Instead of waiting 0.3s for an HDD to respond, you wait 0.00001s for RAM to respond. So long as the data is the same: who cares? This works because your read/write requests to the disk also go through an abstraction layer, sort of like how you never really get to access RAM directly – you just go through the VMM. Windows can see all the data accessed by all applications and store parts of it in RAM, so it may even be able to respond to drive requests from memory the first time you make a request. (For example: nearly every application is going to access the shared libraries that are used to draw windows on screen. Those are loaded up once when Windows first boots, and nearly every program can benefit from having them cached.)
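
Here's a minimal read-through cache, sketched in Python. The file names, contents, and the fake 10ms "disk" delay are all invented, and real caches work on blocks rather than whole files, but the shape is the same:

```python
import time

disk = {"user32.dll": b"...window-drawing code...",
        "report.docx": b"...your document..."}
cache = {}

def read_file(name):
    if name in cache:
        return cache[name]        # served from RAM: effectively instant
    time.sleep(0.010)             # simulate waiting on the drive
    data = disk[name]
    cache[name] = data            # keep a copy for next time
    return data

read_file("user32.dll")           # slow: goes to the "disk"
read_file("user32.dll")           # fast: straight out of the cache
```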

Going the other way works too. Instead of writing data directly to disks (which is very slow), Windows can store your write request in memory, tell your application the write has completed so that it can keep on working, and then, at its leisure, actually write it out to disk. It does a few extra things along the way, like ensuring the file cache is kept current (if you write to a file that's cached in memory, that cache is updated immediately without going to disk).
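
The write side, sketched the same way (invented names again – the point is that the application returns immediately and the slow disk write happens later, with back-to-back writes to the same file collapsing into one, which is the "coalescing" that comes up below):

```python
pending = {}                      # filename -> latest data, waiting in RAM
cache = {}                        # the read cache, kept current on writes

def write_file(name, data):
    pending[name] = data          # returns immediately: no disk wait
    cache[name] = data            # readers see the new contents right away

def flush():
    """Runs later: on a timer, under memory pressure, at shutdown..."""
    for name, data in pending.items():
        pass                      # the actual slow disk write would go here
    pending.clear()

write_file("report.docx", b"draft 1")
write_file("report.docx", b"draft 2")  # overwrites draft 1 before it was
flush()                                # ever written: one disk write, not two
```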

The thing with file caching is that what stays in the cache is a bit of a crap-shoot – for starters, cached files use up memory that might also be needed by the VMM to back memory requests. When you start putting pressure on memory, it's likely that file caches get flushed and purged (because the data can always be read back from the disk). Then again, a VMM can be clever here too. It might notice: "Application X has never touched this chunk of RAM backing its memory, but this cached file Y is getting read twice a second. Instead of flushing the file cache, I'm going to page out application X's memory and keep the cache."

In this case your VMM, page file, and file cache have all conspired to improve performance substantially. It’s better to suffer one page out now, and maybe a page in later, than to make 7500 file reads per hour run 10000x slower.
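
With made-up but ballpark-plausible latencies (not measured from any real drive), the arithmetic looks like this:

```python
ram_read_s = 0.0000001       # ~100 nanoseconds per cached read
hdd_read_s = 0.010           # ~10 milliseconds per seek + read
reads_per_hour = 7500

flushed = reads_per_hour * hdd_read_s                 # cache got purged
kept = reads_per_hour * ram_read_s + 2 * hdd_read_s   # one page out + one page in
print(f"{flushed:.1f}s vs {kept:.3f}s of waiting per hour")   # 75.0s vs 0.021s
```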

RAPID takes control away from Windows and says "fuck you, we're always keeping a 1GB cache of file data in memory." RAPID then uses its own algorithms to figure out what gets to be in that cache. If RAPID does a better job than Windows, then you'll see better application performance in "the real world". If RAPID does a shittier job than Windows, then you'll waste some memory and maybe performance gets worse (because paging has to start happening sooner). On average there may be some gains, but they're not huge. RAPID does a few things that Windows doesn't (like write coalescing and deferral), but outside of synthetic benchmarks they aren't going to make night-and-day differences.

The only real conflict is that you tie up whatever memory RAPID is using, so it can never be used to back applications' requests for memory. If you always have huge amounts of free memory, then there's no real harm here: you're backing pretty much every VMM-allocated memory request with physical RAM and not paging, so it doesn't really matter. Using some of that free memory as a file cache probably won't hurt and it might help, so why not?

If you find you're frequently paging in/out, then it's probably not a good idea to use RAPID, because it'll just make the situation worse – but for most users that's probably not a concern. As you can see, RAPID, VMMs, and swap aren't really the same thing. While they might interact, it's mostly not an important concern unless you have a very small amount of memory relative to the tasks you're trying to accomplish.
