Memory Management and Metrics (Windows 2008)
Memory management in any OS can be a complex and confusing business, especially as meanings change with time, and the same counter can be called different things in different OSes from the same family.
The first thing to appreciate is that Windows 2008 likes to fill its memory up in order to improve system performance (Unix likes to do this too). For example, it will try to pre-fetch drivers etc. into memory before they're actually required, and cache files that are being accessed. The idea is that empty memory is worthless memory - it has no value if it's not being used - so you might as well populate it with stuff that might be used, and in doing so improve the system's responsiveness and performance.
|High memory usage in Virtual Machines?|
|In the virtual world this can cause concern for Virtual Infrastructure admins, as they see lots of machines using lots of memory even when they're not doing anything, having been used to Win2003 and earlier machines that were less greedy with memory. But high usage in Windows 2008 isn't necessarily a problem. The VMs are merely trying to make as good a use of the available system resources as possible.
If your ESX actually becomes congested, then it'll instruct VMTools to start inflating the balloon driver. The first thing that the OS will dump from memory is the unnecessary stuff that's been optimistically loaded into memory but isn't actually providing any worth. This is good, efficient system operation, whereby every drop of possible performance is being extracted from the underlying hardware, and the memory management techniques of both the OS and ESX are working effectively and in harmony.
- 1 System Memory Counters
- 2 Process Memory Counters
- 3 Page File
- 4 Sources and Further reading...
System Memory Counters
Total
Total physical memory (RAM) available to the OS.
Cached
This is data that has been cached into memory to improve IO times. For example, open files that are being read or written to.
Cached memory pages are broadly speaking either...
- Standby - loaded into memory ready for use, and can be immediately dropped if required
- Modified - loaded into memory and since modified, and can be flushed to disk if required
Data in the cache can be written to disk (or dropped) if the memory space is required for something else. Data in the cache does not get paged out to disk; it's only in memory in the first place to improve access times, so if there's no space for it in physical memory, there's no point writing it to virtual memory.
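The difference between the two kinds of cached page can be sketched as a toy model (all names here are invented for illustration; this is not how the Windows memory manager is actually implemented):

```python
# Toy model of Standby vs Modified cached pages: Standby pages can be
# dropped instantly, Modified pages must be written back to disk before
# their physical memory can be reused.

class CachedPage:
    def __init__(self, data):
        self.data = data
        self.modified = False       # freshly cached pages start on Standby

    def write(self, data):
        self.data = data
        self.modified = True        # page moves to the Modified list

def reclaim(page, disk):
    """Free a cached page so its physical memory can be reused."""
    if page.modified:
        disk.append(page.data)      # flush to disk first (the slower path)
    # Standby pages are simply dropped: nothing to write back
    page.data = None

disk = []
standby = CachedPage(b"read-only file data")
reclaim(standby, disk)              # dropped instantly, disk untouched

modified = CachedPage(b"file data")
modified.write(b"edited file data")
reclaim(modified, disk)             # flushed to disk before reuse
```

This is why, as described above, freeing Standby memory is instant while freeing Modified memory is not.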
Counter found in Resource Monitor
Modified
Cached memory that has been modified since it was originally loaded in. It can be freed up on demand, but not instantly: the data will need to be written to disk before the page can be re-used.
Counter found in Resource Monitor
Standby
Data that has been cached into memory and has not been modified since (though it may have been read). It can be instantly freed up on demand, being dropped without any write to disk.
Available
This is memory that can be immediately written to if required. It may currently contain cache data, but this data can be dropped and overwritten (it does not need to be written to disk first).
If your system has available memory (more than 100MB or so - though this really depends on the workload it's sustaining) then it's not experiencing physical memory constraints.
Free
Physical memory space that is completely free: it's not been populated with data (or at least any data in it has been invalidated/dereferenced - in the same way that data doesn't actually get deleted from a hard-drive, it just becomes orphaned and over-writeable).
Counter found in Resource Monitor
Committed
Memory that the OS has committed to providing to an application, normally shown as committed / total. Total includes both physical and virtual memory.
An application can request an allocation of memory from the OS, for its own use, which the OS will set aside/reserve for it. When that allocation has been completed, that amount of memory has been committed.
Committed memory needn't actually be populated with data, so needn't contribute to the amount in use. If you have no physical memory available, an application can still start and request an allocation, but that allocation will effectively be provided for by virtual memory. All committed memory needs to be backed by (serviced from) either physical RAM or the page file.
Just because an application requests a large commit and then doesn't populate it with data, doesn't mean that it's misbehaving. Certain OS API calls can cause large commits, ready in case the space is needed. See Page File for further info.
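The commit-without-populate behaviour can be demonstrated with a portable analogue. On Windows the relevant API is `VirtualAlloc` with `MEM_COMMIT`; the sketch below uses Python's `mmap` as a stand-in, on the assumption that an anonymous mapping behaves the same way (the OS commits the address space up front but only populates pages on first touch):

```python
import mmap

SIZE = 64 * 1024 * 1024             # 64 MB committed, but not yet populated

# Creating the mapping succeeds even when free physical memory is low:
# the OS commits to *providing* the pages but doesn't populate them yet.
buf = mmap.mmap(-1, SIZE)

# Pages only materialise (via demand-zero faults) when first touched:
buf[0] = 1                          # first touch of page 0
buf[4096 * 1000] = 2                # first touch of page 1000
untouched = buf[4096 * 2000]        # reads of untouched pages return 0
buf.close()
```

So a 64 MB commit here contributes almost nothing to the amount of memory actually in use until the pages are written to.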
Process Memory Counters
Working Set
Memory pages that the process currently has loaded in physical memory.
If a process attempts to access a memory page that is not in its Working Set, a page fault occurs.
Memory pages can be shared between processes and so can appear in the working set of more than one process. This commonly occurs because processes are using shared software libraries, or because they are explicitly sharing data between them (for example two processes of the same application).
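Explicit sharing of pages between processes can be sketched with Python's `multiprocessing.shared_memory` module (the block name `ws_demo` is invented for illustration; the second attachment here stands in for a genuinely separate process):

```python
from multiprocessing import shared_memory

# One process creates a named block of shareable pages...
src = shared_memory.SharedMemory(create=True, size=4096, name="ws_demo")
src.buf[:5] = b"hello"

# ...and a second process attaches by name (simulated here with a second
# handle). The same physical pages now appear in both working sets.
dst = shared_memory.SharedMemory(name="ws_demo")
seen = bytes(dst.buf[:5])           # read back via the other handle, no copy

dst.close()
src.close()
src.unlink()                        # release the shared pages
```

Because the pages are counted in every working set that maps them, summing working sets across processes can over-state total physical memory usage.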
Peak Working Set
Peak Working Set size since the process first existed.
Working Set Delta
Change in a process's working set size since the last update.
Private Working Set
Memory pages that the process currently has loaded in physical memory that are currently dedicated/private to that process (they could become shared in the future).
Commit Size
Memory reserved for use by the process - can be physical or virtual (page file provided).
Paged Pool
Memory used for kernel and device drivers' data that can be paged out to disk if required.
References to the registry and memory mapped files are often large consumers of Paged Pool memory.
Non-paged Pool
Memory used for kernel and device drivers' data that cannot/must not be paged out to disk, as it might be accessed when the system is unable to handle page faults (e.g. while it's handling hardware interrupts).
All non-paged pool memory must reside in physical memory (it can never be backed by the page file) and is therefore a limited resource. Typically the amount used by a process doesn't change that much, and a slow constant increase can be indicative of a driver or application with a memory leak.
Page Faults
Page faults occur when a process attempts to access a page of memory that is not located in its allocated physical memory, meaning that it has to be fetched from elsewhere, which will impact the process's performance. These can be either a...
Soft Page Fault
Soft page faults occur when a process attempts to access a page of memory that is not located in physical memory, but in order to service the request the page file does not need to be accessed.
This is far cheaper (and generally much more common) than a Hard Page Fault, and occurs when the memory page is actually in physical memory, just not assigned to the calling process. This can occur when...
- The page is in the working set of another process
- The page was in the process of being removed from memory but hasn't actually been reused yet
- The page is already in memory, but due to memory manager pre-fetch rather than a process instigated method
- The process references a memory page for the first time since allocation (in which case there isn't actually any data to be retrieved), can be known as a demand-zero fault.
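The demand-zero case in the last bullet can be seen with a portable sketch (using Python's `mmap` as a stand-in for a fresh allocation):

```python
import mmap

page = mmap.mmap(-1, 4096)          # one fresh 4 KB page: committed, untouched

# The first access doesn't read anything from disk; the memory manager just
# wires in a zero-filled page - a demand-zero soft fault.
contents = page[:4096]
page.close()
```

Nothing had to be fetched from the page file: the page simply didn't exist until it was first referenced.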
Hard Page Fault
Hard page faults occur when a process attempts to access a page of memory that is not located in physical memory, and so in order to service the request the data needs to be retrieved from disk (typically the page file) into physical memory.
Page Faults Delta
Number of page faults that have occurred since the last update.
What should be considered a bad or high value is down to the expected operation of the system - there is no right or wrong. The magnitude and duration of any page faulting should be considered, and how appropriate that is for the process in question.
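On Windows these counters are read from Performance Monitor; as a rough illustration of the soft/hard distinction, Unix-like systems expose equivalent per-process counters via `getrusage`, which the hedged sketch below uses (it will not run on Windows):

```python
import mmap
import resource                     # Unix-only; Windows exposes the same
                                    # idea via Performance Counters instead

def fault_counts():
    ru = resource.getrusage(resource.RUSAGE_SELF)
    return ru.ru_minflt, ru.ru_majflt   # (soft faults, hard faults)

soft_before, hard_before = fault_counts()

# Touch 256 fresh pages: each first write is a demand-zero soft fault
buf = mmap.mmap(-1, 256 * 4096)
for offset in range(0, len(buf), 4096):
    buf[offset] = 1
buf.close()

soft_after, hard_after = fault_counts()
print(f"soft faults: +{soft_after - soft_before}, "
      f"hard faults: +{hard_after - hard_before}")
```

Note that the soft-fault count climbs even though no disk IO occurred - exactly why a high soft-fault rate alone is rarely cause for alarm.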
Page File
The Page File is used as a slower alternative to physical memory, generally used where either there isn't enough space to store data in physical RAM, or when the data is of a lower priority and needn't take up valuable physical RAM space. The page file is stored on disk, which is slower and normally more plentiful than physical RAM. The virtual memory provided by the page file is organised into 4KB blocks known as pages.
The OS will manage the Page File size itself by default, but it can be manually controlled, which means that you can make it comparatively small if required. As a general rule this should be avoided; however, there is the potential to save plenty of storage space across your infrastructure if you've a large number of servers using shared storage (either as virtual machines or directly as physical servers).
Certain OS API calls will cause large Commits (eg MapViewOfFile), which means that you need to have a decent amount of page file free to allow for certain operations. You should only consider reducing your page file to a small size on servers on which only certain thoroughly tested applications run, and there is value to be had from doing so.
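The `MapViewOfFile` effect mentioned above is that mapping a file commits address space for the whole view before any of it is read. A portable analogue using Python's `mmap` (the scratch file here is purely illustrative):

```python
import mmap
import os
import tempfile

# Map a scratch file - a rough portable equivalent of MapViewOfFile
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 8192)           # an 8 KB file

# Mapping commits address space for the entire view up front, even though
# no data is read from disk until pages of the view are actually touched.
view = mmap.mmap(fd, 0)             # length 0 = map the whole file
view_len = len(view)                # the commit covers the whole file
head = view[:4]                     # touching the view faults the data in

view.close()
os.close(fd)
os.unlink(path)
```

A process mapping many large files this way can show a commit charge far larger than its actual physical memory use, which is why a too-small page file can make such operations fail.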
Sources and Further reading...
- Troubleshooting Nonpaged and Paged Pool Errors in Windows
- Pushing the Limits of Windows: Paged and Nonpaged Pool