Remove VM swap file in ESXi 5.x

There may be specific scenarios where it is not desirable to have a VM swap file. In my most recent experience a customer was short on storage and wanted to save the space occupied by the large VM swap files, which are equal in size to the memory allocated to the VM. As physical memory on the ESXi host was not over-subscribed, removing the swap files would not negatively impact the performance of the VMs.

To remove the VM swap files perform the following steps:

  1. In the vSphere Client, locate the VM, right-click it and select Edit Settings.
  2. Go to the Resources tab and select Memory.
  3. On the right-hand side, check Reserve all guest memory (All locked) and click OK. The screenshot below shows this setting:

[Screenshot: the Reserve all guest memory (All locked) setting on the Resources tab]

This setting reserves all 32GB of vRAM allocated to the VM on the ESXi host, and the VM will only power on if that full reservation can be locked and guaranteed.
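For a one-off change the vSphere Client steps above are all you need, but if you have a number of VMs to update the same setting can be applied programmatically. Below is a minimal sketch using pyVmomi (the Python bindings for the vSphere API), setting the memoryReservationLockedToMax flag, which corresponds to the All locked checkbox. The vCenter hostname, credentials and VM name are placeholders for illustration.

# Minimal pyVmomi sketch: equivalent of ticking "Reserve all guest memory (All locked)".
# The hostname, credentials and VM name below are placeholders - adjust for your environment.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

ctx = ssl._create_unverified_context()   # lab use only; validate certificates properly in production
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="password", sslContext=ctx)
content = si.RetrieveContent()

# Locate the VM by name using a container view
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
vm = next(v for v in view.view if v.name == "VM1")
view.Destroy()

# memoryReservationLockedToMax=True is the API equivalent of "Reserve all guest memory (All locked)"
spec = vim.vm.ConfigSpec(memoryReservationLockedToMax=True)
WaitForTask(vm.ReconfigVM_Task(spec=spec))

Disconnect(si)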

Prior to making the configuration change the VM’s folder on the datastore contained a 32GB swap file:

/vmfs/volumes/514c4fc8-21030200-2f06-bc305bf615e3/VM1 # ls -al | grep vswp
-rw-------    1 root     root     34359738368 Apr 19 12:07 VM1-6d3a3a7d.vswp
-rw-------    1 root     root     119537664 Apr 19 12:07 vmx-VM1-1832532605-1.vswp

After locking the reserved guest memory, the swap file was zero-length and so occupied no storage space:

/vmfs/volumes/514c4fc8-21030200-2f06-bc305bf615e3/VM1 # ls -al | grep vswp
-rw-------    1 root     root             0 Apr 19 12:10 VM1-6d3a3a7d.vswp
-rw-------    1 root     root     119537664 Apr 19 12:10 vmx-VM1-1832532605-1.vswp
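If you want to check swap file sizes across many VMs without SSHing to the host, the same information is exposed through the vSphere API. The sketch below reuses the pyVmomi connection (content and vim) from the earlier example and reads each VM's layoutEx property, printing any file whose name ends in .vswp; treat it as an illustrative sketch rather than a polished script.

# Sketch: list swap files and their sizes for all VMs, reusing "content" and "vim"
# from the pyVmomi example above.
view = content.viewManager.CreateContainerView(content.rootFolder, [vim.VirtualMachine], True)
for v in view.view:
    if not v.layoutEx:                      # layout information may be unavailable for some VMs
        continue
    for f in v.layoutEx.file:
        if f.name.endswith(".vswp"):        # matches both the VM swap and the VMX swap files
            print("%s  %s  %d bytes" % (v.name, f.name, f.size))
view.Destroy()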

You can ignore the VMX swap file, shown above as vmx-VM1-1832532605-1.vswp. These files are not related to ordinary host memory swapping; instead they allow the host to swap out the memory overhead associated with the VMX process. The ESX/ESXi host creates VMX swap files automatically when the VM is powered on, provided there is sufficient free disk space. VMX swapping is a new feature in ESXi 5.x and more information can be found in VMware's documentation.

Note: Removing the swap file is not recommended in solutions where memory has been over-subscribed to VMs. Doing so precludes the use of, and the benefits of, VMware memory management techniques such as ballooning, transparent page sharing (TPS), memory compression and host swapping (which the host applies in that order).

  • Chris Neale

    Is it not worth mentioning that, rather than going “all out” and locking all the guest memory, you could just reserve 50% of it, say?
    As your swap file size is actually equal to Memory Allocated – Memory Reservation (or MemAlloc – MemLimit IF there was a limit set),
    you could save plenty of space just by reserving a percentage of your guest VMs’ memory without stopping them all powering on.

    • japinator

      Hi Chris, thanks for your comment – you have a very valid point. If the situation cited above was such that there was some free storage space (but not enough), then we could have used your suggestion of reserving less, e.g. 50%. However, the issue was that once all the VMs were deployed they would not have had any space at all for the vswap files, because the Solution Engineer who sized the storage got it completely wrong – he did not factor in the space for the swap files. Hence the reason for reserving all the guest memory. I will add your comments to the post when I get a chance. Thanks again!