Max datastore size of 2TB – 512 bytes in ESX 4.x

You cannot create a new VMFS-3 datastore, or extend an existing one, beyond 2TB on ESX 4.x. According to the vSphere 4.1 Configuration Maximums, the maximum datastore size is 2TB – 512 bytes. The same limitation applies to ESX 3.x. If you present a new LUN that exceeds this limit (2TB – 512 bytes) and attempt to create a datastore on it, you will see the following error:

Call "HostDatastoreSystem.QueryVmfsDatastoreCreateOptions" for 
object "datastoreSystem-2119" on vCenter Server "vc01.vsysad.local"
failed.

A screenshot of this in a live environment is below:

[Screenshot: the QueryVmfsDatastoreCreateOptions error dialog in the vSphere Client]

This is documented in VMware KB article 2007328.
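
Before attempting to create the datastore, you can confirm the size the host actually sees for the LUN from the service console. A minimal sketch using esxcfg-scsidevs (the hostname and any device details are purely illustrative):

# Compact list of attached SCSI devices, including their size in MB;
# a LUN reporting 2097152MB (2TB) or more is over the VMFS-3 limit
[root@esx01 ~]# esxcfg-scsidevs -c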

To rectify the problem, the only option is to destroy the LUN and present a new one, this time with a maximum size of 2TB – 512 bytes or less. You will then be able to add the datastore successfully. This limitation comes from VMFS-3, which uses a Master Boot Record (MBR) partition table that limits the maximum addressable storage space to 2TB (2³² × 512 bytes). Since the largest value an MBR partition entry's 32-bit sector count can hold is 2³² – 1 sectors, the usable maximum works out to 2TB – 512 bytes.

A workaround to exceed the 2TB datastore limit on ESX 4.1 is to add extents, which concatenate LUNs together and thereby grow the overall size of the volume. vSphere 4.1 supports a maximum of 32 extents of up to 2TB each, so the maximum size of the volume including all extents is 64TB. To find out more about extents in vSphere, see this excellent post by Cormac Hogan on the official VMware Blog site.
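
Extents are normally added through the vSphere Client (datastore Properties > Increase), but the same operation can be performed from the console with vmkfstools. A hedged sketch, with entirely hypothetical device names – the first argument is the partition on the new LUN, the second the head partition of the existing datastore:

# Span the existing VMFS-3 volume onto a new partition.
# WARNING: any data on the span partition (first argument) is destroyed.
[root@esx01 ~]# vmkfstools -Z /vmfs/devices/disks/naa.600601601234000000000000000000ab:1 \
    /vmfs/devices/disks/naa.600601601234000000000000000000aa:1

Bear in mind that losing the head extent can take the whole spanned volume with it, which is one reason Cormac's post is worth reading before going down this route.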

Another pitfall to point out is that extending an existing VMFS-3 datastore beyond 2TB is possible. ESX will allow the expansion, the datastore will continue to function as normal, and VMs running on it won't be affected. Storage rescans remain completely safe, as does removing and re-adding hosts to the cluster. The only clue that something is amiss is that the extent size of the datastore appears to be incorrect in the GUI; in my case it was 0.00 bytes. Once the host is rebooted, however, it will no longer be able to mount the VMFS filesystem – a very bad situation to be in!
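
If you suspect a datastore has already been extended past the limit, vmkfstools can report what a host that still has it mounted thinks of the volume. A minimal sketch (datastore name hypothetical):

# Query VMFS volume attributes: capacity, block size and the
# partitions (extents) backing the volume; a nonsensical extent
# size here matches the 0.00 bytes symptom seen in the GUI
[root@esx01 ~]# vmkfstools -P -h /vmfs/volumes/datastore1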

The takeaway: don't extend a VMFS-3 datastore beyond 2TB and then reboot a host attached to it, as the host will no longer be able to mount it and the VMs running on it will not be accessible from that host. If the host is a member of a cluster, other hosts that are still connected to the datastore will still be able to mount it. The high-level process to recover from this situation is:

  • Present a new LUN with a size of 2TB – 512 bytes or less to all hosts in the cluster
  • Format the LUN with VMFS-3, name it and bring it online (see the sketch after this list)
  • From a host that still has the affected datastore mounted, use Storage vMotion to migrate all VMs onto the newly created datastore
  • Ensure those VMs are now accessible again from the affected host
  • Delete the “old” datastore, un-present it and remove it from the storage array
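
The Add Storage wizard in the vSphere Client takes care of the partitioning and formatting in step two, but for completeness here is a hedged console equivalent, assuming a partition already exists on the new LUN (the device name and label are hypothetical):

# Format the first partition of the new LUN with VMFS-3 and label it;
# the block size caps the maximum file size (a 1MB block allows files
# up to 256GB on VMFS-3), so pick one large enough for your biggest VMDK
[root@esx01 ~]# vmkfstools -C vmfs3 -b 1m -S recovery-ds \
    /vmfs/devices/disks/naa.600601601234000000000000000000ac:1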

Please note that this datastore size limitation does not apply to ESXi 5.x, as new datastores are formatted with VMFS-5 by default, which utilises a GUID Partition Table (GPT) and so does not suffer from the MBR restriction. The maximum size of a GPT disk is determined by the OS, and according to the vSphere 5.1 Configuration Maximums this limit is 64TB.
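
On an ESXi 5.x host you can confirm which partition table a datastore's backing device uses with partedUtil; a quick sketch with a hypothetical device name:

# Print the partition table type and layout; a VMFS-5 datastore
# created on ESXi 5.x should report "gpt" rather than "msdos" (MBR)
[root@esxi1 ~]# partedUtil getptbl /vmfs/devices/disks/naa.600601601234000000000000000000ad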

If you have the option to upgrade from ESX 4.x to ESXi 5.x, I strongly recommend doing so; it overcomes many of the limitations of ESX 4.x, one of them being the 2TB datastore limit highlighted in this post.

References:
VMFS Extents – Are they bad, or simply misunderstood?
Block size limitations of a VMFS datastore
Creating a new datastore fails with the error: An error occurred during host configuration (2007328)
Troubleshooting a LUN that is extended in size past the 2TB/2047GB limit (1004230)
ESX/ESXi 3.x/4.x hosts do not support 2-terabyte LUNs (3371739)
VMware vSphere 4.1 Configuration Maximums
VMware vSphere 5.1 Configuration Maximums
Creating and managing extents on ESX/ESXi (2000643)

Login failed to Dell OMSA site on ESXi 5.1

You may receive a Login failed.... error when trying to log into a Dell server running VMware ESXi 5.1 as a managed node from the Dell OpenManage Server Administrator (OMSA) site. The full error is below:

Login failed....
Lockdown mode is enabled in the managed node. For more 
information, see online help.

In this example, Lockdown mode was not enabled, so it could not be the cause. I was able to verify this by running the following command while connected to the server via SSH:

[root@esxi1 ~]# vim-cmd -U dcui vimsvc/auth/lockdown_is_enabled
false

From the above you can see that the response was false, meaning that Lockdown mode was not enabled. Once that potential cause has been eliminated, simply restart the Dell OMSA services from the console or via SSH. To do so, run the following commands:

[root@esxi1 ~]# /usr/lib/ext/dell/srvadmin/bin/dataeng restart
[root@esxi1 ~]# /etc/init.d/wsman restart

Once the Dell OMSA services have been restarted you should be able to log into the OMSA site and carry out the required tasks.
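
For completeness: had the lockdown check earlier returned true, Lockdown mode itself would need lifting before OMSA logins could succeed. This can be done from the DCUI or vCenter; as a hedged sketch, the console counterpart to the check above should also work:

# Disable Lockdown mode locally, then verify the change
[root@esxi1 ~]# vim-cmd -U dcui vimsvc/auth/lockdown_mode_exit
[root@esxi1 ~]# vim-cmd -U dcui vimsvc/auth/lockdown_is_enabled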

If you receive a not found error for the /usr/lib/ext/dell/srvadmin/bin/dataeng restart command, as shown below:

[root@esxi1 ~]# /usr/lib/ext/dell/srvadmin/bin/dataeng restart
-sh: /usr/lib/ext/dell/srvadmin/bin/dataeng: not found

then try restarting all services by running the command below:

[root@esxi1 ~]# /sbin/services.sh restart

Once all the services have restarted, try again and you should be able to log into the OMSA site!

Reference:
Restarting the Management agents on an ESXi or ESX host (1003490)