Best Practice Configuration – VMware Snapshot, Network, and NetApp Storage

Ahmed Taha is a Global Knowledge instructor who teaches and blogs from Global Knowledge Egypt.

This post provides recommendations and prerequisites for designing NetApp storage, VMware snapshots, and networking.

One Data Store Serves Multiple Virtual Machines (VMs)


Advantages:

  1. It’s simple to create one LUN to host all VMs.
  2. You reserve space for growth in a single place, so customers can assign that space to recover VMs.
  3. You can host two or more VMs in the same data store, which lets you restore one snapshot to recover all of the VMs to the same state. If, instead, two clustered nodes are stored in different data stores and you restore only node 1, node 2 will no longer be able to communicate with node 1; it’s recommended that you restore all nodes to the same state.
  4. One of NetApp’s features is FlexClone. You can create a virtual copy of the data store on an ESX server that is isolated for testing without impacting production.
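As a sketch, cloning a data store volume with FlexClone on clustered ONTAP might look like the following (the SVM, volume, LUN, and igroup names are hypothetical examples, not from this post):

```shell
# Create a space-efficient, writable clone of the data store volume
# (vserver/volume names are examples only)
volume clone create -vserver svm1 -flexclone vm_datastore_clone \
    -parent-volume vm_datastore

# Map the clone's LUN to an isolated test initiator group so an ESX
# host can mount it without touching production
lun mapping create -vserver svm1 -path /vol/vm_datastore_clone/lun0 \
    -igroup esx_test_igroup
```

Because the clone shares blocks with the parent volume, it consumes almost no extra space until the test environment starts writing changes.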


Disadvantages:

  1. Because this architecture doesn’t allow a storage array to identify the I/O load generated by an individual VM, it’s difficult to monitor I/O per VM.
  2. If all VMs are hosted in the same data store, then it’s difficult to restore one individual VM.
  3. Replicating a specific VM to a Disaster Recovery (DR) site could be difficult because replication is based on the data store.
  4. If thin provisioning is used without being monitored, the thin-provisioned disks could grow until they fill the VMFS and affect all other VMs stored in the same data store.
  5. If a VMware snapshot is not deleted, the snapshot file could grow to the size of the virtual machine disk and consume the reserved space in the data store.
  6. If FlexClone technology is used to clone one large data store, you will not be able to keep the state of an individual VM after modifications.
  7. If the one big data store is destroyed, then all VMs will be lost.


Notes:

  • NetApp offers SnapManager for Virtual Infrastructure to restore an individual file, but you have to buy a license.
  • We assume that a storage snapshot can be enabled to protect all VMs.
  • Enabling snapshots may consume 20% to 40% of the data store capacity.
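That 20% to 40% estimate maps to the snapshot reserve on the underlying flexible volume. A minimal ONTAP sketch, assuming a volume named vm_datastore on an SVM named svm1:

```shell
# Reserve 20% of the volume for storage snapshots (adjust toward 40%
# for data stores with high change rates)
volume modify -vserver svm1 -volume vm_datastore -percent-snapshot-space 20

# Verify the reserve setting
volume show -vserver svm1 -volume vm_datastore -fields percent-snapshot-space
```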

Spanning VMFS Volumes Across Multiple Data Stores


Advantages:

  1. Allows a storage array to identify the I/O load generated by an individual VM.
  2. Allows a storage array to distribute the I/O load across all available paths.
  3. Hosting one VM in one data store gives you the ability to restore it individually.
  4. It is simple to replicate a specific VM to the DR site.
  5. Thin provisioning issues affect only the one VM stored in that data store.
  6. VMware snapshot issues affect only the one VM stored in that data store.
  7. It is easy to use FlexClone technology to virtually clone one data store, so you can keep the state of one VM after modification.
  8. If one data store is destroyed, then only one VM will be lost.


Disadvantages:

  1. There is a lot of administration overhead, since you have to create one LUN per VM.
  2. If we have multiple data stores, then we have to reserve space for protection and expansion in each of them.
  3. You can’t restore one snapshot to return multiple VMs to the same state.


NetApp recommends a one-to-one alignment of VMware data stores to flexible volumes. This makes it easy to implement Snapshot backup and SnapMirror replication policies at the data store level, because NetApp implements these storage-side features at the flexible-volume level.
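With one flexible volume per data store, replicating a data store to the DR site becomes a single SnapMirror relationship. A clustered ONTAP sketch (source/destination SVM and volume names are assumptions):

```shell
# One data store = one flexible volume = one SnapMirror relationship
snapmirror create -source-path svm1:vm_datastore \
    -destination-path svm_dr:vm_datastore_dr -type DP

# Start the baseline transfer to the DR site
snapmirror initialize -destination-path svm_dr:vm_datastore_dr
```

The destination volume must already exist as a data-protection volume of at least the source’s size before the relationship is created.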

Network Configuration for iSCSI Storage

  1. Create a VMkernel port for every Ethernet link that you want to dedicate to iSCSI traffic.
  2. A default gateway is not required for the VMkernel IP storage network.
  3. Each iSCSI VMkernel port must be configured to use an adapter that is not used by any other VMkernel port.
  4. It’s recommended to create a VMware network team for iSCSI traffic that is attached to separate physical switches.
  5. For systems that have fewer NICs, such as blade servers, VLANs can be very useful. Teaming two NICs together provides an ESX server with physical link redundancy, and by adding multiple VLANs you can group common IP traffic onto separate VLANs for optimal performance.
  6. It is recommended to group the virtual machine networks on one VLAN; IP storage and vMotion VMkernel traffic should reside on a second VLAN.
  7. VMkernel ports can be on different IP subnets. This configuration is required when combining iSCSI with NFS data store access.
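Steps 1 through 3 above can be sketched with esxcli on an ESXi host. The port group name, IP address, and software iSCSI adapter name (vmhba33) are assumptions that vary per host:

```shell
# Create a VMkernel port on a port group dedicated to iSCSI traffic
esxcli network ip interface add --interface-name vmk1 --portgroup-name iSCSI-A

# Assign a static IP on the storage network (no default gateway needed)
esxcli network ip interface ipv4 set --interface-name vmk1 \
    --ipv4 192.168.10.11 --netmask 255.255.255.0 --type static

# Bind the VMkernel port to the software iSCSI adapter so it uses
# only this dedicated uplink
esxcli iscsi networkportal add --adapter vmhba33 --nic vmk1
```

Repeat the sequence (vmk2, a second port group, a second uplink) for each additional Ethernet link dedicated to iSCSI.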

Default Native Multipathing Setting

  1. When you connect a NetApp array to a vSphere server, the array is identified as an active-active storage controller, and the VMware Native Multipathing path selection defaults to the “Fixed” policy. This means users are required to manually load balance I/O across the primary paths.
  2. For deployments that prefer a complete “plug-and-play” architecture, enable the Asymmetric Logical Unit Access (ALUA) protocol on the NetApp storage array and configure the Round Robin path selection policy in VMware, which allows auto-negotiation of paths between SCSI target devices and target ports and enables dynamic reconfiguration.
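With ALUA enabled on the array, Round Robin can be set on the ESXi side either as the default policy for the ALUA SATP or per LUN. An esxcli sketch (the NAA device ID is a placeholder to replace with your LUN’s identifier):

```shell
# Make Round Robin the default path selection policy for ALUA devices
esxcli storage nmp satp set --satp VMW_SATP_ALUA --default-psp VMW_PSP_RR

# Or set Round Robin on a single NetApp LUN (replace the placeholder ID)
esxcli storage nmp device set --device naa.xxxxxxxxxxxxxxxx --psp VMW_PSP_RR
```

Changing the SATP default affects only devices claimed after the change, so existing LUNs may still need the per-device command or a host reboot.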

