Monday, May 25, 2020

Pluggable Storage Architecture and Path Management

PSA - Pluggable Storage Architecture
To manage multipathing, ESXi uses a special VMkernel layer, the Pluggable Storage Architecture (PSA). The PSA is an open and modular framework that coordinates the various software modules responsible for multipathing operations. These modules include the generic multipathing modules that VMware provides (the NMP and the HPP) and third-party MPPs.
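
A quick way to see which of these modules a host has loaded is the esxcli storage namespace. The sketch below is a minimal example meant to run in the ESXi shell, where both Python 3 and esxcli are available; it only echoes the output of a single command.

    # Minimal sketch for the ESXi shell: list the multipathing plug-ins
    # (MPPs, including the NMP and HPP) that the PSA has currently loaded.
    import subprocess

    output = subprocess.check_output(["esxcli", "storage", "core", "plugin", "list"])
    print(output.decode())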

NMP - Native Multipathing Plug-in
The NMP is the VMkernel multipathing module that ESXi provides by default. The NMP associates physical paths with a specific storage device and provides a default path selection algorithm based on the array type. The NMP is extensible and manages additional submodules, called Path Selection Plug-ins (PSPs) and Storage Array Type Plug-ins (SATPs). PSPs and SATPs can be provided by VMware or by a third party.
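
To make that association visible, here is a rough pyVmomi sketch. The host name and credentials are placeholders, and the property path (HostMultipathInfo under the host's storage device info) is an assumption based on the vSphere API; it prints the SATP and PSP the NMP has assigned to each LUN.

    # Rough pyVmomi sketch (placeholder host/credentials): print the SATP and PSP
    # that the NMP has associated with each LUN on the first ESXi host found.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]

    for lun in host.config.storageDevice.multipathInfo.lun:
        satp = lun.storageArrayTypePolicy.policy if lun.storageArrayTypePolicy else "n/a"
        psp = lun.policy.policy if lun.policy else "n/a"
        print(f"{lun.id}  SATP={satp}  PSP={psp}  paths={len(lun.path or [])}")

    view.DestroyView()
    Disconnect(si)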

PSPs - Path Selection Plug-ins
The PSPs are submodules of the VMware NMP. PSPs are responsible for selecting a physical path for I/O requests.
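
The active PSP for a device can also be changed through the API. The sketch below is assumption-heavy (placeholder credentials and a hypothetical device identifier) and switches one device to Round Robin via the host's storage system; the CLI equivalent is esxcli storage nmp device set --device <id> --psp VMW_PSP_RR.

    # Rough pyVmomi sketch (placeholder credentials, hypothetical device id):
    # set the PSP of one device to Round Robin.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="esxi01.example.com", user="root", pwd="secret",
                      sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]

    device_id = "naa.600508b1001c2ee9a6446e708105054b"  # hypothetical identifier
    storage = host.configManager.storageSystem

    for lun in storage.storageDeviceInfo.multipathInfo.lun:
        if lun.id == device_id:
            new_policy = vim.host.MultipathInfo.LogicalUnitPolicy(policy="VMW_PSP_RR")
            storage.SetMultipathLunPolicy(lunId=lun.id, policy=new_policy)
            print(f"{device_id} now uses VMW_PSP_RR")

    Disconnect(si)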

SATPs - Storage Array Type Plug-ins
The SATPs are submodules of the VMware NMP. SATPs are responsible for array-specific operations. The SATP can determine the state of a particular array-specific path, perform a path activation, and detect any path errors. 

MPPs - Multipathing Plug-ins
The PSA offers a collection of VMkernel APIs that third parties can use to create their own multipathing plug-ins (MPPs). These modules provide load-balancing and failover functionality for a particular storage array. MPPs can be installed on the ESXi host and run in addition to the VMware native modules, or replace them.


HPP - VMware High-Performance Plug-in
The HPP replaces the NMP for high-speed devices, such as NVMe PCIe flash. The HPP improves the performance of ultra-fast flash devices that are installed locally on your ESXi host. The plug-in supports only single-pathed devices.

Claim Rules
The PSA uses claim rules to determine whether an MPP or the NMP owns the paths to a particular storage device. The NMP has its own set of claim rules; these match a device with a specific SATP and PSP.
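
The rules themselves are easiest to inspect from the command line. A minimal sketch for the ESXi shell: it prints the core PSA claim rules (which plug-in claims which paths) and the NMP's SATP rules (which SATP, and therefore which default PSP, a device gets).

    # Minimal sketch for the ESXi shell: show the PSA claim rules and the NMP SATP rules.
    import subprocess

    for cmd in (["esxcli", "storage", "core", "claimrule", "list"],
                ["esxcli", "storage", "nmp", "satp", "rule", "list"]):
        print("$ " + " ".join(cmd))
        print(subprocess.check_output(cmd).decode())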



vSphere 6.7 VMware Standard Switch vs Distributed Switch (vDS)

These features are available with both types of virtual switches:

  • Can forward L2 frames  
  • Can segment traffic into VLANs  
  • Can use and understand 802.1q VLAN encapsulation  
  • Can have more than one uplink (NIC Teaming)  
  • Can have traffic shaping for the outbound (TX) traffic

These features are available only with a Distributed Switch:

  • Can shape inbound (RX) traffic  
  • Has a central unified management interface through vCenter Server (see the sketch after this list)  
  • Supports Private VLANs (PVLANs)  
  • Provides potential customization of Data and Control Planes
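
To make the management difference concrete, here is a rough pyVmomi sketch (placeholder vCenter name and credentials): standard switches are per-host configuration objects read from each host, while a distributed switch is a single vCenter-level inventory object shared by its member hosts.

    # Rough pyVmomi sketch (placeholder vCenter/credentials): standard switches are
    # read per host, distributed switches are vCenter inventory objects.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    hosts = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in hosts.view:
        for vss in host.config.network.vswitch:
            print(f"{host.name}: standard switch {vss.name} with {len(vss.pnic or [])} uplinks")
    hosts.DestroyView()

    switches = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.DistributedVirtualSwitch], True)
    for dvs in switches.view:
        print(f"vCenter: distributed switch {dvs.name} (one object, centrally managed)")
    switches.DestroyView()

    Disconnect(si)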

vSphere 5.x provides these improvements to Distributed Switch functionality:

  • Increased visibility of inter-virtual machine traffic through NetFlow.  
  • Improved monitoring through port mirroring (dvMirror).  
  • Support for LLDP (Link Layer Discovery Protocol), a vendor-neutral protocol.  
  • The enhanced link aggregation feature provides a choice of hashing algorithms and also increases the limit on the number of link aggregation groups.  
  • Additional port security is enabled through traffic filtering support.  
  • Improved single-root I/O virtualization (SR-IOV) support and 40 GbE NIC support.

vSphere 6.x provides these improvements to Distributed Switch functionality:

  • Network I/O Control - New support for per-virtual-machine bandwidth reservations on a Distributed Switch to guarantee isolation and enforce limits on bandwidth.  
  • Multicast Snooping - Supports IGMP snooping for IPv4 packets and MLD snooping for IPv6 packets on a VDS. Improves performance and scale with multicast traffic.  
  • Multiple TCP/IP Stacks for vMotion - Gives vMotion traffic a dedicated networking stack. Simplifies IP address management with a dedicated default gateway for vMotion traffic.

Wednesday, May 20, 2020

Virtual Machine Conditions and Limitations for vMotion


To migrate a virtual machine with vMotion, it must meet certain network, disk, CPU, USB, and other device requirements.
The following virtual machine conditions and limitations apply when you use vMotion (a short migration sketch follows the list):
  • The source and destination management network IP address families must match. You cannot migrate a virtual machine from a host that is registered to vCenter Server with an IPv4 address to a host that is registered with an IPv6 address.  
  • Using 1 GbE network adapters for the vMotion network might result in migration failure if you migrate virtual machines with large vGPU profiles. Use 10 GbE network adapters for the vMotion network.  
  • If virtual CPU performance counters are enabled, you can migrate virtual machines only to hosts that have compatible CPU performance counters.  
  • You can migrate virtual machines that have 3D graphics enabled. If the 3D Renderer is set to Automatic, virtual machines use the graphics renderer that is present on the destination host. The renderer can be the host CPU or a GPU graphics card. To migrate virtual machines with the 3D Renderer set to Hardware, the destination host must have a GPU graphics card.  
  • Starting with vSphere 6.7 Update 1, vSphere vMotion supports virtual machines with vGPU.  
  • vSphere DRS supports initial placement of vGPU virtual machines running vSphere 6.7 Update 1 or later, without load balancing support.  
  • You can migrate virtual machines with USB devices that are connected to a physical USB device on the host. You must enable the devices for vMotion.  
  • You cannot use vMotion to migrate a virtual machine that uses a virtual device backed by a device that is not accessible on the destination host. For example, you cannot migrate a virtual machine with a CD drive backed by the physical CD drive on the source host. Disconnect these devices before you migrate the virtual machine.  
  • You cannot use vMotion to migrate a virtual machine that uses a virtual device backed by a device on the client computer. Disconnect these devices before you migrate the virtual machine.
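
When the conditions above are satisfied, the migration itself is a single API call. The sketch below is a rough pyVmomi example (placeholder vCenter, VM, and host names; it also assumes the destination host can reach the VM's storage and the vMotion network): it starts a compute-only vMotion and waits for the task, which fails with a descriptive error if any requirement is not met.

    # Rough pyVmomi sketch (placeholder names/credentials): vMotion a VM to another host.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                      pwd="secret", sslContext=ssl._create_unverified_context())
    content = si.RetrieveContent()

    def find(vimtype, name):
        # Return the first inventory object of the given type with the given name.
        view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
        try:
            return next(obj for obj in view.view if obj.name == name)
        finally:
            view.DestroyView()

    vm = find(vim.VirtualMachine, "app-vm-01")           # hypothetical VM name
    dest = find(vim.HostSystem, "esxi02.example.com")    # hypothetical destination host

    # Start the migration and block until vCenter reports the task result.
    task = vm.MigrateVM_Task(host=dest,
                             priority=vim.VirtualMachine.MovePriority.defaultPriority)
    WaitForTask(task)
    print("vMotion result:", task.info.state)

    Disconnect(si)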

Tuesday, May 5, 2020

Azure Certification paths