
VMware View 5 Architecture : Virtual (part 4) - Network, Storage


Network

Various types of network traffic are required in a virtualization environment. The nature and segregation of this traffic should be fundamental to any good virtualization design. You can aggregate and logically separate network traffic in many different ways. Here, we delve into a few options and the reasoning behind each to provide guidance. In addition, you have the option of adding vShield technology as a layer on top of the virtual desktop environment to further isolate View desktops from each other. 

A Virtual Desktop Infrastructure can actually have higher storage I/O and network I/O requirements than a server virtualization environment, because the number of read/write requests and the amount of network data transferred can be greater. Therefore, in addition to storage considerations, you must also carefully plan out your network architecture.

Before we discuss the new capabilities that have been added with vSphere 5, let’s go through a typical network setup. When configuring your ESXi server, you have two connection types, virtual machine and VMkernel connections, as shown in Figure 7.

Figure 7. Connection Type.

Virtual machine connections carry traffic to and from the virtual machines on the ESXi host. VMkernel connections carry traffic to and from the hypervisor itself; this traffic may include vMotion, storage (iSCSI and NFS), and connections to the Direct Console User Interface (DCUI). All ports are connected to a logical switch, or vSwitch. A vSwitch is a piece of software running on the ESXi host that allows you to aggregate virtual machines and map them to one or more physical network interfaces. You can configure the number of ports and the maximum transmission unit (MTU), as shown in Figure 8. The MTU is adjusted, for example, to support jumbo frames on iSCSI networks.

Figure 8. vSwitch Properties.
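As an aside, the vSwitch properties shown in Figure 8 can also be read programmatically. The following is a minimal sketch using Python and the pyVmomi library (an assumption; this article configures everything through the vSphere Client), with a placeholder host name and credentials:

# Minimal pyVmomi sketch (assumption: pyVmomi is available): list each
# vSwitch on an ESXi host with its port count, MTU, and physical uplinks.
# The host name and credentials are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab use only; skips cert checks
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    host = view.view[0]
    for vswitch in host.configManager.networkSystem.networkInfo.vswitch:
        print("vSwitch %s: %d ports, MTU %s" %
              (vswitch.name, vswitch.numPorts, vswitch.mtu))
        for pnic in vswitch.pnic:             # keys of the physical uplinks
            print("  uplink:", pnic)
    view.Destroy()
finally:
    Disconnect(si)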

The MTU of 1500 shown in Figure 8 refers to the size of the payload, in bytes, carried by each Ethernet frame. Generally, a setting of 1500 bytes is appropriate for most traffic unless your network infrastructure supports jumbo frames. A jumbo frame is simply a frame that carries a larger payload. Because vSwitches are typically mapped to physical network cards, you need dedicated vSwitches mapped to dedicated physical network cards to be able to change the MTU to support jumbo frames. The recommended MTU for jumbo frames is 9000 bytes.

To properly configure support for jumbo frames, you must ensure that the maximum Ethernet frame size is consistent across every device the packet will traverse. If it is not, the packet is fragmented and sent piece by piece. This means that to take advantage of jumbo frames, every device along the path must have its MTU adjusted.
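To put rough numbers on the overhead savings, the following short Python calculation compares how many Ethernet frames, and how many bytes of per-frame overhead, are needed to move the same amount of data at an MTU of 1500 versus 9000 (the 38 bytes of overhead per frame, covering preamble, header, FCS, and interframe gap, is the usual figure for standard Ethernet):

# Rough illustration of why jumbo frames reduce per-frame overhead:
# count the frames and overhead bytes needed to carry the same payload
# at MTU 1500 versus MTU 9000.
import math

PER_FRAME_OVERHEAD = 38   # bytes: preamble + header + FCS + interframe gap

def frames_and_overhead(payload_bytes, mtu):
    frames = math.ceil(payload_bytes / mtu)
    return frames, frames * PER_FRAME_OVERHEAD

payload = 1 * 1024 ** 3   # move 1 GiB of iSCSI data
for mtu in (1500, 9000):
    frames, overhead = frames_and_overhead(payload, mtu)
    print("MTU %5d: %7d frames, %9d bytes of frame overhead" %
          (mtu, frames, overhead))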

The MTU must be configurable on the storage device’s network card, on the physical switch, and on either the NIC within the ESXi host or the iSCSI software initiator on the ESXi host. When you plan to use iSCSI in your VDI environment, it is a best practice to isolate all iSCSI traffic on a separate VLAN segment. Often this segment is a flat layer 2 network restricted to just the iSCSI initiators and targets.
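As an illustration of the ESXi-side settings, here is a hedged pyVmomi sketch that raises the MTU to 9000 on an iSCSI vSwitch and on the VMkernel port bound to it; the names vSwitch2 and vmk1, the host name, and the credentials are assumptions for the example, not values from this article:

# Hedged sketch: raise the MTU to 9000 on the iSCSI vSwitch and on the
# VMkernel port used by the iSCSI initiator. Names and credentials are
# placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net_sys = host.configManager.networkSystem

    # Reuse the existing vSwitch spec so uplinks and policy are preserved,
    # then change only the MTU.
    for vswitch in net_sys.networkInfo.vswitch:
        if vswitch.name == "vSwitch2":            # assumed iSCSI vSwitch
            spec = vswitch.spec
            spec.mtu = 9000
            net_sys.UpdateVirtualSwitch(vswitchName=vswitch.name, spec=spec)

    # Do the same for the VMkernel port bound to that vSwitch.
    for vnic in net_sys.networkInfo.vnic:
        if vnic.device == "vmk1":                 # assumed iSCSI VMkernel port
            spec = vnic.spec
            spec.mtu = 9000
            net_sys.UpdateVirtualNic(device=vnic.device, nic=spec)
finally:
    Disconnect(si)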

It is possible to segregate traffic physically or logically. Let’s look at a real-world example to better illustrate this scenario. Suppose you have four types of traffic that must have adequate throughput and be fully redundant: the management console or DCUI, vMotion traffic, iSCSI storage traffic, and virtual desktop traffic. In this case, you need eight physical network cards and four virtual switches. You can configure the DCUI vSwitch as active-passive for redundancy, and configure the vMotion, storage, and virtual desktop vSwitches to make use of both of their network cards. You need to review whether the storage vendor supports jumbo frames, because you may need to set the MTU, or payload size, to 9000 bytes on both the VMkernel port and the virtual switch. The benefit of this design is that you have dedicated physical paths, or NICs, to separate out the traffic. The drawback is that it requires eight physical network adapters, which may be problematic if you are looking to use older, small form factor blade servers. Newer blades tend to use converged network adapters (CNAs) and software to enable additional network paths and throughput. It is standard practice to segment traffic by vSwitch and physical NICs, as depicted in Figure 9.

Figure 9. A typical vSwitch to physical NIC topology.
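Expressed as data rather than a diagram, the design in Figure 9 might look something like the following Python sketch; the vmnic numbering and vSwitch names are illustrative assumptions, not values taken from the figure:

# Illustrative data-only description of the physically segregated design:
# four traffic types, four vSwitches, two physical NICs each.
design = {
    "vSwitch0": {"traffic": "management console (DCUI)",
                 "uplinks": ["vmnic0", "vmnic1"],
                 "teaming": "active/passive"},
    "vSwitch1": {"traffic": "vMotion",
                 "uplinks": ["vmnic2", "vmnic3"],
                 "teaming": "active/active"},
    "vSwitch2": {"traffic": "iSCSI storage (MTU 9000 if supported)",
                 "uplinks": ["vmnic4", "vmnic5"],
                 "teaming": "active/active"},
    "vSwitch3": {"traffic": "virtual desktop traffic",
                 "uplinks": ["vmnic6", "vmnic7"],
                 "teaming": "active/active"},
}

total_nics = sum(len(v["uplinks"]) for v in design.values())
print("Physical NICs required:", total_nics)   # prints 8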

This physical approach is not the only option, however; you can also segregate traffic logically with port groups to reduce the number of physical NICs required. Another way of achieving the same design principle of segregating network traffic is logical segregation of the traffic types using layer 2 technology, or VLAN tagging. ESXi fully supports tagging packets so that the underlying network infrastructure can carry different LAN segments, or virtual LAN (VLAN) segments, across the same physical path. VLAN tagging requires configuration on both the network hardware and the ESXi servers.

Let’s look at how this configuration differs logically. In this case, you aggregate all the physical NICs so that they are connected to the same vSwitch. You then use VLAN IDs to create four virtual LAN segments to segregate the traffic. You can also add further redundancy by using port groups. Port groups are similar to VLANs, but you can assign specific NICs to the logical group and set active-active or active-passive network card configurations. Although this kind of aggregation is less common on Gigabit Ethernet networks, it is becoming increasingly prevalent on 10 Gigabit networks, which also further reduce the number of physical NICs required. Logically separated, the configuration looks like the one shown in Figure 10.

Figure 10. Bonding and segregating by VLAN or port group.
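A hedged pyVmomi sketch of that logical layout follows: it creates VLAN-tagged port groups on a single aggregated vSwitch so the four traffic types share the same physical uplinks. The VLAN IDs, port group names, vSwitch name, host name, and credentials are all assumptions for the example:

# Hedged sketch: create VLAN-tagged port groups on one vSwitch so the
# four traffic types share the same physical uplinks. All names, VLAN IDs,
# and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net_sys = host.configManager.networkSystem

    port_groups = {"Management": 10, "vMotion": 20,
                   "iSCSI": 30, "Desktops": 40}

    for name, vlan_id in port_groups.items():
        spec = vim.host.PortGroup.Specification()
        spec.name = name
        spec.vlanId = vlan_id                # 802.1Q tag applied by the vSwitch
        spec.vswitchName = "vSwitch0"        # single aggregated vSwitch
        spec.policy = vim.host.NetworkPolicy()
        net_sys.AddPortGroup(portgrp=spec)
finally:
    Disconnect(si)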

The benefit of using both VLANs and port groups is that you can use a smaller number of NICs but still have the same logical segregation of network traffic. The only risk is that one type of network traffic uses more bandwidth than it should, but you can use traffic shaping and network I/O control to prioritize traffic so that, for example, the virtual desktop traffic always receives priority. We go through a sample configuration after the installation of ESXi.
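Until then, here is a minimal hedged sketch of the traffic shaping half of that idea, using pyVmomi to cap outbound bandwidth on a lower-priority port group on a standard vSwitch (Network I/O Control itself requires a distributed switch and is not shown). The port group name, bandwidth figures, host name, and credentials are assumptions:

# Hedged sketch: cap outbound bandwidth on a lower-priority port group so
# desktop traffic keeps headroom. All names and figures are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()
si = SmartConnect(host="esxi01.example.com", user="root",
                  pwd="password", sslContext=context)
try:
    content = si.RetrieveContent()
    host = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True).view[0]
    net_sys = host.configManager.networkSystem

    for pg in net_sys.networkInfo.portgroup:
        if pg.spec.name == "vMotion":                 # assumed lower-priority group
            spec = pg.spec
            shaping = vim.host.NetworkPolicy.TrafficShapingPolicy()
            shaping.enabled = True
            shaping.averageBandwidth = 2000000000     # 2 Gbps, in bits per second
            shaping.peakBandwidth = 4000000000        # 4 Gbps
            shaping.burstSize = 100 * 1024 * 1024     # 100 MiB, in bytes
            spec.policy.shapingPolicy = shaping
            net_sys.UpdatePortGroup(pgName=pg.spec.name, portgrp=spec)
finally:
    Disconnect(si)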

It is therefore all the more important to estimate the network requirements properly and to configure enough physical network cards to provide adequate throughput.

Storage

Although it is possible to build a virtual environment on local storage, doing so is highly inefficient because many of the high availability features require shared storage. Local storage also does not scale well, because it becomes increasingly difficult to move things around. This is not to say that you cannot make use of local storage for floating or nonpersistent desktops; we discussed some interesting scenarios making use of local storage and SSD drives in the introduction.

VMware supports Fibre Channel, iSCSI, and NAS storage solutions. Although there used to be performance-based recommendations for choosing one over another, this is less of a concern today because storage platforms have evolved to deliver good performance regardless of the underlying storage technology.

 