IT@Intel White Paper
Intel IT
IT Best Practices
Data Center Solutions
May 2012

Using Converged Network Adapters for FCoE Network Unification

Executive Overview

Switching to FCoE deployment using dual-port Intel® Ethernet X520 Server Adapters would yield a greater than 50-percent reduction in total network costs per server rack.

To unify local area network and storage area network infrastructure over a single fabric, Intel IT evaluated the price and performance advantages of switching to Fibre Channel over Ethernet (FCoE) using converged network adapters (CNAs). In comparing a CNA solution to a two-card solution—a 10 Gigabit Ethernet network interface card and a Fibre Channel (FC) host bus adapter (HBA)—we determined that CNAs provide a technically sound, cost-effective solution for network unification in our virtualized environment. Our evaluation revealed that switching to FCoE deployment using the dual-port 10-gigabit (Gb) Intel® Ethernet Server Adapter X520 series had the following benefits:

• Reduces total network costs per server rack by more than 50 percent
• Delivers FC performance comparable to and, in some cases, exceeding the performance of an FC HBA
• Enables controllable prioritization for LAN or SAN traffic through network Quality of Service mechanisms that regulate contention for bandwidth
• Allows application loads to drive nearly 90-percent full bi-directional bandwidth utilization on FCoE or TCP/IP
• Provides an open solution that, taking advantage of native FCoE initiators in operating systems, offers an efficient and cost-effective alternative to more expensive proprietary CNAs that offload FC and FCoE processing onto the adapter

To realize these advantages, Intel IT plans to use FCoE configurations and dual-port 10-Gb Intel Ethernet X520 server adapters in new and replacement servers in our virtualized environment.

Craig Pierce, System Engineer, Intel Architecture Systems Integration (IASI)
Sanjay Rungta, Senior Principal Engineer, Intel IT
Sreeram Sammeta, System Engineer, IASI
Terry Yoshii, Research Staff, Intel IT

IT@INTEL
The IT@Intel program connects IT professionals around the world with their peers inside our organization, sharing lessons learned, methods, and strategies. Our goal is simple: share Intel IT best practices that create business value and make IT a competitive advantage. Visit us today at www.intel.com/IT or contact your local Intel representative if you'd like to learn more.

Contents
Executive Overview
Background
  How FCoE Works
Solution
  Converged Network Adapter Test Goals and Challenges
  Storage Performance Comparison: Throughput
  Storage Performance Comparison: Response Time
  Storage Performance Comparison: CPU Effectiveness
  Baseline Bandwidth Utilization
  Quality-of-Service Testing
  Storage I/O Latency during Quality-of-Service Testing
Conclusion
Acronyms

BACKGROUND

For years, many organizations have run two separate, parallel networks in their data center. To connect servers and clients, as well as connect to the Internet, they use an Ethernet-based local area network (LAN). For connecting servers to the storage area network (SAN) and the block storage arrays used for storing data, they use a Fibre Channel (FC)-based network.
In late 2010, Intel IT decided our existing 1-gigabit Ethernet (1GbE) network infrastructure was no longer adequate to meet Intel's rapidly growing business requirements and the resource demands of our increasingly virtualized environment. We needed a more cost-effective solution that unifies our fabric, provides equal or better performance, and enables traffic prioritization through quality of service (QoS) mechanisms. Our solution: a 10GbE infrastructure, which we deployed initially for connection to the local area network (LAN) and to host bus adapters (HBAs) that connect to our FC-based storage area network (SAN).

The recent development of Fibre Channel over Ethernet (FCoE), a storage protocol that enables FC communications to run directly over Ethernet, provides a way for companies to consolidate these networks into a single common network infrastructure. By unifying LANs and SANs, eliminating redundant switches, and reducing cabling and network interface card (NIC) counts, an FCoE server adapter—specifically, the dual-port 10-Gb Intel® Ethernet X520 Server Adapter—can:

• Reduce capital expenditures (CapEx)
• Cut power and cooling costs by reducing the number of components
• Simplify administration by reducing the number of devices that need to be managed

In addition, network unification using FCoE reduces operating expenditures. For a large IT organization such as Intel IT, this is a significant advantage. Our 87 data centers support a massive worldwide computing environment that houses approximately 90,000 servers.

The timing of FCoE's release is advantageous as well. We recently evaluated Intel's existing 1GbE network infrastructure and found it inadequate to meet Intel's rapidly growing business requirements and the increasing demands they place on data center resources.¹

¹ We address in detail our work evaluating the advantages to Intel of upgrading to 10GbE network infrastructure in the IT@Intel white paper "Upgrading Data Center Network Architecture to 10 Gigabit Ethernet," Intel Corporation, January 2011.

Among the trends that supported our transition to a 10GbE data center fabric design are:

• The escalating data handling demands created by increasing compute density in our Design computing domain and by large-scale virtualization in our Office and Enterprise computing domain
• The need to match network performance demands with the increasing gains in file server performance due to the latest high-performance Intel® processors and clustering technologies. The network, not the file servers, was the limiting factor in supporting faster throughput.
• A 40-percent annual growth in Internet connection requirements driving a need for greater bandwidth through the organization

When we found we could reduce our network total cost of ownership by as much as 18 to 25 percent, we began the transition to a 10GbE data center fabric design in 2011.

In our initial implementation, we used 10GbE NICs to connect servers to the LAN. To connect to the FC-based SAN, we used HBAs. At the time, we viewed this as a temporary solution until we could fully evaluate the use of converged network adapters (CNAs) to enable a unified network fabric through FCoE.

How FCoE Works

FCoE is essentially an extension of FC over a different link layer transport. FCoE maps FC over Ethernet while remaining independent of the Ethernet forwarding scheme. This enables FC to use 10GbE networks while preserving the FC protocol. With the new data center bridging (DCB) specification, the 10-Gb DCB protocol also alleviates two previous concerns: packet loss and latency.
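The white paper contains no code, but the mapping just described can be sketched in a few lines of Python. The frame layout below follows the general FC-BB-5 scheme (a priority-tagged Ethernet header with the FCoE EtherType 0x8906, a small encapsulation header ending in a start-of-frame code, the unmodified FC frame, and an end-of-frame trailer); the MAC addresses, VLAN, priority, reserved-field sizing, and SOF/EOF code points are illustrative assumptions rather than a byte-exact rendering of the standard.

import struct

# EtherType values (these assignments are standard).
ETHERTYPE_8021Q = 0x8100   # VLAN tag; its 3-bit priority field is what DCB schedules on
ETHERTYPE_FCOE = 0x8906    # Fibre Channel over Ethernet

def encapsulate_fc_frame(dst_mac: bytes, src_mac: bytes, vlan_id: int,
                         priority: int, fc_frame: bytes,
                         sof: int = 0x2E, eof: int = 0x42) -> bytes:
    """Wrap a complete FC frame (FC header + payload + FC CRC) for 10GbE
    transport. The FC frame itself is untouched, which is why FCoE preserves
    existing FC constructs, tools, and SAN equipment."""
    # 802.1Q tag: 3-bit priority code point plus 12-bit VLAN ID.
    tci = ((priority & 0x7) << 13) | (vlan_id & 0x0FFF)
    eth_header = dst_mac + src_mac + struct.pack("!HHH", ETHERTYPE_8021Q, tci,
                                                 ETHERTYPE_FCOE)
    # FCoE encapsulation header: version plus reserved bits, ending in a
    # one-byte start-of-frame (SOF) code (sized here to 14 bytes).
    fcoe_header = bytes(13) + bytes([sof])
    # Trailer: one-byte end-of-frame (EOF) code plus padding; the Ethernet
    # frame check sequence is appended by the adapter.
    fcoe_trailer = bytes([eof]) + bytes(3)
    return eth_header + fcoe_header + fc_frame + fcoe_trailer

# A full-size FC frame is roughly 2,148 bytes, which is why FCoE links are
# typically run with "baby jumbo" Ethernet frames of about 2.5 KB.
frame = encapsulate_fc_frame(dst_mac=b"\x0e\xfc\x00\x00\x00\x01",   # illustrative addresses
                             src_mac=b"\x00\x1b\x21\x00\x00\x02",
                             vlan_id=100, priority=3,
                             fc_frame=bytes(2148))
print(len(frame), "bytes on the wire before the Ethernet FCS")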
Servers connect to FCoE with CNAs, which combine both FC HBA and Ethernet NIC functionality on a single adapter card. Use of FCoE and CNAs consolidates TCP/IP and SAN data traffic on a unified network. For Intel IT, a unified fabric using CNAs represents an important next step in the ongoing effort to make the conversion of our data center architecture to 10GbE server connections as cost-efficient as possible.

Sidebar: Fibre Channel over Ethernet (FCoE)

One disadvantage of Fibre Channel (FC) as a network technology for storage area networks (SANs) is that it's incompatible with Ethernet, the dominant server networking technology. Although SANs and Ethernet networks perform substantively the same function, they use different technologies and thus are entirely separate physical networks, with separate switches, cabling, networking hardware (such as network interface cards and host bus adapters), and connections into each server. The expertise required to support each network is also different.

Current industry standards combine server and storage networks over a single unified fabric. The most prominent of these is FCoE. This specification preserves the storage-specific benefits of FC and allows it to be transported over Ethernet in the enterprise. Under FCoE, both the Ethernet and the FC protocols are merged, enabling organizations to deploy a single, converged infrastructure carrying both SAN and server TCP/IP network traffic. FCoE preserves all FC constructs, providing reliable delivery, while preserving and interoperating with an organization's investment in FC SANs, equipment, tools, and training.

SOLUTION

We examined two approaches to providing a converged adapter. One approach uses an open solution that takes advantage of native FCoE initiators in operating systems, enabling the adapter to work in a complementary way with the platform hardware and the operating system. This approach provides a cost-effective way to handle both FCoE and TCP/IP traffic (see Figure 1).

[Figure 1. A unified networking Fibre Channel over Ethernet (FCoE) configuration. The diagram shows the software data path: the file system and the native operating-system FCoE initiator and management (FCoE encapsulation and decapsulation, FCoE initialization protocol) over virtual Storport*, Open FC, and TCP/IP; Intel® Ethernet base drivers with buffers, interrupts, and Data Center Bridging (priorities assigned to traffic classes per 802.1p); adapter traffic classes and queues (traffic classes mapped to queues per 802.1Qaz); a classification and quality-of-service engine and a prioritization engine (802.1Q tag insertion, priority-group scheduling, bandwidth allocation, flow control); FCoE offloads for data-path acceleration (DDP, CRC Rx and Tx, LSO, RSS); and a converged port presenting separate FCoE and LAN MAC addresses on separate vLANs, carrying priority-tagged packets and congestion messages. Other labeled elements include the host program interface (API, DCBx, FC), local area network configuration, FC host bus adapter, data center bridging exchange (DCBx) link protocol and FC multipath I/O, and iSCSI. The converged network adapter uses native FCoE initiators in the operating system to work with the platform hardware and operating system, handling FCoE traffic at a lower cost than a hardware-based solution.

Abbreviations: API – application programming interface; CRC – cyclic redundancy check; DDP – direct data placement; FCoE – Fibre Channel over Ethernet; iSCSI – internet Small Computer System Interface; LSO – large segment offload; MAC – media access control; RSS – receive side scaling; Rx – receive or incoming (data); Tx – transmit or outgoing (data); vLAN – virtual LAN]
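As a rough companion to Figure 1, the sketch below models the software-initiator data path in Python: the operating system's native FCoE initiator builds the FC frame in software, the FCoE layer encapsulates and priority-tags it, and the base driver places it on a transmit queue for the storage traffic class so DCB can schedule it separately from LAN traffic. The structure, priority values, and placeholder frame contents are assumptions for illustration, not Intel's implementation.

from dataclasses import dataclass

FCOE_PRIORITY = 3        # 802.1p priority assumed for storage traffic
LAN_PRIORITY = 0         # default priority assumed for TCP/IP traffic

@dataclass
class ScsiRequest:
    lun: int
    lba: int
    blocks: int
    write: bool

def os_fcoe_initiator(req: ScsiRequest) -> bytes:
    """Native FCoE initiator in the OS: build the FC frame in software.
    (Placeholder content; a real initiator emits FCP command/data frames.)"""
    return b"FC-FRAME:" + repr(req).encode()

def fcoe_encapsulate(fc_frame: bytes) -> tuple:
    """FCoE layer: wrap the FC frame and tag it with the storage priority."""
    return FCOE_PRIORITY, b"FCOE-HDR" + fc_frame + b"FCOE-EOF"

def transmit(priority: int, frame: bytes, queues: dict) -> None:
    """Base driver: map the 802.1p priority to a per-traffic-class queue,
    where the prioritization engine of Figure 1 schedules it under DCB."""
    queues.setdefault(priority, []).append(frame)

queues = {}
# A 4-KB read for LUN 0 travels the storage path...
transmit(*fcoe_encapsulate(os_fcoe_initiator(ScsiRequest(lun=0, lba=2048, blocks=8, write=False))), queues)
# ...while ordinary LAN traffic lands on a different queue.
transmit(LAN_PRIORITY, b"TCP/IP frame", queues)
print({prio: len(q) for prio, q in queues.items()})   # {3: 1, 0: 1}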
The second approach uses custom hardware adapters with FC and FCoE (and in some cases TCP/IP) protocol processing embedded in the hardware. With this kind of approach, the CNA converts FCoE traffic to FC packets in the hardware, and the CNA manufacturer provides its custom drivers, interfaces, and management software. The result is a more expensive adapter.

Intel IT decided to test the first approach—which is software driver-based—because it would provide a robust, scalable, high-performance server connectivity solution without the expensive, custom hardware. What's more, we knew that recent improvements in some of the latest hypervisors now enable them to work with FCoE traffic. This meant the software driver-based approach could be used in virtualized environments.

Sidebar: Quality of Service (QoS)

In computer networking, QoS resource reservation control mechanisms prioritize different applications, users, or data flows to ensure a certain level of performance. For example, a particular bit rate, along with limits on delay, jitter, packet-dropping probability, and bit error rate, may be guaranteed for an application that is delay- or loss-sensitive, such as storage or video. For such applications, QoS guarantees are important when network capacity is insufficient for the concurrent data flow.
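The following toy model illustrates the kind of QoS arbitration described above, in the style of Enhanced Transmission Selection (IEEE 802.1Qaz): each traffic class is guaranteed its configured share of the 10GbE link under contention, and capacity one class leaves idle is available to the other. The 60/40 split and class names are assumptions for illustration; in practice the adapter and switch enforce these shares per traffic class in hardware.

# A toy model of ETS-style (IEEE 802.1Qaz) bandwidth allocation on a 10GbE
# link shared by a storage class and a LAN class.
LINK_GBPS = 10.0
ETS_SHARE = {"SAN (FCoE)": 0.60, "LAN (TCP/IP)": 0.40}   # assumed split, not Intel's setting

def allocate(offered_gbps: dict) -> dict:
    """Give each class its guaranteed share under contention, then hand any
    unused capacity to classes that still have demand."""
    alloc = {c: min(offered_gbps[c], ETS_SHARE[c] * LINK_GBPS) for c in offered_gbps}
    leftover = LINK_GBPS - sum(alloc.values())
    for c in alloc:
        extra = min(leftover, offered_gbps[c] - alloc[c])
        alloc[c] += extra
        leftover -= extra
    return alloc

# Both classes push 8 Gb/s: each is held to its guaranteed share.
print(allocate({"SAN (FCoE)": 8.0, "LAN (TCP/IP)": 8.0}))  # {'SAN (FCoE)': 6.0, 'LAN (TCP/IP)': 4.0}
# The LAN goes quiet: storage can use the spare capacity.
print(allocate({"SAN (FCoE)": 8.0, "LAN (TCP/IP)": 1.0}))  # {'SAN (FCoE)': 8.0, 'LAN (TCP/IP)': 1.0}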
Converged Network Adapter Test Goals and Challenges

Recognizing cost as an important factor associated with an adapter that could ultimately be used on hundreds of servers, our goal was to see how a CNA using a software driver for FCoE processing would work for Intel's data centers. To find out, we put dual-port 10-Gb Intel Ethernet X520 Server Adapters, which support FCoE, in two Intel® Xeon® processor 5600 series-based servers and ran a variety of performance tests. We wanted to see if such a CNA could address the following challenges:

• Provide storage performance (throughput, response time, CPU effectiveness, and latency) comparable to an FC HBA
• Drive the network load and the storage load to their peaks, when each is run in isolation
• Be regulated through appropriate QoS mechanisms (see sidebar) that assign higher or lower priority to either traffic—network or storage. This is a critical capability because a converged network must be able to protect storage traffic from any other traffic. Today's SAN networks provide this type of isolation through a separate infrastructure.
• Serve as a cost-effective, technically viable solution for network unification in our virtualized environment

Storage Performance Comparison: Throughput

The first test had to show that a CNA using a software driver for FCoE processing could realize equivalent or similar storage-processing performance to the dual-port 8-Gb Fibre Channel HBAs we were replacing. To determine this, we ran a series of tests using different I/O block sizes—from 4 kilobytes (KB) to 128 KB—through both types of adapters. We deliberately avoided saturating the hosts, network, and storage arrays in these tests by specifying a cache-intensive load—a 5-megabyte (MB) working set size—and by driving moderately high I/O from 16 workers to four logical unit numbers at a queue depth of 1 from a single host.

Tests showed negligible differences at the smaller block sizes. The Fibre Channel Protocol (FCP) performance of the dual-port 8-Gb FC HBA was 13.1 to 2.5 percent higher than the FCoE performance of the dual-port Intel Ethernet X520 Server Adapter running at 4-KB to 16-KB I/O block sizes (see Figure 2). More significant was the difference at the higher I/O block sizes, where the FCoE performance of the dual-port Intel Ethernet X520 Server Adapter was 21.9 to 35.2 percent higher than that of the FC HBA at 32-KB to 128-KB block sizes.

Figure 2. Average I/O operations per second (IOPS); 67% read, 100% random test. Using a software driver to handle Fibre Channel over Ethernet, the dual-port 10-Gb Intel® Ethernet X520 Server Adapter delivered 21.9 to 35.2 percent higher throughput running at 32-kilobyte (KB) to 128-KB block I/O sizes compared to a dual-port 8-Gb Fibre Channel host bus adapter.

I/O Block Size   Fibre Channel Protocol (FCP) IOPS   Fibre Channel over Ethernet (FCoE) IOPS
4 KB             84,841                              75,045
8 KB             77,741                              70,051
16 KB            65,110                              63,491
32 KB            42,440                              51,724
64 KB            26,710                              34,414
128 KB           13,510                              18,272
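To make the block-size effect concrete, the short script below converts the Figure 2 IOPS numbers into throughput (IOPS multiplied by block size) and reports which adapter leads at each block size. It reproduces the percentages quoted above: the FC HBA leads by 13.1 down to 2.5 percent at 4-KB to 16-KB blocks, and the FCoE-based X520 adapter leads by 21.9 up to 35.2 percent at 32-KB to 128-KB blocks, each measured relative to the slower adapter.

# Figure 2 data: average IOPS for a 67% read, 100% random workload.
BLOCK_KB = [4, 8, 16, 32, 64, 128]
IOPS_FCP = [84841, 77741, 65110, 42440, 26710, 13510]   # dual-port 8-Gb FC HBA
IOPS_FCOE = [75045, 70051, 63491, 51724, 34414, 18272]  # dual-port X520, software FCoE

for kb, fcp, fcoe in zip(BLOCK_KB, IOPS_FCP, IOPS_FCOE):
    mbps_fcp = fcp * kb / 1024     # throughput in MB/s = IOPS x block size
    mbps_fcoe = fcoe * kb / 1024
    faster, slower, leader = (fcoe, fcp, "FCoE") if fcoe > fcp else (fcp, fcoe, "FCP")
    lead = (faster - slower) / slower * 100
    print(f"{kb:>3} KB: FCP {mbps_fcp:7.0f} MB/s, FCoE {mbps_fcoe:7.0f} MB/s, "
          f"{leader} leads by {lead:4.1f}%")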