certcollecion.

net
DCUFD

Designing Cisco Data
Center Unfied Fabric
Volume 1
Version 5.0

Student Guide
Text Part Number: 97-3184-01

certcollecion.net

Americas Headquarters
Cisco Systems, Inc.
San Jose, CA

Asia Pacific Headquarters
Cisco Systems (USA) Pte. Ltd.
Singapore

Europe Headquarters
Cisco Systems International BV Amsterdam,
The Netherlands

Cisco has more than 200 offices worldwide. Addresses, phone numbers, and fax numbers are listed on the Cisco Website at www.cisco.com/go/offices.
Cisco and the Cisco logo are trademarks or registered trademarks of Cisco and/or its affiliates in the U.S. and other countries. To view a list of Cisco trademarks, go to this
URL: www.cisco.com/go/trademarks. Third party trademarks mentioned are the property of their respective owners. The use of the word partner does not imply a
partnership relationship between Cisco and any other company. (1110R)

DISCLAIMER WARRANTY: THIS CONTENT IS BEING PROVIDED “AS IS.” CISCO MAKES AND YOU RECEIVE NO WARRANTIES
IN CONNECTION WITH THE CONTENT PROVIDED HEREUNDER, EXPRESS, IMPLIED, STATUTORY OR IN ANY OTHER
PROVISION OF THIS CONTENT OR COMMUNICATION BETWEEN CISCO AND YOU. CISCO SPECIFICALLY DISCLAIMS ALL
IMPLIED WARRANTIES, INCLUDING WARRANTIES OF MERCHANTABILITY, NON-INFRINGEMENT AND FITNESS FOR A
PARTICULAR PURPOSE, OR ARISING FROM A COURSE OF DEALING, USAGE OR TRADE PRACTICE. This learning product
may contain early release content, and while Cisco believes it to be accurate, it falls subject to the disclaimer above.

Student Guide

© 2012 Cisco and/or its affiliates. All rights reserved.

certcollecion.net

Students, this letter describes important
course evaluation access information!

Welcome to Cisco Systems Learning. Through the Cisco Learning Partner Program,
Cisco Systems is committed to bringing you the highest-quality training in the industry.
Cisco learning products are designed to advance your professional goals and give you
the expertise you need to build and maintain strategic networks.
Cisco relies on customer feedback to guide business decisions; therefore, your valuable
input will help shape future Cisco course curricula, products, and training offerings.
We would appreciate a few minutes of your time to complete a brief Cisco online
course evaluation of your instructor and the course materials in this student kit. On the
final day of class, your instructor will provide you with a URL directing you to a short
post-course evaluation. If there is no Internet access in the classroom, please complete
the evaluation within the next 48 hours or as soon as you can access the web.
On behalf of Cisco, thank you for choosing Cisco Learning Partners for your
Internet technology training.
Sincerely,
Cisco Systems Learning

certcollecion.net

certcollecion.net
Table of Contents
Volume 1
Course Introduction .......................................................................................................... 1
Overview ................................................................................................................................................1
Learner Skills and Knowledge .........................................................................................................2
Course Goal and Objectives ..................................................................................................................3
Course Flow ...........................................................................................................................................4
Additional References ............................................................................................................................5
Cisco Glossary of Terms .................................................................................................................6
Your Training Curriculum .......................................................................................................................7
Additional Resources .......................................................................................................................... 10
Introductions ....................................................................................................................................... 12

Cisco Data Center Solutions ......................................................................................... 1-1
Overview ............................................................................................................................................ 1-1
Module Objectives ....................................................................................................................... 1-1

Defining the Data Center ..................................................................................................... 1-3
Overview ............................................................................................................................................ 1-3
Objectives .................................................................................................................................... 1-3
Data Center Solution Components .................................................................................................... 1-4
Data Center Terminology................................................................................................................. 1-14
Data Center Challenges .................................................................................................................. 1-18
Introduction to Cloud Computing ..................................................................................................... 1-37
Data Center Virtualization ................................................................................................................ 1-51
Summary.......................................................................................................................................... 1-55

Identifying the Cisco Data Center Solution ...................................................................... 1-57
Overview .......................................................................................................................................... 1-57
Objectives .................................................................................................................................. 1-57
Cisco Data Center Architecture Overview ....................................................................................... 1-58
Cisco Data Center Architecture Network ......................................................................................... 1-65
Cisco Data Center Architecture Storage ......................................................................................... 1-88
Summary.......................................................................................................................................... 1-92

Designing the Cisco Data Center Solution ....................................................................... 1-93
Overview .......................................................................................................................................... 1-93
Objectives .................................................................................................................................. 1-93
Design Process ................................................................................................................................ 1-94
Design Deliverables ....................................................................................................................... 1-108
Cisco Validated Designs ................................................................................................................ 1-112
Summary........................................................................................................................................ 1-113
Module Summary ........................................................................................................................... 1-115
Module Self-Check ........................................................................................................................ 1-117
Module Self-Check Answer Key.............................................................................................. 1-119

Data Center Technologies ............................................................................................. 2-1
Overview ............................................................................................................................................ 2-1
Module Objectives ....................................................................................................................... 2-1

Designing Layer 2 and Layer 3 Switching .......................................................................... 2-3
Overview ............................................................................................................................................ 2-3
Objectives .................................................................................................................................... 2-3
Forwarding Architectures ................................................................................................................... 2-4
IP Addressing and Routing .............................................................................................................. 2-10
Summary.......................................................................................................................................... 2-15

certcollecion.net
Virtualizing Data Center Components .............................................................................. 2-17
Overview .......................................................................................................................................... 2-17
Objectives ................................................................................................................................. 2-17
Device Virtualization Mechanisms ................................................................................................... 2-18
Virtual Device Contexts ................................................................................................................... 2-23
Virtualization with Contexts ............................................................................................................. 2-32
Virtualization with Virtual Appliances ............................................................................................... 2-36
Summary ......................................................................................................................................... 2-38

Designing Layer 2 Multipathing Technologies ................................................................ 2-39
Overview .......................................................................................................................................... 2-39
Objectives ................................................................................................................................. 2-39
Network Scaling Technologies ........................................................................................................ 2-40
vPC and MEC .................................................................................................................................. 2-43
Cisco FabricPath ............................................................................................................................. 2-58
Summary ......................................................................................................................................... 2-79
References ................................................................................................................................ 2-79
Module Summary............................................................................................................................. 2-81
Module Self-Check .......................................................................................................................... 2-83
Module Self-Check Answer Key ............................................................................................... 2-85

Data Center Topologies ................................................................................................. 3-1
Overview ............................................................................................................................................ 3-1
Module Objectives ....................................................................................................................... 3-1

Designing the Data Center Core Layer Network ................................................................ 3-3
Overview ............................................................................................................................................ 3-3
Objectives ................................................................................................................................... 3-3
Data Center Core Layer .................................................................................................................... 3-4
Layer 3 Data Center Core Design ..................................................................................................... 3-6
Layer 2 Data Center Core Design ..................................................................................................... 3-8
Data Center Collapsed Core Design ............................................................................................... 3-13
Summary ......................................................................................................................................... 3-15

Designing the Data Center Aggregation Layer ................................................................ 3-17
Overview .......................................................................................................................................... 3-17
Objectives ................................................................................................................................. 3-17
Classic Aggregation Layer Design .................................................................................................. 3-18
Aggregation Layer with VDCs ......................................................................................................... 3-21
Aggregation Layer with Unified Fabric ............................................................................................ 3-29
Aggregation Layer with IP-Based Storage ...................................................................................... 3-35
Summary ......................................................................................................................................... 3-38

Designing the Data Center Access Layer ......................................................................... 3-39
Overview .......................................................................................................................................... 3-39
Objectives ................................................................................................................................. 3-39
Classic Access Layer Design .......................................................................................................... 3-40
Access Layer with vPC and MEC .................................................................................................... 3-43
Access Layer with FEXs .................................................................................................................. 3-44
Access Layer with Unified Fabric .................................................................................................... 3-52
Summary ......................................................................................................................................... 3-56

Designing the Data Center Virtualized Access Layer ...................................................... 3-57
Overview .......................................................................................................................................... 3-57
Objectives ................................................................................................................................. 3-57
Virtual Access Layer ........................................................................................................................ 3-58
Virtual Access Layer Solutions ........................................................................................................ 3-60
Using Cisco Adapter FEX ................................................................................................................ 3-62
Using Cisco VM-FEX ....................................................................................................................... 3-63
Solutions with the Cisco Nexus 1000V Switch ................................................................................ 3-65
Summary ......................................................................................................................................... 3-77

ii

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
Designing High Availability ............................................................................................... 3-79
Overview .......................................................................................................................................... 3-79
Objectives .................................................................................................................................. 3-79
High Availability for IP ...................................................................................................................... 3-80
High Availability Using vPC and VSS .............................................................................................. 3-85
High Availability Using IP Routing and FHRP ................................................................................. 3-87
IP Routing Protocols Deployment Design ................................................................................. 3-88
High Availability Using RHI .............................................................................................................. 3-91
High Availability Using LISP ............................................................................................................ 3-95
Summary........................................................................................................................................ 3-103

Designing Data Center Interconnects............................................................................. 3-105
Overview ........................................................................................................................................ 3-105
Objectives ................................................................................................................................ 3-105
Reasons for Data Center Interconnects ........................................................................................ 3-106
Data Center Interconnect Technologies ........................................................................................ 3-111
Cisco OTV...................................................................................................................................... 3-113
Storage Replication Technologies and Interconnects ................................................................... 3-126
Summary........................................................................................................................................ 3-131
References .............................................................................................................................. 3-131
Module Summary ........................................................................................................................... 3-133
Module Self-Check ........................................................................................................................ 3-135
Module Self-Check Answer Key.............................................................................................. 3-138

 2012 Cisco Systems, Inc.

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

iii

certcollecion.0 © 2012 Cisco Systems.net iv Designing Cisco Data Center Unified Fabric (DCUFD) v5. . Inc.

and virtualization solutions based on fabric extenders (FEXs). and equipment and link virtualization technologies.net DCUFD Course Introduction Overview The Designing Cisco Data Center Unified Fabric (DCUFD) v5. reliable. . The course describes the Cisco data center unified fabric solutions. Fibre Channel over Ethernet (FCoE).certcollecion. determine the requirements.0 is a five-day instructor-led course aimed at providing data center designers with the knowledge and skills needed to design scalable. and intelligent data center unified fabrics. Cisco FabricPath. and design the Cisco data center unified fabric solution based on Cisco products and technologies. and explains how to evaluate existing data center infrastructure.

The subtopic also includes recommended Cisco learning offerings that learners should first complete to benefit fully from this course. 2 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. .certcollecion.net Learner Skills and Knowledge This subtopic lists the skills and knowledge that learners must possess to benefit fully from the course. Inc.

managing. solutions. Inc. and physical security threats  Design data center infrastructure that is required to implement network-based application services  Design data center management to facilitate monitoring. network security threats. and components  Provide a comprehensive and detailed overview of technologies used in data centers. Upon completing this course.certcollecion.net Course Goal and Objectives This topic describes the course goal and objectives. and describe scalability implications and their possible use in cloud environments  Design data center connections and topologies in the core layer  Explain and design data center storage designs. and limitations of various storage technologies  Design secure data centers that are protected from application-based threats. and provisioning data center equipment and applications © 2012 Cisco Systems. Course Introduction 3 . Cisco Data Center Architecture solution. you will be able to meet these objectives:  Evaluate the data center solution design and design process regarding the contemporary data center challenges.

certcollecion. The exact timing of the subject materials and labs depends on the pace of your specific class. Inc.net Course Flow This topic presents the suggested flow of the course materials. 4 Designing Cisco Data Center Unified Fabric (DCUFD) v5. .0 © 2012 Cisco Systems. The schedule reflects the recommended structure for this course. This structure allows enough time for the instructor to present the course information and for you to work through the lab activities.

as well as information on where to find additional technical references. Course Introduction 5 .certcollecion. Inc. © 2012 Cisco Systems.net Additional References This topic presents the Cisco icons and symbols that are used in this course.

com/wiki/Category:Internetworking_Terms_and_Acronyms_(ITA). refer to the Cisco Internetworking Terms and Acronyms glossary of terms at http://docwiki.0 © 2012 Cisco Systems. Inc.cisco. 6 Designing Cisco Data Center Unified Fabric (DCUFD) v5.net Cisco Glossary of Terms For additional information on Cisco terminology.certcollecion. .

which is the home of Cisco Certifications. Course Introduction 7 . visit the Cisco Learning Network. Inc.certcollecion. To prepare and learn more about IT certifications and technology tracks.net Your Training Curriculum This topic presents the training curriculum for this course. © 2012 Cisco Systems.

DCUFD v5. All rights reserved.com/go/certifications © 2012 Cisco and/or its affiliates.0 © 2012 Cisco Systems. a discussion forum open to anyone holding a valid Cisco Career Certification: 8  Cisco CCDE®  Cisco CCIE®  Cisco CCDP®  Cisco CCNP®  Cisco CCNP® Data Center  Cisco CCNP® Security  Cisco CCNP® Service Provider  Cisco CCNP® Service Provider Operations  Cisco CCNP® Voice  Cisco CCNP® Wireless  Cisco CCDA®  Cisco CCNA®  Cisco CCNA® Data Center  Cisco CCNA® Security  Cisco CCNA® Service Provider  Cisco CCNA® Service Provider Operations  Cisco CCNA® Voice  Cisco CCNA® Wireless Designing Cisco Data Center Unified Fabric (DCUFD) v5.cisco.certcollecion. Inc. .net Expand Your Professional Options and Advance Your Career Cisco CCNP Data Center Implementing Cisco Data Center Unified Fabric (DCUFI) Implementing Cisco Data Center Unified Computing (DCUCI) Available Exams (pick a group of 2) Designing Cisco Data Center Unified Computing (DCUCD) Designing Cisco Data Center Unified Fabric (DCUFD) or Troubleshooting Cisco Data Center Unified Fabric (DCUFT) Troubleshooting Cisco Data Center Unified Computing (DCUCT) www.0-11 You are encouraged to join the Cisco Certification Community.

suggestions. and information about Cisco Career Certification programs and other certification-related topics.cisco.net It provides a gathering place for Cisco certified professionals to share questions.com/go/certifications. Course Introduction 9 . visit http://www. Inc. © 2012 Cisco Systems.certcollecion. For more information.

refer to the information available at the following pages.certcollecion.net Additional Resources For additional information about Cisco technologies. and products. solutions. 10 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. Inc. .

Inc.net © 2012 Cisco Systems.certcollecion. Course Introduction 11 .

. 12 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems.net Introductions Please use this time to introduce yourself to your classmates so that you can better understand the colleagues with whom you will share your experience.certcollecion. Inc.

Module Objectives Upon completing this module. and environmental challenges and goals for contemporary data center solutions  Provide a high-level overview of the Cisco data center solution architectural framework and components within the solution  Define the tasks and phases of the design process for the Cisco Unified Computing solution . servers. virtual appliances. you will learn how to define data centers.net Module 1 Cisco Data Center Solutions Overview Modern data centers operate in high availability and are the foundation for business processes. the Cisco Data Center Architecture solution. The data center design process needs to be well run and well documented. identify technologies. security appliances.certcollecion. and design processes to successfully design a data center. These include switches. This ability includes being able to meet these objectives:  Analyze the relationship between the business. technical. Cisco offers a comprehensive set of technologies and devices that are used to implement data centers. the cloud computing model has been emerging and data centers provide the infrastructure that is needed to support various cloud computing deployments. In this module. Additionally. and components. and so on. This module provides an overview of the design process and documentation. you will be able to evaluate data center solution designs and the design process regarding contemporary data center challenges.

certcollecion.net 1-2 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. Inc. .

you will be able to analyze the relationship between the business. defines commonly used terms. and environmental challenges  Recognize the cloud computing paradigm. providing highly available services to its users and customers.certcollecion. and concepts  Recognize the importance of virtualization technologies and solutions for data center evolution . This lesson outlines various categories of data centers. terms. This ability includes being able to meet these objectives:  Categorize general data center solution components  Define the baseline technology and terminology used in data center solutions  Analyze business. technical.net Lesson 1 Defining the Data Center Overview A modern data center is an essential component in any business. Objectives Upon completing this lesson. and environmental challenges and goals for contemporary data center solutions. and analyzes challenges and concepts with a focus on virtualization. technical.

Internet Small Computer Systems Interface (iSCSI). and computing resources. Data Center Solutions A data center infrastructure is an essential component that supports Internet services. and storage devices like disk arrays and tape libraries  Computing technologies and equipment. Network File System (NFS). electronic communications. and security aspects of the solution Designing Cisco Data Center Unified Fabric (DCUFD) v5. multilayer and converged devices.certcollecion. storage network equipment. and other business services and solutions: 1-4  Network technologies and equipment like intelligent switches. storage. operating systems. Inc. . Data Center Definition A data center is a centralized or geographically distributed group of departments that house the computing systems and their related storage equipment or data libraries. and Layer 2 and Layer 3 protocols  Storage solutions and equipment that cover technologies ranging from Fibre Channel.net Data Center Solution Components This topic describes how to categorize general data center solution components. A data center has controlled centralized management that enables an enterprise to operate according to business needs.0 © 2012 Cisco Systems. Fibre Channel over Ethernet (FCoE). applications. digital commerce. server virtualization. including general purpose and specialized servers  Operating system and server virtualization technologies  Application services technologies and products like load balancers and session enhancement devices  Management systems that are used to manage network. high-availability mechanisms.

A data center network must be flexible and must support nondisruptive scalability of applications and computing resources to adapt the infrastructure for future business needs. Cisco Data Center Solutions 1-5 .certcollecion. Business Continuance Definition Business continuity is the ability to adapt and respond to risks as well as opportunities in order to maintain continuous business operations. Inc. along with flexibility for future data center changes.net  Security technologies and equipment that are employed to ensure confidentiality and security of sensitive data and systems  Desktop virtualization solutions and access clients Data Center Goals A data center goal is to sustain the business functions and operations. There are four primary aspects of business continuity:  High availability (disaster tolerance)  Continuous operations  Disaster recovery  Disaster tolerance © 2012 Cisco Systems.

. and define the lifetime of the solution: 1-6  Physical facility: The physical facility encompasses all the data center facility physical characteristics that affect the already-mentioned infrastructure.certcollecion.net Apart from the already-mentioned aspects and components of the data center solution. Alternatively. This organization can be in the form of a single department that takes care of all the IT aspects (typically with the help of external IT partners). physical space and racks. cooling capacity. This includes available power. and so on. there are two very important components that influence how the solution is used and scaled. it can involve multiple departments. in large companies. physical security and fire prevention systems. Inc.  IT organization: The IT organization defines the IT departments and how they interact in order to offer IT services to business users. Designing Cisco Data Center Unified Fabric (DCUFD) v5. with each department taking care of a subset of the data center infrastructure.0 © 2012 Cisco Systems.

DCUFD v5. data centers are being consolidated because the distributed approach is expensive in the long term. The second era of data center computing was characterized by pure client-server and distributed computing. This solution also provides significant gains in terms of return on investment (ROI) and the total cost of ownership (TCO). whether the processes are conducted on a factory floor in a pod or in a Tier 3 data center 200 feet (61 m) below the ground in order to support precious metal mining. This approach will support your business processes. Cisco customers are at different stages of this journey. data centers were monolithic and centralized. All rights reserved.net Decentralized Virtualized Mainframe Client-Server and Distributed Computing Service-Oriented IT Relevance and Control Centralized Consolidate Virtualize Automate Application Architecture Evolution © 2012 Cisco and/or its affiliates. resilience. because they are an advantageous solution in terms of availability. and service level agreements (SLAs). and the services were distributed due to poor computing ability and high link costs. Cisco Data Center Solutions 1-7 . Mainframes are still used in the finance sector.0—1-6 Data centers have changed and evolved over time. Inc. In short. Virtualization Is Changing the Data Center Architecture The Cisco Data Center Business Advantage framework brings sequential and stepwise clarity to the data center. At first. the communication infrastructure is relatively cheaper and the computing capacities have increased. The new solution is equipment virtualization. © 2012 Cisco Systems. which the users accessed to perform their work on the mainframe. Today. Consequently. employing mainframes and terminals. which makes the utilization of servers much more common than in the distributed approach. Data center networking capabilities bring an open and extensible data center networking approach to the placement of your IT solution. Cisco data center networking delivers location freedom. Applications were designed in such a way that client software was used to access an application. The mainframes were too expensive.certcollecion.

on any server. architectures. or building block. Cisco has improved reliability and validated best-practice architectures and has also improved customer success. Phase 2: Unified Fabric The introduction of a unified fabric is the first step toward removing the network barriers to deploying any workload. in seconds. and it can deliver provisioning freedom. This subject will be discussed in more detail later on.net Phase 1: Data Center Networking and Consolidation Earlier. of the data center is not the physical server. to include the providers as well (with no compromise on service levels). and so on.certcollecion. storage networking. By delivering these types of services as an integrated system. proximity. Bringing security. principles. thus simplifying the line of business and services in the enterprise. Phase 4: Enterprise-Class Clouds and Utility With the integration of cloud technologies. This solution is a simpler. but it also enables the enterprise to execute business processes in the best places and in the most efficient ways. and interoperability to standalone cloud architectures can enable enterprise-class clouds. Cisco Unified Computing focuses on automation simplification for a predesigned virtualization platform. but on portable workloads. more efficient architecture that extends the life cycle of capital assets. This market extends from the enterprise to the provider and from the provider to another provider based on available capacity. application networking. Cisco entered all the data center transport and network markets—server networking. and virtualization platforms. and security networking. Inc. this time the growth should be predicated not on addressing a federation across providers. It is the virtual machine (VM) that provides a consistent abstraction between physical hardware and logically defined software constructs. and technologies. . It is an evolution that is enabled by integrating cloud computing principles. computing. Phase 3: Unified Computing and Automation Bringing the network platform that is created by the unified fabric together with the virtualization platform and the computing and storage platform introduces a new paradigm in the market: Cisco Unified Computing. 1-8 Designing Cisco Data Center Unified Fabric (DCUFD) v5. It is another choice that simplifies startup processes and ongoing operations within the virtualized environment. and architectures and the Cisco Unified Computing architecture. The new atomic unit. After the process is automated. power cost. workloads can become increasingly portable.0 © 2012 Cisco Systems. However. all with high availability. The freedom of choice about where business processes are executed is extended across the boundaries of an organization. control. enterprise internal IT resources will be seen as a utility that is able to automate and dynamically provision the infrastructure across the network. Phase 5: Intercloud Market A goal of Cisco is to create a market as a new wave of innovation and investment similar to what the industry last saw with the Internet growth of the mid-1990s.

This silo approach has led to underutilization of resources. The goal was to provide the most appropriate server. These storage silos can be in the form of integrated. difficulty in managing these disparate complex environments. and difficulty in applying uniform services such as security and application optimization. Over the past decade. storage. most data centers have evolved on an ad hoc basis. or small SAN islands. all designed to deploy a series of applications:  IBM mainframe applications  Email applications on Microsoft Windows servers  Business applications on IBM AS/400 servers  Enterprise resource planning (ERP) applications on UNIX servers  R&D applications on Linux servers In addition. a broad collection of storage silos exists to support these disparate server environments. It is also difficult to implement strong. There are many server platforms in current data centers. Inc. This strategy led to data centers with stovepipe architectures or technology islands that were difficult to manage or adapt to changing environments. network-attached storage (NAS). © 2012 Cisco Systems. as well as the intellectual capital (programmers and support staff) of the enterprise.certcollecion. Cisco Data Center Solutions 1-9 . consistent disasterrecovery procedures and business continuance functions. and networking infrastructure that supported specific applications. glass-house.net Data center trends that have affected the data center architecture and design can be summarized in significant phases and stages: Phase 1 Isolated Application Silos Data centers are about servers and applications. raised-floor structures that housed the computer resources. direct-attached storage (DAS). The first data centers were mostly mainframe.

Virtualization in a data center enables creation of huge resource pools. lighting. any-to-any access. file blocks. and more flexible resource utilization. Almost any component of a data center can now be virtualized—storage. and assignment to applications. Building and certifying a green data center or other facility can be expensive initially. and increasingly from the general public. mirroring. Consolidation has also reduced the cost of data center deployment. sustainable landscaping. allocation. integrated modules. LAN and WAN. Installing catalytic converters on backup generators is also helpful. computing resources. servers. tape devices. . Virtualization represents the foundation for further automation of data center services. and additional savings are derived from reduced cabling. Today. There is growing pressure from environmentalists. Automation Automation of data center services has been made possible by consolidating and virtualizing data center components. and volume management functions.0 © 2012 Cisco Systems. and networks has enabled centralization of data center components. A unified data center fabric is based on a unified I/O transport protocol.certcollecion. file systems. Virtualization Virtualization is the creation of another abstraction layer that separates physical from logical characteristics and enables further automation of data center services. Another advantage is the fact that green facilities offer employees a healthy. simplified management. and technology convergence. and evaporative cooling. but longterm cost savings can be realized in operations and maintenance. Monitoring data center resource utilization is a necessary condition for an automated data center environment. and automated resource provisioning. The construction and operation of a green data center includes advanced technologies and strategies. such as photovoltaic. and clustering I/Os. This pressure is in terms of monetary support for the creation and maintenance of ecologically responsible technologies. and so on. Phase 3 Converged Network Converged networks promise the unification of various networks and single all-purpose communication applications. networks. increased. Other data center services can also be automated. Converged networks potentially lead to reduced IT cost and increased user productivity.net Phase 2 Consolidation Consolidation of storage. Computing and networking resources can be automatically provisioned whenever needed. Inc. integrated modules. The use of alternative energy technologies. and waste recycling. comfortable work environment. which could potentially transport SAN. Energy-Efficient Data Center A green data center is a data center in which the mechanical. such as data migration. such as minimizing the footprints of the buildings and using low-emission building materials. green data centers provide an 85 percent power reduction using virtualized. servers. electrical. and automated data center management. 1-10 Designing Cisco Data Center Unified Fabric (DCUFD) v5. heat pumps. port consumption. Rack space is saved with virtualized. efficient. Most protocols today tend to be transported across a common unified I/O channel and common hardware and software components of the data center architecture. and support cost. and computer systems are designed for maximum energy efficiency and minimum environmental impact. automated Information Lifecycle Management (ILM). is also being considered. for governments to offer green incentives. The advantages of data center automation are automated dynamic resource provisioning.

Inc. All rights reserved. storage. and storage services. computing services. services.Servers Blade Servers • Storage: . networks. server I/O. It is crucial to understand the functions of each piece of equipment before consolidating it.Server I/Os • Network and fabric: Enterprise Storage . computing-agnostic.0—1-8 Consolidation is defined as the process of bringing together disconnected parts to make a single and complete whole. and process consolidation:  Reduced number of servers. security-agnostic. LAN.certcollecion. it means replacing several small devices with a few highly capable pieces of equipment to provide simplicity.Storage devices Consolidated Storage . Cisco Data Center Solutions 1-11 . DCUFD v5. and storage-agnostic by integrating application services. among others. and clustering networks on a common data center network Consolidated Data Center Networks Unified Fabric (Access Layer) LAN—Ethernet Fibre Channel SAN DCB—Ethernet and FCoE © 2012 Cisco and/or its affiliates. application. The primary reason for consolidation is the sprawl of equipment and processes that is required to manage the equipment. security services.net Service Integration The data center network infrastructure is integrating network intelligence and is becoming application-agnostic. and processes Consolidated Servers • Compute: . network. storage devices. and so on  Increased usage of resources using resource pools (of storage and computing resources)  Reduced centralized management  Reduced expenses due to a smaller number of equipment pieces needed  Increased service reliability © 2012 Cisco Systems. There are various important reasons for server. In the data center. cables. • Applications.SAN.

data I/Os. Enhanced Ethernet is a new converged network protocol that is designed to transport unified data and storage I/Os. All initial attempts have been unsuccessful. NFS. 1-12 Designing Cisco Data Center Unified Fabric (DCUFD) v5. lower latency. .8 feet [10 m]).0 © 2012 Cisco Systems. 10Gb/s Data Center Bridging (DCB) uses copper and twinax cables with short distances (32. and lower power requirements than 10BASE-T.certcollecion.Consolidates Ethernet and FCoE server I/O into common Ethernet LAN SAN LAN SAN FCoE traffic (FC.0—1-9 Server I/O consolidation has been attempted several times in the past with the introduction of Fibre Channel and iSCSI protocols that carry storage I/Os. Inc. therefore influencing the server bandwidth requirement of 10 Gb/s. iSCSI) FCoE Header FC Header Fibre Channel Payload CRC EOF FCS Standard Fibre Channel Frame (2148 Bytes) Ethertype = FCoE © 2012 Cisco and/or its affiliates. A growing demand for network storage is influencing network bandwidth demands. and clustering I/Os across the same channel. FICON) 10 GE Link Ethernet Fibre Channel DCB Ethernet Byte 2179 Control information (version. Server virtualization allows the consolidation of multiple applications on a server. All rights reserved. but with lower cost. EOF ordered sets) Byte 0 Ethernet Header Other networking traffic (TCP/IP. SOF. DCUFD v5. FCoE and classical Ethernet can be multiplexed across the common physical DCB connection.net • DCB enables deployment of converged unified data center fabrics: . CIFS. Primary enabling technologies are Peripheral Component Interconnect Express (PCIe) and 10 Gigabit Ethernet.

The components need to properly interact in order to deliver application services. WAN • Data center connects to other IT segments: . campus LAN.certcollecion. Inc.Various connectivity options . The data center is one of the components of the IT infrastructure. © 2012 Cisco Systems. Fabric A SAN Fabric B Fabric B DCUFD v5.Internet and WAN edge LAN • How components fit together: . The data center needs to connect to other segments to deliver application services and enable users to access and use them. All rights reserved. and various demilitarized zone (DMZ) segments hosting public or semi-public services. Cisco Data Center Solutions 1-13 .Campus LAN .Segments with different functionality Multiple links Fibre Channel SAN Fabric A Ethernet Unified Fabric (Ethernet with FCoE) PortChannel © 2012 Cisco and/or its affiliates.0—1-10 Data center architecture is the blueprint of how components and elements of the data center are connected.net Internet. Such segments are Internet and WAN edge.

and is expressed as the mean time between failures (MTBF). where each component can fail independently but have 99 percent reliability. this translates into one battery failure every hour during its 10-hour life span. Protecting against failure is expensive. Data center architecture that provides service availability is a combination of several different levels of data center high-availability features. and depends on the following:  Serviceability: Serviceability is the probability of a service being completed within a given time window.certcollecion. Reliability is a component of high availability that measures how rarely a component or system breaks down. For example.65 days every year. Software. Mean time to repair (MTTR) is the average time that is required to complete a repair action.net Data Center Terminology This topic describes the baseline technology and terminology used in data center solutions. For a system with 10 components. In an ideal situation.9 days of downtime in one year. It is important to identify the most serious causes of service failure and to build cost-effective safeguards against them. Inc. A high degree of component reliability and data protection with redundant disks. 1-14 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Hardware reliability problems cause only 10 percent of the downtime.000 hours. servers. a battery may have a useful life of 10 hours.  Reliability: Reliability represents the probability of a component or system not encountering any failure over a time span. In a population of 50. the reliability of the entire system is not 99 percent. For example. if a system has serviceability of 0. as is downtime. clusters.0 © 2012 Cisco Systems. adapters. human. or process-related failures comprise the other 90 percent.99 to the 10th power). The focus of reliability is to make the data center system components unbreakable. A server with 99 percent reliability will be down for 3. a system can be serviced without any interruption of user support. .000 batteries. and disaster recovery decreases the chances of service outages. This translates to 34. then there is a 98 percent probability that the service will be completed within 3 hours.44 percent (0. but its MTBF is 50.98 for 3 hours. The entire system reliability is 90.

certcollecion. SLAs define your minimum levels of data center availability and often determine what actions will be taken in the event of a serious disruption. Clusters are also examples of a fault-tolerant system. mirrored (backup and secondary data center) site where business and mission-critical applications can be started within a reasonable period of time after the destruction of the primary site.5. © 2012 Cisco Systems. Cisco Data Center Solutions 1-15 . Several components of the system have built-in component redundancy. such as 99. or 99. SLAs can prescribe different expectations in terms of guaranteed application response time. As downtime approaches zero. Setting up a new. such as billing and even penalties. is an extended period of outage of missioncritical service or data that is caused by events such as fire or attacks that damage the entire facility. such as 1 hour or automatic. Ultimately. and other attributes of the service. or 0. and guaranteed data center availability. 0. such as 1. The recovery time or performance loss that is caused by a component failure is close to zero. in the context of online applications.9 percent. The SLA can also prescribe guaranteed application resource allocation time. A more resilient system results in higher availability. off-site facility with duplicate hardware.999. Availability measures the ability of a system or group of systems to keep the application or service operating. A disaster recovery solution requires a remote. and real-time data synchronization enables organizations to quickly recover from a disaster at the primary site. Availability could be calculated as follows: Availability = MTBF / (MTBF + MTTR) or Availability = Uptime / (Uptime + Downtime) Achieving the property of availability requires either building very reliable components (high MTBF) or designing components and systems that can rapidly recover from failure (low MTTR). performance support. Inc. RTO determines how long it takes for a certain application to recover. SLAs record and prescribe the levels of service availability. These objectives also outline the requirements for disaster recovery and business continuity.1 second. SLAs are fundamental to business continuity. The data center infrastructure must deliver the desired Recovery Point Objective (RPO) and Recovery Time Objective (RTO).  Disaster recovery: Disaster recovery is the ability to recover a data center at a different site if a disaster destroys the primary site or otherwise makes the primary site inoperable. Higher levels of guaranteed availability imply higher SLA charges. availability approaches 100 percent. A fault-tolerant server has a fully replicated hardware design that allows uninterrupted service in the event of component failure. The problem with faulttolerant systems is that the system itself is a single point of failure. an enterprise carries significant risk in terms of its ability to deliver on the desired SLAs.  Fault-tolerant: Fault-tolerant systems are systems that have redundant hardware components and can operate in the presence of individual component failure. Clusters can provide uninterrupted service despite node failure. one or more nodes in the cluster take over the applications from the failed server.net  Availability: Availability is the portion of time that an application or service is available for productive work. If these requirements are not met in a deterministic fashion. and information and disk content are preserved. If a node that is running on one or more applications fails. Designing for availability assumes that the system will fail and that the system is configured to mask and recover from component-to-server failures with minimum application outage. which is a compromise between the downtime cost and the cost of the high-availability configuration. serviceability. Disaster. and RPO determines to which point (in backup and data) the application can recover.99. in the case of violation of the SLAs. software. An important decision to consider when building a system is the required availability level. 99. For example.

 Application level: Redundancy needs to be built into the application design. Inc. and stateful module and process failovers. The infrastructure is transparent to the layers above it and aims to provide the continuous connectivity needed by the application and users. Fire or loss of electricity can cause damage at the campus level and can be recovered using redundant components such as power supplies and fans. . Clustered applications work in such a way that the servers are synchronized. Server virtualization clusters have the capability to start computing workloads on a different physical server if one is not available. Designing Cisco Data Center Unified Fabric (DCUFD) v5. resilient. the types of outages are classified based on the scope of the outage impact: 1-16  Outage with an impact at the data center level: An outage of this type is an outage of a system or a component such as hardware or software.  Outage with an impact at the campus level: This type of outage affects a building or an entire campus.  Computing services: Clusters of servers running in redundant operation are deployed to overcome a failure of a physical server. using fast routing and switching reconvergence.certcollecion. and redundant data center components. There are different types of outages that might be expected and might affect the data center functions and operation. if they are available.net An outage in the data center operation can occur at any of the following levels:  Network infrastructure: Data centers are built using sufficient redundancy so that a failure of a link or device does not affect the functioning of the data center. The most important components in this case are IP routing protocols.  IP services: The IP protocol and services provide continuous reachability and autonomous path calculation so that traffic can reach the destination across multiple paths. or by using the secondary data center site or Power over Ethernet (PoE). Typically.0 © 2012 Cisco Systems. These types of outages can be recovered using reliable.

continuous operation. warm standby. Inc. gradual recovery. continuous availability. Cisco Data Center Solutions 1-17 . Such outages can be recovered using geographically dispersed. such as earthquakes. © 2012 Cisco Systems. Data center recovery types: Different types of data center recovery provide different levels of service and data protection. hot standby. widespread power outage. and a back-out plan. immediate recovery. standby data centers that use global site selection and redirection protocols to seamlessly redirect user requests to the secondary site. or tornados.certcollecion. such as cold standby. flooding.net  Outage with an impact at the regional level: This type of outage affects a region.

All these concerns are central to data centers. Inc.net Data Center Challenges This topic describes how to analyze business. they are also being pressured by issues relating to power and cooling. escalating security and provisioning needs. As a result.0 © 2012 Cisco Systems. and ever-stricter regulatory compliance. because more services are concentrated in a single data center. and hurricanes. and moving to new technologies can significantly affect data center power and cooling budgets. 1-18 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Security concerns and business continuance must be considered in any data center solution. resulting in lost time (and revenue). many people could be put out of work. quicker access to applications and information. If an attack occurred in such a condensed environment. fires. The modern enterprise is being changed by shifting business pressures and operational limitations. The importance of security has been rising as well. require more power and generate more heat than older technologies.certcollecion. and business continuance. Modern data center technologies. . A data center should be able to provide services if an outage occurs because of a cyber attack or because of physical conditions such as floods. earthquakes. technical. efficient asset utilization. thorough traffic inspection is required for inbound data center traffic. and environmental challenges. such as multicore CPU servers and blade servers. While enterprises get ready to meet demands for greater collaboration.

© 2012 Cisco Systems.” “We have too many underutilized servers..” Network Department “I need to provide lots of bandwidth and meet SLAs for application uptime and responsiveness. You have the opportunity to talk on all levels because of your strategic position and the fact that you interact with all of the different components in the data center.certcollecion. Depending on which IT team you are speaking with.0—1-16 The data center is viewed from different perspectives. Complexity and coordination Organization DCUFD v5. The organization might be run in silos.. where each silo has its own budget and power base. I want to see more business value out of IT.” © 2012 Cisco and/or its affiliates.” Storage Department “I cannot keep up with the amount of data that needs to be backed up.net Challenges Chief Officer “I need to take a long-term view . All rights reserved. Cisco Data Center Solutions 1-19 .” Applications Department “Our applications are the face of our business. you will find different requirements.” Server Department “As long as my servers are up. I am OK. The data center involves multiple stakeholders.” Security Department “Our information is our business. who all have different agendas and priorities. We need to protect our data everywhere—in transit and at rest. The traditional network contacts might not be the people who make the decisions that ultimately determine how the network evolves. Inc. and have short-term wins. replicated. depending on the position of the person whose view is being expressed. many next-generation solutions involve multiple groups. and archived.” “It is all about keeping the application available. Conversely.

Data centers need infrastructures that can protect and recover applications.certcollecion. The design should include coordinated efforts that cut across several areas of expertise including telecommunications. and comply with environmental requirements. ventilation. Companies must also address regulatory issues. early obsolescence. and air conditioning (HVAC) systems. Architectural and Mechanical Specifications The architectural and mechanical facility specifications are defined as follows: 1-20  How much space is available  How much load that a floor can bear  The power capacity that is available for data center deployment  The cooling capacity that is available  The cabling infrastructure type and management Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. and heating. .net The data center facility has multiple aspects that need to be properly addressed (that is. designed. and that can provide uninterrupted access. and implemented to work together to ensure reliable access while supporting future requirements. and built). power. communications. and information. Each of the components of the data center and its supporting systems must be planned. Inc. There is no substitute for careful planning and following the guidelines for data center physical design. and intolerable levels of availability. Facility capacities are limited and need to be properly designed. the design must be considered early in the building development process. architectural components. Neglecting any aspect of the design can render the data center vulnerable to cost failures. When it comes to building a reliable data center and maximizing an investment. designed. taken into account when the facility is being planned. enable business resilience.

Space issues include how to size the data center. and determines which and how much equipment can be installed in a certain rack. Physical Security Finally. which depend on how the passive infrastructure is deployed A data center that is too small will not adequately meet server.net In addition. and how to construct the data center to effectively protect the valuable equipment inside. Although sometimes neglected. and network requirements and will thus inhibit the productivity and will incur additional costs for upgrades or expansions. That is not the only important parameter. Protection from third parties is important. The floor-loading capability is equally important. where to locate servers within a multipurpose building. not only from the initial construction cost but also from the perspective of ongoing operational expenses. Cabinets and racks are part of the space requirements and other aspects:  Loading. and flexibility. Cisco Data Center Solutions 1-21 . and thus what the rack weight should be. a data center that is too spacious is a waste of money. storage. If properly selected. Fire suppression equipment and alarm systems to protect against fires should be in place. The data center space defines the number of racks that can be used and thus the equipment that can be installed. Otherwise. Alternatively. life span. Determining the proper size of the data center is a challenging and essential task that should be done correctly and must take into account several variables:  The number of people supporting the data center  The number and type of servers and the storage and networking equipment that is used  The sizes of the non-server. storage. Inc. as well as protection of the equipment and data from certain disasters. the facility must meet certain environmental conditions.certcollecion. Properly sized data center facilities also take into account the placement of equipment. Space The space aspect involves the physical footprint of the data center. The types of data center devices define the operating temperatures and humidity levels that must be maintained. physical security is vital because the data center typically houses data that should not be available to third parties. or network areas. costly upgrades or relocations must be performed. how to make it adaptable for future needs and growth. The placement of current and future equipment must be very carefully considered so that the data center physical infrastructure and support is deployed optimally. so access to the premises must be well controlled. the data center facility can grow when needed. which determines what and how many devices can be installed  The weight of the rack and equipment that is installed  Heat that is produced by the equipment that is installed  Power that is consumed by the equipment that is installed © 2012 Cisco Systems. the size of the data center has a great influence on cost.

more power that has to be drawn from the AC supply and more heat output needs to be dissipated. generators. network equipment. Power requirements must also be estimated for all support equipment such as the uninterruptible power supply.certcollecion. If the computing power of the server must work harder. Estimating power needs involves determining the power that is required for all existing devices and for devices that are anticipated in the future. and on-site generators.0 © 2012 Cisco Systems. and cooling devices (which take up most of the energy). For the server environment. Determining power requirements requires careful planning. The power estimation must be made to accommodate required redundancy and future growth. . and so on. lighting. The variability of usage is difficult to predict when determining power requirements for the equipment in the data center. Power requirements are based on the desired reliability and may include two or more power feeds from the utility. utility power failures. conditioning electronics. storage. and some power is “lost” upon conversion. and relative humidity. electrical conduits. the power usage depends on the computing load. multiple circuits to systems and equipment. The power system must physically accommodate electrical infrastructure elements such as power distribution units (PDUs). lighting. wiring.net Power The power in the data center facility is used to power servers and also storage. and other potential electrical problems (thus addressing the redundancy requirements). 1-22 Designing Cisco Data Center Unified Fabric (DCUFD) v5. and by using smoke detectors. data center hotspots. network equipment. an uninterruptible power supply. Cooling The temperature and humidity conditions must be controlled and considered by deploying probes to measure temperature fluctuations. The facility electrical system must not only power data center equipment (servers. and so on) but must also insulate the equipment against surges. Inc. circuit breaker panels. and so on. HVAC system.

use it.net Overheating is an equipment issue with high-density computing:  More heat overall  Hotspots  High heat and humidity. Cisco Data Center Solutions 1-23 .41214 British thermal units (BTUs). In the cold aisle. This fact keeps the hot air from mingling with the cold air. the increasing computing and memory power of a single server results in higher heat production. 3 kilowatt (kW) per chassis is not a problem for one chassis. Maintaining a 40 to 55 percent relative humidity level is recommended The cooling aspect of the facilities must have proper airflow to reduce the amount of heat that is generated by concentrated equipment. thus also taking into account future heat production. The design must also take into consideration the cooling that is required for the current sizing of the data center servers. Inc. Additionally. and BTU measurements in their equipment specifications. this formula can be helpful. In the hot aisle.  Humidity levels that affect static electricity and condensation must be considered. In addition. there are no perforated tiles. which threaten equipment life spans  Computing power and memory requirements. Adequate cooling equipment must be used for more flexible cooling. Many manufacturers publish kilowatt (kW). but five or six chassis per rack = 20 kW. or using cabinets with mesh fronts and backs  Using perforated tiles with larger openings Helpful Conversions One watt is equal to 3. This cold air washes over the equipment and is expelled out of the back into the hot aisle. In the hot aisle. the cabinets and racks should be arranged in an alternating pattern to create “hot” and “cold” aisles.412 does not equal the published wattage. Sometimes. © 2012 Cisco Systems. the blade server deployment results in more heat being produced. which demand more power and generate more heat  Data center demand for space-saving servers: density = heat. kilovolt-ampere (kVA). the equipment racks are arranged back to back. Thus. dividing the BTU value by 3. High-density equipment produces much heat. other considerations for cooling include the following:  Increasing airflow by blocking unnecessary air escapes or by increasing the height of the raised floor  Spreading equipment out over unused portions of the raised floor if space permits  Using open racks instead of cabinets when security ID is not a concern.certcollecion. Perforated tiles in the raised floor of the cold aisles allow cold air to be drawn into the face of the equipment. which requires proper cooling capacity and proper data center design. This value is a generally used value for converting electrical values to BTUs and vice versa. equipment racks are arranged face to face. Where the information is provided by the manufacturer. Where it is not provided. the heat that is produced actually increases because the blade servers are space optimized and allow more servers to be deployed in a single rack. Because not every active piece of equipment exhausts heat out of the back. Increasing Heat Production Although the blade server deployment optimizes the computing-to-heat ratio. The solutions that address the increasing heat requirements must be considered when blade servers are deployed within the data center. but the design must anticipate future growth.

rather than the rule. the result is a shortened equipment life span. A feature of this environment is that change is the exception. 1-24 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc. but the environmental infrastructure support is often relatively inflexible. The cooling solution can resolve the increasing heat in these ways:  Increasing the space between the racks and rows  Increasing the number of HVAC units  Increasing the airflow through the devices  Using new technologies like water-cooled racks Traditionally.  Raised floors were the norm.  Floor loading was predetermined. .net If cooling is not properly addressed and designed. the data center evolved primarily to support mainframe computing.0 © 2012 Cisco Systems. The modern server-based environment is one where change is a constant.  Racks and cabling did not need to be reconfigured.certcollecion.  The floor was raised for bottom-to-top cooling. This situation led to a Layer 0 infrastructure in which the following occurred:  Power and cooling were overprovisioned from the onset.

the uninterruptible power supply (UPS in the figure) and air conditioning are no longer segregated from the computer equipment. Incorporating the physical layer into an on-demand framework achieves significant cost savings.net In the figure. and the modular nature of the framework results in much higher levels of availability versus investment. Cisco Data Center Solutions 1-25 . There are some additional benefits:  Significantly lower TCO  Ability to add power and cooling as needed  Shorter deployment cycles  Ability to provision facilities in the same manner as IT infrastructure  Increased uptime © 2012 Cisco Systems.certcollecion. and the power distribution panels are now within the racks. Inc. the power and data cabling is being run over the tops of the racks. There is no longer a raised floor.

and the data center environment should be kept at a safe. the heat problem that they face is similar. and so on) of the hardware.0 © 2012 Cisco Systems. Designing Cisco Data Center Unified Fabric (DCUFD) v5.  Air-conditioned air is pulled from the cold aisle through the racks and exits the back of the servers. as long as the BTU capacity of the airconditioning units is sufficient to cool all the equipment that is installed. and the technology described here applies to both. For data centers without raised floors. Inc. To help dissipate heat from electrically active IT hardware. power supplies.net A traditional data center thermal control model is based on controlled air flows. The cycle repeats. air that is treated by air conditioning is pumped into the floor space below the rows of server racks. chilling the warm components (processors.certcollecion. . cool temperature. Before trying to solve the heat problem. The heated air then exits out the back of the rack into the hot aisle. Summary of the air flow in a data center facility: 1-26  Cold air is pumped from the air-conditioning units through the raised floor of the data center and into the cold aisles between facing server racks. Heat is exchanged and the air becomes warm.  The heat from the server racks exhausts into the hot aisles. Although construction of data centers differs. the chilled air enters through diffusers above the cold aisle. The built-in fans of the stored hardware pull the cool air through each rack. The heated air is pulled back into the air-conditioning units and chilled. it is important to understand what may be causing additional heat in the data center environment. where it is returned to the airconditioning units to be chilled. The chilled air enters the data center through vented tiles in the floor and fills in the cold aisle front of the rows of racks.

The electrical infrastructure is crucial for keeping server.certcollecion. The physical network. and network devices operating. dictates if and how these devices communicate with one another and the outside world. Cabling usability and simplicity are affected by the following:  Media selection  Number of connections provided  Type of cabling termination organizers These parameters must be addressed during the initial facility design. storage. Fiber optics-based cabling is less susceptible to external interferences and offers greater distances. the cabling needs to be deployed in tight spaces. terminating at various devices. and the server. creating a health and safety hazard. The cabling must be abundant to provide ample connectivity and must employ various media types to accommodate different connectivity requirements.) Typically.net The data center cabling (the passive infrastructure) is equally important for proper data center operation. while copper-based cabling is ubiquitous and less costly. Two options are widely used today: copper-based cabling and fiber optics-based cabling. Inc. The cabling infrastructure must not incur the following:  Improper cooling due to restricted airflow  Difficult-to-implement troubleshooting  Unplanned dependencies that result in more downtime for single component replacement © 2012 Cisco Systems. storage. The cabling infrastructure also governs the physical connector and the media type of the connector. The infrastructure needs to be a well-organized physical hierarchy that aids the data center operation. Cisco Data Center Solutions 1-27 . which is the cabling that runs and terminates between devices. However. and network components and all the technologies to be used must be considered. (No one wants a data center where the cables are on the floor. but it must remain well organized for the passive infrastructure to be simple to manage and easy to maintain.

certcollecion. with underfloor cabling. and networking equipment) make the cabling infrastructure reconfiguration very difficult. airflow is restricted by the power and data cables. Raised flooring is a difficult environment in which to manage cables because cable changes mean lifting floor panels and potentially having to move equipment racks. Cables should be located in the front or rear of the rack for easy access. but a direct-connect design is not optimal for the physical element of a network (for example. storage. Cable management is a major topic in its own right. poorly designed cabling will incur downtime due to reconfiguration or expansion requirements that were not considered by the original cabling infrastructure shortcomings. cabling is located in the front of the rack in service provider environments. Conversely. a switch). Enough redundancy must be provided within the physical network device. . The designer of a data center should work with the facilities team that installs and maintains the data center cabling in order to understand the implications of any new or reconfigured environment in the data center. Smaller networks often use a direct-connect cabling design in which there is one main networking row. in terms of Layer 2 configuration. A direct-connect design scales poorly and is prone to cable overlap. A direct-connect design is excellent for the logical element of a network. Cabling is routed directly from the networking row to server cabinet locations. When data center cabling is deployed. Thus. Typically. 1-28 Designing Cisco Data Center Unified Fabric (DCUFD) v5.net  Downtimes due to accidental disconnects For example. scalable cabling is crucial for proper data center operation and life span. The solution is a cable management system that consists of integrated channels that are located above the rack for connectivity. Inc.0 © 2012 Cisco Systems. the space constraints and presence of operating devices (namely servers.

and makes it possible to avoid cable overlap. Inc. © 2012 Cisco Systems. This design scales well. which makes the physical network easier to manage.net A distributed cabling design uses network substations within each server row. Cisco Data Center Solutions 1-29 . less expensive. This design is superior for the physical element of a network. Cable runs are shorter and better organized. and from there to the main network row. and less restrictive for air flow. Server cabinets are cabled to the local substation.certcollecion.

given EoR challenges  Copper cabling from servers to access switches  Fiber cabling may be used to aggregate top-of-rack (ToR) servers  Addresses aggregation requirements for ToR access environments Note 1-30 The MoR approach is especially suitable for 10 Gigabit Ethernet environments. which are economical.0 © 2012 Cisco Systems. Suitable high-density 10 Gigabit Ethernet switches are the Cisco Nexus 7009 or Cisco Nexus 5596. but extend 32. The middle-of-row (MoR) and EoR design distributions are typically used for modular access. On average.certcollecion. These two types of rack distributions enable lower cabling distances (and lower cost) and allow a dense access layer. these are 6 to 12 multi-RU servers per rack. Designing Cisco Data Center Unified Fabric (DCUFD) v5. .net There are different types of network equipment distribution of the data center racks.8 feet (10 m) in length at most. with servers and network access layer switches connected using copper cables. EoR design characteristics are as follows:  Copper from server to access switches  Poses challenges on highly dense server farms: — Distance from farthest rack to access point — Row length may not lend itself well to switch port density MoR design characteristics are as follows:  Use is starting to increase. requiring 4 to 6 kW per server rack and 10 to 20 kW per network rack. Every switch supports one or many medium or large subnets and VLANs. and cabling is done at data center build-out. You can take advantage of twinax cables. The most typically used distribution type is end-of-row (EoR) distribution. Inc.

From the perspective of the physical topology. Inc. Every access layer switch is configured with one or more small subnets. they are managed from a single device. From the management perspective. they provide ToR connectivity. ToR design characteristics are as follows:  Used with dense access racks (1-RU servers)  Typically one access switch per rack (although some customers are considering two plus a cluster)  Typically approximately 10 to 15 servers per rack (enterprises)  Typically approximately 15 to 30 servers per rack (service providers)  Use of either side of rack is becoming popular  Cabling within rack: Copper for server to access switch  Cabling outside rack (uplink):  — Copper (Gigabit Ethernet): Needs an MoR model for fiber aggregation — Fiber (Gigabit Ethernet or 10 Gigabit Ethernet): More flexible and also requires aggregation model (MoR) Subnets and VLANs: One or many subnets per access switch Note © 2012 Cisco Systems. making it much easier for cabling. A suitable selection for ToR cabling is a fabric extender (FEX). 2224. copper or fiber uplinks are utilized to the aggregation layer MoR switching rack. compared to an EoR design. resembling the EoR design.certcollecion. Outside the server access rack. It is easier because the access switch is located on the top of every server rack and cabling occurs within the rack.rack unit (RU) servers.net The ToR design model is appropriate for many 1. or 2232 FEX. Cisco Data Center Solutions 1-31 . such as the Cisco Nexus 2248.

Inc. the following four design models can be used:     1-32 EoR design (switch-to-switch) characteristics: — Scales well for blade server racks (approximately three blade chassis per rack) — Most current uplinks are copper.net For blade chassis. but the new switches will offer fiber MoR (pass-through) design characteristics: — Scales well for pass-through blade racks — Copper from servers to access switches ToR blade server design characteristics: — Has not been used with blade switches — May be a viable option on pass-through environments if the access port count is right Cisco Unified Computing System: — The Cisco Unified Computing System features the fabric interconnect switches. .certcollecion. which act as access switches to the servers — Connects directly to the aggregation layer Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems.

although being the “promised solution” for server-.0—1-28 Virtualization. and storage facilities are under increased loads. Server virtualization results in multiple VMs being deployed on a single physical server. the configuration of the VM access port does not move with the machine. Inc.  The complexity factor: The leveraging of high-density technologies. and the support burden is heavier. higher-density memory formats. By moving the access layer into the hosts. security aspects. All these aspects require higher integration and collaboration from the personnel of the various service teams.net • Virtual domains are growing fast and becoming larger. network administrators are faced with virtual interface deployments.  The challenges of virtualized infrastructure: These challenges involve management. this increase can result in more © 2012 Cisco Systems. network-. • Virtualization solution infrastructure management: . by enabling mobility for VMs. and so on. the servers. the network administrators have no insight into configuration or troubleshooting of the network access layer. Hypervisor Hypervisor Hypervisor DCUFD v5. brings a few challenges.Higher network (LAN and SAN) attach rates are required: • Multicore processor deployment affects virtualization and networking requirements. and space-related problems. LAN and SAN Implications The important challenge that server virtualization brings to the network is the loss of administrative control on the network access layer. such as multicore processors. All rights reserved. .The network access layer must support consolidation and mobility. by using virtualization. and adaptation of organizational processes and structures. brings increased equipment and network complexity. the information about the VM connection point gets lost. when obtaining access. If the information is lost. However. • Network administrators are involved in virtual infrastructure deployments: . Third. common policies. Though the resource utilization is increased. unified fabrics.Where is the management demarcation? App App App App App OS OS OS OS OS App App App App App OS OS OS OS OS App App App App App OS OS OS OS OS App App App App App OS OS OS OS OS Resource Pool Hypervisor © 2012 Cisco and/or its affiliates. Cisco Data Center Solutions 1-33 . On the other hand.  Support efficiency: Trained personnel are required to support such networks. Second. network.certcollecion. which is desired. new-generation management tools ease these tasks.

0 © 2012 Cisco Systems. and network ports. and therefore VMs. and version management. More management effort is put into proper firmware deployment.net I/O throughput.  Typically. Using multiple interfaces also ensures that the redundancy requirement is properly addressed.certcollecion. When there is more I/O throughput. and with SAN and LAN infrastructures running in parallel. cabling.  Multiple interfaces also cause multiple fault domains and more complex diagnostics. Inc.  There are a higher number of adapters. there are the following implications: 1-34  The network infrastructure costs more and is less efficiently used. which results in higher costs.  Having multiple adapters increases management complexity. to access storage on a disk array. Designing Cisco Data Center Unified Fabric (DCUFD) v5. driver patching. more bandwidth is required per physical server. . To solve this challenge.  Multiple Fibre Channel interfaces provide SAN connectivity for storage traffic to allow servers. multiple interfaces are used to provide server connectivity. Virtualization thus results in a higher interface count per physical server.  Multiple Gigabit Ethernet interfaces provide LAN connectivity for data traffic to flow to and from the clients or to other servers. a dedicated management interface is also provided to allow server management.

Cisco Data Center Solutions 1-35 . or between clouds (private or public) . clouds). This issue imposes another set of challenges:  Data security and integrity when moving application load from a controlled IT infrastructure to an outsourced infrastructure (that is. the following items limit the solution architecture:  Scalability: MAC address tables and the VLAN address space present a challenge when VMs need to move outside of their own environments. Layer 2 connectivity is commonly required between the segments where the VM moves.MAC table size . Moving outside the environment means moving the VM from primary to secondary data center. application mobility means that users can access applications from any device.Bandwidth App App OS Hybrid Cloud OS App App OS App App Management Management Bridge Primary Data Center Secondary Data Center © 2012 Cisco and/or its affiliates. VMware vSphere VMware vSphere Private Clouds Public Clouds DCUFD v5. or even from a private IT infrastructure to a public one. Everyone demands and requires it. All rights reserved.Number of VLANs • Data security and integrity • Layer 2 connectivity requirements: • Global address availability • Compatibility . VM mobility requires that data center architectures are properly conceived. unknown unicast. a public cloud) © 2012 Cisco Systems. IT needs to cut costs of the infrastructure. and so on) that consumes precious bandwidth — Extending IP subnets and split-brain problems upon data center interconnect failure Today.net Virtual machine mobility: • Scalability boundaries: Application mobility: • Demands to move applications to. From the VM perspective. IT infrastructure users and businesses demand to be able to access their applications and data from anywhere. The virtualization through addressing and solving many of the challenges of “classic” solutions brings its own piece to the mobility puzzle. This introduces all sorts of challenges: — Distance limitation — Selection of an overlay technology that enables seamless Layer 2 connectivity — Unwanted traffic carried between sites (broadcast. At the same time.  To ensure proper VM operation.certcollecion. which imposes new challenges.0—1-29 These days mobility is of utmost importance.Distance . application mobility means the ability to move application load between IT infrastructures (that is. and thus more are considering the cloud. and thus operation of the application hosted by the VM. from. and failure to do so may prevent proper VM mobility. Inc. From the IT perspective.

critical applications do not allow downtime for the conversion process).0 © 2012 Cisco Systems. This limitation diminishes seamless application mobility and limits the types of applications that can be moved (for example. enabling application access by the same name regardless of its location  Because the IT infrastructures typically do not use single standardized underlying architecture. and so on. . incompatibility within the infrastructure arises. equipment.certcollecion. which can limit or affect application mobility by requiring the use of conversion tools. access to applications. that is. Designing Cisco Data Center Unified Fabric (DCUFD) v5.net 1-36  As with VM mobility. Inc.

One of those details is on-demand billing. associations of resources to an application are more dynamic and ad hoc than in a virtual data center. However. Cloud computing translates into IT resources and services that are abstracted from the underlying infrastructure and provided on demand and “at scale” in a multitenant environment. © 2012 Cisco Systems. However. but not at the scale and ubiquity that is anticipated in the future. Another difference is that. Inc. hosted model. and concepts. Cloud computing is still new. it is more useful to focus on the value than on the concepts. This description is usually true. Today.certcollecion. Cloud resource usage is typically tracked at a granular level and billed to the customer on a short interval. One could argue that a component cloud is very much like a virtualized data center. Cisco Data Center Solutions 1-37 . in a cloud. There are some cloud solutions that are available and offered. there are some details that differentiate a component cloud from a virtualized data center. There are no definite and complete standards yet because the standards are in the process of being defined. these differences and definitions are very fragile. clouds are associated with an off-premises.net Introduction to Cloud Computing This topic describes how to recognize the cloud computing paradigm. terms. Rather than trying to define these differences.

consolidation) that aggregates the computing.certcollecion.net Cloud computing is feasible due to the fundamental principles that are used and applied in modern IT infrastructures and data centers: 1-38  Centralization (that is. and application resources in central locations—data centers  Virtualization by which seamless scalability and quick provisioning can be achieved  Standardization. .0 © 2012 Cisco Systems. which makes integration of components from multiple vendors possible  Automation. storage. network. which creates time savings and enables user-based self-provisioning of IT services Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc.

which is embodied in its components. packaging. The concept of cloud services will evolve toward something that ideally hides complexity and allows control of resources. Cloud computing architecture defines the fundamental organization of a system. and technologies that are used to secure the data and applications of companies that use cloud services. a public cloud or open cloud) can easily move their applications and services between the cloud providers. and the principles governing the design and evolution of the system. storage. All rights reserved. It defines the security policy.  Standards: There should be standard interfaces for protocols.Hybrid cloud Infrastructure Costs Traditional Hardware Model • Cloud types: Customer Dissatisfaction (Insufficient Hardware) Large Capital Expenditure . their relationship to each other. These terms are defined in more detail in the following sections. and so on) and how the applications and services that are running in the cloud are isolated from each other.net . © 2012 Cisco Systems.  Security: This ability is implicitly included in the previous characteristic. storage.0—1-33 The terms in the figure are used when talking about cloud computing to define types of cloud solutions. load balancing. DCUFD v5.Private cloud .  Automation: This feature is an important characteristic that defines how a company that would like to use a cloud-based solution can get the resources and set up its applications and services in the cloud without too much intervention from the cloud service support staff. Cisco Data Center Solutions 1-39 . Inc. It also defines anything that secures the infrastructure of the cloud itself.Multitenancy and isolation . server capacity. computing.Security .Automation .Virtual private cloud .Elasticity Time • The solution depends on the vendor • Common standards are being defined Predicted Demand Traditional Hardware Actual Demand Scalable Cloud Hardware Opportunity Cost Automated Trigger Actions © 2012 Cisco and/or its affiliates. and databases).  Elasticity: Flexibility and elasticity allow users to scale up and down at will—utilizing resources of all kinds (CPU.Public cloud .certcollecion.Community cloud Time • Architecture Scalable Cloud Model . This list describes the fundamental characteristics of a cloud solution:  Multitenancy and isolation: This characteristic defines how multiple organizations use and share a common pool of resources (network.Standards Infrastructure Costs • Characteristics: Scalable setup helps you stay ahead of the curve . and access to cloud resources so that the companies that are using an external cloud solution (that is. while providing the automation that removes the complexity. mechanisms.

but there are trends and activities toward defining common cloud computing solutions.0 © 2012 Cisco Systems. . a government agency that is part of the US Department of Commerce.certcollecion. cloud computing is not very well-defined in terms of standards. is responsible for establishing standards of all types as needed by industry or government programs. 1-40 Designing Cisco Data Center Unified Fabric (DCUFD) v5. The NIST cloud definition also defines cloud characteristics that are both essential and common. Inc. The National Institute of Standards and Technology (NIST).net As mentioned.

pay-as-you-go billing. and the appearance of infinite scalability.com cloud-based customer relationship management (CRM)  Skype voice services © 2012 Cisco Systems.net A public cloud can offer IT resources and services that are sold with cloud computing qualities.certcollecion. on-demand provisioning. Inc. such as self-service. Cisco Data Center Solutions 1-41 . Here are some examples of cloud-based offerings:  Google services and applications like Google Docs and Gmail  Amazon web services and the Amazon Elastic Compute Cloud (EC2)  Salesforce.

Private clouds will have these characteristics:  Consolidated. on-demand provisioning. .0 © 2012 Cisco Systems. This arrangement will enable new architectures to target very large-scale activities.net A private cloud is an enterprise IT infrastructure that is managed with cloud computing qualities.certcollecion. 1-42 Designing Cisco Data Center Unified Fabric (DCUFD) v5. virtualized. and the appearance of infinite scalability. pay-as-you-go charge-back. It can support from tens to thousands of applications and services. the private cloud will scale by pooling IT resources under a single cloud operating system or management platform. and automated existing data center resources  Provisioning and cost-metering interfaces to enable self-service IT consumption  Targeted at one or two noncritical application systems Once a company has decided that a private cloud service is appropriate. Inc. such as self-service.

an enterprise private cloud with a service provider public cloud) with one another by connecting their individual management infrastructures and allowing the exchange of resources. Inc.  Federation can occur across data center and organization boundaries with cloud internetworking. Cisco Data Center Solutions 1-43 . the tendency to connect them forms the hybrid cloud. A hybrid cloud links disparate cloud computing infrastructures (that is.certcollecion.net Once private and public clouds are well accepted. © 2012 Cisco Systems. The hybrid cloud can enable these options:  Disparate cloud environments can leverage other cloud-system resources.

net A virtual private cloud is a service offering that allows enterprises to create their private clouds on infrastructure services.0 © 2012 Cisco Systems. The closed-cloud service provider enables an enterprise to accomplish these activities: 1-44  Leverage services that are offered by third-party Infrastructure as a Service (IaaS) providers  Virtualize trust boundaries through cloud internetworking standards and services  Access vendor billing and management tools through a private cloud management system Designing Cisco Data Center Unified Fabric (DCUFD) v5. . Inc. that are provided by a service provider.certcollecion. such as a public cloud.

The federation can enable the following options:  Disparate cloud environments can leverage other cloud-system resources.net The largest and most scalable cloud computing system—the open cloud—is a service provider infrastructure that allows a federation with similar infrastructures offered by other providers. and service providers can leverage other provider infrastructures to manage exceptional loads on their own offerings. A federation will link disparate cloud computing infrastructures with one another by connecting their individual management infrastructures and allowing the exchange of resources and the aggregation of management and billing streams.  The federation can occur across data center and organization boundaries with cloud internetworking. Cisco Data Center Solutions 1-45 . © 2012 Cisco Systems.  The federation can provide unified metering and billing and “one-stop” self-service provisioning. Enterprises can choose freely among participants. Inc.certcollecion.

SaaS is also called “software on demand. persistence. . testing. storage. and hosting.0 © 2012 Cisco Systems. The software can be licensed for a single user or for a group of users. 1-46 Designing Cisco Data Center Unified Fabric (DCUFD) v5. customers can access the application over the Internet. Platform as a Service PaaS is the delivery of a computing platform and solution stack as a service. Offerings may also include application services such as team collaboration. These services may be provisioned as an integrated solution online.net There are three different cloud computing service models:  Software as a Service (SaaS)  Platform as a Service (PaaS)  Infrastructure as a Service (IaaS) Software as a Service SaaS is software that is deployed over the Internet or is deployed to run behind a firewall in your LAN or PC. through a subscription or a pay-as-you-go model.” SaaS vendors develop. It provides all of the facilities that are required to support the complete life cycle of building and delivering web applications and services entirely from the Internet. database integration. It facilitates the deployment of applications without the cost and complexity of buying and managing the underlying hardware and software and provisioning hosting capabilities. The SaaS vendor may run all or part of the application on its hardware or may download executable code to client machines as needed—disabling the code when the customer contract expires.certcollecion. security. scalability. deployment. The offerings may include facilities for application design. application development. and operate software for customer use. A provider licenses an application to customers as a service on demand. Rather than installing software onsite. host. state management. and developer community facilitation. application instrumentation. application versioning. web service integration and marshaling. Inc.

Servers Managed by You SaaS Operating System Virtualization Managed by Others Middleware Managed by Others Managed by You Runtime PaaS Managed by Others Data Managed by You On-Premises Traditional IT Middleware Operating System Virtualization Servers Servers Storage Storage Storage Networking Networking Networking DCUFD v5. All rights reserved. clients can buy those resources as a fully outsourced service. can deliver computer infrastructure. or network equipment. or cloud infrastructure services. typically a platform virtualization environment. It is an evolution of virtual private server offerings. and are highest in the SaaS model. and ranges from the IaaS model. as a service. to the SaaS model. The service is typically billed on a utility computing basis and the amount of resources consumed (and therefore the cost) will typically reflect the level of activity. data center space. © 2012 Cisco Systems. software. IaaS Applications Applications Applications Applications Data Data Runtime Runtime Runtime Middleware Middleware Data Operating System Operating System Virtualization Virtualization Servers Storage Networking © 2012 Cisco and/or its affiliates.certcollecion.net Infrastructure as a Service IaaS. The dependability requirements of the service provider are also higher depending on the model.0—1-41 The type of service category also defines the demarcation point for management responsibilities. Inc. These demarcation points also mean that the service provider is invested with more trust depending on the model. with shared responsibility between customer and service provider. Cisco Data Center Solutions 1-47 . and must have a better understanding in cases of higher management control. Rather than purchasing servers. where almost all management responsibilities belong to the service provider.

• Compare between models with different reporting options. disk and CPU usage.certcollecion. 1-48 Designing Cisco Data Center Unified Fabric (DCUFD) v5. such as the amount of memory. Multiple cost models provide flexibility in measuring costs. . Network Address Translation (NAT). Inc. They need to meter the resources that are offered and used. such as floor space. and other services such as DHCP. power. including average memory.0 © 2012 Cisco Systems. The table describes three basic cost models that are typically used.net • Start simple and move to an advanced model over time. CPU. public IP addresses. DCUFD v5. network I/O. They need to create a charge-back hierarchy that provides a basis for determining cost structures and delivery of reports. • Ensure that model aligns with organizational requirements. making it easy to start with a simple chargeback model that is aligned with organizational requirements.0—1-42 Service providers have an important building block for delivering IT as a service. • Flexible costing options mix and match between models. or storage allocated or reserved for the virtual machine Utilization-based costing Variable costs per virtual machine based on actual resources used. and disk I/O Cost models can be combined in a cost template. and firewalling. Complexity Costing Model Description Utilization-based costing Variable costs based on actual resources utilized Allocation-based costing Variable costs based on time of allocating resources Fixed costing Fixed cost for an item © 2012 Cisco and/or its affiliates. Basic Cost Models Cost Model Description Fixed cost Specific per-virtual machine instance costs. cooling. software. All rights reserved. or administrative overhead Allocation-based costing Variable costs per virtual machine based on allocated resources. including broadband network traffic.

net The transition to cloud-based computing is an evolution that is characterized by different architectures and steps. the use of external cloud services will become more popular.certcollecion. per department. The focus should be on helping customers to move to the right side of the figure and build their own dynamic data centers. and so on. Customers will have more than one architecture (for example. Inc. Cisco Data Center Solutions 1-49 . Most companies currently operate in application silos or they are already using virtualization that is deployed in zones like infrastructure applications. a shared infrastructure that forms an internal cloud will emerge from the virtualization zones. The evolution from the current infrastructure to cloud-based computing will proceed in phases. Next. Finally. Service providers in the cloud market are already on the far right of the scheme. four) and will likely be moving to a more cloud-oriented IT structure. © 2012 Cisco Systems.

Inc.  Reduced risk: It enables enterprises and service providers to deploy new architectures and technologies with confidence.certcollecion.net The system architecture of a cloud is like the architecture of a data center. . network.  Increased flexibility: It provides rapid. It provides these benefits: 1-50  Reduced time to deployment: It provides a fully tested and validated architecture that accelerates technology adoption and rapid deployment.  Improved operational efficiency: It integrates automation with a multitenant resource pool (computing. and mitigates operational configuration errors. on-demand. secure. improves asset use.0 © 2012 Cisco Systems. reduces operational overhead. flexible. Cisco VMDC is a validated architecture that delivers a highly available. and storage). Cisco Virtualized Multi-Tenant Data Center The Cisco Virtualized Multi-Tenant Data Center (VMDC) provides design and implementation guidance for enterprises that are planning to deploy private cloud services and for service providers that are building virtual private and public cloud services. Designing Cisco Data Center Unified Fabric (DCUFD) v5. workload deployments in a multitenant environment due to a comprehensive automation framework with portal-based resource provisioning and management capabilities. The cloud has all the components and aspects that a typical data center has. and efficient data center infrastructure.

consolidation of multiple networks.certcollecion. better security. VMware and Microsoft are examples of companies that support server virtualization technologies. reduces the cost of management. Deployment of another virtual server is easy because there is no need to buy a new adapter and a new server. When separation occurs. Network Virtualization Network virtualization can address the problem of separation. Inc. software just needs to be activated and configured properly. and increased network availability. Therefore. For a virtual server to be enabled. segmentation of networks. server virtualization simplifies server deployment. making them independent of the physical topology.net Data Center Virtualization This topic describes how to recognize the importance of virtualization technologies and solutions for data center evolution. Cisco Data Center Solutions 1-51 . A VLAN virtualizes Layer 2 segments. Considering the diverse networking needs of different enterprises might require separation of a single user group or a separation of data center resources from the rest of the network for certain reasons. Server Virtualization Server virtualization enables physical consolidation of servers on the common physical infrastructure. Network virtualization also provides other types of benefits such as increasing network availability. Examples of network virtualization are VLANs and virtual SANs (VSANs) in Fibre Channel SANs. Virtualization delivers tremendous flexibility to build and design data center solutions. A similar concept represents a VSAN in Fibre Channel SANs. the physical positioning will no longer address the problem. © 2012 Cisco Systems. Separation tasks become complex when it is not possible to confine specific users or resources to specific areas in the network. though they participate in different logical broadcast domains (VLANs). This virtualization presents the ability to connect two servers to the same physical switch. and increases server utilization.

certcollecion. separation of logical networking infrastructure that is based on traffic service types. By applying the service profile to another stateless hardware instance. by reducing the amount of equipment and directing the utilization figures higher. such as higher service availability. Inc. It should be able to use idle remote server CPU resources. Common Goals There are some common goals across virtualization techniques: 1-52  Affecting utilization and reducing overprovisioning: The main goal is to reduce operating costs for maintaining equipment that is not really needed. and multiprotocol and multivendor integration can benefit from storage virtualization. . which implies an extended Layer 2 domain. but with virtualization.0 © 2012 Cisco Systems. Individual administration for each virtual entity can be deployed using role-based access control (RBAC). world wide names (WWNs). The servers have become stateless. and so on. Faults must be contained. A virtual device context (VDC) represents the ability of the switch to enable multiple virtual and independent switches on the common physical switch to participate in data center networks.  Management: Flexibly managing a virtual resource requires no hardware change in many cases. and flexible and scalable data center design. Overprovisioning has been used to provide a safety margin. Computing Virtualization The computing virtualization is a paradigm used in server deployments. such as: MAC addresses. Storage Virtualization Storage virtualization is the ability to pool storage on diverse and independent devices into a single view. Application virtualization enables VMware VMotion and efficient resource utilization. Designing Cisco Data Center Unified Fabric (DCUFD) v5. universally unique identifiers (UUIDs). with service profiles defining operational properties of the servers. Performance (quality of service [QoS] and SLA) must be provided at the desired level independently for each virtual entity. Features such as copy services.net Device Virtualization Cisco Nexus 7000 and Catalyst 6500 switches support device virtualization or Cisco Nexus Operating System (Cisco NX-OS) virtualization. a lower overprovisioning percentage can be used because systems are more flexible. fault isolation. This feature provides various benefits to the application services. Application Virtualization The web-based application must be available anytime and anywhere. or is not fully utilized. data migration.  Isolation: Security must be effective enough to prevent any undesired access across the virtual entities that share a common physical infrastructure. you can move the workload to another server for added capacity or recovery.

and VSANs. There are a number of benefits of higher service density:  Less power consumption  Less rack space  Reduced ports and cabling  Simplified operational management  Lower maintenance costs The figure shows how virtual services can be created from the physical infrastructure. VLANs.0—1-47 Virtualizing data center network services has changed the logical and physical data center network topology view. and virtual server load-balancing contexts with the Cisco Application Control Engine (ACE) and Cisco Intrusion Detection System (IDS). using features such as VDC. Inc. Cisco Data Center Solutions 1-53 .net Logical and Physical Data Center View Virtualized data center POD: • Logical instantiation of entire data center network infrastructure using VDC. VDC VMs VLANs Storage Pool Physical Points of Delivery (PODs) Virtual LUNs DCUFD v5. high reliability • Efficient resource pool utilization Network Pool Server Pool • Centralized management • Scalability VDC Virtual Network Services © 2012 Cisco and/or its affiliates. VLANs. Service virtualization enables higher service density by eliminating the need to deploy separate appliances for each application.certcollecion. and virtual services • Fault isolation. Virtual network services include virtual firewalls with Cisco adaptive security appliances and service modules or Firewall Services Module (FWSM). VSAN. © 2012 Cisco Systems. All rights reserved.

Taking a similar approach to the integration of network services directly into data center switching platforms.0 © 2012 Cisco Systems. SAN islands have been used within the data center to separate traffic on different physical infrastructures. Traditionally. Storage virtualization also involves virtualizing the storage devices themselves.net Data center storage virtualization starts with Cisco VSAN technology. providing security and separation from both a management and traffic perspective. Coupled with VSANs. the Cisco MDS 9000 NX-OS platform supports third-party storage virtualization applications on an MDS 9000 services module. . 1-54 Designing Cisco Data Center Unified Fabric (DCUFD) v5. VSANs are used within the data center SAN environment to consolidate SAN islands onto one physical infrastructure while maintaining the separation from management and traffic perspectives.certcollecion. Inc. storage device virtualization enables dynamic allocation of storage. To provide virtualization facilities. reducing operational costs by consolidating management processes.

• Consolidation and virtualization technologies are essential to modern data centers and provide more energy-efficient operation and simplified management. © 2012 Cisco and/or its affiliates. and computing components.0—1-49 Cisco Data Center Solutions 1-55 . All rights reserved. and cooling. • Data centers have various challenges. Inc. High availability. including business-oriented. storage. organizational. serviceability.net Summary This topic summarizes the primary points that were discussed in this lesson. • The cloud computing paradigm is an approach where IT resources and services are abstracted from the underlying infrastructure. DCUFD v5. © 2012 Cisco Systems. • Various service availability metrics are used in data center environments. These allow several services to run on top of the physical infrastructure. facility-related. operational. • The main data center solution components are the network. Clouds are particularly interesting because of various available pay-per-use models. power. and fault tolerance are the most important metrics.certcollecion.

certcollecion. . Inc.0 © 2012 Cisco Systems.net 1-56 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

Other components are storage. Among the most innovative components is the Cisco Unified Computing System (UCS). you can create a flexible. you will be able to provide a high-level overview of the Cisco Data Center solution architectural framework and components within the solution.net Lesson 2 Identifying the Cisco Data Center Solution Overview In this lesson. This ability includes being able to meet these objectives:  Evaluate the Cisco Data Center architectural framework  Evaluate the Cisco Data Center architectural framework network component  Evaluate the Cisco Data Center architectural framework storage component . and highly available data center solution that can fulfill the needs of any data center. reliable.certcollecion. security. you will identify Cisco products that are part of the Cisco Data Center solution. Using all of these components. and comprises Cisco Nexus switches and selected Cisco Catalyst switches. The networking equipment is one of the most important components. and application delivery products. Objectives Upon completing this lesson.

. to help enable new and innovative business models. Inc.0 © 2012 Cisco Systems. and take advantage of new revenue streams and business opportunities. Businesses can create services faster. Cisco Data Center represents a fundamental shift in the role of IT into becoming a driver of business innovation. and complexity. Cisco Data Center increases efficiency and profitability by reducing capital expenditures. operating expenses. become more agile.certcollecion. 1-58 Designing Cisco Data Center Unified Fabric (DCUFD) v5.net Cisco Data Center Architecture Overview This topic describes the Cisco Data Center architectural framework. It also transforms how a business approaches its market and how IT supports and aligns with the business.

For growth. the data center enhances application experience and increases productivity through a scalable platform for collaboration tools. Furthermore. the business needs to be able to respond to the market quickly and lead market reach into new geographies and branch openings. and risk. the business can enable service creation without spending on infrastructure. all with limited impact. while being sure they can retract them quickly if they prove unsuccessful. margin. increasing margins through customer retention and satisfaction. When a business maintains focus on cutting costs. The data center capabilities can help influence growth and business. converged architecture to reduce costs. can have a major impact on business. and product brand awareness and loyalty. Cost cutting.net Businesses today are under three pressure points. it is also highly concerned with security of data. policy management. Inc. Businesses also need to gain better insight with market responsiveness (new services). These areas show how the IT environment. The data center must ensure a consistent policy across services so there is no compromise on quality of service versus quantity of service. Cisco Data Center Solutions 1-59 . By enabling the ability to affect new service creation and faster application deployment through service profiling and rapid provision of resource pools. The element of risk in a business must be minimized. and encourage customer expansion. The data center works toward a servicerobust. and the data center in particular. and efficiencies are all critical elements for businesses today in the current economic climate. and access.certcollecion. margin. While the business focuses on governing and monitoring changing compliance rules and a regulatory environment. and provide increased service level agreements. At the same time. These pressure points are business growth. the result is a higher return on investment (ROI). the business needs the flexibility to implement and try new services quickly. © 2012 Cisco Systems. maintain customer relationship management (CRM) for customer satisfaction and retention.

are addressed. It is the foundation for a model of the dynamic networked organization and can enable the following important aspects:  Quick and efficient innovation  Control of data center architecture  Freedom to choose technologies The Cisco Data Center architectural framework is delivered as a portfolio of technologies and systems that can be adapted to meet organizational needs. Governance. System-level benefits. preintegrated. Cisco offers tested. and Risk BUSINESS VALUE Transformative Agile Cisco Lifecycle Services Policy Efficient Partner Ecosystem Consolidation Open Standards Automation Application Energy Security Performance Efficiency Unified Fabric Switching Virtualization Application Networking Unified Management Security Storage © 2012 Cisco and/or its affiliates. systems excellence. such as performance. solution differentiation. 1-60 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0—1-6 Cisco Data Center is an architectural framework that connects technology innovation with business innovation. All rights reserved. and business advantages. The Cisco approach to the data center is to provide an open and standards-based architecture. and resiliency.0 © 2012 Cisco Systems. Inc. You can adopt the framework in an incremental and granular fashion to control when and how you implement data center innovations. The components of the Cisco Data Center architecture are categorized into four areas: technology innovations. .certcollecion.net Leading Profitability New Service Creation and Revenue Generation New Business Models. This framework allows you to easily evolve and adapt the data center to keep pace with changing organizational needs. energy efficiency. OS Cloud Continuity Workload Mobility SYSTEMS EXCELLENCE Unified Computing Management SOLUTION DIFFERENTIATION Compute TECHNOLOGY INNOVATION DCUFD v5. and validated designs that provide businesses with a faster deployment model and a quicker time to market. along with workload mobility and security.

The framework allows organizations to create services faster.certcollecion.net The Cisco Data Center framework is an architecture for dynamic networked organizations. agile. It can provide the following benefits:  Business value  Flexibility and choice with an open ecosystem  Innovative data center services Cisco Data Center is a portfolio of practical solutions that are designed to meet IT and business needs and can help accomplish these goals:  Reduce total cost of ownership (TCO)  Accelerate business growth  Extend the life of the current infrastructure by making your data center more efficient. improve profitability. Cisco Data Center Solutions 1-61 . Inc. and resilient © 2012 Cisco Systems. and reduce the risk of implementing new business models.

They provide a networking platform that helps IT departments to achieve a lower TCO. Cisco Application Networking Services (ANS) solutions can help you lower your TCO and improve IT flexibility. availability. reliably.  Cisco UCS: You can improve IT responsiveness to rapidly changing business demands with the Cisco UCS.net NX-OS Nexus 5000 UCS B-Series Nexus 1000V Cisco MDS Cisco WAAS UCS C-Series Virtual Appliances: .vNAM . Designing Cisco Data Center Unified Fabric (DCUFD) v5. and validated desktop virtualization solution. . Compute DCUFD v5. enhanced resiliency.VSG Nexus 2200 Nexus B22 Nexus 4000 Nexus 7000 Cisco ACE Cisco Catalyst Cisco OTV FabricPath Unified Fabric Fibre Channel over Ethernet Unified Computing Extended Memory VN-Link Virtual Machine Aware Networking Fabric Extender Simplified Networking DC-class Switching Switching Application Networking Unified Fabric for Blades Security Storage OS Management © 2012 Cisco and/or its affiliates. while simplifying your data center and branch infrastructure. All rights reserved.  Cisco Virtualization Experience Infrastructure (VXI): Cisco VXI can deliver a superior collaboration and rich-media user experience with a best-in-class ROI in a fully integrated.ASA 1000V . and security. Inc.0 © 2012 Cisco Systems.  Cisco Application Networking Solutions: You can improve application performance.vWAAS . and securely through end-to-end provisioning and migration support. and greater agility through Cisco Data Center storage solutions.certcollecion.0—1-8 The framework brings together multiple technologies: 1-62  Data center switching: These next-generation virtualized data centers need a network infrastructure that delivers the complete potential of technologies such as server virtualization and unified fabric. This next-generation data center platform accelerates the delivery of new services simply. open.  Data center security: Cisco Data Center security solutions enable you to create a trusted data center infrastructure that is based on a systems approach and uses industry-leading security solutions.  Storage networking solutions: SANs are central to the Cisco Data Center architecture.

Storage devices can be virtualized into storage pools. such as VMware vSphere. © 2012 Cisco Systems. and scalable data centers. storage. Inc. Citrix XenServer.  Operational consistency: A consistent interface and consistent tools simplify management.certcollecion. boot image. which is further virtualized and running on common hardware. VMs are entities that run an application within the client operating system. All rights reserved. SMT DCUFD v5.0—1-9 The architectural framework encompasses the network. security. VMware View. The logical server personality is defined by using management software.net Driving Profitability New Service Creation and Revenue Generation New Business Models. The Cisco Data Center switching portfolio is built on the following common principles:  Design flexibility: Modular. and integrated blade switches are optimized for both Gigabit Ethernet and 10 Gigabit Ethernet environments. and it defines the properties of the server: the amount of memory. Cisco Data Center Solutions 1-63 . The network hardware for consolidated connectivity serves as one of the most important technologies for fabric unification. and Citrix XenDesktop. number of network interface cards. The architectural framework is open and can integrate with other vendor solutions and products. On the lowest layer of the framework is the virtualized hardware. and network devices are virtualized by using device contexts. rack.  Industry-leading switching capabilities: Layer 2 and Layer 3 functions can build stable. secure. application services. separating physical networks and equipment into virtual entities. Microsoft Hyper-V. virtual machines (VMs) are one of the most important components of the framework.  Investment protection: The adaptability of the Cisco Nexus and Catalyst families simplifies capacity and capability upgrades. Starting from the top down. and problem resolution. VLANs and virtual storage area networks (VSANs) provide for virtualized LAN and SAN connectivity. Governance and Risk BUSINESS VALUE Transformative Agile Cisco Lifecycle Services Efficient Partner Ecosystem Consolidation Virtualization Automation SOLUTION DIFFERENTIATION Cloud Policy Vblock (UC on UCS) © 2012 Cisco and/or its affiliates. percentage of total computing power. and so on. and computing equipment. operations.

and data. providing availability.  Services-oriented SANs: The “any network service to any device” model can be extended regardless of protocol. and feature consistency can reduce operating expenses. . any business risk is reduced.  Nonstop communications: Users can stay connected with a resilient infrastructure that enables business continuity. Inc. The Cisco Data Center security solutions enable businesses to create a trusted data center infrastructure that is based on a systems approach and industry-leading security solutions. management. vendor. identity. Cisco ANS provides the following attributes:  Application intelligence: You can take control of applications and the user experience. In addition.0 © 2012 Cisco Systems. fiber connectivity (FICON). and performance. and enforce business policies. Internet Small Computer Systems Interface (iSCSI).certcollecion. and operations tools for a virtualized environment.  Cisco Unified Network Services: You can connect any person to any resource with any device. Fibre Channel over Ethernet (FCoE). speed. management. the Cisco Catalyst 6500 Switch offers Virtual Switching System (VSS) capabilities. simple interoperability. and Fibre Channel over IP (FCIP). and the Cisco Nexus 7000 Switch offers hypervisor-like virtual switch capabilities in the form of virtual device contexts (VDCs).  Integrated security: There is built-in protection for access.  Virtualization: This feature allows simplification of the network and the ability to maximize resource utilization.  Enterprise-class storage connectivity: Significantly larger virtualized workloads can be supported. The Cisco storage solution provides the following:  Multiprotocol storage networking: By providing flexible options for Fibre Channel. and cabling Designing Cisco Data Center Unified Fabric (DCUFD) v5. cooling. protect critical assets.  Operational manageability: You can deploy services faster and automate routine tasks. These solutions enable the rapid deployment of data center technologies without compromising the ability to identify and respond to evolving threats.  A unified operating system and management tools: Operational simplicity. power. The Cisco UCS provides the following benefits: 1-64  Streamlining of data center resources to reduce TCO  Scaling of service delivery to increase business agility  Reducing the number of devices that require setup.net  Virtualization: Cisco Data Center switches provide VM mobility support. or location. scalability.

The access layer aggregates end users and provides uplinks to the aggregation layer. The principal advantages of this model are its hierarchical structure and its modularity. and the core layer. This support allows customers to converge voice onto their data networks and provides roaming wireless LAN (WLAN) access for users. This security is provided by using tools such as IEEE 802. DCUFD v5. Modules in a layer can be put into service and taken out of service without affecting the rest of the network. Cisco Data Center Solutions 1-65 .certcollecion. © 2012 Cisco Systems.  Convergence: The access layer supports inline Power over Ethernet (PoE) for IP telephony and wireless access points. the aggregation layer. such as the Hot Standby Router Protocol (HSRP). A hierarchical design avoids the need for a fully meshed network in which all network nodes are interconnected.net Cisco Data Center Architecture Network This topic describes the Cisco Data Center architectural framework network component. problem isolation.0—1-11 The architectural components of the infrastructure are the access layer. The layer supports the QoS trust boundary. The access layer can be an environment with many features:  High availability: The access layer is supported by many hardware and software attributes. Inc. and IP Source Guard. • Provides access and aggregation for applications in a feature-rich environment • Provides high availability through software attributes and redundancy • Supports convergence for voice. The layer also offers default gateway redundancy by using dual connections from access switches to redundant aggregation layer switches that use a First Hop Redundancy Protocol (FHRP). wireless. This layer offers system-level redundancy by using redundant supervisor engines and redundant power supplies for crucial application groups. DHCP snooping.  Security: The access layer provides services for additional security against unauthorized access to the network. Dynamic Address Resolution Protocol (ARP) Inspection (DAI). and network management.  Quality of service (QoS): The access layer allows prioritization of mission-critical network traffic by using traffic classification and queuing as close to the ingress of the network as possible. port security. and data • Provides security services to help control network access • Offers QoS services including traffic classification and queuing • Supports IP multicast traffic for efficient network use To Core Aggregation Access © 2012 Cisco and/or its affiliates. This ability facilitates troubleshooting.1X. All rights reserved.

The aggregation layer is the layer in which routing and packet manipulation is performed and can be a routing boundary between the access and core layers. the aggregation layer offers a default route to access layer routers and runs dynamic routing protocols when communicating with core routers. Gateway Load Balancing Protocol (GLBP). load balancing.certcollecion. All rights reserved. For some networks. and routing. security. . In addition. Default gateway redundancy allows for the failure or removal of one of the aggregation nodes without affecting endpoint connectivity to the default gateway. The aggregation layer uses a combination of Layer 2 and multilayer switching to segment workgroups and to isolate network problems. To further improve routing protocol performance. traffic loading. security. 1-66 Designing Cisco Data Center Unified Fabric (DCUFD) v5. The aggregation layer represents a redistribution point between routing domains or the demarcation between static and dynamic routing protocols. This layer is commonly used to terminate VLANs from access layer switches. DCUFD v5. This layer performs tasks such as controlled-routing decision making and filtering to implement policy-based connectivity and QoS.0 © 2012 Cisco Systems. QoS. and provisioning are the important considerations at this layer. High availability is typically provided through dual paths from the aggregation layer to the core and from the access layer to the aggregation layer. and QoS mechanisms • Segments workgroups and isolates problems To Core To Core (…) Aggregation Access © 2012 Cisco and/or its affiliates. and default gateways • Implements policies including filtering.net  IP multicast: The access layer supports efficient network and bandwidth management by using software features such as Internet Group Management Protocol (IGMP) snooping. redistribution.0—1-12 Availability. this layer provides default gateway redundancy by using an FHRP such as HSRP. the aggregation layer summarizes routes from the access layer. Inc. or Virtual Router Redundancy Protocol (VRRP). Layer 3 equal-cost load sharing allows both uplinks from the aggregation to the core layer to be used. • Aggregates access nodes and uplinks • Provides redundant connections and devices for high availability • Offers routing services such as summarization. The aggregation layer also connects network services to the access layer and implements policies regarding QoS. so that they do not affect the core layer.

Cisco Data Center Solutions 1-67 . such as checking access lists and filtering. The core layer helps in scalability during future growth. Core (…) Aggregation Access © 2012 Cisco and/or its affiliates. • It provides reliability through redundancy and fast convergence. which would slow down the switching of packets.net • The core layer is a high-speed backbone and aggregation point for the enterprise. alternate paths. All rights reserved. and load balancing. Inc. The core should be a high-speed Layer 3 switching environment that uses hardware-accelerated services. The core devices must be able to implement scalable protocols and technologies. The core layer should not perform any packet manipulation. That type of design yields the fastest and most deterministic convergence results. • Separate core layer helps in scalability during future growth. the core uses redundant point-topoint Layer 3 interconnections in the core. DCUFD v5. The core must provide a high level of redundancy and must adapt to changes very quickly. For fast convergence around a link or node failure.0—1-13 The core layer is the backbone for connectivity and is the aggregation point for the other layers and modules in the Cisco Data Center Business Advantage architecture. Core devices are most reliable when they can accommodate failures by rerouting traffic and can respond quickly to changes in the network topology. © 2012 Cisco Systems.certcollecion.

Unified fabric technology enables a wire-once infrastructure in which there are no physical barriers in the network to redeploying applications or capacity. The main advantage of Cisco Unified Fabric is that it offers LAN and SAN infrastructure consolidation.0 © 2012 Cisco Systems. configuration.  Performance: Data centers should be able to provide deterministic latency and large bisectional bandwidth to applications and services as needed. Universal I/O brings efficiency to the data center through “wire-once” deployment and protocol simplification.  Resiliency: The data center infrastructure and implemented features need to provide high availability to the applications and services they support. and consistent management of existing and new services. Inc.  Flexibility: Businesses require a single architecture that can support multiple deployment models to provide the flexible component of the architecture. In a 30-megawatt (MW) data center. this increase accounts for an annual US$60 million cost deferral. .net Cisco Unified Fabric delivers transparent convergence. in the Cisco WebEx data center. 1-68 Designing Cisco Data Center Unified Fabric (DCUFD) v5. thus delivering hardware freedom. The network comes in as a central component to the evolution of the virtualized data center and to the enablement of cloud computing.  Scale: Data centers need to be able to support large Layer 2 domains that can provide massive scalability without the loss of bandwidth and throughput. and sophisticated intelligent services to provide the following benefits:  Support for traditional and virtualized data centers  Reduction in TCO  An increase in ROI The five architectural components that impact TCO include the following:  Simplicity: Businesses require the data center to be able to provide easy deployment.certcollecion. It is no longer necessary to plan for and maintain two completely separate infrastructures. This efficiency. has shown the ability to increase workload density by 30 percent in a flat power budget. massive three-dimensional scalability.

 Cisco Unified Fabric convergence offers the best of both SANs and LANs by enabling users to take advantage of the Ethernet economy of scale. private clouds. © 2012 Cisco Systems. scalable architecture OTV Workload mobility FEX-Link Simplified management VM-FEX VM-aware networking DCB/FCoE Consolidated I/O vPC Active-Active uplinks All specifications subject to change without notice © 2012 Cisco and/or its affiliates. Cisco Unified Fabric offers you a massive reduction of cables. and geographic span. business resiliency. Cisco Data Center Solutions 1-69 .  Cisco FabricPath: Cisco FabricPath is a set of capabilities within the Cisco Nexus Operating System (NX-OS) Software that combines the plug-and-play simplicity of Ethernet with the reliability and scalability of Layer 3 routing. extensive vendor community.certcollecion. Cisco Unified Fabric complements Unified Network Services and Unified Computing to enable IT and business innovation. magnitude of ports and bandwidth. Cisco FabricPath enables companies to build highly scalable Layer 2 multipath networks without the Spanning Tree Protocol (STP). and high-performance computing environments. and future innovations. the Cisco Unified Fabric evolution is continuing to provide architectural innovations.  Cisco Unified Fabric scalability delivers performance. Inc.  Cisco Overlay Transport Virtualization (OTV): Cisco OTV is an industry-first solution that significantly simplifies extending Layer 2 applications across distributed data centers. Cisco OTV allows companies to deploy virtual computing resources and clusters across geographically distributed data centers to deliver transparent workload mobility. To support the five architectural attributes. These networks are particularly suitable for large virtualization deployments. DCUFD v5. All rights reserved. various adapters. and superior computing resource efficiencies. and pass-through modules. FabricPath Flexible.net Cisco Unified Fabric offers a low-latency and lossless connectivity solution that is fully virtualization-enabled.0—1-15 Cisco Unified Fabric is a foundational pillar for the Cisco Data Center Business Advantage architectural framework. switches.  Cisco Unified Fabric intelligence embeds critical policy-based intelligent functionality into the unified fabric for both traditional and virtualized data centers.

on a converged network. Cisco VM-FEX architecture provides virtualization-aware networking and policy control.  Data Center Bridging (DCB) and FCoE: Cisco Unified Fabric provides the flexibility to run Fibre Channel. Designing Cisco Data Center Unified Fabric (DCUFD) v5.  vPC: Cisco virtual port channel (vPC) technology enables the deployment of a link aggregation from a generic downstream network device to two individual and independent Cisco NX-OS devices (vPC peers). . IP-based storage such as network-attached storage (NAS) and Small Computer System Interface over IP or FCoE.certcollecion.  Cisco VM-FEX: Cisco Virtual Machine Fabric Extender (VM-FEX) provides advanced hypervisor switching as well as high-performance hardware switching. Cisco FEX-Link uses the Cisco Nexus 2000 Series FEXs to extend the capacities and benefits that are offered by upstream Cisco Nexus switches.net 1-70  Cisco FEX-Link: Cisco Fabric Extender (FEX)-Link technology enables data center designers to gain new design flexibility while simplifying cabling infrastructure and management complexity. and service enabled. Inc. or a combination of these technologies. extensible. It is flexible.0 © 2012 Cisco Systems. This multichassis link aggregation path provides both link redundancy and active-active link throughput that scales high-performance failover characteristics.

10 Gigabit Ethernet is the basis for a new DCB protocol that. and HPC data. and high-performance computing (HPC) protocols.1Qaz): Provides bandwidth management and priority selection  Quantized congestion notification (QCN. Currently. Data Center Bridging Exchange (DCBX.certcollecion. It provides a lossless data center transport layer that enables the convergence of LANs and SANs onto a single unified fabric. over the same physical network.net Data Center Bridging Cisco Unified Fabric is a network that can transport many different protocols.1 DCB is a collection of standards-based extensions to Ethernet and it can enable a Converged Enhanced Ethernet (CEE). SAN. IEEE 802. IEEE 802. IEEE 802. provides a common platform for lossy and lossless protocols that carry LAN.  Priority-based flow control (PFC. DCB enhances the operation of iSCSI. In addition to supporting FCoE.1AB): Exchanges parameters between DCB devices and leverages functions that are provided by Link Layer Discovery Protocol (LLDP) © 2012 Cisco Systems.1Qbb): Provides lossless delivery for selected classes of service  Enhanced Transmission Selection (ETS. IEEE 802. Cisco Data Center Solutions 1-71 . Inc.1au): Provides congestion awareness and avoidance (optional) Note  Cisco equipment does not use QCN as a means to control congestion. SAN. and other business-critical traffic. PFC and ETS are used. Cisco does not have plans to implement QCN in its equipment. The IEEE 802. Instead. NAS. such as LAN. through enhanced features.

IEEE typically calls a standard specification by a number: for example.  Quality of service (QoS) assurance: This ability monitors the Fibre Channel traffic with respect to lossless delivery of Fibre Channel frames and bandwidth reservations for Fibre Channel traffic.certcollecion. FCoE FCoE is a new protocol that is based on the Fibre Channel layers that are defined by the ANSI T11 committee.1az.net Different organizations have created different names to identify the specifications.  A minimum 10-Gb/s Ethernet platform. FCoE addresses the following:  Jumbo frames: An entire Fibre Channel frame (2180 B in length) can be carried in the payload of a single Ethernet frame. The term Converged Enhanced Ethernet was created by IBM. so the organization grouped the specifications into DCB. FCoE traffic consists of a Fibre Channel frame that is encapsulated within an Ethernet frame. in the future. The Fibre Channel frame payload may in turn carry SCSI messages and data or. or DCB. 1-72 Designing Cisco Data Center Unified Fabric (DCUFD) v5. IEEE did not have a way to identify the group of specifications with a standard number.  FCoE Initialization Protocol (FIP): This protocol provides a login for Fibre Channel devices into the fabric. Inc. again. . IEEE has used the term Data Center Bridging.0 © 2012 Cisco Systems. to reflect the core group of specifications and to gain consensus among industry vendors (including Cisco) as to what a Version 0 list of the specifications would be before they all become standards. IEEE 802. may use FICON for mainframe traffic. and it replaces the lower layers of Fibre Channel traffic.  Fibre Channel port: World wide name (WWN) addresses are encapsulated in the Ethernet frames and MAC addresses are used for traffic forwarding in the converged network.

2.htm. Cisco Data Center Solutions 1-73 .  Link partners can choose supported features and willingness to accept configurations from peers. Inc. Without DCBX. DCBX is a discovery and capability exchange protocol that is used by devices that are enabled for Data Center Ethernet to exchange configuration information.certcollecion.  The Data Center Bridging Capability Exchange Protocol (DCBCXP) utilizes the LLDP and processes the local operational configuration for each feature. Details on DCBCXP can be found at http://www. © 2012 Cisco Systems.intel.  Servers need to learn whether they are connected to enhanced Ethernet devices.  Within the enhanced Ethernet cloud.pdf.com/technology/eedc/index.) Devices need to discover the edge of the enhanced Ethernet cloud:  Each edge switch needs to learn that it is connected to an existing switch. devices need to discover the capabilities of peers.net The DCBX protocol allows each DCB device to communicate with other devices and to exchange capabilities within a unified fabric. each device would not know if it could send lossless protocols like FCoE to another device that was not capable of dealing with lossless delivery.ieee802.org/1/files/public/docs2008/az-wadekar-dcbcxp-overview-rev0. The following parameters of the Data Center Ethernet features can be exchanged:  Priority groups in ETS  PFC  Congestion notification (as backward congestion notification [BCN] or as Quantized Congestion Notification [QCN])  Application types and capabilities  Logical link down to signify the loss of a logical connection between devices even though the physical link is still up  Network interface virtualization (NIV) (See http://www.

1X EIGRP VRRP LACP CTS PIM SNMP RIB Protocol Stack (IPv4 / IPv6 / L2) Layer 3 Protocols VLAN mgr RIB Protocol Stack (IPv4 / IPv6 / L2) Infrastructure Kernel © 2012 Cisco and/or its affiliates. • Securely delineated administrative contexts VDC A VDC B Layer 2 Protocols Layer 2 Protocols Layer 3 Protocols VLAN mgr UDLD OSPF GLBP VDC A STP CDP BGP HSRP VDC B IGMP sn. and each VDC is assigned its own physical ports.  They can be used by network administrators and operators for training purposes. 802. Designing Cisco Data Center Unified Fabric (DCUFD) v5.  They can provide departments with the ability to administer and maintain their own configurations.0—1-18 A VDC is used to virtualize the Cisco Nexus 7000 Switch and presents the physical switch as multiple logical devices. Inc.0 © 2012 Cisco Systems. All rights reserved.  They can consolidate the switch platforms of multiple departments onto a single physical platform. 802. .1X EIGRP VRRP LACP CTS PIM SNMP RIB VDC n RIB UDLD OSPF GLBP STP CDP BGP HSRP IGMP sn. VDCs provide the following benefits: 1-74  They can secure the network partition between different users on the same physical switch.net VDCs provide the following: VDCs do not have the following: • Flexible separation and distribution of software components • The ability to run different operating system levels on the same box at the same time • Flexible separation and distribution of hardware resources • A single “infrastructure” layer that processes hardware programming. Each VDC contains its own unique and independent VLANs and virtual routing and forwarding (VRF).  They can be dedicated for testing purposes without impacting production systems. DCUFD v5.certcollecion.

1Q) Tag-Based (802. Cisco Data Center Solutions 1-75 . Cisco VM-FEX provides visibility down to the VM level.0—1-19 Cisco VM-FEX encompasses a number of products and technologies that work together to improve server virtualization strategies:  Cisco Nexus 1000V Virtual Distributed Switch: This switch is a software-based switch that was developed in collaboration with VMware.  NIV: This VM networking protocol was jointly developed by Cisco and VMware and allows the Cisco VM-FEX functions to be performed in hardware. The Cisco NPV allows storage services to follow a VM as the VM moves. Inc. and regulatory compliance.certcollecion. The switch integrates directly with the VMware ESXi hypervisor. troubleshooting.  Cisco N-Port Virtualizer (NPV): This function is currently available on the Cisco MDS 9000 Series Multilayer Switches and the Cisco Nexus 5000 and 5500 Series Switches. Mobility of Network and Security Properties Nondisruptive Operational Model DCUFD v5.net Nexus 1000V Nexus 5500 Software Hypervisor Switching External Hardware Switching Tagless (802. the network and security policies automatically follow a VM that is being migrated with VMware VMotion. simplifying management. Because the switch can combine the network and server resources. All rights reserved. © 2012 Cisco Systems.1Qbh) Feature Set Flexibility Performance Consolidation Server VM 1 VM 2 Server VM 3 VM 1 VM 4 VM 2 VM 3 VM 4 Hypervisor Nexus 1000V VIC Hypervisor NIC NIC Nexus 1000V Nexus 5500 LAN Policy-Based VM Connectivity © 2012 Cisco and/or its affiliates.

Designing Cisco Data Center Unified Fabric (DCUFD) v5. Unified ports support all existing port types including 1 Gigabit and 10 Gigabit Ethernet.0—1-20 Unified ports are ports that can be configured as Ethernet or Fibre Channel.certcollecion. . Unified ports are supported on Cisco Nexus 5500UP Switches and Cisco UCS 6200 Series Fabric Interconnects. The benefits are as follows: 1-76  Deploy a switch. such as the Cisco Nexus 5500UP.0 © 2012 Cisco Systems.net • One port for all types of server I/O - • Any port can be configured as • 1/10 Gigabit Ethernet. FCoE on 10 Gigabit Ethernet (dedicated or converged link) or 8/4/2/1-G native Fibre Channel port. • DCB (lossless Ethernet). pure 1/2/4/8-Gb/s Fibre Channel interfaces such as any Nexus 5500UP port can be configured as 1 Gigabit/10 Gigabit Ethernet. as a data center switch standard capable of all important I/O  Mix Fibre Channel SAN to host. Fibre Channel Fibre Channel DCUFD v5. DCB (lossless Ethernet). FCoE. as well as switch and target with FCoE SAN  Implement with native Fibre Channel today and enable smooth migration to FCoE in the future. • FCoE on 10 Gigabit Ethernet (dedicated or converged link) or • 8/4/2/1G native Fibre Channel Flexibility of use - One standard chassis for all data center I/O needs Ethernet or Ethernet FCoE Ethernet Fibre or Channel Traffic © 2012 Cisco and/or its affiliates. All rights reserved. Inc.

Inc.  Cisco Nexus 2000 Series FEX: A category of data center products that are designed to simplify data center access architecture and operations. 2248TP GE. 2232PP 10GE. The Cisco Nexus 3064 Switch is well suited for financial © 2012 Cisco Systems. and rack and blade server environments.92 Tb/s Nexus 7018 Nexus 5596UP 1. which allow smooth transition from 10 Gigabit Ethernet to 40 Gigabit Ethernet. B22HP) Nexus 5010 NX-OS © 2012 Cisco and/or its affiliates.certcollecion. 10 Gigabit Ethernet. proven innovations of the Cisco Data Center Business Advantage architecture into the High-Frequency Trading (HFT) market.  Cisco Nexus 3000 Series Switches: The Cisco Nexus 3000 Series Switches extend the comprehensive.2 Tb/s 1 Tb/s 400 Gb/s Nexus 7010 7 Tb/s Nexus 3064 Nexus 5020 Nexus 4000 960G Nexus 7009 Nexus 1010 Nexus 5548UP 520G Nexus 2000 Nexus 1000V (2148T. Cisco Data Center Solutions 1-77 . The Cisco Nexus 3064 Switch supports 48 fixed 1/10-Gb/s Enhanced small form-factor pluggable plus (SFP+) ports and 4 fixed quad SFP+ (QSFP+) ports. Gigabit Ethernet. DCUFD v5. The Cisco Nexus 2000 Series FEXs act as remote line cards for the Cisco Nexus 5000 and 5500 Series and the Cisco Nexus 7000 Series Switches. The Cisco Nexus 1010 provides dedicated hardware for the VSM.net 15 Tb/s 7. unified fabric. The Cisco Nexus 1000V operates inside the VMware ESX or ESXi hypervisor. copper and fiber connectivity. It also supports the Cisco Nexus 1000V Network Analysis Module (NAM) Virtual Service Blade and provides a comprehensive solution for virtual access switching.0—1-21 The Cisco Nexus product family comprises the following switches:  Cisco Nexus 1000V: A virtual machine access switch that is an intelligent software switch implementation for VMware vSphere environments that run the Cisco Nexus Operating System (NX-OS) software. 2224TP GE. and supports the Cisco Virtual Network Link (VN-Link) server virtualization technology to provide the following: — Policy-based virtual machine connectivity — Mobile virtual machine security and network policy — Nondisruptive operational model for server virtualization and networking teams  Cisco Nexus 1010 Virtual Services Appliance: The appliance is a member of the Cisco Nexus 1000V Series Switches and hosts the Cisco Nexus 1000V Virtual Supervisor Module (VSM).5 Tb/s 1. All rights reserved. The Cisco Nexus 2000 Series uses the Cisco FEX-Link architecture to provide a highly scalable unified server-access platform across a range of 100-Mb/s Ethernet. making access switch deployment much easier for the network administrator.

line-rate Layer 2 and 3 unicast and multicast switching.1 DCB standards.0 © 2012 Cisco Systems. This capability provides networking over a single link. Designing Cisco Data Center Unified Fabric (DCUFD) v5. SAN.  Cisco Nexus 5000 and 5500 Series Switches: A family of line-rate. extremely low-latency. 10-Gb/s blade switch that is fully compliant with the INCITS FCoE and IEEE 802. It is a line-rate. with dual links used for redundancy. The Cisco Nexus 5000 Series Switches are designed for data centers transitioning to 10 Gigabit Ethernet as well as data centers ready to deploy a unified fabric that can manage LAN. The 10-slot chassis has front-to-back airflow. making it a good solution for hot-aisle and cold-aisle deployments. cooling. virtualized and nonvirtualized x86 computing architectures. The Cisco Nexus 7000 Series Switches incorporate significant enhancements in design.  Cisco Nexus 7000 Series Switches: A modular data center-class switch that is designed for highly scalable 10 Gigabit Ethernet networks with a fabric architecture that scales beyond 15 Tb/s.net collocation deployments and delivers features such as latency of less than a microsecond. 1-78  Cisco Nexus 4000 Switch Module for IBM BladeCenter: A blade switch solution for IBM BladeCenter H and HT chassis. power. airflow. and server clusters.certcollecion. scale-out. low-latency. Inc. This switch provides the server I/O solution that is required for high-performance. nonblocking. The 18-slot chassis uses side-to-side airflow to deliver high density in a compact form factor. and cabling. and support for 40 Gigabit Ethernet standards technologies. . lossless 10 Gigabit Ethernet and FCoE switches for data center applications. Layer 2. The switch is designed to deliver continuous system operation and virtualized services.

The VMware vCenter server defines the data center that the Cisco Nexus 1000V will manage. with each physical NIV port on a module being a physical Ethernet port. Inc. Cisco Data Center Solutions 1-79 . © 2012 Cisco Systems. The Cisco Nexus 1000V is effectively a virtual chassis. There are two components that are part of the Cisco Nexus 1000V implementation:  Virtual Supervisor Module (VSM): This is the control software of the Cisco Nexus 1000V distributed virtual switch.certcollecion.0—1-22 The Cisco Nexus 1000V Virtual Ethernet Module provides Layer 2 switching functions in a virtualized server environment. and managed as if it were a line card in a physical Cisco switch. with the first server or host automatically being assigned to the next available module number. The ports to which the virtual network interface card (vNIC) interfaces connect are virtual ports on the Cisco Nexus 1000V. It is modular. The servers are modules on the switch.0 host. All rights reserved. where they are assigned a global number. with each server being represented as a line card. It is based on the Cisco NX-OS Software. with the VEMs forming a switch domain that should be in the same virtual data center that is defined by VMware vCenter. The VSM can control several VEMs. and runs on either a virtual machine (VM) or as an appliance.net VEM VSM • Virtual Ethernet Module Replaces the VMware virtual switch • Enables advanced switching capability on the hypervisor • Provides each VM with dedicated switch ports Nexus 1010 Cisco VSMs Cisco VEM VM1 VM2 VM3 © 2012 Cisco and/or its affiliates. • CLI interface into the Nexus 1000v • Leverages NX-OS software • Controls multiple VEMs as a single network device • Can be a virtual or physical appliance Cisco VEM VM4 VM5 VM6 VM7 Cisco VEM VM8 VM9 VM10 VM11 VM12 DCUFD v5.  Virtual Ethernet Module (VEM): This is the part that actually switches the data traffic and runs on a VMware ESX 4. The Cisco Nexus 1000V provides visibility into the networking components of the ESX servers and access to the virtual switches within the network. Modules 1 and 2 are reserved for the VSM. This allows users to configure and monitor the virtual switch using the Cisco NX-OS CLI. and ports can either be physical or virtual. and replaces virtual switches within the VMware ESX servers.

and the switch offers 1. .0—1-23 Cisco Nexus 5548UP Switch The Cisco Nexus 5548UP Switch is the first of the Cisco Nexus 5500 platform. and eight ports of 8/4/2/1-Gb/s native Fibre Channel connectivity using the SFP interface.certcollecion. taking the maximum capacity for the switch to 96 ports.or 10-Gb/s FCOE Fibre Channel and three additional slots.net Nexus 5548 Nexus 5596 48-port Switch 96-port Switch • 32 fixed ports 1/10GE/FCoE/DCB • 1 Expansion Module Slot • 48 fixed ports 1/10GE/FCoE/FC (Unified Ports) • 3 Expansion Module Slot Ethernet Ethernet + Fibre Channel • 16 ports 1/10GbE. The three additional slots will accommodate any of the expansion modules for the Cisco Nexus 5500 Series Switches. Inc.or 10-Gb/s FCOE Fibre Channel ports and one expansion slot.  Fibre Channel plus Ethernet module that provides 8 x 1/10 Gigabit Ethernet and FCoE ports using the SFP+ interface. The switch has 32 1. Both of the Cisco Nexus 5500 Series Switches support Cisco FabricPath. FCoE. 1-80 Designing Cisco Data Center Unified Fabric (DCUFD) v5. DCB • 8ports 1/10GbE. Expansion Modules for the Cisco 5500 Series Switches The Cisco Nexus 5500 Series Switches support the following expansion modules:  Ethernet module that provides 16 x 1/10 Gigabit Ethernet and FCoE ports using SFP+ interfaces. These ports are unified ports that provide flexibility regarding the connectivity requirements.  A Layer 3 daughter card for routing functionality The modules for the Cisco Nexus 5500 Series Switches are not backward-compatible with the Cisco Nexus 5000 Series Switches. 10 Gigabit Ethernet and FCoE switch offering up to 960-Gb/s throughput and up to 48 ports. The Cisco Nexus 5500 Series supports the same Cisco Nexus 2200 Series FEXs. DCUFD v5. and with the Layer 3 routing module.0 © 2012 Cisco Systems. Cisco Nexus 5596UP Switch The Cisco Nexus 5596UP Switch has 48 fixed ports capable of supporting 1.92-Tb/s throughput. both Layer 2 and Layer 3 support is provided. FCoE. It is a 1-RU. DCB • 8ports 1/2/4/8GFC Nexus 2248 FEX • 48 Fixed 100M/1GbE ports • 4 Fixed 10GbE uplinks Nexus 2232 FEX • 32 1/10 GE Ethernet/FCoE • 8 10 GE DCB/FCoE uplinks Nexus 2224 FEX • 24 Fixed 100M/1GbE ports • 2 Fixed 10GbE uplinks Nexus B22 FEX • 16 10GbE ports • 8 10GbE uplinks Cisco Data Center Network Manager (DCNM) and Fabric Manager © 2012 Cisco and/or its affiliates. All rights reserved.

Note © 2012 Cisco Systems. and for the Cisco Nexus 7000 Series.net Cisco Nexus 2000 Series FEXs The Cisco Nexus 2000 Series FEXs offer front-to-back cooling.1(2) software.certcollecion. Cisco Nexus 5500 Series Switches and Cisco Nexus 2200 Series FEXs can be ordered with front-to-back or back-to-front airflow direction. Inc. This way you can achieve the desired switch orientation and still fit into your hot aisle-cold aisle thermal model. a hotswappable fan tray with redundant fans. The Cisco Nexus 2000 Series has redundant hot-swappable power supplies. Cisco Data Center Solutions 1-81 . The Cisco Nexus 2000 Series is an external line module for the Cisco Nexus 5000 and 5500 Series Switches.  Cisco B22 Blade FEX for HP: 16 x 10 Gigabit Ethernet host interfaces and 8 x 10 Gigabit Ethernet FCoE uplinks (SFP+). This model is supported as an external line module for the Cisco Nexus 7000 Series using Cisco NX-OS 5.  Cisco Nexus 2232PP 10GE FEX: 32 x 1/10 Gigabit Ethernet and FCoE ports (SFP+) and 8 x 10 Gigabit Ethernet FCoE uplinks (SFP+). placement of all switch ports at the rear of the unit in close proximity to server ports. and is a 1-RU form factor. The Cisco Nexus 2000 Series has two types of ports: ports for end-host attachment and uplink ports. compatibility with data center hot-aisle and cold-aisle designs.  Cisco Nexus 2148T FEX: 48 x 1000BASE-T ports and 4 x 10 Gigabit Ethernet uplinks (SFP+)  Cisco Nexus 2224TP GE FEX: 24 x 100/1000BASE-T ports and 2 x 10 Gigabit Ethernet uplinks (SFP+)  Cisco Nexus 2248TP GE FEX: 48 x 100/1000BASE-T ports and 4 x 10 Gigabit Ethernet uplinks (SFP+). depending on the fan tray that is ordered. and accessibility of all user-serviceable components from the front panel.

lossless Ethernet.0 us 2. middle-of-row (MoR).92 Tb/s 1 RU 2 RU 1 Gigabit Ethernet Port Density 48* 96* 10 Gigabit Ethernet Port Density 48 96 16 96 Port-to-Port Latency 2.0 us Number of VLANs 4096 4096 8G Native Fibre Channel Port Density ✔* ✔* 1 Gigabit Ethernet Port Scalability 1152** 1152** 10 Gigabit Ethernet Port Scalability 768** 768** ✔ ✔ Layer 3 Capability 40 Gigabit Ethernet Ready *Layer 3 requires field-upgradeable component ** Scale expected to increase with future software releases © 2012 Cisco and/or its affiliates. front-to-back cooling is used for consistency with server designs. The family can support unified fabric in enterprise and service provider data centers. offering true 1. and the upcoming Cisco Nexus 5596UP Switch provides a density of 96 1/10Gb/s ports in 2RUs. The Cisco Nexus 5548P can have 48 Ethernet ports at 10 Gb/s sending packets simultaneously without any effect on performance. The Cisco Nexus 5500 platform provides a rich feature set that makes it well suited for top-of-rack (ToR). wire-speed performance. All rights reserved. The absence of resource sharing helps ensure the best performance of each port regardless of the traffic patterns on other ports.net Product Features and Specifications Switch Fabric Throughput Switch Footprint Nexus 5548UP Nexus 5596UP 960 Gb/s 1. DCUFD v5. The upcoming Cisco Nexus 5596UP can have 96 Ethernet ports at 10 Gb/s.92-Tb/s bidirectional bandwidth.  Nonblocking line-rate performance: All the 10 Gigabit Ethernet ports on the Cisco Nexus 5500 platform can manage packet flows at wire speed. The port counts are based on 24 Cisco Nexus 2000 FEXs per Cisco Nexus 5500 Switch. Designing Cisco Data Center Unified Fabric (DCUFD) v5.certcollecion. It protects investments in data center racks with standards-based 1 and 10 Gigabit Ethernet and FCoE features. which protects the investments of enterprises. 1-82  High density and high availability: The Cisco Nexus 5548P provides 48 1/10-Gb/s ports in 1 RU. The switch family has sufficient port density to support single and multiple racks that are fully populated with blade and rack-mount servers. and virtual machine awareness features that allow IT departments to consolidate networks. The combination of high port density.0—1-24 The table in the figure describes the differences between the Cisco Nexus 5000 and 5500 Series Switches. offering true 960-Gb/s bidirectional bandwidth.0 © 2012 Cisco Systems. or end-of-row (EoR) access-layer applications. . The Cisco Nexus 5500 Series is designed with redundant and hotswappable power and fan modules that can be accessed from the front panel. where status lights offer an at-a-glance view of switch operation. Cisco Nexus 5500 Platform Features The Cisco Nexus 5500 Series is the second generation of access switches for 10 Gigabit Ethernet connectivity. To support efficient data center hotand cold-aisle designs. Inc. and extremely low latency makes this switch family well suited to meet the growing demand for 10 Gigabit Ethernet.

— Lossless Ethernet with PFC: By default. Single-stage fabric means that a single crossbar fabric scheduler has complete visibility into the entire system and can therefore make optimal scheduling decisions without building congestion within the switch. Ethernet is designed to drop packets when a switching node cannot sustain the pace of the incoming traffic. and all other data path features turned on. and when too many bursts occur at the same time.1p CoS. This ability avoids head-of-line (HOL) blocking. and a total of 768 VOQs per ingress interface on the Cisco Nexus 5596UP. With a single-stage fabric. The Cisco Nexus 5500 platform increases the number of egress queues by supporting 8 egress queues for unicast and 8 egress queues for multicast. switches support eight egress queues per output port. the switch does not contribute to it. which would otherwise cause congestion to spread. Packet drops make Ethernet very flexible in managing random traffic patterns that are injected into the network. This latency was measured on fully configured interfaces.  Single-stage fabric: The crossbar fabric on the Cisco Nexus 5500 Series is implemented as a single-stage fabric. However.net  Low latency: The cut-through switching technology that is used in the ASICs of the Cisco Nexus 5500 Series enables the product to offer a low latency of 2 microseconds. Depending on how the burst of congestion is smoothed out. The no-drop benefits are significant for any protocol that assumes reliability at the media level. The low latency on the Cisco Nexus 5500 Series. congestion does not result in drops. which transforms Ethernet into a reliable medium. so that a congested egress port does not affect traffic that is directed to other egress ports. a short period of congestion occurs. the overall network performance can be affected. make the Cisco Nexus 5500 platform an excellent choice for latency-sensitive environments.1p CoS. QoS. The extensive use of VOQs in the system helps ensure high throughput on a per-egress. which remains constant regardless of the size of the packet being switched. Inc. The Cisco Nexus 5500 platform offers a complete range of congestion management features to reduce congestion. such as FCoE. each servicing one IEEE 802. together with a dedicated buffer per port and the congestion management features.certcollecion. they effectively make Ethernet unreliable and push the burden of flow control and congestion management up to a higher level in the network stack. Cisco Data Center Solutions 1-83 . or a total of 384 VOQs per ingress interface on the Cisco Nexus 5548P. The CoS granularity allows some classes of service to have a reliable no-drop behavior. with access control lists (ACLs). © 2012 Cisco Systems. Servers tend to generate traffic in bursts.  Congestion management: Keeping latency low is not the only critical element for a highperformance network solution. With a flow-control mechanism in place. per-CoS basis. the congestion becomes exclusively a function of your network design. while allowing other classes to retain traditional best-effort Ethernet behavior. Through configuration. PFC offers point-to-point flow control of Ethernet traffic that is based on IEEE 802. This support allows separation of unicast and multicast that are contending for system resources within the same CoS and provides more fairness between unicast and multicast. These features address congestion at different stages and offer granular control over the performance of the network. — Separate egress queues for unicast and multicast: Traditionally. the user can control the amount of egress port bandwidth for each of the 16 egress queues. thus eliminating any bottleneck within the switches. resulting in a total of 8 VOQs per egress on each ingress interface. — Virtual output queues: The Cisco Nexus 5500 platform implements virtual output queues (VOQs) on all ingress interfaces. Congestion on one egress port in one CoS does not affect traffic that is destined for other classes of service or other egress interfaces.1p class of service (CoS) uses a separate VOQ in the Cisco Nexus 5500 platform architecture. Every IEEE 802.

. The Cisco Nexus 5500 platform supports IEEE 1588 boundary clock synchronization. dropped packets can sometimes lead to long TCP timeouts and consequent loss of throughput. The access layer in the data center is typically built at Layer 2. For accurate application performance monitoring and measurement. This scheme enables both a single point of management and a uniform set of features and capabilities across all access-layer switches. Inc. whereby the intermediate switch becomes the data path to the central forwarding and policy enforcement under control of the central switch. Cisco Nexus 5500 Series hardware is capable of switching packets that are based on Cisco FabricPath headers or TRILL headers. an embedded switch or softswitch requires separate management. NIV enables a central switch to create an association with the intermediate switch. the access layer may be Layer 3. transactions occur in less than a millisecond. underutilized network bandwidth.  1-84 Layer 3: The design of the access layer varies depending on whether Layer 2 or Layer 3 is used at the access layer.certcollecion. ECN allows end-to-end notification of network congestion without dropping packets.  FCoE: FCoE is a standards-based encapsulation of Fibre Channel frames into Ethernet frames. The receiver of the packet echoes the congestion indicator to the sender. By implementing FCoE. Traditionally. the TCP sender takes action by controlling the flow of traffic. IEEE 1588 is designed for local systems that require very high accuracy beyond that attainable using Network Time Protocol (NTP).  IEEE 1588 Precision Time Protocol (PTP): In financial environments. However. and slow convergence. control-plane scalability.  Cisco FabricPath and Transparent Interconnection of Lots of Links (TRILL): Existing Layer 2 networks that are based on STP have a number of challenges to overcome. When congestion is detected. The Cisco Nexus 5500 platform also supports packet time stamping by including the IEEE 1588 time stamp in the Encapsulated Remote Switched Port Analyzer (ERSPAN) header. In other words. the systems supporting electronic trading applications must be synchronized with extremely high accuracy (to less than a microsecond). TCP detects network congestion by observing dropped packets. the Cisco Nexus 5500 platform enables storage I/O consolidation in addition to Ethernet. A Cisco Nexus 2000 Series FEX behaves as a virtualized remote I/O module. and the boundary clock will then act as a master clock for all attached slaves. such as two-tier designs.  NIV architecture: The introduction of blade servers and server virtualization has increased the number of access-layer switches that need to be managed. the Cisco Nexus 5500 platform will run PTP and synchronize to an attached master clock. This capability enables customers to deploy scalable Layer 2 networks with native Layer 2 multipathing. Cisco FabricPath and TRILL are two emerging solutions for creating scalable and highly available Layer 2 networks. In some designs. these Layer 2 networks lack fundamentals that limit their scalability. enabling the Cisco Nexus 5500 platform to operate as a virtual modular chassis. particularly highfrequency trading environments. Although enhancements to STP and features such as Cisco vPC technology help mitigate some of these limitations. These challenges include suboptimal path selection. although this may not Designing Cisco Data Center Unified Fabric (DCUFD) v5. The Cisco Nexus 5500 platform can set a mark in the IP header so that instead of dropping a packet.0 © 2012 Cisco Systems. it sends a signal impending congestion. Building at Layer 2 allows better sharing of service devices across multiple servers and allows the use of Layer 2 clustering. In both cases. One critical implementation of NIV in the Cisco Nexus 5000 and 5500 Series is the Cisco Nexus 2000 Series FEXs and their deployment in data centers. which requires the servers to be near Layer 2.net — Explicit congestion notification (ECN) marking: ECN is an extension to TCP/IP that is defined in RFC 3168. which must respond as though congestion had been indicated by packet drops.

and fabric modules are accessible completely from the rear to ensure that cabling is not disrupted during maintenance. Inc. providing true I/O consolidation at the hardware level. The Cisco Nexus 7000 Series Switches run the Cisco NX-OS software to deliver a rich set of features with nonstop operation. Redundant power supplies. and ease of management.0—1-25 The Cisco Nexus 7000 Series Switches offer a modular data center-class product that is designed for highly scalable 10 Gigabit Ethernet networks with a fabric architecture that scales beyond 15 Tb/s. DCUFD v5. reliability. and cooling in both new and existing facilities. The result is a full-featured Ethernet switch and a full-featured Fibre Channel switch that is combined into one product. operation.  Front-to-back airflow with 10 front-accessed vertical module slots and an integrated cable management system facilitates installation.  Designed for reliability and maximum availability.net imply that every port on these switches is a Layer 3 port. fan trays.  Hardware-level I/O consolidation: The Cisco Nexus 5500 platform ASICs can transparently forward Ethernet. © 2012 Cisco Systems.  18 front-accessed module slots with side-to-side airflow in a compact horizontal form factor with purpose-built integrated cable management ease operation and reduce complexity. and TRILL. The Cisco Nexus 5500 platform can operate in Layer 3 mode with the addition of a routing module. scalability. FCoE. Fibre Channel. • 15+ Tb/s system • DCB and FCoE support • Continuous operations • Device virtualization • Modular OS • Cisco TrustSec Nexus 7009 Nexus 7010 Nexus 7018 Slots 7 I/O + 2 sup 8 I/O + 2 sup 16 I/O + 2 sup Height 14 RU 21 RU 25 RU BW / Slot Fab 1 N/A 230 Gig / slot 230 Gig / slot BW / Slot Fab 2 550 Gig / Slot 550 Gig / slot 550 Gig / slot © 2012 Cisco and/or its affiliates. All rights reserved.certcollecion. The solution that is adopted by the Cisco Nexus 5500 platform reduces the costs of consolidation through a high level of integration in the ASICs. Cisco FabricPath. The Cisco Nexus 7000 Series provides integrated resilience that is combined with features optimized specifically for the data center for availability. all interface and supervisor modules are accessible from the front. Cisco Data Center Solutions 1-85 .

The transparent front door allows easy observation of cabling and module indicators and status lights without any need to open the doors. These LEDs report the power supply.  Airflow is side-to-side.net  The system uses dual dedicated supervisor modules and fully distributed fabric architecture.  The system uses dual system and fabric fan trays for cooling. with support for two supervisors. . Inc. providing ease of maintenance tasks with minimal disruption. The door can be completely removed for both initial cabling and day-to-day management of the system. which combined with the chassis midplane. This increases the forwarding capacity on the 10-slot form factor to 9.  The integrated cable management system is designed to support the cabling requirements of a fully configured system to either or both sides of the switch.  The purpose-built optional front module door provides protection from accidental interference with both the cabling and modules that are installed in the system.certcollecion. Each fan tray is redundant and composed of independent variable-speed fans that automatically adjust to the ambient temperature.0 © 2012 Cisco Systems. deliver up to 230 Gb/s per slot for 4. The door supports a dual-opening capability for flexible operation and cable installation while attached. meeting the demands of large deployments. All system components can easily be removed with the cabling in place. the system continues to operate without a significant degradation in cooling capacity. allowing hot swapping without affecting the system.  The crossbar fabric modules are located in the front of the chassis. Designing Cisco Data Center Unified Fabric (DCUFD) v5. This adjustment helps reduce power consumption in well-managed facilities while providing optimum operation of the switch.7 Tb/s. allowing maximum flexibility. If either a single fan or a complete fan tray fails. with up to 8 I/O module slots. and 7. fabric. There are five rear-mounted fabric modules. Fan tray redundancy features help ensure reliability of the system and support for hot swapping of fan trays.  The midplane design supports flexible technology upgrades as your needs change and provides ongoing investment protection.9 Tb/s and on the 18-slot form factor to 18. The LEDs alert operators to the need to conduct further investigation. and I/O module status. The system design increases cooling efficiency and provides redundancy capabilities. Cisco Nexus 7000 Series 9-Slot Chassis  The Cisco Nexus 7000 Series 9-slot chassis with up to 7 I/O module slots supports up to 224 10 Gigabit Ethernet or 336 Gigabit Ethernet ports. Migrating to the Cisco Fabric Module 2 increases the bandwidth per slot to 550 Gb/s.8 Tb/s in the 18-slot form factor using the Cisco Fabric Module 1.  Front-to-back airflow helps ensure that use of the Cisco Nexus 7000 Series 10-slot chassis addresses the requirement for hot-aisle and cold-aisle deployments without additional complexity. supervisor. Cisco Nexus 7000 Series 10-Slot Chassis 1-86  The Cisco Nexus 7000 Series 10-slot chassis.  Independent variable-speed system and fabric fans provide efficient cooling capacity to the entire system. fan. supports up to 256 10 Gigabit Ethernet or 384 Gigabit Ethernet ports.  A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components.1 Tb/s of forwarding capacity in the 10-slot form factor.

providing ease of maintenance tasks with minimal disruption. fan. supervisor. All system components can easily be removed with the cabling in place.  A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components. The transparent front door allows observation of cabling and module indicator and status lights. supervisor. fabric.net  The integrated cable management system is designed for fully configured systems. and I/O module status.certcollecion.  The purpose-built optional front module door provides protection from accidental interference with both the cabling and modules that are installed in the system. The LEDs alert operators to the need to conduct further investigation. optimizing the use of rack space. The LEDs alert operators to the need to conduct further investigation. These LEDs report the power supply. The door can be completely removed for both initial cabling and day-to-day management of the system. Inc. The transparent front door allows easy observation of cabling and module indicators and status lights without any need to open the doors. Fan tray redundancy features help ensure reliability of the system and support for hot swapping of fan trays. These LEDs report the power supply.  A series of LEDs at the top of the chassis provides a clear summary of the status of the major system components. The system allows cabling either to a single side or to both sides for maximum flexibility without obstructing any important components. Cisco Nexus 7000 Series 18-Slot Chassis  The Cisco Nexus 7000 Series 18-slot chassis with up to 16 I/O module slots supports up to 512 10 Gigabit Ethernet or 768 Gigabit Ethernet ports. fan.  The cable management cover and optional front module doors provide protection from accidental interference with both the cabling and modules that are installed in the system. fabric.  The integrated cable management system is designed to support the cabling requirements of a fully configured system to either or both sides of the switch. This flexibility eases maintenance even when the system is fully cabled.  Independent variable-speed system and fabric fans provide efficient cooling capacity to the entire system. and I/O module status. The door supports a dual-opening capability for flexible operation and cable installation while fitted.  Side-to-side airflow increases the system density within a 25-RU footprint. meeting the demands of the largest deployments. Cisco Data Center Solutions 1-87 . © 2012 Cisco Systems. The optimized density provides more than 16 RU of free space in a standard 42-RU rack for cable management and patching systems.  The system supports an optional air filter to help ensure clean airflow through the system. allowing maximum flexibility. The addition of the air filter satisfies Network Equipment Building System (NEBS) requirements.

net Cisco Data Center Architecture Storage This topic describes the Cisco Data Center architectural framework storage component.certcollecion. new line card modules have been introduced to support those faster data rates. These line card modules can be installed in existing chassis without having to replace them with new ones. such as the following: 1-88  Ultrahigh availability  Scalable architecture  Comprehensive security features  Ease of management  Advanced diagnostics and troubleshooting capabilities  Transparent integration of multiple technologies  Multiprotocol support Designing Cisco Data Center Unified Fabric (DCUFD) v5. the Cisco MDS switches and directors have embodied many innovative features that help improve performance and help overcome some of the limitations present in many SANs today. One of the benefits of the Cisco MDS products is that the chassis supports several generations of line card modules without modification. Multilayer switches are switching platforms with multiple layers of intelligent features. . Cisco MDS SAN Switches Cisco Multilayer Director Switches (MDS) and directors and the line card modules provide connectivity in a SAN. As Fibre Channel speeds have increased from 2 to 4 Gb/s and are now 8 Gb/s. Since their introduction in 2002. Inc.0 © 2012 Cisco Systems.

Based on the Cisco MDS 9000 Series operating system and a comprehensive management platform in Cisco Fabric Manager. The Cisco MDS 9148 switch is an 8-Gb/s Fibre Channel switch providing forty-eight 2-. 32-. 4-. which encrypts Fibre Channel traffic  Secure Erase. The Cisco MDS 9222i Multiservice Modular Switch uses the 18/4 architecture of the DSX9304-18K9 line card and includes native support for Cisco MDS Storage Media Encryption (SME) along with all the features of the Cisco MDS 9216i Multilayer Fabric Switch. or 8-Gb/s Fibre Channel ports. the Cisco MDS 9500 Series offers various application line card modules and a scalable architecture from an entry-level fabric switch to director-class systems.certcollecion. which reduce overall latency  Continuous data protection (CDP) is enabled by the Cisco SANTap feature © 2012 Cisco Systems. The Cisco MDS DS-X9708-K9 module has eight 10 Gigabit Ethernet multihop-capable FCoE ports. which permanently erases data  SAN extension features like Write Acceleration and Tape Read Acceleration. or 48-port models but can be expanded to use the 8-port license. It enables extension of FCoE beyond the access layer into the core of the data center with a full line-rate FCoE module for the Cisco MDS 9500 Series Multilayer Directors. Inc. Cisco Data Center Solutions 1-89 . which provides online data migration  LinkSec encryption.net The Cisco MDS 9500 Series products offer industry-leading investment protection and offer a scalable architecture with highly available hardware and software. Services-oriented SAN fabrics can transparently extend any of the following SAN services to any device within the fabric:  Data Mobility Manager (DMM). The base license supports 16-.

. Therefore. SAN designers build separate fabrics. 1-90 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc. In addition.certcollecion. Using VSANs. otherwise known as SAN islands. The EISL is supported on links that interconnect any Cisco MDS 9000 Series Switch product. when a VSAN is created. and can still provide island-like isolation.net SAN Islands A SAN island refers to a physically isolated switch or group of switches that is used to connect hosts to storage devices. VSANs provide not only hardware-based isolation but also a complete replicated set of Fibre Channel services for each VSAN. Reasons for building SAN islands may include the desire to isolate different applications in their own fabrics or to raise availability by minimizing the impact of fabric-wide disruptive events. VSANs provide a method for allocating ports within a physical fabric to create virtual fabrics. separate SAN islands also offer a higher degree of security because each physical infrastructure contains its own separate set of fabric services and management access. configuration management capability. for various reasons. fewer and less-costly redundant fabrics can be built. Instead. Each separate virtual fabric is isolated from another by using a hardware-based frame-tagging mechanism on VSAN member ports and Enhanced Inter-Switch Links (EISLs). each housing multiple applications. The EISL type has been created and includes added tagging information for each frame within the fabric. VSANs can virtualize the physical switch into many virtual switches. An analogy is that VSANs on Fibre Channel switches are like VDCs on Cisco Nexus 7000 Series Ethernet switches. a completely separate set of fabric services. VSAN Scalability VSAN functionality is a feature that was developed by Cisco that can leverage the advantages of isolated SAN fabrics with capabilities that address the limitations of isolated SAN islands. SAN designers can raise the efficiency of a SAN fabric and alleviate the need to build multiple physically isolated fabrics to meet organizational or application needs. and policies are created within the new VSAN.0 © 2012 Cisco Systems. Independent physical SAN islands are virtualized onto a common SAN infrastructure. Today.

NPV Core Switches NPIV-Enabled MDS 9500 • Must enable the N-Port ID Virtualization (NPIV) feature • Supports up to 100 NPV edge switches NPV-enabled switches are standards-based and interoperable with other third-party switches in the SAN. so effectively there can be no more than 40 switches that are connected together. Cisco NPV aggregates multiple locally connected N Ports into one or more external N-Port links.The switch reboots. SD. • Supports only F.certcollecion. and no physical port may belong to more than one VSAN. NP = proxy N mode © 2012 Cisco and/or its affiliates. and as a Fibre Channel switch to the servers in the fabric or blade switch.The configuration is not saved. NPV Edge Switches • Need to enable switch in NPV mode • Changing to and from NPV mode is disruptive: Servers . which will limit the number of domains that can be deployed in data centers. Therefore. Inc. MDS 9124 MDS 9134 . Cisco Data Center Solutions 1-91 . Each Fibre Channel switch is identified by a single domain ID. DCUFD v5. the original storage manufacturer has only qualified up to 70 domains per fabric or VSAN. Blade switches and top-of-rack access layer switches will also consume a domain ID. However. Cisco NPV also allows multiple devices to attach to the same port on the NPV core switch. which share the domain ID of the NPV core switch among multiple NPV switches. and NP modes • Supports 16 VSANs • Local switching is not supported. .net Membership in a VSAN is based on the physical port. F = fabric mode.0—1-30 The Fibre Channel standards as defined by the ANSI T11 committee allow for up to 239 Fibre Channel domains per fabric or VSAN. All rights reserved.Switching is done at the core. The Cisco NPV addresses the increase in the number of domain IDs that are needed to deploy many ports by making a fabric or module switch appear as a host to the core Fibre Channel switch. © 2012 Cisco Systems. SD = SPAN destination mode. and it reduces the need for more ports on the core. the node that is connected to a physical port becomes a member of that VSAN port.

Inc.0—1-31 © 2012 Cisco Systems. 1-92 Designing Cisco Data Center Unified Fabric (DCUFD) v5.net Summary This topic summarizes the primary points that were discussed in this lesson. • The Cisco Data Center architecture is an architectural framework for connecting technology innovation to business innovation.certcollecion. • The Cisco Nexus product range can be used at any layer of the network depending on the network and application requirements. © 2012 Cisco and/or its affiliates. • The Cisco MDS product range is used to implement intelligent SAN based on Fibre Channel. . All rights reserved. FCoE or iSCSI protocol stack.0 DCUFD v5.

you will be able to define the tasks and phases of the design process for the Cisco Data Center solution. you will gain insight into the data center solution design process.certcollecion. Objectives Upon completing this lesson. There is a difference in the design phase for new scenarios and designs that involve an existing production environment and usually a migration from the old to the new environment.net Lesson 3 Designing the Cisco Data Center Solution Overview In this lesson. This ability includes being able to meet these objectives:  Describe the design process for the Cisco Data Center solution  Assess the deliverables of the Cisco Data Center solution  Describe Cisco Validated Designs . This lesson provides an overview of how a data center solution is designed and the documentation that is necessary.

1-94 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Design workshop Assessment 1. To track the design process progress and completed and open actions. Verification phase: To ensure that the designed solution architecture does meet the customer expectations. Each phase of the design process has steps that need to be taken in order to complete the phase.net Design Process This topic describes the design process for the Cisco Data Center solution. the solution designer creates the solution architecture by using the assessment phase results as input data. Audit* Analysis Solution sizing 2. In this phase. organizational constraints. technical goals. In general. and technical constraints. it is important to identify the organizational goals. the checklist in the figure can aid the effort. Assessment phase: This phase is vital for the project to be successful and to meet the customer needs and expectations. Some of the steps are mandatory and some are optional. Plan phase: In this phase.0 © 2012 Cisco Systems.0—1-4 To design a solution that meets customer needs. Inc. The decision about which steps are necessary is governed by the customer requirements and the type of the project (for example. new deployment versus migration). DCUFD v5. Verification Verification workshop Proof of concept* *Optional steps © 2012 Cisco and/or its affiliates. the solution should be verified and confirmed by the customer. .certcollecion. 3. all information that is relevant for the design has to be collected. Plan Deployment plan Migration plan* 3. All rights reserved. the design process can be divided into three major phases: 1. 2.

and support aligned to requirements) Implement the solution (Integrate without disruption or causing vulnerability) © 2012 Cisco and/or its affiliates.net Coordinate planning and strategy (Make sound financial decisions) Prepare Assess readiness (Can the solution support the customer requirements?) Operational excellence (Adapt to changing business requirements) Optimize Plan Operate Design Maintain solution health (Manage. cost. Implement DCUFD v5. The design specification is the basis for the implementation activities. critical milestones. Cisco Data Center Solutions 1-95 . and performing a gap analysis to determine whether the existing system infrastructure.  Implement: After the design has been approved.certcollecion. user needs. Design. sites. reliability. The solution is built or additional components are incorporated according to the design specifications. and resources that are required to implement changes to the solution. developing a solution strategy. resolve. implementation (and verification) begins. Inc. assessing any existing environment. All rights reserved. The project plan should align with the scope. security. For the design of the Cisco Data Center solution. and so on. and proposing a high-level conceptual architecture that identifies technologies that can best support the architecture. Implement. repair. and resource parameters established in the original business requirements. and performance. responsibilities. © 2012 Cisco Systems.  Plan: The plan phase involves identifying initial solution requirements based on goals. The PPDIOO solution life-cycle approach reflects the life-cycle phases of a standard solution.  Design: The initial requirements that were derived in the planning phase lead the activities of the solution design specialists. and replace) Design the solution (Products.0—1-5 Cisco has formalized the life cycle of a solution into six phases: Prepare. service. the first three phases are used. and Optimize (PPDIOO). with the goal of integrating devices without disrupting the existing environment or creating points of vulnerability. A project plan is useful to help manage the tasks. The prepare phase can establish a financial justification for the solution strategy by assessing the business case for the proposed architecture. facilities. The solution design specification is a comprehensive detailed design that meets current business and technical requirements and incorporates specifications to support availability. scalability. The plan phase involves characterizing sites. Plan. The PPDIOO phases are as follows:  Prepare: The prepare phase involves establishing the organizational requirements. Operate. and operational environment are able to support the proposed system.

certcollecion.net

Operate: Operation is the final test of the appropriateness of the design. The operational
phase involves maintaining solution health through day-to-day operations, including
maintaining high availability and reducing expenses. The fault detection, correction, and
performance monitoring that occur in daily operations provide initial data for the
optimization phase.

Optimize: The optimization phase involves proactive management of the solution. The
goal of proactive management is to identify and resolve issues before they affect the
organization. Reactive fault detection and correction (troubleshooting) are needed when
proactive management cannot predict and mitigate failures. In the PPDIOO process, the
optimization phase may prompt a network redesign if too many solution problems and
errors arise, if performance does not meet expectations, or if new applications are identified
to support organizational and technical requirements.

Note

1-96

Although design is listed as one of the six PPDIOO phases, some design elements may be
present in all the other phases.

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net

The solution life-cycle approach provides four main benefits:

Lowering the total cost of ownership (TCO)

Increasing solution availability

Improving business agility

Speeding access to applications and services

The TCO is lowered by these actions:

Identifying and validating technology requirements

Planning for infrastructure changes and resource requirements

Developing a sound solution design aligned with technical requirements and business goals

Accelerating successful implementation

Improving the efficiency of your solution and of the staff supporting it

Reducing operating expenses by improving the efficiency of operation processes and tools

Solution availability is increased by these actions:

Assessing the security state of the solution and its ability to support the proposed design

Specifying the correct set of hardware and software releases and keeping them operational
and current

Producing a sound operations design and validating solution operation

Staging and testing the proposed system before deployment

Improving staff skills

Proactively monitoring the system and assessing availability trends and alerts

Proactively identifying security breaches and defining remediation plans

© 2012 Cisco Systems, Inc.

Cisco Data Center Solutions

1-97

certcollecion.net
Business agility is improved by these actions:

Establishing business requirements and technology strategies

Readying sites to support the system that you want to implement

Integrating technical requirements and business goals into a detailed design and
demonstrating that the solution is functioning as specified

Expertly installing, configuring, and integrating system components

Continually enhancing performance

Access to applications and services is accelerated by these actions:

1-98

Assessing and improving operational preparedness to support current and planned solution
technologies and services

Improving service-delivery efficiency and effectiveness by increasing availability, resource
capacity, and performance

Improving the availability, reliability, and stability of the solution and the applications
running on it

Managing and resolving problems affecting your system and keeping software applications
current

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net

The design methodology under PPDIOO consists of three basic steps:
Step 1

Identify customer requirements: In this step, important decision makers identify
the initial requirements. Based on these requirements, a high-level conceptual
architecture is proposed. This step is typically done within the PPDIOO prepare
phase.

Step 2

Characterize the existing network and sites: The plan phase involves
characterizing sites and assessing any existing networks and performing a gap
analysis to determine whether the existing system infrastructure, sites, and
operational environment can support the proposed system. Characterization of the
existing environment includes existing environment audit and analysis. During the
audit, the existing environment is thoroughly checked for integrity and quality.
During the analysis, environment behavior (traffic, congestion, and so on) is
analyzed. This investigation is typically done within the PPDIOO plan phase.

Step 3

Design the network topology and solutions: In this step, you develop the detailed
design. Decisions on solution infrastructure, intelligent services, and solutions are
made. You may also build a pilot or prototype solution to verify the design. You also
write a detailed design document.

© 2012 Cisco Systems, Inc.

Cisco Data Center Solutions

1-99

certcollecion.net

The first action of the design process and the first step of the assessment phase is the design
workshop. The workshop has to be conducted with proper customer IT personnel and can take
several iterations in order to collect relevant and valid information. In the design workshop, a
draft high-level architecture may already be defined.
The high-level agenda of the design workshop should include these tasks:

Define the business goals: This step is important for several reasons. First, you should
ensure that the project follows customer business goals, which will help you ensure that the
project is successful. With the list of goals, the solution designers can then learn and write
down what the customer wants to achieve with the project and what the customer expects
from the project.

Define the technical goals: This step ensures that the project also follows customer
technical goals and expectations and thus likewise ensures that the project is successful.
With this information, the solution designer will know the technical requirements of the
project.

Identify the data center technologies: This task is used to clarify which data center
technologies are covered by the project and is the basis for how the experts determine what
is needed for the solution design.

Define the project type: There are two main types of projects. They are new deployments
or the migration of existing solutions.

Identify the requirements and limitations: The requirements and limitations are the
details that significantly govern the equipment selection, the connectivity that is used, the
integration level, and the equipment configuration details. For migration projects, this step
is the first part of identifying relevant requirements and limitations. The second part is the
audit of the existing environment with proper reconnaissance and analysis tools.

The workshop can be conducted in person or it can be done virtually by using Cisco WebEx or
a Cisco TelePresence solution.

1-100

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
The design workshop is a mandatory step of the assessment phase because without it, there is
no relevant information with which the design can be created.

It is very important to gather all the relevant people in the design workshop to cover all the
aspects of the solution (the design workshop can be a multiday event).
The Cisco Unified Computing System (UCS) solution is effectively part of the data center, and
as such, the system must comply with all data center policies and demands. The following
customer personnel must attend the workshop (or should at least provide information that is
requested by the solution designer):

Facility administrators: They are in charge of the physical facility and have the relevant
information about environmental conditions like available power, cooling capacity,
available space and floor loading, cabling, physical security, and so on. This information is
important for the physical deployment design and can also influence the equipment
selection.

Network administrators: They ensure that the network properly connects all the bits and
pieces of the data center and thus also the equipment of the future Cisco UCS solution. It is
vital to receive all the information about the network: throughput, port and connector types,
Layer 2 and Layer 3 topologies, high-availability mechanisms, addressing, and so on. The
network administrators may report certain requirements for the solution.

Storage administrators: Here, the relevant information encompasses storage capacity
(available and used), storage design and redundancy mechanisms (logical unit numbers
[LUNs], Redundant Array of Independent Disks [RAID] groups, service processor ports,
and failover), storage access speed, type (Fibre Channel, Internet Small Computer Systems
Interface [iSCSI], Network File System [NFS]), replication policy and access security, and
so on.

© 2012 Cisco Systems, Inc.

Cisco Data Center Solutions

1-101

certcollecion.net

1-102

Server and application administrators: They know the details for the server
requirements, operating systems, and application dependencies and interrelations. The
solution designer learns which operating systems and versions are or will be used, what the
requirements of the operating systems are from the connectivity perspective (one network
interface card [NIC], two NICs, NIC teaming, and so on). The designer also learns which
applications will be deployed on which operating systems and what the application
requirements will be (connectivity, high availability, traffic throughput, typical memory
and CPU utilization, and so on).

Security administrators: The solution limitations can also be known from the customer
security requirements (for example, the need to use separate physical VMware vSphere
hosts for a demilitarized zone [DMZ] and private segments). The security policy also
defines the control of equipment administrative access and allowed and restricted services
(for example, Telnet versus Secure Shell [SSH]), and so on.

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

it is clear that some information should be collected over a longer time to be relevant (for example. For proper design. the levels of server memory and CPU utilization that are measured over a weekend are significantly lower than during weekdays). logging. Cisco Data Center Solutions 1-103 . applications. the VMware Capacity Planner will help the designer collect information about the servers and it can even suggest the type of servers that are appropriate for the new design (regarding processor power. but it is strongly advised in order to audit the existing environment. server. Inc. If the project involves a migration to a VMware vSphere environment from physical servers. memory size. administrative access. It is not necessary. and so on).certcollecion. and so on)  The limitations of the current infrastructure From the description of the audit. Other details can be gathered by inspecting the equipment configuration (for example. and so on). it is of the utmost importance to have the relevant information upon which the design is based:  Memory and CPU resources  Storage space  Inventory details  Historical growth report  Security policies that are in place  High-availability mechanisms  Dependencies between the data center elements (that is. © 2012 Cisco Systems. Simple Network Management Protocol (SNMP) management. storage. Information can be collected with the various reconnaissance and analysis tools that are available from different vendors. operating system.net The audit step of the assessment should typically be undertaken for migration projects.

The analysis is mandatory for creating a designed solution that will meet project goals and customer expectations. and configurations. software. 1-104 Designing Cisco Data Center Unified Fabric (DCUFD) v5. The designer must baseline and optimize the requirements. .certcollecion. which can then be directly translated into the proper equipment.0 © 2012 Cisco Systems.net The analysis is the last part of the assessment phase. Inc. The solution designer must review all the collected information and then select only the important details.

All rights reserved. world wide name (WWN). Cisco Data Center Solutions 1-105 . Inc. but also all the necessary patch cables. a new module. the design or plan phase can commence. This phase (like the assessment phase) contains several steps and substeps.0—1-12 Once the assessment phase is completed and the solution designer has the analysis results. — LAN and SAN equipment that is required for connecting the system has to be selected. This plan also details MAC. Nexus.certcollecion. the hardware and software that are used will be defined. and so on. the Bill of Materials (BOM). MDS. and universally unique identifier (UUID) addressing. and weight measurements for the Cisco UCS. © 2012 Cisco Systems. Deployment plan: This step can be divided into the following substeps: — The physical deployment plan details where and how the equipment will be placed into the racks for racking and stacking. and other devices.net  Solution sizing:  Size the solution  Select LAN and SAN equipment  Calculate environmental characteristics  Create BOM  Deployment plan:  Physical deployment  Server deployment  LAN and SAN integration  Administration and management  Migration plan:  Prerequisites  Migration and rollback procedures  Verification steps © 2012 Cisco and/or its affiliates. such as the LAN and SAN access layer configuration. VLANs and VSANs. Some steps are mandatory and some are optional. DCUFD v5. and high-availability settings. The BOM includes not only the Cisco UCS and Nexus products. The equipment can be small form-factor pluggable (SFP) modules. — Last but not least. There are three major steps:   Solution sizing: In this step. You will need to calculate the power. and port connectivity. — Once the equipment is selected. firmware versions. cooling. power inlets. which is a detailed list of the equipment parts. All Cisco UCS details are defined from a single management point in the Cisco UCS Manager. — The server deployment plan details the server infrastructure configuration. or even Cisco Nexus and Multilayer Director Switch (MDS) switches and licenses. needs to be created. the environmental requirements need to be determined by using the Cisco Power Calculator. and management access.

and so on). . A vital part of the migration plan is the series of verification steps that confirm or disprove the successfulness of migration.0 © 2012 Cisco Systems. Equally important (although hopefully not used) are the rollback procedures that should be used in case of failures or problems during migration. Migration plan: Applicable for migration projects. Note 1-106 Details about such designs are discussed later in the course. VLAN and VSAN configuration on the core side. Typical solutions to common requirements are described in the Cisco Validated Designs (for example. an Oracle database and the Cisco UCS. Different deployments have different requirements and thus different designs. and the high-availability settings). — The administration and management plan details how the new solution will be managed and how it integrates into the existing management infrastructure (when present). Designing Cisco Data Center Unified Fabric (DCUFD) v5. this plan details when. Inc.net  — The LAN and SAN integration plan details the physical connectivity and configuration of core data center devices (the Cisco Nexus and MDS switches. and with which technologies the migration from an existing solution to a new deployment will be performed. Citrix XenDesktop with VMware and Cisco UCS. how.certcollecion.

This approval is typically received by conducting a verification workshop with the customer personnel who are responsible for the project. The proof of concept is typically a smaller set of the proposed solution that encompasses all the vital and necessary components to confirm the proper operation. Cisco Data Center Solutions 1-107 . which is how the customer and designer can confirm that the proposed solution meets the expected goals. The customer also receives complete information about the designed solution.net Once the design phase is completed.certcollecion. the solution must be verified and approved by the customer. The solution designer must define the subset of the designed solution that needs to be tested and must conduct the necessary tests with expected results. The second step of the verification phase can be the proof of concept. © 2012 Cisco Systems. Inc.

that is. It is required to be completed upon the request of the department or project leader from the customer side.  Solution requirements: This section defines the following characteristics for the solution as a whole as well as for the individual parts: — 1-108 Requirements concerning system availability. security. which technologies should be covered. Thus the customer requirements document (CRD) should be used to detail the customer requirements for a project for which a solution will be proposed. and service restoration Designing Cisco Data Center Unified Fabric (DCUFD) v5. which data center components are involved. . the list includes details about the applications and services planned to be deployed. This section also defines the strategic impact of this project to the customer (for example.  Project scope: This section defines the scope of the project regarding the design (for example. and so on).net Design Deliverables This topic describes how to assess the deliverables of the Cisco Data Center solution. Inc.  Expected outcome: This section provides an overview of the intentions and future direction.0 © 2012 Cisco Systems. Apart from connectivity. and give the customer a competitive edge?).  List of services and applications with goals: This section provides a list of the objectives and requirements for this service. details about the type of services the customer plans to offer and introduce with the proposed solution.certcollecion. make the customer more profitable. Every project should start with a clear understanding of the requirements of the customer. and summarizes the services and applications that the customer intends to introduce. The following sections should be part of the CRD:  Existing environment: This section describes the current customer environment. is the solution required to solve an important issue. behavior under failure scenarios. and so on.

© 2012 Cisco Systems.certcollecion. Cisco Data Center Solutions 1-109 . The CRD thus clearly defines what the customer wants from the solution and is also the basis and input information for the assessment phase. service assurance (fault management and performance management). The assessment phase should finally result in the analysis document. which must include all the information that is gathered in the assessment phase. The supporting documentation for the assessment phase can include the following:  Questionnaire: This questionnaire can be distributed to the customer personnel in order to prepare for the design workshop or to provide relevant information in written form when verbal communication is not possible. including those that are not yet implemented — All aspects of solution management features like service fulfillment (service provisioning and configuration management). but also for presenting the results and progress to the customer.net — All performance requirements — All security requirements — All critical features required in order to provide this service or application.  Meeting minutes: This document contains the relevant information from the design workshop. Inc. as well as billing and accounting It is also advisable that the CRD include the high-level timelines of the project so that the solution designer can plan accordingly. Each phase of the design process should result in documents that are necessary not only for tracking the efforts of the design team.

such as the solution components. VLANs. Designing Cisco Data Center Unified Fabric (DCUFD) v5.  Site requirements specification: This document (or more than one document when the solution applies to more than one facility) will specify the equipment environmental characteristics. extra space on the storage.  Site survey form: This document (or more than one document when the solution applies to more than one facility) is used by the engineers or technicians to conduct the survey of a facility in order to determine the environmental specifications. such as the detailed list of equipment.  Migration plan: This document is necessary when the project is a migration and it must have at least the following sections: — Required resources: Specifies the resources that are necessary to conduct the migration. management IP addressing. addressing. It should result in the following documentation: 1-110  High-level design (HLD): This document describes the conceptual design of the solution. how the high-availability mechanisms work. and cabling. . and so on. These resources can include. as well as information about the relevant configurations. for example. and VSANs. weight. the equipment to be used (not detailed information).  Low-level design (LLD) (also known as a detailed design): This document describes the design in detail. resource pools. extra Ethernet ports to connect new equipment before the old equipment is decommissioned. — Migration procedures: Specifies the actions for conducting the migration (in the correct order) with verification tests and expected results. cooling capacity.certcollecion.0 © 2012 Cisco Systems. service profiles. the plan of how the devices will be deployed in the racks. such as power. the plan of how the devices will be connected physically. address pools and naming conventions.net The design phase is the most document-intensive phase. Inc. or extra staff or even external specialists. how the business continuance is achieved.

and it should list the tests and the expected results with which the solution is verified. the proof-of-concept document should be produced. Cisco Data Center Solutions 1-111 . Apart from that. If the customer confirms that the solution design is approved. Second. Inc. meeting minutes should be taken in order to track the workshop. the document must specify what resources are required to conduct the proof of concept (not only the equipment but also the environmental requirements). Because the first step of the verification phase is the verification workshop.certcollecion. © 2012 Cisco Systems. the customer must sign off on the solution. when a proof of concept is conducted. The document is a subset of the detailed design document and it is for the equipment that will be used in the proof of concept.net — Rollback procedures: Specifies the actions that are necessary to revert to a previous state if there are problems during the migration.

Inc.net Cisco Validated Designs This topic describes Cisco Validated Designs. and so on). . These designs incorporate a wide range of technologies and products into a portfolio of solutions that have been developed to address the business needs of our customers. Cisco Validated Designs are organized by solution areas. tested. NetApp.0 © 2012 Cisco Systems. Cisco Validated Designs consist of systems and solutions that are designed. The individual blueprint covers the following aspects: 1-112  Solution requirements from an application standpoint  Overall solution architecture with all the components that fit together  Required hardware and software BOM  Topology layout  Description of the components used and their functionalities Designing Cisco Data Center Unified Fabric (DCUFD) v5. VMware. and documented to facilitate and improve customer deployments. EMC.certcollecion. Cisco UCS-based validated designs are blueprints that incorporate not only Cisco UCS but also other Cisco Data Center products and technologies along with applications of various ecopartners (Microsoft.

network. © 2012 Cisco and/or its affiliates. and deployment design are necessary. DCUFD v5.net Summary This topic summarizes the primary points that were discussed in this lesson. analysis. Inc.0—1-21 Cisco Data Center Solutions 1-113 . All rights reserved. • The analysis phase should include key IT personnel: server. sizing. Of these phases. • The Cisco Validated Designs program offers a collection of validated designs for various solutions.certcollecion. and security professionals. Design deliverables are used to document each of the solution phases. storage. © 2012 Cisco Systems. • The design of a Cisco Data Center solution comprises several phases. application.

certcollecion. Inc. .net 1-114 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems.

and so on.net Module Summary This topic summarizes the primary points that were discussed in this module. In this module. Cisco Data Center Solutions 1-115 . and which aspects are covered by Cisco solutions.certcollecion. Inc. accounting. surveillance. construction. physical access security. server and networking hardware. including power delivery. and storage). © 2012 Cisco Systems. cooling. regulatory compliance. you learned which technologies the data center encompasses. Data centers are very complex environments that require collaboration of experts of various technology areas (applications. Interdisciplinary knowledge needs to be gathered.

0 © 2012 Cisco Systems.net 1-116 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc. .certcollecion.

net Module Self-Check Use the questions here to review what you learned in this module. cold aisle hotspot British thermal Unit virtual cloud private cloud open cloud public cloud cloud as a service Which two mechanisms allow virtualization of the network and IP services? (Choose two.) (Source: Defining the Data Center) A) B) C) D) © 2012 Cisco Systems.) (Source: Defining the Data Center) A) B) C) D) E) Q7) thermomix hot aisle. VRF security context service profile hypervisor Cisco Data Center Solutions 1-117 .certcollecion.) (Source: Designing the Cisco Data Center Solution) A) B) C) D) Q6) assessment phase spin-out phase plan phase verification phase Which three cloud deployments are based on the NIST classification? (Choose three. Q1) Which two options are important business drivers for data centers? (Choose two. Inc.) (Source: Defining the Data Center) A) B) C) D) server consolidation power and cooling rack space rack weight Q3) Where do you install cables in data center rooms? (Source: Identifying the Cisco Data Center Solution) Q4) What is the thermal control model called? (Source: Identifying the Cisco Data Center Solution) A) B) C) D) Q5) Which three options are phases of the data center design process? (Choose three.) (Source: Defining the Data Center) A) B) C) D) Q2) global availability global warming reduced communication latency fast application deployment Which two options are the main operational limitations of data centers? (Choose two. The correct answers and solutions are found in the Module Self-Check Answer Key.

) (Source: Identifying the Cisco Data Center Solution) A) B) C) D) E) Q9) What are the three capabilities of the Cisco MDS 9500 platform? (Choose three. .com Designing Cisco Data Center Unified Fabric (DCUFD) v5.certcollecion.net Q8) Identify the three important Cisco technologies in the data center.) (Source: Identifying the Cisco Data Center Solution) A) B) C) D) E) Q10) virtualized NAS FCoE Fibre Channel FCIP serial attached SCSI Where can you find design best practices and design recommendations when designing data center networks? (Source: Designing the Cisco Data Center Solution) A) B) C) D) 1-118 FEX-Link OTV VLAN VMotion VDC Cisco Best Practices Program Cisco Validated Design Program Cisco Advanced Services Cisco. (Choose three. Inc.0 © 2012 Cisco Systems.

D Q2) B. Cisco Data Center Solutions 1-119 . B. Q4) B Q5) A. D Q10) B © 2012 Cisco Systems.net Module Self-Check Answer Key Q1) A.certcollecion. C. B Q8) A. depending of the design of the room. C Q3) Cables are installed under the floor or under the ceiling. C. E Q9) B. Inc. D Q6) B. C. D Q7) A.

Inc. .net 1-120 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems.certcollecion.

MEC. and Cisco FabricPath. as well as provide for high availability and customer separation  Identify and design data center component virtualization technologies. such as vPC. Module Objectives Upon completing this module. present the limitations. and virtualization. and describe scalability implications and their possible use in cloud environments. The technologies include Layer 2 multipathing. This ability includes being able to meet these objectives:  Describe and design Layer 2 and Layer 3 switched networks. multilink aggregation. These technologies allow for optimum use of data center resources—links that are all utilized and devices that are virtualized with increased utilization efficiency. you will be able to provide a comprehensive and detailed overview of technologies that are used in data centers. all without using STP . you will learn about modern data center technologies.certcollecion.net Module 2 Data Center Technologies Overview In this module. and outline best practices and validated designs  Design data centers using multipathing technologies.

.0 © 2012 Cisco Systems.certcollecion. Inc.net 2-2 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

net Lesson 1 Designing Layer 2 and Layer 3 Switching Overview This lesson presents various technologies that are essential for data centers. and scalable routing protocols that are used in data centers.certcollecion. you will be able to describe and design Layer 2 and Layer 3 switched networks. as well as provide for high availability and customer separation. This ability includes being able to meet these objectives:  Understand and explain hardware-forwarding architectures  Describe IP addressing considerations and IP routing technologies . hardware-based switching. These technologies include packet switching and routing. Objectives Upon completing this lesson.

2-4 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Historically. Layer 2 Protocols Protocols that are used in Layer 2 networks include variants of STP. It creates a loop-free topology to ensure that broadcast traffic is not replicated all over the network repeatedly. Now. The control plane operation consists of MAC address learning and aging. most processes are hardware-assisted and there is no performance difference when forwarding packets on Layer 2 or Layer 3.net Forwarding Architectures This topic describes hardware-forwarding architectures. This causes communication overhead and can be a burden in large networks. If a MAC address is not known at the moment (that is. Layer 2 Packet Forwarding Forwarding of packets on Layer 2 is referred to as switching. .0 © 2012 Cisco Systems. the switch “floods” the frame by sending it out of all ports. The most common ones are Rapid Spanning Tree Protocol (RSTP) and Multiple Spanning Tree Protocol (MSTP). Automatic VLAN distribution can be achieved by using VLAN Trunking Protocol (VTP).certcollecion. Spanning Tree Protocol (STP) is also part of the control plane and ensures proper operation of the network. The switching decision is made based on the destination MAC address of the frame. MAC addresses are learned as the “conversation” occurs between two hosts. Packets are forwarded exclusively on the information that is present in the Layer 2 packet header. it has aged out). Layer 2 and Layer 3 devices had very different ways of forwarding packets. Inc. and age out after inactivity.

the same hardware performs packet forwarding for both Layer 2 and Layer 3. depending on the goals and equipment. © 2012 Cisco Systems. The router then examines the routing table and. Typically. Cisco FabricPath allows for even better scalability and load distribution by introducing a new Layer 2 fabric. and packets are forwarded at the same rate. Q-in-Q makes it possible to transport more VLANs by adding another VLAN tag. Data Center Technologies 2-5 . performs recursive lookups to determine the outgoing interface for the next-hop IP address. or as pairs of destination networks and next-hop IP addresses. Now. The primary Layer 3 protocol is IP.net Strategies for Layer 2 Fabrics Several strategies exist to implement Layer 2 fabrics. These best paths are installed in the global (or virtual routing and forwarding [VRF]) routing table. With virtual port channels (vPCs). Inc. Historically. Layer 3 Forwarding Forwarding of packets on Layer 3 is called IP routing. The control plane operation is much more complex than in pure Layer 2 frame forwarding. which can be used to identify the customer to which the VLANs belong. When you are close to the threshold of 4096 VLANs. There are several stages that build routing information. the routes are installed as pairs of destination networks and outgoing interfaces. you can use links between switches in a more optimal way and reduce oversubscription by eliminating spanning tree blocked ports. The router makes a decision based on the IP addresses in the packet header.certcollecion. The preferred routing protocol builds its internal topology table and selects the best paths to all advertised destinations. routers had been slower by orders of magnitude compared to switches. for networks that are installed as pairs of destination networks and IP addresses. This is true for most data center switches.

and Equal-Cost Multipath (ECMP) as a load-balancing technology that allows for simultaneous use of multiple links. Routing protocols provide exchange of connectivity information. In these cases. FHRP provides for default gateway redundancy. where distributed switching is performed in the data plane.certcollecion. The load-balancing method depends on what the equipment supports.net When the process is complete and the adjacency table is generated. all information is encoded into hardware. 2-6 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Layer 3 Protocols Layer 3 protocols that are used in data center networks include routing protocols and First Hop Redundancy Protocol (FHRP). enhancing the performance of the entire platform. the data plane information is encoded in every module. Strategies for Layer 3 Fabrics Large Layer 3 fabrics can be achieved by using multiple links between aggregation and core switches. a routing protocol. Note Many routing and switching platforms offer distributed forwarding because of their modular architecture. Inc. .0 © 2012 Cisco Systems.

and its destination is not in the FIB. which is the routing information base (RIB). Data Center Technologies 2-7 . These routes are entered into the IP routing table. On distributed platforms. the packet is dropped. the router can make definitive decisions based on the information in the FIB. The forwarding information base (FIB) is generated from the RIB and additional Layer 2 adjacency information. The generation of entries in the FIB table is not packet-triggered. Because the FIB table contains the complete IP switching table. Whenever a router receives a packet. the change is also reflected in the FIB. When something changes in the RIB. The routing protocols select best routes based on their internal tables. the FIB is copied to all forwarding engines on the line cards and is known as the distributed FIB (dFIB) table. it is change-triggered. Note © 2012 Cisco Systems.net Packet forwarding is performed based on the forwarding tables that are downloaded into the hardware. but instead of holding only the destination MAC address. The adjacency table is derived from the Address Resolution Protocol (ARP) cache.certcollecion. it holds the whole Layer 2 header. Inc. Examples of distributed platforms are the Cisco Nexus 7000 Switch and the Cisco Catalyst 6500 Series with line cards that feature the Distributed Forwarding Card daughterboards.

forwarding does not need to be interrupted when the control is changed from one supervisor to the other if they are in the same chassis.0 © 2012 Cisco Systems. These forwarding tables are then downloaded to each forwarding engine on the line cards in the system.certcollecion. Inc. The forwarding engine includes all required logic to process access control lists (ACLs). and no distributed forwarding cards [DFCs]). and so on. The memory that is used for these processes is called ternary content addressable memory (TCAM). . quality of service (QoS). packet rewriting. Each line card is fully autonomous in its switching decisions. it decides which outgoing interface will be used to forward the packet. On distributed platforms. Forwarding information on line cards is synchronized whenever there is a change in the network topology.net There are two types of forwarding that occur on network devices: centralized and distributed. based on the data that is computed by the control plane. Packet forwarding is a function of the data plane. A centralized forwarding engine usually resides on the supervisor module or on the main board of the switch. Control plane information is precomputed on the supervisor engine. Distributed Forwarding Distributed forwarding is performed on modular devices. Centralized Forwarding Centralized forwarding takes place on nonmodular devices or on devices that are modular but have only one (central) forwarding engine. Examples of devices that use centralized forwarding include Cisco Catalyst 4500 Series Switches and Cisco Catalyst 6500 Series Switches with forwarding on the supervisor engine (Policy Feature Cards [PFCs] only. The forwarding engine processes the packets and. 2-8 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Examples of devices that use distributed forwarding include Cisco Nexus 7000 Series Switches and Cisco Catalyst 6500 Series Switches with DFCs installed on line cards.

net Note © 2012 Cisco Systems. while the Cisco Catalyst 6500 Series Switch performs centralized switching for line cards that do not support distributed forwarding.certcollecion. Distributed forwarding on the Cisco Nexus 7000 Series Switches and on the Cisco Catalyst 6500 Series Switches is similar but not identical. Data Center Technologies 2-9 . The Cisco Nexus 7000 Series Switch does not perform any hardware-assisted switching on the supervisor engine. Inc.

Typically. You also need a management network with a provisioned IP subnet. Alternatively. When designing a data center network.certcollecion. you can segment the management subnet into multiple subnets: one for network infrastructure. If your data center has more devices. another one for server virtualization infrastructure.net IP Addressing and Routing This topic describes IP addressing considerations and IP routing technologies. Inc. and clear ranges for link subnets or “link VLANs.  Incorporate public IP addresses if you are using them to offer services to the Internet. a /24 should be sufficient. span multiple sites. The IP addressing plan must be carefully designed with summarization in mind. you may need a /23. Note 2-10 If your data center is very large and you need very large server subnets to accommodate moveable virtual machines.  The IPv6 addressing and subnetting logic should be related enough to IPv4 subnet numbers to simplify troubleshooting. IP addressing considerations are similar for both IP version 4 (IPv4) and IP version 6 (IPv6).0 © 2012 Cisco Systems. The IP addressing plan must be easy to use and easy to summarize:  Define clear ranges for server subnets. and so on. you must design the IP addressing plan as well. you may need to provision a larger subnet than /24 for those servers. and so on.”  Create an addressing plan that allows you to follow the network topology and to accommodate routing between different servers. . Designing Cisco Data Center Unified Fabric (DCUFD) v5. use summarization to simplify routing decisions and troubleshooting. If it is a large data center.

but also involving configuration work on all possible servers that are running various operating systems. not only in terms of planning. Renumbering does not enhance the service in any way.132. and 10.certcollecion. Inc.0. a medium. Data Center Technologies 2-11 .net There are some additional considerations for IP addressing:  Are you designing an addressing scheme for a new deployment or for an existing deployment? New scenarios generally allow you to assign IP addressing that can be summarized between contiguous subnets and so on.  How did the IP addressing change in the past? Was the data center active before the companies or data centers merged or were acquired? Typically. Example: 10. the IP addressing remains until the existing services are in use. while adding a subnet to an existing network will likely not allow you to perform summarization. on appliances. These two can be summarized as well. and so on.0.  Do you need to work with public IP addresses? You likely hold several discontiguous blocks of IP addresses of different sizes.  Do you need to design an IP addressing scheme for a small. © 2012 Cisco Systems. or a large data center? Do you have enough IP address space in the address plan that includes the campus network?  Do you need to provision IP address space for an additional (secondary) data center? A good practice is to keep IP addressing similar.0/16 for the secondary data center.  Renumbering of IP addresses is costly and requires a lot of work.0/16 for the primary data center subnets. with the second octet defining to which data center the subnet address belongs. These usually cannot be summarized.133.

but they are sometimes used for a specific purpose. The functionality is similar to OSPF in that they are both link-state routing protocols. the routing protocol is an interior gateway protocol (IGP). such as Open Shortest Path First (OSPF). which are called OSPF areas. Enterprise IS-IS deployments are not very common. . RIP is very lightweight on processor resources. OSPF is a multiarea routing protocol that allows you to segment the enterprise network into multiple parts. It has slightly better scalability and convergence speed.certcollecion. By using the same routing protocol. Examples of such equipment are firewalls and application and server load balancers. the routing domain is kept homogenous and you can take advantage of routing protocol features. dynamic routing is retained. Examples of such equipment are firewalls.net The routing protocols that are used in data center networks usually match the ones that are used in campus networks. and built-in default route origination. By using BGP. Inc.0 © 2012 Cisco Systems. The EIGRP routing protocol may be used in data center networks as well. The data center network typically uses one area. Protocols such as Routing Information Protocol (RIP) and Border Gateway Protocol (BGP) are less common in data centers. while BGP uses TCP as transport and can be used over service modules and appliances in routed mode. Intermediate System-to-Intermediate System (IS-IS). Static routing is typically used for administratively defining a traffic path or for when equipment does not support dynamic routing. but the protocol design is different from OSPF or IS-IS. 2-12 Designing Cisco Data Center Unified Fabric (DCUFD) v5. such as summarization. The OSPF routing protocol is the most common choice. EIGRP is a hybrid routing protocol and does not feature automatic summarization on area borders. The IS-IS protocol is used most often in service provider data centers when service providers are using IS-IS as their backbone IGP. or Enhanced Interior Gateway Routing Protocol (EIGRP). Most commonly. and application and server load balancers. and routing updates are transported by TCP sessions Locator/Identity Separation Protocol (LISP) is used to optimally transport traffic when the location of the destination address can change. BGP can be used when multicast traffic cannot be transported over devices.

IPv6 support is available for many routing protocols. Detection of a neighbor loss that is not directly connected (that is. IPv6 readiness helps with migrating the applications to IPv6. linkdown events immediately trigger new path recalculations. a router over a Layer 2 switch) can take more time because routing protocols need the dead timer to expire. you can tune the hello and dead time intervals. The primary rule is to use a routing protocol that can provide fast convergence. Typically. Data Center Technologies 2-13 . To speed up the convergence. Router authentication should also be used because it prevents injection of rogue routes by nonauthorized routers. © 2012 Cisco Systems.net There are guidelines regarding how to design the routing protocol configuration for data center routing.certcollecion. Inc.

IPv6 is well-supported on most networking equipment and appliances.certcollecion. Inc. and to many IPv6 clients you must serve IPv6 content. compatibility testing should be performed if your data center has equipment that is not from Cisco.0—2-13 The IPv6 protocol offers services to IPv6 endpoints. 2-14 Designing Cisco Data Center Unified Fabric (DCUFD) v5. DCUFD v5. You segment the network in the same way and you apply the same logic when designing routing protocol deployment. Mobile service providers are taking advantage of IPv6. Before deploying an IPv6 service. .0 © 2012 Cisco Systems. if they are truly protocol-agnostic • Thorough checking for compatibility with service modules and appliances • Alternative: Involve a proxy or translator from IPv6 to IPv4 on data center edge as an interim solution if applications or equipment do not fully support IPv6 IPv6 Application Client Data Center Backbone— Campus Edge IPv6 Internet Proxy IPv4 IPv6 © 2012 Cisco and/or its affiliates. IPv6 is used in networks where IPv4 addressing is either unobtainable or impractical because of its scalability limitations.net • End-to-end IPv6 preferably used when serving content to IPv6-only endpoints • Need to update and test the applications. There are no major differences in network and routing protocol design between IPv6 and IPv4. IPv6 offers a larger address space for a large deployment of client devices. All rights reserved.

Data Center Technologies 2-15 . Inc.certcollecion.net Summary This topic summarizes the primary points that were discussed in this lesson. © 2012 Cisco Systems.

0 © 2012 Cisco Systems. .net 2-16 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc.certcollecion.

certcollecion.net
Lesson 2

Virtualizing Data Center
Components
Overview
This lesson describes the virtualization technologies and concepts that are used on various
equipment in Cisco Data Center networks.

Objectives
Upon completing this lesson, you will be able to identify and design data center component
virtualization technologies, present the limitations, and outline best practices and validated
designs. This ability includes being able to meet these objectives:

Identify device virtualization mechanisms

Design virtualized solutions using VDCs

Design virtualized services using contexts on firewalling and load-balancing devices

Design virtualized services using virtual appliances

certcollecion.net
Device Virtualization Mechanisms
This topic describes device virtualization mechanisms.

• Network virtualization
- VLANs, VSANs, VRFs
VLAN

• Server virtualization

VRF

- VM, host adapter virtualization,
processor virtualization

• Storage virtualization
- Virtualized storage pools,
- tape virtualization

• Application virtualization

Hypervisor

Security
Contexts

Virtual
Machines

- Application clusters

• Network services virtualization

Virtual
Switching
System
(VSS)

- Virtualized appliances

• Compute virtualization
- Service profiles

• Not all virtualization mechanisms
need to be used at the
same time.

VDC
Extranet

VDC DMZ

VDC Prod

Application
Clusters
VDCs

© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—2-4

Virtualization delivers tremendous flexibility to build and design data center solutions. Diverse
networking needs of different enterprises might require separation of a single user group or a
separation of data center resources from the rest of the network. Separation tasks become
complex when it is not possible to confine specific users or resources to specific areas in the
network. When separation occurs, the physical positioning no longer addresses the problem.
Note

Not all virtualization mechanisms are used at the same time and not all virtualizations are
needed for a specific data center or business case. It is up to the customer to choose what
virtualization to implement and possibly migrate in stages.

Network Virtualization
Network virtualization can address the problem of separation. Network virtualization also
provides other types of benefits such as increasing network availability, better security,
consolidation of multiple networks, segmentation of networks, and increased network
availability. Examples of network virtualization are VLANs and VSANs in Fibre Channel
SANs. VLAN virtualizes Layer 2 segments, making them independent of the physical
topology. This virtualization allows you to connect two servers to the same physical switch,
even though they participate in different logical broadcast domains (VLANs). A similar
concept represents VSAN in Fibre Channel SANs.

2-18

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
Server Virtualization
Server virtualization enables physical consolidation of servers on the common physical
infrastructure. Deployment of another virtual server is easy because there is no need to buy a
new adapter and no need to buy a new server. For a virtual server to be enabled, you only need
to activate and properly configure software. Server virtualization, therefore, simplifies server
deployment, reduces the cost of management, and increases server utilization. VMware and
Microsoft are examples of companies that support server virtualization technologies.

Device Virtualization
The Cisco Nexus 7000 supports device virtualization or Cisco Nexus Operating System (NXOS) virtualization. The virtual device context (VDC) represents the ability of the switch to
enable multiple virtual and independent switches on the common physical switch to participate
in data center networks. This feature provides various benefits to the application services, such
as higher service availability, fault isolation, separation of logical networking infrastructure
based on traffic service types, and flexible and scalable data center design.

Storage Virtualization
Storage virtualization is the ability to pool storage on diverse and independent devices into a
single view. Features such as copy services, data migration, and multiprotocol and multivendor
integration can benefit from storage virtualization.

Application Virtualization
The web-based application must be available anywhere and at any time, and it should be able to
utilize unused remote server CPU resources, which implies an extended Layer 2 domain.
Application virtualization enables VMware VMotion and efficient resource utilization.

Network Services Virtualization
Network services are no longer available only as standalone physical devices, but are
increasingly available as virtual appliances. You can easily deploy a virtual appliance to
facilitate deployment of a new application or a new customer.

Compute Virtualization
Cisco Unified Computing System (UCS) uses “service profiles” that are used as a computing
virtualization mechanism. The service profiles define the personality of the server and can be
applied on any hardware component that supports the abstracted hardware that is configured in
the service profile. For example, if you configure a service profile with two network interface
cards (NICs), the service profile can be applied to any physical server with two NICs or more.

© 2012 Cisco Systems, Inc.

Data Center Technologies

2-19

certcollecion.net
• Layer 2 services virtualization:
-

• Layer 3 services virtualization:

VLANs
VSANs
vPC
Cisco FabricPath
Cisco OTV

-

VRF

Layer 3 Services
Routing Protocols
VRF1

VRF2

. . .

RIB

VRFn

FIB

Layer 2 Services
VLAN1

STP

PVLAN

© 2012 Cisco and/or its affiliates. All rights reserved.

VLAN2

SVI

...

VLANn

OTV
DCUFD v5.0—2-5

Examples of network virtualization are VLANs, which virtualize Layer 2 segments, making
them independent of the physical topology.
When using unified fabric, VSANs provide a similar degree of virtualization on the SAN level.
Additionally, all fabric services are started for a created VSAN.
The virtual port channel (vPC) and Cisco FabricPath are examples of fabric virtualization. The
vPC virtualizes the control plane in such a way that Spanning Tree Protocol (STP) on the
neighbor switch is not aware that it is connected to two different switches. It receives a uniform
bridge protocol data unit (BPDU).
VRF is an example of virtualization on Layer 3 that allows multiple instances of the routing
table to co-exist within the same router at the same time.
Cisco Overlay Transport Virtualization (OTV) is an example of a Layer 2 extension technology
that extends the same VLAN across any IP-based connectivity to multiple sites.

2-20

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net

A device can be virtualized in various ways. Each way is defined by the level of fault
containment and management separation that is provided.
Several virtualization mechanisms provide separation between data, control, and management
planes.
These are the main elements that are associated with virtualization:

Control plane: The ability to create multiple independent instances of the control plane
elements, enabling the creation of multiple logical topologies and fault domains.

Data (or forwarding) plane: Forwarding tables and other databases that can be partitioned
to provide data segregation.

Management plane: Well-delineated management environments that can be provided
independently for each virtual device.

Software partitioning: Modular software processes grouped into partitions and dedicated
to specific virtual devices, therefore creating well-defined fault domains.

Hardware components: Hardware components partitioned and dedicated to specific
virtual devices, allowing predictable allocation of hardware resources.

Switches currently provide a limited level of virtualization that uses virtual routing and
forwarding (VRF) and VLANs. This level of virtualization does not partition all the various
components and elements of a switch, meaning that one infrastructure can ultimately affect all
VRFs and VLANs that are used.

VDC Shared and Dedicated Resources
The Cisco Nexus 7000 VDCs use shared and dedicated resources.

Dedicated resources are the resources that are used exclusively in one VDC, such as
physical interfaces, ternary content addressable memory (TCAM) table space, and so on.

Shared resources are the supervisor engine, the management interface, fabric modules, and
other common hardware.

© 2012 Cisco Systems, Inc.

Data Center Technologies

2-21

certcollecion.net
There is one exception: An interface can be shared between VDCs if the port is running unified
fabric. In this mode, data traffic is managed by the data VDC, where Fibre Channel over
Ethernet (FCoE) traffic is managed by the storage VDC.

Physical Switch

Context
Application 1

Context
Application 2

Context
Application 3

Firewall

Firewall

Firewall

Physical Firewall
Physical SLB
Device

SLB

SLB

SSL

SLB = server load balancing
SSL = Secure Sockets Layer
© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—2-7

This figure depicts one physical service module that has been logically partitioned into several
virtual service modules, and a physical switch that has been logically partitioned into several
VDCs.
This partitioning reduces the number of physical devices that must be deployed and managed,
but still provides the same functionality that each device could provide.
The figure shows how the physical devices (horizontal) are divided into multiple contexts,
serving various applications (vertical) by using contexts, VLANs, and VRFs as virtualization
means.

2-22

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
Virtual Device Contexts
This topic describes how to design virtualized solutions using VDCs.

Layer 2
Protocols

Layer 3
Protocols

Layer 2
Protocols

Layer 3
Protocols

VLAN Mgr.

UDLD

OSPF

GLBP

VLAN Mgr.

UDLD

OSPF

GLBP

VLAN Mgr.

UDLD

BGP

HSRP

VLAN Mgr.

UDLD

BGP

HSRP

LACP

CTS

EIGRP

VRRP

LACP

CTS

EIGRP

VRRP

IGMP

802.1x

PIM

SNMP

IGMP

802.1x

PIM

SNMP

MAC Table

...

RIB

MAC Table

Protocol Stack (IPv4, IPv6, Layer 2)

RIB

Protocol Stack (IPv4, IPv6, Layer 2)

VDC1

VDCn

Infrastructure
Linux Kernel
Physical Switch
© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—2-9

Cisco Nexus 7000 Series Switches use a number of virtualization technologies that are already
present in Cisco IOS Software. At Layer 2, you have VLANs. At Layer 3, you have VRFs.
These two features are used to virtualize the Layer 3 forwarding and routing tables. The Cisco
Nexus 7000 Switch then extends this virtualization concept to VDCs that virtualize the device
itself by presenting the physical switch as multiple logical devices, each independent of each
other.
Within each VDC there is a set of unique and independent VLANs and VRFs, with physical
ports being assigned to each VDC. This arrangement also allows the hardware data plane to be
virtualized, along with a separate management domain that can manage the VDC, therefore
allowing the management plane to be virtualized as well.
In its default state, the switch control plane runs as a single device context called “VDC 1” that
will run approximately 80 processes. Some of these processes will have other threads spawned,
resulting in as many as 250 processes actively running on the system at any given time. This
collection of processes constitutes what is seen as the control plane for a single physical device
(that is, one with no other VDC that is enabled). VDC 1 is always active, always enabled, and
can never be deleted. Even if no other VDC is created, support for virtualization through VRFs
and VLANs is still available.
The Cisco Nexus 7000 supports multiple VDCs. The creation of additional VDCs takes these
processes and replicates them for each device context that is created. When this occurs, the
duplication of VRF names and VLAN IDs is possible, because each VDC represents its own
logical or virtual switch context, with its own set of processes.
Note

© 2012 Cisco Systems, Inc.

Storage (FCoE) connectivity requires deployment in its own VDC. In that VDC, the Cisco
Nexus 7000 Switch is a full Fibre Channel Forwarder (FCF) switch.

Data Center Technologies

2-23

Neither of these can be the default VDC. 2-24 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Unified fabric interfaces are then shared between the storage VDC and the “server access” VDC. Note Currently. You need to isolate the Cisco FabricPath cloud into one VDC. administration.0 © 2012 Cisco Systems.0—2-10 The use of VDCs provides numerous benefits:  Offering a secure network partition between different user department traffic  Providing departments with the ability to administer and maintain their own configuration  Providing a device context for testing new configurations or connectivity options without affecting production systems  Consolidating multiple department switch platforms to a single physical platform while still maintaining independence from the operating system. Cisco FabricPath and fabric extenders (FEXs) cannot be used within the same VDC. or governmental regulations. you need physical wire connectivity between them. This makes the VDC technology acceptable for various environments with compliance requirements like the Payment Card Industry (PCI). and link them using a cable.certcollecion. VDC Extranet VDC DMZ © 2012 Cisco and/or its affiliates.net Note You need to connect servers using unified fabric (FCoE) in a dedicated VDC. Health Insurance Portability and Accountability Act (HIPAA). . All rights reserved. VDC Production System DCUFD v5. which is not the storage VDC. isolate the FEXs into another VDC. Inc. • Physical network islands are virtualized onto a common data center networking infrastructure. and traffic perspective  Using a device context for network administrator and operator training purposes If you want to send traffic from one VDC to another on the same switch.

The Federal Information Processing Standards (FIP-140-2) certification was completed in 2011. • Each VDC is a separate fault domain. For example. such as the Open Shortest Path First (OSPF) process: if it crashes in one VDC. Data Center Technologies 2-25 . Routing Protocols VRF-n VRF1 HSRP CTS RIB EthPM VMM GLBP STB VDC2 VRF-n HSRP CTS RIB EthPM VMM GLBP STB CTS RIB VDCn DCUFD v5. processes in the other VDCs are not affected and continue to run unimpeded. In addition. The same isolation occurs for other processes. fault isolation is enhanced with the ability to provide per-VDC debug commands and per-VDC logging of messages from syslog. These features provide administrators with the ability to locate problems within their own VDC.net Note The Cisco Nexus 7000 VDC feature has been certified by various authorities as offering sufficient security for most demanding usages. if a spanning tree recalculation is started in one VDC. • If a process crashes in any VDC. Routing Protocols VRF1 Routing Protocols VRF-n VRF1 HSRP EthPM VMM GLBP STB VDC1 © 2012 Cisco and/or its affiliates. that crash is isolated from other VDCs. it does not affect the spanning tree domains of other VDCs in the same physical chassis. the Cisco Nexus 7000 was also awarded Common Criteria Evaluation and Validation Scheme certification #10349 with EAL4 conformance. In the same year.certcollecion. the architecture of the VDC provides a means to prevent failures within any VDC from affecting another.0—2-11 When multiple VDCs are created in a physical switch. Inc. © 2012 Cisco Systems. All rights reserved. NSS Labs certified the use of the Cisco Nexus 7000 VDC feature for PCI-compliant environments. Process isolation within a VDC is important for fault isolation and serves as a major benefit for organizations that implement the VDC concept.

. the switch name. such as the out-of-band (OOB) Ethernet management port.  Shared resources: Resources that are shared between VDCs.net There are three types of VDC resources:  Global resources: Resources that can only be allocated. or configured globally for all VDCs. such as Layer 2 and Layer 3 ports. set. The management interface does not support IEEE 802. An example of a global resource is the boot string that specifies the version of software that should be used on booting up the device. Designing Cisco Data Center Unified Fabric (DCUFD) v5. An example of a shared resource on the switch is that there is only one OOB Ethernet management port.certcollecion. they must share it. such as boot image configuration. Note 2-26 If multiple VDCs are configured and accessible from the management port. and in-band span session. and they cannot be allocated to a VDC like other regular ports. and the management interfaces of the VDCs should be configured for the management VRF and be on the same IP subnet.  Dedicated resources: Resources that are allocated to a particular VDC. Inc.0 © 2012 Cisco Systems.1Q.

Every I/O module has all the forwarding information bases (FIBs) for every VDC that has ports on that I/O module. Layer 2 learning is a VDC local process and has a direct effect on the addresses that are placed on a line card. Note © 2012 Cisco Systems.net • Interfaces on I/O modules are allocated to VDCs. and when a new MAC address is learned by a line card. Inc. 4. DCUFD v5.certcollecion. Line card 2 does have a local port in VDC 10. Switch Fabric 1/1 Line Card 1 Line Card 2 Line Card 3 MAC Table MAC “A” MAC Table MAC “A” MAC Table 1/2 1/3 1/4 VDC 20 VDC 10 2/1 2/2 VDC 20 2/3 2/4 VDC 10 3/1 3/2 VDC 20 3/3 3/4 VDC 30 MAC Address A © 2012 Cisco and/or its affiliates. 3. The address is installed in the local Layer 2 forwarding table of line card 1. so it does not install any MAC addresses that are learned from that VDC. so it installs the MAC address A into its local forwarding tables. 2.000 MAC addresses. enabling the Layer 2 address learning process to be synchronized across all line cards. a copy is forwarded to other line cards.0—2-13 When using VDCs. Here is an example: 1. 5. Line card 3 has no ports that belong to VDC 10. Note The I/O modules have interfaces arranged into groups of ports that share a common ASIC. Data Center Technologies 2-27 . The interface ceases to belong to the default VDC. and now belongs to the assigned VDC. The MAC address table on each line supports 128. Refer to the documentation regarding I/O modules for distribution of ports in port groups. MAC address A is learned from port 1/2. The forwarding engine on each line card is responsible for Layer 2 address learning and maintains a local copy of the Layer 2 forwarding table. you must manually allocate interfaces to the VDC. On line card 1. • I/O modules have a copy of the FIB table only for the VDCs that use ports on that I/O module. The ports in the same port group need to be assigned to the same VDC or are added automatically. All rights reserved. The MAC address is then forwarded to line cards 2 and 3.

This can be seen in the figure.000 entries in the FIB (to store forwarding prefixes). .0—2-14 The forwarding engine on each line card supports 128. where the routes for the default VDC are present in the FIB and ACL TCAMs.0 © 2012 Cisco Systems.certcollecion. Inc. learned routes and ACLs are loaded onto each line card TCAM table so that the line card has the necessary local information to make an informed forwarding decision.000 egress NetFlow entries. When the default VDC is the only active VDC.000 ingress and 512. DCUFD v5.net Line Card 1 Line Card 2 Line Card 3 Line Card 4 Line Card 7 Line Card 8 Line Card 9 Line Card 10 FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM 64K 64K 64K 64K 64K 64K 64K 64K © 2012 Cisco and/or its affiliates. 2-28 Designing Cisco Data Center Unified Fabric (DCUFD) v5. and 512. All rights reserved.000 access control lists (ACLs). 64.

After the flow is created in a NetFlow TCAM on line card 2. All rights reserved. Data Center Technologies 2-29 .000 Cisco Application Control Engine (ACE) Module devices have been installed where a single VDC would only allow 64.net Line Card 1 Line Card 2 Line Card 3 Line Card 4 Line Card 7 Line Card 8 Line Card 9 Line Card 10 FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM FIB TCAM 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) 128K 1M (XL) ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM ACL TCAM 64K 64K 64K 64K 64K 64K 64K 64K VDC 10 © 2012 Cisco and/or its affiliates. therefore. a flow record is created on the local NetFlow TCAM that is resident on that line card.000 forwarding entries are installed in a switch that. More importantly.000 forwarding entries.certcollecion. a total of 100. 8. Inc. No flow in VDC 10 is exported to a collector that is part of VDC 20. the use of the TCAM is optimized. © 2012 Cisco Systems. As with the TCAMs for FIB and ACLs. allowing resources to be extended beyond the system limits. Using the previous example. a total of 180. 9.0—2-15 The effect of allocating a subset of ports to a given VDC results in the FIB and ACL TCAMs for the respective line cards being primed with the forwarding information and ACLs for that VDC. would have a system limit of 128. Both ingress and egress NetFlow are performed on the ingress line card. the use of the NetFlow TCAM is also more granular when multiple VDCs are active. so it is the NetFlow TCAM of the ingress line card where the flow is stored. the FIB and ACL TCAM space on line cards 4.000 access control entries. Likewise. The collection and export of flows is always performed on a per-VDC basis. without VDCs. it is not to be replicated to NetFlow TCAMs on other line cards that are part of the same VDC. VDC 20 VDC 30 DCUFD v5. When a flow is identified. and 10 are free for use by additional VDCs that might be created.

Designing Cisco Data Center Unified Fabric (DCUFD) v5. 2-30  32-port 10-Gb M1 and M1-XL I/O module: The ports can be allocated to a VDC in port groups of four ports.0 © 2012 Cisco Systems.net When designing the interconnections between the switches and the VDCs. The port group layout depends on the type of the I/O module that is used. .certcollecion.  48-port 10/100/1000 M1 and M1-XL I/O module: The ports can be allocated to a VDC on a per-port basis. Inc.  8-port 10-Gb M1-XL I/O module: The ports can be allocated to a VDC on a per-port basis. the ports on the I/O modules can be allocated to different VDCs following the hardware port groups.

net  32-port 10-Gb F1 I/O module: The ports can be allocated to a VDC in groups of two adjacent ports.  6-port 40 Gigabit Ethernet M2 I/O module: The ports can be allocated to VDCs individually.  2-port 100 Gigabit Ethernet M2 I/O module: The ports can be allocated to VDCs individually. They cannot run in the same VDC as M1 and F1 I/O modules. © 2012 Cisco Systems.  48-port 10-Gb F2 I/O module: The ports can be allocated to a VDC in groups of four adjacent ports. All F2 I/O modules must be allocated in their own VDC.certcollecion. Inc. Data Center Technologies 2-31 .

interfaces. 100. the Cisco ACE Module and appliance can be partitioned in multiple contexts to accommodate multiple applications. Security contexts allow administrators to separate and secure data center silos while providing easy management using a single system.” Similarly.certcollecion. Security Contexts Overview The Cisco ASA adaptive security appliance. Cisco Catalyst 6500 Series Firewall Services Module (FWSM). • Logical partitioning of a single Cisco ASA adaptive security appliance or Cisco ACE Module device into multiple logical firewalls or loadbalancing devices • Logical firewall or SLB = Context • Licensed feature on Cisco ASA 5580/5585-X and Cisco ASA-SM: - 2 contexts included. 50.0—2-19 Context Virtualization Concept Virtual firewalling presents logical partitioning of a single physical Cisco ASA adaptive security appliance. All rights reserved. DCUFD v5. . They lower overall management and support costs by hosting multiple virtual firewalls in a single device. By default. 50. • Security contexts can share interfaces.0 © 2012 Cisco Systems. and the Cisco FWSM can be partitioned into multiple virtual firewalls known as security contexts. Controlling resources enables multiple demilitarized zones (DMZs) and service differentiation classes (such as gold. 100. and 250 contexts • Licensed feature on Cisco ACE Module and appliance - 5 contexts included. and 250 concurrent security contexts. 2-32 Designing Cisco Data Center Unified Fabric (DCUFD) v5. A logical firewall is called “security context” or “virtual firewall. or bronze) per context for different data center segments. The system configuration file can also be used to configure resource allocation parameters to control the amount of system resources that are allocated to a context. A system configuration file controls the options that affect the entire module and defines the interfaces that are accessible from each security context. ASA Service Module. licenses for up to 250 contexts • Each context can have its own interfaces and its own security policy.net Virtualization with Contexts This topic describes how to design virtualized services using contexts on firewalling and loadbalancing devices. and administrators. silver. two security contexts can be created on one Cisco FWSM. © 2012 Cisco and/or its affiliates. Inc. into multiple logical firewalls. licenses for 20. You need a license to deploy 20. Each individual security context has its own security policies.

A virtual management interface enables management and administration for each security context and its data. authentication. some features such as OSPF and RIP routing are not supported in multiple context mode. When different security contexts connect to the same network—for example. Note On firewalls. only static routing is supported.certcollecion. Note Interfaces can be dedicated to a single context or shared among many contexts. Data Center Technologies 2-33 .net Each context has a separate configuration file that contains most of the definition statements that are found in a standalone Cisco FWSM configuration file. including items such as IP addressing. traffic control access control lists (ACLs). even while having access to their own context only. interfaces. Network Address Translation (NAT) and Port Address Translation (PAT) definitions. and a global management interface provides configuration in real time for the entire system. This configuration file controls the policies for the individual context. You can independently set the mode of each context to be either routed or transparent in Cisco FWSM and ASA Service Module. Each security context on a multimode Cisco ASA adaptive security appliance or Cisco FWSM has its own configuration that identifies the security policy. Inc. Note © 2012 Cisco Systems. the Internet—you can also use one physical interface that is shared across all security contexts. and almost all the options that you can configure on a single-mode firewall. Administrators can configure each context separately. and accounting (AAA) definitions. On the Cisco ACE appliance. authorization. and interface security levels.

2-34 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Hardware resources are shared among the contexts on a percentage basis.certcollecion. Inc. All rights reserved. and domains • Limited resource allocation • Management and data resource control • Independent application rule sets • Global administration and monitoring © 2012 Cisco and/or its affiliates.net 25% One Physical Device 25% 100% 15% 15% Multiple Virtual Systems (Dedicated Control and Data Path) 20% Traditional device: • Single configuration file Cisco application services virtualization: • Single routing table • Distinct configuration files • Limited role-based access control (RBAC) • Separate routing tables • RBAC with contexts. DCUFD v5. providing complete isolation from other contexts on both the control and data levels. .0—2-20 The Cisco ACE Module also supports the creation of virtual Cisco ACE Module images called “contexts.0 © 2012 Cisco Systems. roles.” Each context has its own configuration file and operational data.

certcollecion. and licenses are available that expand the virtualization that is possible to 250 contexts per Cisco ACE Module or 20 contexts per Cisco ACE appliance. The base code allows 5 contexts to be configured.0—2-21 Network resources can be dedicated to a single context or shared between contexts. Additional contexts. The number of contexts that can be configured is controlled by licensing on the Cisco ACE Module. All rights reserved. and the resources to be allocated to each context. The Admin context does not count toward the licensed limit on the number of contexts. a context named “Admin” is created by the Cisco ACE Module. DCUFD v5. as shown in the figure. Data Center Technologies 2-35 .net Physical Device Context 1 Context 2 Context 3 Admin Context Context Definition Resource Allocation Management Station AAA © 2012 Cisco and/or its affiliates. This context cannot be removed or renamed. © 2012 Cisco Systems. By default. are defined in the configuration of the Admin context. Inc.

or department. Using this platform. In this case. or VDCs. you have services that would be difficult to obtain on a physical device—in the example of firewalls. you can deploy a virtual firewall for a new application. virtual WAAS. . customer.net Virtualization with Virtual Appliances This topic describes how to design virtualized services using virtual appliances. 2-36 Designing Cisco Data Center Unified Fabric (DCUFD) v5. multiple contexts. and so on.0 © 2012 Cisco Systems. Physical devices are then segmented into multiple contexts. the appliance is able to run virtual service modules such as Cisco ASA 1000V. with the full functionality of a physical firewall. VPN connectivity. and so on. virtual NAM. Inc. Cisco Nexus 1010V is the hardware platform that runs virtual devices. a combination of routing. The classic data center design approach is to deploy physical devices and virtualizing them in order to achieve the required flexibility and resource utilization. and so on.certcollecion. Originally running only the Cisco Nexus 1000V Virtual Supervisor Module (VSM).

The drawback of this approach is lower data throughput.net Virtual services are provided by virtual appliances. to the virtual machine that is running the application. Virtualized devices run on general-purpose hardware. © 2012 Cisco Systems. because traffic is typically switched in software on the virtualized host. The benefit of this approach is greater deployment flexibility compared to physical devices.certcollecion. The primary component is the virtual switch. which resides in the virtualized host. which are chained by using VLANs into the correct sequence. You can simply deploy (or clone) a new virtual appliance and hand over the management of the appliance to the appropriate team or to the customer. The switch then forwards the traffic to multiple virtual appliances. These appliances reside in the “compute” cloud within the server infrastructure in the virtual infrastructure. Physical appliances have specialized hardware (network processors) that is purpose-built to inspect and forward data traffic. Data Center Technologies 2-37 . Inc.

net Summary This topic summarizes the primary points that were discussed in this lesson.certcollecion.0 © 2012 Cisco Systems. Inc. . 2-38 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

net Lesson 3 Designing Layer 2 Multipathing Technologies Overview This lesson describes multipathing technologies that are used in modern data centers with Cisco equipment. Objectives Upon completing this lesson. you will be able to design data centers using multipathing technologies. Multichassis EtherChannel (MEC).certcollecion. such as virtual port channel (vPC). and Cisco FabricPath. Multipathing technologies are available for both Layer 2 and Layer 3 forwarding. all without using Spanning Tree Protocol (STP). This ability includes being able to meet these objectives:  Explain link virtualization technologies that allow for scaling of the network  Design solutions using vPCs and MEC  Design solutions using Cisco FabricPath .

if they are not on the same access switch.certcollecion.net Network Scaling Technologies This topic describes link virtualization technologies that allow for scaling of the network. STP is not scalable enough to have larger Layer 2 domains. and virtual machine mobility (such as VMware VMotion).Only 50% of bandwidth available for client-server connections • East-West traffic must be switched across aggregation or even core switch . All rights reserved. Traffic between servers (“East-West”). Traditional networks have scalability limitations: • Only one physical or logical link between any two switches • Suboptimal paths between two switches introduced by the tree topology • North-South traffic flows across only one link .0 © 2012 Cisco Systems. suffers the same limitation. . Cisco FabricPath. it is not trivial to implement— it requires at least an IP addressing plan and IP routing. a good solution to this limitation is to segment the network to divide it into several Layer 3 domains. By building a tree topology. While this solution is proven and recommended. Examples of such traffic are clustered or distributed applications.Same bandwidth constraint for extended clusters or VM mobility A © 2012 Cisco and/or its affiliates. There are two technologies that provide a solution for the limitations of STP: 1. Inc. there are scalability limitations that are primarily introduced because of usage of STP.0—2-4 When traditional networks are implemented using Layer 2. B DCUFD v5. for even greater scalability 2-40 Designing Cisco Data Center Unified Fabric (DCUFD) v5. STP limits the upstream traffic (“North-South”) to only 50 percent of the bandwidth that is available toward the upstream switch. Multilink aggregation by using MEC or vPC to overcome the blocked bandwidth limitation imposed by STP 2. it blocks all other links to switches that could create a Layer 2 loop or provide an alternate path. Generally.

Note The interfaces on the FEX are presented as logical interfaces on the managing switch.Traditional EtherChannel or PortChannel • Mechanisms to overcome the STP issue of a blocked link: . the traffic paths are deterministic and need to travel through the data center core. Inc. FEXs provide additional ports that are attached to a remote. Both MEC and vPC technologies scale to up to two upstream switches.net • Layer 2 multipathing technologies are used to scale bandwidth of links: . All interfaces are managed through the managing switch and presented through the interface that attaches the FEX. but. This technology might utilize the links better than vPC or MEC.Cisco FabricPath . © 2012 Cisco Systems. DCUFD v5. Another technology that uses link virtualization is fabric extender (FEX). To link an access switch with multiple aggregation switches and without having STP blocking one link. on a lower level. Multipathing technologies virtualize links so that a link is presented as a single logical link to the control planes.TRILL PortChannel MEC vPC Cisco FabricPath © 2012 Cisco and/or its affiliates. the suitable technology is EtherChannel (or port channel).vPC • Technologies for even greater scalability: . Data Center Technologies 2-41 .certcollecion. it is a bundled link of several physical links. for traffic that needs to flow between several aggregation blocks. either of the following:  MEC on Cisco Catalyst 6500 Virtual Switching System (VSS)  vPC on the Cisco Nexus family of switches Not all platforms support all types of EtherChannels and port channels. An example of this technology is EtherChannel or port channel.MEC . To add more bandwidth between a pair of generic switches. consisting of a single switch pair or multiple pairs of switches. lightweight chassis that does not perform control plane operations. However. you can use.0—2-5 Link virtualization technologies are used to scale links between data center switches and other equipment. These technologies provide a robust upstream (aggregation) layer. Cisco FabricPath is a technology that allows you to wire access and aggregation switches in a fabric. making the path selection process more flexible and more redundant. All rights reserved. depending on equipment.

certcollecion.net
Cisco FabricPath can accommodate “East-West” traffic across several links and across multiple
topologies, without traffic leaving the aggregation layer.

• STP is used for compatibility or as a failsafe
mechanism for vPC.
• Rapid PVST+
- Easy deployment
- Every VLAN has its own instance, generated BPDUs,
and configured primary and secondary root bridges.

• MST
- Better scalability than Rapid PVST+

Primary
Root Bridge

Secondary
Root Bridge

- Every instance has its primary and secondary
root bridge, and generates BPDUs. There is less CPU
processing.
- VLANs assigned to instances
- Typically, two instances are configured per
aggregation block.

© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—2-6

STP is still used in modern data centers. The use cases are when a piece of equipment does not
support any Layer 2 multipathing mechanism, or as a failsafe mechanism when using vPC.
There are two options for STP deployment: Rapid Per VLAN Spanning Tree Plus (Rapid
PVST+), and Multiple Spanning Tree (MST).
For Rapid PVST+, the switch starts an instance of STP for every created VLAN. You can
configure primary and secondary root bridges for every VLAN, and the switch will generate
and process bridge protocol data units (BPDUs) for every VLAN that it has defined. When you
have many VLANs that are defined, the CPU load is significant, especially when failures occur
in the network.
MST is much more scalable because you manually define STP instances, and primary and
secondary root bridges. The switch then generates and processes BPDUs per instance, which is
much fewer than per VLAN. VLANs are then assigned to those instances.
Typically, two instances of MST are sufficient per aggregation block.

2-42

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
vPC and MEC
This topic describes how to design solutions using vPCs and MEC.

• MEC

- Requires Cisco Catalyst 6500
Series Switch with Supervisor
Engine 720-10GE or Supervisor
Engine 2T to build the VSS

vPC
-

Requires Cisco Nexus 5000,
5500, or 7000 switch to host
the vPC domain

-

Separate control plane on
vPC peer switches

-

Synchronization using Cisco
Fabric Services over
Ethernet

-

Layer 2 PortChannel support
only

- Single control plane in the VSS
- Processing controlled by the VSS
- Layer 2 and Layer 3 PortChannel
support

MEC

© 2012 Cisco and/or its affiliates. All rights reserved.

vPC

DCUFD v5.0—2-8

MEC and vPC are technologies that allow you to terminate a single port channel on two remote
devices. Remote devices run a control protocol that synchronizes the state of the port channel
and maintains it.
MEC is used where Cisco Catalyst 6500 Series Switches are bonded in VSS configuration. vPC
is used where Cisco Nexus Series switches are used. From the perspective of the downstream
switch, both technologies look the same, but there are fundamental differences regarding how
the control plane works on the aggregation (or upstream) devices.

MEC and VSS
The Cisco Catalyst VSS unifies a pair of two Cisco Catalyst 6500 Series Switches with the
Supervisor Engine 720-10GE, or Supervisor Engine 2T, into a single logical system, using a
single control plane. The control plane takes care of how the multichassis port channel is
managed and synchronized. There is one single control plane, running on the active supervisor
engine.

vPC
The vPC functions differently. The aggregation Cisco Nexus Series Switches (5000, 5500,
7000) function as two separate switches, each one with its own control plane. The vPC is
managed using a common entity on both switches—the vPC domain. This vPC domain uses a
special protocol to synchronize and maintain the port channels—the Cisco Fabric Services over
Ethernet.

© 2012 Cisco Systems, Inc.

Data Center Technologies

2-43

certcollecion.net
Primary Differences
This table lists the differences between VSS and vPC from the control plane perspective.
When VSS is created, two Cisco Catalyst 6500 Series Switches form a single virtual switch
with a single control plane. The virtual switch builds a single routing instance. From the
perspective of all other devices, the virtual switch is one network device, a single dynamic
routing protocol neighbor. On VSS, only one configuration is maintained.
A Cisco Nexus switch with a configured vPC has an independent control plane. That means
that there are two independent routing instances. vPC member devices have independent
configurations.
VSS versus vPC
VSS on Cisco Catalyst 6500
Series Router

vPC on Cisco Nexus 7000 Series
Switch

Control plane

Single logical node

Two independent nodes

Control plane protocols

Single instance

Independent instances

Layer 3 port channel support

Yes

No

High availability

Interchassis

Intrachassis, per process

EtherChannel

Static, PAgP, PAgP+, LACP

Static, LACP

Configuration

Configuration on one device

Two configurations to manage

Note

2-44

Unlike MEC, vPC does not support Layer 3 port channels. Routing protocol adjacencies
cannot be formed over a vPC. A dedicated Layer 3 link must be used for this purpose.

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net

• Existing infrastructure can be reused—only aggregation switches might
require an upgrade.
• Builds loop-free networks; STP is run as a failsafe mechanism.
• Condition: physical cabling must be of “looped triangle” design.
• Benefits from regular EtherChannels: added resiliency, easy scaling by
adding links, optimized bandwidth utilization, and improved
convergence.
• Depending on the platform, the amount of active links can be up to 8 (of
16 configured), or 16 (of 32 configured).

Aggregation

STP

MEC

Access

© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—2-9

One of the important benefits of deploying MEC or vPC is that existing infrastructure can be
reused, even without rewiring. Switches like the Cisco Catalyst 6500 can require a supervisor
engine upgrade to be able to form a VSS, but, as a benefit, the amount of oversubscription
between aggregation and access is reduced by half.
The primary loop avoidance mechanism is provided by MEC or vPC control protocols. STP is
still in operation and is running as a failsafe mechanism.
To be able to upgrade or migrate to MEC or vPC, access switches must be connected using the
“looped triangle” cabling design, as shown in the figure.
Link Aggregation Control Protocol (LACP) is the protocol that allows for dynamic port
channel negotiation and allows up to 16 interfaces into a port channel.
Note

On Cisco Nexus 7000 Series switches, a maximum of eight interfaces can be active, and a
maximum of eight interfaces can be placed in a standby state on the Cisco Nexus 7000 M
Series modules. When using Cisco Nexus 7000 F-Series modules, up to 16 active interfaces
are supported in a port channel.

Port Channel Load-Balancing Algorithms
Ethernet port channels provide load balancing based on the following criteria:

For Layer 2 frames, it uses the source and destination MAC address.

For Layer 3 frames, it uses the source and destination MAC address and the source and
destination IP address.

For Layer 4 frames, it uses the source and destination MAC address, the source and
destination IP address, and the source and destination port address.

Note

© 2012 Cisco Systems, Inc.

You can select the criterion in the configuration.

Data Center Technologies

2-45

certcollecion.net

• vPC Peers: A pair of
vPC-enabled switches
vPC PeerKeepalive Link

• vPC Domain: A pair of vPC peers
and associated vPC components
• vPC Peer-Keepalive Link:
Routed link carrying heartbeat
packets for active/active
detection

vPC
Peer

Layer 3
Cloud
vPC Domain
Peer
Link

Orphan
Port

CFS

• vPC Peer Link: Carries control
traffic between vPC peer devices
• vPC: Combined port channel
between the vPC peers and a
port channel-capable
downstream device

vPC
Orphan
Device

vPC Member
Port
Normal
Port Channel

• vPC Member Port: One of a set
of ports that form a vPC
© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—2-10

The vPC architecture consists of the following components:

2-46

vPC peers: The core of the vPC architecture is a pair of Cisco Nexus switches. This pair of
switches acts as a single logical switch, which allows other devices to connect to the two
chassis using MEC.

vPC domain: The vPC domain includes both vPC peer devices, the vPC peer-keepalive
link, the vPC peer link, and all of the port channels in the vPC domain that are connected to
the downstream devices. A numerical vPC domain ID identifies the vPC. You can have
only one vPC domain ID on each virtual device context (VDC).

vPC peer-keepalive link: The peer-keepalive link is a logical link that often runs over an
out-of-band (OOB) network. It provides a Layer 3 communication path that is used as a
secondary test to determine whether the remote peer is operating properly. No data or
synchronization traffic is sent over the vPC peer-keepalive link; only IP packets that
indicate that the originating switch is operating and running a vPC are transmitted. The
peer-keepalive status is used to determine the status of the vPC peer when the vPC peer
link goes down. In this scenario, it helps the vPC switch to determine whether the peer link
itself has failed, or if the vPC peer has failed entirely.

vPC peer link: This link is used to synchronize states between the vPC peer devices. Both
ends must be on 10 Gigabit Ethernet interfaces. This link is used to create the illusion of a
single control plane by forwarding BPDUs and LACP packets to the primary vPC switch
from the secondary vPC switch.

vPC: A vPC is a MEC, a Layer 2 port channel that spans the two vPC peer switches. The
downstream device that is connected on the vPC sees the vPC peer switches as a single
logical switch. The downstream device does not need to support vPC itself. It connects to
the vPC peer switches using a regular port channel, which can either be statically
configured or negotiated through LACP.

vPC member port: This is a port on one of the vPC peers that is a member of one of the
vPCs that is configured on the vPC peers.

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
• vPC VLAN: VLAN carried over the
peer link and across the vPC
vPC PeerKeepalive Link

• Non-vPC VLAN: STP VLAN not
carried over the peer link
• Orphan Device: A device that is
connected to a vPC peer using a
non-vPC link
• Orphan Port: Port on a vPC peer
that connects to an orphan device
- The term “orphan port” is also used
for a vPC member port that connects
to a device that has lost connectivity
to the other vPC peer.

• Cisco Fabric Services: A protocol
that is used for state
synchronization and configuration
validation between vPC peer
devices
© 2012 Cisco and/or its affiliates. All rights reserved.

vPC
Peer

Layer 3
Cloud
vPC Domain
Peer
Link

Orphan
Port

CFS

vPC
Orphan
Device

vPC Member
Port
Normal
Port Channel

DCUFD v5.0—2-11

vPC VLAN: This is one of the VLANs that is carried over the peer link and is used to
communicate via vPC with a peer device.

Non-vPC VLAN: This is one of the STP VLANs that is not carried over the peer link.

Orphan device: This term refers to any device that is connected to a vPC domain using
regular links instead of connecting through a vPC. A device that is connected to one vPC
peer is considered an orphan device. VLANs that are configured on orphan devices cross
the peer link.

Orphan port: This term refers to a switch port that is connected to an orphan device. The
term is also used for vPC ports whose members are all connected to a single vPC peer. This
situation can occur if a device that is connected to a vPC loses all its connections to one of
the vPC peers. An orphan port is a non-vPC interface on a switch where other ports in the
same VLAN are configured as vPC interfaces.

Cisco Fabric Services: The Cisco Fabric Services protocol is a reliable messaging protocol
that is designed to support rapid stateful configuration message passing and
synchronization. The vPC peers use the Cisco Fabric Services protocol to synchronize data
plane information and implement necessary configuration checks. vPC peers must
synchronize the Layer 2 Forwarding (L2F) table between the vPC peers. This way, if one
vPC peer learns a new MAC address, that MAC address is also programmed on the L2F
table of the other peer device. The Cisco Fabric Services protocol travels on the peer link
and does not require any configuration by the user. To help ensure that the peer link
communication for the Cisco Fabric Services over Ethernet protocol is always available,
spanning tree keeps the peer-link ports always forwarding. The Cisco Fabric Services over
Ethernet protocol is also used to perform compatibility checks to validate the compatibility
of vPC member ports to form the channel, to synchronize the Internet Group Management
Protocol (IGMP) snooping status, to monitor the status of the vPC member ports, and to
synchronize the Address Resolution Protocol (ARP) table.

© 2012 Cisco Systems, Inc.

Data Center Technologies

2-47

certcollecion.net

Cisco Fabric Services is used as the primary control plane protocol for vPC. It performs several
functions:

2-48

vPC peers must synchronize the Layer 2 MAC address table between the vPC peers. If one
vPC peer learns a new MAC address on a vPC, that MAC address is also programmed on
the L2F table of the other peer device for that same vPC.
This MAC address learning mechanism replaces the regular switch MAC address learning
mechanism and prevents traffic from being forwarded across the vPC peer link
unnecessarily.

The synchronization of IGMP snooping information is performed by Cisco Fabric Services.
L2F of multicast traffic with vPC is based on modified IGMP snooping behavior that
synchronizes the IGMP entries between the vPC peers. In a vPC implementation, IGMP
traffic that is entering a vPC peer switch through a vPC triggers hardware programming for
the multicast entry on both vPC member devices.

Cisco Fabric Services is also used to communicate essential configuration information to
ensure configuration consistency between the peer switches. Similar to regular port
channels, vPCs are subject to consistency checks and compatibility checks. During a
compatibility check, one vPC peer conveys configuration information to the other vPC peer
to verify that vPC member ports can form a port channel. In addition to compatibility
checks for the individual vPCs, Cisco Fabric Services is also used to perform consistency
checks for a set of switchwide parameters that must be configured consistently on both peer
switches.

Cisco Fabric Services is used to track the vPC status on the peer. When all vPC member
ports on one of the vPC peer switches go down, Cisco Fabric Services is used to notify the
vPC peer switch that its ports have become orphan ports and that traffic that is received on
the peer link for that vPC should now be forwarded to the vPC.

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

Data Center Technologies 2-49 .certcollecion. © 2012 Cisco Systems. this is accomplished by generating the LACP system ID from a reserved pool of MAC addresses.2(6) for the Cisco Nexus 7000 Switches. Both switches actively participate in traffic forwarding for the vPCs. Note Starting from Cisco NX-OS Software version 4. the behavior depends on the use of the peer-switch option. most notably in a peer-link failure. For LACP. combined with the vPC domain ID. Inc. the vPC peer-switch option can be implemented. If the peer-switch option is not used. The vPC primary or secondary role is chiefly a control plane role that determines which of the two switches will be responsible for the generation and processing of spanning-tree BPDUs for the vPCs. For STP. However. the two vPC peer switches present themselves as a single logical switch to devices that are connected on a vPC. However. When the peer-switch option is used. they use Cisco Fabric Services to perform bulk synchronization of the ARP table. they use the same bridge ID to present themselves as a single switch to devices that are connected on a vPC. Layer 3 vPC peers synchronize their respective ARP tables. but does not generate BPDUs itself for the vPCs. When two switches are reconnected after a failure. This feature is transparently enabled and helps ensure faster convergence time upon reload of a vPC switch.0(2) for the Cisco Nexus 5000 Switches and Cisco NX-OS Software version 4. The two switches use the same spanning-tree bridge ID to ensure that devices that are connected on a vPC still see the vPC peers as a single logical switch. For LACP and STP. which allows both the primary and secondary vPC device to generate BPDUs for vPCs independently. This election is non-preemptive. but the vPC peer switches determine through the peerkeepalive mechanism that the peer switch is still operational.2(6) for the Cisco Nexus 7000 Switches. the primary and secondary roles are also important in certain failure scenarios. When the vPC peer link fails. the operational secondary switch suspends all vPC member ports. both the primary and secondary switches send and process BPDUs. the primary vPC is responsible for generating and processing BPDUs and uses its own bridge ID for the BPDUs. an election is held to determine a primary and secondary vPC device. The secondary vPC relays BPDU messages.net  Starting from Cisco Nexus Operating System (NX-OS) Software version 5. The secondary device also shuts down all switch virtual interfaces (SVIs) that are associated with any VLANs that are configured as allowed VLANs for the vPC peer link. Between the pair of vPC peer switches.

DCUFD v5. All rights reserved. It carries only the traffic that needs to be flooded. 2-50 Designing Cisco Data Center Unified Fabric (DCUFD) v5. The peer link does not carry regular traffic for vPCs. © 2012 Cisco and/or its affiliates.Traffic for orphan ports • Regular switch MAC address learning is replaced with Cisco Fabric Services-based MAC address learning for vPCs: .net • The vPC peer link carries the following traffic only: . multicast. unknown unicast) .0 © 2012 Cisco Systems. multicast. . such as broadcast.Non-vPC ports use regular MAC address learning • Frames that enter a vPC peer switch from the peer link cannot exit the switch on a vPC member port.vPC control traffic . Inc.certcollecion. The exception to this rule is traffic that is destined for an orphaned vPC member port. It also carries traffic for orphan ports.0—2-13 vPC is designed to limit the use of the peer link specifically to switch management traffic and the occasional traffic flow from a failed network port. This principle prevents frames that are received on a vPC from being flooded back onto the same vPC by the other peer switch. One of the most important forwarding rules for vPC is that a frame that enters the vPC peer switch from the peer link cannot exit the switch from a vPC member port.Flooded traffic (broadcast. and unknown unicast traffic.

If the vPC peer device is up. to prevent loops and disappearing or flooding traffic. which is a link between vPC peer devices that ensures that both devices are up. The data then forwards down the remaining active links of the port channel.net vPC Peer-Link Failure 1. © 2012 Cisco Systems. Peer Link A 1 X B 3 vPC 4 4. The keepalive messages are used only when all the links in the peer link fail. All rights reserved.0—2-15 This figure shows vPC peer-keepalive link usage. the secondary vPC device disables all vPC ports on its device. the software checks the status of the remote vPC peer device using the peer-keepalive link. Inc. vPC Peer Failure— Peer-Keepalive Link • The software learns of a vPC peer device failure when the keepalive messages are not returned over the peer-keepalive link. 3. The vPC peer link on Switch A fails. The data then forwards down the remaining active links of the port channel. If the vPC on Peer B is up. the secondary vPC on Peer B disables all vPC ports on its device to prevent loops and black-holing or flooding traffic. © 2012 Cisco and/or its affiliates. © 2012 Cisco and/or its affiliates. PeerKeepalive Link 2 vPC Domain 2. All rights reserved. The software checks the status of the remote vPC on Peer B using the peer-keepalive link. The keepalive messages on the vPC peer-keepalive link determine whether a failure is on the vPC peer link only or on the vPC peer device. DCUFD v5.certcollecion.0—2-14 If the vPC peer link fails. PeerKeepalive Link vPC Domain Peer Link vPC DCUFD v5. Data Center Technologies 2-51 . • Use a separate link (vPC peer-keepalive link) to send configurable keepalive messages between the vPC peer devices.

the vPC peer link is excluded from the STP computation.certcollecion. . © 2012 Cisco and/or its affiliates. All rights reserved. To avoid loops.net • A pair of Cisco Nexus 7000 Series devices appears as a single STP root in the Layer 2 topology. In vPC peer-switch mode. This feature eliminates the need to pin the STP root to the vPC primary switch and improves vPC convergence if the vPC primary switch fails. • STP BPDUs are sent on both the vPC legs to avoid issues that are related to STP BPDU timeout on the downstream switches. This feature allows a pair of Cisco Nexus 7000 Series devices to appear as a single STP root in the Layer 2 topology. which can cause traffic disruption.0—2-16 The vPC peer-switch feature was added to Cisco NX-OS Release 5.0 © 2012 Cisco Systems. DCUFD v5.0(2) to address performance concerns around STP convergence. STP BPDUs are sent from both vPC peer devices to avoid issues that are related to STP BPDU timeout on the downstream switches. vPC Domain vPC Primary B P D U vPC Peer Secondary Link B P D U • It eliminates the recommendation to increase STP hello time on the vPC-pair switches. which can cause traffic disruption. Inc. This feature can be used with these topologies: 2-52  The pure peer-switch topology in which the devices all belong to the vPC  The hybrid peer-switch topology in which there is a mixture of vPC and non-vPC devices in the configuration Designing Cisco Data Center Unified Fabric (DCUFD) v5.

• vPC object tracking suspends vPCs on the impaired device so that traffic can be diverted over the remaining vPC peer. which forces all the traffic to be rerouted in the access switch toward the other vPC peer device © 2012 Cisco Systems. All rights reserved. © 2012 Cisco and/or its affiliates. Inc. which forces the vPC secondary peer device to take over  Brings down all the downstream vPCs on that vPC peer device. Data Center Technologies 2-53 .net • Providing flexible behavior under failover conditions • Tracking state of links of a vPC peer device • Peer link and core interfaces can be tracked as a list of Boolean objects.0—2-17 Use this configuration to avoid dropping traffic if a particular module goes down. the system does the following:  Stops the vPC primary peer device from sending peer-keepalive messages.certcollecion. because when all the tracked objects on the track list go down. X X L3 L2 vPC Peer Link XX vPC Peer Keapalive vPC Primary vPC Secondary DCUFD v5.

Inc. . 2-54 Designing Cisco Data Center Unified Fabric (DCUFD) v5. All rights reserved.net No routes Core Core e 1/26 e 1/25 e 1/25 e 1/26 e 1/26 vPC Peer Link Po 12 vPC Peer Link X Po 12 Po 12 vPC Peer Keapalive vPC Primary e 1/25 e 1/25 Shut SVIs X vPC Secondary Without object tracking © 2012 Cisco and/or its affiliates. the system automatically suspends all the vPC links on the primary vPC peer device and stops the peer-keepalive messages.0 © 2012 Cisco Systems.0—2-18 After you configure this feature. and if the module fails. Create a track list that contains all the links to the core and all the vPC peer links as its object. This action forces the vPC secondary device to take over the primary role and all the vPC traffic goes to this new vPC primary device until the system stabilizes.certcollecion. e 1/26 Po 12 vPC Peer Keapalive X vPC Primary vPC Secondary With object tracking DCUFD v5. Apply this same configuration to the other vPC peer device. Enable tracking for the specified vPC domain for this track list.

 It is recommended that you use multiple I/O modules for member links.net When implementing a vPC on Cisco Nexus 7000 Series switches that are populated with F1 and M1 I/O modules. © 2012 Cisco Systems. there are some design issues to consider:  In mixed chassis. Inc.  You must use ports from the same module type on each side of the vPC peer link (all M1 or all F1 ports on each side of the vPC peer link). M1-XL.  Mixing M1 or M1-XL and F1 interfaces in a single port channel is not allowed due to different capabilities. vPCs with M1 or M1-XL ports are allowed only if the vPC peer link runs in Classical Ethernet (CE) mode. M1. or F1 ports can function as vPC peer-link ports.certcollecion. Data Center Technologies 2-55 .  If F1 ports form the vPC peer link.

certcollecion. PeerKeepalive Link vPC Domain .0 © 2012 Cisco Systems.  Multicast: F2 vPC cannot support a dual designated router (DR) for Layer 3 multicast. Inc. F2 Peer Link F2 CFS © 2012 Cisco and/or its affiliates.net vPC on Cisco Nexus 7000 F2 I/O Modules • vPC remains the same except for the following: .Multicast • F2 vPC cannot support a dual DR for Layer 3 multicast. Support for vPC Peer Link and vPC Interfaces for Cisco Nexus 7000 I/O Modules I/O Module vPC Peer-Link vPC Interfaces N7K-M108X2-12L Yes Yes N7K-M132XP-12 Yes Yes No Yes No Yes Yes Yes N7K-M132XP-12L N7K-M148GT-11 N7K-M148GT-11L N7K-M148GS-11 N7K-M148GS-11L N7K-F132XP-15 N7K-F248XP-25 2-56 Designing Cisco Data Center Unified Fabric (DCUFD) v5. vPC remains the same except for the following:  Peer link: The vPC peer link on F2 I/O modules needs identical F2 modules on both sides.0—2-20 When implementing vPC on Cisco Nexus 7000 switches that are populated with F2 I/O modules. All rights reserved. This table lists the support that is available for vPC peer link and vPC interfaces for Cisco Nexus 7000 I/O modules. DCUFD v5. .Peer Link • The peer link on F2 I/O modules needs identical F2 modules on both sides.

Layer 3 adjacencies cannot be formed over a vPC or over a vPC peer link because vPC is a Layer 2-only connection. and Protocol Independent Multicast (PIM) DR with the vPC primary peer.0(2) to address performance concerns around STP convergence. To avoid loops.0—2-21 You must manually configure the following features to conform to the primary and secondary mapping of each of the vPC peer devices:  STP Root: Configure the primary vPC peer device as the STP primary root device and configure the vPC secondary device to be the STP secondary root device.If the vPC peer-switch is implemented. • Align the STP primary root. • Disable STP channel-misconfig guard if it is supported by the access switches. All rights reserved.  vPC Peer-Switch: The vPC peer-switch feature was added to Cisco NX-OS Release 5.  Hot Standby Router Protocol (HSRP) Active/Standby: If you want to use HSRP and VLAN interfaces on the vPC peer devices. They are disabled by default. provision an additional Layer 3 link. Configure the secondary device to be the HSRP standby. To bring up a routing protocol adjacency with a peer switch. This feature eliminates the need to pin the STP root to the vPC primary switch and improves vPC convergence if the vPC primary switch fails. Note © 2012 Cisco Systems. • Enable STP BPDU-guard globally. This feature allows a pair of Cisco Nexus 7000 Series devices to appear as a single STP root in the Layer 2 topology. configure the primary vPC peer device with the HSRP active highest priority. Inc. • Enable STP port type “edge” and port type “edge trunk” on host ports. © 2012 Cisco and/or its affiliates. . the feature optimizes use of the peer link and avoids potential traffic loss. Data Center Technologies 2-57 . Configuring the peer-gateway feature must be done on both primary and secondary vPC peers and is nondisruptive to the operations of the device or to the vPC traffic. • Do not enable loop guard and bridge assurance on the vPC.certcollecion. In vPC peerswitch mode. vPC Primary STP Primary Root HSRP Active vPC Secondary STP Secondary Root HSRP Standby Layer 3 Cloud DCUFD v5. HSRP active router. both vPC peers will behave as a single STP root. STP BPDUs are sent from both vPC peer devices to avoid issues that are related to STP BPDU timeout on the downstream switches. The vPC peer-gateway feature can be configured globally under the vPC domain submode. In this scenario. This feature enables local forwarding of such packets without the need to cross the vPC peer link. • Bridge assurance is enabled by default on the vPC peer link. the vPC peer link is excluded from the STP computation.  vPC Peer-Gateway The vPC peer-gateway capability allows a vPC switch to act as the active gateway for packets that are addressed to the router MAC address of the vPC peer.net • Configure aggregation vPC peers as root and secondary root. which can cause traffic disruption.

• Why Layer 2 domains? . Large Layer 2 domains allow for easy server provisioning and virtual machine mobility.0 © 2012 Cisco Systems.Load balancing . Inc. All rights reserved.Easy server provisioning .0—2-23 Cisco FabricPath is a technology that provides additional scalability and simplification of an Ethernet network. However.Scalable domain with automatic multipathing .Application and protocol requirements . In modern data centers. . and provides for automatic load balancing and redundancy. protects Layer 2 TCAM resources. prevents Layer 2 loops.net Cisco FabricPath This topic describes how to design solutions using Cisco FabricPath. • Cisco FabricPath allows easy scaling of Layer 2 domains in data centers. Cisco FabricPath is an innovative technology that is supported in Cisco NX-OS and brings the benefits of Layer 3 forwarding to Layer 2 networks.Redundancy © 2012 Cisco and/or its affiliates. 2-58 Designing Cisco Data Center Unified Fabric (DCUFD) v5.Allows VM mobility • Cisco FabricPath provides the following: .“Plug-and-play” implementation . DCUFD v5. They also accommodate requirements of clustered applications. It also provides more efficient forwarding and eliminates the need for STP.Simple implementation . there is demand for Layer 2 domains to grow larger. Cisco FabricPath is a technology that scales easily.certcollecion.No addressing required . there were limitations due to volumes of broadcast traffic and overutilization of MAC table ternary content addressable memory (TCAM) resources. Layer 2 domains are easy to implement and do not require any addressing to be preconfigured.

 Complex operation: Layer 3 segmentation makes data center designs static and prevents them from matching the business agility that is required by the latest virtualization technologies. Its role was to provide a stable network. provide design flexibility. DCUFD v5. The first generation of Layer 2 networks was run by STP. limiting bandwidth and enforcing inefficient paths between devices. However.certcollecion. Bridging domains are therefore restricted to small areas. All rights reserved. © 2012 Cisco Systems. The second-generation data center provides the ability to use all links in the LAN topology by taking advantage of technologies such as vPCs. Data Center Technologies 2-59 . strictly delimited by Layer 3 boundaries. there were drawbacks to networks that were based on STP:  Limited scale: Layer 2 provides flexibility but cannot scale. and to unblock them in case there was a change in the topology.net Spanning Tree vPC Cisco FabricPath 16 Switches Active Paths Single Dual 16-Way Layer 2 Scalability Infrastructure Virtualization and Capacity © 2012 Cisco and/or its affiliates.  Suboptimal performance: Traffic forwarding within a bridged domain is constrained by spanning-tree rules.0—2-24 Current data center designs are a compromise between the flexibility that is provided by Layer 2 and the scaling that is offered by Layer 3. Cisco FabricPath technology on the Cisco Nexus 7000 Series switches and on the Cisco Nexus 5500 Series switches introduces new capabilities and design options that allow network operators to create Ethernet fabrics that increase bandwidth availability. and simplify and reduce the costs of network and application deployment and operation. Inc. Any change to the original plan is complicated and configuration is intensive and disruptive. to block links that would form Layer 2 loops.

that provides fabricwide intelligence and ties the elements together. Cisco FabricPath is simple to configure. and switch addresses are assigned automatically. an industry standard that provides fast convergence and that has been proven to scale up to the largest service provider environments. DCUFD v5.certcollecion. . Cisco FabricPath © 2012 Cisco and/or its affiliates. based on Layer 3 technology. • An open protocol. It brings the benefits of Layer 3 routing to Layer 2 switched networks to build a highly resilient and scalable Layer 2 fabric. All rights reserved. A single control protocol is used for unicast forwarding. multicast forwarding.0—2-25 Cisco FabricPath is an innovative Cisco NX-OS feature that is designed to bring the stability and performance of routing to Layer 2. 2-60 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. and VLAN pruning. further reducing the overall management cost.net • Connect a group of switches using an arbitrary topology and aggregate them into a fabric. The only necessary configuration consists of distinguishing the core ports (which link the switches) from the edge ports (where end devices are attached). The Cisco FabricPath solution requires less combined configuration than an equivalent STP-based network. Cisco FabricPath uses a control protocol in addition to the powerful Intermediate System-toIntermediate System (IS-IS) routing protocol. Inc. There is no need to tune any parameter for an optimal configuration.

a protocol adds fabric-wide intelligence and ties the elements together. Inc. the frame is routed until it reaches the remote switch. These addresses are simply the address of the switch on which the frame was received and the address of the destination switch to which the frame is heading. © 2012 Cisco Systems. because Cisco FabricPath does not suffer from the scaling restrictions of traditional transparent bridging. a Cisco FabricPath fabric looks like a single switch. All rights reserved. Cisco FabricPath DCUFD v5. Cisco FabricPath takes control as soon as an Ethernet frame transitions from an Ethernet network (referred to as Classical Ethernet) to a Cisco FabricPath fabric. from a CE perspective. Ethernet bridging rules do not dictate the topology and the forwarding principles in a Cisco FabricPath fabric. Cisco FabricPath © 2012 Cisco and/or its affiliates. This property is achieved by providing optimal bandwidth between any two ports. . high resiliency .Optimal. reinforcing the perception of a single virtual switch. The frame is encapsulated with a Cisco FabricPath header. regardless of their physical locations.High bandwidth.Open management and troubleshooting • Cisco FabricPath provides additional capabilities in terms of scalability and Layer 3 integration.0—2-26 Cisco FabricPath delivers the foundation for building a scalable fabric—a network that itself looks like a single virtual switch from the perspective of its users. Also. low latency connectivity that is any-to-any .certcollecion. where it is de-encapsulated and delivered in its original Ethernet format.Presents itself as an STP root bridge • Internally. which consists of routable source and destination addresses. a particular VLAN can be extended across the whole fabric. From there on. Data Center Technologies 2-61 . This protocol provides the following in a plug-and-play fashion: .net • Externally.

All rights reserved. when combined with 16-port 10-Gb/s port channels.certcollecion. the network can use all the links that are available between any two devices. The first-generation hardware supporting Cisco FabricPath can perform 16-way ECMP. s8 B DCUFD v5. any-to-any. (Layer 2 as if it was within the same switch. Cisco FabricPath s3 A © 2012 Cisco and/or its affiliates.56 terabits per second (Tb/s) between switches.0—2-27 Frames are forwarded along the shortest path to their destination. • Single address lookup at the ingress edge identifies the exit port across the fabric. providing fast convergence. 2-62 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0—2-28 Because Equal-Cost Multipath (ECMP) can be used in the data plane.) MAC IF A e1/1 … … B s8. • Conversational learning—per-port MAC address table only needs to learn the peers that are reached across the fabric. DCUFD v5. with no STP inside. e1/2 Cisco FabricPath s3 s8 e1/2 e1/1 A B © 2012 Cisco and/or its affiliates. . Inc. which. • Reliable Layer 2 and Layer 3 connectivity. All rights reserved.0 © 2012 Cisco Systems. represents a potential bandwidth of 2.net • Shortest path. reducing the latency of the exchanges between end stations when compared to a spanning tree-based solution. any-to-any. • Traffic is then switched using the shortest path available. • ECMP • Multipathing (up to 256 links active between any two devices) • Traffic is redistributed across remaining links in case of failure.

security. DCUFD v5. All rights reserved. Links are assigned to topologies. You can have a link carrying traffic for multiple topologies. • Topologies can be used for traffic engineering. You can create additional topologies by assigning links to these topologies. VLANs are assigned to a topology. Data Center Technologies 2-63 . You can perform traffic engineering (“manual” load balancing) on the fabric. Inc. Cisco FabricPath fabrics have only one logical topology that they use—Topology 0.0—2-29 By default. Assigning links to multiple topologies allows you to assign traffic to dedicated paths if that is required. • A link can belong to several topologies. © 2012 Cisco Systems. and so on. all links are part of Topology 0.net • Topology: A group of links in the fabric. • A VLAN is mapped to a unique topology.certcollecion. L2 L5 L1 L3 L6 L4 L7 L8 L10 L11 Layer 2 Fabric L12 L9 Topology 0 Topology 1 Topology 2 L1 to L12 = Layer 1 to Layer 12 © 2012 Cisco and/or its affiliates. By default. • Other topologies can be created by assigning a subset of the links to them.

. • The forwarding engine learns the remote MAC only if a bidirectional conversation occurs between the local and remote MAC. Each forwarding engine performs MAC address learning independently of the other 15 forwarding engines on the module. The Cisco Nexus 5500 Series switches also have Cisco FabricPath support as of Cisco NX-OS version 5.certcollecion. each host learns the MAC address of every other device on the network. Beginning with Cisco NX-OS Release 5.1(3)N1(1). B e12/1(local) C S300 (remote) DCUFD v5. All rights reserved.1 and using the N7K-F132XP-15 module.0—2-30 Cisco FabricPath Control Plane Operation Conversational MAC Address Learning Conversational MAC address learning means that each interface learns only those MAC addresses for interested hosts. which greatly reduces the size of the MAC address tables. Each interface learns only those MAC addresses that are actively speaking with the interface. the MAC learning process is optimized. Inc. rather than all MAC addresses in the domain.net • The MAC learning method is designed to conserve MAC table entries on Cisco FabricPath edge switches. The MAC learning process takes place on only one of them. An interface only maintains a MAC address table for the MACs that ingress or egress through that forwarding engine. 2-64 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. With Cisco FabricPath. The N7K-F132XP-15 module has 16 forwarding engines. The interface does not have to maintain the MAC address tables on the other 15 forwarding engines on the module. All outer destination address Cisco FabricPath VLANs always use conversational learning. Conversational MAC learning is configured per VLAN. In traditional MAC address learning. not all interfaces have to learn all the MAC addresses on an F-Series module. You can configure CE VLANs for conversational learning on this module as well. S300 Cisco FabricPath MAC Table on S300 MAC C Cisco FabricPath MAC Table on S100 S100 MAC Interface or Switch ID B S200 (remote) C e7/10 (local) S200 Interface or Switch ID Cisco FabricPath MAC Table on S200 A e1/1 (local) MAC Interface or Switch ID B S200 (remote) A S100 (remote) MAC MAC A Cisco FabricPath Core MAC B © 2012 Cisco and/or its affiliates. • Each forwarding engine distinguishes between two types of MAC entry: local and remote.

values (TLVs). IS-IS provides the following benefits:  No IP dependency: There is no need for IP reachability in order to form adjacency between devices.0—2-31 Cisco FabricPath IS-IS With Cisco FabricPath. and high availability • Minimal IS-IS knowledge is required—no user configuration by default STP BPDU STP STP BPDU Cisco FabricPath IS-IS Cisco FabricPath Classical Ethernet Interface Cisco FabricPath Interface © 2012 Cisco and/or its affiliates.net • Cisco FabricPath IS-IS replaces STP as the control plane protocol in a Cisco FabricPath network.  Shortest Path First (SPF) routing: This provides superior topology building and reconvergence characteristics. There is no need to run STP. It is a purely Layer 2 domain. and multicast packets. Although the Cisco FabricPath network automatically verifies that each switch has a unique SID. If you choose to manually configure SIDs. network reconvergence. Cisco FabricPath Layer 2 IS-IS is a separate process from Layer 3 ISIS. All rights reserved. A new switch initially selects a random SID and checks to see if that value is already in use. IS-IS devices can exchange information about virtually anything. broadcast. a configuration command is provided for the network administrator to statically assign a SID to a Cisco FabricPath switch.  Easily extensible: Using custom types. you use the Layer 2 IS-IS protocol for a single control plane that functions for unicast.certcollecion. lengths. DCUFD v5. Every switch must have a unique source ID (SID) to participate in the Cisco FabricPath domain. • Introduces link-state protocol with support for ECMP for Layer 2 forwarding • Exchanges reachability of switch IDs and builds forwarding trees • Improves failure detection. Inc. © 2012 Cisco Systems. be certain that each switch has a unique value because any switch with a conflicting SID will suspend data plane forwarding on Cisco FabricPath interfaces while the conflict exists. Data Center Technologies 2-65 .

certcollecion.Forwarding based on switch ID table Ethernet STP Ethernet FabricPath Header Cisco FabricPath Classical Ethernet Interface Cisco FabricPath Interface © 2012 Cisco and/or its affiliates.0—2-32 Every interface that is involved in Cisco FabricPath switching falls into one of two categories:  Cisco FabricPath edge port: Cisco FabricPath edge ports are interfaces at the edge of the Cisco FabricPath domain.3 Ethernet frames. The switch generates a BPDU on the CE interface.Exchange topology information through Layer 2 IS-IS adjacency .3 Ethernet frame format . putting the Cisco FabricPath cloud at the top of the STP tree. no MAC address learning occurs on Cisco FabricPath core ports.  2-66 Cisco FabricPath core port: Cisco FabricPath core ports always forward Ethernet frames that are encapsulated in a Cisco FabricPath header. Designing Cisco Data Center Unified Fabric (DCUFD) v5.No MAC learning . the port can conceptually be considered a trunk port. Inc.Interfaces connected to existing NICs and traditional network devices . Cisco FabricPath switches perform MAC address learning on edge ports.Send and receive traffic in 802.net • Cisco FabricPath Edge Port (Classical Ethernet Interface): . These interfaces run Classical Ethernet and behave exactly like normal Ethernet ports. Generally. Ethernet frames that are transmitted on a Cisco FabricPath interface always carry an IEEE 802.1Q tag and.0 © 2012 Cisco Systems. You can configure an edge port as an access port or as an IEEE 802. DCUFD v5.Forwarding based on MAC table • Cisco FabricPath Core Port (Cisco FabricPath Interface): . Forwarding decisions occur based exclusively on lookups in the switch table.Participate in STP domain: advertises the Cisco FabricPath fabric as an STP root bridge .Interfaces connected to another Cisco FabricPath device . You can attach any Classical Ethernet device to the Cisco FabricPath fabric by connecting it to a Cisco FabricPath edge port. All rights reserved.1Q trunk.No spanning tree . therefore. . and frames that are transmitted on edge ports are standard IEEE 802.Send and receive traffic with Cisco FabricPath header . The whole Cisco FabricPath fabric appears as a spanning-tree root bridge toward the Classical Ethernet cloud.

... L4 . S100 A B S200 © 2012 Cisco and/or its affiliates. Inc.0—2-33 Building the Cisco FabricPath Routing Table The protocol used to establish the routed topology is a modified version of IS-IS. All rights reserved.net • IS-IS assigns addresses to all Cisco FabricPath switches automatically • Compute shortest. IS-IS automatically assigns addressing and switch names. and computes shortest paths between any two switches in the Cisco FabricPath cloud. L3. If multiple paths are available between two switches. L4 Interface A 1/1 B 400 S20 S30 S40 L3 L2 Cisco FabricPath L1 L4 S100: CE MAC Address Table MAC S10 . Data Center Technologies 2-67 . L2.. © 2012 Cisco Systems. S400 L1.. L2. L3..certcollecion. pair-wise paths • Support equal-cost paths between any Cisco FabricPath switch pairs S100: Cisco FabricPath Routing Table Switch Interface S10 L1 S20 L2 S30 L3 S40 L4 S200 L1. The IS-IS protocol is easily extensible and does not require any IP configuration to discover the topology and determine shortest-path trees. IS-IS installs both routes in the Cisco FabricPath routing table and performs ECMP between these two switches. . S300 S400 L1 to L4 = Layer 1 to Layer 4 DCUFD v5.

0 © 2012 Cisco Systems. • Support for multiple multidestination trees provides multipathing for multidestination traffic. Note For Cisco NX-OS Release 5. For every topology. and multicast packets. Inc. the system creates a broadcast tree that carries broadcast traffic. unknown unicast traffic. multidestination trees can be created. Within the Cisco FabricPath network. the system elects a root node that becomes the root for the broadcast tree. depending on the size of the topology and the links that are available. The system also creates a second tree and all the multicast traffic flows are load-balanced across these two trees for each flow. Note 2-68 Cisco FabricPath accommodates multiple topologies. the system chooses the forwarding path from among multiple system-created paths or trees.net • Multidestination traffic is constrained to loop-free trees that are touching all Cisco FabricPath switches. S200 S101 DCUFD v5. For the Cisco FabricPath network. For each broadcast.0—2-34 Multidestination Trees Cisco FabricPath introduces a new loop-free broadcast functionality that carries broadcast. and multicast traffic through the Cisco FabricPath network. • A loop-free tree is built from each root and assigned a network-wide identifier (the FTag). . All rights reserved. Designing Cisco Data Center Unified Fabric (DCUFD) v5. Cisco FabricPath S100 S100 S20 S10 S101 S30 Root S200 S40 S100 S10 S101 S20 S200 S30 Logical Tree 1 Logical Tree 2 S40 Root © 2012 Cisco and/or its affiliates. That node also identifies another bridge to become the root for the second multidestination tree. or forwarding tag (FTag). and multicast traffic flow. Each tree is identified in the Cisco FabricPath network by a unique value. unknown unicast. or multidestination traffic.1.certcollecion. unknown unicast. the system creates two trees to forward the multidestination traffic for each topology. S10 S20 S30 S40 • A root switch is assigned for each multidestination tree in the Cisco FabricPath domain. which load-balances the multicast traffic.

the end node ID field is not used by the Cisco FabricPath implementation. along with an IEEE 802. The fields of the Cisco FabricPath header are described here:  End node ID: As of Cisco NX-OS Release 5.1Q EType Payload CRC Original CE Frame DMAC SMAC 802. this bit is not used in Cisco FabricPath implementation.1Q EType CRC (new) Payload 12 bits 8 bits 16 bits 16 bits 10 bits 6 bits Switch ID Subswitch ID Port ID EType FTag TTL © 2012 Cisco and/or its affiliates. Inc.  Out of Order/Does Not Learn (OOO/DL) Bit: The function of the OOO/DL bit varies depending on whether the bit is set in the outer destination address (OOO) field or the outer source address (DL) field. MAC addresses and do not uniquely identify a particular hardware component as a standard MAC address would. indicating that the MAC address is locally administered (rather than universally unique). in fact. All rights reserved. The original Ethernet frame. Data Center Technologies 2-69 . This setting is required because the outer source address and destination address fields are not. Cisco FabricPath switches that are receiving such frames on a Cisco FabricPath core port parse these fields according to the format shown in this figure. is prepended by a 48-bit outer source address.certcollecion. a 48-bit outer destination address.2(1). All multidestination addresses have this bit set.0—2-35 Cisco FabricPath Encapsulation Cisco FabricPath encapsulation uses a MAC address-in-MAC address encapsulation format. © 2012 Cisco Systems. As of Cisco NX-OS Release 5.net Cisco FabricPath Encapsulation • • • • Switch ID: Unique number that identifies each Cisco FabricPath switch Subswitch ID: Identifies devices and hosts that are connected via vPC+ Port ID: Identifies the destination or source interface FTag: Unique number that identifies the topology or multidestination distribution tree • TTL: Decremented at each switch hop to prevent frames from looping infinitely Classical Ethernet Frame Cisco FabricPath Frame 1 1 2 bits I/G U/L End node ID (7:6) 1 1 OOO/DL RSVD 6 bits End node ID (5:0) Outer DA (48) Outer SA (48) FP Tag (32) DMAC SMAC 802. and a 32-bit Cisco FabricPath tag.2(1).  Individual/Group (I/G) Bit: The I/G bit serves the same function in Cisco FabricPath as in standard Ethernet. determining whether the address is an individual address or a group address. the presence of this field may provide the future capability for per-packet load sharing when ECMP paths are available. However. allowing forwarding decisions based on Cisco FabricPath down to the virtual or physical end-station level. While the outer source address and destination address may appear as 48-bit MAC addresses. However. DCUFD v5. the presence of this field may provide the future capability for an end station that is enabled for Cisco FabricPath to uniquely identify itself.  Universal/Local (U/L) Bit: Cisco FabricPath switches set the U/L bit in all unicast outer source address and destination address fields.1Q tag.

the FTag identifies the Cisco FabricPath topology that the frame is traversing. The subswitch ID value is locally significant to each vPC+ switch pair.net  Switch ID: Every switch in the Cisco FabricPath domain is assigned a unique 12-bit switch ID. the FTag identifies the multidestination forwarding tree that the frame should traverse.certcollecion. Inc. The value is locally significant to each switch. the value in the outer destination address is set to a specific value depending on the type of multidestination frame: — For multicast frames.  Subswitch ID: The subswitch ID field identifies the source or destination virtual port channel plus (vPC+) interface that is associated with a particular vPC+ switch pair. and the subswitch ID is used to select the outgoing port instead. — For unknown unicast frames. Cisco FabricPath switches running vPC+ use this field to identify the specific vPC+ port channel on which traffic is to be forwarded. In the absence of vPC+. identifies the specific physical or logical interface on which the frame was sourced or to which it is destined. therefore. this field identifies the Cisco FabricPath switch that originated the frame (typically the ingress Cisco FabricPath edge switch). As of Cisco NX-OS Release 5.2(1). For frames that are sourced from or destined to a vPC+ port channel. With unicast frames. this field is populated with the corresponding bits of a reserved multicast address (01:0F:FF:C2:02:C0) to facilitate MAC address table updates on Cisco FabricPath edge switches.  EtherType (EType): The EType value for Cisco FabricPath encapsulated frames is 0x8903. With multidestination frames. this field is set to a common value that is shared by both vPC+ peer switches.0 © 2012 Cisco Systems. a loop.  Port ID: The port ID. only a single topology is supported. also known as the local identifier (local ID). For multidestination frames. this field identifies the destination Cisco FabricPath switch. and this value is always set to 1. this field is populated with the corresponding bits of the destination MAC address field of the original (encapsulated) Ethernet frame. this field is set to 0. In the outer destination address.  FTag: The function of the FTag depends on whether a particular frame is unicast or multidestination. In the outer source address. Cisco FabricPath uses the Time to Live (TTL) field in the frame to prevent unlimited frame flooding and. Note 2-70 Instead of a loop prevention mechanism. . This field in the outer destination address allows the egress Cisco FabricPath switch to forward the frame to the appropriate edge interface without requiring a MAC address table lookup. this field is populated with the corresponding bits of a reserved multicast address (01:0F:FF:C1:01:C0). Designing Cisco Data Center Unified Fabric (DCUFD) v5. — For frames with a known inner destination MAC address but an unknown source.

.. Data Center Technologies 2-71 .. It will not learn the source address.. . If the switch has the destination MAC address in its local MAC address table (S300). A -> B S300: CE MAC Address Table MAC B Interface B 1/2 A S100 DCUFD v5.certcollecion. © 2012 Cisco and/or its affiliates.. The switch that receives the frame (S100). it will forward the frame and learn the source MAC address of the frame... This way..0—2-36 Conversational MAC Address Learning Conversational MAC address learning means that each interface learns only those MAC addresses for interested hosts. All rights reserved.. The switch that does not have the destination MAC address in its MAC address table (S200) will simply disregard the flooded frame. . S200 Lookup B: Hit Learn source A S300 1/2 S200: CE MAC Address Table MAC A . ... . with an unknown MAC address as the destination. MAC address table resources are conserved. Inc. will flood the frame out of all ports in the domain. Lookup B: Miss Do not learn Interface MAC Interface A 1/1 .. © 2012 Cisco Systems.. Each interface learns only those MAC addresses that are actively speaking with the interface. rather than all MAC addresses in the domain. .net MAC Address Learning: Unknown Unicast S10 S20 A→B S30 S40 S100 → M Cisco FabricPath Lookup B: Miss Flood S100 1/1 S100: CE MAC Address Table A -> B ..

. DCUFD v5.. Interface ... L4 Lookup A: Hit Send to S100 S300 1/2 S200: CE MAC Address Table MAC A .. ... 2-72 Designing Cisco Data Center Unified Fabric (DCUFD) v5.. .0 © 2012 Cisco Systems.net MAC Address Learning: Known Unicast S10 S20 B→A S30 S40 S300: FabricPath Routing Table S300 → S100 Switch Cisco FabricPath Lookup A: Hit Learn source B S100: CE MAC Address Table S100 1/1 B -> A . . B -> A S300: CE MAC Address Table MAC B Interface B 1/2 A S100 Conversational Learning © 2012 Cisco and/or its affiliates. Remote MAC addresses have the name of the switch to which they are attached instead of the upstream interface as in Classical Ethernet... L2.certcollecion.. B S300 .. they forward the frames based on the information that they have in their MAC address tables. L3... Inc. S200 Interface MAC Interface A 1/1 .. All rights reserved. S100 L1.0—2-37 After the switches have learned all relevant pairs of MAC addresses.

L2 F1 F1 S2 F1 S4 Cisco FabricPath Classical Ethernet L1.certcollecion. and so on • vPC+ requires F1 or F2 modules with Cisco FabricPath enabled in the VDC. A vPC+ domain allows a CE vPC domain and a Cisco FabricPath cloud to interoperate.Peer link and all vPC+ connections must be to F1 or F2 ports. you cannot mix F1 and F2 interfaces in vPC+.0—2-38 Cisco FabricPath and vPCs vPC support was added to Cisco FabricPath networks in order to support switches or hosts that dual-attach through Classical Ethernet. vPC+ must still provide active/active Layer 2 paths for dual-homed CE devices or clouds. Note vPC+ is an extension to vPCs that run CE only. vPC+ provides the solution by creating a unique virtual switch to the Cisco FabricPath network. L1 F1 • The subswitch ID in the Cisco FabricPath header is used to identify the vPC toward the CE device. Layer 2 po3 Host A DCUFD v5. The subswitch ID field in the Cisco FabricPath header identifies the vPC port channel of the virtual switch where the frame should be forwarded. dual-homed servers. Inc. Data Center Technologies 2-73 . A vPC+ domain also provides a First Hop Routing Protocol (FHRP) active/active capability at the Cisco FabricPath to Layer 3 boundary. L2 = Layer 1. or M1XL Series modules. which is a unique virtual switch to the rest of the Cisco FabricPath network. Therefore. even though the Cisco FabricPath network allows only one-to-one mapping between the MAC address and the switch ID. The F2 Series modules cannot exist in the same VDC with F1. M1. A vPC+ domain enables Cisco Nexus 7000 Series devices that are enabled with Cisco FabricPath devices to form a single vPC+. Each vPC+ has its own virtual switch ID. All rights reserved. © 2012 Cisco Systems. You cannot configure a vPC+ domain and a vPC domain on the same Cisco Nexus 7000 Series device. The system then performs proxy routing using both the F1 Series and the M1 Series modules in the chassis. The F1 Series modules have only Layer 2 interfaces. F1 vPC+ F1 S1 F1 F1 S2 F1 po3 Physical Host A S3 Logical L1 . . Layer 3 routers. • vPC+ creates a “virtual” Cisco FabricPath switch S4 for each vPC+ attached device to allow load balancing within a Cisco FabricPath domain.net S3 Interaction between Cisco FabricPath and vPC: • vPC+ allows dual-homed connections from edge ports into a Cisco FabricPath domain with active/active forwarding. L2 F1 S1 L2 F1 vPC+ F1 Host A→S4→L1.CE switches. You configure the same domain on each device to enable the peers to identify each other and to form the vPC+. © 2012 Cisco and/or its affiliates. you must have an M Series module inserted into the same Cisco Nexus 7000 Series chassis. To use routing with F1 Series modules and vPC+.

certcollecion. The vPC+ downstream links are Cisco FabricPath edge interfaces.0 © 2012 Cisco Systems.net This figure shows the differences between vPC and vPC+. Inc. which connect to the CE hosts. 2-74 Designing Cisco Data Center Unified Fabric (DCUFD) v5. . You must have all interfaces in the vPC+ peer link as well as all the downstream vPC+ links on an F-Series module with Cisco FabricPath enabled.

Data Center Technologies 2-75 .certcollecion. Through the vPC+. the CE switch can take full advantage of multipathing. S200 MAC C DCUFD v5. © 2012 Cisco Systems.or multi-attached to the core switches arbitrarily and form the Cisco FabricPath domain. a virtual switch is created in the topology to accommodate the vPC+. All rights reserved.net Peer link runs as Cisco FabricPath core port vPCs configured as normal S10 Peer link and peer keepalive required S20 S30 S40 VLANs must be Cisco FabricPath VLANs No requirements for attached devices other than channel support S100 MAC A Cisco FabricPath MAC B © 2012 Cisco and/or its affiliates.  On the Cisco FabricPath switches. Classical Ethernet switches must be dual-attached to one pair of Cisco FabricPath switches. Inc.0—2-40 This figure explains vPC+ physical topology:  Cisco FabricPath devices can be dual.

GLBP AVF Cisco FabricPath AVF = actual virtual forwarder DCUFD v5. which hands out a different gateway MAC (up to four different MACs) in a round-robin fashion. HSRP Active/Standby The gateway protocols work in Cisco FabricPath the same way that they work in typical Ethernet networks.certcollecion.HSRP on spine switches .net • HSRP • GLBP .HSRP MAC is active only on one gateway. Even without any further configuration than the standard configuration of HSRP. 2-76 Designing Cisco Data Center Unified Fabric (DCUFD) v5. . instead. you can use the Gateway Load Balancing Protocol (GLBP). GLBP AVG AVF Cisco FabricPath © 2012 Cisco and/or its affiliates. HSRP Active HSRP Standby . be forwarded only to the active HSRP device. .GLBP MAC addresses are active on multiple AVFs.0 © 2012 Cisco Systems. Traffic that must be routed to other networks (“Northbound”) needs to be forwarded to the default gateway.Active/active deployment . The default active/standby behavior of HSRP does not negate the value of Cisco FabricPath. which is especially suitable for “East-West” traffic. the Layer 2 traffic (east to west) would benefit from multipath forwarding.GLBP on spine switches .0—2-41 Default Gateway Routing and Cisco FabricPath Cisco FabricPath is a Layer 2 technology that provides a fabric that is capable of efficient forwarding of traffic and load balancing.Hosts learn one default gateway MAC address through conversational learning. Inc.Active/standby deployment . GLBP Active/Active To perform traffic forwarding from multiple spine switches. .Hosts learn multiple default gateway MAC addresses through conversational learning. Routed traffic (south to north) would. All rights reserved.

HSRP virtual MAC address is bound to the virtual switch ID of vPC+. Inc. The spine devices are connected by an additional Cisco FabricPath link (which would be recommended in any case to optimize multidestination tree forwarding) and by defining it as a vPC+ peer link. . whereby if the peer link goes down. you can achieve HSRP forwarding in active/active fashion. switch-id 209 DCUFD v5.Switches in the fabric perform ECMP to the virtual switch across both spine switches. the default behavior of the vPC applies. Cisco FabricPath switch-id 201 switch-id 202 © 2012 Cisco and/or its affiliates. switch-id 101 vPC+ switch-id 102 vPC peer link vPC peer-keepalive link switch-id 1000 HSRP virtual MAC address is reachable through virtual switch 1000. © 2012 Cisco Systems. All rights reserved. this behavior is not beneficial. because the Cisco FabricPath links are still available. Data Center Technologies 2-77 .0—2-42 HSRP Active/Active By taking full advantage of the concept of vPC+ at the spine. Note The vPC+ peer link must be built using F1 ports and not M1 ports. Peer Link Failure As a result of declaring the link that connects the spines as a vPC peer link. In the context of Cisco FabricPath designs. As a result of configuring HSRP and vPC+. . The use of vPC+ at the spine is strongly recommended. and there is no good reason to shut down the SVIs on the secondary device. the SVIs on the vPC secondary device are shut down. To continue forwarding over the Cisco FabricPath fabric to the HSRP default gateway. vPC+ gives you the ability to forward routed traffic to multiple routing engines as well as to optimize failover times. exclude the SVIs to be shut down by properly configuring the vPC domain. the edge switches learn the association of HSRP virtual MAC addresses with the emulated switch ID instead of the individual spine switch ID. The use of vPC+ allows the routed traffic between any two given endpoints to benefit from both Layer 2 equal-cost multipathing and from the aggregated routing capacity of the spine switches. because it must be configured as a Cisco FabricPath link.net • HSRP active/active is also possible by adding a vPC+.certcollecion.

DCUFD v5. which then load-balances the traffic across both edge switches. • vPC+ at the edge allows multihoming for a FEX or for a vPC to a host. In all cases.0—2-43 vPC+ at the Edge Additional links at the Cisco FabricPath edge are beneficial because they allow direct server-toserver traffic between edge switches and do not need to traverse one of the spine switches. All rights reserved. Another example is using the link as the vPC peer link to either multihome a FEX or to create a port channel to a host that intensively uses the network. 2-78 Designing Cisco Data Center Unified Fabric (DCUFD) v5. .0 © 2012 Cisco Systems. Inc. Traffic from the network to the server (“Southbound”) is sent to the emulated vPC+ virtual switch.net • The link between edge switches allows for direct server-to-server traffic.certcollecion. The additional path is considered by the Cisco FabricPath IS-IS control plane. vPC+ vPC peer link vPC peer-keepalive link Cisco FabricPath © 2012 Cisco and/or its affiliates. the link between edge switches must be configured as a Cisco FabricPath link. such as a virtual machine host. forwarding it to the server.

Inc.net Summary This topic summarizes the primary points that were discussed in this lesson.cisco. Data Center Technologies 2-79 . please refer to the following material:  Cisco FabricPath Design Guide: Using FabricPath with an Aggregation and Access Topology at http://www.html#wp9000285 © 2012 Cisco Systems.com/en/US/prod/collateral/switches/ps9441/ps9670/guide_c07690079. References For additional reference.certcollecion.

.net 2-80 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc.certcollecion.0 © 2012 Cisco Systems.

including those that would otherwise be blocked by Spanning Tree Protocol (STP). you learned about Cisco technologies that are used in data centers. such as Cisco FabricPath and Cisco OTV. increasing the operational efficiency of devices. all installed bandwidth between network layers can be used. This way. New technologies. Data Center Technologies 2-81 . In data centers. Inc. help to design a combination that best suits the intended use. • The network switching equipment can function on two ISO-OSI layers— Layer 2 and Layer 3. © 2012 Cisco Systems. DCUFD v5.0—2-1 In this module. • Virtualization of network components is an important mechanism that allows for consolidation of devices. All rights reserved. The second lesson covered various device virtualization technologies that are used to virtualize physical equipment into multiple logical devices. allowing for designs that utilize all links between the switches. The first lesson covered traditional routing and switching technologies that are widely supported. and MEC.certcollecion. depending on the type of equipment. extended Layer 2 domains are now popular. The last lesson described Layer 2 multipathing technologies. • Layer 2 multipathing technologies allow for better utilization of links that are configured as Layer 2. Examples include vPC. Cisco FabricPath. These technologies include VDCs and contexts.net Module Summary This topic summarizes the primary points that were discussed in this module. both layers are used in various combinations. While routing on Layer 3 has clear benefits in terms of network segmentation and convergence speed. These device virtualization technologies allow for equipment consolidation in data centers. © 2012 Cisco and/or its affiliates.

. Inc.0 © 2012 Cisco Systems.net 2-82 Designing Cisco Data Center Unified Fabric (DCUFD) v5.certcollecion.

It shortens cable lengths. What are two considerations to make when you are assigning interfaces to VDCs? (Choose two. Q1) Which forwarding database is used on the I/O modules to decide where to forward the packet? (Source: Designing Layer 2 and Layer 3 Switching) A) B) C) D) Q2) Which type of forwarding is being used when a switch is forwarding traffic using its supervisor engine? (Source: Designing Layer 2 and Layer 3 Switching) A) B) C) D) Q3) Multiple switches join the same VDC and establish vPCs.net Module Self-Check Use these questions to review what you learned in this module. Power requirements increase when you enable VDCs. It provides switching functions on the extended fabric.) (Source: Virtualizing Data Center Components) A) B) C) D) Q6) number of managed devices firewall inspection capabilities use of contiguous subnets that can be easily summarized use of IPv6 exclusively to prevent attacks to the management network How is the Cisco Nexus 7000 Series Switch virtualized using VDCs? (Source: Virtualizing Data Center Components) A) B) C) D) Q5) MAC address-based forwarding distributed forwarding with central forwarding engine centralized forwarding routed transport What must be considered when designing IP addressing for the data center management network? (Source: Designing Layer 2 and Layer 3 Switching) A) B) C) D) Q4) OSPF topology table dFIB RIB MAC address table The sum of the required resources for every VDC must not exceed the available resources on the I/O modules. A switch is divided into multiple VDCs. Data Center Technologies 2-83 .certcollecion. The switch is divided using multiple VLANs and VRFs. It provides network self-management capabilities. VDCs virtualize the switch on the hypervisor layer. The correct answers and solutions are found in the Module Self-Check Answer Key. You can mix all I/O modules in the VDCs. It provides cost-effective connectivity to gigabit-only endpoints. Inc. What is the purpose of a fabric extender? (Source: Designing Layer 2 Multipathing Technologies) A) B) C) D) © 2012 Cisco Systems. The ports must be allocated considering the distribution of ports in port groups on the I/O module.

0 © 2012 Cisco Systems. .) (Source: Designing Layer 2 Multipathing Technologies) A) B) C) D) E) Q8) Which address learning mechanism does Cisco FabricPath use? (Source: Designing Layer 2 Multipathing Technologies) A) B) C) D) Q9) MAC address learning by flooding IS-IS-based MAC address learning conversational MAC address learning prolonged MAC address retention What is the primary benefit of a Cisco FabricPath fabric against vPC? (Source: Designing Layer 2 Multipathing Technologies) A) B) C) D) 2-84 vPC MEC STP HSRP Cisco FabricPath incorporated routing for HSRP the ability to use more than two upstream switches for load balancing to other parts of the network vPC support for Cisco Catalyst switches faster data plane operation Designing Cisco Data Center Unified Fabric (DCUFD) v5.net Q7) What are the three most commonly used Layer 2 multipathing technologies? (Choose three.certcollecion. Inc.

D Q6) A Q7) A. B. E Q8) C Q9) B © 2012 Cisco Systems. Inc.net Module Self-Check Answer Key Q1) B Q2) C Q3) A Q4) B Q5) A. Data Center Technologies 2-85 .certcollecion.

net 2-86 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc.0 © 2012 Cisco Systems. .certcollecion.

topologies. and so on. and services in the aggregation layer  Design the data center physical access layer  Design the data center virtual access layer and related physical connectivity. In this module. Cisco FabricPath. the layered approach is used. The access layer connects the physical devices. clusters. and access layers. next-hop redundancy protocols. aggregation. and depending on the size of the data center. are the best fit for any particular design requirements. over various underlying technologies .certcollecion. and LISP  Design data center interconnects for both data traffic and storage traffic. In addition to this classic example. Several devices need to be interconnected. there are other examples as well. including IP routing. the virtual access layer. such as the collapsed core layer. you will also study data center topologies that will enable you to determine which technologies. Module Objectives Upon completing this module. and so on. and the core layer interconnects multiple data center aggregation blocks and the rest of the network. This ability includes being able to meet these objectives:  Design data center connections and topologies in the core layer  Design data center connections. you will be able to design data center connections and topologies in the core. and describe scalability limitations and application impact  Design for data center high availability with various technologies.net Module 3 Data Center Topologies Overview In this module. the aggregation layer aggregates the connections. you will learn about designing data centers from the topology point of view. such as virtual port channel (vPC).

Inc.certcollecion. .0 © 2012 Cisco Systems.net 3-2 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

certcollecion. you will be able to design data center connections and topologies in the core layer. This ability includes being able to meet these objectives:  Identify the need for the data center core layer  Design a Layer 3 data center core layer  Design a Layer 2 data center core layer  Evaluate designs using data center collapsed core . High-speed switching and highbandwidth links are provisioned in the core network. you will learn about the role of the data center core layer. The role of the data center core is to provide an interconnection between data center aggregation blocks and the campus core network.net Lesson 1 Designing the Data Center Core Layer Network Overview In this lesson. and design considerations for it. Objectives Upon completing this lesson.

0 © 2012 Cisco Systems. The data center core should not drop frames because of congestion. An efficient core improves the performance of applications. . All rights reserved. and the campus network.0—3-4 The primary function of the data center core layer is to provide a high-speed interconnection between different parts of the data center. Inc. • High-speed switching of frames between networks • No oversubscription for optimal throughput Core Layer Aggregation Layer Access Layer © 2012 Cisco and/or its affiliates. you should allow as little oversubscription as possible. 3-4 Designing Cisco Data Center Unified Fabric (DCUFD) v5. • The data center core layer interconnects aggregation blocks. When provisioning links and equipment for the core.net Data Center Core Layer This topic describes how to identify the need for the data center core layer.certcollecion. DCUFD v5. which are grouped in several aggregation blocks.

© 2012 Cisco Systems. broadcast flooding. and so on). Using traditional Layer 2 technologies introduces severe drawbacks in link scalability (that is. such as the virtual port channel (vPC). so such deployments had limited scalability. Using IP routing implies that you segment the data center into multiple networks. or the Multichassis EtherChannel (MEC) (when using Cisco Catalyst Virtual Switching System [VSS]). and route traffic between them. the Spanning Tree Protocol (STP). To overcome limitations of traditional single-domain Layer 2 deployments. Data Center Topologies 3-5 . When network segmentation is not desired. ECMP supports up to 16 paths with Cisco Nexus 7000 and Catalyst 6500 Series equipment. such as Equal-Cost Multipath (ECMP). you can achieve good scalability and throughput by using multilink aggregation technologies.net Most data center core deployments are based on Layer 3 forwarding technologies. Alternatively. traffic can be forwarded on a Layer 2 basis. you can use Cisco FabricPath as the core forwarding technology. These include IP routing protocols and forwarding technologies. which can control the whole Layer 2 domain and select optimal paths between devices. Inc.certcollecion.

Layer 3 IP routing configuration is required in the data center core and aggregation layers. OSPF in particular allows route summarization on network area borders. and only the default route is advertised into the data center. or point-to-point VLANs between switches. The objective is to keep the enterprise core routing table as concise and stable as possible to limit the impact of routing changes happening in other places in the network from impacting the data center. This includes routing adjacency configuration. and vice versa. 3-6 Designing Cisco Data Center Unified Fabric (DCUFD) v5. possible route filtering. The Layer 2 infrastructure usually uses point-to-point links.certcollecion. summarization. Over those links. . Inc. Route summarization is recommended at the data center core.net Layer 3 Data Center Core Design This topic describes how to design a Layer 3 data center core layer.0 © 2012 Cisco Systems. and default route origination. you can establish a routing protocol adjacency and forward traffic between devices. so that only summarized routes are advertised out of the data center. Some of the common Layer 3 features required in the data core include the ability to run an interior gateway protocol (IGP) such as Open Shortest Path First (OSPF) or Enhanced Interior Gateway Routing Protocol (EIGRP).

Data Center Topologies 3-7 . © 2012 Cisco Systems. such as database servers. This has become the requirement for enabling workload mobility (live virtual machine migration). A failure in the STP or in a VLAN (Layer 2 broadcast domain) in one aggregation block does not reach the core switch. and application clusters. On the other hand. All rights reserved.certcollecion. When using a Layer 3 core layer. Block 1 Agg. since all traffic between aggregation blocks passes through the core • Drawback: network segmentation does not allow extension of the same VLAN between two aggregation blocks Data Center Core ECMP Core Layer Agg. This provides good failure isolation and stability for the rest of the data center network.0—3-8 A design that uses segmented networks provides good failure isolation. the Layer 3 segmented network has some drawbacks: network segmentation does not allow you to extend the same VLAN between two aggregation blocks. STP STP DCUFD v5. you have control over traffic because all traffic between aggregation blocks needs to be forwarded through the core switches. Block 2 Aggregation Layer Access Layer © 2012 Cisco and/or its affiliates. Inc.net • Design using segmented networks provides good failure isolation • Can implement centralized control over traffic.

Server virtualization hosts DC Core ECMP Agg. DCUFD v5. All rights reserved. or virtual machine mobility solutions. the same VLAN must be available in different aggregation blocks) • Examples of applications that require back-end communication to be in the same VLAN: . the Layer 2 connectivity cannot be contiguous.0—3-10 The requirement for data center-wide VLANs has become apparent with the arrival of technologies that enable workload mobility (live virtual machine migration). direct Layer 2 connectivity is not possible. and application clusters. Database server clusters use Layer 2 connectivity for operation. and back-end database synchronization. Inc. Note Some databases may be able to synchronize over IP-based networks. cluster load balancing. Such VLANs are used for control traffic. Block 2 STP Layer 2 local connectivity only Only Layer 3 connectivity available through the core © 2012 Cisco and/or its affiliates.0 © 2012 Cisco Systems.certcollecion.net Layer 2 Data Center Core Design This topic describes how to design a Layer 2 data center core layer. Because there is a Layer 3 core between the aggregation blocks of the data center. . Under these conditions. 3-8 Designing Cisco Data Center Unified Fabric (DCUFD) v5. you do not need to have all servers that are part of clusters be in the same subnet.Database servers . To enable host-to-host reachability through the data center core. such as database servers. the hosts need to be in different subnets and in different Layer 2 domains. In this case. Block 1 STP Agg. Clusters of server virtualization hosts may also require the same VLAN to be extended across various parts of the data center. • When the same VLAN is required in multiple parts of the data center network (that is.

 50 percent of bandwidth available: The STP will block half of the links to prevent Layer 2 loops. © 2012 Cisco Systems. STP STP DCUFD v5. need to be implemented at the core and require devices that manage more bandwidth. firewalling. and so on. Data Center Topologies 3-9 .  Deployment of IP Services at the core: While not a real challenge. and generate a topology change. IP routing. However. Broadcast storm suppression can be used on the links. application broadcast traffic.net • Very large spanning tree domain • Only 50 percent of installed bandwidth available throughout the network • Large MAC tables on all switches • High amount of traffic flooding • One malfunctioning device is a threat to the whole network • Deployment of IP Services at the core: FHRP. but it also stops legitimate broadcast traffic.0—3-11 The Layer 2 data center core provides a foundation for data center-wide VLANs. The more network devices you have in such a network. because it can generate broadcast storms. or are challenging to implement:  Very large spanning tree domains: Very large spanning tree domains greatly affect the convergence time of STP when changes occur in the network. and so on. the topology will need to reconverge and trigger an event.  Large MAC tables on all switches: There are many hosts in such networks. and switches need to constantly age-out and relearn MAC table entries from traffic flows.certcollecion. services such as First Hop Redundancy Protocol (FHRP). IP routing. the greater is the chance that some devices change state from online to offline. any switch that has STP root ports). there are a couple of issues that introduce potential risks. Block 2 STP Aggregation Layer Access Layer © 2012 Cisco and/or its affiliates. DC Core Core Layer Agg. Block 1 Agg.  High amount of traffic flooding: Broadcast frames for protocols such as Address Resolution Protocol (ARP). Inc. All rights reserved. various discovery mechanisms. firewalling.  Malfunctioning devices: One malfunctioning device is a threat to the whole network. and unknown unicast traffic generates much flooding traffic that burdens the entire network and all attached network devices. If a crucial switch goes offline (that is.

such as broadcast storms.0—3-12 Layer 2 multipathing mechanisms will solve the issue of only 50 percent of bandwidth being available because they manage multiple uplinks. All rights reserved. end-to-end VLANs are available. This increases available bandwidth and reduces convergence time. excessive flooding. and so on. Block 2 vPC Aggregation Layer Access Layer vPC © 2012 Cisco and/or its affiliates. However. large MAC table usage.0 © 2012 Cisco Systems. Block 1 Agg. but with all of the potential issues of flat Layer 2 networks/ DC Core Core Layer Agg. this does not protect you from the downsides of Layer 2 flat networks. .net • Layer 2 multipathing using vPC solves only the available bandwidth issue. 3-10 Designing Cisco Data Center Unified Fabric (DCUFD) v5. • Data center-wide. Inc.certcollecion. so that they can be used at the same time. vPC DCUFD v5.

certcollecion. the VLAN extension mechanism. vPC vPC DCUFD v5.net • “Best of both worlds” design • Layer 3 core with network segmentation and ECMP • Extended subnets across aggregation blocks provided by Cisco OTV • Layer 3 core acts as Cisco OTV IP backbone Agg. such as Cisco Overlay Transport Virtualization (OTV). you can have all VLANs that you require present at all aggregation blocks. Block 1 DC Core Agg. and allow for live mobility of virtual machines between the segments of the network. All rights reserved. Data Center Topologies 3-11 . Block 2 ECMP Core Layer OTV Aggregation Layer Access Layer © 2012 Cisco and/or its affiliates. In this case.0—3-13 An elegant solution is to use a common and well-known design for the Layer 3 core. In this case. Inc. © 2012 Cisco Systems. the Layer 3 core acts as an IP backbone for Cisco OTV. Cisco OTV is supported on Cisco Nexus 7000 Series Switches. and extend the VLANs that you need at multiple aggregation blocks with a Layer 3 encapsulation technology.

which allows up to 16-way load balancing. traffic can be load balanced between two leaf switches across both spine switches. IP routing. Optimal path selection and load balancing is achieved by Cisco FabricPath. 3-12 Designing Cisco Data Center Unified Fabric (DCUFD) v5. . server load balancing. One of the most important benefits of Cisco FabricPath is that it can use several links to carry data for the same VLAN between several aggregation switches. To conserve core switch resources.0—3-14 Another option to bring the same VLAN to multiple aggregation blocks is to use Cisco FabricPath.0 © 2012 Cisco Systems. • FabricPath uses an underlying routed topology for loop prevention. it uses conversational MAC address learning. The switches that connect the access switches are called leaf switches. DCUFD v5. All rights reserved. and so on) in the core FabricPath Core Layer FP Spine Switches FP Leaf Switches Aggregation Layer Access Layer Direct L2 connectivity available © 2012 Cisco and/or its affiliates. Inc. You still need to provide IP Services such as FHRP. In the example in the figure. FabricPath uses an underlying routed topology and the Time to Live (TTL) field in the frame to prevent Layer 2 loops and broadcast storms.certcollecion. which connect to other leaf switches across spine switches.net • Cisco FabricPath can be used to enable data center-wide VLANs. and connection firewalling at the data center core layer. • Can distribute the load over several links between aggregation blocks • Implementation of other IP services (firewalling.

Collapsed core designs are suitable for smaller data centers where the cost of core switches cannot be justified. © 2012 Cisco Systems. core and aggregation layers are on the same set of switches. Inc. VDCs provide a collapsed core design on the physical level. In this case.certcollecion. you can use virtual device contexts (VDCs) for device segmentation. but separate core and aggregation layers on the logical level.net Data Center Collapsed Core Design This topic describes how to evaluate designs using data center collapsed core. When using the Cisco Nexus 7000 Series Switches. Data Center Topologies 3-13 .

This VDC has IP routing configuration and route summarization. In the example. you must use physical connections (cables). and with other aggregation VDCs.0—3-17 This example describes the collapsed core layer using Cisco Nexus 7000 Series Switches with VDCs configured.0 © 2012 Cisco Systems. and consumes 10 Gigabit Ethernet ports that would otherwise be used for interconnecting devices. The access layer is Layer 2 only. The aggregation VDCs are the boundary between switched and routed traffic. Note 3-14 There is a drawback of this design. DCUFD v5. are all Layer 3 (routed). The topmost VDCs form the data center core. and between the core VDC and the campus core. there is a core VDC configured.certcollecion. On every Nexus 7000 Series Switch. Each aggregation VDC has connections to its set of access switches or fabric extenders (FEXs). Both aggregation VDCs have dedicated links to the access layer. The links between the core VDCs. there is a possibility to consolidate data center core and aggregation switches: Data Center Core VDC Layer 3 Cisco Nexus 7000 Series Switches DC Aggregation VDCs Layer 2 only Multiple aggregation blocks— one per VDC Data Center Access © 2012 Cisco and/or its affiliates. To interconnect the VDCs. Inc. These consume twenty 10 Gigabit Ethernet ports. The example shows the left and right aggregation VDCs on each switch. For 10 Gigabit Ethernet connections. you need 10 links between the pair of switches to accommodate all needed connections. All rights reserved. They run IP routing with the core. Designing Cisco Data Center Unified Fabric (DCUFD) v5. this approach is expensive. This VDC forms routing adjacencies with the campus core switches. .net • When using VDCs on Cisco Nexus 7000 Series Switches. and a FHRP between them.

certcollecion. Data Center Topologies 3-15 . Inc. © 2012 Cisco Systems.net Summary This topic summarizes the primary points that were discussed in this lesson.

.net 3-16 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. Inc.certcollecion.

Spanning Tree Protocol (STP)-based deployments. or Cisco Unified Fabric-enabled aggregation layer designs. Cisco FabricPath enabled. you will be able to design data center connections. Objectives Upon completing this lesson.certcollecion.net Lesson 2 Designing the Data Center Aggregation Layer Overview This lesson describes the approaches and possibilities for designing the data center aggregation layer. from existing. This ability includes being able to meet these objectives:  Describe classic aggregation layer designs  Design the aggregation layer with VDCs  Design the aggregation layer using Cisco Unified Fabric  Design the aggregation layer with IP storage-related specifics in mind . there are many technology options. topologies. Based on the equipment available. to modern. and services in the aggregation layer.

0—3-4 The classic aggregation layer design using spanning tree is still used in many existing data centers. Inc. • Classic design: Layer 3-Layer 2 boundary at the aggregation layer • IP routing and ECMP forwarding between aggregation and core layers • STP manages the Layer 2 domain: aggregation and access • FHRP protocols in the aggregation layer to provide default gateway functionality Core Layer Aggregation Layer Access Layer ECMP = Equal-Cost Multipath © 2012 Cisco and/or its affiliates.0 © 2012 Cisco Systems.net Classic Aggregation Layer Design This topic describes classic aggregation layer designs. follow these recommendations: 3-18  Interswitch link belongs to the Layer 2 domain. DCUFD v5. like per-VLAN Rapid STP [RSTP]) and are generally robust and with relatively fast reconvergence upon topology changes. . Another classification of the type of data center aggregation layer is whether the aggregation layer uses Layer 2 only. Designing Cisco Data Center Unified Fabric (DCUFD) v5. When using the aggregation layer as a Layer 2-Layer 3 boundary. you need to configure switch virtual interfaces (SVIs). a high oversubscription ratio. This action will route traffic on Layer 2 and Layer 3 on the same path. or introduces the demarcation point between Layer 2 and Layer 3 forwarding. and virtual port channel (vPC) for Cisco Nexus devices.certcollecion. and with this. and possibly other IPbased services. Several new technologies emerged that reduce oversubscription by enabling all links. such as firewalling. the biggest flaw of STP-based designs is the poor utilization of links. high availability. Device capabilities typically include VLANs and virtual routing and forwarding (VRF) instances. but these still use the same control and data planes on the switch.  Match the STP primary root bridge and the Hot Standby Router Protocol HSRP and Virtual Router Redundancy Protocol (VRRP) active router to be on the same device. Classic aggregation layer designs also do not provide any lower-level separation of traffic between parts of data centers. These designs are based on STP (some of its variants. However. When designing a classic aggregation layer. Cisco offers the Multichassis EtherChannel (MEC) for Cisco Catalyst devices. All rights reserved. routing. A possible solution to this concern is the virtual device contexts (VDCs) that are implemented on the Cisco Nexus 7000 Series Switches.

Data Center Topologies 3-19 .certcollecion. © 2012 Cisco Systems.net First Hop Redundancy Protocols There are three options that First Hop Redundancy Protocols (FHRP) you can use:  HSRP: The Cisco proprietary default gateway redundancy protocol has the widest selection of features. Load distribution between default gateways is managed by GLBP and is configurable. Returning traffic (downstream) usually enters the network at a single point.  Gateway Load Balancing Protocol (GLBP): GLBP is another FHRP that allows you to use several default gateways to forward traffic upstream for server subnets. Inc. as well as customizations for vPC and virtual port channel plus (vPC+). The primary difference is that the virtual IP address can be same as one of the interface addresses.  VRRP: The standards-based VRRP offers functionality that is similar to HSRP. It provides several options for object tracking and interaction with other router processes.

The core and aggregation functions are combined on the same switch. All rights reserved. Layer 3 domain.certcollecion.net • Optimized classic design for smaller networks • IP routing and ECMP forwarding between collapsed aggregation and core and campus core layers • STP manages the Layer 2 domain: aggregation and access Collapsed Core & Aggregation Layer Physical Access Layer Virtual Access Layer © 2012 Cisco and/or its affiliates. with the same protocols configured on the collapsed switches: STP for the Layer 2 domain.0—3-5 The collapsed core design is suitable for smaller data center networks. such as vPC and Cisco FabricPath. or when you wish to obtain deliberate concentration on core switches. and IP routing protocol for the upstream. there are better options available. This design still uses the STP protocol as the loop-prevention mechanism for the access network. . DCUFD v5.0 © 2012 Cisco Systems. today. 3-20 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc.

Aggregation VDC and access VDCs (Cisco FabricPath and FEX combination with F1 I/O modules) © 2012 Cisco and/or its affiliates. depending on your use case:  Core VDC and aggregation VDC (collapsed core design with VDCs)  Core VDC and several aggregation VDCs  Core VDC. The VDC tag has local meaning that identifies the VDC to which the packet belongs. I/O modules have a fixed amount of ternary content addressable memory (TCAM) available for packet forwarding. On the data plane. VDCs provide isolation on the control plane and on data plane. a copy of the entire protocol stack is started for every VDC.Core VDC. From the control plane perspective. Cisco Overlay Transport Virtualization (OTV) VDC . and so on. • Allows various topologies with a single. storage VDC . Having a VDC that utilizes many these resources on a particular I/O module might prevent you from having any other VDC on that I/O module. VDCs also virtualize the switch on the Cisco NX-OS software level. VDCs can be used to separate various aggregation layers. access lists.net Aggregation Layer with VDCs This topic describes how to design the aggregation layer with VDCs. storage VDC © 2012 Cisco Systems. This allows you to connect various networks and to construct multiple topologies using a single (or a pair) Cisco Nexus 7000 Series Switch. routing protocol. consolidated device • Aggregation switches can be divided to VDCs to accommodate various needs or run incompatible features: .Core VDC and aggregation VDC . All rights reserved.Core VDC and several aggregation VDCs . and quality of service (QoS).certcollecion. Data Center Topologies 3-21 . Inc.0—3-7 VDCs are a Cisco Nexus Operating System (NX-OS) mechanism that allows virtualization of a single physical switch into multiple logical switches. There are several designs that are possible using VDCs.Aggregation VDC. aggregation VDC(s). aggregation VDC(s). traffic is separated using a VDC tag. with all required higher-level protocols that are required in that VDC: IP. • VDCs provide a means for switch virtualization. As opposed to virtualization that uses only VLANs. When using VDC separation in the aggregation layer. Note You must take care when assigning interfaces to the VDCs and to resource consumption. DCUFD v5.

. In the case of the Cisco Nexus 7000 F1 modules. One VDC hosts the core services. You should provision in such a way that you have enough physical ports available.0 © 2012 Cisco Systems. Connections need to be physically cabled from F1 I/O modules to M1 I/O modules. while another VDC offers aggregation services.0—3-8 In this example. you provide core and aggregation services on the same physical switch. Cisco FabricPath and Cisco FEXs on M1 and F1 Modules VDCs can be used to bridge the Cisco FabricPath domain to Cisco FEXs.net  Aggregation VDC and access VDCs (Cisco FabricPath and Cisco Fabric Extender [FEX] combination with F1 I/O modules)  Aggregation VDC “above” network services and subaggregation VDC “below” network services • Collapsed core design example: Campus Core DC Core Core VDC Multiple Aggregation VDCs DC Aggregation DC Access (dotted lines indicate physical switch) © 2012 Cisco and/or its affiliates. where Cisco FEXs are connected. Designing Cisco Data Center Unified Fabric (DCUFD) v5. as long as the total VDC count does not exceed the maximum that is supported by the Cisco Nexus 7000 platform. Inc. and that they are in different port groups to maximize available bandwidth. A similar solution to the aggregation and subaggregation layer VDC “sandwich” is the collapsed core data center topology. where traffic can be controlled. All rights reserved. you cannot connect Cisco FEXs to these modules directly. This example isolates multiple segments and makes all the traffic flow through the core VDC. In this case. Both VDCs are then consolidated in the same physical switch. You will need physical cables to interconnect VDCs with each other. Note 3-22 This limitation does not exist anymore with F2 I/O modules.certcollecion. the switch can accommodate multiple aggregation layers for multiple segments. DCUFD v5.

along with FHRP. without a configured SVI. while another VDC can be configured as an internal or private VDC. you can also segment the networks using VLANs and VRFs. One VDC is configured as an outside or public VDC. and inserting services in between. In this case. there is a limitation that a VLAN cannot be extended over a Cisco OTV overlay to another site if there is an SVI that is configured for that VLAN. that VLAN is Layer 2 only. Data Center Topologies 3-23 . maintaining the inside-outside demarcation. Note © 2012 Cisco Systems. Multiple VDCs within the same physical switch need to be linked using a physical cable. All rights reserved.0—3-9 One of the use cases for VDCs is to divide the aggregation layer to public and private parts. • Public and private VDC design example: DC Core Public VDC DC Aggregation Private VDC Access (Dotted lines indicate physical switch) © 2012 Cisco and/or its affiliates.net Cisco OTV in a Separate VDC Another example of such usage is Cisco Overlay Transport Virtualization (OTV). Since most VLANs do have SVIs configured. Keep in mind the requirements of each I/O module. and how ports can be assigned in multiple VDCs. forming a “sandwiched” network architecture. After this separation. Inc. the recommendation is to create a dedicated VDC for Cisco OTV.certcollecion. VDC Sandwich Configuration One of the examples is the “VDC sandwich” configuration. This segmentation involves isolating applications unto themselves. with security services interlinking these VDCs. DCUFD v5. In the Cisco OTV VDC. the VDCs are used for traffic separation when implementing IP Services in the aggregation layer. where the aggregation layer is divided into public and private VDCs. In Cisco OTV. and patch that VLAN over a trunk link from the production VDC into the Cisco OTV VDC.

and so on. and so on). They may or may not support dynamic routing. if applicable. and high-availability technologies (vPC.net The benefit of this solution is that you consolidate the public and the private part of the network in the same physical switch. Inc. 3-24 Designing Cisco Data Center Unified Fabric (DCUFD) v5. you still need a pair of aggregation switches.0 © 2012 Cisco Systems. HSRP. in this example) may dictate your selection on where would you implement the boundary between Layer 2 and Layer 3 forwarding. To ensure service redundancy. You can decide which VDC can provide Layer 3 functionality.certcollecion. and which one can provide only Layer 2 functionality. . Service appliances (firewalls.

The service chassis can form a Virtual Switching System (VSS) for increased redundancy.Implemented with Cisco ASA and Cisco ACE . Inc.Physical connections between aggregation switches and service appliances • Services in a service chassis: - Implemented with Cisco Catalyst 6500 service chassis with ASA-SM/ FWSM and ACE - Physical connections to service chassis. such as data center traffic optimization using Cisco Wide Area Application Services (WAAS). easier management.certcollecion. Integrated Services are limited to the Cisco Catalyst 6500 platform.0—3-10 There are two ways to deploy network services: as standalone services. there are differences depending on the capacity of each component: the standalone firewall or the load balancer. shorter convergence time. and not really with the number of available ports on the appliance. at a higher capacity than the standalone appliance. using appliances for firewalling and possibly server load balancing. Data Center Topologies 3-25 . Note Integrated Services can provide another functionality—the route health injection (RHI)— which cannot be done using external appliances. Both solutions are valid choices and can be used depending on the hardware that you have available. A routing protocol must be running between the service chassis and the aggregation switch. © 2012 Cisco Systems. Performance-wise. Integrated Services communicate with the switch through its internal backbone and can connect using higher speed to the aggregation switches.net • Standalone services: . internal connections to service modules VSS © 2012 Cisco and/or its affiliates. Standalone services are implemented using Cisco ASA standalone adaptive security appliances. and easier configuration of EtherChannels to the pair of Nexus 7000 Series Switches. and using the Cisco Application Control Engine (ACE) module to support server load balancing. including the Cisco Adaptive Security Appliance Security Module (ASASM) and Cisco Firewall Services Module (FWSM) for firewalling. DCUFD v5. or by using Integrated Services to exploit the capability of the Cisco Catalyst Series switches that have Integrated Services. and the Cisco ACE30 module to provide server load-balancing functionality. which has a wide selection of service modules. You are limited only by the capacity of the port channel. Other service options are also available. All rights reserved.

On the left side of the figure.Physical connections between aggregation switch VDCs and service appliances - Internal connections (VLANs) to service modules Public VDC VSS Protected VDC © 2012 Cisco and/or its affiliates.certcollecion. and the traffic from the security appliance to the load balancer flows through the default VDC. Active-active load distribution is achieved by using multiple contexts on the service modules. Inc. The VSS operates as a single switch.net • Services with VDCs: • Integrated services: . These include the Cisco ASA-SM and the Cisco ACE30 server load-balancing device. there is a combination of standalone services with a VDC sandwich design.0—3-11 Implementing Services with VDCs You can deploy network services in the aggregation layer using VDCs as well. Implementing Fully Integrated Services Another design is a fully integrated option that takes advantage of the Catalyst 6500 VSS. while the service modules operate in active-standby operation mode.Implemented with standalone appliances and multiple VDCs on aggregation switch - Implemented with Cisco Catalyst 6500 service chassis with ASA-SM/ FWSM and ACE . and Integrated Service modules that plug into the Catalyst 6500. . 3-26 Designing Cisco Data Center Unified Fabric (DCUFD) v5. All rights reserved.0 © 2012 Cisco Systems. The default or internal VDC can host all VLANs that are required for redundant operation of the service appliances. DCUFD v5. The aggregation switches have the public and internal VDCs deployed.

Data Center Topologies 3-27 . © 2012 Cisco Systems. It is always a good idea to completely separate the management and provisioning network from the production network. but do not scale to hundreds of customers.certcollecion. One VDC can be configured with external services in appliances. even if you have a breach into the system of one of your customers. so it makes sense to use VDCs to isolate systems within the data center. when you need to provide different levels of service (or different services) to customers. where another VDC can provide basic connectivity for “lightweight” users. VDCs do provide separation.net VDCs can be used in multitenant data centers that support various types of cloud solutions. the danger cannot spread in the management VDC. Inc. This way. VDCs can be used for production network tiering.

With server virtualization taking place and virtual appliances being available.certcollecion. Physical devices feature high-bandwidth capacity and can be used for raw traffic inspection. customer.0 © 2012 Cisco Systems. Note 3-28 Keep in mind that virtualized services cannot have the same capacity as hardware-based appliances. you can combine approaches for an even more robust and flexible deployment. most typically using hardware-based.Services point of attachment in aggregation layer Tenant inspection Tenant 1 High-bandwidth inspection Nexus 1000V VEM ASA 1000V vACE VMs Tenant 2 Tenant 3 Nexus 1010V: VSM ASA 1000V vACE © 2012 Cisco and/or its affiliates. Inc.Implemented with virtual apliances. Virtualized devices can be deployed with maximum flexibility. All rights reserved. such as Cisco ASA 1000V and Cisco vACE .net • Virtualized services: . They are dependent on the host CPU and on the networking hardware.0—3-13 Implementing Virtualized Services The aggregation layer hosts various IP Services. standalone devices or service modules. Designing Cisco Data Center Unified Fabric (DCUFD) v5.Maximum scalability . Host … DCUFD v5. or another context within an existing instance. allowing you to customize (or automatically generate) configuration for every tenant. . you can also deploy another instance of the virtual services appliance. and department for which you are hosting services. If the tenant has multiple networks or applications.

the equipment must support various ways of interconnecting new equipment and existing systems. This is one of the options and we move on from here.certcollecion. © 2012 Cisco Systems. Data Center Topologies 3-29 . mainly to simplify the network and to reduce costs. Therefore. In the example in the figure. With the development of Cisco Unified Fabric and all supporting technologies and hardware. Cisco Unified Fabric is used only in the access layer. Inc. where Fibre Channel connections are “detached” and connected to the existing SAN infrastructure.net Aggregation Layer with Unified Fabric This topic describes how to design the aggregation layer when using unified fabric. Note Migrations are usually not done before some equipment reaches the end of its investment cycle. it makes sense to consolidate access and aggregation layers.

Each Cisco Nexus 7000 aggregation switch is then connected using FCoE to the Cisco MDS 9500 SAN aggregation or collapsed core layer. These switches are kept separate for data traffic and for storage traffic. so that you can connect a new data center aggregation network to the existing SAN infrastructure. The only upgrade you need for the Cisco MDS directors is FCoE connectivity. Note the connections between the access and aggregation switches. Such a situation is very common. When using the Cisco Nexus 7000 to aggregate storage connections.0 © 2012 Cisco Systems. These ports are shared between the data connectivity VDC and the storage VDC. Inc. or to a FCoE-capable storage device directly. Outer connections are storage-only. maintain the separation between fabric A and fabric B. Note To connect FCoE hosts to the Cisco Nexus 7000. 3-30 Designing Cisco Data Center Unified Fabric (DCUFD) v5. but use FCoE as their underlying transport. to provide two independent paths to the SAN. They are kept separate to provide fabric A and fabric B isolation.net The Cisco Nexus 7000 or 5500 Series Switches can be used as Fibre Channel over Ethernet (FCoE) aggregation switches. an exception is made to the rule that one physical port can belong only to one VDC. This functionality is available on F1 and F2 I/O modules. .certcollecion. and send storage traffic over a consolidated FCoE connection to an Cisco MDS 9500 Series Multilayer Director (which must have the FCoE I/O module). An FCoE connection must be used because the Nexus 7000 does not support Fibre Channel connections. These switches consolidate connections from access switches. These connections use their dedicated links so that they do not suffer from bursts of data from the data network.

which requires a storage VDC to be configured and appropriate licensing. The design of the connection between the access and aggregation layers is maintained.net This example is similar to the previous one. Inc. the access switches can be in N-Port proxy mode. FCoE storage devices are available. Data Center Topologies 3-31 . a distinct separation between fabric A and fabric B is still maintained. The traffic of the data network travels across the vPC. Using this design. with the difference that you do not have an existing SAN network. Using this approach. so you can connect them to the access (for small deployments) or to the aggregation layer directly (for larger deployments).certcollecion. using the FCoE N-Port Virtualizer (NPV). the Cisco Nexus 7000 must be a full Fibre Channel Forwarder (FCF). To provide Fibre Channel services and security. You then connect the storage array directly to Cisco Nexus 7000 aggregation switches using FCoE. © 2012 Cisco Systems.

The data traffic will be forwarded by the data VDC. When using the Cisco Nexus 5000 or 5500 Series Switches in the aggregation layer. and the storage traffic will be forwarded and managed by the storage VDC. depending the complexity of your aggregation layer and the kind of services will you be implementing. This design is common when extensively using FEXs. Inc. you need to have storage and data in separate VDCs. all types of traffic are combined in a single switching domain. Generally. 3-32 Designing Cisco Data Center Unified Fabric (DCUFD) v5.net The approaches of a collapsed core and unified fabric aggregation layer can be combined. . On Cisco Nexus 7000 Series Switches. You will need three to four VDCs on each Cisco Nexus 7000 Series Switch to implement this design.0 © 2012 Cisco Systems.certcollecion. which you need to use if you use the Nexus 7000 as an access switch. you cannot share interfaces between VDCs. so in this sense the virtual contexts are totally separate. The exception is a shared port.

such as the FCF services and the uplinks to the rest of the SAN. The storage VDC is not connected to other VDCs and serves the purpose of aggregating storage connections from the aggregation layer. the core layer features physical links to the campus core. It also depends on how you connect the equipment and if you are using Cisco Nexus 7000 M1 or F1/F2 type of I/O modules. and advertising routes to and from the data center. © 2012 Cisco Systems. Inc. and forwarding packets to the Cisco MDS Fibre Channel switches. storage protocols are configured. performing IP routing and Equal-Cost Multipath (ECMP) to the campus core.certcollecion. Data Center Topologies 3-33 . You need to design the SAN according to storage design best practices. provide routing adjacency. Core VDC The core VDC provides its usual services. and so on. to maintain the separation between two distinct fabrics. HSRP. All rights reserved. depending on your requirements for the services. You cannot combine FEXs and Cisco Unified Fabric on Cisco Nexus 7000 F1 modules. Note This VDC can consist of one single or two VDCs. and it serves as a vPC peer link to extend aggregation VLANs. This topology includes multihop FCoE.net • Collapsed core with storage VDC example: Campus Core DC Core A B Core VDC MDS 9500 with FCoE Nexus 7000 DC Aggregation Aggregation VDC Storage VDC DC Access Nexus 5000/5500 A © 2012 Cisco and/or its affiliates. Storage VDC In this VDC. B FC FCoE Ethernet DCUFD v5. Aggregation VDC The aggregation layer consists of one VDC that is connected to the core VDC with physical connections and performs ECMP Layer 3 forwarding.0—3-19 This example on the Cisco Nexus 7000 includes a compact core-aggregation block that is provided by the Nexus 7000 Series Switch. There is a link between the aggregation VDCs between two physical Nexus 7000 switches. In this example.

certcollecion. Access Layer The access layer consists of a pair of Cisco Nexus 5000/5500 Series Switches. These links are inexpensive (twinax 10 Gigabit Ethernet copper). and sends storage traffic through dedicated unified fabric links to the storage VDC. Note Shared interfaces: If you are connecting the hosts directly to the Cisco Nexus 7000. you will need to share the access interfaces between the data and storage VDC. Shared interfaces are not applicable in our case. This is to maintain fabric A and fabric B separation. and to prevent competition between data traffic and storage traffic.net Storage links from the access layer need to be separate from the data links. so cost should not be a concern. Inc. 3-34 Designing Cisco Data Center Unified Fabric (DCUFD) v5. with hosts connected to them using Cisco Unified Fabric.0 © 2012 Cisco Systems. The switch then separates the data traffic and sends it over vPCs to the aggregation data VDCs. . as we are not using the Cisco Nexus 7000 as an access switch for hosts.

DCUFD v5. Note © 2012 Cisco Systems.certcollecion. Inc. where a basic unit of saved data is an SCSI block. NFS and CIFS are file-level protocols. it is a practical point to attach IP-based storage. Network File System (NFS). Data Center Topologies 3-35 . File-level storage attaches on the file system level. All data center switches in the path of this data must support jumbo frames. All rights reserved. and Common Internet File System (CIFS). • IP storage array connects to the aggregation layer • Low number of connection points DC Core iSCSI NAS • Connecting to access would be uneconomical due to DC high cost of an IP port on a Aggregation storage array • Traffic to storage routed through an SVI or bridged through a VLAN VRF for Storage Connectivity DC Access Cisco UCS C Series © 2012 Cisco and/or its affiliates. The aggregation layer is where storage data is separated from other production data. Note These protocols typically utilize jumbo frames to carry the entire block of data.0—3-21 One of the options to implement storage is to do it over the IP protocol. depending on the number of interfaces that you have on the storage array and the number of access switches. From the server perspective. iSCSI is a block protocol. IP-based storage uses protocols like Internet Small Computer Systems Interface (iSCSI). For this reason. where block storage attaches below the file system implementation in the operating system. Block storage and file storage attach to the host operating system at different levels and at different points. IP-based storage can be reached through the main production data network interface card (NIC) of the server. Attaching storage to the access layer is considered less scalable. This type of storage manages segmenting the files into blocks internally. or by using a separate NIC. where the minimum unit of saved data is a file. Some of these NICs also have additional TCP offload capabilities (relevant for iSCSI) to optimize storage packet processing.net Aggregation Layer with IP-Based Storage This topic describes how to design the aggregation layer with IP storage-related specifics in mind.

storage traffic can be isolated in a VLAN (the case if you have multiple NICs. You can also open up dedicated paths for backup of the storage. Storage access is usually bandwidth-hungry and bursty. The VRF is then used to route storage traffic away.certcollecion. 3-36 Designing Cisco Data Center Unified Fabric (DCUFD) v5. . you can make sure that the storage array can be accessed only from internal subnets. and for storage replication. • Example of an aggregation or global VDC on a Nexus 7000 • The “lower" VRF is a point of decision for data traffic to be forwarded upstream or sent to storage • Traffic going upstream traverses service modules and routing points • Firewall policy set to forbid access to storage from upstream Data VDC DC Aggregation • IP storage connected to the aggregation layer in a VRF SVI VLANs iSCSI NAS VRF • VRF can accommodate special routing requirements for data backup or storage replication Cisco UCS C Series © 2012 Cisco and/or its affiliates. you also relieve the core from a high number of packets that use much bandwidth. This would unnecessarily consume their resources. which makes it difficult to measure and use the correct average values for bandwidth.net Depending on the solution. with a static route pointing to the storage subnet to a local interface. toward the storage array instead of sending it further upstream to the data center core. if necessary. Using this approach. a VRF is created “below” the services. All rights reserved. it does not make sense to route IP storage traffic through these service modules or appliances. connected to the storage array. DCUFD v5. or if you have multiple VLANs to the host).0—3-22 When using network-based application services. On the firewall. To divert the storage traffic from the path that production traffic takes. or in a VRF if you do not separate storage from data traffic on the host.0 © 2012 Cisco Systems. Inc.

If traffic to the IP storage is routed. DCUFD v5. All rights reserved. and all servers that require access to it. Data Center Topologies 3-37 .Does not require VRF. © 2012 Cisco Systems. Storage traffic is kept isolated from production traffic with VLAN isolation. one for storage) or a VLAN trunk toward the server. it traverses service modules before getting to the SVI and then it is bounced back toward the storage . but it requires either two NICs on the server (one for data. Instead.Unnecessary load on service appliances or service modules Data VDC SVI iSCSI NAS VLANs . The server then places storage traffic in the correct VLAN. You must ensure that you do not extend the storage VLAN toward the (possibly present) service modules. as it may consume their resources unnecessarily. but less flexible scenario: .net • Simpler. no static or dynamic routing required between VRF and SVI Cisco UCS C-Series © 2012 Cisco and/or its affiliates.0—3-23 A simpler design does not use the VRF. it uses a flat VLAN where the storage array is connected. but requires a VLAN trunk connection to the server . Inc.DC Aggregation certcollecion.Traffic from server to storage can be bridged. This design is simpler.

Inc.0 © 2012 Cisco Systems.certcollecion. .net Summary This topic summarizes the primary points that were discussed in this lesson. 3-38 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

With the introduction of Layer 2 multipathing technologies. you will learn about designing the access layer of the data center network. The general goal is to make the access layer as efficient and as cost effective as possible by offering high port density. you will be able to design the data center physical access layer. low oversubscription. and the lowest cost based on the needed features.net Lesson 3 Designing the Data Center Access Layer Overview In this lesson. Objectives Upon completing this lesson. This ability includes being able to meet these objectives:  Describe the classic access layer designs and design issues  Design the access layer with vPC and MEC  Design the access layer with FEXs  Design the access layer with Cisco Unified Fabric . the access layer has evolved in recent years.certcollecion.

certcollecion. where a switch is installed in the middle rack and with optimal connections to every server. DCUFD v5. Typically. and all needed VLANs are brought to access switches that connect physical servers. the classic access network design has used access switches that are connected to a pair of aggregation switches using one link to each aggregation switch. but require large. 3-40 Designing Cisco Data Center Unified Fabric (DCUFD) v5. access switches are typically located as top-of-rack (ToR) switches. The aggregation switches are interconnected between themselves using an interswitch link. All rights reserved. or toward other aggregation blocks. the aggregation switches terminate the Layer 2 domain and forward data traffic toward the core using Layer 3. • Classic design: Layer 2 connectivity between access and aggregation layers • Network segmentation using VLANs • STP manages the Layer 2 domain: aggregation and access Core Layer Aggregation Layer Access Layer © 2012 Cisco and/or its affiliates. and end of row (EoR).0—3-4 For a long time. Physically. Alternative designs include middle of row (MoR).0 © 2012 Cisco Systems. The ToR approach minimizes cabling because every rack has its local switch. where an access switch is installed in a rack at the edge of server racks. MoR and EoR simplify management. Their role is to provide network connectivity to servers that are installed in that rack. Networks are segmented using VLANs. Inc.net Classic Access Layer Design This topic describes the classic access layer designs and design issues. modular switches. .

The number of attached devices defines the number of ports that is needed. This defines the equipment type and the cabling. but in exchange. there are a few design considerations that the designer should take into account.net When designing the access layer. or Layer 2 encapsulation technologies.certcollecion. These are your design inputs. If the application requires Layer 2 domains to span across several aggregation blocks. Note A modern design of an access layer using fabric extenders (FEXs) combines all benefits from ToR and EoR designs: optimized cabling and a reduced number of managed devices. The access layer utilizing ToR switches has optimized cabling. data center fabrics. it has many more devices to manage. The access layer topology determines the placement of the switch and the form of the access layer. but more cumbersome cabling. you need to consider Layer 2 multipathing technologies. MoR. such as port channels and virtual port channels (vPCs). It needs to be defined by the application vendor as a design input parameter. The bandwidth requirement is similar. This requirement can be satisfied by using link scaling technologies. © 2012 Cisco Systems. Data Center Topologies 3-41 . The size of the Layer 2 domains also defines the data center design. and EoR. and VLANs that span across multiple switches need Layer 2 data center designs. Localized broadcast domains call for Layer 3 access layers. and on the scale of offered services. The number of ports is completely dependent on the application. EoR and MoR designs have fewer managed devices. Inc. The physical topology can be ToR.

net • Looped-triangle physical wiring STP Root Primary • STP as loop prevention mechanism: .  If the primary root bridge fails. . 3-42 Designing Cisco Data Center Unified Fabric (DCUFD) v5. This means that an access switch connects to both aggregation switches.STP on access switches blocks path to secondary root bridge . with a potential to create a Layer 2 loop.certcollecion. If STP is set correctly. DCUFD v5. the link from the access switch to the STP secondary root bridge will be blocked.0—3-6 The classic access layer requires triangle looped physical wiring. the link to secondary root bridge begins forwarding • Main downside: one link always blocked. the link to secondary root bridge begins forwarding .0 © 2012 Cisco Systems. This is why the Spanning Tree Protocol (STP) is used as a loop prevention mechanism.If the primary root bridge fails. only 50 percent of installed bandwidth available STP Root Secondary Aggregation Access Triangle Loops © 2012 Cisco and/or its affiliates. STP on the access switch will enable the link to the secondary root bridge. allowing utilization of only half of the installed bandwidth. STP then manages network convergence in case of topology changes:  If the link to the primary root bridge fails. Inc.If the link to primary root bridge fails. STP on the access switch will enable the link to the secondary root bridge. All rights reserved. which are also interconnected themselves. This forms a triangle. The main downside of the classic access layer design is that one link is always in the blocking state.

Inc. There are differences in how MEC and vPC function on the control plane and data plane levels. and vPC is employed when Cisco Nexus series switches are used. DCUFD v5. It only needs to support EtherChannel. The physical wiring must be the same—the looped triangle.net Access Layer with vPC and MEC This topic describes how to design the access layer with the vPC and Multichassis Ethernet (MEC). all uplinks from access switches can be used. and reducing by half the oversubscription ratio.certcollecion. All rights reserved. Two technologies are available that can overcome this limitation: vPC and MEC. this is discussed in detail in a separate lesson. doubling the bandwidth available to the servers.0—3-8 The classic access layer has a disadvantage that is inherited from STP: the high oversubscription ratio that is caused by one link in the blocking state. © 2012 Cisco Systems. which enables all upstream links for traffic forwarding • Access switch can be generic and must support EtherChannel STP Root Primary STP Root Secondary Generic L2 switch VSS vPC Generic L2 switch with EtherChannel support © 2012 Cisco and/or its affiliates. Data Center Topologies 3-43 . You use one or the other based on the hardware in the aggregation layer. • Optimization of access links: 2 times more bandwidth available compared to classic STP design • STP is no longer used as loop prevention. MEC is used with the Cisco Catalyst 6500 Virtual Switching System (VSS). The access switch can be generic. With MEC and vPC.

these extenders offer Fast Ethernet and Gigabit Ethernet downstream connectivity. FEX Models These are the FEX models: 3-44  Cisco Nexus 2148T FEX: This is the first FEX and offers Gigabit Ethernet connectivity to servers. This substantially simplifies the network design because the FEX is not an additional logical network element. FEXs are unmanaged devices. Note The FEX enables increased access layer scalability without increasing management complexity. and four 10 Gigabit Ethernet upstream connections. Additional Fast Ethernet connectivity support was added to connect to server management interfaces. There are two sizes: the Cisco Nexus 2224 has two uplink ports. The FEX is often used to provide low-cost. Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc. Nexus 2248TP/2232FP DCUFD v5. FEX is an inexpensive solution to connect Fast Ethernet ports in the data center that are typically used for management without requiring a dedicated management switch.net Access Layer with FEXs This topic describes how to design the access layer with FEXs. • Cisco Nexus 7000 or 5000/5500 as managing switch • Cisco Nexus 2148. . 1-Gb native Ethernet connectivity to devices that would otherwise consume an “expensive” 10 Gigabit Ethernet port on the Cisco Nexus 5000/5500 or 7000 Series Switches.  Cisco Nexus 2224TP and 2248TP FEXs: An enhanced version of the original FEX. It does not require a visible management IP address on its own because it is managed through the managing switch.0—3-10 FEXs are a cost-efficient way to design the access layer. All rights reserved.0 © 2012 Cisco Systems. while the Cisco Nexus 2248 has four. 2248. They are managed by their upstream. or 2232 as FEX • Downlink connections from 100 Mb/s to 10 Gb/s depending on fabric extender type • Uplink connections 10 Gb/s Ethernet Nexus 7000 Nexus 5500/5500 Nexus 5500/5500 vPC Nexus 2148T/2248TP/2232FP Nexus 2248TP/2232FP © 2012 Cisco and/or its affiliates. managing device. 2224.certcollecion.

Host Interfaces These ports connect the servers to the network. Data Center Topologies 3-45 . This FEX also supports Cisco Unified Fabric for data and storage connectivity using the extender. Even if devices are connected to the FEX. connecting the FEX to multiple upstream switches for additional redundancy and load balancing Ports on the FEX are of two types: server ports and uplink ports. an additional switch can be connected to a host or server port to additionally or temporarily further extend the network. and eight FCoE and DCB 10 Gigabit Ethernet ports for uplinks. traffic between them needs to be switched by the managing switch. Fabric Ports Uplink ports connect the FEX to the network managing switch or switches. © 2012 Cisco Systems.net  Nexus 2232PP FEX: This FEX has 32 10 Gigabit Ethernet Fibre Channel over Ethernet (FCoE) and Data Center Bridging (DCB) ports for downlinks. The managing switch performs all switching functions for devices that are attached to the FEX. In some cases.certcollecion. Connecting the FEX FEXs can be connected using these methods:  Single or multiple direct FCoE and DCB or native Ethernet connections  Single port channel connection consisting of multiple physical connections to the same managing switch  vPC connection. Inc.

certcollecion. . This behavior is designed to trigger the NIC teaming driver on the server to switch over to a standby NIC. When using FEXs. Each server is assigned its own upstream interface that it uses to send and receive traffic. Interface Pinning When connecting a server to an FEX that has standalone upstream connections to the managing switch. host interfaces need to be repinned.net Note This makes the FEX less desirable for servers that have much traffic between them. Another option is to bundle the uplinks into a port channel. and little traffic between themselves (“East-West”). For the server to regain connectivity. In case of failure of that link.0 © 2012 Cisco Systems. The load-balancing paradigm for this case is per-server load balancing. the extender performs the server interface pinning. Inc. Such servers are still best connected using a proper switch. All servers are pinned to the same port channel interface. Port channel achieves better load balancing and has better resiliency. Note The FEX does not automatically repin lost connections to a working link. the best solution is to connect servers that have much upstream traffic (“North-South”). such as clustered servers. Load distribution on the links depends greatly on the amount of traffic that each server produces. which does not go offline until at least one physical interface (a member of port channel) is still functioning. 3-46 Designing Cisco Data Center Unified Fabric (DCUFD) v5. the server loses connectivity. Interface pinning can be static or dynamic. and so on. application serves.

Teaming can work in active/standby or active/active operating regime.certcollecion. which are managed by a single managing switch. Later. teamed NICs can operate in active/standby mode. The FEX is connected by two physical connections (forming a port channel) to the network. In this case. Active / Standby NIC Teaming Active / Active No NIC Teaming DCUFD v5. All rights reserved.0.  The third example shows a connection where a server is connected to every FEX using a link. a server is connected to two FEXs. All servers pin to this logical uplink. Data Center Topologies 3-47 . The Nexus 7000 Series Switches support attachment of Nexus 2232 FEXs that support Cisco Unified Fabric. and the Cisco Nexus Operating System (NX-OS) needs to be version 6. Inc. Having several network interface cards is called network interface card (NIC) teaming.  The last example shows an active/active connection.0—3-12 There are several topologies that are available when using FEXs that depend on the requirements for servers and the physical topology of the network. but these connect to a pair of switches. In this mode.  In the second example. The topologies with FEXs and the Cisco Nexus 7000 and 5000/5500 Series Switches that are supported depend on the NIC operating regime. the server uses MAC address pinning and forwards traffic for various internal and virtual MAC addresses through one NIC only. In this case. it depends on the mode in which these connections operate.net • Cisco Nexus 7000/5000/5500: Managing switch • Cisco Nexus 2148T/2248TP/2232FP: Fabric extender Active / Standby NIC Teaming © 2012 Cisco and/or its affiliates. if the server has multiple connections to the network. The line cards on the Nexus 7000 must be with the F2 forwarding engine.  The first connection in the figure is the basic setup. the NICs are allowed to operate in active/standby regime. Note © 2012 Cisco Systems. All links can be native Ethernet or DCB.

certcollecion.  The second example supports an active/standby NIC teaming scenario with the server. 3-48  The first example extends the port channel from the FEX to the server.0—3-13 The following designs are supported when using a port channel connection from the FEX to the server. Designing Cisco Data Center Unified Fabric (DCUFD) v5.  The third example supports an active/active NIC teaming scenario. Inc. Server PortChannel Active / Active NIC Teaming DCUFD v5. All rights reserved.0 © 2012 Cisco Systems.net • Nexus 7000/5000/5500: Managing switch • Nexus 2248TP/2232FP: Fabric extender Server PortChannel Active / Active NIC Teaming Server PortChannel Active / Standby NIC Teaming © 2012 Cisco and/or its affiliates. with a port channel extended to the server. with a vPC extended to the server. both on the uplink side and on the downlink side of the FEX. .

the server can have either two active connections to the network.net • Nexus 5000/5500: Managing switch • Nexus 2248TP/2232FP: Fabric extender vPC vPC vPC Dual-homed FEX Active / Standby NIC Teaming © 2012 Cisco and/or its affiliates. In this case.0—3-14 The following designs are supported when using vPC between the managing switch and the FEX. Data Center Topologies 3-49 .  The first example dual-homes the FEX to two Cisco Nexus 5000 managing switches. All rights reserved. Inc. The server then connects in an active/standby manner using simple NIC teaming.certcollecion.  The second example connects the FEX using a vPC to the managing switches. and additionally forms a vPC to the server.  The last example multihomes the FEX. and connects the server with a single NIC. Enhanced vPC – EvPC Active / Active (N5500. or have an active/standby connection. N2200 only) Dual-homed FEX with single-NIC server DCUFD v5. © 2012 Cisco Systems. These designs can be used when the managing switch is a Cisco Nexus 5000 or 5500 Series Switch.

in reality.0—3-15 This slide lists the topologies that are not possible: 3-50  The first example would.  The third example is not possible due to an asymmetrical connection.  The second example is not possible because two virtual device contexts (VDCs) of the same Nexus 7000 chassis cannot form a vPC.certcollecion. Inc. and try to form a vPC.net Unsupported topologies: VDC 1 VDC 2 vPC vPC Active / Active Active / Active © 2012 Cisco and/or its affiliates. All rights reserved. form a port channel from the managing switch to the server. Designing Cisco Data Center Unified Fabric (DCUFD) v5. over two FEXs.0 © 2012 Cisco Systems. The server would connect directly to the managing switch using one NIC. but over an extender using the second NIC. . Active / Active DCUFD v5.

but takes advantage of high port density and low price of 10 Gigabit Ethernet ports offered by the Cisco Nexus 7000. Note Only the M1 32-port 10 Gigabit Ethernet I/O modules can connect FEXs.certcollecion. All rights reserved.0—3-16 Modern data center designs may use Cisco FabricPath in the aggregation layer due to its advantages regarding network resiliency. Data Center Topologies 3-51 . . so the only way of using both FabricPath toward the network and FEXs to the servers is to create two VDCs and link them with a physical cable. DCUFD v5. and its ability to use multiple paths between any two switches in the fabric.F1 and M1 modules need to work in different VDCs.  The second scenario uses the F2 I/O module. 1 Gigabit Ethernet I/O module do not have this ability. so connecting both types of ports to the same switch is not an issue and is supported.net • Combination of FEX and FabricPath: . Due to hardware design reasons. there are two ways to implement FEXs in such a network when using a Cisco Nexus 7000 Series Switch as the access switch. load balancing. 10 Gigabit Ethernet module and the 48-port. © 2012 Cisco Systems. Note  Such setup is not very common. The 8-port.F2 modules support FabricPath and FEX in the same VDC. while the M1 I/O modules support connecting FEXs. The first scenario uses a Cisco Nexus 7000 switch with two types of I/O modules installed. which need to be interconnected with a physical link. Inc. Another limitation is that you need a dedicated VDC to face the Cisco FabricPath network. The F1I/O modules support Cisco FabricPath. FP Interface CE Interface to FabricPath Aggregation F1 I/O module FabricPath VDC M1 I/O module FEX VDC to FabricPath Aggregation F2 I/O module © 2012 Cisco and/or its affiliates.

net Access Layer with Unified Fabric This topic describes how to design the access layer with unified fabric. wide support. Note 3-52 Such were the initial deployments with Cisco Nexus 5000 as an FCoE access switch. The benefit of this design is simplicity.certcollecion. you may or may not run Fibre Channel services.0—3-18 The most traditional way to implement unified fabric in the access layer is to connect the servers with Converged Network Adapters (CNAs) to two access switches with a UF connection. On the access switch. and easy integration into an existing Fibre Channel SAN. . • Most common scenario: access layer terminates FCoE. Inc. or can be a full FCF A B MDS with native FC connectivity A B Nexus 5000/5500 (FCF) FC FCoE Ethernet © 2012 Cisco and/or its affiliates.0 © 2012 Cisco Systems. DCUFD v5. and rely on the upstream Fibre Channel switch to provide Fibre Channel services.  You can use the switch as a full Fibre Channel Forwarder (FCF)  You can use the switch in N-Port Virtualizer (NPV) mode. All rights reserved. Designing Cisco Data Center Unified Fabric (DCUFD) v5. and uses Ethernet and Fibre Channel upstream • Initial deployments with Cisco Nexus 5000 • Access switch can run Fibre Channel NPIV.

certcollecion.0—3-19 Another option is not to terminate FCoE on the access switch. see the “Designing the Data Center Aggregation Layer” lesson. DCUFD v5. Keep in mind that you need distinct separation between Fabric A and Fabric B to provide two independent paths from the server to the storage array. From the access switch perspective. The access switch may be the full FCF if Fibre Channel services are needed (that is. or upstream to the aggregation switch. or may run in FCoE NPV mode if the upstream aggregation switch is the FCF. Data Center Topologies 3-53 . Note For this design example. when attaching directly to FCoE storage array).net • Option with Unified Fabric only: directly attached FCoE storage array • Access switch is a full FCF to support advanced Fibre Channel configuration • Some FCoE storage arrays may work without an FCF • Keeps distinct separation of Fabrics A and B FCoE A FCoE Storage Array B Nexus 5000/5500 (FCF) A B FCoE Ethernet © 2012 Cisco and/or its affiliates. © 2012 Cisco Systems. All rights reserved. there is not much difference. but instead carry it forward to either an FCoE-capable storage array. Inc.

the access switch can be in FCoE NPV mode. The Cisco Nexus 2232 supports 10 Gigabit Ethernet FCoE/DCB host interface (HIF) downlinks.0—3-20 You can use the Cisco Nexus 2232PP FEX to connect the server to the network using the CNA. 3-54 Designing Cisco Data Center Unified Fabric (DCUFD) v5. the server is dual-homed to two FEXs.  In the second example. which then separate data traffic and transport it over a vPC (to provide better load balancing). If you need Fibre Channel services. The access switch then separates data traffic from storage traffic and sends each type of traffic to its appropriate upstream connection. Storage traffic is transported over the direct uplinks within the vPC. you typically need full Fibre Channel services on the access switch. . The vPC uses the cross links for native Ethernet data only. you can use the access (FEX managing) switch as a full FCF. All rights reserved. the server is dual-homed to two FEXs to provide two independent paths to the storage array.  In the first example. Inc. Both links carry data and storage traffic. while the direct links are used for data and storage traffic. If connecting to another FCoE/DCB FCF switch upstream.net • Option with Unified Fabric only and a fabric extender • Access switch is a full FCF to support advanced Fibre Channel configuration if using a directly attached FCoE storage array • Dual-homing the server to 2 FEXs keeps distinct separation of Fabrics A and B FCoE Nexus 5500 FCoE Nexus 5500 A A B DCB DCB Nexus 2232 A B B vPC DCB A vPC DCB Nexus 2232 B vPC DCB provides Fabric A/B separation for storage traffic vPC DCB FCoE Ethernet © 2012 Cisco and/or its affiliates.0 © 2012 Cisco Systems. DCUFD v5. If connecting directly to an FCoE-capable storage array.certcollecion.

In an MoR deployment. This setup is very suitable for high-performance standalone servers. which may use the Cisco Nexus 7000 as a MoR access switch.0—3-21 The Cisco Nexus 7000 has a distinct configuration when used as a unified fabric switch.Data VDC carries data traffic . and has high port density.net • Nexus 7000 as MoR Layer 2 access switch • Nexus 7000 has distinct separation between data and storage traffic . The FCF needs to be run in its own VDC.Storage VDC runs the FCF • Server connects to an interface that is shared between two VDCs. Note © 2012 Cisco Systems. The FCoE VLAN is added to data VLANs. Data Center Topologies 3-55 . most servers can be reached using 10 Gigabit Ethernet copper-based twinax cables. MDS 9500. or to a Nexus 5000/5500/7000 FCF. DCUFD v5. The storage VDC then connects upstream to either a FCoE-capable storage array.certcollecion. • Data VDC connects upstream to data network. while data traffic is forwarded through regular data VDCs. Only FCoE connectivity is possible on the Nexus 7000 because the Nexus 7000 does not have native Fibre Channel ports. The most suitable configuration of the switch is the Nexus 7009 because it is in a very compact factor suitable for MoR deployment. All rights reserved. The FCoE VLAN is relayed to the storage VDC. and the switch port that terminates this connection is shared between the data VDC and the storage VDC. FCoE storage array) Data VDC Data Interface Shared Interface FCoE Interface Storage VDC F1/F2 I/O module © 2012 Cisco and/or its affiliates. which can be up to 10 meters long and are inexpensive. Inc. The server connects to the switch using a CNA. the data VLANs remain in the data VDC. or to an Cisco MDS 9500 Series Multilayer Director with an FCoE I/O module. storage VDC connects upstream to FCoE network (Nexus 5000/5500/7000.

0 © 2012 Cisco Systems.net Summary This topic summarizes the primary points that were discussed in this lesson. Inc.certcollecion. 3-56 Designing Cisco Data Center Unified Fabric (DCUFD) v5. .

and describe scalability limitations and application impact. more focus is put on switching traffic between virtual machines. this situation was entirely managed by the hypervisor software.certcollecion. Initially. Objectives Upon completing this lesson. which provided poor visibility into network traffic by network administrators. Cisco is extending the network infrastructure into the virtualized hosts to provide more control of traffic flows that would otherwise be contained within the host. you will be able to design the data center virtual access layer and related physical connectivity.net Lesson 4 Designing the Data Center Virtualized Access Layer Overview With the usage of server virtualization growing. This ability includes being able to meet these objectives:  Define the virtual access layer  Describe the virtual access layer solutions within virtual machine hosts  Design solutions with Cisco Adapter FEX  Design solutions with Cisco VM-FEX  Design solutions with the Cisco Nexus 1000V switch .

which is software that presents network interface cards (NICs) to virtual machines and manages the data between the virtual machines and the physical network. 3-58 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. Virtual machines run inside virtualized hosts. This virtual access layer is connected to the physical access layer through physical NICs installed on the host. This environment is controlled by the hypervisor. special-purpose operating system. .0—3-4 The virtual access layer is below the physical access layer and its main role is to provide network connectivity to virtual machines inside virtualized hosts. All rights reserved. DCUFD v5. hypervisors have virtual switches. which is a thin.net Virtual Access Layer This topic describes the virtual access layer.certcollecion. Inc. • Virtual access layer provides network connectivity to virtual machines • Connects to the physical access layer through a physical NIC on the host • Hypervisor software provides connectivity to virtual machines by running a virtual switch and connecting it to virtual NICs Collapsed Core & Aggregation Layer Physical Access Layer Virtual Access Layer © 2012 Cisco and/or its affiliates. such a host appears as a device with multiple MAC addresses that appear at the same port. To manage network connectivity for and between virtual machines. To the physical network infrastructure.

VLAN Tagging The virtual switch can be set to tag the Ethernet frames that it receives with a VLAN tag.  No tagging: The virtual switch does not perform any tagging.0—3-5 The example shows a generic virtualized host. The frame is forwarded as it is to the physical switch. Inc. Data Center Topologies 3-59 . upon receiving the frame. or not.  Virtual guest tagging: The virtual machine has a trunk port and imposes the VLAN tag. after receiving the frame on its access port. imposes the VLAN tag. with the hypervisor and virtual machines. which is a network element of the virtual access layer. The hypervisor runs the virtual switch as well.net • Virtual access switch runs inside the host • Embedded in the hypervisor • Connects virtual NICs on virtual machines • Processes packets and sends them to the physical NIC if the destination MAC address is outside the host • Switches packets between virtual machines if the destination MAC is inside the host Host Virtual NIC Virtual Access Layer Virtual Switch Hypervisor Physical NIC • Different options for VLAN tagging: . © 2012 Cisco Systems. The physical switch then adds the VLAN tag. The virtual switch can do the following with frames:  Virtual switch tagging: The virtual machine gets an access port. This approach is the most common.certcollecion. which means that the virtual switch. it determines what to do with it. When the virtual switch receives a packet.Virtual guest tagging © 2012 Cisco and/or its affiliates. DCUFD v5. The virtual switch with forward the packet through the physical NIC to the physical infrastructure or forward the packet to a virtual machine.Virtual switch tagging . All rights reserved.No tagging Physical Access Layer .

the hypervisor component is the VMware ESXi software.certcollecion. where the distributed virtual switch spans across multiple hosts and acts as a single switch.Cisco Nexus 1000V • Design choices are based on customer needs. and so on. required licensing. Note The distributed virtual switch (DVS) needs the vCenter Server component to function. All rights reserved. The vCenter Server is part of the VMware vSphere solution. The third option for VMware vSphere is an extension to the Cisco Nexus 1000V Distributed Virtual Switch.net Virtual Access Layer Solutions This topic describes virtual access layer solutions within virtual machine hosts.Distributed Virtual Switch . • The VMware environment offers three possibilities for implementation of the virtual switching layer: .0 © 2012 Cisco Systems. The Nexus 1000V provides additional functionality in addition to the VMware DVS. Distributed Virtual Switch Hypervisor Hypervisor © 2012 Cisco and/or its affiliates. In the case of the VMware vSphere solution.0—3-7 The general design is the same for most virtualization environments that are based on the hypervisor model. Within ESXi. . Inc. including the following: 3-60  Cisco Nexus Operating System (NX-OS) CLI  Centralized manageability by providing network administrators with a way to manage virtual networks  Improved redundancy.VMware Standard Virtual Switch . DCUFD v5. and so on Designing Cisco Data Center Unified Fabric (DCUFD) v5. you have two virtual switching solutions:  VMware standard virtual switch  VMware distributed virtual switch The difference between them is that the standard switch is a standalone network element within one host.

The network access layer is moved into the virtual environment to provide enhanced network functionality at the virtual machine (VM) level to enable automated network centralized management. Nexus 1000V DCUFD v5. The Cisco Nexus 1000V works with any upstream switching system to provide standard networking controls to the virtual environment. Distributed switches retain counter values upon virtual machine motion.net Customers choose their virtual switching layer based on their requirements:  Does the customer require virtual machine mobility? All three solutions support it. The Cisco Nexus 1000V is a software-based solution that provides VM-level network configurability and management. depending on the data center design and demands. Data Center Topologies 3-61 . Inc. All rights reserved. policy-based VM connectivity.certcollecion. This can be deployed as a hardware. © 2012 Cisco Systems.  Does the customer require network administrator access and the Cisco NX-OS CLI for improved management? The Cisco Nexus 1000V solution supports this requirement. and do not require separate configuration of every host.or software-based solution. policy mobility.0—3-8 Cisco server virtualization uses technology that was jointly developed by Cisco and VMware. and a nondisruptive operational model. Cisco Nexus 1000V Cisco Nexus 1000V (Software Based) • Distributed virtual switching solution • Virtual Supervisor Module and Virtual Ethernet Modules Physical Server • Leverages functionality of VMware VDS and adds Cisco functionality Nexus 1000V • Policy-based VM connectivity ESX Hypervisor • Mobility of network and security properties • Nondisruptive operational model © 2012 Cisco and/or its affiliates. Both deployment scenarios offer VM visibility.

and port profiles.certcollecion. The virtual NIC (vNIC) is presented to the host operating system as a physical NIC. This situation allows you to directly configure parameters for that interface or vNIC on the Cisco Nexus 5500 Series Switch. including assigning VLANs.net Using Cisco Adapter FEX This topic describes how to design solutions with Cisco Adapter Fabric Extender (FEX).0 © 2012 Cisco Systems. . Cisco Adapter FEX is a technology that is available for a combination of Cisco equipment:  Cisco Nexus 5500  Cisco Nexus 2232 FEX  Cisco Unified Computing System (UCS) P81E virtualized CNA The solution allows the NIC to create a virtual interface on the upstream switch for that particular dynamically created vNIC on the host. access lists. Inc. 3-62 Designing Cisco Data Center Unified Fabric (DCUFD) v5. and presented to the switch as a virtual Ethernet interface. quality of service (QoS).

© 2012 Cisco Systems. access lists. This approach leads to consistent treatment for all network traffic. between the virtual Ethernet (vEthernet) interfaces.certcollecion. virtual or physical. Virtual machine I/O is sent directly to the upstream physical network switch. The Cisco VM-FEX solution offers higher performance than the DVS or Cisco Nexus 1000V because the host CPU is not involved in switching network traffic from and between the VMs. VLAN. Inc. MAC address. All rights reserved. new VIC). DCUFD v5. port profiles. The VM-FEX technology eliminates the vSwitch within the hypervisor by providing individual virtual machine virtual ports on the physical network switch. which then moves the binding of the vEthernet interface from one physical downlink (leading to the first VIC) to the new physical downlink (leading to the second. QoS.net Using Cisco VM-FEX This topic describes how to design solutions with Cisco Virtual Machine Fabric Extender (VMFEX).0—3-12 Port-extension-like functions with Cisco VM-FEX. Interface policy. The change is also registered on the Cisco UCS Fabric Interconnect. which takes full responsibility for virtual machine switching and policy enforcement. Data Center Topologies 3-63 . Workload mobility (VMware VMotion) is possible because the VMware vSphere environment and the Cisco UCS Manager environment are integrated and process a move of a vNIC from the VIC on one host to the VIC on another host. Cisco VM-FEX consolidates virtual and physical switching layers into a single layer and reduces the number of network management points. The traffic between the VMs is switched on the switch. The voice interface card (VIC) is the first implementation of VM-FEX technology from Cisco. and so on is configured on the upstream switch • Switching from and between the VMs occurs on the physical switch • VMware PTS: pass-through switching for vSphere 4 • VMware UPT: universal passthrough switching for vSphere 5 • VNIC VM VM VM VM VM VM VM VM Hypervisor Hypervisor VNIC VNIC VNIC VNIC VNIC VNIC VNIC VNIC Server VIC • Allows creation of a virtual interface on the switch and links it directly to the virtual machine VIC • Server VETH Switch Ability to perform vMotion: integration of Cisco UCS Manager and VMware vCenter required © 2012 Cisco and/or its affiliates.

Because of direct hardware attachment of the VM. allows for hardware forwarding of traffic between the VM and the switch without involvement of the hypervisor • Requires customized hardware drivers to be installed into the VM guest operating system • VM communicates directly with hardware. . 3-64 Designing Cisco Data Center Unified Fabric (DCUFD) v5. This solution offers even higher performance compared to Cisco VM-FEX. Inc.certcollecion. DCUFD v5. the capability of moving the VM to another host while the VM is in operation is not available. The VM is tied to the host. All rights reserved.0—3-13 The hypervisor bypass allows for hardware forwarding of traffic between the VM and the switch without involvement of the hypervisor.0 © 2012 Cisco Systems. which is the virtual NIC created for that virtual machine on the VIC. bypassing the supervisor • vNIC is registered as a logical interface on the physical switch— the UCS Fabric Interconnect • VM does not have ability to use vMotion veth9 Binding to the switch interface vNIC9 VIC Hypervisor bypass Hypervisor Virtual Switch Direct binding to the vNIC • Higher performance than PTS/UPT and DVS (switching in software) © 2012 Cisco and/or its affiliates. The virtual machine uses a customized driver that is installed in the VM guest operating system to communicate directly with virtual hardware.net • In some cases. The VIC is then registered to the physical switch in the same way as when using Cisco VMFEX. where the hypervisor is still involved to some extent to manage VM network traffic. The hypervisor bypass networking completely bypasses the hypervisor and the VM communicates directly with the hardware.

certcollecion.net
Solutions with the Cisco Nexus 1000V Switch
This topic describes how to design solutions with the Cisco Nexus 1000V Distributed Virtual
Switch.

• Replaces for VMware DVS
- Preserves existing VM management
- NX-OS look and feel management
- Compatibility with VMware features: Vmotion, history, and so on
- Additional features: NetFlow, QoS, port profiles, security policies, security
zones, private VLANs, SPAN, and so on

vCenter
Server
Nexus 1000V

Hypervisor
LAN

© 2012 Cisco and/or its affiliates. All rights reserved.

Hypervisor

DCUFD v5.0—3-15

Cisco Nexus 1000V Series Switches are virtual distributed software-based access switches for
VMware vSphere environments that run the Cisco NX-OS operating system. Operating inside
the VMware ESX hypervisor, the Cisco Nexus 1000V Series supports Cisco VN-Link server
virtualization technology to provide the following:

Policy-based VM connectivity

Mobile VM security and network policy

Nondisruptive operational model for your server virtualization, and networking teams

The Cisco Nexus 1000V bypasses the VMware vSwitch with a Cisco software switch. This
model provides a single point of configuration for a networking environment of multiple ESX
hosts.
When server virtualization is deployed in the data center, virtual servers typically are not
managed in the same way as physical servers. Server virtualization is treated as a special
deployment, leading to longer deployment time, with a greater degree of coordination among
server, network, storage, and security administrators.
With the Cisco Nexus 1000V Series, you can have a consistent networking feature set and
provisioning process all the way from the VM access layer to the core of the data center
network infrastructure. Virtual servers can now leverage the same network configuration,
security policy, diagnostic tools, and operational models as their physical server counterparts
attached to dedicated physical network ports.
Virtualization administrators can access predefined network policy that follows mobile virtual
machines to ensure proper connectivity saving valuable time to focus on virtual machine
administration.
© 2012 Cisco Systems, Inc.

Data Center Topologies

3-65

certcollecion.net
This comprehensive set of capabilities helps deploy server virtualization faster and realize its
benefits sooner.

• Policy-based VM connectivity using port profiles

Nexus 1000V
ESX Hypervisor

ESX Hypervisor

VM Connection Policy
• Defined in network

Defined Policies
WEB Apps

• Applied in vCenter
• Linked to VM UUID

HR
DB

vCenter Server
© 2012 Cisco and/or its affiliates. All rights reserved.

Compliance

DCUFD v5.0—3-16

VM connection policies are defined in the network and applied to individual VMs from within
VMware vCenter. These policies are linked to the universally unique identifier (UUID) of the
VM and are not based on physical or virtual ports.
To complement the ease of creating and provisioning VMs, the Cisco Nexus 1000V includes
the Port Profile feature to address the dynamic nature of server virtualization from the network
perspective. Port profiles enable you to define VM network policies for different types or
classes of VMs from the Cisco Nexus 1000V Virtual Supervisor Module (VSM), then apply the
profiles to individual VM virtual NICs through the VMware vCenter GUI for transparent
provisioning of network resources. Port profiles are a scalable mechanism to configure
networks with large numbers of VMs.

3-66

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net

• Mobility of network and security properties

Nexus 1000V
ESX Hypervisor

ESX Hypervisor

Policy Mobility
• VMotion for network

Defined Policies
WEB Apps

• Maintained connection state
• Ensured VM security

HR
DB

vCenter Server
© 2012 Cisco and/or its affiliates. All rights reserved.

Compliance
DCUFD v5.0—3-17

Through the VMware vCenter application programming interfaces (APIs), the Cisco Nexus
1000V migrates the VM port and ensures policy enforcement as machines transition between
physical ports. Security policies are applied and enforced as VMs migrate through automatic or
manual processes.
Network and security policies defined in the port profile follow the VM throughout its life
cycle, whether it is being migrated from one server to another, suspended, hibernated, or
restarted.
In addition to migrating the policy, the Cisco Nexus 1000V VSM also moves the VM network
state, such as the port counters and flow statistics. VMs participating in traffic monitoring
activities, such as Cisco NetFlow or Encapsulated Remote Switched Port Analyzer (ERSPAN),
can continue these activities uninterrupted by VMotion operations.
When a specific port profile is updated, the Cisco Nexus 1000V automatically provides live
updates to all of the virtual ports using that same port profile. With the ability to migrate
network and security policies through VMotion, regulatory compliance is much easier to
enforce with the Cisco Nexus 1000V because the security policy is defined in the same way as
physical servers and constantly enforced by the switch.

© 2012 Cisco Systems, Inc.

Data Center Topologies

3-67

certcollecion.net

• Layer 2
- VLAN, PVLAN, 802.1q
- LACP
- vPC host mode

• QoS classification and marking
Nexus 1000V

• Security
- Layer 2, 3, 4 access lists

ESX Hypervisor

ESX Hypervisor

- Port security

• SPAN and ERSPAN
• Compatibility with VMware
- VMotion, Storage VMotion

vCenter Server

- DRS, HA, FT

© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—3-18

Cisco Nexus 1000V supports the same features as physical Cisco Catalyst or Nexus switches
while maintaining compatibility with VMware advanced services like VMotion, Distributed
Resource Scheduler (DRS), Fault Tolerance (FT), High Availability (HA), Storage VMotion,
Update Manager, and vShield Zones.

vPC Host Mode
Virtual port channel host mode (vPC-HM) allows member ports in a port channel to connect to
two different upstream switches. With vPC-HM, ports are grouped into two subgroups for
traffic separation. If Cisco Discovery Protocol is enabled on the upstream switch, then the
subgroups are automatically created using Cisco Discovery Protocol information. If Cisco
Discovery Protocol is not enabled on the upstream switch, then the subgroup on the interface
must be manually configured.

Layer 2 Features
The following Layer 2 features are supported by Cisco Nexus 1000V:

3-68

Layer 2 switch ports and VLAN trunks

IEEE 802.1Q VLAN encapsulation

Link Aggregation Control Protocol (LACP): IEEE 802.3ad

Advanced port channel hashing based on Layer 2, 3, and 4 information

vPC-HM

Private VLANs with promiscuous, isolated, and community ports

Private VLAN on trunks

Internet Group Management Protocol (IGMP) snooping versions 1, 2, and 3

Jumbo frame support of up to 9216 bytes

Integrated loop prevention with bridge protocol data unit (BPDU) filter without running
Spanning Tree Protocol (STP)

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net
QoS Features
The following QoS features are supported by Cisco Nexus 1000V:

Classification per access group (by access control list [ACL]), IEEE 802.1p class of service
(CoS), IP Type of Service: IP precedence or differentiated services code point (DSCP)
(RFC 2474), UDP ports, packet length

Marking per two-rate three-color marker (RFC 2698), IEEE 802.1p CoS marking, IP Type
of Service: IP precedence or DSCP (RFC 2474)

Traffic policing (transmit- and receive-rate limiting)

Modular QoS CLI (MQC) compliance.

Security Features
The following security features are supported by Cisco Nexus 1000V:

Ingress and egress ACLs on Ethernet and vEthernet ports

Standard and extended Layer 2 ACLs

Standard and extended Layer 3 and Layer 4 ACLs

Port ACLs (PACLs)

Named ACLs

ACL statistics

Cisco Integrated Security features

Virtual service domain for Layer 4 through 7 virtual machine services

© 2012 Cisco Systems, Inc.

Data Center Topologies

3-69

certcollecion.net

VEM

VSM
• Management, monitoring, and
configuration

• Replaces ESX virtual switch

• Integrates with VMware vCenter

• Enables advanced networking on ESX
hypervisor

• Uses NX-OS

• Provides each VM with dedicated port

• Configurable via CLI

• Running on the host

• Running on the Nexus 1010 Virtual
Services Appliance, or on the host
VSM

vCenter
Server

VEM

VEM

LAN

© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—3-19

Virtual Supervisor Module
Cisco Nexus 1000V is licensed per each server CPU regardless of the number of cores. It
comprises the following:

Cisco Nexus 1000V Virtual Supervisor Module (VSM): This module performs
management, monitoring, and configuration tasks for the Cisco Nexus 1000V and is tightly
integrated with the VMware vCenter. The connectivity definitions are pushed from Cisco
Nexus 1000V to the vCenter.

Cisco Nexus 1000V Virtual Ethernet Module (VEM): This module enables advanced
networking capability on the VMware ESX hypervisor and provides each VM with a
virtual dedicated switch port.

A Cisco Nexus 1000V deployment consists of the VSM (one or two for redundancy) and
multiple VEMs installed in the ESX hosts—a VMware vNetwork Distributed switch (vDS).
A VSM is the control plane on a supervisor module much like in regular physical modular
switches, whereas VEMs are remote Ethernet line cards to the VSM.
In Cisco Nexus 1000V deployments, VMware provides the virtual network interface card
(vNIC) and drivers while the Cisco Nexus 1000V provides the switching and management of
switching.

Virtual Ethernet Module
The VEM is a software replacement for the VMware vSwitch on a VMware ESX host. All
traffic-forwarding decisions are made by the VEM.
The VEM leverages the VMware vNetwork Distributed Switch (vDS) API, which was
developed jointly by Cisco and VMware, to provide advanced networking management for
virtual machines. This level of integration ensures that the Cisco Nexus 1000V is fully aware of
all server virtualization events, such as VMware VMotion and DRS. The VEM takes
configuration information from the VSM and performs Layer 2 switching and advanced
networking functions:
3-70

Designing Cisco Data Center Unified Fabric (DCUFD) v5.0

© 2012 Cisco Systems, Inc.

certcollecion.net

Port channels

Port profiles

Quality of service (QoS)

Security: Private VLAN, access control lists, port security

Monitoring: NetFlow, Switch Port Analyzer (SPAN), ERSPAN

• Recommended Layer 3 connectivity to VSM allows a VSM to manage remote VEMs.
ESX Host

ESX Host
vmnics

VSM

vCenter
Server

LAN

ESX Host

Management
Control
Packet
Data (multiple VLANs)
© 2012 Cisco and/or its affiliates. All rights reserved.

DCUFD v5.0—3-20

Cisco Nexus 1000V VSM-VEM Connectivity Options
Layer 3 Connectivity
The Cisco Nexus 1000V VSM and VEM need to communicate in order to maintain the control
plane of the switch (VSM) and to propagate the changes to the data plane (VEMs).
The VSM and the hosts need to be reachable over the IP protocol.
Note

The minimum release of Cisco NX-OS for Cisco Nexus 1000V Release 4.0(4)SV1(2) is
required for Layer 3 operation.

Deployment details for a Layer 3 VSM-VEM connection are the following:

Layer 3 connectivity between VSM and VEMs:

VSM: Software virtual switch (SVS) Layer 3 mode with control or management
interface

VEM: VMkernel interface and Generic Routing Encapsulation (GRE) to tunnel
control traffic to VSM

Requires per-VEM Layer 3 control port profile

© 2012 Cisco Systems, Inc.

Data Center Topologies

3-71

LACP. • Required Layer 2 VLANs for Nexus 1000V operation ESX Host ESX Host vmnics VSM vCenter Server LAN ESX Host Management Control Packet Data (multiple VLANs) © 2012 Cisco and/or its affiliates. It is recommended that separate VLANs are maintained.0—3-21 Layer 2 connectivity (VLANs) is required between the VSM and VEMs: 3-72  Management VLAN. . In such cases. the VSM in the primary data center is primary for local VEMs and secondary for remote VEMs. Layer 2 Connectivity The original option for Cisco Nexus 1000V VSM-VEM connectivity is using Layer 2. and Internet Group Management Protocol (IGMP)  Data VLANs: One or more VLANs are required for VM connectivity.net   Option 1: Management interface: — Out-of-band (OOB) management for VSM: mgmt0 port — Should be the same as VMware vCenter and ESX management VLAN — VSM-to-VEM traffic mixed with vCenter management traffic Option 2: Special control interface with own IP address: — Note Dedicated control0 interface for VSM-to-VEM communication VSM-VEM Layer 3 connectivity allows a VSM in a data center to manage VEMs in a remote data center. DCUFD v5.0 © 2012 Cisco Systems. Inc. All rights reserved. OOB for VSM (mgmt0 port): Should be the same as vCenter and ESX management VLAN  Domain ID: Single Cisco Nexus 1000V instance with dual VSM and VEMs  Control VLAN: Exchanges control messages between the VSM and VEM  Packet VLAN: Used for protocols like Cisco Discovery Protocol.certcollecion. Designing Cisco Data Center Unified Fabric (DCUFD) v5.

Cisco Virtual Security Gateway (VSG).net • • • • Hardware platform for Nexus 1000V VSM and other service appliances Provides VSM independence of existing production hosts Cisco Nexus 1010V comes bundled with licenses Platform for additional services: Cisco virtual NAM. Inc. It is designed for customers who want to provide independence for the VSM so that it does not share the production infrastructure. © 2012 Cisco Systems. including the Cisco virtual Network Analysis Module (NAM). The Nexus 1010V also serves as a hardware platform for various additional services. Physical Switches DCUFD v5. VSG VSM on a Virtual Machine Cisco Nexus 1000V VSM VM Cisco Nexus 1000V VM VSM on a Cisco Nexus 1010 VM VM VM VM VM Cisco Nexus 1000V VEM VMware vSphere VMware vSphere Server Server Cisco Nexus 1010 Physical Switches © 2012 Cisco and/or its affiliates. and so on.certcollecion. the Nexus 1010V comes bundled with VEM licenses.0—3-22 The Cisco Nexus 1010 Virtual Services Appliance is the hardware platform for the VSM. As an additional benefit. All rights reserved. Data Center Topologies 3-73 .

. such as the Cisco Nexus 1000V VSM. Virtual Security Gateway (VSG).0 © 2012 Cisco Systems. together with the production VMs. and so on. 3-74 Designing Cisco Data Center Unified Fabric (DCUFD) v5.certcollecion. a host system that is not used for production VMs is used. or when repurposing existing hosts). listed from most to least recommended. This is the least desired option because the virtual appliances are not isolated from failures that may occur in the production network. Typically.net These are the three possibilities for deploying virtualized services. if it has enough CPU and memory resources. during the migration phase of VMs to new hardware. Off-Cluster Deployment Virtual appliances run on one host that is not part of the production host cluster. On-Cluster Deployment In this case. Deployment on Cisco Nexus 1010 Virtual Services Appliance The Cisco Nexus 1010 Virtual Services Appliance is a dedicated appliance that runs virtualized appliances. virtual Wide Area Application Services (vWAAS). If a host in the production network goes offline. This host is not part of the production cluster and is isolated from failures on the production environment. so do virtual appliances. Inc. vNAM. This option is the best because all appliances are running in a controlled and dedicated environment. the virtual appliances run on a host that is part of the production host cluster. such as control software or cluster management software failure. Such a solution is meant to be temporary (for example.

© 2012 Cisco Systems.0—3-24 The Cisco VSG for Cisco Nexus 1000V Series Switches is a virtual appliance that provides trusted access to secure virtualized data centers in enterprise and cloud provider environments while meeting the requirements of dynamic policy-driven operations. All rights reserved. Inc. DCUFD v5. or tenants Data Center Segments / LoBs / Tenants Web Zone QA Zone HR Zone App Zone Dev Zone Finance Zone VDI Zone Lab Zone Mfg Zone Staging Zone Partner Zone R&D Zone Cisco VSG Cisco VSG Cisco VSG Shared Computing Infrastructure © 2012 Cisco and/or its affiliates. mobility-transparent enforcement.certcollecion. Data Center Topologies 3-75 . and scale-out deployment for dense multitenancy. departments.net • Virtual network is divided to multiple security zones • Security zones belong to network segments.

net • Cisco VSG inspects an incoming flow. Dev Zone VM VM Cisco VSG (Segment 2) VM Cisco Nexus 1000V VEM vPath VMware vSphere Server team: Manage virtual VMware vCenter Server machines Security team: Manage Cisco VSGs and security policies (security profiles) DCUFD v5. to the designated Cisco VSGs.certcollecion. 3-76 Designing Cisco Data Center Unified Fabric (DCUFD) v5. . All rights reserved. A split-processing model is applied in which initial packet processing occurs in the Cisco VSG for policy evaluation and enforcement. Cisco vPath provides these advantages:  Intelligent traffic steering: Flow classification and redirection to associated Cisco VSGs  Fast path offload: Policy enforcement of flows offloaded by Cisco VSG to vPath Cisco vPath is designed for multitenancy and provides traffic steering and fast path offload on a per-tenant basis. Inc.0—3-25 Cisco vPath technology steers traffic. Subsequent policy enforcement for packets is offloaded directly to vPath. allows the flow through the Nexus 1000V switch.0 © 2012 Cisco Systems. and if it is administratively permitted. A vPath is created. whether inbound or traveling from virtual machine to virtual machine. Data Center Segment / LoB / Tenant (1) Cisco VSG (Segment 1) Web Zone VM Cisco Nexus 1000V VEM VM App Zone VM vPath VM QA Zone VM Cisco Nexus 1000V VEM VMware vSphere Cisco Nexus 1000V VSM Network team: Manage Cisco Nexus 1000V and network policies (port profiles) Data Center Segment / LoB / Tenant (2) VM VM vPath VMware vSphere Data Center Network Cisco Virtual Network Management Center Server © 2012 Cisco and/or its affiliates.

net Summary This topic summarizes the primary points that were discussed in this lesson.certcollecion. Inc. © 2012 Cisco Systems. Data Center Topologies 3-77 .

0 © 2012 Cisco Systems. Inc. .certcollecion.net 3-78 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

and LISP. On the equipment level. including IP routing. These include IP routing protocols. there are options to provide high availability by employing redundant supervisor engines and similar technologies. first hop redundancy protocols.net Lesson 5 Designing High Availability Overview In this lesson. you will be able to design for data center high availability with various technologies. In addition to IP protocol high availability. Objectives Upon completing this lesson. This lesson focuses on high availability provided by Layer 3 protocols. where both virtual port channel (vPC) and Cisco FabricPath can be used.certcollecion. clusters. Locator Identity Separation Protocol (LISP). there are other high-availability approaches. next-hop redundancy protocols. you will analyze various technologies that can provide high availability on Layer 3. One variant is to provide high availability on the data link layer. and clustered applications to some extent. This ability includes being able to meet these objectives:  Design high availability for IP-based services  Design high availability by implementing link aggregation  Design high availability of services using IP routing and FHRPs  Provide high availability with RHI  Design high availability of services using LISP .

and Gateway Load Balancing Protocol (GLBP). These protocols provide path redundancy across different links. with IBGP and Routing Information Protocol (RIP) for special applications.Security and application-delivery IP services • IP forwarding using redundancy technologies: .0 © 2012 Cisco Systems. DCUFD v5. GLBP. In the last example.First hop redundancy protocols: HSRP. VRRP . LISP is not directly a redundancy protocol. 3-80 Designing Cisco Data Center Unified Fabric (DCUFD) v5. but can accommodate path selection and failover for the IP protocol. • Data center IP services: . All rights reserved. EIGRP . you need to provision the IP layer as well. The primary use of LISP is to separate the IP endpoint identity information from the IP endpoint location information. . The second set of protocols is IP routing protocols. The most popular protocols used in data centers are Open Shortest Path First (OSPF) and Enhanced Interior Gateway Routing Protocol (EIGRP).net High Availability for IP This topic describes how to design high availability for IP-based services. Server virtualization across geographically separated data centers requires location independence to allow for dynamically moving server resources from one data center to another. Inc. The IP protocol is not highly available as such. Virtual Router Redundancy Protocol (VRRP). Dynamic workload requires route optimization when the virtual servers move while keeping the server IP address the same.Dynamic routing protocols: OSPF.certcollecion. The most common place to implement IP high availability is the data center aggregation layer.IP routing . so various enhancements and protocols are available to guarantee continuous operation. LISP facilitates a more robust high availability in situations where requirements go beyond a single data center. The first set of protocols that makes IP highly available is the First Hop Redundancy Protocols (FHRPs): Hot Standby Router Protocol (HSRP).Static routing .0—3-4 When designing highly available data centers. LISP then enables IP endpoints to change location while keeping their assigned IP addresses.LISP for redundancy and load balancing • Applies to IPv4 and IPv6 routed traffic • Most common placement of services is the data center aggregation layer or above © 2012 Cisco and/or its affiliates.IP default gateway service .

© 2012 Cisco Systems. This protocol is Cisco proprietary. Data Center Topologies 3-81 . inter-VLAN traffic must be routed at the aggregation layer (Layer 3 FHRP gateways. IP reachability tracking.HSRP . The use of FHRP may also depend on the amount of server-to-storage and server-to-server traffic. because that is the demarcation between Layer 2 and Layer 3). traffic in the direction toward the servers usually travels only across one of these gateways. All rights reserved. However. Servers in the same subnet can use multiple gateways to forward traffic upstream. utilizing all upstream links.Interface or object tracking to respond to topology changes: physical interface tracking.HSRP on Cisco Nexus 7000 Series Switches with vPC for access switches • FHRP protocols are configurable and flexible: .0—3-5 The FHRPs provide the default gateway service to devices in a subnet. IPv6 support. The GLBP protocol additionally provides load balancing between several default gateway devices. It has plenty of configurable options ranging from timer configuration to tracking objects.VRRP .net • Provide the IP default gateway service to devices in a subnet using these protocols: . The protocol is set up between (at least) two physical devices that otherwise host the default gateway IP address. IP route tracking. as well as where the storage is attached (aggregation or access layer). DCUFD v5.Tuning possible for quicker switchover . and so on. If the traffic does not remain local to the access switches. The VRRP protocol is open-standards-based and provides functionality that is similar to HSRP.GLBP • Variants: .certcollecion. and so on © 2012 Cisco and/or its affiliates. The most popular FHRP protocol is the HSRP. Inc.

0—3-6 The slide presents one of the most classic HSRP designs found in data center networks. This is done by configuring object tracking that can monitor:  Physical upstream interfaces  IP routes  General IP reachability. In some cases. Within the same group. it is desirable to have multiple next-hop HSRP addresses that are active between different pairs of switches on the same subnet. DCUFD v5. Designing Cisco Data Center Unified Fabric (DCUFD) v5.net HSRP Primary for Subnet A Secondary for Subnet B Server in Subnet A Secondary for Subnet A Primary for Subnet B Server in Subnet B • Manual load balancing: primary default gateway on different aggregation switches for different subnets • Only one default gateway. . The data-plane optimization made by vPC allows Layer 3 forwarding at both the active HSRP peer and the standby HSRP peer. Two if combined with vPC. Note To fully support this scenario. this provides an active-active FHRP behavior. and so on Note 3-82 Given the prevalence of vPC. All rights reserved. the surviving device takes 100 percent of the load. The aggregation switches need to have the same reachability for upstream routes for the HSRP secondary device to forward traffic. Inc. • Same design applies when using vPC for the connection between access and aggregation layers • Combine with tracking of upstream interfaces or routes from the core © 2012 Cisco and/or its affiliates.certcollecion. you need to use the Per-VLAN Spanning Tree (PVST) protocol or Multiple Spanning Tree (MST) protocol. HSRP can leverage the capability to forward data on both the active and the standby switch. HSRP is run on a pair of aggregation switches. Load balancing is achieved manually by equally distributing the VLANs among the switches. where one is selected as the primary switch for selected VLANs. This setting allows the forwarding path to be aligned on both Layer 2 and Layer 3. In effect. Interface tracking or object tracking can be used to have the active HSRP gateway running on the device that actually has functioning upstream connectivity. you have only one switch acting as the default gateway for servers in that subnet. • In case of failure. The Spanning Tree Protocol (STP) primary root bridge and HSRP active router must be on the same device for the same network (VLAN). while the other is the primary default gateway for the remaining VLANs.0 © 2012 Cisco Systems.

IP routing must be adjusted to attract the traffic from the Internet to the secondary data center.4 DC 2 HSRP . servers in the secondary data center need to send the data across the data center interconnect. you have redundant data centers interconnected with Layer 2 transport. HSRP can run between the aggregation switches in both data centers. while in the secondary data center there are local primary and local secondary IP default gateways.1 DGW .net DC 1 . © 2012 Cisco Systems. where most network traffic is concentrated in the primary data center. DCUFD v5.HSRP within data center for first-hop redundancy . All rights reserved.HSRP hello traffic must be filtered out at the data center interconnect link © 2012 Cisco and/or its affiliates.certcollecion. This design is more suitable for active-standby load distribution. Data Center Topologies 3-83 . Downstream traffic from the Internet to the servers goes only to the primary data center. This approach uses the primary default gateway in the primary data center. If the primary site is down.254 . In the primary data center.Only one default gateway . there is the primary and secondary IP default gateway.3 DGW .Challenge is how to bring the traffic in toward the data center and to maintain data sessions: needs to be done at the data center core layer .2 Layer 2 DC Interconnect: Cisco OTV HSRP .0—3-7 In this scenario.254 . Inc.

 Different hosts receive different AVF MAC addresses.certcollecion. the deployments of GLBP have a smaller footprint than HSRP. If both uplinks are not active. The other reason it is not used in more environments is that the Virtual Switching System (VSS) removes the need.0 © 2012 Cisco Systems. you send the traffic up to only one aggregation switch.0—3-8 The Cisco GLBP is another first-hop redundancy protocol that can be used in data center networks. and then forward data to the other aggregation switch across the interswitch link. which forwards traffic for that host. but this does not impose any problems. GLBP distributes the load between gateways in the following way:  When a host issues an Address Resolution Protocol (ARP) request for the MAC address for the configured default gateway IP address.net GLBP AVG for Subnet A AVF for Subnet A AVF for Subnet A Servers in Subnet A • Automatic load balancing • In case of failure. GLBP is suitable for servers that produce much outgoing traffic. it only makes sense to use GLBP if both uplinks are active for a specific VLAN. if there are any blocked uplinks from the access to the aggregation layer. Failure is detected by other gateways by lost hello packets. the active virtual gateway (AVG) replies with an ARP reply and sends the MAC address of a chosen active virtual forwarder (AVF). . The return path may be asymmetrical at the last few hops. If an AVF fails. Note 3-84 Generally. for example. The difference between GLBP and HSRP is that GLBP automatically provides for load balancing among multiple gateways. Inc. Designing Cisco Data Center Unified Fabric (DCUFD) v5. another AVF assumes the MAC address of the failed AVF. Because GLBP shares the load between two aggregation switches. the surviving device takes 100 percent of the load • Traffic from server to the network using all upstream paths • Downstream traffic coming in through one device only • Suitable for servers with high data output © 2012 Cisco and/or its affiliates. All rights reserved. The most significant reason that GLBP does not have wider deployment is that it provides minimal value if you span VLANs between closet switches. DCUFD v5.

All rights reserved. DCUFD v5. Inc. The Cisco Catalyst 6500 VSS also appears as one routing neighbor to all other devices. The Cisco Catalyst 6500 VSS control plane manages gateway redundancy in this case. an FHRP is not necessary.0—3-10 The Cisco Catalyst 6500 Virtual Switching System (VSS) does not need an FHRP protocol because the default gateway IP address resides on the active chassis.net High Availability Using vPC and VSS This topic describes how to design high availability by implementing link aggregation. • No need for HSRP on VSS: default gateway redundancy coupled with control plane redundancy • Upstream traffic exits either switch • Downstream traffic enters either switch and prefers local path to server . © 2012 Cisco Systems. Data Center Topologies 3-85 .certcollecion. Typically it utilizes all available (Layer 2 or Layer 3) links at the same time. Because it is a single virtual device.1 © 2012 Cisco and/or its affiliates.

. the packet destined for the server can arrive on any aggregation switch.certcollecion. When IP routing is configured so that it also load-balances the links downstream (that is. You need a VLAN trunk link between aggregation switches because it is necessary to transport HSRP hello packets between the switch virtual interfaces (SVIs) on both switches. Downstream traffic from the core to the servers will arrive on the primary aggregation switch unless Equal-Cost Multipath (ECMP) is used and IP routes for server subnets are advertised and tuned properly. with the same cost.Both devices advertise the same IP prefix for server subnet.0—3-11 The vPC scenario is different because you have two devices with two distinct control planes that are joined in the same vPC domain.1 . Inc. both aggregation switches will forward traffic upstream if both of them have the same route to the destination. when the aggregation switches advertise the server subnets with the same cost to all upstream switches). an FHRP is necessary. Because the gateways remain two physical devices. All rights reserved. DCUFD v5.2 HSRP vPC © 2012 Cisco and/or its affiliates. This is done to optimize upstream network connectivity and to efficiently distribute the load between all links to the core layer.Double-sided vPC . 3-86 Designing Cisco Data Center Unified Fabric (DCUFD) v5. the aggregation switch will receive the packet and use the local link to the access switch to forward traffic toward the server.net • Needs HSRP • Secondary upstream Layer 3 forwarding with vPC if equal route found • Downstream traffic through one device only unless one of these situations exists: . It will avoid the vPC peer link to skip an unnecessary switched hop. switch will prefer local downstream path . In this case. Normally.0 © 2012 Cisco Systems.

3. Upstream Traffic Flow 1.0—3-13 To design a fully redundant solution. 3. The routing protocol is tuned to advertise data center subnets and summary routes to the rest of the network.0/0 HSRP . Inc.1 .2 Advertise DC routes vPC Upstream traffic paths DCUFD v5.0.certcollecion. Advertise 0.net High Availability Using IP Routing and FHRP This topic describes how to design high availability of services using IP routing and FHRPs. All rights reserved. Data Center Topologies 3-87 . • Design high availability of services using IP routing and FHRPs • Routing protocol tuned to advertise data center subnets or summaries toward the rest of the network • Data center core switches are configured to advertise a default route that becomes a foundation for ECMP. When having core and aggregation switches connected with multiple links. The core switch forwards the traffic to an aggregation switch using ECMP. and data center core switches advertise default routes into the data center. 2. you need to use a routing protocol in combination with an FHRP.0. There is no control at this point regarding which aggregation switch will get the traffic. The aggregation switch will forward the traffic upstream. The downstream traffic enters the data center at one of the core switches and randomly chooses an aggregation switch on its way to the servers • Aggregation switch uses local Layer 2 link and does not forward traffic across the vPC peer link core © 2012 Cisco and/or its affiliates. Both aggregation switches will forward the packet upstream.0.0. Traffic from the servers traveling upstream through the network is first forwarded on Layer 2. © 2012 Cisco Systems. packets are forwarded using ECMP. If there are multiple paths to the same destination with the same cost. The link between aggregation switches would not be used. Traffic enters the data center core based on the routing information that the core advertises. This tuning allows core and aggregation switches to use ECMP for routed traffic. 2. The aggregation switch uses the local link to send traffic in Layer 2 to the destination.0/0 Advertise DC routes Advertise 0. Downstream Traffic Flow 1. The core switches would forward the packets based on their routing table. all have same cost and are considered for distributing the load.

10.  For data center routes that are sent to the campus. use “auto-cost reference-bandwidth” set to 10 G value • VLANs on 10 GE trunks have OSPF cost = 1 G (cost 1000).  Use the passive-network default command to prevent unwanted OSPF adjacencies and paths.0 © 2012 Cisco Systems. All rights reserved. and link OSPF to the BFD instance.1 L0=10. as shown in the figure:  The not-so-stubby area (NSSA) helps to limit link-state advertisement (LSA) propagation. Inc. but permits route redistribution (RHI) • Advertise default into NSSA.  The OSPF default reference bandwidth is 100 Mb.certcollecion. Configure OSPF only on links for which you want to allow it. you can use Bidirectional Forwarding Detection (BFD) instead to detect the presence of the neighbor.0—3-14 IP Routing Protocols Deployment Design OSPF Routing Protocol Design Recommendations The data center aggregation and core layer can be an applied OSPF routing protocol. .3 • Use authentication: more secure and avoids undesired adjacencies L0=10.  Use routing protocol authentication. summarize routes out • OSPF default reference bandwidth is 100 Mb.4 Access • Timers SPF 1/1. interface hello-dead = 1/3.10. which makes the routing tables simple and easy to troubleshoot. DCUFD v5. in order to distinguish between links that are faster than 100 Mb/s.2.net • NSSA helps to limit LSA propagation.  OSPF advertises a default route into the NSSA area by itself. so you need to use the autocost reference-bandwidth command with a value of 10 Gb.  Establish a Layer 3 VLAN (SVI) on both switches that you use for OSPF adjacency and route exchange between them.4. adjust bandwidth value to reflect 10 GE for interswitch Layer 3 VLAN Campus Core Area 0 NSSA DC Core L0=10. but permits route redistribution if you use route health injection (RHI). Note 3-88 Because reduced OSPF timers impose an additional CPU load.10.10.  Reduce OSPF timers for faster convergence and neighbor loss detection. summarization is advised.  Loopback interfaces simplify troubleshooting (OSPF router ID).2 Default Default DC Subnets (Summarized) L3 vlan-ospf • Loopback interfaces simplify troubleshooting (neighbor ID) • Use passive-network default: open up only links to allow Aggregation L0=10.3. It is more secure and avoids undesired adjacencies.1. Designing Cisco Data Center Unified Fabric (DCUFD) v5. • BFD can be used for neighbor keepalive Web Application Database Servers Servers Servers © 2012 Cisco and/or its affiliates.

Reducing the SPF delay and hold time may cause permanent SPF recalculation upon route flapping.  Run the SPF algorithm for a few hundred milliseconds. Holdtime between 2 consecutive SPF calculations is 10 sec by default. Inc.Router calculates the SPF in 5-second delay after receiving LSA . All rights reserved.  Enable incremental SPF to avoid recomputing the entire SPF. DCUFD v5. route summarization design.The convergence time for the link or router failure is given by the time that is required to do the following: • Detect link failure • Propagate LSA information • Wait for SPF calculation to be finished: . Data Center Topologies 3-89 . © 2012 Cisco and/or its affiliates. These steps can aid in reducing OSPF convergence time:  Reduce the Shortest Path First (SPF) delay hold time.0—3-15 OSPF convergence time with default timers is 6 seconds for an average topology. Note © 2012 Cisco Systems.net • OSPF convergence time with default timers is 6 seconds on average: .Configurable with timers SPF delay hold time • Run SPF alghorithm a few hundred miliseconds • Calculate OSPF seed metric for 0G links • Update the routing table • OSPF convergence time can be brought down to a few seconds using the timers SPF 1 5: .certcollecion. The use of SPF timers is recommended only in an environment that uses a well-structured area. and link flap damping features.

20. as shown in the figure:  You need to advertise the default into the data center with the interface command at the core: ip summary-address eigrp 20 0.ip summary-address eigrp 20 10. Inc. for example).net • Advertise default into data center with interface command on core: Campus Core . .0—3-16 EIGRP Routing Protocol Design Recommendations The data center aggregation and core layers can be applied EIGRP. you may need to use distribute lists to filter them out.0.20.0 © 2012 Cisco Systems.  Use the passive-network default to prevent EIGRP from forming unwanted adjacencies.  If other default routes exist (from the Internet edge.0 255.0. DCUFD v5.0.255.0 0.  Summarize data center subnets to the core with the interface command on the aggregation switch: ip summary-address eigrp 20 10. for example).0 200 .Cost of 200 required to be preferred route over the NULL0 route installed by EIGRP • If other default routes exist (from Internet edge.0 3-90 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0.ip summary-address eigrp 20 0.0 Access Web Application Database Servers Servers Servers © 2012 Cisco and/or its affiliates.0.0 200 The cost (200) is required to make this route preferred over the Null0 summary route that is installed by EIGRP.0 255.0.0.0. may need to use distribute lists to filter out DC Core Default Default DC Subnets (Summarized) L3 vlan-ospf Aggregation • Use passive-network default • Summarize data center subnets to core with interface command on aggregation switch: .255.0.0.certcollecion.0.0.0 0. All rights reserved.

The RHI feature gives the Cisco Application Control Engine (ACE) module the capability to inject static host routes into the routing table in the base Cisco IOS Software on the Cisco Catalyst 6500 Series chassis. The RHI feature is used to advertise a static host route to a particular server or service throughout the network. and the service must be uniform across multiple servers and across network sites.net High Availability Using RHI This topic describes how to provide high availability with RHI. and remove it if the service becomes unavailable. The network finds the best way to reach a certain service. service modules can be used to inject a static host route for a particular server or service in the routing table. DCUFD v5. Data Center Topologies 3-91 . OSPF and RHI In the example involving the Cisco Catalyst 6500 Series Switch. Note Other platforms that do not have integrated service modules (for example. Cisco Nexus platforms) do not support RHI. Inc. These advertisements are sent out-of-band from the Cisco ACE Module directly to the Catalyst 6500 Series Switch supervisor.0—3-18 RHI is a mechanism that can be used to advertise availability of a service into the routing domain.certcollecion. • Cisco service modules can be configured to inject static routes in the Multilayer Switch Feature Card (MSFC) routing table of the Cisco Catalyst switch. RHI enables active and standby services and anycast. • Service module injects or removes (time out) the route based on the health of the back-end servers (checked with Layer 3–7 probes). All rights reserved. Cisco Catalyst 6500 Series Switch ACE Redistribution into a routing protocol RHI Catalyst 6500 MSFC © 2012 Cisco and/or its affiliates. and the routes advertised by RHI can therefore be put into appropriate VRF routing tables. © 2012 Cisco Systems. with configurable metrics. Both the Cisco ACE Module and the Cisco IOS Software are VRF-aware. The Cisco IOS Software image on the supervisor takes the information from the RHI advertisement and creates a static route in its routing table. This static route is then redistributed into the routing protocol and advertised to the rest of the network.

certcollecion. it will appear as an OSPF external route in the OSPF routing domain. Anycast can be used both for IPv4 and IPv6.100. You must be able to advertise the same routing prefix at multiple sites (10.0—3-19 RHI can be used to provide additional redundancy for the service. The solution to this problem is the OSPF NSSA area. Traffic to that destination IP address is then routed to the closest server.1.0 © 2012 Cisco Systems. which typically is not allowed in the Internet. One of the ways to offer stateless services in a scalable way is anycast.3 Data Center B: Preferred Location for VIP 10. All rights reserved. Note An example of an anycast service deployed in the global Internet is the root Domain Name System (DNS) servers where public IP addresses are used. In this example.net Because it is a redistributed route.1. anycast is implemented in such way that you advertise a host route to the same IP address in multiple points of the network. Root DNS servers are an allowed exception in the Internet. it cannot originate external routes. DCUFD v5.100. Anycast can be used in global enterprise networks.1.3/32 in this example).3 © 2012 Cisco and/or its affiliates. Keep in mind that these scenarios are possible in enterprise data center networks. where servers always answer the same type of requests with the same answers. . 3-92 Designing Cisco Data Center Unified Fabric (DCUFD) v5.100. Very High Cost Secondary Low Cost Always Preferred Data Center A: Backup Location for VIP 10. If the data center routing domain is configured as an OSPF stub area. Inc.

100.100.1. The figure shows how RHI manages a failure of the primary site.1. Inc. Service continuity is guaranteed by the routing protocol.net Very High Cost Low Cost Data Center A: Backup Location for VIP 10. Note © 2012 Cisco Systems.0—3-20 If the service fails. Data Center Topologies 3-93 . Clients are able to reach the backup virtual IP at Data Center A as soon as the routing protocols involved in the network converge. and a path to another host route (advertising the same IP address) is chosen. Data Center B: Preferred Location for VIP 10.3 DCUFD v5. the host route is not advertised at that point in the network anymore. However.certcollecion. this convergence happens very quickly. All rights reserved. In general. Connections that were in progress to Data Center B when it failed are lost. Data Center A accepts new connections very quickly.3 © 2012 Cisco and/or its affiliates.

Routing functions in the network direct client requests to the server farm that is closest to the client.certcollecion.100.3 DCUFD v5. If either location fails.3 © 2012 Cisco and/or its affiliates.0—3-21 When RHI is used with the same costs. This route is redistributed into the routing protocol and advertised at multiple sites.0 © 2012 Cisco Systems. 3-94 Designing Cisco Data Center Unified Fabric (DCUFD) v5. . true anycast service is offered to the clients. The load balancer injects the route to a particular server IP address in the routing table. Data Center B: VIP 10. RHI is also used to provide load-balanced service based on the proximity of the client to one of the server farms. The figure shows both locations advertising the same virtual IP via RHI. the routing protocols in the network quickly converge and the remaining location receives all client requests. Inc.100.1.1.net Low Cost Low Cost Data Center A: VIP 10. All rights reserved.

A host with the same device ID can be reached at another location ID. Services enabled by using LISP include the following:  IP mobility with LISP for virtual machine mobility (Cisco LISP virtual machine [VM] mobility)  IPv6 enablement © 2012 Cisco Systems. support data center virtual machine mobility.1. Internet Global Internet domain 1. All rights reserved.1 Location changes → Endpoint IP address does not change Location ID IP address changes © 2012 Cisco and/or its affiliates.1. LISP implements a new semantic for IP addressing by creating two new namespaces:  Endpoint identifiers (EIDs). LISP uses a map-and-encapsulate routing model in which traffic destined for an EID is encapsulated and sent to an authoritative RLOC. Inc.3. • LISP provides separation between the device ID and the device location • LISP provides for high availability indirectly.1 1.2. based on the results of a lookup in a mapping database.0—3-23 LISP brings high availability indirectly.net High Availability Using LISP This topic describes how to design high availability of services using LISP.1. LISP brings a whole new concept in IP routing that enables enterprises and service providers to simplify multihoming. Data Center Topologies 3-95 .4.4 1.1 2.1.certcollecion. It is primarily a protocol that can provide separation between the identity and location for both IPv4 and IPv6. Note The server virtualization solution provides high availability. LISP helps to make it transparent. which are assigned to devices (primarily routers) that make up the global routing system In the current Internet routing and addressing architecture. DCUFD v5. which are assigned to end hosts  Routing locators (RLOCs).1.1. and reduce operation complexities. facilitate scalable any-to-any WAN connectivity. rather than directly to the destination EID.4.2 Location changes → IP address changes Internet Global Internet domain with LISP 3.3.3 4. the IP address is used as a single namespace that simultaneously expresses two functions about a device: its identity and how it is attached to the network.2.

1 LISP Site EID Namespace RLOC Namespace (Internet) LISP xTR • LISP Infrastructure LISP xTR • LISP Site Devices .1. DCUFD v5. These are the LISP internetworking devices:  Proxy ITR (P-ITR): This device is a LISP infrastructure device that provides connectivity between non-LISP sites and LISP sites.1. Note Customer edge (CE) devices can implement both ITR and ETR functions. but also allows LISP sites to see LISP ingress traffic engineering benefits from non-LISP traffic.2 • LISP Internetworking Devices . Proxy ETR (P-ETR): This device is a LISP infrastructure device that allows IPv6 LISP sites without native IPv6 RLOC connectivity to reach LISP sites that only have IPv6 RLOC connectivity. A P-ITR advertises coarse-aggregate prefixes for the LISP EID namespace into the Internet.1.Ingress / Egress Tunnel Router (ITR / ETR) . This process not only facilitates internetworking between LISP and non-LISP sites. Designing Cisco Data Center Unified Fabric (DCUFD) v5.Map Resolver (MR) .1 100.certcollecion. The ETR de-encapsulates LISP packets or delivers them to local EIDs at the site. The PITR then encapsulates and forwards this traffic to LISP sites.  Egress Tunnel Router (ETR): This device is deployed as a LISP site edge device. The ITR LISP encapsulates packets to remote LISP sites or the natively forwards packets to non-LISP sites.net  Multitenancy and large-scale VPNs  Prefix portability and multihoming Non-LISP Site LISP Infrastructure MS MR P-xTR ALT LISP Site EID Namespace 1. Note  3-96 The best location for an ITR or PITR is in the service provider environment. It receives packets from core-facing interfaces (the Internet).Proxy Ingress / Egress Tunnel Router (P-xTR) © 2012 Cisco and/or its affiliates. It receives packets from site-facing interfaces (internal hosts). . All rights reserved.0—3-24 The LISP site devices are as follows:  Ingress Tunnel Router (ITR): This device is deployed as a LISP site edge device.Alternate Topology (ALT) 1. In addition.1. which attracts non-LISP traffic that is destined to LISP sites. Inc.1. This type of CE device is referred to as a PxTR.Map Server (MS) .0 © 2012 Cisco Systems. the P-ETR can also be used to allow LISP sites with Unicast Reverse Path Forwarding (uRPF) restrictions to reach non-LISP sites.1.

An authentication key must match the key that is configured on the ETR. it forwards Map-Requests to the ALT. The MS also receives Map-Request control packets from the ALT.  Map Resolver (MR): This device is deployed as a LISP infrastructure device. Inc. you can use ALT-only devices with basic router hardware or other off-the-shelf devices that can support BGP and GRE. These are the LISP infrastructure devices:  Map Server (MS): This device is deployed as a LISP infrastructure component. It receives encapsulated Map-Requests from ITRs.  Alternative Topology (ALT): This is a logical topology and is deployed as part of the LISP infrastructure to provide scalable EID prefix aggregation. Because the ALT is deployed as a dual-stack (IPv4 and IPv6) Border Gateway Protocol (BGP) over Generic Routing Encapsulation (GRE) tunnels. It must be configured to permit a LISP site to register to it by specifying for each LISP site the EID prefixes for which registering ETRs are authoritative. it injects aggregates for the EID prefixes for registered ETRs into the ALT. which it then encapsulates to the registered ETR that is authoritative for the EID prefix that is being queried. This type of CE device is referred to as an xTR. © 2012 Cisco Systems.net Note CE devices can implement both P-ITR and P-ETR functions. Data Center Topologies 3-97 . When the MS is configured with a service interface to the LISP Alternate Topology (ALT). An MS receives Map-Register control packets from ETRs.certcollecion. The MR also sends Negative MapReplies to ITRs in response to queries for non-LISP addresses. When configured with a service interface to the LISP ALT.

1 ETR ETR 5 10.16.1.3.16. The LISP mapping database informs the branch router how to get to the one (or more) available addresses that can get it to the destination.0.1→ 10.com MR EID-to-RLOC mapping EID-Prefix: 10.0.16.16.3.certcollecion.1→ 10.16.10. .1→ 172.0/24 Locator-set: 172.16.3.1 172.1 as the source address.com. The LISP mapping database can return priority and weight as part of this lookup.1. All rights reserved.1. with the 172.1.cisco. Traffic is remote.0. The source endpoint performs a DNS lookup to find the destination: hostname d.0.3. Inc.0.com A 10.1.0.1.3.16. to help with traffic engineering and shaping.cisco.1.0 © 2012 Cisco Systems. The 10.0.cisco. so traffic is sent to the branch router.2. removes the encapsulation.0.10. 5.0.1 Inner header 172.1 priority 1 172.1.1 Outer header 172. 2.0.1 source address (ITR).1 host is located behind the 172.cisco.16.1 ALT ITR 10.0—3-25 The figure describes the flow of a LISP packet: 1.1 d.2.1.1 LISP Sites 10.1 as the destination address.16.0.1. 10.1 DNS Entry: d.0. The branch router does not know how to get to the specific address of the destination. The branch router performs an IP-in-IP encapsulation and transmits the data out of the appropriate interface based on standard IP routing decisions.0/24 DCUFD v5. so the packet is encapsulated into another IP header.0.0.16.2.1 destination address (ETR).1 4 10.1.1 RLOC. The receiving LISP-enabled router receives the packet. 4. 3-98 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0. and 172.16.16.1.3.1 172.0/24 a.10.net LISP Infrastructure MS 1 LISP Site 2 10. 3.1 3 10.1 172.1 priority 1 RLOC Namespace (Internet) 172. and 10.3.1→ 10. The DNS server within the LISP domain replies with the IP address: 10.0.1.com 10. and forwards the packet to the final destination (10.0. but it is LISP-enabled so it performs a LISP lookup to find a locator address.1. The IP packet has 10.4.1).0/24 © 2012 Cisco and/or its affiliates.1.0.1.1.

16.4 10.0.0/24 © 2012 Cisco and/or its affiliates.1 RLOC Namespace (Internet) ETR ALT LISP Infrastructure ETR LISP Site LISP Site 10.0. Note When a VM is moved. Data Center Topologies 3-99 . in another network. VM Mobility with LISP The first use of LISP is to support virtual machine mobility. LISP VM.0/24 10. LISP updates the Map Servers and indicates that the EID of that particular virtual machine has a new RLOC (location). and so on. which redirects traffic to the new locations without causing any disruption to the underlying routing. With LISP.1. The VM is. The scenario involves providing mobility of virtual machines between data centers. after all. Inc. multiple virtualized infrastructures. its access to its storage (disks) is also managed by the system so that the VM accesses the storage locally. running in another data center.0. When deployed at the first hop router. but not its EID.certcollecion.0—3-26 There are a couple of LISP use cases that are relevant for data centers.3.2. © 2012 Cisco Systems.1 MR 172. DCUFD v5. you can deploy virtual machine mobility between data centers. Another challenge is how to manage incoming [production] data flows from the network and how to preserve existing open data sessions. When a new xTR detects a move. it updates the mappings between EIDs and RLOCs. All rights reserved. its RLOC changes. IP prefixes of roaming devices within the range of allowed prefixes are referred to as the dynamic EIDs. Moving virtual machines between data centers is done over a dedicated Layer 2 network (VLAN).10.1.0. LISP VM mobility compares the source IP address of host traffic received at the LISP router against a range of prefixes that are allowed to roam.1. when the VM changes its location. The LISP Tunnel Router (xTR) dynamically detects VM moves based on data plane events. By using the LISP routing infrastructure.net VM Mobility • Move detection on ETR • VM maintains its IP address • Dynamic update of EID-to-RLOC mapping • Traffic redirected on ITR/P-ITR to the correct ETR MS LISP Site EID-to-RLOC mapping ITR 10.0. mobility provides adaptable and comprehensive first hop router functionality to service the IP gateway needs of the roaming devices that relocate.4 10. and you must route IP traffic correctly to reach the VM at its new location.

certcollecion. one in each data center.10. The Cisco Nexus Operating System (NX-OS) software resynchronizes with the MS/MR servers so that they are aware that a VM is available behind another ETR. 10.0.RLOC mapping.1.net Note The LISP ETR implementation on the Cisco Nexus 7000 monitors the MAC addresses in the local subnets and is able to detect that a MAC address of a VM that has moved is not available at this location anymore. .1.16. HSRP must be blocked between sites • Traffic redirected on ITR/P-ITR to the correct ETR MS LISP Site ITR 10. two different subnets exist. or in case of cloud bursting. which allows the virtual servers to be mobile between the data centers with ingress path optimization.0—3-27 The figure shows LISP VM mobility in an extended subnet between two enterprise-class data centers.0.0. All rights reserved.0/24 © 2012 Cisco and/or its affiliates. 3-100 Designing Cisco Data Center Unified Fabric (DCUFD) v5. The subnets and VLANs are extended from the West data center (West DC) to the East data center (East DC) using Cisco OTV or VPLS.1 MR EID-to-RLOC mapping 172. and it updates the LISP mapping system with its current EID.0. this approach poses the challenge of ingress path optimization. VM Mobility across Extended Subnet • Coordinated VM location update sent to MS • Both sites must have identical IP default gateway configuration.0. Inc.3. and subnet and VLAN extension techniques such as Cisco Overlay Transport Virtualization (OTV) and Virtual Private LAN Services (VPLS) are not deployed. This mode can be used when an enterprise IT department needs to quickly start disaster recovery facilities when the network is not provisioned for virtual server subnets.1. In traditional routing. relocate EIDs across organization boundaries.4 10. In this case.4 10. LISP VM mobility provides transparent ingress path optimization by detecting the mobile EIDs (virtual servers) dynamically. The figure shows LISP VM mobility across subnets between two enterprise-class data centers.1 RLOC Namespace ALT LISP Infrastructure (Cisco OTV) ETR ETR LISP Site LISP Site 10.0 © 2012 Cisco Systems.0/24 DCUFD v5.1. or any other LAN extension technology.

and Easy Virtual Network (EVN).1.12.1 172.3.0—3-28 As a map-and-encapsulate mechanism.1 ITR 172.1.16.1.128 ETR 10.net Multitenant Environments LISP Site LISP Site 10.1.0 10. Data Center Topologies 3-101 . Color coding is encoded in the LISP control plane as stipulated in the standard definition of the protocol. LISP is well suited to manage multiple virtual parallel address spaces.2. LISP mappings can be “color coded” to give VPN and tenant semantics to each prefix managed by LISP. VRF-Lite. Inc. MR EID-to-RLOC mapping ALT DCUFD v5.0 ITR ITR 172.0 LISP Site 10.16.1 172.0.1.0.2. The LISP multitenancy solution is particularly attractive because it is natively integrated with the mobility.3.0 LISP Infrastructure MS 10.16. All rights reserved. scalability. and the LISP data plane has the necessary fields to support the segmentation of traffic into multiple VPNs.16.certcollecion. Virtual routing and forwarding (VRF) instances are used as containers to cache mapping entries and to provide transparent interoperability between the LISP segmentation solution and more traditional VRF interconnection techniques such as Multiprotocol Label Switching (MPLS) VPNs. The LISP multitenancy solution is not constrained by organizational boundaries.11.0 10.10.16.0.1. allowing users to deploy VPNs that can cut across multiple organizations to effectively reach any location and extend the network segmentation ubiquitously. © 2012 Cisco Systems. allowing all the various services to be enabled with the deployment of a single protocol.1 RLOC Namespace 172.1. and IPv6 enablement functions that LISP offers.0 LISP Site © 2012 Cisco and/or its affiliates.

16.cisco.1 priority 1 IPv4 RLOC Namespace 2001:db8:1:3::1→ 2001:db8:1:1::1 172.com AAAA 2001:db8:1:1::1 ITR 172.1 172. Inc.1 Inner header Outer header EID-Prefix: 2001:db8:1:1::1/64 Locator-set: 172.16. LISP can be used to provide other benefits.certcollecion. DCUFD v5. At the same time.2.com DNS Entry: d6. IPv6 traffic is encapsulated in IPv4 packets that can travel across the IPv4 Internet.1 172. The EID namespace runs IPv6.16.1 ETR d6.1 priority 1 172.16.1→ 172.0 © 2012 Cisco Systems.net IPv6 Enablement IPv6 EID Namespace LISP Site 2001:db8:1:3::1 2001:db8:1:3::1→ 2001:db8:1:1::1 2001:db8:1:3::/64 a6.1.com 2001:db8:1:3::1→ 2001:db8:1:1::1 LISP Site 2001:db8:1:1::1 2001:db8:1:1::/64 © 2012 Cisco and/or its affiliates. .16.10.10. such as VM mobility.1. while the RLOC namespace runs IPv4. 3-102 Designing Cisco Data Center Unified Fabric (DCUFD) v5.2.0—3-29 LISP can be used as a technology to extend your IPv6 “islands” across a commodity IPv4 network.16. All rights reserved.cisco. LISP provides seamless connectivity for IPv6-enabled data centers across IPv4-enabled networks.1.cisco.16.

© 2012 Cisco and/or its affiliates. Inc. do not need first-hop redundancy protocols. • RHI is a mechanism that can be used to advertise availability of a service into the routing domain. all have the same cost and are considered to distribute the load. such as Cisco Nexus switches with vPC.0—3-30 Data Center Topologies 3-103 . and support for multitenant environments. such as the Cisco Catalyst 6500 VSS. • High availability for the IP protocol is achieved by using several technologies: IP routing protocols and first-hop redundancy protocols.certcollecion. IPv6 implementation. Semicoupled nonclustered devices. Both devices forward traffic upstream. • LISP is an emerging protocol that has many applications within data centers. • When core and aggregation switches are connected with multiple links. including providing for easy virtual machine mobility. • Clustered devices. All rights reserved. DCUFD v5. need first-hop redundancy protocols to provide high availability. but their behavior is slightly modified. © 2012 Cisco Systems.net Summary This topic summarizes the primary points that were discussed in this lesson.

net 3-104 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems. Inc. .certcollecion.

The purpose of interconnects is to have a link for data replication and for workload mobility. over various underlying technologies. These interconnects are crucial for enterprises who want to have globally available data centers to provide continuous services. This ability includes being able to meet these objectives:  Identify the reasons for data center interconnects  Describe data center interconnect technologies  Design data center interconnects using Cisco OTV  Describe storage replication technologies . Objectives Upon completing this lesson. These links are typically of high bandwidth. you will be able to design data center interconnects for both data traffic and storage traffic.net Lesson 6 Designing Data Center Interconnects Overview This lesson explains transport options for data center interconnections (DCIs) and main reasons to implement DCIs.certcollecion.

One of the main reasons to implement data center interconnections are business needs. This situation represents the optimum use of resources. The same application runs concurrently in multiple data centers. In such a case. .net Reasons for Data Center Interconnects This topic describes how to identify the reasons for data center interconnects. data centers are concurrently active for a limited amount of time. Business needs may also dictate that you to use an active-active data center.0 © 2012 Cisco Systems. 3-106 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc. You should always try to lower the probability of a disaster scenario by migration of the workload before an anticipated disaster.certcollecion. where multiple data centers are active at the same time. which may require that you use a disaster recovery site that is activated after the recovery.

In the case of an active-active data center. You should always try to lower the probability of a disaster scenario by adjusting the application load. Data Center Topologies 3-107 .net Interconnection of data centers may require replication of storage to the disaster recovery site. and load balancing. For this replication to be possible. WAN connectivity. Note © 2012 Cisco Systems. you may need WAN connectivity at the disaster recovery site. Inc. The Cisco global server load balancing (GSLB) solution is the Cisco Application Control Engine (ACE) Global Site Selector. use global load balancing to manage requests and traffic flows between data centers.certcollecion.

. 3-108 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Servers at the disaster recovery site are started after primary site failure. An example of high availability in such a case is duplicated storage.0 © 2012 Cisco Systems. Local and global load balancing facilitates seamless failover. You should always try to lower the probability of a disaster scenario so that you experience minimum downtime. Inc.certcollecion. You can also use a temporary or permanent stretched cluster between sites.net An important aspect of designing data center interconnections is the requirement for high availability.

IP Security [IPsec]): — Plain Layer 3 connectivity is needed — Non-IP traffic must be tunneled © 2012 Cisco Systems. SONET or SDH. Link Aggregation Control Protocol [LACP]. or dense wavelength division multiplexing (DWDM): —    Layer 1 connectivity supports any Layer 2 and Layer 3 technology or protocol Pseudowires: — A mechanism providing connectivity above Layer 1 and below Layer 2 that performs emulation and adaptation of Layer 1 mechanisms to transport Layer 2 payload — Point-to-point links implemented with Ethernet (rarely ATM or Frame Relay) — Dictates packet framing — End-to-end Layer 2 control protocols (Spanning Tree Protocol [STP]. Data Center Topologies 3-109 . Link Layer Discovery Protocol [LLDP]) Virtual Private LAN Services (VPLS): — Layer 2 connectivity — Emulates a switched LAN — Service provider switches are visible to the end user — No end-to-end STP or LACP IP-based solutions (Cisco Overlay Transport Virtualization [OTV]. Inc.certcollecion.net There are several options to interconnect data centers:  Dark fiber. Multiprotocol Label Switching [MPLS] VPN.

such as a demilitarized zone (DMZ). Inc.certcollecion. storage — Implementation: Multiple virtual routing and forwarding (VRF) and point-to-point VLANs. application. virtual machine (VM) mobility — Implementation: Depends on the available transport technology Designing Cisco Data Center Unified Fabric (DCUFD) v5. management. . or MPLS VPN Layer 2 interconnect: — Stretched VLANs (bridging across WAN) — Business requirements: stretched cluster.0 © 2012 Cisco Systems. database.net There are several DCI network-side design options:  Layer 3 (IP) interconnect: —   3-110 Traditional IP routing design Layer 3 interconnect with path separation: — Multiple parallel isolated Layer 3 interconnects — Segments that are strictly separated.

The table presents Layer 2 DCI transport technologies and their implementation options.certcollecion. © 2012 Cisco Systems.net Data Center Interconnect Technologies This topic describes data center interconnect technologies. Data Center Topologies 3-111 . Inc.

several different technologies must be combined.net EoMPLS Site A Site B MPLS VPLS Site B Site A Dark Fiber Site B Site A MPLS A Site C © 2012 Cisco and/or its affiliates. In order to facilitate DCIs.certcollecion. . Inc. The main disadvantage of those topologies is often complex adding or removing of sites.0—3-11 Traditional Layer 2 VPNs use either tunnels or pseudowires for DCIs. which creates complex configuration. Site D B Site C DCUFD v5. All rights reserved. 3-112 Designing Cisco Data Center Unified Fabric (DCUFD) v5.0 © 2012 Cisco Systems.

net Cisco OTV This topic describes how to design data center interconnects using Cisco OTV. © 2012 Cisco Systems. Cisco OTV configuration is composed of several configuration lines on each participating Cisco OTV device and no additional configuration is needed when adding new sites.certcollecion. Cisco OTV is a MAC-in-IP Layer 2 tunneling technique that is used to extend selected VLANs between different sites across the IP core. Each site has its own STP root bridge. Inc. STP is confined to each site with bridge protocol data unit (BPDU) filtered and with it preventing any failure flooding. No IS-IS configuration is needed during Cisco OTV configuration. Robust control plane operation is achieved using Intermediate System-to-Intermediate System (IS-IS) as the underlying protocol. Data Center Topologies 3-113 . Multihoming is natively supported and used without any additional configuration.

certcollecion. reduce flooding traffic. A packet for which the destination is known is encapsulated and sent as unicast. . which is based on flooding of traffic with unknown or unlearned destinations.net Cisco OTV technology features several enhancements of traditional.  Dynamic encapsulation: Dynamic encapsulation replaces complex full-mesh topologies. Layer 2 VPN data center interconnect technologies: 3-114  Control plane-based MAC address learning: Control plane-based learning replaces MAC address learning. and improve the efficiency of the DCI. The control plane uses IS-IS to send reachability information for MAC addresses. Inc. Designing Cisco Data Center Unified Fabric (DCUFD) v5. Cisco OTV also splits the STP domain so that every site has its local STP domain. and no high-level loops are possible through the DCI.  Native built-in multihoming: Native built-in multihoming greatly simplifies Layer 2 DCI designs and eliminates complex configurations that need to take into account all possible equipment and link failures.0 © 2012 Cisco Systems.

No spanning tree on overlay interface • ARP ND cache: ARP snooping reduces intersite ARP traffic • Site VLAN: VLAN used for edge device discovery .  Join interface: This interface is a WAN-facing uplink interface on an edge device and is a routed interface.certcollecion.  Edge device: This device is an IP host.  ARP neighbor discover (ND) cache: Address Resolution protocol (ARP) snooping reduces intersite ARP traffic. Data Center Topologies 3-115 .  Authoritative edge device: This edge device performs internal-to-overlay forwarding for a VLAN. This is a logical multiple access and multicast-capable interface.Edge device is an IP host • Overlay interface: Virtual interface with Cisco OTV configuration OTV OTV L3 L2 .Regular Layer 2 interface Cisco OTV • Join interface: WAN-facing uplink interface on edge device .  Site VLAN: This VLAN is used for edge device discovery.Routed interface . All rights reserved.net • Edge device: Device performing Ethernet-to-IP encapsulation Transport Infrastructure* • Internal interface: Data center-facing interface on edge device .0—3-15 The Cisco OTV terminology is as follows:  Edge device: This device performs Ethernet-to-IP encapsulation. It is a regular Layer 2 interface.  Overlay interface: This interface is a virtual interface with Cisco OTV configuration.Must be configured on internal interfaces • Authoritative edge device: Edge device performing internal-to-overlay forwarding for a VLAN © 2012 Cisco and/or its affiliates. and must be configured on internal interfaces.  Internal interface: This interface is a data center-facing interface on an edge device. There is no spanning tree on the overlay interface. This allows a Cisco OTV device to find another Cisco OTV device on the same site. Inc. © 2012 Cisco Systems.Logical multiaccess multicast-capable interface . such as a VLAN. DCUFD v5.

From a multicast perspective. all control traffic continues to be exchanged between sites using multicast packages. as they were hosts (no PIM on EDs) • Cisco OTV hellos and updates are encapsulated in the multicast group © 2012 Cisco and/or its affiliates. All rights reserved. Multicast must be supported by transport infrastructure and it is configured by the infrastructure owner (either an enterprise or service provider).0—3-16 The Cisco OTV multicast control plane uses IP multicast on transport infrastructure.certcollecion. . The End Result • Adjacencies are maintained over the multicast group • A single update reaches all neighbors DCUFD v5. while data traffic uses other multicast groups. There is no PIM configuration on Cisco OTV edge devices. edge devices are multicast hosts. A single Protocol Independent Multicast sparse mode (PIM-SM) or Bidirectional PIM (BIDIR-PIM) group is used for control plane traffic. 3-116 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Cisco OTV site adjacency is established by exchanging multicast messages between edge devices on different sites. Once adjacency is established.0 © 2012 Cisco Systems. Inc.net • Reduced amount of traffic across control plane • Topology used in initial Cisco OTV design Multicast-enabled Transport OTV Control Plane OTV OTV OTV Control Plane Cisco OTV IP A IP B West East The Mechanism • Edge devices join a multicast group in the transport.

certcollecion. but a service that can run on any edge device. © 2012 Cisco Systems. the Cisco OTV unicast control plane can be used. all traffic between edge devices is IP unicast. The adjacency server is not a separate device on the network. • • Cisco OTV hellos and updates are encapsulated in IP and unicast to each neighbor. edge devices announce their presence to a configured adjacency server. • All signaling must be replicated for each neighbor. Neighbor discovery is achieved by querying the adjacency server on the Cisco OTV cloud using unicast packets. when the service provider does not support relaying multicast traffic). • Neighbor discovery is automated by the adjacency server. Data traffic must also be replicated at the head end. © 2012 Cisco and/or its affiliates. Each Cisco OTV device needs to replicate each control plane packet and unicast it to each remote Cisco OTV device that is part of the same logical overlay. so there is no additional configuration required by the transport network owner. Data Center Topologies 3-117 . All rights reserved. In the Cisco OTV unicast control plane. DCUFD v5.net • Ideal for connecting two or three sites • Multicast transport is the best choice for a higher number of sites Unicast-only Transport OTV Control Plane OTV OTV OTV Control Plane Cisco OTV IP A IP B West East The Mechanism The End Result • Edge devices register with an adjacency server edge device. Instead of announcing themselves across a multicast group.0—3-17 When the transport network does not support multicast or the number of Cisco OTV connected sites is low. Note When using Cisco OTV over a unicast control plane (that is. Inc. this approach has a cost. • Edge devices receive a full list of Neighbors (oNL) from the adjacency server.

certcollecion. The BPDUs stop here L2 DCUFD v5. There are no changes to the STP topology. a failure on any STP site does not influence traffic on any other site. • This functionality is built into Cisco OTV and no additional configuration is required.0 © 2012 Cisco Systems. 3-118 Designing Cisco Data Center Unified Fabric (DCUFD) v5.net • Cisco OTV is site transparent. All rights reserved. Because STP domains are separated. Cisco OTV • Each site keeps its own STP domain. Inc. each Cisco OTV site is a separate STP domain with its own root bridge switch. so possible damage is contained within single site. Consequently. Loop prevention on the Cisco OTV cloud itself is performed using IS-IS loop prevention. • An edge device will send and receive BPDUs only on the Cisco OTV internal interfaces. .0—3-18 The Cisco OTV cloud prevents STP traffic from flowing between edge devices. Edge devices only send and receives BPDUs on Cisco OTV internal interfaces. OTV OTV L3 The BPDUs stop here © 2012 Cisco and/or its affiliates.

4. Once on the new site. the East VM will send a Gratuitous ARP (GARP) frame to authoritative edge device (AED) for its VLAN. All rights reserved. © 2012 Cisco Systems. VM is moved from the West site to the East site due to VMware VMotion activity.certcollecion. 2. 3.net OTV 1 VM Moves OTV MAC X West MAC X MAC X East Cisco OTV OTV MAC X ESX OTV ESX MAC X MAC X AED 4 OTV West AED advertises MAC X with a metric of zero OTV MAC X East MAC X MAC X OTV MAC X ESX Cisco OTV OTV ESX MAC X MAC X MAC X AED AED 3 AED detects MAC X is now local © 2012 Cisco and/or its affiliates. Server originates a GARP frame 2 DCUFD v5. Inc. Data Center Topologies 3-119 . The AED advertises the VM MAC with metric 0 to all edge devices on the Cisco OTV overlay. The AED detects that the VM MAC is now local and sends a GARP frame to all Layer 2 switches on the local site.0—3-19 Cisco OTV is ideal for solving issues with MAC mobility that might occur during migration of live VMs between sites: 1.

The AED on the East site will forward a GARP broadcast across the Cisco OTV overlay to other edge devices. The edge device on the West site will receive an advertisement with a better metric for the VM MAC and change the VM MAC from a local to a remote address. .net 5 OTV West Edge devices in West site see MAC X advertisement with a better metric from East site and change them to remote MAC address. Inc. The AED on the West site will forward a GARP broadcast into the site and the Layer 2 switches will update their content-addressable memory (CAM) tables with the local AED as the target for the VM MAC. AED DCUFD v5. OTV MAC X East MAC X MAC X MAC X Cisco OTV OTV ESX OTV ESX MAC X MAC X MAC X AED AED 7 AED in West site forwards the GARP into the site and the Layer 2 switches update their CAM tables AED OTV OTV MAC X MAC X West East MAC X MAC X MAC X OTV OTV OTV ESX ESX MAC X MAC X AED MAC X 6 AED in site East forwards the GARP broadcast frame across the overlay © 2012 Cisco and/or its affiliates. 3-120 Designing Cisco Data Center Unified Fabric (DCUFD) v5. All rights reserved.0—3-20 5. 7.certcollecion.0 © 2012 Cisco Systems. 6.

Inc. • Cisco OTV elects one of the edge devices to be the authoritative edge device (AED) for a subset of the OTV extended VLANs: . AED OTV OTV Internal peering for AED election © 2012 Cisco and/or its affiliates. Cisco OTV will elect one of edge device to be the AED for a set of VLANs.Higher system ID manages odd-numbered VLANs. © 2012 Cisco Systems. • VLANs are split between the Cisco OTV edge device. Even-numbered VLANs are managed by the edge device with the lower IS-IS system ID.Election is done within the site using the Cisco OTV site VLAN. but will be in future Cisco OTV releases. All rights reserved. DCUFD v5.certcollecion.0—3-21 Multihoming is configured automatically at sites where more than one edge device is connected to the Cisco OTV overlay. VLANs are split between edge devices and each edge device is responsible for its own VLAN traffic across the Cisco OTV overlay.net • The detection of multihoming is fully automated and it does not require additional protocols and configuration. This election is done automatically over the Cisco OTV site VLAN.Lower system ID manages even-numbered VLANs. Data Center Topologies 3-121 . and each edge device is responsible for its own VLAN traffic: . Oddnumbered VLANs are managed by the edge device with the higher IS-IS system ID. VLAN allocation is currently not configurable. which is the VLAN that is configured for communication between edge devices on site. .

which is available with the Cisco Nexus 7000 platform: — A dedicated VDC should be configured for Cisco OTV. and less intrusive solution is the use of VDCs. • An alternative. If network design requires an SVI interface on a VLAN that is extended across Cisco OTV. All rights reserved.net • Guideline: The current Cisco OTV implementation on the Cisco Nexus 7000 enforces the separation between SVI routing and OTV encapsulation for a given VLAN.  The SVI interface can be in a separate virtual device context (VDC).The aggregation VDC used to provide SVI routing support L3 L2 OTV VDC OTV VDC Aggregation © 2012 Cisco and/or its affiliates. following solutions can be used: 3-122  The SVI interface can be on a separate physical device that is connected to that VLAN. Designing Cisco Data Center Unified Fabric (DCUFD) v5. which are available with the Cisco Nexus 7000 platform: .certcollecion. . DCUFD v5. — Another VDC should provide SVI routing service.A dedicated Cisco OTV VDC to perform the OTV functionalities .0 © 2012 Cisco Systems.0—3-22 Current Cisco OTV implementation requires that the Cisco OTV join interfaces are physical interfaces on an M-family module. • This separation can be achieved by having two separate devices to perform these two functions. switch virtual interface (SVI) interface is not available. Due to that restriction for VLANs that are extended across the Cisco OTV overlay. Inc. cleaner.

Data Center Topologies 3-123 . In addition. These configuration examples are listed here only to show how simple it is to design and deploy Cisco OTV. in addition to enabling Cisco OTV functionality.5S OTV functionality  Advanced Services Package (LAN_ADVANCED_SERVICES_PKG) to enable VDCs Cisco OTV configuration on each site consists of only a few commands. The following licenses are required to deploy Cisco OTV using Cisco Nexus 7000 Series Switches:  Transport Services Package (LAN_TRANSPORT_SERVICES_PKG) to enable Cisco IOS XE 3. only the overlay interface needs configuration. only Cisco OTV site VLAN configuration is needed. Note © 2012 Cisco Systems.5S — Advanced switch image Note This allows you to terminate Cisco OTV on different hardware in the primary and in the secondary data center. On the switch global level. Inc.net Cisco OTV functionality is delivered on different hardware platforms:   Cisco Nexus 7000 Series Switches: — Initial hardware platform with Cisco OTV support — Cisco OTV supported on M family line cards — Licensed feature Cisco ASR 1000 Series Aggregation Services Routers: — Cisco OTV (Phase 1) supported on all platforms with Cisco IOS XE 3.certcollecion. in case you do not have the same equipment available.

1.1. Inc.1 otv data-group 232. In addition.0/24 otv extend-vlan 100-150 Cisco OTV Unicast Configuration Example On the switch global level.1.0/24 otv extend-vlan 100-15  South device: feature otv otv site-vlan 99 interface Overlay1 description SOUTH-DC otv join-interface Po16 otv control-group 239.1. in addition to enabling Cisco OTV functionality.1.0 © 2012 Cisco Systems.0/24 otv extend-vlan 100-150  East device: feature otv otv site-vlan 99 interface Overlay1 description EAST-DC otv join-interface e1/1.1 otv data-group 232.certcollecion.1.net The required input parameters to design a Cisco OTV deployment are the following:  VLANs to be extended between sites  Join interfaces: Interfaces that join the Cisco OTV overlay  Site VLANs: VLANs local to a site to control multihoming  Multicast group address for the Cisco OTV control plane or unicast addresses of the Cisco OTV peers if the service provider does not support multicast.  West device: feature otv otv site-vlan 99 interface Overlay1 description WEST-DC otv join-interface e1/1 otv adjacency-server local 3-124 Designing Cisco Data Center Unified Fabric (DCUFD) v5.1.1. Cisco OTV Multicast Configuration Example  West device: feature otv otv site-vlan 99 interface Overlay1 description WEST-DC otv join-interface e1/1 otv control-group 239.192.10 otv control-group 239. . only the overlay interface needs configuration.1.192. Cisco OTV site VLAN configuration is needed.192.1 otv data-group 232.

10 otv adjacency-server 10.1 otv extend-vlan 100-150 © 2012 Cisco Systems.1.net otv extend-vlan 100-150  South device: feature otv otv site-vlan 99 interface Overlay1 description SOUTH-DC otv join-interface Po16 otv adjacency-server 10.1. Inc.certcollecion. Data Center Topologies 3-125 .1.1 otv extend-vlan 100-150  East device: feature otv otv site-vlan 99 interface Overlay1 description EAST-DC otv join-interface e1/1.1.

certcollecion. . In this approach. data is backed up with the use of standard backup applications. the application can be protected from problems with the backup process by the use of techniques such as snapshots and split mirrors. All rights reserved. It is sometimes known as remote vaulting. • Reduces recovery time objective: .0 © 2012 Cisco Systems. Local Data Center Remote Data Center WAN Backup © 2012 Cisco and/or its affiliates. but the backup site is located at a remote location. DCUFD v5. FCIP is an ideal solution for remote backup applications for several reasons: 3-126  FCIP is relatively inexpensive.  Backup applications are sensitive to high latency. Data is backed up to a remote data center: • Backup is accessible directly over the MAN or WAN. such as Veritas NetBackup or Legato Celestra Power. Designing Cisco Data Center Unified Fabric (DCUFD) v5. reliability. Backup is accessible directly over the WAN or metropolitan-area network (MAN).Much faster than standard offsite vaulting (trucking in tapes) • Ensures data integrity. but in a properly designed SAN. • Uses the infrastructure of existing facilities. and availability. compared to optical storage networking.  Enterprises and storage service providers can provide remote vaulting services by using existing IP WAN infrastructures.net Storage Replication Technologies and Interconnects This topic describes storage replication technologies. Inc.0—3-25 Remote backup is a core application for Fibre Channel over IP (FCIP).

• Data replication reduces the recovery time objective.net Data is continuously synchronized across the network: • Data can be mirrored for multiple points of access. • Data replication enables rapid failover to a remote data center and business continuity for critical applications. Local Data Center Remote Data Center WAN Data Replication © 2012 Cisco and/or its affiliates. Data Center Topologies 3-127 . • Data replication reduces the recovery point objective. or HP-Compaq Data Replication Manager (DRM)  Host-based application schemes such as Veritas Volume Replicator (VVR) © 2012 Cisco Systems. IBM Peer-to-Peer Remote Copy (PPRC). Here are some examples of these types of applications:  Array-based replication schemes such as EMC Symmetrix Remote Data Facility (SRDF).certcollecion.0—3-26 The primary type of application for an FCIP implementation is a disk replication application that is used for business continuance or disaster recovery. DCUFD v5. Inc. All rights reserved. Hitachi TrueCopy.

Applications that use synchronous copy replication are very sensitive to latency delays and might be subject to unacceptable performance. This is why synchronous replication is supported only up to a certain distance. .net Synchronous replication: • Data must be written to both arrays before I/O operation is complete • Data in both arrays is always fully synchronized • Distance between sites influences application performance Asynchronous replication: • Data for the remote site is cached and replicated later • Trade-off between performance and business continuity • Can extend over greater distances and use high-latency transport Asynchronous Replication WAN DWDM Synchronous Replication © 2012 Cisco and/or its affiliates. Synchronous Replication In synchronous mode. disk writes are acknowledged before the remote copy is completed. DCUFD v5. All rights reserved.0 © 2012 Cisco Systems. The application response time is shortened because the application does not need to wait for confirmation that the data has been written on the remote location as well.0—3-27 Replication applications can be run in synchronous mode or asynchronous mode.certcollecion. The local array does not acknowledge the data to be written until it has been written to the remote array as well. Asynchronous Replication In asynchronous mode. Inc. an acknowledgment of a disk write is not sent until copying to the remote site is completed. 3-128 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Consequently. The data to the remote storage array is cached and written afterward. the data on both storage arrays is always up to date.

Inc. a 10-km (6. — Write operation: The application is idle. The added idle time can significantly reduce the I/O operations per second (IOPS) that the server can achieve.net Here are some characteristics of latency:  Latency in dark fiber is approximately 5 ns per meter or 5 microsec per kilometer (1 kilometer equals 0. waiting for data to arrive.certcollecion.62 miles): — Therefore. Note © 2012 Cisco Systems. Data Center Topologies 3-129 .  Latency over SONET/SDH is higher because of the added latency of the infrastructure.  Latency over IP networks is much greater because of the added latency of the infrastructure and delays due to TCP/IP processing. waiting for write confirmation before it can proceed.2-mile) fiber channel link can have a 5 * 10 = 50-microsec latency and a 2 * 5 * 10 = 100-microsec round-trip latency.  Latency has a direct impact on application performance: — Read operation: The application is idle.

FCIP. reliability.net • FC Synchronous Replication 1-WR • FCoE • iSCSI (use checksum) 4-OK • FCIP 2-WR WAN 3-OK Alternative: Distributed file system with NFS Asynchronous Replication iSCSI DWDM/fiber   Pseudowires FCIP  VPLS FCIP  IP FCIP  © 2012 Cisco and/or its affiliates.0 © 2012 Cisco Systems. and relative cost. All rights reserved. 2-OK FC 1-WR Transport 2-WR WAN 3-OK DCUFD v5.certcollecion. 3-130 Designing Cisco Data Center Unified Fabric (DCUFD) v5. latency.0—3-29 There are several options for back-end and storage connectivity: Fibre Channel. Fibre Channel over Ethernet (FCoE). Nextgeneration Nexus equipment supports distances for FCoE of a few kilometers. and Internet Small Computer Systems Interface (iSCSI). The figure summarizes SAN extension solutions and compares capabilities like maximum distance. . bandwidth. Inc.

com/go/dci  Overlay Transport Virtualization http://www.net Summary This topic summarizes the primary points that were discussed in this lesson. Inc. Data Center Topologies 3-131 .com/go/otv © 2012 Cisco Systems.cisco. References For additional information.certcollecion.cisco. refer to these resources:  Cisco Data Interconnect http://www.

.0 © 2012 Cisco Systems. Inc.certcollecion.net 3-132 Designing Cisco Data Center Unified Fabric (DCUFD) v5.

virtual appliances. and LISP. including OSPF. All these protocols allow for highly available designs. • To provide high availability on the IP level. DCUFD v5. DCUFD v5. including the Cisco Nexus 1000V switch. IP Services. BGP. Layer 2 MPLS VPN. • Data center interconnection technologies can extend Layer 2 domains between data centers. forming a collapsed core layer. © 2012 Cisco Systems. • The data center core layer interconnects several aggregation blocks with the campus core or enterprise edge networks. without oversubscription.net Module Summary This topic summarizes the primary points that were discussed in this module. VPLS. are installed in this layer. The main activity performed in a data center core is fast packet switching and load balancing between links. and CWDM and DWDM. • The data center aggregation layer aggregates connections from access switches. Inc. which include firewalling and server load balancing. dark fiber. © 2012 Cisco and/or its affiliates. and hardware-assisted switching between virtual machines using Cisco VM-FEX. All rights reserved. Various products and technologies are available for the virtualized access layer. The focus is on technologies such as Cisco Unified Fabric (to save on cabling and equipment) and FEXs for improved ease of management. several routing protocols and technologies are available.0—3-2 Data Center Topologies 3-133 .0—3-1 • The virtual access layer interconnects virtual machines with the physical network. The main technologies in this area are Cisco OTV. This is a foundation for workload mobility in disaster avoidance or disaster recovery scenarios. IP routing protocols and ECMP are used in the core. © 2012 Cisco and/or its affiliates. Some of the best designs involve virtualization with VDCs. • The data center access layer is used to connect servers to the network and is the largest layer considering the number of devices. This layer is typically the boundary between the Layer 2 and Layer 3 network topologies.certcollecion. The aggregation layer can be combined with the core layer for smaller designs. All rights reserved. EIGRP.

0 © 2012 Cisco Systems. . Understanding the role of every component of the data center allows you to design data center networks functionally and optimally. virtual access layer devices are used to provide efficient management for virtual networks inside server virtualization hosts. aggregation. virtual device contexts (VDCs) are used to virtualize equipment. ease of management. and data center interconnections. and utilization of equipment. 3-134 Designing Cisco Data Center Unified Fabric (DCUFD) v5. Inc. access. and data center interconnect solutions are used to provide workload mobility. A number of technologies are used in combination to provide the best efficiency. highly available IP designs. For example. and virtual access layers). Cisco Unified Fabric is used to optimize the number of links and device utilization.certcollecion. you learned about various components of data center networks (the core.net In this module.

Q1) What is the classic division of a hierarchical network? (Source: Designing the Data Center Core Layer Network) A) B) C) D) access—aggregation—core management—policy control—policy enforcement routing—switching—inspection hypervisor—kernel Q2) Under which circumstances would you implement a collapsed core layer? (Source: Designing the Data Center Core Layer Network) Q3) Which option is a reason to implement a Layer 2 core? (Source: Designing the Data Center Core Layer Network) A) B) C) D) Q4) Where is the common Layer 2 termination point in data center networks? (Source: Designing the Data Center Aggregation Layer) A) B) C) D) Q5) data center core layer data center aggregation layer data center access layer data center virtual access layer Which two technologies are used to optimize bandwidth utilization between the access and aggregation layers? (Choose two.certcollecion.) (Source: Designing the Data Center Aggregation Layer) A) B) C) D) E) © 2012 Cisco Systems.) (Source: Designing the Data Center Aggregation Layer) A) B) C) D) Q6) when IP routing cannot scale to the required level when a very large Layer 2 domain is required when the data center is used as a web server farm when performing server load balancing at the access layer per-VLAN RSTP MEC vPC OSPF Which three combinations can be done and make sense with VDCs at the aggregation layer? (Choose three. The correct answers and solutions are found in the Module Self-Check Answer Key. core and aggregation VDC aggregation and storage VDC multiple aggregation VDCs multiple access layer VDCs multiple core layer VDCs Data Center Topologies 3-135 . Inc.net Module Self-Check Use the questions here to review what you learned in this module.

.0 © 2012 Cisco Systems.) (Source: Designing the Data Center Virtualized Access Layer) A) B) C) D) E) Q13) FCoE FCF mode Fibre Channel switch mode FCoE NPV mode domain manager mode What are the two purposes of a virtual access layer? (Choose two. in which Fibre Channel mode must the storage VDC operate? (Source: Designing the Data Center Aggregation Layer) A) B) C) D) Q8) Which FEX does not support multihoming to two managing switches? (Source: Designing the Data Center Access Layer) A) B) C) D) Q9) Cisco Distributed Virtual Switch Cisco Nexus 1000V switch Cisco UCS Pass-through switching with VM-FEX Cisco Adapter-FEX when using Cisco UCS C-Series servers Cisco Virtual Services Appliance Which protocol or solution provides default gateway redundancy? (Source: Designing High Availability) A) B) C) D) 3-136 provides network communication between virtual machines provides firewalling for network edge provides access for the virtual machines to the physical network provides server load-balancing capabilities.certcollecion. Inc. as in physical access layer provides console management access for virtual machines Which three Cisco technologies or solutions are used in the virtual access layer? (Choose three.net Q7) When using a storage VDC on the Cisco Nexus 7000 Series Switch in the aggregation layer.) (Source: Designing the Data Center Virtualized Access Layer) A) B) C) D) E) Q12) loop-free inverted-U wiring triangle-loop wiring square-loop wiring loop-free U wiring What is the recommended mode for an access switch when designing a Cisco Unified Fabric deployment? (Source: Designing the Data Center Access Layer) A) B) C) D) Q11) Cisco Nexus 2248P Cisco Nexus 2148T Cisco Nexus 2224TP Cisco Nexus 2232PP Which kind of wiring needs to be installed to support migration of the access layer from the spanning tree design to the vPC design? (Source: Designing the Data Center Access Layer) A) B) C) D) Q10) pinning mode FCoE NPV mode Fibre Channel switch mode Fibre Channel transparent mode HSRP OSPF RIP NHRP Designing Cisco Data Center Unified Fabric (DCUFD) v5.

Inc.) (Source: Designing High Availability) A) B) C) D) Q17) OSPF totally stubby area OSPF not-so-stubby area OSPF Area 0 OSPF virtual link inter-AS multicast routing Virtual Machine Mobility IPv6 transition low latency server failover Which underlying technology does Cisco OTV require to establish an overlay link between two data centers? (Source: Designing Data Center Interconnects) A) B) C) D) © 2012 Cisco Systems.certcollecion.) (Source: Designing High Availability) A) B) C) D) E) Q16) route redistribution gratuitous ARP RHI transparent mode firewall rendezvous points What are the two possible uses of LISP? (Choose two. IP multicast IP Ethernet DWDM Data Center Topologies 3-137 .net Q14) Which OSPF area type is most suitable for data center usage? (Source: Designing High Availability) A) B) C) D) Q15) Which two mechanisms are required to implement anycast? (Choose two.

Inc. C Q17) B Designing Cisco Data Center Unified Fabric (DCUFD) v5.net Module Self-Check Answer Key 3-138 Q1) A Q2) when a separate core is not needed because of small data center size Q3) B Q4) B Q5) B. C Q7) C Q8) B Q9) B Q10) C Q11) A. C Q12) B.certcollecion. . D Q13) A Q14) B Q15) A. C Q6) A. C Q16) B.0 © 2012 Cisco Systems. B. C.

Sign up to vote on this title
UsefulNot useful

Master Your Semester with Scribd & The New York Times

Special offer: Get 4 months of Scribd and The New York Times for just $1.87 per week!

Master Your Semester with a Special Offer from Scribd & The New York Times