
Next-Generation Data Centers
Today’s data center designs are driven primarily by the following needs:
● The need for a higher level of reliability, with minimized downtime for updates and configuration changes:
Once a consolidated architecture is built, it’s critical to keep it up and running with minimal disruption.
● The need to optimize the use of the data center network infrastructure by moving toward a topology in which
no link is kept idle. Legacy topologies are inefficient because Spanning Tree Protocol blocks redundant links
and because active/standby network interface card (NIC) teaming leaves half of the server uplinks unused. This
need is addressed by Layer 2 multipathing technologies such as virtual PortChannels (vPCs).
● The need to optimize computing resources by reducing the rate of growth of physical computing nodes. This
need is addressed by server virtualization.
● The need to reduce the time that it takes to provision new servers. This need is addressed by the ability to
configure server profiles, which can be easily applied to hardware.
● The need to reduce overall power consumption in the data center. This need can be addressed with various
technologies, including unified fabric (which reduces the number of adapters on a given server), server
virtualization, and more power-efficient hardware.
● The need to increase computing power at a lower cost: more and higher-performance computing clouds are
being built to give enterprises a competitive edge.
The data center design drivers just listed call for capabilities such as:
● Architectures capable of supporting a SAN and a LAN on the same network (for power use reduction and
server consolidation).
● Architectures that provide intrinsically lower latency than traditional LAN networks, so that computing
clouds can be built on the same LAN infrastructure as regular transactional applications.
● Architectures that provide the ability to distribute Layer 2 traffic on all available links.
● Simplified cabling: for more efficient airflow, lower power consumption, and lower-cost deployment of
high-bandwidth networks.
● Reduction of management points: it’s important to limit the sprawl of switching points (software switches
in the servers, multiple blade switches, and so on).
All Links Forwarding
The next-generation data center provides the ability to use all links in the LAN topology by taking advantage of
technologies such as virtual PortChannels (vPCs). vPCs enable full cross-sectional bandwidth utilization among LAN
switches, as well as between servers and LAN switches.
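As an illustration, here is a minimal vPC configuration sketch for one of a pair of Cisco Nexus switches running
NX-OS; the domain ID, addresses, and interface numbers are arbitrary examples, and a matching configuration (with
the same vPC numbers) is assumed on the second peer.

    ! Minimal vPC sketch on the first peer (all numbers are examples)
    feature lacp
    feature vpc

    vpc domain 10
      ! Peer keepalive runs out of band, typically over mgmt0
      peer-keepalive destination 10.1.1.2 source 10.1.1.1

    ! Port channel toward the other vPC peer
    interface port-channel 1
      switchport mode trunk
      vpc peer-link

    ! Port channel toward a downstream switch or server; the second peer
    ! must configure the same vPC number for the links to bundle
    interface port-channel 20
      switchport mode trunk
      vpc 20

    interface ethernet 1/20
      switchport mode trunk
      channel-group 20 mode active

With both peers configured this way, a dual-homed device sees a single logical port channel and both uplinks
forward traffic, rather than one being blocked by Spanning Tree Protocol.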
Server Connectivity at 10 Gigabit Ethernet
Most rackable servers today include redundant LAN-on-motherboard (LOM) interfaces for management, an
integrated lights-out (iLO) standards-based port, one or more Gigabit Ethernet interfaces, and redundant host bus
adapters (HBAs). The adoption of 10 Gigabit Ethernet on the server simplifies server configuration by reducing the
number of network adapters and providing enough bandwidth for virtualized servers. The data center design can be
further optimized with the use of Fibre Channel over Ethernet (FCoE) to build a unified fabric.
Cost-effective 10 Gigabit Ethernet connectivity can be achieved by using copper twinax cabling with Small
Form-Factor Pluggable Plus (SFP+) connectors.
A rackable server configured for 10 Gigabit Ethernet connectivity may have an iLO port, a dual LOM, and a dual-port
10 Gigabit Ethernet adapter (for example, a converged network adapter). This adapter would replace multiple
quad-port Gigabit Ethernet adapters and, if it is a converged network adapter (CNA), it would also replace the HBAs.
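The following sketch shows how a unified fabric port might be configured on an FCoE-capable Nexus switch; the
VLAN, VSAN, and interface numbers are arbitrary examples. The 10 Gigabit Ethernet port facing the CNA carries the
data VLANs plus a dedicated FCoE VLAN, and a virtual Fibre Channel (vfc) interface bound to that port presents the
Fibre Channel side of the connection.

    ! Minimal FCoE sketch (VLAN, VSAN, and port numbers are examples)
    feature fcoe

    ! Dedicate a VLAN to FCoE and map it to a VSAN
    vlan 100
      fcoe vsan 100

    ! Server-facing 10 Gigabit Ethernet port: trunk the data VLAN(s)
    ! together with the FCoE VLAN
    interface ethernet 1/1
      switchport mode trunk
      switchport trunk allowed vlan 1,100

    ! Virtual Fibre Channel interface bound to the same physical port
    interface vfc 1
      bind interface ethernet 1/1
      no shutdown

    vsan database
      vsan 100 interface vfc 1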
Fabric Extender
Fabric extender technology simplifies the management of the many LAN switches in the data center by aggregating
them in groups of 10 to 12 under a single management entity. In its current implementation, Cisco Nexus 2000
Series Fabric Extenders provide connectivity across 10 to 12 racks, all managed from a single switching
configuration point, thus combining the benefits of top-of-rack and end-of-row topologies.
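As a sketch of this single point of management, the following NX-OS configuration (with arbitrary FEX and port
numbers) associates a Nexus 2000 Series Fabric Extender with its parent switch; once associated, the fabric
extender’s host ports appear on the parent switch and are configured there like local interfaces.

    ! Minimal fabric extender sketch (FEX and port numbers are examples)
    feature fex

    ! Parent-switch uplink toward the Nexus 2000
    interface ethernet 1/10
      switchport mode fex-fabric
      fex associate 100

    ! After association, the FEX host ports are addressed on the parent
    ! switch as ethernet 100/1/x
    interface ethernet 100/1/1
      switchport access vlan 10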