Wednesday 8 October 2014

Data Center Design for Cloud: Example Overview Part-I

More often than not, I am asked for an “example” of a Data Center design that I have worked on. This write-up covers the low-level design of the infrastructure components for a fictional but typical customer, and is based on a finalized, typical Basis of Design for the model data center.

 

As part of the Data Center Network infrastructure, the following devices will be covered:

·         Two (2) Nexus 7010 switches as Core switches

·         Fourteen (14) Nexus 7010 switches as Aggregation switches (see note below)

·         Two (2) Nexus 5596 switches as Aggregation switches (see note below)

·         Two (2) Catalyst 6509-E switches as Distribution switches

·         Ten (10) Catalyst 3750X switches as Access switches

·         Two (2) Cisco 3945E routers as Perimeter routers

·         Two (2) Cisco 3945E routers as WAN routers

·         Sixteen (16) Catalyst 3750X switches as Server management switches

·         Two (2) Catalyst 3750X switches as WAN edge switches

·         Two (2) Catalyst 3560X switches as Perimeter switches

·         Two (2) Catalyst 3750X switches as DMZ switches

Assumptions and Caveats

·         OTV encapsulation adds an additional 42 bytes of overhead to each frame. This required the links between the two redundant data centers (DC1 and DC2) to be configured for an MTU of at least 1560 bytes. It was assumed that an MTU of 1600 bytes would be available on the links between these two sites.

·         It was assumed that for each optical interconnect, the distance would be well within the maximum distance specified for the optical transceiver ordered for that link.

·         It was assumed that all electrical (UTP) interconnects would be deployed within the standard 1000BASE-T Ethernet distance specification.

·         It was assumed that cabling density would be adequate to support all required optical and electrical interconnects.
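The OTV MTU assumption above translates into a straightforward interface setting on the DCI-facing links. The following is a minimal NX-OS sketch; the interface name is assumed for illustration, and 1600 bytes leaves headroom over the 1560-byte minimum:

```
! Sketch (assumed interface): raise the MTU on the DC1-DC2 interconnect
! so OTV-encapsulated frames (original frame + 42 bytes) fit.
interface Ethernet1/1
  mtu 1600
```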

 

Network Background

The customer currently has one Data Center in their main building, along with two remote branch offices. As part of this project, the customer was building a new Data Center at the DC1 site. This Data Center would be connected to the existing main building as well as to the two standalone remote branches.

 

The following figure shows the high-level design agreed during the pre-sales phase of the project.

 

The main building network consists of multiple network modules, each with its own dedicated function. The following points summarize the modules at a high level:

·         Campus Module – This part of the network provides the infrastructure for main building end user connectivity.

·         Core Module – The Core module interconnects the various main building Data Center modules for seamless data flow.

·         DC Aggregation Module – This module provides the infrastructure for server connectivity.

·         Extranet Module – External partners and other B2B services are securely terminated in this module.

·         WAN Edge Module – The WAN Edge module extends the reachability of the main building to remote branches.

·         Perimeter Module – This module extends Internet service to the main building and also allows the customer to host Internet-facing services from this Data Center.

Network Objectives

Based on the understanding of the requirements, the design should ensure best practices are followed, where applicable, in order to achieve the following:

·         High Availability

·         Scalability

·         Flexibility of extending Layer-2 domains between DC1 and DC2

·         Infrastructure security

Main building Layout

The main building was a new four-storey building that would house both a new Data Center and internal users. The ground floor and the first three floors would house users, while the fourth floor would be dedicated to the Data Center facility.

 

Each floor will have one room where Access switches will be installed. In addition to the user floors, there will be two Customer Service Areas that are separate from the main building but in close physical proximity to it.

Data Center Floor

The fourth floor of the main building would be dedicated for use as a Data Center facility. This floor would have the following rooms:

·         Two Main Distribution Areas/Rooms (MDAs). Each room will have eight (8) racks installed.

·         Two ISP Rooms with each room equipped with one (1) rack.

·         Data Center Hall with a total of eight (8) rows of racks with each row deployed with fourteen (14) racks.

·         Staging room outside the main Data Center Hall. The Staging room will have a total of six (6) racks.

·         A DC Control Room with one (1) rack.

 

Hardware Distribution in Customer Service Areas

 

Each Customer Service Area will have one Cisco Catalyst 3750X switch (WS-C3750X-48PF-S) installed in it. Each switch will be equipped with two 1100-watt power supplies and one dual-port Ten Gigabit Ethernet service module.

 

  Ground Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF on the Ground Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Ten Gigabit Ethernet service module.

 

  First Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF on the First Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Ten Gigabit Ethernet service module.

 

  Second Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF on the Second Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Ten Gigabit Ethernet service module.

 

  Third Floor

There will be two (2) Cisco Catalyst 3750X switches (WS-C3750X-48PF-S) installed in the IDF on the Third Floor. Each switch will be equipped with two 1100-watt power supplies and one dual-port Ten Gigabit Ethernet service module.

 

  Fourth Floor

 

       Main Distribution Area 1 (MDA-1)

The following hardware will be installed in MDA-1:

• One Nexus 7000 Core switch – (N7K-C7010)

• One Catalyst 3750X DMZ switch - (WS-C3750X-48T-S)

• One Catalyst 6500 Distribution switch – (WS-C6509-E)

• One Cisco 3945E WAN edge router – (CISCO3945E-SEC/K9)

• One Catalyst 3750X WAN edge switch – (WS-C3750X-48T-E)

• One Cisco ASA 5585 Perimeter/DMZ firewall – (ASA5585-S10P10XK9)

• One Cisco ASA 5585 WAN Edge firewall – (ASA5585-S20X-K9)

• One Cisco ASA 5585 Core firewall – (ASA5585-S60P60-K9)

• One Cisco ASA 5585 Distribution firewall – (ASA5585-S20X-K9)

 

       Main Distribution Area 2 (MDA-2)

The following hardware will be installed in MDA-2:

• One Nexus 7000 Core switch – (N7K-C7010)

• One Catalyst 3750X DMZ switch - (WS-C3750X-48T-S)

• One Catalyst 6500 Distribution switch – (WS-C6509-E)

• One Cisco 3945E WAN edge router – (CISCO3945E-SEC/K9)

• One Catalyst 3750X WAN edge switch – (WS-C3750X-48T-E)

• One Cisco ASA 5585 Perimeter/DMZ firewall – (ASA5585-S10P10XK9)

• One Cisco ASA 5585 WAN Edge firewall – (ASA5585-S20X-K9)

• One Cisco ASA 5585 Core firewall – (ASA5585-S60P60-K9)

 

ISP Room 1

The following equipment will be installed in ISP Room 1:

• One Cisco 3945E Perimeter router – (CISCO3945E-SEC/K9)

• One Cisco 3560X Perimeter switch – (WS-C3560X-24T-S)

 

ISP Room 2

The following equipment will be installed in ISP Room 2:

• One Cisco 3945E Perimeter router – (CISCO3945E-SEC/K9)

• One Cisco 3560X Perimeter switch – (WS-C3560X-24T-S)

 

Data Center Halls

The Data Center Hall will have the Nexus 7010 and Nexus 5596 access switches installed as explained below:

·         The first seven rows will each have two Nexus 7010 (N7K-C7010) switches installed. There will be one switch installed in Rack 1 and another in Rack 9 of each row.

·         The eighth row will have two Nexus 5596 (N5K-C5596UP-FA) switches. One switch will be installed in Rack 1 while the other will be installed in Rack 9.

·         Each row will have two Catalyst 3750X (WS-C3750X-48T-S) server management switches installed, one in Rack 1 and the other in Rack 9.

Network Elements

A general hardware description of the equipment procured by CBO is given below. For convenience, URLs pointing to the respective product data sheets are provided at the end of each section.

 

Cisco Nexus 7000 9-Slot Chassis Switches

Two (2) Cisco Nexus 7000 9-Slot Chassis Switches will be deployed in the existing main building. Each chassis will be populated with the following modules:

·         2 x N7K-AC-6.0KW

·         2 x N7K-SUP2 (slots 1-2)

·         1 x N7K-M224XP-23L (slot 3)

·         5 x N7K-7009-FAB2

·         1 x N7K-M148GT-11L (slot 4)

·         1 x N7K-M148GS-11L (slot 5)

·         1 x N7K-F248XP-25E (slot 9)

 

Figure: Nexus 7009 Core Switch – Proposed Slot Allocation

 

Table 1 Nexus 7009 Core Switch – Interface Numbering

 

Nexus 7000 9-Slot Chassis

The following figure provides front and rear views of the Cisco Nexus 7000 9-slot chassis; its main features are listed below:

 

Figure:  Nexus 7000 9-Slot Chassis – Front View

 

·         Side-to-side airflow increases the system density in a 14RU footprint, optimizing the use of rack space. The optimized density provides the capability to stack up to three 9-slot chassis in a 42RU rack.

·         Independent variable-speed system and fabric fans provide efficient cooling capacity to the entire system. Fan-tray redundancy features help ensure reliability of the system and support for hot swapping of fan trays.

·         I/O modules, supervisor modules, and fabric modules are accessible from the front. Power supplies and fan trays are accessible from the back.

·         An integrated cable management system is designed to support the cabling requirements of a fully configured system at either or both sides of the switch, allowing outstanding flexibility. All system components can easily be removed with the cabling in place, simplifying maintenance tasks with little disruption.

·         A series of LEDs at the top of the chassis provide a clear summary of the status of the major system components, alerting operators to the need to conduct further investigation. These LEDs report the power supply, fan, fabric, supervisor, and I/O module status.

·         A purpose-built optional front-module door provides protection from accidental interference with both the cabling and modules installed in the system. The transparent front door allows easy observation of cabling and module indicators and status lights without any need to open the doors, reducing the likelihood of faults caused by human interference. The door supports a dual-opening capability for flexible operation and cable installation while fitted. The door can easily be completely removed for both initial cabling and day-to-day management of the system.

 

Nexus 7000 6.0-kW AC Power Supply Module – N7K-AC-6.0KW

The 6.0-kW AC power supply module for the Cisco Nexus 7000 Series is a dual 20A AC input power supply. When both inputs are at high line nominal voltage (220 VAC), the power output is 6000W. Connecting to low line nominal voltage (110 VAC) or using just one input will produce lower output power levels. Table 2 shows the available power output for the input options.

Figure Cisco Nexus 7000 6.0-kW AC Power Supply Module

 

The Cisco Nexus 7000 Series AC power supply module delivers fault tolerance, high efficiency, load sharing, and hot-swappable features. Each Cisco Nexus 7000 Series chassis can accommodate multiple power supplies, providing both chassis-level and facility-level power fault tolerance. Designed to address high-availability requirements, the power supplies incorporate internal component-level monitoring, temperature sensors, and intelligent remote-management capabilities.

The power supply modules are fully hot swappable, helping ensure no system interruption during installation or upgrades, and they are fitted at the back of the Cisco Nexus 7000 Series Switch chassis, allowing installation and removal without disturbing the network cabling on the front. Cisco Nexus 7000 Series systems can operate in four user-configurable power-redundancy modes, summarized in the following table:

 

Power Redundancy Modes

It was recommended that the customer configure the power supply redundancy mode as “redundant”. This could be achieved using the following command: power redundancy-mode redundant
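On NX-OS, the recommendation above might look as follows. This is a sketch; the verification command shown is the standard way to confirm the operating redundancy mode and available power budget:

```
! Configure full power redundancy (protects against both a single
! power-supply failure and the loss of one input source/grid).
power redundancy-mode redundant
! Verify the configured and operational redundancy mode and power budget.
show environment power
```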

Power Calculation of the Nexus 7000 9-Slot Chassis

The power calculator on the CCO can be used to determine how much power a system configuration requires and the available redundancy modes: http://tools.cisco.com/cpc/.

 

The datasheet for the N7K-AC-6.0KW can be found at the following URL:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/Data_Sheet_C78-437761_ps9402_Products_Data_Sheet.html

 

Second-Generation Nexus 7000 Supervisor Modules – N7K-SUP2

The Second-Generation Nexus 7000 Supervisor Modules (Figure 6) scale the control-plane and data-plane services for the Cisco Nexus 7000 Series Switches in scalable data center networks. They are designed to deliver scalable control-plane and management functions for the Cisco Nexus 7000 Series chassis.

 

The supervisor controls the Layer 2 and 3 services, redundancy capabilities, configuration management, status monitoring, power and environmental management, and more. It provides centralized arbitration to the system fabric for all line cards. The fully distributed forwarding architecture allows the supervisor to support transparent upgrades to I/O and fabric modules with greater forwarding capacity. Two supervisors are required for a fully redundant system, with one supervisor module running as the active device and the other in hot-standby mode, providing exceptional high-availability features in data center-class products.

 

Cisco Nexus 7000 Series Supervisor 2 Module

The module is based on a quad-core Intel Xeon processor with 12 GB of memory and scales the control plane by harnessing the flexibility and power of the quad cores.

 

Cisco Nexus 7000 Series Supervisor 2 Module Connectivity and Indicators

 

The Nexus 7000 9-slot chassis would be deployed with redundant supervisor modules, as required for high availability. One supervisor module will be operationally active while the other serves as a hot standby.
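The supervisor redundancy state described above can be verified from the CLI. A minimal NX-OS sketch:

```
! Confirm the redundancy state: one supervisor should report "Active"
! and the other "HA standby" once synchronization is complete.
show system redundancy status
! List installed modules and their status, including both supervisors.
show module
```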

 

The datasheet for the N7K-SUP2 can be found at the following URL:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/data_sheet_c78-710881.html

 

Cisco Nexus 7009 Series Fabric-2 Modules – N7K-7009-FAB2

The Cisco Nexus 7000 Series Fabric-2 Modules for the Cisco Nexus 7000 9-Slot Series chassis are separate fabric modules that provide parallel fabric channels to each I/O and supervisor module slot. Up to five simultaneously active fabric modules work together delivering up to 550 Gbps per slot. The fabric module provides the central switching element for fully distributed forwarding on the I/O modules.

 

Switch fabric scalability is achieved through support for one to five concurrently active fabric modules, allowing performance to increase as needs grow. All fabric modules are connected to all module slots, and each additional fabric module increases the bandwidth available to every module slot, up to the system limit of five modules. The architecture supports lossless fabric failover, with the remaining fabric modules load-balancing the bandwidth across the I/O module slots.
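The per-slot numbers above follow from simple arithmetic: each Fabric-2 module contributes an equal share, and five modules together deliver the quoted 550 Gbps per slot (i.e., 110 Gbps per module per slot). A small Python sketch of that scaling:

```python
# Sketch: per-slot fabric bandwidth on a Nexus 7009 with N active
# Fabric-2 modules. Derived from the 550 Gbps / 5-module figures above.
PER_MODULE_GBPS = 110      # each Fabric-2 module's contribution per slot
MAX_FABRIC_MODULES = 5     # system limit of concurrently active modules

def per_slot_bandwidth(active_modules: int) -> int:
    """Return aggregate per-slot fabric bandwidth in Gbps."""
    if not 1 <= active_modules <= MAX_FABRIC_MODULES:
        raise ValueError("1 to 5 active fabric modules are supported")
    return active_modules * PER_MODULE_GBPS

print(per_slot_bandwidth(5))  # 550, the system maximum quoted above
print(per_slot_bandwidth(4))  # 440, e.g. after one fabric module fails
```

This also illustrates the lossless-failover point: losing one of five modules reduces per-slot capacity from 550 to 440 Gbps rather than disrupting forwarding.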

 

Cisco Nexus 7009 Fabric-2 Module

The datasheet for the N7K-7009-FAB2 can be found at the following URL:

http://www.cisco.com/en/US/prod/collateral/switches/ps9441/ps9402/ps9512/data_sheet_c78-684211_ps9402_Products_Data_Sheet.html

 

A High Level View of a Cloud supporting DC

The following is a high-level logical topology based on Cisco’s Virtualized Multi-tenant Data Center (VMDC) 2.x architecture solution set.

 

To be continued: Part II will cover Unified Computing System integration, topology, and options for DCI (to OTV or not to OTV).

 

 
