High Availability Campus Recovery Analysis
Cisco Validated Design
May 21, 2008

Contents
Introduction
  Audience
  Document Objectives
Overview
  Summary of Convergence Analysis
  Campus Designs Tested
Testing Procedures
  Test Bed Configuration
  Test Traffic
  Methodology Used to Determine Convergence Times
Layer 3 Core Convergence—Results and Analysis
  Description of the Campus Core
  Advantages of Equal Cost Path Layer 3 Campus Design
  Layer 3 Core Convergence Results—EIGRP and OSPF
    Failure Analysis
    Restoration Analysis
Layer 2 Access with Layer 3 Distribution Convergence—Results and Analysis
  Test Configuration Overview
  Description of the Distribution Building Block
  Configuration 1 Results—HSRP, EIGRP with PVST
    Failure Analysis
    Restoration Analysis
  Configuration 2 Results—HSRP, EIGRP with Rapid-PVST
    Failure Analysis
    Restoration Analysis
  Configuration 3 Results—HSRP, OSPF with Rapid-PVST
    Failure Analysis
    Restoration Analysis
  Configuration 4 Results—GLBP, EIGRP with Rapid-PVST
    Failure Analysis
    Restoration Analysis
  Configuration 5 Results—GLBP, EIGRP, Rapid-PVST with a Layer 2 Loop
    Failure Analysis
    Restoration Analysis
Layer 3 Routed Access with Layer 3 Distribution Convergence—Results and Analysis
  Layer 3 Routed Access Overview
  VLAN Voice 102, 103 and 149
  EIGRP Results
    EIGRP Failure Results
    EIGRP Restoration Results
  OSPF Results
    OSPF Failure Results
    OSPF Restoration Results
Tested Configurations
  Core Switch Configurations
    Core Switch Configuration (EIGRP)
    Core Switch Configuration (OSPF)
  Switch Configurations for Layer 2 Access and Distribution Block
    Distribution 1—Root Bridge and HSRP Primary
    Distribution 2—Secondary Root Bridge and HSRP Standby
    IOS Access Switch (4507/Sup II)
    Cat OS Access Switch (6500/Sup2)
  Switch Configurations for Layer 3 Access and Distribution Block
    Distribution Node EIGRP
    Access Node EIGRP (Redundant Supervisor)
    Distribution Node OSPF
    Access Node OSPF (Redundant Supervisor)

Both small and large enterprise campuses require a highly available and secure, intelligent network infrastructure to support business solutions such as voice, video, wireless, and mission-critical data applications.
To provide such a reliable network infrastructure, the overall system of components that make up the campus must minimize disruptions caused by component failures.
This document also helps operations and other staff understand the expected convergence behavior of an existing production campus network.
This document records and analyzes the observed data flow recovery times after major component failures in the recommended hierarchical campus designs.
The convergence time recorded for each failure case was determined by measuring the duration of traffic loss on the test streams. The worst-case result recorded is the maximum value observed over multiple iterations of each specific test case, and represents an outlier measurement rather than an average convergence time.
The use of the worst-case observation is intended to provide a conservative metric for evaluating the impact of convergence on production networks.
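The measurement approach above can be sketched in a few lines: on a constant-rate test stream (for example, G.711 voice at one RTP packet every 20 ms), the loss duration for one failure event is the number of dropped packets multiplied by the inter-packet gap, and the reported figure is the maximum across iterations. The 20 ms interval and the sample loss counts below are illustrative assumptions, not measured data from the test bed.

```python
# Sketch: derive convergence time from packet loss on a constant-rate stream.
# Assumes a G.711-style stream at one packet per 20 ms (50 pps); the loss
# counts below are illustrative, not results from the actual test bed.

G711_PACKET_INTERVAL_SEC = 0.020  # 20 ms between RTP packets

def convergence_time(lost_packets: int,
                     interval: float = G711_PACKET_INTERVAL_SEC) -> float:
    """Traffic-loss duration for one failure event, in seconds."""
    return lost_packets * interval

def worst_case(loss_counts):
    """Conservative metric: the maximum loss duration over all iterations."""
    return max(convergence_time(n) for n in loss_counts)

# Example: five iterations of the same failure test dropped these packet counts.
iterations = [4, 6, 5, 9, 5]
print(round(worst_case(iterations), 3))  # worst iteration: 9 packets -> 0.18 s
```

The same arithmetic explains why a "less than 200 msec" result corresponds to fewer than ten consecutive lost packets on a 50 pps voice stream.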
Within the structured hierarchical model, the following four basic variations of the distribution building block were tested:
•Layer 2 access using Per VLAN Spanning Tree Plus (PVST+)
•Layer 2 access running Rapid PVST
•Layer 3 access running end-to-end EIGRP
•Layer 3 access running end-to-end Open Shortest Path First (OSPF)
Both component failure and component restoration test cases were completed for each of these four specific distribution designs.
In addition to the four basic distribution configurations tested, two additional tests were run comparing variations on the basic L2 distribution block design.
•IPTV video streams at 1451 kbps (1460-byte payload, RTP-encapsulated MPEG-1).
Testing demonstrated that a campus running Layer 3 access and EIGRP had a maximum loss of less than 200 msec of G.711 voice traffic for any single component failure.
Convergence for a traditional Layer 2 access design using sub-second Hot Standby Router Protocol (HSRP)/Gateway Load Balancing Protocol (GLBP) timers was observed to be sub-second for any component failure.
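The sub-second result for the Layer 2 designs follows directly from first-hop redundancy protocol timer arithmetic: the standby gateway declares the active gateway down only after the hold time passes without a hello, so tuning the hello and hold timers below one second bounds the detection delay below one second as well. The sketch below illustrates this bound; the 250 ms hello / 750 ms hold values are a commonly used sub-second tuning chosen for illustration, not necessarily the exact timers used in this test bed.

```python
# Sketch: worst-case failure-detection bound for a first-hop redundancy
# protocol such as HSRP or GLBP. The standby gateway takes over only after
# the hold timer expires with no hello heard, so detection is bounded by the
# hold time. Timer values are illustrative, not the test-bed configuration.

def missed_hellos(hello_ms: int, hold_ms: int) -> int:
    """How many consecutive hellos must be lost before takeover."""
    return hold_ms // hello_ms

def detection_bound_ms(hello_ms: int, hold_ms: int) -> int:
    """Upper bound on dead-gateway detection, in milliseconds."""
    assert hold_ms > hello_ms, "hold time must exceed the hello interval"
    return hold_ms

# Classic HSRP defaults (3 s hello / 10 s hold) versus a sub-second tuning.
print(detection_bound_ms(3000, 10000))  # 10000 ms: far too slow for voice
print(detection_bound_ms(250, 750))     # 750 ms: sub-second takeover
```

The bound covers only failure detection; total recovery also includes the small additional time for the standby gateway to assume the virtual MAC and IP, which is why tuned timers are a prerequisite for, rather than a guarantee of, sub-second convergence.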
The endpoints attached to each of the 39 access and data center switches were configured to generate the following unicast traffic:
•G.711 voice calls—Real-time Transport Protocol (RTP) streams.
•94 TCP/UDP data stream types emulating call control, bulk data (FTP), mission-critical data (HTTP, TN3270), POP3, HTTP, DNS, and WINS.