
NX-OS CLI for IOS Administrators

IOS has been around for decades and has built up quite a bit of muscle memory in administrators’ fingers. But let’s be honest: IOS isn’t perfect. Cisco revisited the CLI for NX-OS to improve it but kept it similar to IOS, so retraining should be minimal.

Interface Names

Cisco’s IOS devices have always indicated the theoretical port speed in the interface name. For example, the first Ethernet port on a stackable Fast Ethernet switch would be designated FastEthernet1/0/1. NX-OS refers to all Ethernet ports as…Ethernet. Ethernet1/1 would be the first Ethernet port in the first slot of a Nexus 7000 chassis. Personally, this is a welcome change as it abstracts out the speed. Since the speed command can change a port’s operating speed anyway, a speed implied in the name can simply be wrong. Furthermore, many Nexus ports accept either 1GbE or 10GbE transceivers, so any name other than Ethernet would be a misnomer.
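
Port speed is a property you configure (or auto-negotiate), not part of the name. A hedged sketch, assuming a platform and transceiver that support the value (the interface and speed are illustrative):

Router02(config)# interface ethernet 2/1
Router02(config-if)# speed 1000

The interface is still Ethernet2/1 regardless of which speed is configured or negotiated.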

Say Goodbye to the “Do” Command

show commands have always lived in EXEC mode in IOS, and they still do in NX-OS. However, the do command is no longer required to run them from configuration mode. Nothing replaces it either; just type the command as you would in EXEC mode. Hallelujah!

Router02(config)# int eth2/1
Router02(config-if)# ip add 1.1.1.1 255.255.255.0
Router02(config-if)# no shut
Router02(config-if)# show int e2/1 brief
------------------------------------------------------------------------------
Ethernet        VLAN    Type Mode   Status  Reason       Speed      Port
Interface                                                           Ch #
------------------------------------------------------------------------------
Eth2/1          --      eth  routed up      none         1000(D)    --
Router02(config-if)# 

All Hail CIDR Notation

Tired of typing subnet masks in IOS? NX-OS now supports CIDR notation.

Router02(config)# int eth2/1
Router02(config-if)# ip add 1.1.1.1/24
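
The traditional dotted-decimal mask still works too, as the earlier example showed, so either form is accepted:

Router02(config-if)# ip add 1.1.1.1 255.255.255.0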

No More User EXEC Mode

NX-OS does away with User EXEC Mode. If you don’t recall what that means, it is the initial command prompt you get when you log in, which ends with a >.

Router02> 

All User EXEC functions are now merged into Privileged EXEC mode. AKA…

Router02# 

No More Spanning Tree PortFast

PortFast is the Cisco-proprietary term IOS used. The Spanning Tree standards instead speak of “edge ports,” and NX-OS follows that vocabulary, adding a “network port” type as well. Enabling PortFast-equivalent functionality isn’t hard on NX-OS, just a little different.

Router02(config-if)# spanning-tree port type edge

Alternatively, a switch-to-switch connection could be configured using:

Router02(config-if)# spanning-tree port type network
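
A port type can also be set as a global default rather than per interface. A hedged sketch for a switch whose ports are mostly host-facing (verify the command against your NX-OS release):

Router02(config)# spanning-tree port type edge default

Individual interfaces can still override the default with the per-interface commands shown above.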

Modular Functionality

Boot up a brand new Nexus switch and configure an OSPF instance. Try it, I dare you. It won’t work. Features can be enabled or disabled using the feature command.

Router02(config)# feature ospf
LAN_ENTERPRISE_SERVICES_PKG license not installed. ospf feature will be shutdown after grace period of approximately 120 day(s)
Router02(config)#

Besides OSPF, other features which can be optionally enabled include:

  • BGP
  • vPC
  • OTV
  • DHCP

There are others, but this list gives an idea of the functionality that can be toggled on and off. To see what is currently enabled, use the show feature command, as sketched below.
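
A minimal sketch, assuming the license/grace period allows the feature (output abbreviated and illustrative; exact columns vary by release):

Router02(config)# feature bgp
Router02(config)# show feature | include bgp
bgp                   1          enabled

Note the show command runs straight from configuration mode, no do required.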

Conclusion

Many other differences between IOS and NX-OS exist, and I will point them out in future posts. These are the high-level ones, and I’m sure they’re a must-know for the CCNA DC exam.

Nexus Product Line (High-Level)

Cisco has a few switches in the Nexus family. I can’t say how deep CCNA Data Center goes into the product line[1], but it’s worth covering anyway.

Nexus 7000

The flagship of Cisco’s product line is the Nexus 7000 series[2]. These are chassis switches in four physical form factors:

  • 7004 – 4 slot
  • 7009 – 9 slot
  • 7010 – 10 slot
  • 7018 – 18 slot

Each chassis has two slots available for supervisors. Neither slot can hold I/O line cards; they are dedicated to the supervisors.

There are a few components inside the Nexus 7000.

Supervisor

The supervisor is responsible for management of the device as well as control plane processes. In many ways it is similar to a supervisor in a 4500 or 6500, except it is not responsible for data plane traffic. The Nexus 7000 has three supervisor options: Sup1, Sup2, and Sup2E. Cisco offers a nice comparison of the differences on their website, so I’d rather not reiterate it here. In sum, the differences are:

  • Amount of memory available
  • FCoE capabilities
  • VDC capabilities
  • FEX capacity
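
On a live chassis, a quick way to see which supervisor (and which line cards) are installed is show module; treat the exact output as release-dependent:

Router02# show module

Each slot is listed with its module type, model, and status.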

Fabric

The Nexus 7000 owes much of its scalability to its fabric modules. The fabric connects the line cards together and transports traffic from line card to line card. Fab-1 and Fab-2 modules are available, providing 46Gbps and 110Gbps per slot per fabric module respectively. This means if a 7010 has 3 line cards in it, each line card will have 110Gbps of backplane connectivity per Fab-2 installed. The Nexus 7004 does not use fabric modules; the fabric is embedded in the chassis.
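
To put numbers to it: a hypothetical fully populated 7010 (the 7000 chassis has five fabric module slots) gives each line-card slot

5 x 110Gbps = 550Gbps

of fabric bandwidth with Fab-2 modules, while the same math with only three Fab-2s installed yields 330Gbps per slot.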

Line Cards

Line cards are what provide the physical ports. When designing a Nexus 7000 solution, this is where mistakes are most often made. Four types of line cards are available:

  • M1
  • F1
  • F2
  • F2E

M1

The M1 was the original line card, with tons of TCAM memory and ASICs. M1 cards support OTV, MPLS, and license-enabled expanded memory[3]. However, other advanced features such as FCoE and FabricPath are not available on the M series.

M2 line cards are available but are probably newer than what is on the test. See the Nexus 7000 data sheets for more information.

F1

F1 was released as a less expensive line card, but with FCoE and FabricPath support. Latency is lower, as is price. However, F1 cards rely on an M1 for layer 3 forwarding[4]. In other words, if an F1 card must route traffic to another F1 card, the traffic needs to go through an M1 first. If an F1 line card lacks an M1 card to bind to, it is still able to forward traffic, but only at layer 2.

F2

F2 line cards were designed such that they cannot be in the same VDC as an M1 line card[5]. I am not aware of the rationale behind this, but it caused some heartache during design phases.

F2E

F2E line cards were released to fix the compatibility problems of the F2. They support FCoE and FabricPath but still do not support OTV or MPLS, which remain the responsibility of the M series. These are the default line cards I use for traffic in my designs unless an M-series card is required.

Nexus 6000

The Nexus 6000 series is optimized for 10Gbps and 40Gbps traffic. The 6001 offers 48 1/10Gbps SFP/SFP+ ports plus 4 40Gbps uplink ports, whereas the 6004 is more of a chassis, scaling up to 96 40Gbps ports. Any 40Gbps port on the 6000 series can use quad-breakout cables, effectively creating four 10Gbps connections.

FabricPath and FCoE are both supported on all Nexus 6000 switches. Additionally, the series offers line-rate layer 2 and layer 3 throughput.

Nexus 5000

If there’s one switch which, despite its deficiencies, still holds a unique place in the product line, it is the Nexus 5000. Its uniqueness comes from native FibreChannel support. Each port on the 5500UP series can be configured with 1/10Gbps Ethernet SFPs or FC SFPs. Be mindful of the ordering of the ports, but the two protocols can be used on the same switch simultaneously. Very cool.

Unfortunately, layer 3 support is provided by an additional module or daughter card operating at 160Gbps. All traffic, management or otherwise, needs to go through this module. If the module is saturated, traffic may be dropped or corrupted, or the administrator may be unable to reach the management console to solve the problem. I like the 5000 for FC or layer 2 connectivity but steer toward the Nexus 6000 if layer 3 is a requirement.

Nexus 3000

Of all the physical Nexus switches, I have the least design experience with the 3000 series. They’re low-latency, high-performance switches marketed toward financial services and big-data firms. However, they can make nice top-of-rack switches as well.

Nexus 2000

Here is where Cisco got creative. The Nexus 2000 isn’t really a switch at all; it’s more of a remote line card. It doesn’t have much intelligence on board, just enough to do the bare minimum. Instead, all forwarding decisions are made on the upstream Nexus 5000, 6000, or 7000 switch thanks to 802.1BR. The Nexus 2000 makes a very nice top-of-rack “switch” at a low price point. Management is also great since it appears as a line card on the parent switch.

Future posts will go into 802.1BR and uplinking methods of the Nexus 2000 family.

Nexus 1000V

The 1000V stands alone as a hypervisor-based switch, replacing the vSwitch in VMware ESXi. Network administrators can now use their Nexus skills to administer hypervisor switches. A future post will go into the architecture of the 1000V.

Summary

Cisco’s Nexus product line has matured quite a bit over the years. Future posts will go into detail about each one. For now, this will get you started on the Nexus product line.


  1. I haven’t taken the test and people aren’t allowed to talk about test contents anyways.  ↩
  2. Ignore the 7700 and recently announced 9000 for the sake of this discussion.  ↩
  3. Expanded memory is available only on certain models.  ↩
  4. http://www.cisco.com/en/US/docs/solutions/Enterprise/Data_Center/VMDC/2.6/vmdcm1f1wp.html  ↩
  5. https://supportforums.cisco.com/thread/2211563  ↩

Introduction to CCNA Data Center

With my CCNP Route/Switch completed, I am pursuing the CCNA Data Center exam and will go for the CCNP Data Center certificate shortly thereafter. CCNA Data Center is comprised of two exams: 640-911 and 640-916.

640-911 (Introducing Cisco Data Center Networking)

This exam appears to be similar to the CCENT exam but with a focus on the data center instead of general networking. 640-911’s curriculum looks a little sparse, so I’m concerned about what will be on it. I don’t, however, expect this to be too challenging a test.

640-916 (Introducing Cisco Data Center Technologies)

Of the two exams, this is the one I’m looking forward to the most since it covers Unified Fabric (aka Nexus), UCS, and storage. DCICT appears to be the meat-and-potatoes test of the two.

Study Materials

Most Cisco tests have Cisco Press books associated with them. Unfortunately, the CCNA Data Center exams seem to have been forgotten. None of my research on the topic has revealed any books written for the exams. Cisco has compiled a long list of technical videos to get students started. However, my opinion of them is quite mixed.

  • Some presenters speak at very low volume. If you turn your speakers up to compensate, prepare to be deafened by the next presenter recorded at appropriate levels.
  • Video resolution is poor. When CLI examples are shown, the text is too small to read in the window. Maximizing the video creates blurry but mostly readable text.
  • Much of the content is redundant. Redundancy is important when studying for exams, don’t get me wrong. But there were at least two, maybe three, vPC videos covering mostly the same content.

I am told Data Center Virtualization Fundamentals: Understanding Techniques and Designs for Highly Efficient Data Centers with Cisco Nexus, UCS, MDS, and Beyond is good for the exam, although I did not purchase it. Instead I ordered CCNA Data Center – Introducing Cisco Data Center Networking Study Guide: Exam 640-911. I await the book’s arrival and will post a review once I have completed the 640-911 exam. Its complementary book for the 640-916 exam, CCNA Data Center: Introducing Cisco Data Center Technologies Study Guide: Exam 640-916, is expected to be released in April 2014. Unfortunately, I expect to be done with the exam by then.

Summary

I look forward to my studies with the CCNA Data Center exams. Expect some posts in the near future.

Note: All links to Amazon are Amazon Associates links. If you would prefer not to use my Amazon Associate links, simply search for the book’s title on Amazon.

EIGRP Metric Calculation – Exercise 2

Calculate the metric from R1’s Fa0/0 to R2’s Fa0/0. Assume K-values are default.

[Topology diagram: R1 Fa0/0 connected to R2 Fa0/0 on 10.1.1.0/24]

R1#sh int fa0/0
FastEthernet0/0 is up, line protocol is up 
 Hardware is Gt96k FE, address is c400.6413.0000 (bia c400.6413.0000)
 Internet address is 10.1.1.1/24
 MTU 1500 bytes, BW 10000 Kbit/sec, DLY 1000 usec, 
 reliability 255/255, txload 1/255, rxload 1/255
 Encapsulation ARPA, loopback not set
 Keepalive set (10 sec)
 Half-duplex, 10Mb/s, 100BaseTX/FX
R2#sh int f0/0
FastEthernet0/0 is up, line protocol is up 
 Hardware is Gt96k FE, address is c401.6413.0000 (bia c401.6413.0000)
 Internet address is 10.1.1.2/24
 MTU 1500 bytes, BW 100000 Kbit/sec, DLY 1000 usec, 
 reliability 255/255, txload 1/255, rxload 1/255
 Encapsulation ARPA, loopback not set
 Keepalive set (10 sec)
 Half-duplex, 10Mb/s, 100BaseTX/FX

Answer

The trick here is that even though the bandwidth on R2’s Fa0/0 is 100000, the calculation is still based on 10000 because EIGRP uses the minimum bandwidth along the path, not the sender’s bandwidth value.

Bandwidth setting on R1: 10000 Kbps

Delay setting on R1: 1000 usec

With default K-values, the EIGRP metric reduces to ((10,000,000 / minimum bandwidth in Kbps) + (cumulative delay in tens of microseconds)) * 256:

((10,000,000 / 10,000) + (1,000 / 10)) * 256
(1,000 + 100) * 256
1,100 * 256 = 281,600

R1#sh ip eigrp topology 10.1.1.0 255.255.255.0 
IP-EIGRP (AS 1): Topology entry for 10.1.1.0/24
 State is Passive, Query origin flag is 1, 1 Successor(s), FD is 281600
 Routing Descriptor Blocks:
 0.0.0.0 (FastEthernet0/0), from Connected, Send flag is 0x0
 Composite metric is (281600/0), Route is Internal

EIGRP Metric Calculation – Exercise 1

Calculate the metric from R1’s Fa0/0 to R2’s Fa0/0. Assume K-values are default.

[Topology diagram: R1 Fa0/0 connected to R2 Fa0/0 on 10.1.1.0/24]

R1#sh int f0/0
FastEthernet0/0 is up, line protocol is up 
 Hardware is Gt96k FE, address is c400.6413.0000 (bia c400.6413.0000)
 Internet address is 10.1.1.1/24
 MTU 1500 bytes, BW 10000 Kbit/sec, DLY 1000 usec, 
 reliability 255/255, txload 1/255, rxload 1/255
 Encapsulation ARPA, loopback not set
 Keepalive set (10 sec)
 Half-duplex, 10Mb/s, 100BaseTX/FX

R2#sh int f0/0
FastEthernet0/0 is up, line protocol is up 
 Hardware is Gt96k FE, address is c401.6413.0000 (bia c401.6413.0000)
 Internet address is 10.1.1.2/24
 MTU 1500 bytes, BW 10000 Kbit/sec, DLY 1000 usec, 
 reliability 255/255, txload 1/255, rxload 1/255
 Encapsulation ARPA, loopback not set
 Keepalive set (10 sec)
 Half-duplex, 10Mb/s, 100BaseTX/FX

Answer

Bandwidth setting on R1: 10000 Kbps

Delay setting on R1: 1000 usec

With default K-values, the EIGRP metric reduces to ((10,000,000 / minimum bandwidth in Kbps) + (cumulative delay in tens of microseconds)) * 256:

((10,000,000 / 10,000) + (1,000 / 10)) * 256
(1,000 + 100) * 256
1,100 * 256 = 281,600

R1#sh ip eigrp topology 10.1.1.0 255.255.255.0 
IP-EIGRP (AS 1): Topology entry for 10.1.1.0/24
 State is Passive, Query origin flag is 1, 1 Successor(s), FD is 281600
 Routing Descriptor Blocks:
 0.0.0.0 (FastEthernet0/0), from Connected, Send flag is 0x0
 Composite metric is (281600/0), Route is Internal

Prefix List Exercises

Prefix lists are a powerful tool for route filtering, but they can also be a little confusing to learn. Here are some prefix-list exercises to reinforce the knowledge. Answers should be syntactically sound commands using the most specific matching possible.
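
As a refresher, each prefix-list entry names a prefix whose length dictates how many leading bits must match, and the optional ge/le keywords bound which prefix lengths qualify. A quick hypothetical entry (the name DEMO is illustrative):

ip prefix-list DEMO permit 10.0.0.0/8 ge 24 le 28

This matches any route inside 10.0.0.0/8 whose prefix length falls between /24 and /28 inclusive.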

  1. Filter the following routes:

  • 10.1.5.1/24
  • 10.2.2.1/24
  • 10.5.7.1/24

  2. Filter the following routes:

  • 10.1.1.1/24
  • 10.2.1.1/25
  • 10.2.1.128/25
  • 10.100.1.1/30

  3. Filter all but the last route and allow no others:

  • 172.16.2.1/24
  • 172.16.3.1/25
  • 172.16.3.128/25
  • 172.16.4.1/26
  • 172.16.4.64/26
  • 172.16.10.1/30

  4. Filter all routes using one command:

  • 10.1.1.1/16
  • 10.2.1.1/17
  • 10.2.128.1/17
  • 10.10.10.5/30

  5. Filter all /30 routes:

  • 192.168.1.0/24
  • 192.168.2.1/25
  • 192.168.2.128/30
  • 192.168.3.1/30

Answers are below so don’t scroll down too far!

Answers

1.

ip prefix-list 1 deny 10.0.0.0/13 ge 24 le 24
ip prefix-list 1 permit 0.0.0.0/0 le 32

2.

ip prefix-list 2 deny 10.0.0.0/8 ge 24 le 30
ip prefix-list 2 permit 0.0.0.0/0 le 32

3.

ip prefix-list 3 deny 172.16.0.0/20 ge 24 le 26
ip prefix-list 3 permit 172.16.10.0/30

4.

ip prefix-list 4 deny 10.0.0.0/12 ge 16 le 30
ip prefix-list 4 permit 0.0.0.0/0 le 32

5.

ip prefix-list 5 deny 192.168.2.0/23 ge 30 le 30
ip prefix-list 5 permit 0.0.0.0/0 le 32
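
A defined prefix list does nothing until it is applied somewhere. A hedged sketch of one common application, filtering inbound routes in EIGRP with a distribute list (the AS number is illustrative; the list name matches answer 1 above):

router eigrp 1
 distribute-list prefix 1 in

Route maps for BGP policy and redistribution are the other frequent consumers of prefix lists.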

Difference Between Path Cost and Root Path Cost

Spanning Tree uses costs associated with ingress ports to calculate the best path to the root bridge. The root path cost is the cumulative cost from the root to any given switch. Each port has a cost associated with it. On a Cisco switch, the port cost can be altered using:

SW1(config-if)# spanning-tree [vlan vlan-id] cost cost

A third term exists which causes a little confusion: path cost. Path cost is the same thing as port cost, just a different name for it.
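
As a quick illustration, here is a hypothetical tweak lowering the cost on a port for VLAN 10 (the VLAN and cost values are illustrative), followed by verification:

SW1(config-if)# spanning-tree vlan 10 cost 4
SW1(config-if)# end
SW1# show spanning-tree vlan 10

The show output reports each port’s cost along with the root path cost the switch has calculated toward the root bridge.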