|
I've had IGMP snooping cause this; you may want to disable it and see if it helps, if it's on.
|
# ? Jan 17, 2011 14:00 |
|
Are all the servers on the same VLAN? Is most traffic server-to-server within the stack, or does most of it leave the 3550s? I assume the topology consists of stacked 3750's uplinked to a daisy-chained 3550 core? If that is the case, I'm thinking you may want to change your topology. Also, you shouldn't be experiencing any packet loss. This tells me your NIC teaming could be set up wrong: the switch is constantly learning the same MAC from two different ports, and when it receives traffic from a different VLAN it doesn't know where to forward it; it may also have to constantly perform an ARP, which takes time. Check the logs on a few of the switches and see what you can see. "show log" Also be a little more clear with your topology; make a diagram in Paint or something.
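To illustrate that MAC-flapping failure mode, here's a toy Python model (not real switch code; the MAC address and port names are made up): a broken team sends the same source MAC alternately into two switch ports, and every frame rewrites the switch's forwarding entry.

```python
# Toy model (not real switch code) of how a broken NIC team makes a
# switch's MAC table "flap": the same server MAC keeps arriving on two
# different ports, and every frame rewrites the table entry.

def learn(mac_table, flap_counts, mac, port):
    """Learn mac on port; count a flap whenever the port changes."""
    prev = mac_table.get(mac)
    if prev is not None and prev != port:
        flap_counts[mac] = flap_counts.get(mac, 0) + 1
    mac_table[mac] = port

mac_table, flap_counts = {}, {}
# A misconfigured team sends frames alternately out both NICs, so the
# switch sees the server's MAC on Gi1/0/1 and Gi1/0/2 in turn.
for i in range(10):
    learn(mac_table, flap_counts, "aabb.cc00.0001",
          "Gi1/0/1" if i % 2 == 0 else "Gi1/0/2")

print(flap_counts["aabb.cc00.0001"])  # 9 -- the port changed on 9 of 10 frames
```

On a real Catalyst this kind of churn is exactly the sort of thing "show log" tends to surface as MAC-flap notifications.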
|
# ? Jan 17, 2011 18:10 |
|
It has been my experience that the HP Teaming doesn't work well across multiple switches. It's worth a shot to see. I have a similar setup as you and most of my servers have a backup. I put one server on one switch, and one on the other. Speaking of HP Network Teaming, has anybody set it up with Etherchannel?
|
# ? Jan 18, 2011 02:58 |
|
A stack of 3750's is logically the same switch though, but you are right, etherchannel would be even better. Cross-switch etherchannel with switches in a stack provides high availability as well as efficient bandwidth use. The problem is that oftentimes 802.3ad LACP is finicky depending on the server OS, as there have been several revisions of the standard and Cisco only supports the current one.
|
# ? Jan 18, 2011 04:14 |
|
Powercrazy posted:A stack of 3750's is logically the same switch though, but you are right, etherchannel would be even better. Cross-switch etherchannel with switches in a stack provides high availability as well as efficient bandwidth use. The problem is that oftentimes 802.3ad LACP is finicky depending on the server OS, as there have been several revisions of the standard and Cisco only supports the current one. So, I'm going to give some anecdotal stories here, but I am firmly against cross-stack portchannels. The logic is this: if one stack member dies, reboots, whatever... those 5 minutes of port and stack inconsistencies will wreak havoc on the STP process. Once the stack member fully rejoins the stack everyone is happy, but those 5 minutes are a mess. Edit: I know you're talking about end-host access-layer etherchannels, which I am with you on. My example relates to dist/core uplink cross-stack portchannels. No bueno. jbusbysack fucked around with this message at 04:39 on Jan 18, 2011 |
# ? Jan 18, 2011 04:24 |
|
I'm pretty sure you can resolve that STP issue you are seeing by using the "spanning-tree backbonefast" command on the physical interfaces on the port channel.
|
# ? Jan 18, 2011 13:08 |
|
Some of this is going above my head; however, Jmdg may be onto something. I have a couple of Linux machines patched into the 3750s; they can ping each other and the Windows servers with >1 ms latency.
|
# ? Jan 18, 2011 14:00 |
|
Just for my own education, when you say "Stack of 3750's" are you referring to the fact that the switches are literally on top of each other? Or are you referring to the fact that they are daisy chained with a network cable for redundancy?
|
# ? Jan 18, 2011 15:08 |
|
It looks like a bit of both. Seems like the 3750s at least are connected by the "proprietary looking" stacking cables, but the 3550s are just daisy chained together.
|
# ? Jan 18, 2011 16:41 |
|
Bardlebee posted:Just for my own education, when you say "Stack of 3750's" are you referring to the fact that the switches are literally on top of each other? Or are you referring to the fact that they are daisy chained with a network cable for redundancy? http://www.cisco.com/en/US/prod/collateral/switches/ps5718/ps5023/prod_white_paper09186a00801b096a.html
|
# ? Jan 18, 2011 17:49 |
|
Badgerpoo posted:...but the 3550s are just daisy chained together. Eh, the old switches had these half-assed stack things. http://www.cisco.com/en/US/products/hw/modules/ps872/products_data_sheet09186a00800a1789.html Amusing story: a bozo department somewhere I worked once saw a whole bunch of 3550s stacked together with those interfaces, so they "stacked" a whole bunch of their switches together with regular 1000BASE-SX GBICs. They were so confused and baffled when nothing worked, due to exceeding the recommended spanning tree network diameter by a whole lot of switches.
|
# ? Jan 18, 2011 18:59 |
|
So I just learned something today. When you have switches that are remote with no out-of-band management, it is probably a good idea to have these commands on them: "errdisable recovery cause all" and "errdisable recovery interval 60". Especially if they are long-range trunks, in this case 60km. Otherwise, when you accidentally span VLANs of different MTU sizes and your trunks go errdisable, you can fix the problem locally and then get back into the switches without having to drive all the way across London. Helpful Tips.
|
# ? Jan 18, 2011 19:12 |
|
Enabling err-disable and then losing something to err-disable without having the appropriate err-disable recovery is a rite of passage. Today you are a man.
|
# ? Jan 18, 2011 19:33 |
|
Apparently errdisable with no recovery is default behavior? Who knew? Also I've done lots of work involving long range links with no OoB and I've never errdisabled anything on the remote end, first time for everything I guess.
|
# ? Jan 18, 2011 19:53 |
|
Yeah, it's kinda... dumb.
|
# ? Jan 18, 2011 20:28 |
|
Heh, we put a whole bunch of errdisable recovery on our network a year or so back. One problem it does cause, however, is a link continually errdisabling, coming back, and then err-disabling again forever until the underlying problem is fixed. This seemed to cause a weird STP issue on our router, making the CPU spike to 100% and causing a whole world of hurt.
|
# ? Jan 18, 2011 20:41 |
|
Does anyone know why, when I try to do a traceroute from my local LAN to another local LAN that is across a VPN connection, it only lists the routers on either side? For example, if I am sending from 192.168.2.0 to 192.168.11.1, it will only list: 192.168.11.1 That's it.
|
# ? Jan 18, 2011 21:54 |
|
the VPN connection is what exactly? IOS with crypto maps?
|
# ? Jan 18, 2011 21:59 |
|
jwh posted:the VPN connection is what exactly? IOS with crypto maps? It's a VPN via IPSec. So, yeah crypto maps I believe. I mean logically, yeah, no other routers really open the packet I don't think, but it does go through other routers.... right?
|
# ? Jan 18, 2011 22:05 |
|
Bardlebee posted:It's a VPN via IPSec. So, yeah crypto maps I believe. It physically traverses them but logically it does not. VPN endpoints appear as direct hops from each other since they are encapsulated/decapped at those two routers you see (the endpoints).
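A sketch of why the middle hops vanish (addresses are hypothetical, and this is only a toy model of the point above): traceroute only hears back from devices that handle the inner, unencapsulated packet, i.e. the two tunnel endpoints, while the transit routers only ever touch the outer IPsec header.

```python
# Toy traceroute over an IPsec tunnel (illustration only; all
# addresses are made up). Transit routers between the endpoints
# decrement the OUTER header's TTL, so the inner packet's TTL -- the
# one traceroute watches -- only expires where it is unencapsulated.

path = [
    ("192.168.2.1", "endpoint"),   # local VPN router: encapsulates
    ("10.0.0.1", "transit"),       # hypothetical ISP core hop
    ("10.0.0.2", "transit"),       # hypothetical ISP core hop
    ("192.168.11.1", "endpoint"),  # remote VPN router: decapsulates
]

def traceroute_hops(path):
    """Return only the routers that would answer a traceroute probe."""
    return [ip for ip, role in path if role == "endpoint"]

print(traceroute_hops(path))  # ['192.168.2.1', '192.168.11.1']
```

Four routers physically forward the packet, but the trace shows two: the tunnel makes the endpoints look directly adjacent.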
|
# ? Jan 18, 2011 22:11 |
|
jbusbysack posted:It physically traverses them but logically it does not. VPN endpoints appear as direct hops from each other since they are encapsulated/decapped at those two routers you see (the endpoints). Ah, ok, I kind of figured this, but wanted a more experienced viewpoint. This leads me to another troubleshooting issue. I supposedly have a T1 line connected between both of my buildings, which are across the city from each other. However, one of my 'sh cdp neighbors' outputs shows there is a 2620XM router connected to it. I can physically see this router. On the other supposed endpoint, there is no such router, and my 1811 connects straight to the internet modem. I say supposed T1 line because when I transfer data from one server to another in these respective local LANs, the speed never gets over 25Kbps, when it should be at worst 500Kbps. I would think this indicates the T1 is not working or it's not even connected. Any tips on what I should be looking for?
|
# ? Jan 18, 2011 22:17 |
|
Powercrazy posted:I'm pretty sure you can resolve that STP issue you are seeing by using the spanning-tree backbonefast command on the physical interfaces on the port channel. Hrm, I'm not particularly sure that would fix the issue. The problem seemed to stem from the inconsistencies in the stacking cable interfaces passing traffic between the two switches... not the physical interfaces that uplinked to the core. Since the boot-up and stack election process changes the logical structuring of them, it seemed like BPDUs kept flowing through in a loop, as the STP process was the one pegging both core switches at 99.9% until the stack process was fully complete. Unsure as to the actual root cause of it, but that seems to be the issue at fault. Ideas?
|
# ? Jan 18, 2011 22:22 |
|
Bardlebee posted:Ah, ok I kind of figured this, but wanted a more experienced viewpoint. This leads me to another troubleshooting issue. That's probably terminating your T1.
|
# ? Jan 18, 2011 22:27 |
|
jwh posted:Maybe it's your 2620XM and it's gone off the reservation? Heh, you know it wouldn't surprise me with these people. No, it's clearly labeled as my ISP's router with a sticker... additionally, I have been told the router just forwards traffic. Question: when you have such a router, such as the 2620XM, forwarding traffic through a T1 line, would you need a specific static IP address? For instance, if Site A (1811 router with the 2620XM attached) and Site B (1811 router with no 2620XM) had external IPs of 1.1.1.1 and 2.2.2.2, respectively, would changing one of those IPs one day screw up that T1? That is where I could imagine the mess-up. It almost just seems to me they did not install a 2620XM to terminate the other end. In which case, we just paid them 500 dollars a month for a T1, a legacy-technology line, for about two years now... for no reason. Oh ho, the corporate emails will be sweet.
|
# ? Jan 18, 2011 22:35 |
|
Well, there could be a lot of different things at work here. It could have been that at the time your company purchased the point-to-point T1, they (your company) didn't have the equipment / expertise / wherewithal / cash money / etc. to buy the necessary router, so your ISP pitched a managed solution (or something like that). On the other hand, it is weird that they would drop their 2620XM off at one location, but not at the other. When you say the other side (the location with only the 1811) connects to an Internet modem, what is that piece of equipment you're describing? You should call your provider and ask them what the specifics of that arrangement are. $500/mo for a T1 port & access isn't bad. I've seen some loops that are like $1,800/mo.
|
# ? Jan 18, 2011 22:44 |
|
jwh posted:Well, there could be a lot of different things at work here. It could have been that at the time your company purchased the point-to-point T1, they (your company) didn't have the equipment / expertise / wherewithal / cash money / etc. to buy the necessary router, so your ISP pitched a managed solution (or something like that). Yeah, the price ain't half bad. Previously we did not have these 1811's; we just had some lovely home router. Heh. The one without the 2620XM goes straight into a modem, so I don't think it goes anywhere to connect to the T1. Additionally, I did a cdp neighbors, but I know it only catches what is directly connected, so that didn't come up with anything. When I do a tracert, it looks like it goes through at least 4-5 routers, which I would imagine should be wrong. It should only go through 2 at max, i.e. the /30s.
|
# ? Jan 18, 2011 22:49 |
|
jwh posted:$500/mo for a T1 port & access isn't bad. I've seen some loops that are like $1,800/mo. T1 pricing is heavily influenced by distance and service area, so $1800 isn't as bonkers as it may sound. Bardlebee posted:When I do a tracert, it looks like it goes through at least 4-5 routers, which I would imagine should be wrong. It should only go through 2 at max, i.e. the /30's Do you know for certain that it's supposed to be a point-to-point T1, and not (effectively) two T1's that are to the telco, routed to each other? CrazyLittle fucked around with this message at 22:53 on Jan 18, 2011 |
# ? Jan 18, 2011 22:51 |
|
CrazyLittle posted:T1 pricing is heavily influenced by distance and service area, so $1800 isn't as bonkers as it may sound. It is within the city... additionally, I don't know if you guys have PBX experience, but we run a Nortel phone system (I know they're dead) and we can transfer calls from this office to Site B. Now I am wondering if that is because of the T1 line, or if this is possible through other means. I have opened a ticket with my ISP on this, so we will see. CrazyLittle posted:Do you know for certain that it's supposed to be a point-to-point T1, and not (effectively) two T1's that are to the telco, routed to each other? I don't know for certain; however... either way I should be getting more than 25KBps when I transfer files... I would think.
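For rough context on that 25 KBps figure, some back-of-the-envelope math (my own numbers, not from the actual circuit: a full T1 payload is 24 DS0 channels at 64 kbps each): 25 KB/s works out to about three channels' worth, which would be consistent with a fractional circuit, or with most channels robbed for voice.

```python
# Back-of-the-envelope T1 math (assumed standard figures, for
# illustration only: 24 DS0 channels of 64 kbps each of payload).

CHANNEL_KBPS = 64
FULL_T1_KBPS = 24 * CHANNEL_KBPS           # 1536 kbps of payload

def kbytes_per_sec(kbps):
    return kbps / 8                        # kilobits -> kilobytes

full_rate = kbytes_per_sec(FULL_T1_KBPS)   # what a full T1 should move
observed = 25                              # KB/s seen in the file copy
channels_implied = observed * 8 / CHANNEL_KBPS

print(full_rate)         # 192.0 KB/s for a full T1
print(channels_implied)  # 3.125 -- roughly 3 of 24 channels' worth
```

So if the copy really tops out at 25 KB/s, the data path is moving roughly an eighth of a full T1, which is why digging up the contracted specs matters.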
|
# ? Jan 18, 2011 22:53 |
|
Bardlebee posted:I don't know for certain, however... either way I should be getting more then 25KBps when I transfer files.... I would think. It could be a fractional circuit, too. If you're truly interested in investigating, you should dig up the contracted specifications for that line like jwh suggested.
|
# ? Jan 18, 2011 22:56 |
|
Bardlebee posted:It is within the city... Sounds like they rob a bunch of the channels and turn those into PRI circuits. XO has a similar model called IPFlex or something, where it dynamically adjusts your bandwidth based upon how many of the 23 channels are used for voice and the rest goes to data.
|
# ? Jan 18, 2011 23:37 |
|
It could be parceled out, sure. There's a lot that could be going on, really. Your best bet is to talk to the provider.
|
# ? Jan 18, 2011 23:50 |
|
jbusbysack posted:Hrm, I'm not particularly sure that would fix the issue. The problem seemed to stem from the inconsistencies in the stacking cable interfaces passing traffic between the two switches... not the physical interfaces that uplinked to the core. Since the boot-up and stack election process changes the logical structuring of them, it seemed like BPDUs kept flowing through in a loop, as the STP process was the one pegging both core switches at 99.9% until the stack process was fully complete. You also want to mirror your two master switches' configs as much as possible. If switch 1 has a cross-stack port channel from port 1, then switch 2 needs that as well. Switch 3 doesn't matter, because in theory it should never be a master unless you have a catastrophic failure. By being clever you should never have downtime, even if you lose a master switch.
|
# ? Jan 19, 2011 00:04 |
|
Powercrazy posted:Stack Election. You need to specify who the master of the stack is and who the backup is; if you don't specify the backup, it will be in limbo until it finally decides, based on some obscure criteria, who the "real" master should be. I should have been more clear. This example is only two switches in the stack, and the issues only occur when the 'dead' switch is brought back online. One switch can die and nobody misses a beat, as the end hosts are cross-switch portchanneled (via LACP); the core switch uplinks, however, were cross-switch portchanneled via PAgP. Both switches have identical configurations, as they were designed with the idea that one can die and nobody would ever know. The issues cropped up when the formerly 'dead' switch was turned back on, and the initial boot-up process is what caused STP to go absolutely nuts until everything was resolved. As for the comment on stack master priority, they're split 15 and 1, so it's not an issue of a tie in master priorities either. It's a fun exercise in thought. I may TAC this tomorrow just to see what the gospel truth turns out to be.
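A minimal sketch of the priority point (toy model only; the real 3750 election has additional tie-breakers beyond configured priority): with priorities split 15 and 1, the higher value always wins, so a tie is impossible.

```python
# Toy stack-master election by configured priority (highest wins).
# The real election falls back to other criteria on a tie; this toy
# only models the priority comparison.

def elect_master(switches):
    """switches: {name: priority}; the highest priority becomes master."""
    return max(switches, key=switches.get)

# Priorities split 15 and 1, as in the stack described above:
print(elect_master({"sw1": 15, "sw2": 1}))  # sw1
```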
|
# ? Jan 19, 2011 03:05 |
|
There is also a procedure for adding a new slave switch to a stack. I've never done it before, but I imagine it's one of those things where, if you do it wrong, you'll see issues. I'd be interested in the "gospel" truth though.
|
# ? Jan 19, 2011 03:28 |
|
Breaking a 3750 stack doesn't cause much harm. You just can't destroy the ring entirely (isolation).
|
# ? Jan 19, 2011 03:37 |
|
jwh posted:Breaking a 3750 stack doesn't cause much harm. You just can't destroy the ring entirely (isolation). Yep, breaking does zero harm; it's putting it back together that causes problems. It's just interesting to me personally, since in trading environments we would always leave off / remotely shut off PDU ports for any device that even blinked or thought twice about not working properly. Turning switches back on outside of outage windows after downing them was a new concept to me. TAC case has been filed; I'll keep the thread posted.
|
# ? Jan 19, 2011 04:13 |
|
I'm surprised you use stacks of 3750s in a trading environment at all. We use 4948's and 6500s with 6700 series line cards.
|
# ? Jan 19, 2011 04:27 |
|
Powercrazy posted:I'm surprised you use stacks of 3750s in a trading environment at all. We use 4948's and 6500s with 6700 series line cards. Think retail commodities application vendor, nothing algorithmic in that situation. The big kids, yes.
|
# ? Jan 19, 2011 05:26 |
|
Hey guys, I wanted to recap some very basic packet behavior in a ping, to see if I have it all right in my head: 1. If PC1 does not have PC2's IP-to-MAC mapping in its ARP cache, it will send out an ARP request with a destination MAC of FFFF.FFFF.FFFF asking for the MAC address of PC2. 2. That ARP request will go to the switch and be flooded out all ports except the port it came in on. 3. If PC2 recognizes its IP address, it sends back an ARP reply with its own source MAC address, and PC1 can then send the Echo Request. What I am wondering, and I know this is pretty fundamental and I should know this, is how does PC1 ping PC2 if it doesn't know its IP address OR MAC address? Is this possible?
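The chain being asked about can be sketched like this (a toy model; the hostnames, addresses, and tables are invented, and this is nothing like a real IP stack): without an IP address there is nothing to ARP for, so the ping can't even start.

```python
# Toy model of the name -> IP -> MAC resolution chain behind a ping.
# All tables and addresses here are made up for illustration.

DNS = {"pc2.example.local": "192.168.1.20"}    # hostname -> IP
NETWORK = {"192.168.1.20": "aabb.cc00.0002"}   # hosts that answer ARP

def resolve_ip(target):
    """Ping needs an IP: use it directly if given one, else ask DNS."""
    if target.count(".") == 3 and target.replace(".", "").isdigit():
        return target
    return DNS.get(target)

def ping(target, arp_cache):
    ip = resolve_ip(target)
    if ip is None:
        return "cannot ping: no IP known"
    if ip not in arp_cache:
        # ARP request broadcast to FFFF.FFFF.FFFF; the switch floods it
        if ip not in NETWORK:
            return "cannot ping: no ARP reply"
        arp_cache[ip] = NETWORK[ip]        # cache the reply for next time
    return f"echo request -> {ip} ({arp_cache[ip]})"

cache = {}
print(ping("pc2.example.local", cache))  # DNS, then ARP, then echo
print(ping("unknown-host", cache))       # no IP at all -> ping impossible
```

With neither an IP nor a resolvable name, the first branch fires: there is no destination to put in the IP header and nothing to ARP for.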
|
# ? Jan 19, 2011 19:57 |
|
Bardlebee posted:What I am wondering, and I know this is pretty fundamental and I should know this, but how does PC1 ping PC2 if it doesn't know its IP address OR MAC Address? Is this possible? I don't think the scenario you describe is possible. You need at least an IP to ping. Unless you mean ping PC2 by its DNS name, in which case the PC will send a DNS request to the DNS server, then use the IP it gets back to conduct an ARP request.
|
# ? Jan 19, 2011 20:04 |