|
Oh boy. We have over 7,000 5508s in production affected.
|
# ? Feb 10, 2017 03:39 |
|
Prescription Combs posted:Oh boy. We have over 7,000 5508s in production affected. What the unholy gently caress
|
# ? Feb 10, 2017 04:16 |
|
Kazinsal posted:What the unholy gently caress Probably works at a big hosting provider or colo or something, like me, though as far as I've heard we only have a handful of 5508-Xs in use; we mostly pushed people to the 5515-X as our entry level. Whatever the case, godspeed, 5508 guy. Godspeed.
|
# ? Feb 10, 2017 07:18 |
|
Home Depot has a gigantic wifi deployment, maybe that's it. edit: scratch that wifi talk, thought he was referring to the WLC for some reason. Also, there's a rumor that the 6800 will have an early EOL and Nexus will become both a datacenter and campus switch. Sucks for those who bought in. Sepist fucked around with this message at 18:20 on Feb 10, 2017 |
# ? Feb 10, 2017 14:04 |
|
Heard the same from a rep in the federal space a few months ago. Told us that the 6500 and 6800 were getting abandoned sooner rather than later.
|
# ? Feb 10, 2017 15:43 |
|
To be honest if you are a mid-level network engineer why would you EVER have bought into the "6800" in the first place?
|
# ? Feb 10, 2017 17:11 |
|
Kazinsal posted:What the unholy gently caress Dedicated hosting. They're the new 'low-end' high throughput firewall my company offers to customers since the 5510-5550's are EOL.
|
# ? Feb 10, 2017 18:11 |
|
ate poo poo on live tv posted:To be honest if you are a mid-level network engineer why would you EVER have bought into the "6800" in the first place? When does a mid-level engineer ever have say in the purchase? You probably got them for a refresh if Cisco had a sweet bundle price for them at any point
|
# ? Feb 10, 2017 18:19 |
|
Sepist posted:When does a mid-level engineer ever have say in the purchase? Well, when my Cisco SE was pitching the 6800s to refresh our datacenter 6509s, I laughed at him as a mid-level engineer, and we went with Nexus instead. The final say wasn't mine, but my laughter was taken into consideration. For access we went with 4510s, IIRC.
|
# ? Feb 11, 2017 00:15 |
|
Sepist posted:Home depot has a gigantic wifi deployment, maybe that's it. Can confirm.
|
# ? Feb 11, 2017 05:29 |
|
If you were building out a new small datacenter environment, would you go with two independent 10GbE switches and use NSX or something similar to build layer 2 overlays, would you go with a chassis switch like a 9k, or would you do two independent 10GbE switches and use MLAG or vPC? I know leaf-spine and layer 2 overlay is the new hotness, but I don't want to do something unnecessary here. Assume the following:

- 12 VMware hosts (6 for VDI and 6 for server virtualization)
- 2 HA SANs, and 4-5 metro Ethernet carriers
- No more than 25% growth over the next 5 years
- We do a lot of virtual routing already with VyOS/quagga

I know this is a complicated question, I'm just looking for some general thoughts.
|
# ? Feb 11, 2017 19:51 |
|
NSX is a pretty expensive solution for 12 VM hosts and no real need for microsegmentation.
|
# ? Feb 11, 2017 20:15 |
|
psydude posted:NSX is a pretty expensive solution for 12 VM hosts and no real need for microsegmentation. I thought VMware recently dropped the price considerably; is it still pretty expensive even after that?
|
# ? Feb 11, 2017 21:23 |
|
More expensive than just using 9ks like you mentioned, yes. I don't see what benefit you would get from NSX in an environment that small unless you're trying to meet strict security requirements or are hosting multiple subsidiaries' applications and environments that need to be kept separate in ways that the traditional DC network you detailed cannot.
psydude fucked around with this message at 21:49 on Feb 11, 2017 |
# ? Feb 11, 2017 21:47 |
|
psydude posted:More expensive than just using 9ks like you mentioned, yes. I don't see what benefit you would get from NSX in an environment that small unless you're trying to meet strict security requirements or are hosting multiple subsidiaries' applications and environments that need to be kept separate in ways that the traditional DC network you detailed cannot.
|
# ? Feb 11, 2017 22:04 |
|
I love neat next generation networking poo poo as much as the next guy, but sometimes networking is just networking and all you need is a non-blocking 10GbE switch that can do FabricPath or TRILL. Or none of that, because most of the time traditional L2 networking is sufficient.
|
# ? Feb 12, 2017 05:16 |
|
psydude posted:I love neat next generation networking poo poo as much as the next guy, but sometimes networking is just networking and all you need is a non-blocking 10GbE switch that can do FabricPath or TRILL. Or none of that, because most of the time traditional L2 networking is sufficient. Really, you don't even need FabricPath or TRILL. EVPN is supported by pretty much every vendor now and gets you the same sort of capability.
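For reference, an EVPN/VXLAN leaf config is only a handful of stanzas. This is a rough sketch in NX-OS-style syntax; the VLAN/VNI numbers, AS number, and neighbor address are made up for illustration, and exact commands vary by platform and release:

```
! Assumed NX-OS syntax; all identifiers here are hypothetical.
feature bgp
feature nv overlay
feature vn-segment-vlan-based
nv overlay evpn

vlan 10
  vn-segment 10010                   ! map VLAN 10 to VXLAN VNI 10010

interface nve1
  no shutdown
  host-reachability protocol bgp     ! MAC/IP learning via BGP EVPN, not flood-and-learn
  source-interface loopback0
  member vni 10010
    ingress-replication protocol bgp

router bgp 65001
  address-family l2vpn evpn
  neighbor 192.0.2.1
    remote-as 65001
    address-family l2vpn evpn
      send-community extended

evpn
  vni 10010 l2
    rd auto
    route-target import auto
    route-target export auto
```

The point being made above is that this gets you multipath L2 extension over a routed fabric without the proprietary lock-in of FabricPath or the limited vendor support of TRILL.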
|
# ? Feb 12, 2017 06:24 |
|
ate poo poo on live tv posted:To be honest if you are a mid-level network engineer why would you EVER have bought into the "6800" in the first place? The 6800/6500 platform is basically the cornerstone of the enterprise product line. The Nexus code is nowhere near stable or feature-complete enough to be a drop-in replacement. The 6500 was, and still is, the Swiss Army knife of the networking world; it does everything at a reasonable price. If they're discontinuing that product line, I have no idea what's going on at Cisco, but it's bad.
|
# ? Feb 12, 2017 13:00 |
|
abigserve posted:The 6800/6500 platform is basically the cornerstone of the enterprise product line. The Nexus code is nowhere near stable or feature-complete enough to be a drop-in replacement. The 6k's been around since 1999; it's time to let go. They tried to keep it relevant by ditching classic bus and doubling up on fabric channels (hooray, now you have DFCs everywhere, which doubles line card costs), but as a flagship product its days are numbered unless they do a ground-up rebuild (and at that point, just make a new product and be done with it). VSS is inferior to vPC, and the routing BU withdrew their support for the platform when they moved to EZchip and doubled down on the 9k. If they do replace it, I imagine we'll see something built primarily around merchant silicon with a custom ASIC for features, but DC switching has a few years on them in that department.
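To illustrate the vPC-over-VSS preference: the setup looks roughly like this on one Nexus peer (assumed NX-OS syntax; the domain ID, addresses, and port-channel numbers are hypothetical, and the other peer is configured symmetrically):

```
! Assumed NX-OS syntax; identifiers are made up for illustration.
feature vpc
feature lacp

vpc domain 10
  peer-keepalive destination 10.0.0.2 source 10.0.0.1   ! out-of-band heartbeat, typically mgmt VRF
  peer-switch                                           ! both peers present a single STP root
  peer-gateway

interface port-channel1
  switchport mode trunk
  vpc peer-link          ! carries vPC VLANs between the two peers

interface port-channel20
  switchport mode trunk
  vpc 20                 ! downstream device sees one logical port-channel

interface Ethernet1/20
  channel-group 20 mode active
```

The design difference driving the opinion above: VSS collapses the pair into one logical switch with one control plane, while each vPC peer keeps its own independent control plane, so a software fault or upgrade on one peer doesn't take down both.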
|
# ? Feb 12, 2017 18:56 |
|
I am hoping someone can help me identify this cable. I think it's a 10 gigabit cable, but I am looking for a link; it's from Foxconn. Unfortunately their website isn't searchable, and the images I find on other sites don't depict exactly what I have here. http://imgur.com/a/K1QJU
|
# ? Feb 13, 2017 00:11 |
|
QSFP?
|
# ? Feb 13, 2017 00:18 |
|
doomisland posted:QSFP? Correct
|
# ? Feb 13, 2017 00:29 |
|
Looks like SFF-8088 SAS to me. Some stuff uses it to stack.
|
# ? Feb 13, 2017 01:30 |
|
kripes posted:I am hoping someone can help me identify this cable? I think it's a 10 gigabit cable but I am looking for a link; it's from Foxconn. Unfortunately their website isn't searchable and when I find images from other sites they are not depicted exactly as what I have here. Latches look like SFF-8088, as Ants says; SFF-8661 (QSFP+) has retention clips on the sides.
|
# ? Feb 13, 2017 01:48 |
|
Thanks so much!
|
# ? Feb 13, 2017 05:40 |
|
ragzilla posted:The 6k's been around since 1999; it's time to let go. They tried to keep it relevant by ditching classic bus and doubling up on fabric channels (hooray, now you have DFCs everywhere, which doubles line card costs), but as a flagship product its days are numbered unless they do a ground-up rebuild (and at that point, just make a new product and be done with it). VSS is inferior to vPC, and the routing BU withdrew their support for the platform when they moved to EZchip and doubled down on the 9k. If they do replace it, I imagine we'll see something built primarily around merchant silicon with a custom ASIC for features, but DC switching has a few years on them in that department. vPC better than VSS? Whaaaat. If you're just using it as a switch then yeah, the 7k is better, but the 6800/6500 is (or was, at least) far superior as a routing platform, which is what made it so desirable.
|
# ? Feb 13, 2017 11:46 |
|
Ethernet is so ubiquitously deployed that switches only need to do dumb transport. Everything else can be done on hosts.
|
# ? Feb 13, 2017 14:47 |
|
Good luck doing your edge routing on hosts without ASICs for forwarding.
|
# ? Feb 13, 2017 18:36 |
|
abigserve posted:VPC better than VSS? Whaaaat To each his own, but you're still using a switching-oriented platform to do routing and you're stuck on classic IOS to boot. If I wanted a modular router I'd be looking at something like ASR 1000 or 9000.
|
# ? Feb 13, 2017 19:09 |
|
abigserve posted:The 6800/6500 platform is basically the cornerstone of the enterprise product line. The NEXUS code is no where near as stable or feature-complete to be a drop-in replacement. I thought the 4500 was supposed to replace the 6500 for Enterprise LAN Switching. Or was there some kind of inter-BU fuckery that torpedoed that idea?
|
# ? Feb 13, 2017 21:10 |
|
Eletriarnation posted:To each his own, but you're still using a switching-oriented platform to do routing and you're stuck on classic IOS to boot. If I wanted a modular router I'd be looking at something like ASR 1000 or 9000. I agree, but if I'm not mistaken the price of the ASR range is considerably higher. edit: just looked, and yeah, an ASR eclipses the 6800 line in price. An ASR1004 without any linecards or licensing for any of the 10G ports costs over twice as much as a 6840. abigserve fucked around with this message at 01:12 on Feb 14, 2017 |
# ? Feb 13, 2017 21:27 |
|
falz posted:Good luck doing your edge routing on hosts without asic for forwarding. Not disagreeing, that's where Jericho/Dune shines... but it's still a commodity Ethernet chipset. No magic here.
|
# ? Feb 14, 2017 00:32 |
|
Philistines. I use IPTables and Snort and a bunch of custom Perl scripts as my perimeter security because why would I spend my organization's money on Big Networking's solution and a support contract. Check and mate. psydude fucked around with this message at 02:00 on Feb 14, 2017 |
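(For anyone actually tempted by the duct-tape approach: the iptables half at least is only a few lines. A minimal default-deny ruleset in iptables-restore format might look like the sketch below; port 22 is a stand-in for whatever you need reachable, and the Snort and Perl glue is left to the imagination.)

```
# Minimal default-deny example in iptables-restore format.
# Port 22 is a hypothetical placeholder service.
*filter
:INPUT DROP [0:0]
:FORWARD DROP [0:0]
:OUTPUT ACCEPT [0:0]
-A INPUT -i lo -j ACCEPT
-A INPUT -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT
COMMIT
```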
# ? Feb 14, 2017 01:57 |
|
abigserve posted:I agree, but if I'm not mistaken the price of the ASR range is orders of magnitude higher. Sure, the ASRs go a lot higher but there are closer alternatives to be used for this example like an ASR9001. Not that they're directly comparable anyway, the 6840 has more port density, but I was mainly just lamenting using a classic IOS platform at this point.
|
# ? Feb 14, 2017 02:00 |
|
psydude posted:Philistines. I use IPTables and Snort and a bunch of custom Perl scripts as my perimeter security because why would I spend my organization's money on Big Networking's solution and a support contract. Check and mate.
|
# ? Feb 14, 2017 03:59 |
|
I have a small foreign child route all my packets by hand, he's pretty efficient though
|
# ? Feb 14, 2017 05:07 |
|
Sepist posted:I have a small foreign child route all my packets by hand, he's pretty efficient though Possible if you use card stock.
|
# ? Feb 14, 2017 10:21 |
|
falz posted:Good luck doing your edge routing on hosts without asic for forwarding. Good luck running a 6500 in the DFZ doing anything other than vanilla v4.
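The limiting factor being alluded to is FIB TCAM: a full v4 DFZ table plus v6 doesn't fit the default partitioning on a Sup720-3BXL, so you end up carving the TCAM by hand, along these lines (assumed classic IOS syntax; the values are illustrative, are in thousands of routes, and a repartition needs a reload to take effect):

```
! Assumed Sup720-3BXL syntax; exact limits vary by supervisor/PFC.
mls cef maximum-routes ip 768     ! grow the IPv4 share of FIB TCAM...
mls cef maximum-routes ipv6 64    ! ...at the expense of IPv6/MPLS space
```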
|
# ? Feb 17, 2017 01:28 |
|
abigserve posted:VPC better than VSS? Whaaaat For a data center you can do almost everything you'd need on a 7k/9k. We've been doing a lot of Nexus and Arista in the data center, and HP, Brocade, and occasionally Juniper for campus. It's worked out well for us!
|
# ? Feb 17, 2017 03:02 |
|
FatCow posted:Good luck running a 6500 in the DFZ doing anything other than vanilla v4. Challenge... accepted! code:
|
# ? Feb 17, 2017 03:42 |