|
I have 2k/500 and it pretty much runs off of one (VM), but there's a pair of 'em.
|
# ¿ May 19, 2013 15:05 |
|
|
# ¿ Apr 23, 2024 12:42 |
|
FatCow posted:And acquisitions, gently caress acquisitions.
|
# ¿ Jun 23, 2013 10:03 |
|
CrazyLittle posted:What are you guys doing for 10gig switches? I would buy Arista if I were buying, but I'm about to get swallowed by a big Cisco fish, and all my dreams of open-system multi-vendor hippie flowers will shatter just as they'd shimmered into view... everything is really Finisar optics in sleeves, no?

edit: seconding service tag nonsense (your databases are my problem). One other problem with Dell support: Dell Europe, Asia, Australia and North America all seem to be different companies. That's four vendors sending you invoices, and four sales reps to badger when Irish support is dragging rear end. You'd think that because I have Dell switching, I could get less finger-pointing from Dell Storage, or vice versa. Not really.

edit2: also Network Hardware Resale. Not if you need it now; then you go to your serious VAR. Not if you're loving over a channel partner who sells directly to you. But if you need it kinda soon and a bunch of it cheap (maybe SmartNet it later), they're awesome. Good RMAs, all that. Sales guys love you for buying. bort fucked around with this message at 04:25 on Aug 1, 2013
# ¿ Aug 1, 2013 03:56 |
|
workape posted:Oh man, loving WAAS. I've got a hardon for Riverbed in the worst way. Getting 95%+ optimization on traffic and being able to stage VDI across the WAN?
|
# ¿ Aug 6, 2013 22:19 |
|
It should try to renew in 15 days if your lease is 30.
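That 15-of-30 ratio is the RFC 2131 default: absent server-supplied T1/T2 options (DHCP options 58/59), a client enters RENEWING at 50% of the lease and REBINDING at 87.5%. A minimal sketch of the arithmetic (the helper function is made up for illustration, not from any particular client):

```python
def dhcp_timers(lease_seconds):
    """Default DHCP timers per RFC 2131: T1 (start renewing) at
    0.5 * lease, T2 (start rebinding) at 0.875 * lease."""
    return 0.5 * lease_seconds, 0.875 * lease_seconds

# A 30-day lease: renew attempt at day 15, rebind attempt at day 26.25.
t1, t2 = dhcp_timers(30 * 24 * 3600)
print(t1 / 86400, t2 / 86400)  # 15.0 26.25
```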
|
# ¿ Aug 13, 2013 22:42 |
|
F5 and Palo Alto come to mind, too.
|
# ¿ Aug 22, 2013 14:59 |
|
/\ /\ There are RFC4193 addresses, which begin with fc00::/7, as well as the link-local addresses you're discussing. I think RFC4193 addresses can be routed if you and a peer agree to it, but they'll probably be filtered at the border by ISPs.

ior posted:Cisco WLC. Pet peeve: code:

config ap syslog host global <ip>

Who decided to have APs log to broadcast? bort fucked around with this message at 22:09 on Aug 22, 2013
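For reference, Python's stdlib ipaddress module already knows both ranges, so the ULA-vs-link-local distinction is easy to sanity-check (the fd12:... address is just an example ULA, picked for illustration):

```python
import ipaddress

ula = ipaddress.ip_network("fc00::/7")          # RFC 4193 unique local
link_local = ipaddress.ip_network("fe80::/10")  # RFC 4291 link-local

addr = ipaddress.ip_address("fd12:3456:789a::1")
print(addr in ula)          # True  -- routable between sites by agreement
print(addr in link_local)   # False -- not link-local
print(addr.is_private)      # True  -- expect ISPs to filter it at the border
```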
# ¿ Aug 22, 2013 21:59 |
|
I was drinking a beer and looking at this http://www.opendcim.org before my nap. And yeah, proceed with caution. "I was trying to clean stuff up" doesn't generally satisfy people when your network is down.
|
# ¿ Oct 10, 2013 00:18 |
|
Not completely relevant, since I am perennially trying to justify budget for Shark, but we use Cascade for Net-/Sflow. Recent versions have been challenging. The most recent (10.0.7) Profiler upgrade hung and required us to reset it to defaults and restore from backup (support: 'it happens'). It later crashed hard when a device's clock was off, and sure, I'll take partial blame for that. But that's most unlike a Riverbed product, and I'd never had an issue with a Cascade before. Getting burned by expensive equipment leaves an especially sour taste. e: forgot a word bort fucked around with this message at 20:19 on Oct 16, 2013
# ¿ Oct 16, 2013 20:14 |
|
jwh posted:I had used RiOS boxes before, supposedly for wan opt, but our users were pretty ambivalent about apparent improvement.
|
# ¿ Oct 17, 2013 17:28 |
|
Seriously. I'd tell them I'd pull any configs they need to see, but at that point we're going to have a conversation about why they think they need to see them. Gotta manage upward. If it's really a requirement that people without the skills to configure a router from the CLI need access to my configs, I'd suggest they purchase SolarWinds NCM.
|
# ¿ Nov 28, 2013 16:33 |
|
Argue bitterly and tearfully for colocation. You're likely not in the datacenter business and you probably shouldn't be if you're asking about books on it. Once you're sure capex > opex and the gear has to stay on-site, hire a really solid general contractor who builds datacenters. I don't see that as a DIY project.
|
# ¿ Dec 16, 2013 22:58 |
|
Ask for a facilities person one level above your position.

e: that's still good advice, but I harped on colocating as a non-specialized company here some more, because I sometimes post more than I read. DC work is non-technical, but it can get really broad. Even an EE isn't naturally up to it. You can do something as silly as underestimating your need for door-swing space and totally screw yourself, and you don't find out until your racks are placed.

My experience running DCs at a regular firm: have fun when the cottonwoods jettison their sexual gunk into your AC intake filter. Enjoy sweating on a rooftop cleaning the unit on the Fourth of July. You also get to sweat on the inside when the electrician has to move the feed to your UPS and you hope you have enough battery life, or drive through the snow because some VP needs a system rebooted instead of calling, opening a ticket, and sending them the ticket number. Generator test during Christmas for Sarbanes-Oxley. Finding out the capacitors in your UPS chassis are end-of-life and not available anymore. Enjoy running a datacenter.

e: and when you finally do hit the big time and build a really nice datacenter, you're confronted with CRAC failover problems, sensor failures, generator transfer-switch maintenance, and alarms reporting things like "FIRE CONTROL PREACTION INITIATED" that freak everyone the hell out. The problems are bigger and more expensive when done right, but they don't fail. When you do things cheaply, they fail often and inconveniently.

ee: Bluecobra posted:I would also reserve two cabinets for network gear (one core/distribution switch in each cabinet), patch panels, and cross-connects. Buy racks without using them? Have fun unhooking your PDUs to rack gear, or with your inability to resolve airflow and cabling problems. bort fucked around with this message at 02:23 on Dec 17, 2013
# ¿ Dec 17, 2013 01:26 |
|
sudo rm -rf posted:The latter stuff. I conceptually like the stuff Microsoft showed in a Nanog presentation and a relatively recent Packet Pushers: a leaf-and-spine top of rack system with each rack having a BGP AS and a software-based BGP controller peering with and feeding routes to the switch layer. Nanog PDF Packet Pushers That sounds pretty sick to me, since failure convergence is so potentially quick and tunable and the ability to "drain" traffic from a device or a rack is there.
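A toy sketch of what "a BGP AS per rack" might look like as a numbering scheme. The pod/rack formula and the 65000 base are invented for illustration; the actual requirement in designs like this (see RFC 7938) is just that each top-of-rack layer gets its own private ASN so the controller can peer with, and drain, each rack independently:

```python
def rack_asn(pod, rack, base=65000):
    """Hypothetical scheme: derive a 16-bit private ASN (64512-65534)
    from pod and rack position, one ASN per top-of-rack switch."""
    asn = base + pod * 100 + rack
    if not 64512 <= asn <= 65534:
        raise ValueError("outside 16-bit private ASN range")
    return asn

print(rack_asn(pod=2, rack=7))  # 65207
```

Draining a rack then reduces to having its ASN's BGP speaker withdraw or deprioritize its routes, which is what makes failure convergence so tunable.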
|
# ¿ Dec 17, 2013 02:39 |
|
jwh posted:I also think you should switch to gre with tunnel protection ipsec, and stop mucking with crypto maps
|
# ¿ Dec 17, 2013 23:28 |
|
jwh posted:(Ditch those ASAs)
|
# ¿ Dec 18, 2013 13:52 |
|
I use /22s but I always wonder: how many nodes/subnet is too many?
|
# ¿ Feb 6, 2014 20:47 |
|
I agree the old "200 nodes per segment" rule isn't relevant anymore, but nobody I know would fill a /16 with workstations. /22 seems to be the de facto "big network," but I was just wondering if that's based on anything other than "we outgrow /24s too quickly."
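For concreteness, the usable-host math (total addresses minus network and broadcast) for the sizes being compared:

```python
import ipaddress

def usable_hosts(prefix):
    """Usable host count for an IPv4 subnet: total addresses
    minus the network and broadcast addresses."""
    return ipaddress.ip_network(prefix).num_addresses - 2

for p in ("10.0.0.0/24", "10.0.0.0/22", "10.0.0.0/16"):
    print(p, usable_hosts(p))
# 10.0.0.0/24 254
# 10.0.0.0/22 1022
# 10.0.0.0/16 65534
```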
|
# ¿ Feb 6, 2014 22:55 |
|
All it takes is one good-sized acquisition and your well-designed network is a giant bus wreck.
|
# ¿ Feb 7, 2014 14:15 |
|
http://www.cisco.com/c/en/us/support/docs/field-notices/636/fn63697.html Workaround/Solution
|
# ¿ May 22, 2014 19:23 |
|
|
|
Can you show the number of current CUBE sessions?
|
# ¿ Sep 5, 2014 21:56 |