bort
Mar 13, 2003

I have 2k/500 and it pretty much runs off of one (VM), but there's a pair of 'em.

bort
Mar 13, 2003

FatCow posted:

And acquisitions, gently caress acquisitions.
You guys use 10.1, 10.10, and 10.100 also? Fancy that!

bort
Mar 13, 2003

CrazyLittle posted:

What are you guys doing for 10gig switches?
Force10 S4810s. Some of them rock, and a few of them don't and fail the way Dell devices will. Serious pros, though: any Cisco guy can configure them, and when I bought 'em they were half the cost of the Nexus port-to-port. Wire-speed, reversible airflow. Support quality is declining, but they're rockin' switches if you can buy a bunch of them and don't have to call anyone.

I would buy Arista if I were buying, but I'm about to get swallowed by a big Cisco fish, and all my dreams of open-system, multi-vendor hippie flowers will shatter just as they'd shimmered into view...

Everything is really Finisar optics in different sleeves, no?

edit: seconding the service tag nonsense (your databases are my problem :confused:). One other problem with Dell support: Dell Europe, Asia, Australia, and North America all seem to be different companies. That's four vendors sending you invoices, four sales reps to badger when Irish support is dragging rear end, and you'd think that because I have Dell switching, I could get less finger-pointing from Dell Storage or vice versa. Not really.

edit2: also Network Hardware Resale. Not if you need it now; then you go to your serious VAR. Not if you're loving over a channel partner who sells directly to you. But if you need it kinda soon and need a bunch of it cheap (maybe smartnet it later :ssh:), they're awesome. Good RMAs, all that. The sales guys love you for buying.

bort fucked around with this message at 04:25 on Aug 1, 2013

bort
Mar 13, 2003

workape posted:

Oh man, loving WAAS. I've got a hardon for Riverbed in the worst way. Getting 95%+ optimization on traffic and being able to stage VDI across the WAN? :fap:
If I didn't have remote offices angrily calling when a Steelhead isn't working, I would think they were just passing traffic and making pretty graphs. Those things are magic and their claims are so outrageous they don't seem possible.

bort
Mar 13, 2003

It should try to renew in 15 days if your lease is 30.
:spergin:
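
(Quick sketch of where the 15 comes from -- RFC 2131's defaults put T1 at 50% of the lease and T2 at 87.5%. The Python below is just illustrative arithmetic, nothing vendor-specific.)
code:
# RFC 2131 default DHCP timers: the client unicasts a RENEW at T1 (50% of the
# lease) and falls back to a broadcast REBIND at T2 (87.5%).
from datetime import timedelta

def dhcp_timers(lease: timedelta) -> tuple[timedelta, timedelta]:
    t1 = lease * 0.5      # renew with the original server
    t2 = lease * 0.875    # rebind with any server
    return t1, t2

t1, t2 = dhcp_timers(timedelta(days=30))
print(t1)  # 15 days, 0:00:00
print(t2)  # 26 days, 6:00:00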

bort
Mar 13, 2003

F5 and Palo Alto come to mind, too.

bort
Mar 13, 2003

/\ /\ there are RFC 4193 addresses, which begin with fc00::/7, as well as the link-local addresses you're discussing. Link-local addresses aren't routable, but your hosts will likely have global addresses as well. The other thing to remember is that if you subnet smaller than a /64, you will break SLAAC on that subnet. That may not be a problem, but I think it's going to be important in network designs.

I think RFC 4193 addresses can be routed if you and a peer agree to it, but they'll probably be filtered at the border by ISPs.
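
If you want to poke at the difference yourself, Python's stdlib ipaddress module classifies these for you; the sample addresses below are arbitrary (the "global" one is just a public DNS address standing in for a real GUA):
code:
# Classifying the three kinds of addresses mentioned above; sample addresses
# are arbitrary examples.
import ipaddress

samples = {
    "link-local":     "fe80::1",               # fe80::/10, never routed
    "ULA (RFC 4193)": "fd12:3456:789a::1",     # inside fc00::/7, routed only by agreement
    "global unicast": "2001:4860:4860::8888",  # a public address, used only as an example
}

for label, addr in samples.items():
    a = ipaddress.ip_address(addr)
    print(f"{label:15} link_local={a.is_link_local}  global={a.is_global}")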

ior posted:

Cisco WLC.
Haha no doubt.

Pet peeve:
code:
$ ssh wladmin@chi-wisma

(WiSM-slot11-1)
User:
and
config ap syslog host global <ip>

Who decided to have APs log to broadcast?

bort fucked around with this message at 22:09 on Aug 22, 2013

bort
Mar 13, 2003

I was drinking a beer and looking at this http://www.opendcim.org before my nap. And yeah, proceed with caution. "I was trying to clean stuff up" doesn't generally satisfy people when your network is down.

bort
Mar 13, 2003

Not completely relevant, since I am perennially trying to justify budget for Shark, but we use Cascade for NetFlow/sFlow. Recent versions have been challenging. We had the most recent (10.0.7) Profiler upgrade hang and require us to reset it to defaults and restore from backup (support: 'it happens'). It later crashed hard when a device's clock was off -- and sure, I'll take partial blame for that. But that's most unlike a Riverbed product, and I'd never had an issue with a Cascade before. Getting burned by expensive equipment leaves an especially sour taste.

e: forgot a word

bort fucked around with this message at 20:19 on Oct 16, 2013

bort
Mar 13, 2003

jwh posted:

I had used RiOS boxes before, supposedly for wan opt, but our users were pretty ambivalent about apparent improvement.
I think they knock CIFS out of the park and are marginal on everything else.

bort
Mar 13, 2003

Seriously. I'd tell them I'd pull any configs they need to see, but at that point we're going to have a conversation about why they think they need to see them. Gotta manage upward.

If it's really a requirement that people without the skills to configure a router from the CLI need access to my configs, I'd suggest they purchase SolarWinds NCM.
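
For what it's worth, "I'll pull any configs they need" can be a ten-line script instead of handing anyone CLI access. A minimal sketch, assuming a Cisco IOS box and the netmiko library; the hostname and credentials are placeholders:
code:
# Pull a running config for an auditor. Assumes netmiko is installed; the
# device details below are placeholders, not real gear.
from netmiko import ConnectHandler

device = {
    "device_type": "cisco_ios",
    "host": "rtr01.example.net",   # placeholder router
    "username": "readonly",
    "password": "not-a-real-password",
}

conn = ConnectHandler(**device)
config = conn.send_command("show running-config")
conn.disconnect()

with open("rtr01.cfg", "w") as f:
    f.write(config)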

bort
Mar 13, 2003

Argue bitterly and tearfully for colocation. You're likely not in the datacenter business and you probably shouldn't be if you're asking about books on it. Once you're sure capex > opex and the gear has to stay on-site, hire a really solid general contractor who builds datacenters. I don't see that as a DIY project.

bort
Mar 13, 2003

Ask for a facilities person one level above your position. e: that's still good advice, but I harped some more on colocating as a non-specialized company because I sometimes post more than I read. :doh:

DC work is largely non-technical, but it gets really broad. Even an EE isn't naturally up to all of it. You can do something as silly as underestimating your need for door-swing space and totally screw yourself -- and you don't find out until your racks are placed.

My experience running DCs as a regular firm: have fun when the cottonwoods jettison their sexual gunk into your AC intake filter. Enjoy sweating on a rooftop cleaning the unit on the Fourth of July. You also get to sweat on the inside when the electrician has to move the feed to your UPS and you hope you have enough battery life, or when you're driving through the snow because some VP needs some system rebooted instead of opening a ticket and sending you the ticket number. Generator test during Christmas for Sarbanes-Oxley. Finding out some capacitors in your UPS chassis are end-of-life and not available anymore. Enjoy running a datacenter.

e: and when you finally do hit the big time and build a really nice datacenter, you're confronted with CRAC failover problems, sensor failures, generator transfer-switch maintenance, and alarms that report things like "FIRE CONTROL PREACTION INITIATED" and freak everyone the hell out. Done right, the problems are bigger and more expensive, but they don't fail you. Done cheaply, they fail often and inconveniently.

ee:

Bluecobra posted:

I would also reserve two cabinets for network gear (one core/distribution switch in each cabinet), patch panels, and cross-connects.
I can't stress that enough. It's so easy to forget some bulky panels or rear-mounted gear, or that the crappy rails on some server someone bought on the cheap won't rack properly with your plan because of the Dell gear above it. You need to be flexible early on; density only comes with rigorous testing. Plan for more than you need unless your planners are really good at this.

Buy racks without trying them out first? Have fun unhooking your PDUs just to rack gear, or finding out you can't fix your airflow and cabling problems.

bort fucked around with this message at 02:23 on Dec 17, 2013

bort
Mar 13, 2003

sudo rm -rf posted:

The latter stuff.
People will sell you stacking at the ToR or elsewhere, but I'm unconvinced it's such a hot strategy given the software-upgrade implications: if you have one large logical switch, it's difficult to swap the software on it.

I conceptually like the stuff Microsoft showed in a Nanog presentation and on a relatively recent Packet Pushers: a leaf-and-spine top-of-rack design with each rack having its own BGP AS and a software-based BGP controller peering with and feeding routes to the switch layer.
Nanog PDF
Packet Pushers

That sounds pretty sick to me: failure convergence is potentially quick and tunable, and you get the ability to "drain" traffic away from a device or a rack.
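
Just to make the per-rack AS idea concrete (a toy sketch of my own, not Microsoft's actual numbering or templates): give every rack a private 32-bit ASN from the RFC 6996 range and have the controller render the leaf's eBGP stanzas.
code:
# Toy illustration of one-ASN-per-rack numbering and the leaf-side eBGP config
# a controller might render. The ASN choices and template are assumptions.
BASE_ASN = 4200000000   # start of the RFC 6996 private 32-bit ASN range
SPINE_ASN = 4200000500  # hypothetical shared ASN for the spine layer

def rack_asn(rack_number: int) -> int:
    """Each rack (leaf) gets its own private ASN."""
    return BASE_ASN + rack_number

def leaf_bgp_config(rack_number: int, spine_ips: list[str]) -> str:
    """Render a minimal eBGP stanza for one rack's ToR switch."""
    lines = [f"router bgp {rack_asn(rack_number)}"]
    lines += [f"  neighbor {ip} remote-as {SPINE_ASN}" for ip in spine_ips]
    return "\n".join(lines)

print(leaf_bgp_config(12, ["10.0.0.1", "10.0.0.3"]))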

bort
Mar 13, 2003

jwh posted:

I also think you should switch to gre with tunnel protection ipsec, and stop mucking with crypto maps :)
Isn't that six of one, half a dozen of the other?

bort
Mar 13, 2003

jwh posted:

(Ditch those ASAs)
As rarely as this applies to Cisco, it's awfully hard to argue with cheap.

bort
Mar 13, 2003

I use /22s but I always wonder: how many nodes/subnet is too many?

bort
Mar 13, 2003

I agree the old "200 nodes per segment" rule isn't relevant anymore, but nobody I know would fill a /16 with workstations. /22 seems to be the de facto "big network," but I was just wondering if that was based on anything other than "we outgrow /24s too quickly."
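
The arithmetic, for reference (plain IPv4 subnets, minus network and broadcast addresses):
code:
# Usable host counts for the prefixes being argued about.
import ipaddress

for prefix in ("10.0.0.0/24", "10.0.0.0/22", "10.0.0.0/16"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses - 2} usable hosts")
# 10.0.0.0/24: 254 usable hosts
# 10.0.0.0/22: 1022 usable hosts
# 10.0.0.0/16: 65534 usable hosts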

bort
Mar 13, 2003

All it takes is one good-sized acquisition and your well-designed network is a giant bus wreck.

bort
Mar 13, 2003

:xd: http://www.cisco.com/c/en/us/support/docs/field-notices/636/fn63697.html

Workaround/Solution
  • Buy better cables, fucknuts

bort
Mar 13, 2003

Can you show the number of current CUBE sessions?
