ragzilla
Sep 9, 2005
don't ask me, i only work here




lilbean posted:

It's neat, but each power supply runs about half of the components. That's doubling a point of failure and a pretty crappy compromise.

Depends on their environment. If they also implement redundancy at the network or application level (storing the same file on multiple of these boxes) a la waffleimages/LeftHand, it doesn't really matter if one of the boxes dies: just get copies redistributed and bring the node back into the cluster when it's fixed. But at that point you're just reinventing MogileFS/GoogleFS.
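To make the "reinventing MogileFS" point concrete, here's a minimal sketch of application-level replication; the hashing scheme and helper names are illustrative assumptions, not how MogileFS actually does it:

code:
# Minimal sketch of MogileFS-style application-level redundancy: each file
# key is placed on k of the available nodes, so losing one box just means
# re-replicating its copies onto the survivors.
import hashlib

def replica_nodes(key, nodes, k=2):
    """Pick k distinct nodes for a key via highest-random-weight hashing (illustrative)."""
    scored = sorted(nodes,
                    key=lambda n: hashlib.sha1(f"{key}:{n}".encode()).hexdigest(),
                    reverse=True)
    return scored[:k]

def rereplicate(failed, nodes, placement, k=2):
    """When a node dies, recompute placement for its files on the surviving nodes."""
    survivors = [n for n in nodes if n != failed]
    for key, holders in placement.items():
        if failed in holders:
            placement[key] = replica_nodes(key, survivors, k)
    return placement

nodes = ["box1", "box2", "box3", "box4"]
placement = {f"img{i}": replica_nodes(f"img{i}", nodes) for i in range(5)}
print(rereplicate("box2", nodes, placement))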


ragzilla
Sep 9, 2005
don't ask me, i only work here




EnergizerFellow posted:

- 2x 4-port SATA PCIe x4 cards for ~$60/ea, and run one of the SATA backplanes off the motherboard SATA ports. Fewer chips to fail in the box, and it eliminates the very oversaturated PCI SATA card and one of the PCIe controllers.

They mentioned that on the page: the onboard SATA ports have issues with port multipliers. Of course that may not be an issue on a different mobo.

ragzilla
Sep 9, 2005
don't ask me, i only work here




skipdogg posted:

Any Compellent users? Opinions?

edit: it's for a lab environment; our devs want complete control over drive assignment, something our LeftHand P4300 boxes don't let them do.

The software still has some kinks in 'bizarre' failure scenarios (e.g. if a controller loses both connections to a loop because they were both on a failing ASIC on your FC card, it does NOT fail over to the other controller), but overall it works as advertised.

ragzilla
Sep 9, 2005
don't ask me, i only work here




skipdogg posted:




New toys just showed up. Too bad they're not for me :(

Make sure that when they cable up the backend loops, they don't put both ends of a backend loop on the same ASIC of a controller (adjacent ports); try to keep them on separate cards if you have multiple FC cards for the backend.
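Here's a quick sanity-check sketch of that cabling rule. The port-to-ASIC mapping (adjacent port pairs sharing an ASIC) is an assumption for illustration; check your actual card layout:

code:
# Sketch of a backend-loop cabling check, assuming (as an illustration)
# that each FC card exposes ports in adjacent pairs sharing one ASIC.
def asic_of(card, port):
    return (card, port // 2)          # ports 0-1 -> ASIC 0, ports 2-3 -> ASIC 1, etc.

def check_loop(end_a, end_b):
    """Each end is a (card, port) tuple; flag loops whose ends share an ASIC or card."""
    if asic_of(*end_a) == asic_of(*end_b):
        return "BAD: both ends on the same ASIC"
    if end_a[0] == end_b[0]:
        return "OK, but prefer separate cards if you have them"
    return "OK: separate cards"

print(check_loop((0, 0), (0, 1)))   # adjacent ports, same ASIC -> BAD
print(check_loop((0, 0), (1, 0)))   # different cards -> OK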

ragzilla
Sep 9, 2005
don't ask me, i only work here




Anyone here have any experience with CommVault SnapProtect? In particular, can it quiesce and SAN snap raw RDMs presented to VMs?

ragzilla
Sep 9, 2005
don't ask me, i only work here




Misogynist posted:

I dream of a day when all SANs do nothing but dynamically tier blocks across storage and anything more complicated than "I want this much storage with this class of service" all happens silently.

Edit: Basically Isilon I guess

Or 3PAR, or Compellent, or EMC VMAX.

ragzilla
Sep 9, 2005
don't ask me, i only work here




conntrack posted:

The migration of blocks between the tiers takes some time after profiling. Just because you get some amount of SSD storage, it still has to be sized right for the working set. I'd be interested in when the work spent plus dedicated 15k spindles cost more than autotiering and SSDs. From the white papers and the sales guys it's hard to get solid info: "you started saving money just by talking to me now, wink wink".

At least on Compellent the migration is based on snapshots (after a snapshot, the data is 'eligible' to get pushed down to a lower tier), so you'd run a snapshot on that LUN after the batch completes, and the data would start getting pushed down once there's pressure on that tier. Realistically, though, if you have some app that runs overnight and you don't want it using Tier 0, just put it on a LUN that only has Tier 1-3 storage. Just because you can give every LUN Tier 0-3 access doesn't mean you should or would in production.
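A toy model of that snapshot-driven behavior, just to show the mechanics; this is not Compellent's actual algorithm, and the class/tier names are made up:

code:
# Toy model of snapshot-driven tiering: blocks written since the last
# snapshot stay in the top allowed tier; once captured in a snapshot they
# become eligible to migrate down when that tier is under pressure.
class Lun:
    def __init__(self, allowed_tiers):
        self.allowed_tiers = allowed_tiers        # e.g. [1, 2, 3] keeps a LUN off Tier 0
        self.blocks = {}                          # block -> {"tier": n, "snapped": bool}

    def write(self, block):
        self.blocks[block] = {"tier": min(self.allowed_tiers), "snapped": False}

    def snapshot(self):
        for meta in self.blocks.values():
            meta["snapped"] = True                # now eligible for data progression

    def progress(self, pressure_on_tier):
        for meta in self.blocks.values():
            if meta["snapped"] and meta["tier"] == pressure_on_tier:
                lower = [t for t in self.allowed_tiers if t > meta["tier"]]
                if lower:
                    meta["tier"] = lower[0]

batch_lun = Lun(allowed_tiers=[1, 2, 3])          # overnight batch job kept off Tier 0
batch_lun.write("blk42")
batch_lun.snapshot()                              # run right after the batch completes
batch_lun.progress(pressure_on_tier=1)
print(batch_lun.blocks["blk42"])                  # pushed down to Tier 2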

ragzilla fucked around with this message at 14:13 on Mar 3, 2011

ragzilla
Sep 9, 2005
don't ask me, i only work here




three posted:

How do you handle systems that run on monthly cycles, which would have had their data migrated down to the slow tier by then? Disable data progression on those volumes, or create custom profiles?

Remove the slow disk pools from the storage profile for that LUN (I'm fuzzy on the actual names in the GUI, haven't touched our Compellent in over a year).

Data progression also relies on snapshots so you have to snapshot every LUN you want data progression on.

ragzilla
Sep 9, 2005
don't ask me, i only work here




Bluecobra posted:

What controller are you talking about? I saw our SC40 boot up from the serial console before and it looked like it was running some flavor of BSD.

Could be their NAS offering. IIRC their NAS was just WSS (Windows Storage Server) in front of a regular cluster.

ragzilla
Sep 9, 2005
don't ask me, i only work here




bort posted:

You have 220V in your house? :slick:

Do you have to keep that rig air conditioned? That's a half ton right there.

Most power supplies will autorange from 90 to 250 V, so you can hook them up to 240 V single-phase. You just have to find yourself a NEMA 6-15P/6-20P to C13 cord.
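Quick arithmetic on why 240 V is attractive, using an assumed 1200 W rack load (the wattage is just an example, not from the post):

code:
# Same wattage, half the current at 240 V, which is what makes a
# NEMA 6-15/6-20 circuit worth running.
load_watts = 1200
for volts in (120, 240):
    amps = load_watts / volts
    print(f"{load_watts} W at {volts} V -> {amps:.1f} A")
# 1200 W at 120 V -> 10.0 A
# 1200 W at 240 V -> 5.0 A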

ragzilla
Sep 9, 2005
don't ask me, i only work here




KS posted:

Price for four new controllers is $48k. I suspect we're being gouged because we're asking to be released from our VAR (Cambridge Computer, stay the gently caress away) to go with another, and Dell is requiring us to do this one deal with them first since it originated with them, back when we first talked about upgrading to SC40s a year and a half ago.

Four Series 8000 controllers for $48k is a decent price, assuming they're quoting them with 64GB of memory.

ragzilla
Sep 9, 2005
don't ask me, i only work here




Expect maintenance on any piece of enterprise hardware to be ~18% of list price per year, maybe a bit more or less depending on response times.
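Worked out against the $48k quote above (the 15%/22% bounds are just illustrative):

code:
# Rough annual maintenance under the ~18%-of-list rule of thumb.
list_price = 48_000
for rate in (0.15, 0.18, 0.22):          # more/less depending on response time
    print(f"{rate:.0%} of list -> ${list_price * rate:,.0f}/year")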

ragzilla
Sep 9, 2005
don't ask me, i only work here




NippleFloss posted:

I'm not sure what the performance of Nimble's solution is like because they are also still pretty small. One interesting thing they do is provide a special multi-path driver that not only manages paths on the host, but also directs IO requests for specific LBAs to the node that owns that LBA, so there is no back-end traffic to retrieve the data from a partner node and no need for a global cache. I'm not sure how they do that, though (a round-robin assignment of LBAs or blocks of LBAs to each node, perhaps?) and it could cause other issues.

The Nimble CIM/PSP downloads a map of block-to-node mappings and uses that to direct requests to the correct node. If you're not running the provided CIM/PSP, it does iSCSI redirections on the backend.
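For the shape of the idea, here's a sketch of host-side routing from a block-to-node map; the range-based map format and names below are assumptions for illustration, not Nimble's actual on-wire format:

code:
# Sketch of what a path-selection plugin could do with a block->node map:
# send each request straight to the owning node, no backend redirect.
import bisect

class BlockMap:
    def __init__(self, ranges):
        # ranges: sorted list of (starting_lba, owning_node)
        self.starts = [start for start, _ in ranges]
        self.nodes = [node for _, node in ranges]

    def owner(self, lba):
        idx = bisect.bisect_right(self.starts, lba) - 1
        return self.nodes[idx]

bmap = BlockMap([(0, "nodeA"), (1_000_000, "nodeB"), (2_000_000, "nodeA")])

def submit_io(lba):
    # With the plugin installed, go straight to the owner; without it,
    # the array would redirect the iSCSI session on the backend instead.
    return bmap.owner(lba)

print(submit_io(1_500_000))   # -> nodeB, no backend hop needed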

ragzilla
Sep 9, 2005
don't ask me, i only work here




NippleFloss posted:

Typed read cache, then changed it to write through cache, but not well enough. Phone posting.

I'm actually not sure if Nimble cache is write through, or if it caches on first read, but either way the persistent store is HDD and losing an SSD just means the loss of some cache capacity.

The SSDs in a Nimble array are read cache only; writes are coalesced in NVRAM into CASL stripes before being committed to disk.
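A toy model of that write path, based only on the public description above; the stripe size and "hot" flag are arbitrary illustrations, not CASL internals:

code:
# Small writes land in NVRAM, get coalesced into a full stripe, and only
# then are committed to disk; the SSDs act purely as a read cache that
# may optionally be warmed with data considered hot.
STRIPE_SIZE = 4            # writes per stripe, illustration only

nvram, disk, ssd_read_cache = [], [], {}

def write(block_id, data, hot=False):
    nvram.append((block_id, data))
    if len(nvram) >= STRIPE_SIZE:
        stripe = list(nvram)
        nvram.clear()
        disk.append(stripe)                      # one sequential stripe write
        if hot:                                  # optionally warm the read cache
            for bid, d in stripe:
                ssd_read_cache[bid] = d

for i in range(8):
    write(f"blk{i}", b"x" * 4096, hot=(i >= 4))

print(len(disk), "stripes on disk,", len(ssd_read_cache), "blocks in SSD read cache")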

ragzilla
Sep 9, 2005
don't ask me, i only work here




NippleFloss posted:

I know, my question was about whether the cache is populated when new data is written (write-through) or only on reads.

It depends: if the data is considered 'hot', it will place the CASL stripe in flash as well as on disk (according to public whitepapers; I haven't seen any metrics in InfoSight which provide feedback on this).


ragzilla
Sep 9, 2005
don't ask me, i only work here




Zephirus posted:

It is, in my opinion, the absolute worst storage gear I have ever come across.

...

I could go on, but in conclusion, gently caress this poo poo. If you see it, or a sales rep suggests it (if you're in the media/broadcast industry, they might well), run a loving mile.

Something worse than Scale Computing, amazing.
