optikalus
Apr 17, 2008

Mierdaan posted:

Thanks for the thread, 1000101.

Anyone out there using Dell's MD3000i box? We're just getting to the point where we need something low-end, due to increasing interest in virtualization and an unwillingness on my part to replace an ancient ProLiant ML350 file server with another traditional file server. We don't have anything terribly IOPS-intensive we'd be putting on it; probably just Exchange transaction logs, SQL transaction logs + DB for a 200-person company, so I don't think iSCSI performance issues are worth worrying about for us.

It's a 15 spindle box, so we're thinking about carving it up thusly:

1) RAID 10, 4x 300GB 15K RPM SAS drives: SQL DB
2) RAID 10, 4x 146GB 15K RPM SAS drives: SQL transaction logs
3) RAID 1, 2x 73GB 15K RPM SAS drives: Exchange transaction logs
4) RAID 5, 4x 450GB 15K RPM SAS drives: VMimages, light file server use
(with 1 spare drive for whatever RAID set we decide most needs it)

That'd take care of our immediate needs and give us some room to expand with additional MD1000 shelves (2 can be added to the MD3000i, though the IO doesn't scale at all). We're a small shop and have no experience with SANs, so I could definitely use some feedback on this idea.


I've been looking at the MD3000i as well, mainly because it is the only iSCSI filer that Dell sells that doesn't use insanely expensive disks.

On paper it looks quite good; however, some of the wording is a bit confusing. For example, they claim more performance by adding a secondary controller, and cache only gets mentioned for the dual-controller setup. I searched and searched and could not find any mention of cache per controller, only for the dual-controller configuration, so you'll probably want that second controller just in case.

Those 15K disks should yield a healthy 170 IOPS per disk (vs. ~130 for 10K), so that would be ideal for a database. If your database can live on 680 IOPS, it's probably good enough.

That extra drive should be used as a universal hot spare, though.

optikalus
Apr 17, 2008

Mierdaan posted:

Have you looked at any vendors outside of Dell? The reseller we work with is pretty big on Dell, so I don't know how biased they are; they're claiming wondrous things about the 3000i.

Not really, as I don't have the capital to plop down $25k unfinanced on a filer (Dell offers $1 buy-out leasing terms). HP has a similar product, but I dislike HP for various reasons and I didn't immediately see any financing options.

I would love an EMC or NetApp, but it's just out of my reach at the moment.

Actually, I did have an EMC IP4700 for a few days -- it took a 4' drop off my cart on the way to my car. Flattened. Oops.

optikalus
Apr 17, 2008

dexter posted:

My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/Centos and Windows 2003/2008.

The PERC4/DC doesn't even play nicely with RHEL. I found that out the hard way with my PV220S. I could transfer enough data to fill the card's cache at speed, but any further writes were dog slow, like 10MB/s or less (and the initial write was only 90MB/s to begin with).

The Adaptec I replaced my PERC with can max out at 300MB/s reads and something like 200MB/s writes.

LSI can suck it.

optikalus
Apr 17, 2008

Catch 22 posted:

I collected some IOPS data for my environment, and we are looking at an average of 300, top end 600 IOPS, excluding when my backup is running. I am looking at an 8TB Equallogic SATA SAN now (3000 IOPS), and it's $50K+, or $60K+ for SAS.

Mind you this is SATA, and I assume you are talking about SCSI/SAS high-end enterprise stuff. You say 30K for comparable specs? What am I getting hosed on, or what are you leaving out?

I was thinking 30K when I started this trek and I don't know what happened now that I hold this quote in my hands.

I'm curious how you're calculating 3000 IOPS for the Equallogic. 7200 RPM SATA drives yield about 80 IOPS per spindle, so you would need 38 drives to reach 3000; you'd have to fill the box with 15K RPM drives to get anywhere near that.
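If you want to sanity-check a vendor's IOPS claim yourself, the arithmetic is a one-liner. A rough sketch in Python, using the same rule-of-thumb per-spindle numbers (generic figures for small random I/O, not measurements of any particular drive, and ignoring controller cache entirely):

import math

# rough per-spindle IOPS for small random I/O; real drives vary
IOPS_PER_SPINDLE = {"7200rpm_sata": 80, "10k_sas": 120, "15k_sas": 180}

def spindles_needed(target_iops, drive_type):
    """Drives required to hit an aggregate IOPS target, ignoring RAID write penalty and cache."""
    return math.ceil(target_iops / IOPS_PER_SPINDLE[drive_type])

print(spindles_needed(3000, "7200rpm_sata"))   # 38
print(spindles_needed(3000, "15k_sas"))        # 17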

I can't imagine that the Equallogic is doing that fancy Pillar Data thing where they split up the disk and put the 'faster' data on the outer edge, but that'd be neat.

optikalus
Apr 17, 2008

Wicaeed posted:

Also, how hard is iSCSI to set up on a Linux computer, especially if I am not that familiar with Linux?

If you can burn an ISO, you can install FreeNAS, OpenFiler, or a few other distros. My "SAN" is just a PowerEdge attached to a PowerVault 220S running CentOS 5.2 and iscsitarget. I'm not even sure whether it does any I/O caching, but performance is excellent. I'd recommend OpenFiler over FreeNAS, as FreeBSD's NFS implementation has been broken for about 150 years and they've given up on trying to make it right. I've got no experience with their iSCSI target daemon, but I wouldn't trust it.
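For the record, the iscsitarget side of that CentOS box is only a few lines of config. From memory, a minimal /etc/ietd.conf looks roughly like this (the IQN and device path are placeholders; check the ietd.conf man page for your version):

# export one block device as a single LUN
Target iqn.2008-04.com.example:storage.lun0
        Lun 0 Path=/dev/sdb,Type=blockio

Start ietd, point your initiator at the box on port 3260, and you're done; Type=fileio against a flat file works too if you don't want to dedicate a whole device.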

optikalus
Apr 17, 2008
Well, my "SAN" has just turned itself into a pile of crap.

I lost two drives and the thing is still running thanks to the hot spare and RAID5, but obviously if I lose one more drive I'm totally F'd. I overnighted two new drives to the DC, which are supposed to be identical to the ones in there, but the controller detects them as being just a few MB smaller, so it won't use them.

I have two options: I can bring up a terabyte storage box, copy all the data to it, then steal my NAS's IP address and run everything there for a little while as I rebuild the array with the smaller drive size; or I can invest in some real hardware.

I've always wanted a NetApp, but they're not answering their phone and EMC is going to get back to me by Monday.

We've already discussed the MD3000i and the general consensus is that it is junk, but I don't recall seeing why. How does it compare against the HP MSA 2000i?

How does the EVA4100 compare to the EMC and NetApp boxes?

My upper limit is probably around $20k, so recommendations in that range would be appreciated.

optikalus
Apr 17, 2008

H110Hawk posted:

What is your "SAN"? What brand+part number hard disks did you have in it? Did you order the exact same model disk, down to the revision, or a "compatible" one?

Guaranteed sector count is something you need to pay attention to in the future. It is a good idea to carve only 95% of your disks into your array; this lets you use the top end of them as a fudge factor in case you ever have to switch disk manufacturers.

How long between disk failures? Were you able to gank the old disk before the second one failed?

I quote "SAN" because it isn't really a dedicated SAN at all, just a RAID box with SAN protocols running on it.

It's just a Dell PowerVault 220S. I got a /great/ deal on it, but it turns out they filled it with white-label Worlddisk drives (factory-refurb Fujitsu MAW3147NCs). When I received it, two drives already had SMART failures, so I replaced them with two new Worlddisk drives, all good. I ordered 4 new Fujitsu-labeled drives to have on hand just in case. When the two drives failed, the hot spare luckily finished rebuilding just before the 2nd one died, so I didn't lose anything. I shipped up the 4 Fujitsu drives and the NOC replaced them, which is when I found my dilemma.

I think at this point I'm just going to head up there on Monday with my old SATA NAS server that I decommissioned and have yet to part out / sell, mirror the LUNs off the degraded RAID onto the SATAs, rebuild the PV220 array, then copy everything back. Fun.

I'm going to check to see if my Adaptec 2130s will let me carve out a custom % of the disks to prevent this in the future -- if so, that'd be ideal as I'm sure I'll be replacing the rest of those lovely Worlddisks in the future.

There's also always the possibility that I get that Hitachi posted above, as that seems like an incredible deal, but I don't trust leaving the array degraded any longer than I have to -- it's already been too long, heh.

optikalus
Apr 17, 2008

BonoMan posted:

What about Qnap? I haven't heard crappy things and we're looking at this:

http://www.newegg.com/Product/Product.aspx?Item=N82E16822107023

Which seems decent enough and has 8 bays which is nice.

Any thoughts?

Looks like a cheaper version of the Adaptec SnapAppliance, but the SnapAppliance has a decent OS and has proven reliable (ours has a few years of uptime). I wouldn't use it for anything but archive storage, though.

optikalus
Apr 17, 2008

BonoMan posted:

Huh. Well 2500 is our budget and I can't find good pricing info on SnapAppliance anywhere. We're looking at 6TB storage.

Well, $2100 + tax and shipping doesn't leave you much room for drives, and I'd heavily recommend RAID6 for SATA, so eight 1TB drives (two of them go to parity, which is what gets you the 6TB you're after).

optikalus
Apr 17, 2008

BonoMan posted:

Yeah there might be a little bit of flux we'll have to see. What's pricing like on SnapAppliances? I realize that's kind of a vague question, but any ideas?

Thanks for the advice!

Looks like Adaptec sold it to Overland, and I can't find any current pricing. I remember them running about $5k for an 8TB box. At that price, you might as well look at Hitachi as well.

optikalus
Apr 17, 2008

da sponge posted:

Has anyone heard of backblaze? Seems like a personal backup service, but their blog post on their infrastructure is pretty cool - http://blog.backblaze.com/2009/09/01/petabytes-on-a-budget-how-to-build-cheap-cloud-storage/

Someone posted that blog entry to my forums a few days ago. I see several issues with it, mainly cooling and power. Only 6 fans to cool 45 drives? Those are going to bake.

The 14A spike at power-up is going to cause major issues, too. Real RAID hardware would use delayed (staggered) spin-up on all those drives.

I have delayed start enabled on the Areca RAID cards in 6 SuperMicro SuperServers with 8x 250GB SATA drives each, and I still have to be very careful powering them on, otherwise I'll pop my cabinet's breaker :/ I just run a few as cold spares now.

If that setup had a real RAID adapter (not fakeRAID), or at least something like ZFS, I'd give it props. Doing fakeRAID 6 is just asking for failures. I played with fakeRAID on BSD and RHEL for years before writing both implementations off for production use. I personally wouldn't bother with RAID at all and would just set the drives up as JBOD. My reasoning is that it's a backup service -- if a drive fails, yes, that backup is gone, but the client should be able to see that the checksum is different and resync.
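To illustrate the JBOD-plus-checksums idea (purely a sketch -- I have no idea how Backblaze's client actually works, and resync/upload here are hypothetical stand-ins):

import hashlib

def sha256_of(path, chunk=1024 * 1024):
    """Hash a file in chunks so large backups don't get slurped into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(chunk), b""):
            h.update(block)
    return h.hexdigest()

def resync(paths, remote_checksums, upload):
    """Re-upload anything whose server-side checksum no longer matches the local copy."""
    for path in paths:
        if sha256_of(path) != remote_checksums.get(path):
            upload(path)   # the drive holding that copy died or rotted; push it again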

I wanted to build something similar, but with systems instead of drives. I even built a prototype, but power stability was a problem. However, I had /much/ more airflow, and no drives (netboot).

Edit: also, does anyone remember that whitepaper arguing that once SATA drives hit 2TB, a RAID rebuild is almost guaranteed to fail because the amount of data read during the rebuild exceeds what the drive's unrecoverable bit error rate can cover?
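The arithmetic behind that claim is easy to reproduce. A quick sketch, assuming the commonly quoted 1-in-10^14 unrecoverable read error rate for consumer SATA (enterprise drives are usually rated an order of magnitude better):

def rebuild_ure_probability(tb_read, bit_error_rate=1e-14):
    """Chance of hitting at least one unrecoverable read error while reading tb_read terabytes."""
    bits = tb_read * 1e12 * 8
    return 1 - (1 - bit_error_rate) ** bits

# RAID5 of 7x 2TB: a rebuild has to read the 6 surviving drives end to end
print(rebuild_ure_probability(12))   # ~0.62

Not quite "guaranteed to fail", but ugly enough that RAID6 (or smaller drives) starts looking mandatory.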


optikalus
Apr 17, 2008

H110Hawk posted:

I think you're wrong here; fan count is a lot less important than the CFM forced over the disks. Sure, the disks may be hotter than in a 2U server with 4 disks in it, but they will be consistently the same temperature. The disks will probably suffer some premature failure, but that is the whole point of RAID. Get a cheapo Seagate AS disk with a 5-year warranty and just replace them as they fail.

The way the disks are laid out, the drives in the middle rows will no doubt run several degrees hotter than the drives next to the fans. Air also has very poor thermal conductivity, so having such a small distance between the drives means:

1) minimum air contact to the drive surface
2) maximum radiant heat transfer between drives

The drive rails on many servers actually act as heatsinks as well, to dissipate heat to the chassis. There are no such heatsinks in this chassis.

I'd love to see a plot of the temperatures of the disks vs. location in the chassis. Even in my SuperMicros, the 8th disk consistently runs 3 degrees C hotter than all the other drives:

Drive temperatures (°C), from four of the servers:

           HDD1  HDD2  HDD3  HDD4  HDD5  HDD6  HDD7  HDD8
Server 1:   34    35    33    33    35    36    36    39
Server 2:   30    31    29    32    30    33    34    36
Server 3:   30    31    29    30    31    32    33    34
Server 4:   32    33    32    34    34    35    33    36

and so on

The drives are installed:

1 3 5 7
2 4 6 8

Drives 1-6 are cooled by three very high-CFM fans, whereas 7 and 8 sit in front of the PSUs, which have their own fans. Those fans aren't as powerful, so obviously those drives bake.


Given how densely those drives are packed, I can't see the fans in front pushing much air through the drive array; it'd probably just escape out the side vents. The fans in the rear are then blowing that heated air over the CPU and PSUs, which can't be good for them either.

Further, they're running *software* RAID. I can't count how many times I've tried it and had it screw me over somehow. It's flaky at best when your drives are in perfect working order; I can only imagine what it'd do when half the disks it knows about drop offline due to a popped breaker or a bad PSU.

Don't get me wrong, I think it's a great idea, just poor execution. Instead of 45 drives per chassis, I'd stick to 30 or so. That'd give about 3/4" of clearance between each drive, which would allow sufficient airflow and reduce radiant transfer.
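If anyone wants to pull numbers like the ones above off their own boxes, something along these lines works (a sketch: it shells out to smartctl from smartmontools, assumes SMART attribute 194 is the drive temperature, and needs root):

import subprocess

def drive_temp(dev):
    """Return Temperature_Celsius (SMART attribute 194) for a drive, or None if not reported."""
    out = subprocess.run(["smartctl", "-A", dev], capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        if fields and fields[0] == "194":   # 194 Temperature_Celsius
            return int(fields[9])           # RAW_VALUE column
    return None

for dev in ("/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd"):
    print(dev, drive_temp(dev))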

optikalus
Apr 17, 2008

StabbinHobo posted:

another loving random filesystem-goes-read-only-because-you-touched-your-san waste of an afternoon.

I have four identically configured servers, each with a dualport hba, each port going to a separate fc-switch, each fc-switch linking to a separate controller on the storage array. All four configured identically with their own individual 100GB LUN using dm-multipath/centos5.3.

I created two new LUNs and assigned them to two other and completely unrelated hosts to the four mentioned above.

*ONE* of the four blades immediately detects a path failure, then it recovers, then detects a path failure on the other link, then it recovers, then detects a failure on the first path again and says it recovers, but somewhere in here ext3 flips its poo poo and remounts the filesystem read-only.

Now, if I try to remount it, it says it can't because the block device is write-protected. However, multipath -ll says it's [rw].

I had this *exact* same problem when my colocation provider plugged both power supplies of one of my servers into the same (overloaded) PDU. When the PDU finally tripped, ext3 lost its mind and everything went read-only. It also incorrectly claimed to be r/w, yet could not be remounted rw. I had to reboot single-user, run fsck against the partition (2TB!), and finally was able to mount it again.

optikalus
Apr 17, 2008

FISHMANPET posted:

I've read somewhere that when making a RAID it's a good idea to make the partition a little bit smaller than the size of the disk, in case your replacement disk is a few sectors smaller than the failed disk.

I ran into this problem on my PowerVault 220S with 14x 146GB drives. One of the drives (a Fujitsu) failed, so I replaced it with a new Fujitsu, and the Adaptec RAID card would not use the drive. It was 2MB smaller than the other drives >:(

I had to move all the content to a new filer, swap all the users over, down the PV220, rebuild the array, move all the content back and swap everyone back over.

My Areca RAID cards have an option to truncate each disk's capacity down to a specified rounding increment (I used 10GB), so a 250GB drive only presents maybe 240GB, but that allows plenty of variation between the actual capacities of different 250GB drives.

I don't think this has anything to do with the partition size, though.
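What the truncation is really buying you is headroom, and you can do the same thing by hand when you carve the array. A rough sketch of the arithmetic (the 10GB rounding and the 95% figure are just the numbers mentioned in this thread, nothing magic):

def usable_gb(capacity_gb, round_to_gb=10, margin=0.05):
    """Shave a safety margin off a drive, then round down, so a slightly smaller replacement still fits."""
    trimmed = capacity_gb * (1 - margin)
    return int(trimmed // round_to_gb) * round_to_gb

print(usable_gb(250))   # 230 -- any "250GB" drive should cover this
print(usable_gb(146))   # 130 -- would have dodged my 2MB-short Fujitsu problem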

optikalus
Apr 17, 2008

Xenomorph posted:

How do people with dozens/hundreds of terabytes handle backups? Multiple backup drives? HUGE backup drives? What about bandwidth? We back up over the network. Do you have backup drives attached locally to servers? Does each server get its own backup unit?

Quite a few people just rely on snapshots (many vendors let snaps live on a different disk shelf than the primary data, so if you lose a shelf you still have your snaps and can restore from those).

Others rely on tape (though LTO-4 or newer would be a necessity at that scale). Others rely on disk-based VTLs like a Quantum DXi.

optikalus
Apr 17, 2008

Serfer posted:

I'm actually looking at building our own poo poo with Gluster (I really don't want to do this). It will cost roughly the same, but could be 100% SSD. Why can't someone offer something for smaller offices that still have a really high IO need?

Make sure you run some tests before you set your heart on Gluster. Performance was acceptable at best in my test cases, and still quite a bit slower than even NFS on an LVM volume. Also, that was with an IP-over-InfiniBand setup; Gluster would hang/crash when using RDMA. My benchmarks were done a year ago, so maybe they're completely invalid now.

You can PM me if you like and I'll send you my bonnie++ benchmark results.

optikalus
Apr 17, 2008

xarph posted:

How many spindles do you think would be necessary for about 50-60 lightly used VMs? Would an enclosure with 12 in RAID6 + hot spare cut it? I'm going to try to set it up as a science experiment anyway, but knowing whether I should be disappointed in advance would be helpful.

Per-drive IOPS is determined by the drive's characteristics (RPM, average seek time), not by the number of spindles in an array, so what matters is how many of which drives you buy. You could get the same total IOPS with half the drives by using 15K RPM SAS instead of 7200 RPM SATA.

Generally, 7200 RPM SATA drives give ~80 IOPS, 10K drives ~120, and 15K drives ~180. Obviously different drives and buses will perform slightly differently, but those generalizations have been pretty consistent with my own tests on a bunch of different drives over the years.

You need to find out how many IOPS your VMs will actually require, then plan the disk layout around that; something like the sketch below.
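To make that concrete, a rough planning sketch (the 20 IOPS per lightly used VM is a made-up example number, the write penalties are the usual rules of thumb, and controller cache is ignored):

IOPS_PER_SPINDLE = {"7200rpm_sata": 80, "10k_sas": 120, "15k_sas": 180}

def array_iops(drives, drive_type, write_fraction=0.3, write_penalty=6):
    """Usable random IOPS, derated for RAID write penalty (6 for RAID6, 4 for RAID5, 2 for RAID10)."""
    raw = drives * IOPS_PER_SPINDLE[drive_type]
    return raw / ((1 - write_fraction) + write_fraction * write_penalty)

# 12 spindles of 7200 RPM SATA in RAID6
usable = array_iops(12, "7200rpm_sata")
print(usable)        # ~384
print(usable / 20)   # ~19 VMs at 20 IOPS apiece -- adjust that guess to your actual workload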

optikalus
Apr 17, 2008

Serfer posted:

What the gently caress? These things run Windows XP Embedded?

Clariion right?

Aside from the horrible installation support from EMC, I was always amused by the mishmash of hardware and software in their stuff. Their control station was an Intel ISP1100 running some ancient version of Red Hat (7, I think).

Nothing was more frustrating than being at work for 32+ hours while their techs *RACKED* the system. They were unboxing everything one piece at a time, reading all the manuals FOR THE RAILS, then phoning home ASKING FOR MORE SUPPORT. They wouldn't give me a chance to do it myself because it wouldn't have been approved.

When they finally finished racking and cabling (another 36-hour day), we were left with a disgustingly messy installation. A co-worker and I re-racked and re-cabled it in a 4-hour maintenance window a few years later, when we had to move it because the facility had built out our racks improperly (perpendicular to the hot/cold aisle).

optikalus
Apr 17, 2008

GrandMaster posted:

I've had great experiences with the EMC techs that have come onsite to relocate arrays/add disk trays etc. They were nice friendly guys, knew their poo poo, kept everything neat and worked quickly. I suppose it varies from region to region though.

Phone support is a different story altogether

In EMC's defense, it was only the initial install guys who were garbage. We had some great techs come out after the install to assist with new shelf installations, FLARE upgrades, etc.

Still, they should be sending their top guys to make sure the system gets up and running and is installed properly and professionally. First impressions are the most important, and wasting 60+ hours of the client's time on a simple installation should never happen.

optikalus
Apr 17, 2008
Shot in the dark, but we just deployed a NetApp 2240-4 (ONTAP 8.1) in our office, and while it works perfectly for all Windows clients, our OS X Lion clients are extremely slow. The volumes are set up with the NTFS security style and the filer is joined to our AD for CIFS.

Stock OS X clients take several minutes to display folders on the share, and copying takes an eternity. It is pretty much unusable for our power users. These problems don't exist on the Windows side.

We've tried all the various tricks people suggest for OS X and SMB (Apple ditched Samba in 10.7 and wrote their own SMB client, apparently because they didn't like the license).

The odd part is that installing DAVE on OS X makes it work perfectly. The problem is that DAVE runs about $100/workstation/year. It may just be what we have to do. I guess we could also mount the shares on a cheap Windows server and re-export them from there.

optikalus
Apr 17, 2008

NippleFloss posted:

Setting "options cifs.show_snapshot off" sometimes helps with directory browsing on OSX clients, but shouldn't make any difference for transfer speeds. What kind of throughput and latencies are you seeing on transfers? A lot of times if the issue is tough to pin down getting a packet trace is the best way to find the culprit. You can do this from the filer with the pktt command.

If NFS is licensed and you have a suitable directory service to use for mapping you could also consider doing multi-protocol and letting your OSX clients connect via NFS.

cifs.show_snapshot is already set to off. I just ran a simple test copying a 5GB ISO from one of the shares to my iMac running Lion, and it managed about 13MB in 38 minutes (~6KB/s). I ran pktt start e0a -i <myip> during this and ran pktt dump after a few minutes; it captured 6 packets, all ICMP from my workstation to the filer. I then fired up Parallels running Windows 7 with a bridged Ethernet interface and copied the same file to completion at about 40MB/s.

NFS is also licensed, but we don't have any NIS/YP for the OS X hosts. I wouldn't be worried about permissions on the main shares (everything should be wide open on those), just the home shares, and we could always add a new user on the filer for the OS X home-share NFS exports.

optikalus
Apr 17, 2008

Corvettefisher posted:

Anyone have some advice for setting up an active/active NAS? Openfiler, to my knowledge, only does active/passive. I know CentOS supports GFS, so active/active should be doable, but any suggestions on other distros or what to look into for this would be appreciated.

I tried asking a few people in my storage/VMware class and most everyone answers with "Oh, we just pay the EMC/NetApp people to set it up; we don't really touch that too much."

I played with active/active using DRBD + GFS2 on RHEL, but even with a 10Gb InfiniBand cross-connect, writes were pretty slow. There was a little bit of a read penalty, presumably from the GFS filesystem, but it was negligible. In fact, DRBD wasn't able to fill even 1Gb; sync writes tended to peak around 80MB/s across the InfiniBand link. I could have lived with the data having a very slight lag between hosts, but GFS locking just wasn't working out at all -- merely reading a file on one host blocked it from being read on the other. Perhaps with more tuning it could be usable.

I didn't have a chance to play with LVM2's version of block syncing, so I don't know if it's better or worse than DRBD. DRBD is kind of a pain with CentOS 6 as it isn't in the CentOS repo; I had to compile the kernel module from source (it's very easy to build your own RPMs from it), and you have to remember to rebuild it every time you update the kernel.

You'd probably be better off running DRBD or LVM sync with Corosync or RHEL's cluster manager, and exporting an ext4 filesystem via NFS. Corosync will handle the heartbeat and IP handoff between the pair. Note, though, that in my testing there was a 30-60 second window after failover where file handles just stuck and NFS retried. In my case Apache tended to hit MaxClients during that window as well, so it wasn't nearly as seamless as I'd like.
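For anyone who wants to try that route, the DRBD half is only a handful of lines. A sketch of a two-node resource definition (hostnames, devices, and addresses are placeholders, and the syntax shown is the 8.3-era format):

# /etc/drbd.d/r0.res -- synchronous two-node mirror (protocol C; A or B if you can tolerate async)
resource r0 {
    protocol C;
    on nas1 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on nas2 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}

Corosync (or the RHEL cluster manager) then just has to promote DRBD to primary, mount the ext4 filesystem, bring up the service IP, and start the NFS export on whichever node is live.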

optikalus
Apr 17, 2008

ghostinmyshell posted:

If you are in the DIY mood, buy one of these http://www.supermicro.com/products/nfo/sbb.cfm and slap nexenta/openfiler on it with some minor GUI changes. Tada you are now qualified to sell super low end NAS/SAN systems apparently.

I was just looking at that today (briefly). They're not exactly cheap, though, and after you load them up with drives you're possibly into low-end NetApp territory. Granted, you could use a bunch of refurbished SAS drives or something and get it built for $4k, but stocking it with new drives basically doubles the cost of the unit (~$6,500) -- and that's only for 4.8TB.

I could use the interposers on my existing 320GB WD RE3 SATA drives, but those are still ~$50 each. So for my budget it'd be either loading it with a bunch of refurb SAS drives or using existing SATA drives for the same cost. I'd go with the SAS and just order another dozen or so as spares, haha.

optikalus
Apr 17, 2008
We paid just a little more than that for our 2240-4 with 48TB (24x 2TB), fully licensed.

optikalus
Apr 17, 2008

Aniki posted:

How long ago did you purchase your SAN? It looks like I'll need to get them to drop their price quite a bit if you guys got twice the storage for the same price.

It was ordered about a month or so ago.

optikalus
Apr 17, 2008
It's the megasas driver being too verbose. I don't think those are actually errors.

optikalus
Apr 17, 2008

Aniki posted:

It looks like NetApp and CDW are starting to provide some solutions on price. It looks like they included a $5,000 installation fee, which I'll see if they can waive or heavily discount, otherwise I don't see why we couldn't set it up ourselves. They also mentioned including some training vouchers, so I assume that we could use those or their general support for any questions during the setup. That would bring us down from $34k to $29k and if we hold off on some virtualization specific software, which we probably aren't going to need right away, then that would bring it down to $26k. That being said, I would like to just get all of the software included, but I need to wait and see how much they budge.

I'd never touched a NetApp before receiving our 2240-4, and it was quick and painless to install. The initial setup takes care of everything you need to get it online, and then you can use the GUI to configure everything else if you want. I found the docs pretty clear and easy to follow if you'd rather do everything through the CLI like a boss.

optikalus
Apr 17, 2008

NippleFloss posted:

I don't think you're going to find anything that lets you do truly synchronous replication between a local file using a standard filesystem like ext3 or NTFS and a remote location running some network sharing protocol. There are just too many technical issues there. You'd also probably have terrible performance due to write latency issues.

DRBD does it pretty well, and it can do async writes.

optikalus
Apr 17, 2008

BelDin posted:

I'm pretty sure you can, but you have to use a cluster filesystem (like GFS or OCFS2) to make it work. My fuzzy recollection is that I built a Red Hat web/database cluster back around 2008 using DRBD and GFS, and we could publish to the secondary node, which would then replicate to the primary. The filesystem was active/active, even though the services were active/passive.

Yep, though I think RHEL uses LVM2 now instead of DRBD for their own GFS2 active/active stuff.

Everything except for MySQL worked great in my testing (LAMP). MySQL did not play nice when another server had a lock set.

optikalus
Apr 17, 2008

Zero VGS posted:

We had a Snap Server 410 sitting in storage so my boss asked me to upgrade the storage and put it into use. I checked the official website: "Supports up to 2TB drives, hot swappable!"

OK, so I filled it with 2TB drives, and used the old 500GB drives that were in it to build a VM server.

Then I try to actually set this thing up. It turns out that unlike every other NAS I've encountered, this thing keeps its OS on the drives. The drives I just wiped. No biggie, I'll copy an image from the company's site. "Sorry," says phone support, "We don't have a process for that. The software comes preloaded on the drive in manufacturing. The only thing you can do is buy a whole new server."

Are these loving guys for real? Looks like the best thing I can do is call around to our other facilities and see if someone else has a hard drive with the OS so I can clone it.

If all else fails, I have a 4500 and can try to image the non-share data for you. You'd likely have to put the drive in another Linux box and set up the partitions and such. It's running an /old/ version of GuardianOS, though, so it might not be ideal.
