KS
Jun 10, 2003


Update:

We shut down our entire SAN last weekend and brought up one Linux server, then one 2k3 server. Performance was identical to what we get in the middle of the day at peak load.

We found some benchmarks here. These are reads in MB/sec over number of I/O streams.



This jibes with what we're seeing. How is that single-stream performance anywhere near acceptable? I can throw 4 SATA disks in a software RAID-5 and beat that read performance.

What strategies, if any, should we be implementing here? Striping volumes across multiple vdisks? Tweaks to increase the number of I/O "streams" per server? How will we ever get acceptable ESX performance?
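For a rough sense of what that four-disk comparison implies, here's a back-of-envelope estimate of software RAID-5 sequential-read throughput, assuming roughly 70 MB/s of sustained streaming per 2008-era SATA disk (that per-disk figure is my assumption, not from the post):

```python
def raid5_seq_read_mb_s(disks, per_disk_mb_s):
    """Conservative sequential-read estimate for software RAID-5:
    roughly (N - 1) data spindles' worth of streaming bandwidth."""
    return (disks - 1) * per_disk_mb_s

# 4 commodity SATA disks in md RAID-5:
print(raid5_seq_read_mb_s(4, 70), "MB/s")  # 210 MB/s
```

Anything much under that from a FC SAN on a single stream is a red flag.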

KS fucked around with this message at Sep 2, 2008 around 15:57


rage-saq
Mar 21, 2001

Thats so ninja...

KS posted:

Update:

We shut down our entire SAN last weekend and brought up one Linux server, then one 2k3 server. Performance was identical to what we get in the middle of the day at peak load.

We found some benchmarks here. These are reads in MB/sec over number of I/O streams.



This jibes with what we're seeing. How is that single-stream performance anywhere near acceptable? I can throw 4 SATA disks in a software RAID-5 and beat that read performance.

What strategies, if any, should we be implementing here? Striping volumes across multiple vdisks? Tweaks to increase the number of I/O "streams" per server? How will we ever get acceptable ESX performance?



Open up a case with HP; something is seriously not right here.

unknown
Nov 16, 2002
Ain't got no stinking title yet!

What are people using for doing their I/O tests of boxes?

bonnie++ or just straight 'dd' or any other package?

rage-saq
Mar 21, 2001

Thats so ninja...

unknown posted:

What are people using for doing their I/O tests of boxes?

bonnie++ or just straight 'dd' or any other package?

I prefer IOmeter. Here are some ideas towards creating some workloads to help evaluate storage performance.

code:
Shamefully ripped from the vmware forums

######## TEST NAME: Max Throughput-100%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,100,0,0,1,0,0

######## TEST NAME: RealLife-60%Rand-65%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,65,60,0,1,0,0

######## TEST NAME: Max Throughput-50%Read
size,% of size,% reads,% random,delay,burst,align,reply
32768,100,50,0,0,1,0,0

######## TEST NAME: Random-8k-70%Read
size,% of size,% reads,% random,delay,burst,align,reply
8192,100,70,100,0,1,0,0
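For reference, here's a rough decoder for those access-spec lines (the column labels come from the header rows above; the parser itself is just a convenience sketch, not part of IOmeter):

```python
# Decode the IOmeter access-spec lines above into readable form.
# Field names follow the header row in each spec.
FIELDS = ["size", "pct_of_size", "pct_reads", "pct_random",
          "delay", "burst", "align", "reply"]

def parse_spec(line):
    """Turn one CSV spec line into a labeled dict of ints."""
    return dict(zip(FIELDS, (int(x) for x in line.split(","))))

spec = parse_spec("8192,100,65,60,0,1,0,0")  # RealLife-60%Rand-65%Read
print(f"{spec['size'] // 1024}K blocks, "
      f"{spec['pct_reads']}% reads, {spec['pct_random']}% random")
```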

rage-saq
Mar 21, 2001

Thats so ninja...

KS posted:

performance woes

Actually I just remembered something, I had some similar performance issues on some BL25ps because of some stock settings of the HBA driver/BIOS. Are you using Qlogic or Emulex?
What is your maximum queue depth and/or execution throttle? You might want to try messing with that figure to see if you can improve your single server scenario.

edit: Also, there's a bunch of storage guys in #shsc on irc.synirc.org. You should stop by and chat; we might be able to give you other pointers.

rage-saq fucked around with this message at Sep 2, 2008 around 19:06

brent78
Jun 23, 2004

I killed your cat, you druggie bitch.

Anyone making SANs with SFF SAS drives yet? We're trying to standardize our environment around 72/146 GB 2.5" SFF SAS drives (300 GB by end of year).

unknown posted:

What are people using for doing their I/O tests of boxes?
Also recommending IOMeter. But more importantly, try to simulate the actual workload that you expect to use. I recently had a vendor tell me "you should be getting at least 12,000 IOPS on that LUN, not sure why you're only seeing 8,000". As it turns out, their test was performed using 512B blocks, 100% sequential, 100% read. Well duh.

small blocks = higher IOPS
big blocks = higher throughput

To simulate our SQL workload, we use 8k blocks, 60% write, 40% read, 100% random.
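The small-blocks/big-blocks point is easy to make concrete: at a fixed IOPS rate, throughput scales linearly with block size. A quick sketch using the numbers from the vendor anecdote above:

```python
def throughput_mib_s(iops, block_bytes):
    """MiB/s delivered at a given IOPS rate and block size."""
    return iops * block_bytes / 2**20

# The vendor's 512-byte sequential-read test vs. an 8 KiB SQL-style workload:
print(throughput_mib_s(12000, 512))   # ~5.9 MiB/s
print(throughput_mib_s(8000, 8192))   # 62.5 MiB/s
```

Impressive IOPS on tiny sequential blocks says almost nothing about the workload you actually run.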

brent78 fucked around with this message at Sep 3, 2008 around 04:39

KS
Jun 10, 2003


rage-saq posted:

What is your maximum queue depth and/or execution throttle? You might want to try messing with that figure to see if you can improve your single server scenario.

Messing around with this a bit today. We're using QLogic mezzanine cards in a mix of 20P G3s and 25Ps.

BL20P default was queue depth 16 and execution throttle 16.
Dell server with QLA2640s was queue depth 16 and execution throttle 255.

I've been tweaking queue depth a bunch but not execution throttle. I'll have to try more settings tomorrow.

I started messing with I/O schedulers too. Out of the box it was using CFQ on the 8 paths underlying the mpath device. I think there's some performance to be gained here.

This thread, especially jcstroo's post, drives me crazy.

I'll hang out in #SHSC, thanks.

unknown
Nov 16, 2002
Ain't got no stinking title yet!

rage-saq posted:

I prefer IOmeter. Here are some ideas towards creating some workloads to help evaluate storage performance.

brent78 posted:

Also recommending IOMeter. But more importantly, try to simulate the actual workload that you expect to use. I recently had a vendor tell me "you should be getting at least 12,000 IOPS on that LUN, not sure why you're only seeing 8,000". As it turns out, their test was performed using 512B blocks, 100% sequential, 100% read. Well duh.

Unfortunately I'm running FreeBSD - although I can probably get IOmeter working under the Linux emulation. I just haven't seen any success stories while googling so far, so I'm looking for other industry-standard tools.

And yes, I'm well aware of testing my load, not someone else's marketing focused one. In one of our applications, bonnie++ is actually fairly close to what we're doing (numerous files).

Ixian
Oct 9, 2001

Many machines on Ix....new machines

Pillbug

We've only recently begun to virtualize, with ESXi. We'll probably be buying a starter kit for 3 servers + VMM from VMware here in a bit; right now I'm just digging the free version and working with the evaluation of the rest.

I'm using 6 servers, Dell PE2950s and 1950s - they have the newer CPUs that support Intel VT. All have 8 GB of RAM except the 2950 running our primary SQL 2005 server - it has 16 GB. They host our custom web-based application(s), so there are 2 IIS servers, 2 Active Directory DCs, a Backup Exec server, 2 SQL 2005 servers (one primary, the other mirrored), and an ISA 2006 firewall, plus a file storage host. Pretty simple stuff.

I've also inherited (from the previous CEO) a Dell/EMC AX150i iSCSI SAN with 8 of its 12 bays filled with 500 GB SATA drives. Finally, I have two Dell 5724 iSCSI-optimized switches to tie it all together outside of my regular network. The fun part has been trying to fit them all together.

So far I've managed to get several of our lighter-load servers virtualized, but I am unsure if I should do so with the SQL server - we don't have an enormous load on it (it hosts our customer management apps and product catalogs, and runs just fine on a 2950 w/RAID 5 local storage, Perc 5i controller). Is my setup too low-rent for such a situation? Any general pointers on how I should set up ESXi with the AX150i/my config?

Mierdaan
Sep 14, 2004



Pillbug

Mierdaan posted:

Thanks for the thread, 1000101.

Anyone out there using Dell's MD3000i box? We're just getting to the point where we need something low-end, due to increasing interest in virtualization and an unwillingness on my part to replace an ancient Proliant ML350 file server with another traditional file server. We don't have anything terribly IOPS-intensive we'd be putting on it; probably the most IOPS would be Exchange transaction logs, SQL transaction logs + DB for 200-person company setup, so I don't think iSCSI performance issues are worth worrying about for us.

It's a 15 spindle box, so we're thinking about carving it up thusly:

1) RAID 10, 4x 300GB 15K RPM SAS drives: SQL DB
2) RAID 10, 4x 146GB 15K RPM SAS drives: SQL transaction logs
3) RAID 1, 2x 73GB 15K RPM SAS drives: Exchange transaction logs
4) RAID 5, 4x 450GB 15K RPM SAS drives: VMimages, light file server use
(with 1 spare drive for whatever RAID set we decide most needs it)

That'd take care of our immediate needs and give us some room to expand with additional MD1000 shelves (2 can be added to the MD3000i, though the IO doesn't scale at all). We're a small shop and have no experience with SANs, so I could definitely use some feedback on this idea.

Floating this question again
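A quick sanity check on that carve-up is to total the usable capacity of each set, using the standard rules of thumb (RAID 10 keeps half the spindles, RAID 1 one drive's worth, RAID 5 keeps N - 1); this is just a sketch, not anything MD3000i-specific:

```python
def usable_gb(level, count, size_gb):
    """Usable capacity for common RAID levels (back-of-envelope)."""
    if level == "raid10":
        return count // 2 * size_gb
    if level == "raid1":
        return size_gb
    if level == "raid5":
        return (count - 1) * size_gb
    raise ValueError(level)

carve_up = [
    ("raid10", 4, 300),  # SQL DB
    ("raid10", 4, 146),  # SQL transaction logs
    ("raid1",  2, 73),   # Exchange transaction logs
    ("raid5",  4, 450),  # VM images, light file serving
]

for level, count, size in carve_up:
    print(level, usable_gb(level, count, size), "GB usable")
```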

Vanilla
Feb 24, 2002

Hay guys what's going on in th

I can answer most of your EMC questions (features, functionality, why EMC over xyz) from a sales perspective - I'm not that technical!

I can give opinions on other vendors in general.

optikalus
Apr 17, 2008


Mierdaan posted:

Thanks for the thread, 1000101.

Anyone out there using Dell's MD3000i box? We're just getting to the point where we need something low-end, due to increasing interest in virtualization and an unwillingness on my part to replace an ancient Proliant ML350 file server with another traditional file server. We don't have anything terribly IOPS-intensive we'd be putting on it; probably just Exchange transaction logs SQL transaction logs + DB for 200-person company setup, so I don't think iSCSI performance issues are worth worrying about for us.

It's a 15 spindle box, so we're thinking about carving it up thusly:

1) RAID 10, 4x 300GB 15K RPM SAS drives: SQL DB
2) RAID 10, 4x 146GB 15K RPM SAS drives: SQL transaction logs
3) RAID 1, 2x 73GB 15K RPM SAS drives: Exchange transaction logs
4) RAID 5, 4x 450GB 15K RPM SAS drives: VMimages, light file server use
(with 1 spare drive for whatever RAID set we decide most needs it)

That'd take care of our immediate needs and give us some room to expand with additional MD1000 shelves (2 can be added to the MD3000i, though the IO doesn't scale at all). We're a small shop and have no experience with SANs, so I could definitely use some feedback on this idea.


I've been looking at the MD3000i as well, mainly because it is the only iSCSI filer that Dell sells that doesn't use insanely expensive disks.

On paper it looks quite good; however, some of the wording is a bit confusing. For example, they claim more performance by adding a secondary controller, and they only mention cache when you have a dual-controller setup. I searched and searched and could not find any mention of cache per controller, only for the dual-controller setup, so you'll probably want that second controller just in case.

Those disks should yield a healthy 170 IOPS per disk (vs. ~130 for 10k), so that would be ideal for a database. If your database can live fine on 680 IOPS, it's probably good enough.
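The 680 figure is just 4 x 170 for pure reads; for a mixed workload the usual back-of-envelope also charges each write two disk I/Os on RAID 10. A sketch (the penalty model is the standard rule of thumb, not anything from the MD3000i docs):

```python
def raid10_host_iops(disks, per_disk_iops, read_fraction):
    """Host-visible IOPS for a RAID-10 set: reads cost one disk I/O,
    writes cost two (mirror copy). Standard back-of-envelope model."""
    write_fraction = 1 - read_fraction
    backend = disks * per_disk_iops
    return backend / (read_fraction + 2 * write_fraction)

print(raid10_host_iops(4, 170, 1.0))  # 680 IOPS, pure read
print(raid10_host_iops(4, 170, 0.4))  # ~425 IOPS at a 40%-read mix
```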

That extra drive should be used as a universal hot spare, though.

Mierdaan
Sep 14, 2004



Pillbug

optikalus posted:

I've been looking at the MD3000i as well, mainly because it is the only iSCSI filer that Dell sells that doesn't use insanely expensive disks.

Have you looked at any vendors outside of Dell? The reseller we work with is pretty big on Dell, so I don't know how biased they are; they're claiming wondrous things about the 3000i.

optikalus posted:

Those disks should yield a healthy 170 IOPS per disk (vs. ~130 for 10k), so that would be ideal for a database. If your database can live fine on 680 IOPS, it's probably good enough.
Time to dig up some performance counters.

optikalus posted:

That extra drive should be used as a universal hot spare, though.
I wish; apparently it can only be a hot spare for one RAID set, not any RAID set. I was thinking as long as the spare was at least as large as the largest spindle in a RAID set, it could be universal, but I'm told that's not the case.

optikalus
Apr 17, 2008


Mierdaan posted:

Have you looked at any vendors outside of Dell? The reseller we work with is pretty big on Dell, so I don't know how biased they are; they're claiming wondrous things about the 3000i.

Not really as I don't have the capital to plop down $25k unfinanced on a filer (Dell offers $1 buy-out leasing terms). HP has a similar product, but I dislike HP for various reasons and I didn't immediately see any financing options.

I would love an EMC or NetApp, but it's just out of my reach at the moment.

Actually, I did have an EMC IP4700 for a few days -- it took a 4' drop off my cart on the way to my car. Flattened. Oops.

dexter
Jun 24, 2003


optikalus posted:

Not really as I don't have the capital to plop down $25k unfinanced on a filer (Dell offers $1 buy-out leasing terms). HP has a similar product, but I dislike HP for various reasons and I didn't immediately see any financing options.

I would love an EMC or NetApp, but it's just out of my reach at the moment.

Actually, I did have an EMC IP4700 for a few days -- it took a 4' drop off my cart on the way to my car. Flattened. Oops.

My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/CentOS and Windows 2003/2008.

optikalus
Apr 17, 2008


dexter posted:

My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/CentOS and Windows 2003/2008.

The PERC4/DC isn't even compatible with RHEL. I found that out the hard way with my PV220S. I could transfer enough data to fill the card's cache at speed, but any further writes were dog slow, like 10MB/s or less (initial write was only 90MB/s).

The Adaptec I replaced my PERC with can max out at 300MB/s reads and something like 200MB/s write.

LSI can suck it.

oblomov
Jun 20, 2002

Meh... #overrated

Anyone have experience with LeftHand Networks, specifically their rebranded HP (or Dell 2950) appliances? Went to a couple of demos and their premise seems pretty slick. Kind of like 3Par but cheaper.

I especially liked ability to add/remove nodes almost on the fly.

echo465
Jun 3, 2007
I like ice cream

Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (which really only hold 930GB or so), and you've got 13TB of RAID-6 storage for around $7500.

rage-saq
Mar 21, 2001

Thats so ninja...

echo465 posted:

Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (which really only hold 930GB or so), and you've got 13TB of RAID-6 storage for around $7500.

Get ready to have fun when the controller shits itself and annihilates all the data on your array, and Promise has no idea why and offers no recourse for fixing it or preventing it from happening again (which it will).

I've had 3 different customers with different Promise units, and they all had something along these lines happen; Promise basically told them to go fuck themselves.

I can't tell people enough to avoid this kind of crap.

adorai
Nov 2, 2002

10/27/04 Never forget

Grimey Drawer

echo465 posted:

Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (which really only hold 930GB or so), and you've got 13TB of RAID-6 storage for around $7500.
I'd rather run a whitebox Openfiler than some shit-ass Promise NAS.

echo465
Jun 3, 2007
I like ice cream

rage-saq posted:

Get ready to have fun when the controller shits itself and annihilates all the data on your array, and Promise has no idea why and offers no recourse for fixing it or preventing it from happening again (which it will).

I've had 3 different customers with different Promise units, and they all had something along these lines happen; Promise basically told them to go fuck themselves.

I can't tell people enough to avoid this kind of crap.

My google skills must be failing me, because most of the whining about the Promise VTrak line that I am finding is from when Apple discontinued the X-RAID and told everyone to buy Promise instead. I'm interested in hearing whether this is a widespread problem.

echo465 fucked around with this message at Dec 15, 2015 around 05:37

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


echo465 posted:

My google skills must be failing me, because most of the whining about the Promise VTrak line that I am finding is from when Apple discontinued the X-RAID and told everyone to buy Promise instead. I'm interested in hearing whether this is a widespread problem.
People in the enterprise world tend to have better things to do than bitch about their hardware on the Internet.

Cidrick
Jun 10, 2001

Praise the siamese


I was caretaker for 4 vTrak m500i units at a contract gig once, filled with 500GB Seagate SATA drives. One of the units had the controller take a shit, twice, about 6 months apart. After a very stressful morning on the phone with Promise, they had me force each drive online in the web management tool, and that caused the controller to magically fix itself somehow and everything was hunky-dory. No data loss or anything. I used the exact same procedure for the second instance and that also fixed everything.

I think I was just very very lucky

Reefer Inc.
Oct 11, 2007
A psychotic is a guy who's just found out what's going on. - WSB

Misogynist posted:

People in the enterprise world tend to have better things to do than bitch about their hardware on the Internet.

Oh I don't know, alt.sysadmin.recovery is probably a good resource if you remember to ROT13 the brand name before you search.

oblomov
Jun 20, 2002

Meh... #overrated

So, nobody has any experience with LeftHand? Googling online did not find much. I guess I'll ask them for some references and put the product in the lab for some stress testing.

Alowishus
Jan 8, 2002

My name is Mud

How can I definitively determine the blocksize of an EMC Clariion CX3 RaidGroup?

I've got a script that is pulling a "navicli getall" report and parsing it to produce a web-based report about free space. Unfortunately, all I get from the report for each RaidGroup is this:
code:
Raw Capacity (Blocks):                     633262480
Logical Capacity (Blocks):                 506609984
Free Capacity (Blocks,non-contiguous):     296894784
By looking at a LUN on that RaidGroup:
code:
LUN Capacity(Megabytes):    102400
LUN Capacity(Blocks):       209715200
... I can infer that the blocksize is 2048, but is that something I can rely on to always be the case? If not, is there an easier way to get it out of Navisphere?
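The arithmetic behind that inference can be checked directly: the 2048 is blocks per MB, which puts the underlying block at a conventional 512 bytes (sketch only; whether the report ever uses a different unit is worth confirming against EMC docs):

```python
# Figures from the LUN listing above.
lun_mb = 102400         # LUN Capacity(Megabytes)
lun_blocks = 209715200  # LUN Capacity(Blocks)

blocks_per_mb = lun_blocks // lun_mb             # 2048
bytes_per_block = lun_mb * 2**20 // lun_blocks   # 512
print(blocks_per_mb, bytes_per_block)
```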

rage-saq
Mar 21, 2001

Thats so ninja...

Alowishus posted:

code:
LUN Capacity(Megabytes):    102400
LUN Capacity(Blocks):       209715200
... I can infer that the blocksize is 2048, but is that something I can rely on to always be the case? If not, is there an easier way to get it out of Navisphere?

I'm not an EMC guy (I've had very little experience with it, actually) but I'd definitely say you are at a 2k block size. Lots of larger SANs don't give you a choice of block size. HP EVA, NetApp, and I think HDS are all 4k blocks and you can't change it.

H110Hawk
Dec 28, 2006
Can't install Windows?
BUY APPLE


I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work.

Our requirements:

Hardware raid, dual parity preferred (RAID6), BBU
Cheap!
Runs or attachable to Debian Etch, 2.6 kernel.
Power-dense per gb.
Cheap!

To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon RSYNC to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O.

We are looking for the cheapest/densest cost/gig. If that is buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing.

The ability to buy the unit without any disks is a big plus, we are frequently able to purchase disks well below retail.

I need about 50-100tb usable in the coming month or two, then it will scale back. I am putting these in 60amp/110v four post 19" racks.

Edit: Oh and I will murder anyone who suggests coraid, and not just because it is neither power dense nor hardware raid.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

Alowishus posted:

How can I definitively determine the blocksize of an EMC Clariion CX3 RaidGroup?

I've got a script that is pulling a "navicli getall" report and parsing it to produce a web-based report about free space. Unfortunately, all I get from the report for each RaidGroup is this:
code:
Raw Capacity (Blocks):                     633262480
Logical Capacity (Blocks):                 506609984
Free Capacity (Blocks,non-contiguous):     296894784
By looking at a LUN on that RaidGroup:
code:
LUN Capacity(Megabytes):    102400
LUN Capacity(Blocks):       209715200
... I can infer that the blocksize is 2048, but is that something I can rely on to always be the case? If not, is there an easier way to get it out of Navisphere?

Unless I'm wrong, you can set the block size on a LUN-by-LUN basis depending on your requirements: 2k, 4k, 8k, 16k, 32k, etc. This is the size you are likely looking for.

Under the hood, the block size is fixed at 520 bytes: 512 bytes of data plus 8 bytes of Clariion integrity metadata, with those last 8 bytes transparent to the user.

Have you tried looking at the LUN via Navisphere?
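That 520-byte formatting can be quantified quickly; a trivial sketch of how much of each on-disk sector goes to the integrity bytes:

```python
SECTOR_BYTES = 520  # formatted sector on the Clariion
DATA_BYTES = 512    # what the host actually sees

overhead = (SECTOR_BYTES - DATA_BYTES) / SECTOR_BYTES
print(f"{overhead:.1%} of each formatted sector is integrity metadata")
```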

rage-saq
Mar 21, 2001

Thats so ninja...

H110Hawk posted:

I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work.

Our requirements:

Hardware raid, dual parity preferred (RAID6), BBU
Cheap!
Runs or attachable to Debian Etch, 2.6 kernel.
Power-dense per gb.
Cheap!

To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon RSYNC to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O.

We are looking for the cheapest/densest cost/gig. If that is buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing.

The ability to buy the unit without any disks is a big plus, we are frequently able to purchase disks well below retail.

I need about 50-100tb usable in the coming month or two, then it will scale back. I am putting these in 60amp/110v four post 19" racks.

Edit: Oh and I will murder anyone who suggests coraid, and not just because it is neither power dense nor hardware raid.

Get an HP SmartArray P800 card and then attach up to 8 MSA60 12xLFF enclosures.
The P800 is about $1k, each shelf is about $3k, and then add your 1TB LFF SATA drives and you are good to go. If you need more, attach another P800 and more shelves, etc.
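For a ballpark on that suggestion: using the prices above ($1k for the P800, $3k per 12-bay MSA60 shelf) plus a hypothetical $250 per 1TB SATA drive (the drive price is my assumption, not from the thread), a fully-populated 8-shelf build sketches out like this:

```python
P800 = 1000          # controller, per the estimate above
SHELF = 3000         # MSA60 12xLFF enclosure, per the estimate above
DRIVE = 250          # hypothetical price for a 1TB LFF SATA drive
BAYS_PER_SHELF = 12
MAX_SHELVES = 8      # per P800, per the post above

shelves = MAX_SHELVES
drives = shelves * BAYS_PER_SHELF
total = P800 + shelves * SHELF + drives * DRIVE
raw_tb = drives * 1.0

print(f"${total:,} for {raw_tb:.0f} TB raw -> ${total / raw_tb:,.0f}/TB")
```

Swap in real drive quotes to get your own number.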

complex
Sep 16, 2003



This is for the 50GB backup offer, I presume?

H110Hawk
Dec 28, 2006
Can't install Windows?
BUY APPLE


complex posted:

This is for the 50GB backup offer, I presume?

Hrm?

rage-saq posted:

Get an HP SmartArray P800 card and then attach up to 8 MSA60 12xLFF enclosures.
The P800 is about $1k, each shelf is about $3k, and then add your 1TB LFF SATA drives and you are good to go. If you need more, attach another P800 and more shelves, etc.

Right now our current theory is a 3ware 9690SA card with these:

http://www.siliconmechanics.com/i20...od-sas_sata.php

So your solution is about 2x the cost. It's a Supermicro backplane; we're getting a demo unit in about 5-10 days. Any horror stories about the card? Backplane? Is there something cheaper per rack U per GB? (Or moderately close, monthly cost of the rack and all.)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


H110Hawk posted:

I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work.

Our requirements:

Hardware raid, dual parity preferred (RAID6), BBU
Cheap!
Runs or attachable to Debian Etch, 2.6 kernel.
Power-dense per gb.
Cheap!

To give you an idea, we currently have a 24-bay server with a 3ware 9690SA hooked up to a SAS/SATA backplane, and have stuffed it full of 1TB disks. People are using SFTP/FTP and soon RSYNC to upload data, but once the disks are full they are likely to simply stay that way with minimal I/O.

We are looking for the cheapest/densest cost/gig. If that is buying an external array to attach to this system via SAS/FC, OK! If it's just buying more of these systems, then so be it. I've started poking around at Dot Hill, LSI, and just a JBOD looking system, but I figured I would ask here as well, since LSI makes it hard to contact a sales rep, and Dot Hill stuff on CDW has no pricing.

The ability to buy the unit without any disks is a big plus, we are frequently able to purchase disks well below retail.

I need about 50-100tb usable in the coming month or two, then it will scale back. I am putting these in 60amp/110v four post 19" racks.

Edit: Oh and I will murder anyone who suggests coraid, and not just because it is neither power dense nor hardware raid.
If OpenSolaris is a good alternative to Debian Etch, you might try a SunFire X4500, whose price drops by a really substantial amount if you get them to discount you over the phone. 48TB on ZFS; you end up with about 38TB usable and all the redundancy you could ever need. OpenSolaris's built-in CIFS server is awfully good at UID/GID mapping for mixed Unix/Windows environments, and outperforms the hell out of Samba. I'm running three of them in production with two more in testing and two more on the way. They're rock solid and, in spite of using 7200 RPM SATA disks, benchmark really close to the fibre channel arrays on my [vendor name removed] tiered NAS which does NFS and CIFS in hardware.

You could also use Solaris 10 with Samba if OpenSolaris makes you uncomfortable, but OpenSolaris is just so much goddamn nicer, especially with the built-in CIFS server, and just as stable.

If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an asshole.

Vulture Culture fucked around with this message at Sep 17, 2008 around 23:00

H110Hawk
Dec 28, 2006
Can't install Windows?
BUY APPLE


Misogynist posted:

If you're really afraid of Sun OSes, you could also run Linux on the thing, but if you're trying to manage a 48TB pool without using ZFS you're kind of an asshole.

I have 26 thumpers.

Wicaeed
Feb 8, 2005


Anyone have recommendations for reading material on SANs?

Absorbs Quickly
Jan 6, 2005

And then the ArchAngel descended from heaven.

dexter posted:

My only problem with the MD3000 (SAS) we use is that the controller card isn't compatible with FreeBSD. It only works with RHEL/Centos and Windows 2003/2008.

Thanks for that. Dell was trying to sell us those explicitly *for* use with FreeBSD.

I wish I could stab whoever in procurement decided we had to use Dell for all of our FreeBSD boxes.

H110Hawk posted:

I come seeking a hardware recommendation. It's not quite enterprise, but it's not quite home use, either. We need a bulk storage solution on the cheap. Performance is not a real concern of ours, it just needs to work.

Our requirements:

Hardware raid, dual parity preferred (RAID6), BBU
Cheap!
Runs or attachable to Debian Etch, 2.6 kernel.
Power-dense per gb.
Cheap!
The sales team at IX has had some really nice things to say about their new Colossus array, and while I haven't found any reviews, the sales guys there are usually pretty honest about what sucks and what doesn't. It's an FC-attached RAID device, 48TB (raw) in 4U. It's more than the Supermicro solution, but it does RAID on the device and has a bunch of multipathing and redundant-controller fanciness (as well as support).

http://www.ixsystems.com/products/colossus.html

Absorbs Quickly fucked around with this message at Sep 18, 2008 around 16:31

Catch 22
Dec 1, 2003
Damn it, Damn it, Damn it!

1000101 posted:

So this shit all sounds fancy and expensive; how much does it cost man!?!?!

Depending on options you can consolidate all of your storage for <30k fairly easily. I think you can get 4TB of StorVault (low end netapp) for ~12-15k if not less.

I collected some IOPS data for my environment, and we are looking at an average of 300, top end 600 IOPS, excluding when my backup is running. I am looking at an EqualLogic SATA SAN, 8TB (3000 IOPS), and it's $50K+, or $60K+ for SAS.

Mind you this is SATA, and I assume you are talking about SCSI/SAS high-end enterprise stuff. You say 30K for comparable specs? What am I getting hosed on, or what are you leaving out?

I was thinking 30K when I started this trek and I don't know what happened now that I hold this quote in my hands.

M@
Jul 10, 2004


Wanted to chime in this thread and say that if anyone needs any used NAS/SAN hardware, I stock this stuff in my warehouse. Mostly NetApp, low to mid-high end equipment (960s, 980s, 3020s, and I've got one 6080), plus HP and Sun, although we do get the occasional EMC system in. I stock used disk shelves and individual drives too, so if you're looking for a cheap spare or something, let me know.

I've worked with a bunch of goons in the past and no one's ever said anything bad about me to my face. Just shoot me an IM/PM if you need help with anything, I don't care if it's just to see if your current vendor is screwing you over or if you just want to talk about feelings. Because I do.

fake edit: Sorry if this sounds spammy. There's not really any way to say "HEY I CAN HELP YOU! DO BUSINESS WITH ME" without doing so.

Auslander
Nov 26, 2003



Oh man, this thread is a welcome sight.

I'm just about to get a Clariion AX4 configured w/ 11 400GB 10k SAS drives and 2 heads to replace our crap-ass Ubuntu NFS server that's currently providing backend storage to 4 ESX frontends.

Any setup tips on carving this thing up once it ships to us? How should I connect VM -> storage? Expose a LUN for each VM and use RDM in VMware? One giant VMFS volume on a RAID-10 on the AX4? A mix of both? Nothing? OH GOD THE CHOICES ARE KILLING ME


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.


M@ posted:

Wanted to chime in this thread and say that if anyone needs any used NAS/SAN hardware, I stock this stuff in my warehouse. Mostly NetApp, low to mid-high end equipment (960s, 980s, 3020s, and I've got one 6080), plus HP and Sun, although we do get the occasional EMC system in. I stock used disk shelves and individual drives too, so if you're looking for a cheap spare or something, let me know.
I need a handful of IBM GNS300C3 drives, since, despite my having 4-hour replacement on these, IBM cannot direct me to a single one anywhere in the country.
