Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cavepimp posted:

Well, the good news is that the S500 I was talking about is now on the way out (being relegated to low-priority storage until it dies, basically).

Now I'm looking for something entirely different. I want to move to a D2D2D backup environment, with one disk tier being off-site at our colo.

Our storage footprint is relatively low (under 2TB, and not really growing that fast), so what would be my best bet if I wanted a fairly low-end NAS/SAN that would replicate to an identical off-site unit and be fairly expandable later?

My only real experience is with Synology, but it looks like people have had issues with them recently over in the other thread and I've never tried replicating between two units.
Have you looked at EMC's lower-end VNX line?

Syano
Jul 13, 2005
Seconding the VNXe series from EMC. NFS, iSCSI, and dedupe all come with the unit, and you can add replication fairly cheaply.

Cavepimp
Nov 10, 2006
I'll definitely have to take a look at those. Thanks guys.

Nomex
Jul 17, 2002

Flame retarded.
I just inherited an environment where they're about to get a FAS6210. One of the workloads will be 8 SQL servers, each needing a contiguous 4TB volume. I'll need 32TB worth of volumes total. I'm wondering what the best practice would be for carving up my aggregates. Should I just make 1 large aggregate per FAS or would it be better to split them into smaller ones? This was my first week working with Netapp, so I'm not sure what would be recommended.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Nomex posted:

I just inherited an environment where they're about to get a FAS6210. One of the workloads will be 8 SQL servers, each needing a contiguous 4TB volume. I'll need 32TB worth of volumes total. I'm wondering what the best practice would be for carving up my aggregates. Should I just make 1 large aggregate per FAS or would it be better to split them into smaller ones? This was my first week working with Netapp, so I'm not sure what would be recommended.

With RAID-DP, you always want the biggest aggregates / raid groups you can get, as it saves you from wasting drives to new raid sets. Every aggregate means a new raid set, which means 2 disks lost to the dual parity drives. Ideally you'll split the drives evenly between your controllers and make the biggest aggregates you can, making sure to maximize your "raid group" size to minimize lost disks. More disks in an aggregate = more spindles your data is spread across = better performance.

Assuming you get ONTAP 8.0.1 on the FAS (which I am 99% sure you will, I think it's the only supported ONTAP for the 62xx series) you can make 64-bit aggregates, so you can toss as many disks as you want into a single aggregate (per controller).
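
To put rough numbers on the parity overhead (back-of-the-envelope math of my own, not anything out of NetApp's sizing tools), here's a quick sketch of how raid group size changes how many disks you lose out of a 48-disk pool:

```python
import math

def raid_dp_layout(disks, raid_group_size, spares=2):
    """Rough RAID-DP sizing: 2 parity disks per raid group, plus hot spares.
    Illustrative only -- real layouts depend on disk type and ONTAP limits."""
    pool = disks - spares
    groups = math.ceil(pool / raid_group_size)
    parity = groups * 2
    return groups, parity, pool - parity

for rg in (12, 16, 20, 28):
    groups, parity, data = raid_dp_layout(disks=48, raid_group_size=rg)
    print(f"rg size {rg:2d}: {groups} raid groups, {parity} parity disks, {data} data disks")
```

Bigger raid groups mean fewer disks burned on parity, which is the point; the trade-off, as noted in the next post, is rebuild time.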

madsushi fucked around with this message at 03:09 on Apr 15, 2011

namaste friends
Sep 18, 2004

by Smythe

madsushi posted:

With RAID-DP, you always want the biggest aggregates / raid groups you can get, as it saves you from wasting drives to new raid sets. Every aggregate means a new raid set, which means 2 disks lost to the dual parity drives. Ideally you'll split the drives evenly between your controllers and make the biggest aggregates you can, making sure to maximize your "raid group" size to minimize lost disks. More disks in an aggregate = more spindles your data is spread across = better performance.

Assuming you get ONTAP 8.0.1 on the FAS (which I am 99% sure you will, I think it's the only supported ONTAP for the 62xx series) you can make 64-bit aggregates, so you can toss as many disks as you want into a single aggregate (per controller).

Be careful about modifying your raid group sizes. If you make them too big, it will take an eternity for your raid group to rebuild after a disk failure. I wouldn't recommend changing them at all unless you had a very good reason for doing so (i.e. you had no choice).

The unfortunate problem with RAID6 (or DP) is that you "waste" a lot of disk for the sake of resiliency.

I agree with you, 64-bit aggregates are the way to go.

Nomex, 1 aggregate is fine.

Nomex
Jul 17, 2002

Flame retarded.
Would you put multiple workloads in 1 aggregate to maximize the amount of disks globally? Should I only have 2 aggregates total? If so, that makes my life a lot easier.

namaste friends
Sep 18, 2004

by Smythe

Nomex posted:

Would you put multiple workloads in 1 aggregate to maximize the amount of disks globally? Should I only have 2 aggregates total? If so, that makes my life a lot easier.

Before 64-bit aggregates rolled out, the main limitation was the 16 TB aggregate maximum, which would be a major factor in your planning. However, now that this limit is no longer a problem, your main concern should be whether or not you think you'll ever need to perform aggregate-level snapshots/restores. I've never seen anyone perform an aggregate-level SnapRestore, but I have heard that it has saved the skin (and thus careers) of some people. It all comes down to how much money you have for disk.

Designing for performance purely by spindle count isn't really practical anymore now that disks are so massive, which is why NetApp now sells FlashCache (aka PAM II) cards.

For example, if you need 50 TB raw and you wanted to use 2 TB SATA drives, you wouldn't get very good performance/spindle compared to 50 TB worth of SAS drives. However if you stuck some FlashCache in front of your SATA array, you'd probably obtain comparable performance, depending on your workload.
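
To make the spindle math concrete (rule-of-thumb IOPS figures of my own, not NetApp numbers; real results depend heavily on the workload):

```python
import math

# Rough rule-of-thumb random IOPS per spindle -- illustrative guesses only.
IOPS_PER_SPINDLE = {"7.2K SATA": 75, "15K SAS": 175}

def spindle_estimate(raw_tb, drive_tb, drive_type):
    spindles = math.ceil(raw_tb / drive_tb)
    return spindles, spindles * IOPS_PER_SPINDLE[drive_type]

for label, size_tb, kind in (("2 TB SATA", 2.0, "7.2K SATA"), ("600 GB SAS", 0.6, "15K SAS")):
    n, iops = spindle_estimate(50, size_tb, kind)
    print(f"50 TB raw on {label}: {n} spindles, ~{iops} random IOPS before any cache")
```

That gap between roughly 1,900 and 15,000 back-end IOPS is what a read cache like FlashCache is there to paper over, workload permitting.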

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Nomex posted:

Would you put multiple workloads in 1 aggregate to maximize the amount of disks globally? Should I only have 2 aggregates total? If so, that makes my life a lot easier.

Yep, two big aggregates, one for each controller.

Timdogg
Oct 4, 2003

The internet? Is that thing still around?
Thanks for the thread OP. It has been really helpful these past few months. We are just entering the consolidated storage arena and are currently leaning toward a Dell MD3200i with three MD1200s attached to it. This will get us 96 TB raw, which we desperately need, all for around $37k.

Does anyone have experience with these systems? We aren't looking for blazing speed, mostly just storing lots of large files in one place as opposed to distributed over multiple 5Us.

Also, Dell is trying to push their "iSCSI Optimized" switches, but I have limited experience with their switches and am hesitant to jump in now. Anyone recommend them?

conntrack
Aug 8, 2003

by angerbeet
My personal theory is that nobody outside Dell marketing knows what makes the switches "optimized".

Probably some play on flow control or QoS?

Vanilla
Feb 24, 2002

Hay guys what's going on in th
NetApp question, guys.

What's the write penalty on RAID-DP?

I.e. RAID 1 = 2 writes, R5 = 4 writes, R6 = 6 writes, but I've no idea what it is with RAID-DP. I know there are two parity drives but no idea of the actual penalty.

Syano
Jul 13, 2005

Timdogg posted:

Also, Dell is trying to push their "iSCSI Optimized" switches, but I have limited experience with their switches and am hesitant to jump in now. Anyone recommend them?

The only thing their iSCSI optimization does is prioritize iSCSI traffic. If you only have iSCSI traffic on that network (like you should) then their optimization is completely irrelevant.

conntrack
Aug 8, 2003

by angerbeet

Vanilla posted:

NetApp question, guys.

What's the write penalty on RAID-DP?

I.e. RAID 1 = 2 writes, R5 = 4 writes, R6 = 6 writes, but I've no idea what it is with RAID-DP. I know there are two parity drives but no idea of the actual penalty.

I'm on lovely GPRS right now, so I couldn't find the NetApp papers I was looking for to give you.

http://blogs.netapp.com/extensible_netapp/2009/03/understanding-wafl-performance-how-raid-changes-the-performance-game.html

Try this URL and dig around the blogs; they have really good explanations about RAID-DP, WAFL, and the tricks they use to maximise write performance.
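
For the question itself, the textbook write-penalty math works out like this (standard numbers, not from those blogs; RAID-DP under WAFL largely sidesteps the penalty by coalescing writes into full stripes):

```python
# Classic back-end IOPS math: every host write costs N disk I/Os depending on RAID level.
WRITE_PENALTY = {"RAID 10": 2, "RAID 5": 4, "RAID 6": 6}

def backend_iops(host_iops, read_fraction, raid_level):
    reads = host_iops * read_fraction
    writes = host_iops * (1 - read_fraction)
    return reads + writes * WRITE_PENALTY[raid_level]

# 1,000 host IOPS at a 70/30 read/write mix:
for level in WRITE_PENALTY:
    print(f"{level}: {backend_iops(1000, 0.7, level):.0f} disk IOPS")
```

The argument in those blog posts is that WAFL's full-stripe writes mean RAID-DP behaves much better in practice than the raw RAID 6 figure suggests.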

Mierdaan
Sep 14, 2004

Pillbug

Timdogg posted:

Also, Dell is trying to push their "iSCSI Optimized" switches, but I have limited experience with their switches and am hesitant to jump in now. Anyone recommend them?

We got one of these pieces of poo poo - terrible decision. It's sitting unused on a shelf now, I'm trying to convince my boss to let me run it over with my car since we replaced it with a 3560.

Dreadite
Dec 31, 2004

College Slice
We're looking at the possibility of getting a SAN for our office of ~25 users. We wouldn't need more than 2TB of space, or anything particularly fast, but we'd like one for all the cool features that come with a SAN.

The problem seems to be price, as we're only looking to spend 10-12k. Is getting something to meet our modest needs under 12k completely unrealistic? What brands should we be looking at? I've gotten a quote for 15k for a NetApp 2020, but we'd really like to spend less than that so it doesn't cut into our budget for new servers.

H110Hawk
Dec 28, 2006

Dreadite posted:

We're looking at the possibility of getting a SAN for our office of ~25 users. We wouldn't need more than 2TB of space, or anything particularly fast, but we'd like one for all the cool features that come with a SAN.

The problem seems to be price, as we're only looking to spend 10-12k. Is getting something to meet our modest needs under 12k completely unrealistic? What brands should we be looking at? I've gotten a quote for 15k for a NetApp 2020, but we'd really like to spend less than that so it doesn't cut into our budget for new servers.

Do you know what % that is off list price? Push back on the price until they say no. Tell them your budget is $10k, then when/if they come back with a $12k quote, bite. Remember you have to renew that support contract annually or purchase a third party one.

http://www.peppm.org/Products/netapp/price.pdf
http://www.macmall.com/p/NetApp-NAS-%28Network-Attached-Storage%29/product~dpno~7780983~pdp.febgdcc

Mierdaan
Sep 14, 2004

Pillbug

Dreadite posted:

We're looking at the possibility of getting a SAN for our office of ~25 users. We wouldn't need more than 2TB of space, or anything particularly fast, but we'd like one for all the cool features that come with a SAN.

The problem seems to be price, as we're only looking to spend 10-12k. Is getting something to meet our modest needs under 12k completely unrealistic? What brands should we be looking at? I've gotten a quote for 15k for a NetApp 2020, but we'd really like to spend less than that so it doesn't cut into our budget for new servers.

I think Misogynist's question from above applies to you too; we picked up a FAS2020 as well when we were getting into low-end enterprise storage, but I think the VNX line is a better bang for your buck now.

Misogynist posted:

Have you looked at EMC's lower-end VNX line?

Dreadite
Dec 31, 2004

College Slice

H110Hawk posted:

Do you know what % that is off list price? Push back on the price until they say no. Tell them your budget is $10k, then when/if they come back with a $12k quote, bite. Remember you have to renew that support contract annually or purchase a third party one.

http://www.peppm.org/Products/netapp/price.pdf
http://www.macmall.com/p/NetApp-NAS-%28Network-Attached-Storage%29/product~dpno~7780983~pdp.febgdcc

This is good advice. Something I noticed was that this particular vendor quoted $11k for the actual hardware and $3,600 for what appears to be "racking and stacking" the server in our NOC. Needless to say, that's outrageous, but this is my first time buying a piece of hardware in this way. Is that to be expected with all vendors, or can I find someone who will just send me my hardware?

Edit: I'm actually waiting on a quote from another couple of vendors for some EMC equipment and an HP Lefthand setup, I'll probably report back with those prices too so I can get a feel if the prices are fair.

Maneki Neko
Oct 27, 2000

Dreadite posted:

This is good advice. Something I noticed was that this particular vendor quoted $11k for the actual hardware and $3,600 for what appears to be "racking and stacking" the server in our NOC. Needless to say, that's outrageous, but this is my first time buying a piece of hardware in this way. Is that to be expected with all vendors, or can I find someone who will just send me my hardware?

Edit: I'm actually waiting on a quote from another couple of vendors for some EMC equipment and an HP Lefthand setup, I'll probably report back with those prices too so I can get a feel if the prices are fair.

"Services" are pretty standard, and usually cover installation, initial setup and some sort of training/knowledge transfer, best practices, etc.

Some of that you can pick up along the way, but it's often helpful.

H110Hawk
Dec 28, 2006

Dreadite posted:

This is good advice. Something I noticed was that this particular vendor quoted $11k for the actual hardware and $3,600 for what appears to be "racking and stacking" the server in our NOC. Needless to say, that's outrageous, but this is my first time buying a piece of hardware in this way. Is that to be expected with all vendors, or can I find someone who will just send me my hardware?

A lot of people like help racking their hardware. Others require it for warranty coverage. Ask them exactly what that entails. If it doesn't involve a lot of actual setup stuff, such as aggregate planning, network configuration, etc., calmly explain to them that you are a very technical group and can trivially rack a server yourself with clear instructions. They have flexibility in that price because it's a cost internal to them. One of their technical lackeys, possibly the same guy in the sales meetings with you, is going to drive out and unbox all the stuff to rack it.

That being said, if you're buying one disk tray or a head/tray combo unit, $3,600 is a lot of money. Keep in mind that this is nothing personal, and that MSRP is so high on these boxes because there are companies which actually pay that much or are happy with 5%-10% off list as a killer deal.

Dreadite posted:

Edit: I'm actually waiting on a quote from another couple of vendors for some EMC equipment and an HP Lefthand setup, I'll probably report back with those prices too so I can get a feel if the prices are fair.

Be sure to share these numbers around in a circle. Prices have a way of suddenly dropping when competition is introduced. "Well, I have a quote for similar gear for $10,000." It works better if you actually have a quote with that number on it. If they ask to see it, remember that they all say "TOP SECRET" across the top, so point that out in their quote and say you can't share them out of respect for the vendors putting in all of this hard work. Explain the parts and services included, but not which vendor is giving it to you.

H110Hawk fucked around with this message at 16:01 on Apr 22, 2011

complex
Sep 16, 2003

Who knows things about Fibre Channel switches? We have Cisco MDS 9216i and 9506 now, but I'd like to investigate Brocade switches.

What kind of features are indicated by a "director"-level switch, like an MDS? Do I have to step into Brocade's DCX line, or could I get by with a Brocade 300? As far as I know we don't do any ISL trunking.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
The DCX is the competitor to the 9506 and 9509. You get redundant CPs, a big backplane, etc. Really it depends on your port needs and bandwidth requirements. Most people who need a lot of ports don't buy a bunch of small switches for a core-edge SAN; they buy director class for a dual SAN. It depends a lot on the port count and the availability you need.

I don't think the features and functionality are vastly different between the 300 and the director. It's more the port count, backplane, and some additional redundancy. If you need hundreds of ports it may be easier to go with the director class.
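
Rough illustration of why port count pushes you toward a director (made-up switch and ISL sizes, just to show the shape of the problem):

```python
import math

def edge_switches_needed(host_ports, switch_ports=40, isls_per_switch=8):
    """Core-edge sketch: every edge switch burns some ports on ISLs back to the core."""
    usable = switch_ports - isls_per_switch
    return math.ceil(host_ports / usable)

for hosts in (64, 200, 400):
    edges = edge_switches_needed(hosts)
    print(f"{hosts} host ports -> {edges} edge switches, plus {edges * 8} core ports for ISLs")
```

Past a couple hundred ports, the ISL overhead and switch count add up fast, which is where a chassis full of line cards starts to look simpler.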

blk
Dec 19, 2009
.
I'm thinking about replacing my nonprofit's file server. I was going to just upgrade it from Windows Server 2003 to 2008 R2, but it seems like I should probably do something about the hardware as well.

My situation now:

The file server is a PowerEdge 2950 with dual Xeon Woodcrest 5140s, 2 GB of RAM, and four 146 GB 10K SAS drives in one big RAID 5 volume on a PERC 5/i. The amount of RAM seems pitifully low, and the storage volume is near capacity. It lives on a 100-megabit network, but I'm hoping that will change sometime in the future.

I'm looking at buying through Dell again as I get a modest nonprofit incentive from them, but I'm not sure how I should play this. The server would only handle SMB file sharing, antivirus, and printing for 30-50 users, who would mostly work directly with office documents on the server.

There are four other servers in the organization. A newer domain controller and inventory application server (we're a food bank), an Exchange server, and a backup domain controller.

My questions for you:

Should I get one processor or two? Dual core or quad core?

Is it worth investing in hard drives as SSDs continue to evolve and come down in price?

Would I really need SAS drives for this kind of use, or could I get away with SATA?

The options I see myself having:

1) Max out RAM to 8 GB, see if I can shoehorn two more SAS drives in, similar to the four in now (not sure if the controller supports 6 drives, but there are connectors for them). Hope that we do not run out of space. ($500)

2) Buy a new SATA loaded server. Don't worry about space, do worry about speed. ($3000-4000 list)

3) Buy a new SAS loaded server. Don't worry about space or speed. ($4000-6000 list)

The added complication is that I have pressure to spend money now when we have it, rather than in the future. I have access to funds from a recent donation until the end of the fiscal year (June/July). The subsequent years look ugly as federal funding for hunger relief has essentially disappeared (tragic considering that we can feed three healthy meals for a dollar - more efficient than most social services, let alone any government agencies).

Hok
Apr 3, 2003

Cog in the Machine

blk posted:

I'm thinking about replacing my nonprofit's file server.
<snip>

There are two questions here. First, what's the limiting factor on your current system: is it just storage, or are you seeing memory/CPU issues as well?

The other is how much you've got to spend. If you have the budget for something better and it's going to go away in a few months if you don't spend it, then you really should spend it.

If it's just a storage issue, upgrade the memory and add a couple of extra drives. You've probably got 6 drive bays (that's the most common version out there), so add two more and give yourself some extra capacity. The PERC 5 can handle 8 drives; the limit will be the slots.

If the money just needs to be spent, then go for the new system. $5k will get you a fairly well-specced R710, which will blow the old system away.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
Been a loooooong time since I touched on iSCSI so I have some 101 questions which I know the answers to but things change so worth asking!

-- With iSCSI it's still OK to just use the standard server NICs, but you need an iSCSI TOE card if you want to boot from an iSCSI SAN?
-- All you need with iSCSI is the MS Initiator, which is free.
-- Generic IP switches are fine - or do they need certain 'things'?

Anyone know some free, accepted tools for copying LUNs off of DAS onto arrays? Robocopy still liked?

Any general rules for someone moving from just DAS onto an iSCSI array?

da sponge
May 24, 2004

..and you've eaten your pen. simply stunning.

Vanilla posted:

server NICs, but you need an iSCSI TOE card if you want to boot from an iSCSI SAN?
-- All you need with iSCSI is the MS Initiator, which is free.
-- Generic IP switches are fine - or do they need certain 'things'?

It's a good idea to make sure your switch can support jumbo frames AND flow control simultaneously.

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys
For the VMware side of things, but applicable generally:
http://media.netapp.com/documents/tr-3916.pdf
http://media.netapp.com/documents/tr-3808.pdf
http://www.vmware.com/files/pdf/vsphere_perf_exchange-storage-protocols.pdf

Vanilla posted:

-- With iSCSI it's still OK to just use the standard server NICs, but you need an iSCSI TOE card if you want to boot from an iSCSI SAN?
Yep.

Vanilla posted:

-- All you need with iSCSI is the MS Initiator, which is free.
Yep.

Vanilla posted:

-- Generic IP switches are fine - or do they need certain 'things'?
Flow control support is the biggie here. If you're using a single switch pair for a complete VMware cluster vertical, watch the switch's backplane bandwidth and latency. Think 3560X and up in Cisco land. Make sure you get a 1:1 mapping of initiators to physical NICs for iSCSI multi-pathing. Jumbo frames are a complete waste of time on modern NICs and switches, in addition to causing latency issues.

http://www.vmware.com/pdf/vsphere4/r40/vsp_40_iscsi_san_cfg.pdf

Vanilla posted:

Anyone know some free, accepted tools for copying LUNs off of DAS onto arrays? Robocopy still liked?

Any general rules for someone moving from just DAS onto an iSCSI array?
Yeah, you can just present additional datastores to the servers, use a local copy, then cut over the mount points. Pretty straightforward stuff.
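
On the robocopy part of the question: it's still a perfectly reasonable tool for getting file data off DAS. A minimal sketch (paths and log location are hypothetical; the flags are the usual mirror-with-ACLs set):

```python
import subprocess

# /MIR mirrors the tree, /COPYALL preserves ACLs/owner/timestamps,
# /R:1 /W:1 keep retries short, /LOG writes a report you can review before cutover.
cmd = [
    "robocopy", r"D:\shares\users", r"\\newfiler\users$",
    "/MIR", "/COPYALL", "/R:1", "/W:1", r"/LOG:C:\temp\migrate_users.log",
]
result = subprocess.run(cmd)

# Robocopy exit codes below 8 mean success (something copied, or nothing to do),
# so don't treat every nonzero return code as a failure.
if result.returncode >= 8:
    raise SystemExit(f"robocopy reported errors (exit code {result.returncode})")
```

Run it a few times ahead of the cutover so the final sync only has to move the deltas.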

Hok
Apr 3, 2003

Cog in the Machine

Vanilla posted:

Been a loooooong time since I touched on iSCSI so I have some 101 questions which I know the answers to but things change so worth asking!

-- With iSCSI it's still OK to just use the standard server NICs, but you need an iSCSI TOE card if you want to boot from an iSCSI SAN?
-- All you need with iSCSI is the MS Initiator, which is free.
-- Generic IP switches are fine - or do they need certain 'things'?

Anyone know some free, accepted tools for copying LUNs off of DAS onto arrays? Robocopy still liked?

Any general rules for someone moving from just DAS onto an iSCSI array?

You need an iSCSI HBA to boot from iSCSI; TOE on its own won't do it.

TOE isn't really needed these days with the amount of CPU grunt we have available, and I've seen it cause lots of issues, especially with jumbo frames in use.

And yup, the MS initiator is all that's needed on the host side.

As for the switches, they don't need to be anything special, but I'd avoid the really cheap ones.

Just make sure they support jumbo frames and flow control; the ability to VLAN off your iSCSI ports can also help.

da sponge
May 24, 2004

..and you've eaten your pen. simply stunning.

EnergizerFellow posted:

Jumbo frames are a complete waste of time on modern NICs and switches, in addition to causing latency issues.

I could see it being useless on the VM network side of things, but does that hold true for the usage patterns on the storage network as well?

This shows a sizable jump in throughput (although it doesn't compare it to latency)
http://www.vmware.com/files/pdf/vi3_performance_enhancements_wp.pdf

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Hok posted:

You need an iSCSI HBA to boot from iSCSI, TOE on it's own won't do it.
Don't forget that many commodity integrated NICs shipped with even low-end servers are perfectly capable of booting from iSCSI. For example, IBM's xSeries servers with UEFI can boot from iSCSI using the commodity Broadcom NetXtreme II NICs that they ship with.

Hok posted:

TOE isn't really needed these days with the amount of CPU grunt we have available, and I've seen it cause lots of issues, especially with Jumbo frames in use.
TCP Segmentation Offload (TSO) is the one particular piece of TOE that generally causes issues with jumbo frames. It's generally possible to disable TSO without affecting the rest of your TOE acceleration.

Vulture Culture fucked around with this message at 18:45 on Jun 2, 2011

EnergizerFellow
Oct 11, 2005

More drunk than a barrel of monkeys

Misogynist posted:

Don't forget that many commodity integrated NICs shipped with even low-end servers are perfectly capable of booting from iSCSI. For example, IBM's xSeries servers with UEFI can boot from iSCSI using the commodity Broadcom NetXtreme II NICs that they ship with.

TCP Segmentation Offload (TSO) is the one particular piece of TOE that generally causes issues with jumbo frames. It's generally possible to disable TSO without affecting the rest of your TOE acceleration.
See the links I put out earlier, which cover a modern build of VMware and hardware. The following goes into how frame size affects latency and throughput:

http://media.netapp.com/documents/tr-3808.pdf

Absolute storage throughput is very rarely an issue outside of backup media servers and streaming media concentrators. It's usually all about latency, latency, IOPS, IOPS, and IOPS. Did I mention IOPS?
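
Little's law is the quickest way to see why latency is the number that matters (a quick illustration of my own, not something from the linked papers):

```python
def iops_ceiling(queue_depth, latency_ms):
    """Little's law rearranged: max IOPS = outstanding I/Os / service time."""
    return queue_depth / (latency_ms / 1000.0)

# Same queue depth, wildly different ceilings -- which is why shaving latency
# usually buys you more than adding raw bandwidth.
for latency in (1, 5, 20):
    print(f"QD 32 at {latency} ms -> ~{iops_ceiling(32, latency):.0f} IOPS")
```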

Misogynist posted:

Don't forget that many commodity integrated NICs shipped with even low-end servers are perfectly capable of booting from iSCSI. For example, IBM's xSeries servers with UEFI can boot from iSCSI using the commodity Broadcom NetXtreme II NICs that they ship with.
This is par for the course these days. The only exception I can think of is the Cisco UCS B-series blade mezzanine cards, which have intentionally broken iSCSI boot code (so they can sell you licensed FCoE ports...).

EnergizerFellow fucked around with this message at 19:06 on Jun 2, 2011

Mausi
Apr 11, 2006

EnergizerFellow posted:

Absolute storage throughput is very rarely an issue outside of backup media servers and streaming media concentrators.
These days I find the CPU needed to process deduplication is the bottleneck, rather than raw storage throughput.

This is of course TSM, the slow learner of all solutions.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Mausi posted:

These days I find CPU to process the deduplication as the bottleneck, rather than raw storage throughput.
Our bottleneck is CPU for gzip compression on our replication traffic.

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

blk posted:

My questions for you:

Should I get one processor or two? Dual core or quad core?

Is it worth investing in hard drives as SSDs continue to evolve and come down in price?

Would I really need SAS drives for this kind of use, or could I get away with SATA?


If the money's only available till the end of the month, and nothing else requires replacement now, I'd spend the money on a new server. Nothing says you have to toss the old one; maybe get some more RAM and use it to set up a lab using free VMware/Xen/Hyper-V?

All of the rest depends on what kind of usage you're seeing now; use perfmon to graph a few days of CPU and memory usage, as well as HDD read/write rates and latencies.
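
If you'd rather script the baseline than click through perfmon, here's a rough sketch using the third-party psutil library (assumed installed; perfmon/logman is still the more native Windows route, and it's also where you'd grab the per-disk latency counters):

```python
import csv
import time

import psutil  # third-party; pip install psutil

with open("baseline.csv", "w", newline="") as out:
    writer = csv.writer(out)
    writer.writerow(["epoch", "cpu_pct", "mem_pct", "disk_read_bytes", "disk_write_bytes"])
    while True:
        disk = psutil.disk_io_counters()
        writer.writerow([
            int(time.time()),
            psutil.cpu_percent(interval=1),     # 1-second CPU sample
            psutil.virtual_memory().percent,
            disk.read_bytes,
            disk.write_bytes,
        ])
        out.flush()
        time.sleep(60)                          # one row a minute; let it run over a few busy days
```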

For a fileserver, I rarely see any benefit in having more than one CPU, and have sometimes even gone with the lower-end SMB servers that use a Xeon 3xxx or desktop processor. Whether this will work, or whether to get dual or quad core, depends heavily on what you learn from the perfmon reports. If you're seeing high CPU usage on the existing 4 cores, one modern quad-core Xeon 55xx/56xx will have enough of an instructions-per-clock increase to handle everything, no problem. If you're seeing minimal CPU usage the majority of the time (which I'd suspect), you might be able to get away with a Xeon 3xxx/Core CPU, either dual or quad, depending on costs. In the 5xxx series Xeons, you don't get a whole lot of price reduction on the dual cores, but you do get feature reductions which can put them below the Xeon 3xxx series.

SSDs' main benefit is high speed, which you're rarely if ever going to need on a general user file server, especially on a 100 Mb network. You'll spend several times the money for the same amount of storage with SSDs, and with no need for the speed, it will just be wasted money.

I'm partial to SAS, personally, but there may not be much reason to use it here, for many of the same reasons SSDs won't do much good. One thing to consider if you go with SATA (or nearline 7.2K SAS) is that you may need to rethink your RAID level on the higher-capacity drives. If you're using 1 or 2 TB 7.2K drives with RAID 5 and lose a drive, you've got quite an increased chance of a second drive failing before the rebuild finishes on the first. That means you lose the entire array. Not sure if Dell's current PERC cards include R6 support by default, or if it's a feature you have to license (you do on some HP cards). You'd also need to include the cost of additional drives for the higher RAID levels: 2 TB usable would quadruple your current storage, and requires three 1 TB drives with R5, but four 1 TB drives with R6.
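
The drive-count math behind that last sentence, as a quick sanity check (1 TB drives, ignoring formatted capacity and hot spares):

```python
import math

PARITY_DRIVES = {"RAID 5": 1, "RAID 6": 2}

def drives_for(usable_tb, drive_tb, level):
    """Minimum drives to hit a usable-capacity target at a given RAID level."""
    return math.ceil(usable_tb / drive_tb) + PARITY_DRIVES[level]

for level in PARITY_DRIVES:
    print(f"2 TB usable on 1 TB drives with {level}: {drives_for(2, 1, level)} drives")
```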

I threw together an R310 with a Xeon X3430, 8GB of RAM (2x4GB RDIMM), PERC H700 with 512MB write cache, 4x 1TB Near-line SAS drives, DVD, and Win 2008 R2 SP1 with 5 CALs. List price was around $4250 on the Dell SMB site. Being a non-profit, I'm sure you'd get significant discounts, so that seems to be well within your budget.

HTH

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I love this Dell MD3200 PowerVault. Currently I'm going over a few things about disaster recovery and looking to automate server shutdown when we're running on APC power. Basically, the way to turn it off is: you don't...

thanks dell

Hok
Apr 3, 2003

Cog in the Machine

Intraveinous posted:

Not sure if Dell's current PERC cards include R6 support by default, or if it's a feature you have to license (you do on some HP cards).

The PERC 5i didn't; the 6i and H700 do.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Thoughts on Compellent? We just had a meeting with a Dell rep. It all sounds pretty awesome, if it works.

Odds are we're going to go with a JBOD attached to a server because we're dumb, but it looks cool.

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

FISHMANPET posted:

Thoughts on Compellent? We just had a meeting with a Dell rep. It all sounds pretty awesome, if it works.

Odds are we're going to go with a JBOD attached to a server because we're dumb, but it looks cool.

We just took delivery about 6 weeks ago of two Series 40 controllers and 5 disk trays. So far it's been great.

Copilot support is fantastic; it's great getting someone who knows what's going on by the second ring. I'll be heading up to Eden Prairie sometime this summer to go through their training, but the majority of everything I've touched thus far has been very intuitive.

The data progression seems to work pretty well, though we're still in the process of getting everything up and into production use. We'll be getting a second array for our DR site in the next few months, and setting up array replication.

So far, I'm really happy with it.


carlcarlson
Jun 20, 2008
I'm not sure if this question should go here or in the virtualization or Exchange threads, but this seems like the best place.

I'm new to the SAN/virtual world and have a question about an Exchange server that I'm about to set up. Should I create a separate volume on the SAN just for Exchange, or should it go on the existing volume that my other virtual servers are already running on? Or does it even matter?
