|
ExileStrife posted:I was working on one of the storage teams at a very large company, though not working directly on the technology (mostly just workflow improvements). I never had to get my hands dirty with this stuff, but one of the side projects I would hear about occasionally was bringing in three DMX-4s to move 1.4 PB onto. Since each DMX-4 can handle 1 PB alone, what kind of factors would drive the decision to get three? Future capacity management seems like an odd answer to me, since the forecast was nowhere near that for the near future because other data centers were going up. Might this be for some kind of redundancy? Is it possible for one of those DMXs to completely fail? Is it seriously just one singular, monster storage array?
|
|
echo465 posted:Anyone using a Promise SAN? We're using a Promise VTrak M610i, which is a 16-bay SATA iSCSI box for around $5k. Populate it with 1TB hard drives (which really only give you about 930GB each) and you've got 13TB of RAID-6 storage for around $7,500.
|
|
Alowishus posted:Just out of curiosity, in an HP environment, how does one add their own 1TB LFF SATA drives? Is there a source for just the drive carriers? Because using HP's 1TB SATA drives @~$700/ea makes this quite an expensive solution.
|
|
Catch 22 posted:Cool, I will get on that Monday.
|
|
BonoMan posted:One of our companies has an FC SAN running StorNext or whatever, and it's something like $3,500 per computer. Do Dell EqualLogic arrays that use iSCSI require licenses per user?
|
|
Can someone tell me how practical it would be to use a Sun 7210 with 46 7,200 rpm disks as the backend for approximately 40 VMware ESX guests? On the one hand, I am very afraid of using 7,200 rpm disks here, but on the other hand there are 46 of them. I realize that without pulling real IOPS numbers this is relatively stupid, but I need to start somewhere and this seems like a good place.
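For scale, here is a hedged back-of-the-envelope sketch in Python. The per-spindle IOPS figure, the 70/30 read/write mix, and the write penalty are planning assumptions I'm plugging in, not numbers from the 7210 spec sheet, so treat the output as a sanity check rather than a sizing answer.

code:
# Rough, hedged IOPS estimate for 46 x 7,200 rpm spindles backing ~40 ESX guests.
# All constants below are illustrative assumptions, not measured values.
SPINDLES = 46
IOPS_PER_7200RPM_DISK = 80        # typical planning figure for a 7,200 rpm SATA drive
READ_FRACTION = 0.7               # assumed 70/30 read/write mix
RAID_WRITE_PENALTY = 2            # mirrored-ish assumption; parity RAID would be worse

raw_iops = SPINDLES * IOPS_PER_7200RPM_DISK

# Effective host-visible IOPS once the write penalty is applied to the write fraction.
effective_iops = raw_iops / (READ_FRACTION + (1 - READ_FRACTION) * RAID_WRITE_PENALTY)

guests = 40
print(f"raw backend IOPS:      {raw_iops}")
print(f"effective host IOPS:   {effective_iops:.0f}")
print(f"IOPS available per VM: {effective_iops / guests:.0f}")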
|
|
Misogynist posted:You're also not telling us what kind of workload you're trying to use here. I've got close to 40 VMs running off of 6 local 15K SAS disks in an IBM x3650, but mostly-idle development VMs have very different workloads than real production app servers.

- 6 DCs
- 1 500-user Exchange box (about half of those users are part-time employees who use it very, very lightly)
- 1 BES for roughly 30 BlackBerries
- 4 light-to-medium-duty database servers
- 4 light webservers
- 1 Citrix / TS license server
- 5 application / terminal servers

This is what we currently have running on an older IBM SAN with 16 15K FC drives and 8 10K FC drives. In addition to that workload, I want to add about 15 application servers whose workloads I haven't even started to measure; all of them are currently running on 2 to 6 10K disks.
|
|
We are currently looking at replacing our aging IBM SAN with something new. The top two on our list are a pair of NetApp 3020s with a 2050 for our offsite, or a pair of EMC CLARiiONs. I am also interested in looking at a dual-head Sun Unified Storage 7310 with a lower-end Sun box at our offsite. The numbers seem to be literally half for the Sun solution, so I feel like I have to be missing something. For usage, the primary purpose will be backend storage for about 100 VMs, some CIFS, and some iSCSI/fibre storage for a few database servers. Any thoughts from you guys?
|
|
optikalus posted:Edit: also, anyone remember that whitepaper about SATA drives reaching 2TB in a RAID setting being almost guaranteed to fail on rebuild, because the amount of data read during the rebuild exceeds the drive's own unrecoverable bit-error rating?
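The arithmetic behind that paper is easy to reproduce. A hedged sketch, assuming the commonly quoted consumer-SATA unrecoverable read error (URE) rate of 1 in 10^14 bits and a hypothetical six-drive RAID-5 of 2TB disks; the drive sizes and URE rate are assumptions, not figures from the paper itself.

code:
import math

# Probability of hitting at least one unrecoverable read error (URE) during a
# RAID-5 rebuild, assuming the often-quoted 1-in-1e14-bits consumer SATA rate.
URE_RATE = 1e-14                  # errors per bit read (assumption)
DRIVE_TB = 2.0                    # 2 TB drives (assumption)
SURVIVING_DRIVES = 5              # e.g. a 6-drive RAID-5 after one failure

bits_read = SURVIVING_DRIVES * DRIVE_TB * 1e12 * 8
p_failure = 1 - math.exp(-URE_RATE * bits_read)   # Poisson approximation

print(f"data read during rebuild:  {SURVIVING_DRIVES * DRIVE_TB:.0f} TB")
print(f"chance of at least one URE: {p_failure:.0%}")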
|
|
StabbinHobo posted:it's absolutely insane for anyone without two very niche aspects to their storage needs:
|
|
For that much storage, Sun is probably the cheapest outside of a roll-your-own type solution.
|
|
shablamoid posted:Does anyone have any experience with Open-E's DSS v6?
|
|
On our NetApp 3140, when running the command priv set diag; stats show lun; priv set admin, I am seeing a large percentage of partial writes. There does not seem to be a corresponding number of misaligned writes, as you can see in the output below:

lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.0:72%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.1:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.2:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.3:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.4:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.5:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.6:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_align_histo.7:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:read_partial_blocks:0%
lun:/vol/iscsivol0/lun0-W-OMCoT2A9Iw:write_partial_blocks:27%

Should I be worried about this? All of my VMDKs are aligned (I went through every single one to double-check), and there are no writes in buckets .1 through .7, so the evidence also points to there being no alignment issue. I submitted a case to have an engineer verify there is no problem, but I was wondering if anyone else has seen partial writes like this on a VMware iSCSI LUN. The partial writes typically hover around 30%, but they can vary; they were at 75% at one point today.
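In case it helps anyone else stare at these counters, here is a small hedged sketch that separates the two things being measured: the alignment histogram (which block offset writes land on) versus the partial-write counter (writes smaller than a 4K block). The parsing assumes output shaped exactly like the lines above, which may not hold on other ONTAP releases, and the sample text is trimmed for brevity.

code:
# Summarize "stats show lun" style counters: alignment buckets vs. partial writes.
# Assumes lines shaped like: lun:<path>:write_align_histo.N:NN%
stats_text = """\
lun:/vol/iscsivol0/lun0:write_align_histo.0:72%
lun:/vol/iscsivol0/lun0:write_align_histo.1:0%
lun:/vol/iscsivol0/lun0:write_partial_blocks:27%
"""

histo = {}
partial = None
for line in stats_text.splitlines():
    prefix, _, value = line.rpartition(":")
    counter = prefix.split(":")[-1]
    pct = int(value.rstrip("%"))
    if counter.startswith("write_align_histo."):
        histo[int(counter.split(".")[-1])] = pct
    elif counter == "write_partial_blocks":
        partial = pct

misaligned = sum(pct for bucket, pct in histo.items() if bucket != 0)
print(f"writes landing on non-zero buckets (misaligned): {misaligned}%")
print(f"writes smaller than a 4K block (partial):        {partial}%")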
|
|
1000101 posted:This is odd....
|
|
Cultural Imperial posted:Are you using Operations Manager? If so, can you create a graph for iSCSI latency? That should tell us how well your iSCSI SAN is performing. When you created your iSCSI LUNs, did you use snapdrive or did you create them manually? If you created them manually did you remember to align the VMFS formatting? Also, do you have the ESX host utilities kit installed?

To answer your questions: I could create the graph, but I'm pretty lazy. As far as LUN creation goes, it was all done with SnapDrive or by selecting the proper LUN type when creating it. And I do have the host utilities installed.
|
|
Nomex posted:sacrifice performance for space.
|
|
You need Logzillas and Readzillas to get any kind of performance out of the Sun gear.

yzgi posted:3) Misalignment of the LUNs. I've read, here and elsewhere, about aligning the LUNs and VMFS/VMDK partitions but I'm a bit confused about when it matters. I know we never did any alignment on the EMC.

You should align both the VMFS volume and the VMDKs. Otherwise, a single 4k write could require 2 reads and 3 writes instead of just one write. By aligning all of your data you will likely see a 10% to 50% performance improvement.

edit: about the alignment. Here is your unaligned data:
code:
code:

adorai fucked around with this message at 23:23 on Jan 13, 2010
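To make the unaligned case concrete, here is a hedged sketch. The 63-sector (32,256-byte) partition start is the classic MBR default and 4 KiB matches the array block size discussed above, but the I/O offsets are made-up examples, and the snippet only shows block straddling rather than reproducing the exact read/write counts.

code:
# Show how a guest 4 KiB write maps onto 4 KiB array blocks with and without
# the classic 63-sector (32,256-byte) MBR partition offset.
BLOCK = 4096

def blocks_touched(partition_offset, io_offset, io_size=BLOCK):
    """Return the array block numbers covered by one guest I/O."""
    start = partition_offset + io_offset
    end = start + io_size - 1
    return list(range(start // BLOCK, end // BLOCK + 1))

for label, offset in (("unaligned (63-sector MBR start)", 63 * 512),
                      ("aligned (offset rounded to 64 KiB)", 65536)):
    touched = blocks_touched(offset, io_offset=8192)
    print(f"{label}: a 4 KiB guest write touches array blocks {touched}")

# The unaligned case straddles two array blocks, forcing a read-modify-write on
# both; the aligned case maps cleanly onto a single block.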
|
Cowboy Mark posted:Personally I'd have got VMware High Availability and moved virtual machines around instead of this clustering lark, but I think it's a bit late for that now. Does anyone have any advice? Have Dell screwed us over and sold us something not fit for purpose?
|
|
If you don't need HA, you might want to take a look at the lower-end 7000 series devices from Sun. They are the only vendor that won't nickel-and-dime you on every little feature.
|
|
Nukelear v.2 posted:I have a setup similar to your description running on Dell MD3000i's. Dual controller, 15x450GB SAS 15k RPM drives. Was a bit over $13k.
|
|
If HA is not a requirement, a 7100 series box from Sun with 2TB raw can be had for around $10k.
|
|
EnergizerFellow posted:As mentioned, deduplication on ZFS still hasn't made it out of development builds, AFAIK.
|
|
EnergizerFellow posted:Yeah, NetApp's list prices are comical and not in a good way. You need to go through a VAR and openly threaten them with real quote numbers from EqualLogic, EMC, Sun, etc. The pricing will suddenly be a lot more competitive. Welcome to the game.
|
|
Insane Clown Pussy posted:Do the 8-port HP ProCurve 1810G-8 switches support simultaneous flow control and jumbo frames? HP have given me two different answers so far. If not, are there any other "small" switches that support both flow control and jumbo frames? I don't have a huge datacenter, so I'd rather not drop another few grand on a couple of switches if I can avoid it. The OCD in me would also make me cringe every time I saw a switch with only 25% of the ports in use by design.

To answer your question: I can select both options, but I can't actually say whether they work together or not.
|
|
EnergizerFellow posted:You ever get Compellent and/or 3PAR in? I'd like to hear some more experiences with them. They've sure been hitting up the trade shows lately.

That was pretty much the end of the Compellent discussion.
|
|
StabbinHobo posted:So if you have a "4+1" RAID-5 setup on a 3PAR array, for instance, what that really means is that there are four 256MB chunklets of data per one 256MB chunklet of parity, but those actual chunklets are going to be spread out all over the place and mixed in with all your other LUNs, effectively at random.
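If it helps to picture that layout, here is a toy sketch. The plain random placement, the disk count, and the example LUN size are simplifications for illustration, not a model of what the real 3PAR allocator actually does.

code:
import random

# Toy model of "4+1" RAID-5 built from 256 MB chunklets scattered across disks.
CHUNKLET_MB = 256
DISKS = 16
SET_WIDTH = 5                     # 4 data chunklets + 1 parity chunklet per set

random.seed(1)

def allocate_parity_set():
    """Pick 5 distinct disks and place one chunklet of the set on each."""
    disks = random.sample(range(DISKS), SET_WIDTH)
    return [{"disk": d, "role": "parity" if i == SET_WIDTH - 1 else "data"}
            for i, d in enumerate(disks)]

lun_gb = 8
sets_needed = (lun_gb * 1024) // (CHUNKLET_MB * (SET_WIDTH - 1))
layout = [allocate_parity_set() for _ in range(sets_needed)]

used_disks = {c["disk"] for s in layout for c in s}
print(f"{sets_needed} parity sets; chunklets spread over {len(used_disks)} of {DISKS} disks")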
|
|
StabbinHobo posted:wait, you lost me here. Is that because it's important to spend money on buzzphrase technology you don't understand in the enterprise, or is it because it's OK for archival storage to lose data?
|
|
TobyObi posted:To do either NFS or iSCSI, it's time to fork out for 10Gb ethernet infrastructure, otherwise it's just pissing into the wind.
|
|
TobyObi posted:We're already saturating 4Gb FC links, and we're migrating to 8Gb with this upgrade that is being worked on.
|
|
Cyberdud posted:What do you suggest for a small company that wants around 2-4 TB of storage that can survive one drive failure?
|
|
lilbean posted:Sooo, I'm in the enviable position of looking at SSD-based disk arrays right now. I'm looking at the Sun F5100, and it's pretty expensive but seems completely solid (and loving awesome). Anyone have experience with other SSD arrays? Should I even be thinking about filling a standard disk array with flash disks (probably X25-Es?)
|
|
lilbean posted:The budget for the storage improvement is 50K. We already have a pair of Sun 2530s with 12x600GB 15K SAS disks, and three HBAs to put in each host (active-passive).

edit: I see that the 2530s are apparently not using ZFS. If you don't need HA, you could try a 7200 series unit from Sun.
|
|
soj89 posted:I want to build 3 identical file servers and have 2 in the office and one doing off-site replication from the CEO's home using Crashplan.
|
|
soj89 posted:Bottom line: What's the best RAID type to put in place? What about the controller type? From what I've been reading, it seems like RAID 1+0 is preferred over RAID 5 in just about any case. Would an i7 quad with 8 GB be overkill for the main file server, especially since it's not particularly I/O-intensive?
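On the RAID 1+0 versus RAID 5 point, the usual reasoning is the random-write penalty. Here is a hedged sketch using the textbook penalty factors of 2 and 4; the disk count, per-disk IOPS, and read/write mix are assumptions, and a real controller with write-back cache will blur these numbers.

code:
# Compare effective random-write-capable IOPS for RAID 10 vs RAID 5 on the same
# spindles, using the textbook write penalties (2 for RAID 10, 4 for RAID 5).
DISKS = 8
IOPS_PER_DISK = 75            # planning figure for a 7,200 rpm SATA drive
READ_FRACTION = 0.6           # assumed mix for a general-purpose file server

def effective_iops(write_penalty):
    raw = DISKS * IOPS_PER_DISK
    return raw / (READ_FRACTION + (1 - READ_FRACTION) * write_penalty)

print(f"RAID 10 effective IOPS: {effective_iops(2):.0f}")
print(f"RAID 5  effective IOPS: {effective_iops(4):.0f}")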
|
|
Maneki Neko posted:What other options out there are actually worth considering?
|
|
GanjamonII posted:Oracle is reporting average request latency of 35-50+ ms for some of the database files, whereas our storage team reports that average request latency on the filer is something like 4 ms. So it seems there is something going on between Oracle and the filer. CPU usage on the servers is low, and there aren't any network issues we're aware of, though we're checking into it.
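One way the two numbers can both be true is queueing above the array: the filer reports per-I/O service time, while Oracle sees service time plus time spent waiting in host-side queues. A very crude, hedged sketch of that idea (a single serial queue, which a real HBA and ONTAP definitely are not):

code:
# Crude illustration: host-observed latency grows with outstanding I/Os even when
# the array's per-request service time stays constant.
service_ms = 4.0                         # what the filer reports per request
for outstanding in (1, 4, 8, 12):
    host_latency = outstanding * service_ms   # serial-queue approximation
    print(f"{outstanding:>2} outstanding I/Os -> ~{host_latency:.0f} ms seen by Oracle")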
|
|
I always see how many NICs people use for iSCSI and wonder: are they going overkill, or are we woefully underestimating our needs? We use 3 NICs on each host for iSCSI, 2 for the host iSCSI connections and 1 for the guest connections to our iSCSI network. We have 6 total NICs on each of our filers, set up as two 3-NIC trunks in an active/passive config. We have roughly 100-120 guests (depending on whether you count test) and don't come close to the max throughput of our NICs on either side.
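For a rough sanity check on whether NIC count is the limit, here is a hedged sketch of the arithmetic. The average per-guest throughput and the efficiency factor are made-up placeholders; substitute real esxtop or filer counters before drawing conclusions.

code:
# Sanity-check aggregate iSCSI bandwidth against a trunk of gigabit NICs.
GUESTS = 120
AVG_MBPS_PER_GUEST = 1.5          # MB/s, assumed steady-state average per VM
NICS_IN_TRUNK = 3
GIGE_MBPS = 1000 / 8 * 0.9        # ~112 MB/s per GigE link after protocol overhead

demand = GUESTS * AVG_MBPS_PER_GUEST
capacity = NICS_IN_TRUNK * GIGE_MBPS
print(f"estimated demand: {demand:.0f} MB/s")
print(f"trunk capacity:   {capacity:.0f} MB/s")
print(f"utilization:      {demand / capacity:.0%}")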
|
|
Cultural Imperial posted:It's worth your while to install Cacti or Nagios somewhere to monitor the switches your filers are plugged into. Are we talking about NetApp filers here? If so you can also check autosupport for network throughput graphs.
|
|
Syano posted:I know this has to be an elementary question, but I just wanted to throw it out there anyway to confirm with first-hand experience. We have our new MD3200i humming along just nicely, and we have the snapshot feature. Before I start playing around, though, I assume that snapshotting a LUN that supports transactional workloads like SQL or Exchange is probably a bad idea, right?
|
|
|
Misogynist posted:Keep in mind that there may be a substantial performance penalty associated with the snapshot depending on how intensive, and how latency-sensitive, your database workload is. Generally, snapshots are intended for fast testing/rollback or for hot backup and should be deleted quickly; don't rely on the snapshots themselves sticking around as part of your backup strategy. The performance penalty scales with the size of the snapshot delta.
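To illustrate where that penalty comes from on a copy-on-write style snapshot, here is a toy sketch. The chunk size, the traffic numbers, and the "first-touch fraction" are assumptions for illustration, not MD3200i behavior pulled from documentation.

code:
# Copy-on-write snapshot overhead model: the first host write to any chunk after
# a snapshot costs extra backend I/O (read the original block, copy it to the
# snapshot repository, then apply the new write).
COW_CHUNK_KB = 64                          # repository copy granularity (assumed)

def backend_ios(host_writes, first_touch_fraction):
    """first_touch_fraction: share of writes landing on not-yet-copied chunks."""
    cow_writes = host_writes * first_touch_fraction
    return host_writes + 2 * cow_writes    # each COW adds a read and a repo write

for fraction in (0.0, 0.3, 0.9):
    print(f"first-touch fraction {fraction:.0%}: "
          f"{backend_ios(1000, fraction):.0f} backend I/Os per 1000 host writes")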
|