|
Rhymenoserous posted:Two units is more than one. Sky is the limit. It's an EXPONENTIAL increase. If they keep doubling at this rate every spare corner of the universe will be stuffed with Partner's Data gear within a few decades.
|
# ? Oct 5, 2012 00:48 |
|
I dunno guys, it's shaped like it conforms electrically to the Storage Bridge Bay specification. It's probably identical to a ripoff big-box vendor array! IBM and Dell are just charging you for the sticker with their name on the front.
|
# ? Oct 5, 2012 05:13 |
|
Well, I built a Supermicro SC847-based system based on some pointers I got here. Stood it up today and I'm quite impressed. Hardware is 2x E5620, 96 GB, a 10GbE Intel X520-DA2 NIC, 32x 3TB 7200 RPM hard drives, and a ZeusRAM for ZIL. The drives are in 4x 8-disk RAID-Z2s, so 60.9 TB usable in 4U. Running NAS4Free 9.0.0.1.

Iometer results from a server using the MS software initiator on a single path:
Sequential read: 505 MB/sec
Sequential write: 290 MB/sec
4k random read: 25k IOPS, 104 MB/sec, 1.2 ms avg latency
4k random write: ~28k IOPS, 116 MB/sec, 1.1 ms avg latency

Around 20% CPU util during these tests. Still have a bunch more benchmarking to do before I decide on a final config and roll it into production: LACP, jumbo frames, different drive configs, and a hardware initiator are still on the table. I am also going to do a round without the ZIL to quantify the benefits of a $2500 8GB SSD. I am guessing my sequential write speeds are limited by the pool setup, and I want to get that number up since it is a backup server.

I still can't get over those random IO numbers. It makes me want to put a database on it. Or my entire Dev VMware cluster. The ZeusRAM is the coolest toy I've had to play with in a while. Our $400k prod SAN can't touch those numbers, but it's also rock solid and does online updates. It's impressive what you can put together for <$15k, but I'd never use it for primary storage. KS fucked around with this message at 05:36 on Oct 5, 2012 |
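The usable-capacity arithmetic in KS's pool layout can be sanity-checked with a quick sketch. This is a toy calculation only: it counts raw data-disk capacity for striped RAID-Z2 vdevs and ignores ZFS metadata, slop space, and reservation overhead, which is roughly where the gap down to the reported 60.9 TB comes from.

```python
def raidz_usable_tb(vdevs, disks_per_vdev, parity, drive_tb):
    """Raw data capacity (in vendor TB) of a pool striped across RAID-Z vdevs,
    ignoring ZFS metadata and slop-space overhead."""
    data_disks = disks_per_vdev - parity  # RAID-Z2 spends 2 disks/vdev on parity
    return vdevs * data_disks * drive_tb

raw_tb = raidz_usable_tb(vdevs=4, disks_per_vdev=8, parity=2, drive_tb=3)
raw_tib = raw_tb * 1e12 / 2**40  # drives are sold in TB, `zfs list` reports TiB

print(raw_tb)             # 72
print(round(raw_tib, 1))  # 65.5
```

So 4 vdevs x 6 data disks x 3 TB gives 72 TB raw, or about 65.5 TiB, before ZFS takes its cut.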
# ? Oct 5, 2012 05:24 |
|
Does anyone use ZFS with Linux? Or is that asking for trouble?
|
# ? Oct 5, 2012 17:29 |
|
Xenomorph posted:Does anyone use ZFS with Linux? Or is that asking for trouble? Let me tell you about how our ZFS on Linux storage pool got totally hosed up and we had to download like 5TB of data from our offsite backup at Amazon S3 over DSL. It is not production ready.
|
# ? Oct 5, 2012 17:50 |
|
We need to order 20 of these plastic retaining clips for our CX-4 to affix the front covers to shelves with no DAE. Our EMC rep is telling us that they cannot sell us this part -- is there any place to get these online? I've been striking out thus far.
|
# ? Oct 5, 2012 17:56 |
|
KS posted:Well, I built a Supermicro SC847-based system based on some pointers I got here. Stood it up today and I'm quite impressed. Wow, that's pretty nice. I'm looking at doing something similar for some second-tier storage here. Can I ask what motherboard and controllers you got? And which version of the chassis? There are so many options...
|
# ? Oct 5, 2012 17:56 |
|
Xenomorph posted:Does anyone use ZFS with Linux? Or is that asking for trouble?
|
# ? Oct 5, 2012 20:23 |
|
Misogynist posted:OpenIndiana and FreeBSD are similar enough to administer that it's really not worth the hassle. If you want a Linux userland with a kernel that better supports ZFS, look into Illumian (formerly Nexenta Core). FWIW, there's been some OpenIndiana drama recently. The project lead quit: http://article.gmane.org/gmane.os.openindiana.devel/1578 There are other server-focused Illumos derivatives that are more actively developed.
|
# ? Oct 5, 2012 21:14 |
|
Xenomorph posted:Does anyone use ZFS with Linux? Or is that asking for trouble? Check out btrfs as an alternative.
|
# ? Oct 5, 2012 22:28 |
|
hackedaccount posted:Check out btrfs as an alternative.
|
# ? Oct 5, 2012 22:43 |
|
I don't understand how Broadcom is still in business... it seems that any time I run into an iSCSI issue, it involves some sort of Broadcom networking equipment.
|
# ? Oct 5, 2012 23:18 |
|
Protokoll posted:We need to order 20 of these plastic retaining clips for our CX-4 to affix the front covers to shelves with no DAE. Our EMC rep is telling us that they cannot sell us this part -- is there any place to get these online? I've been striking out thus far. e: or Gaffer's tape.
|
# ? Oct 6, 2012 00:03 |
|
Protokoll posted:We need to order 20 of these plastic retaining clips for our CX-4 to affix the front covers to shelves with no DAE. Our EMC rep is telling us that they cannot sell us this part -- is there any place to get these online? I've been striking out thus far. Are these an EMC part? If so the rep is right, he can't sell them, *but* he can try other things to get them to you. First, see if you can find the EMC part number, either on the clip itself, in a manual, or somewhere else. Then: 1) get them from customer services directly or via the rep, or 2) get the part from the factory. It doesn't take much for a grunt to grab a handful of these and put them in a box. Speak to your TC, not your rep.
|
# ? Oct 6, 2012 09:23 |
|
Wicaeed posted:I don't understand how Broadcom is still in business...it seems that any time I run into an iSCSI issue it involves some sort of Broadcom networking equipment They're one of the few that you can buy PHYs from if you want to roll your own MAC in FPGA/ASIC for building network hardware.
|
# ? Oct 6, 2012 10:54 |
|
PCjr sidecar posted:FWIW, there's been some OpenIndiana drama recently. The project lead quit:
|
# ? Oct 6, 2012 14:38 |
|
We have two IBM SVC / V7000s and I have to say they're pretty solid. We're able to do some awesome mirroring from our old DS4800s and everything has been really smooth.
|
# ? Oct 7, 2012 06:49 |
|
Misogynist posted:I've been suggesting that people avoid OpenIndiana like the plague since the project was announced (we picked FreeBSD 9 for our standalone storage servers once OpenSolaris was officially discontinued) so this news is actually kind of vindicating.
|
# ? Oct 8, 2012 01:25 |
|
adorai posted:depending on your needs, freebsd could be a terrible choice. CIFS performance with Solaris's in-kernel CIFS server murders Samba on FreeBSD. For block-level IO via iSCSI or file-level IO via NFS, FreeBSD and Solaris are equivalent in performance.
|
# ? Oct 8, 2012 02:38 |
|
Misogynist posted:I actually haven't seen bad performance at all on our FreeBSD 9 kit, and we're running over 10GbE. I'll see if I can replicate what you guys are all talking about.
|
# ? Oct 8, 2012 09:51 |
|
Vanilla posted:Are these an EMC part? He's right. Your TC probably has a small box of these in his car (Mine did).
|
# ? Oct 8, 2012 20:00 |
|
Is hardware like the Netgear ReadyDATA too far down the spectrum to discuss here? Looking for options, as we've blown through 36TB of space on two EqualLogics over two years. Refurbs with Dell warranties are on the table for my search.
|
# ? Oct 10, 2012 07:17 |
|
Generally, unless it's got redundant controllers you'll be better off asking in the NAS thread.
|
# ? Oct 10, 2012 07:42 |
|
incoherent posted:Is hardware like the netgear ReadyDATA too far low on the spectrum to discuss here? We're using some ReadyNAS boxes in our environment cause they're cheap and they have gobs of storage. And that's the problem. They're cheap and have gobs of storage. So management loves them and tends to gloss over all the weird issues and problems they're always having. Also Netgear support blows and they like to tell you to do a factory reset when you have issues and then upgrade the firmware. Except that doing a factory reset nukes your data.
|
# ? Oct 10, 2012 13:55 |
|
So this seems Enterprisey enough to ask here. We've got some experiment systems that write their data to a local disk. I'd like to continuously sync this data to a central server, but still keep the data on the local disk, just in case. Some of these systems are Windows and some are Unix. So I'd like some form of reverse caching like Windows does with file servers, but with the ability to directly access the files on both ends, and it has to work with Unix as well. Any ideas?
|
# ? Oct 10, 2012 22:58 |
|
FISHMANPET posted:So this seems Enterprisey enough to ask here. I haven't used it in a long time, and depending on the volume of data it may not be feasible, but take a look at Unison?
|
# ? Oct 10, 2012 23:02 |
|
Anyone have Gluster experience? The department I work in has been asked to take over management of around a dozen random whitebox storage servers. Due to the people involved, I don't want to flat-out tell them "loving hell no," so I'd like to have a Plan B where I can say "sure, we'll take it over, but we're not handling this trainwreck of servers individually; we're gonna run something with a central point of management on top."
|
# ? Oct 10, 2012 23:17 |
|
Oh man, I don't know whether to post in the poo poo That Pisses You Off thread or this thread first. Left my employer about 3 weeks ago. My IT director was clueless and lazy, and the people above him were way too interested in being bad at business. They started having trouble with the VNX 5300 unit I have mentioned several times in this thread. At this point they still don't know what went wrong. Four people in the IT department had access to the SAN password, which was left as part of my documentation. None of them knew how to find it, even after several emails informing them of where it was and me mentioning it several times before leaving. They couldn't log in to the SAN to fix the problem because they didn't know the password. So what did they do? They unplugged it. At like 8:30 in the morning. Of course, bad things happened. They're still down 12 hours later.
|
# ? Oct 11, 2012 01:45 |
|
just enjoy the schadenfreude ok
|
# ? Oct 11, 2012 04:51 |
|
FISHMANPET posted:So this seems Enterprisey enough to ask here. It'd help if you were a bit more specific about your requirements.

What do you want saved off? Just experiment data from one local drive? How often? Continuous is pretty nebulous. If it's a bunch of small files, you may be able to do that. Large files (as experiment data tends to be)? Much harder.

What's your storage backend? NAS? SAN? Is it CIFS? NFS? Is rsync good enough? How much data is written? Would you notice a performance hit over gigabit to a mapped drive?

Does the client care if it's actually a local drive (i.e. is it doing direct disk access), or only that it "looks" like a local drive (has a drive letter in Windows, is somewhere in the filesystem in UNIX)? If it doesn't care: have you considered Gluster on UNIX exported to the Windows clients as a mapped drive? Would the client notice the difference? Can you export iSCSI LUNs from a system running ZFS or a "real" SAN that can do snapshots, and just schedule those with a cronjob?

You've got a million ways to go here, and a little help would help narrow it down a lot.
|
# ? Oct 11, 2012 06:30 |
|
Internet Explorer posted:Oh man, I don't know whether to post in the poo poo That Pisses You Off thread or this thread first. There may or may not be a way for a local CE to assist in recovering said password...
|
# ? Oct 11, 2012 07:34 |
|
Amandyke posted:There may or may not be a way for a local CE to assist in recovering said password... No. Unplugging it is easier / faster.
|
# ? Oct 11, 2012 10:01 |
|
paperchaseguy posted:just enjoy the schadenfreude ok
|
# ? Oct 11, 2012 10:51 |
|
Internet Explorer posted:No. Unplugging it is easier / faster. Which I'm sure they are now finding out! Who the gently caress unplugs any piece of enterprise hardware, let alone an array?! That array has been a nightmare for you from the start. Was this just a mess-up, or is it usually like that where you used to work?
|
# ? Oct 11, 2012 11:02 |
|
Vanilla posted:Which i'm sure they are now finding out! Who the gently caress unplugs any piece of enterprise hardware let alone an array?! That array has been a nightmare for you from the start. It was always like that, but I had total control over everything and I never would have allowed them to unplug it. Then I would have been the bad guy because after I got it fixed the response would have been "we could have been up earlier if you had rebooted it." The director of IT was just completely checked out and had no concept of anything more complex than a physical server with attached storage. And we had been doing virtualization with shared storage for about 6 years. And half the shop uses Citrix, so you'd think there'd be more understanding from that point of view.
|
# ? Oct 11, 2012 12:41 |
|
Hooray, just got some funding for some more storage! Not too terribly much, 20k, but still happy nonetheless. Now time for a NetApp vs. Dell vs. EMC shootout.
Dilbert As FUCK fucked around with this message at 16:24 on Oct 11, 2012 |
# ? Oct 11, 2012 16:17 |
|
evol262 posted:It'd help if you were a bit more specific about your requirements. To be honest I don't really know what the data is yet, as I haven't actually been there for an experiment. We've got a 9 node LeftHand SAN with about 30 TB of total space, and a VMware cluster, so I can set up pretty much whatever I want. I had a hard time explaining the concept of network storage to the "IT Manager" so I'm not entirely sure what he's envisioning yet. My suggestion, which is very simple, is to just send everything to a file server: instead of the system writing data to c:\experiment, have it write to \\server\experiment. And then Mr. "IT Manager" got in a tizzy about how all the software would have to be rewritten for each experiment, blah blah blah. It was pretty bad. But eventually I got it through his head, and we decided that we'd like a copy to be local in case anything bad happened, so we wouldn't lose data. Our experiments cost hundreds of thousands of dollars to run sometimes, so we like as many safeguards as we can get. And when I say continuous, I really mean continuous: as a file is written to a local directory, that same data is streamed to a networked location.
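A minimal one-way mirror along the lines FISHMANPET describes could be sketched like this. This is only an illustration, not a production answer (tools like Unison, lsyncd, or robocopy in a loop are the grown-up options); the paths in the usage comment are hypothetical, and real continuous streaming would want filesystem change notifications rather than polling.

```python
import os
import shutil

def mirror_tree(src, dst):
    """Copy any new or newer files from src into dst, leaving src intact.

    One-way mirror: the local copy stays authoritative and the network copy
    catches up. Deletions are deliberately NOT propagated, since the local
    disk is the "just in case" safeguard.
    """
    copied = []
    for root, _dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            # copy if missing on the far side, or if the local file is newer
            if not os.path.exists(d) or os.path.getmtime(s) > os.path.getmtime(d):
                shutil.copy2(s, d)  # copy2 preserves mtime for the next compare
                copied.append(d)
    return copied

# usage sketch (hypothetical paths): local scratch disk -> mounted network share,
# re-run every few seconds from a loop or scheduler:
#   while True:
#       mirror_tree(r"C:\experiment", r"\\server\experiment")
#       time.sleep(5)
```

Polling like this is "continuous enough" for small runs, but for files that are held open and appended to for hours you'd want something that streams partial writes, which is exactly why the requirements questions above matter.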
|
# ? Oct 11, 2012 16:59 |
|
Is anyone using Replay Manager with Exchange 2010? We're moving from our old PE2950 Exchange 2007 install to a virtualized Exchange 2010 install, and I'm trying to figure out if we want to use RDMs to give us the option of using Replay Manager, or stick with VMFS VMDKs and our current (Microsoft DPM) backup methods.
|
# ? Oct 11, 2012 18:33 |
|
Corvettefisher posted:Hooray just got some funding for some more storage! Not too terribly much, 20k, but still happy nonetheless. Now time to versus with Netapp, Dell, and EMC Not to be overly spammy, but if you can use refurb equipment, my company deals in all 3 of those product lines. I'd be happy to quote out some gear for you if you're interested.
|
# ? Oct 12, 2012 05:47 |
|
FISHMANPET posted:To be honest I don't really know what the data is yet, as I haven't actually been there for an experiment. We've got a 9 node LeftHand SAN with about 30 TB of total space, and a VMware cluster, so I can setup pretty much whatever I want. You could always mount the share to a drive letter if that would make him feel better.
|
# ? Oct 12, 2012 06:22 |