|
Hi Guys, I'm CommieGIR and I'm a storage addict:
|
# ¿ Dec 24, 2015 03:54 |
|
|
fletcher posted:Also, can we get a closeup of that super rad cellphone looking thing in the back corner? From the corner of ages: There is a Slimnote laptop hidden in the box in the back. All original packaging. 1MB of RAM. CommieGIR fucked around with this message at 15:32 on Dec 24, 2015
# ¿ Dec 24, 2015 15:24 |
|
Jago posted:What's your wattage at the wall? Significant to be sure, but really only works out to ~$20 a month. fletcher posted:Very cool! Thanks for sharing. Your storage addiction is quite impressive as well! I also have a DEC AlphaServer, an ancient Compaq ProLiant quad Pentium II, and an HP-UX pizza box.
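For the curious, the "~$20 a month" figure is easy to sanity-check. A quick sketch, assuming roughly 230 W at the wall and a $0.12/kWh rate (both numbers are illustrative, not measured):

```python
# Rough monthly electricity cost for an always-on homelab.
# The wattage and rate below are assumptions for illustration.
def monthly_cost(watts, dollars_per_kwh=0.12):
    kwh_per_month = watts / 1000 * 24 * 30  # kWh over a 30-day month
    return kwh_per_month * dollars_per_kwh

print(round(monthly_cost(230), 2))  # 230 W at $0.12/kWh -> 19.87
```

So a couple hundred watts of old servers really does land right around $20/month at typical US rates.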
|
# ¿ Dec 24, 2015 21:21 |
|
BobHoward posted:I think you should see someone about your electronics hoarding problem I actually just threw out a bunch of machines, but yes I'm a hoarder. Mostly server stuff.
|
# ¿ Dec 24, 2015 23:21 |
|
sharkytm posted:It reminds me of my dorm room in college. I heated it with nothing but computers, mostly discarded Sun and DEC servers/workstations. I have a Sun SPARCstation 5 that is just around for nostalgia purposes. The DEC AlphaServer works, but it's not worth running.
|
# ¿ Dec 25, 2015 01:15 |
|
BobHoward posted:You seem to have a few 5.25" half-height and full-height drives, this shames me as the best I can do is a 3.5" half-height monolith (it is all black and very squared off) that was the last generation of HDD sold by Micropolis before they exited the HDD market. I have 2-3 old Ultra320 SCSI controllers, and at least 5-10 drives. Oh, and this. And an older SATA Supermicro server.
|
# ¿ Dec 25, 2015 15:28 |
|
Don Lapre posted:grab a set of allen keys and just take the thing apart ...most drives use Torx screws?
|
# ¿ Dec 26, 2015 06:17 |
|
phosdex posted:Can also just step up to micro ATX which has a lot more choices for boards and you don't need to use SO-DIMMS. Fractal Design has their Node 804 for mATX which I use and am very happy with. You could, or you could go with an ITX setup like I have, with a quad-core AMD A6, a full PCIe slot, and a mini-PCIe slot for an SSD. Perfect for a small NAS, and you can get the whole setup for a little less than $200 plus power supply.
|
# ¿ Jan 7, 2016 19:28 |
|
phosdex posted:yeah but its amd It's a $200 NAS with a PCIe slot free for an actual controller. Quit bitching
|
# ¿ Jan 7, 2016 23:20 |
|
phosdex posted:that wasn't bitching you dumb boob It was sarcasm, calm down.
|
# ¿ Jan 8, 2016 16:33 |
|
Jesse Iceberg posted:From what I gathered, he had three separate hardware RAID5 volumes on three different controllers that he then striped across as one logical volume in software? That seems ill-advised. Here's the rub: this is an SSD RAID at that. He does not need nested RAID on SSDs; the performance boost will not be worth it, and neither is the added risk of losing the array. Nested RAID only really makes sense if you need intense performance from standard SAS or SCSI disks, not from SSDs where you've already mitigated those issues, and even then his little home-built box will likely not even touch the performance levels of a purpose-built SAN device. And the SAN device would likely be somewhat cheaper.
|
# ¿ Jan 8, 2016 16:47 |
|
redeyes posted:Yeah I watched that Linus video and it made my teeth hurt. The guy has incredibly expensive hardware thrown at him like the wind and he does that. I was frankly waiting to the end when he lost all the data and was going to laugh maniacally. That RAID abomination is one of the stupidest builds I can even imagine. The only reason I might feel a little sympathy for him is the fact that his controllers only support so many SATA connections per controller. I get that, then you gotta kinda play. But in that case, it makes more sense to RAID5 per controller, then RAID5 all three controllers together at the OS level to tie them into a single array. With that nested scenario, you mitigate a controller failure as well as a disk failure at the controller level. But it's still going to cause performance headaches. He keeps ranting and raving about the performance, but in reality he would've likely gotten that performance at a single RAID level instead of nested. But, in reality, the proper thing to do is get a controller that actually supports handling all those devices on a single controller. A little more pricey, but it mitigates headaches down the road. Oh, and make proper backups. That too. CommieGIR fucked around with this message at 17:24 on Jan 8, 2016
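To put numbers on the tradeoff described above, here's a sketch comparing striping the three RAID5 sets (what the video did) against putting parity across them (the suggestion here). The eight 1TB drives per controller are assumed purely for illustration, not taken from the video:

```python
# Usable capacity of two nested layouts over three hardware RAID5 sets.
# Drive count and size are illustrative assumptions.
drives_per_controller, controllers, tb_per_drive = 8, 3, 1

# RAID0 across the three RAID5 sets: one disk per set is survivable,
# but losing any whole set (e.g. a controller) loses everything.
raid50_usable = controllers * (drives_per_controller - 1) * tb_per_drive

# RAID5 across the three RAID5 sets: one set's worth of capacity spent
# on parity, but an entire controller can now die without data loss.
raid55_usable = (controllers - 1) * (drives_per_controller - 1) * tb_per_drive

print(raid50_usable, raid55_usable)  # 21 14
```

The point: the OS-level parity layer costs a third of the capacity, which is exactly what buys you the controller-failure tolerance the stripe doesn't have.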
# ¿ Jan 8, 2016 17:21 |
|
mayodreams posted:I have zero sympathy because RAID5 in a workplace is a really bad idea, and it really should be RAID6 or RAID10. What it comes down to is a lack of understanding of what the actual risks are, and where your points of failure lie. Given 3 RAID cards, the solution is not to stripe them in RAID0, especially for consumer-grade SSDs. RAID10 and RAID60 are my go-to choices. ZFS is my quick, easy, and reliable favorite.
|
# ¿ Jan 8, 2016 19:22 |
|
Don Lapre posted:Samsung has data migration software for free UBCD also comes with free cloning software installed and is USB bootable.
|
# ¿ Jan 10, 2016 17:52 |
|
eames posted:
I'm really, really saddened. I threw Corral on a Dell R710 over the weekend, and I'm actually pretty impressed despite some install bugs I encountered. I mean, it's a NAS with a hypervisor; I'm running Windows and Red Hat VMs on it, and Plex as a Docker container. I don't know if I'll switch back to the 9.x dev fork...
|
# ¿ Apr 13, 2017 17:00 |
|
Methylethylaldehyde posted:Intel, HP and Cisco all have transceiver brand lock-in, you have to use their lovely branded ones or it flat out won't work, which made it loving hellish getting my HP switch to talk to my Intel NIC using a copper cable. I think Cisco still has the IOS command that lets you use the non-branded ones, but Intel and HP's response was more or less 'eat a dick' when I asked them how to disable it. HP has only gotten worse too, they've been locking out firmware updates if your device is outside of its warranty period. So, my new setup:

Dell PowerEdge R710
2 x Xeon E5630 quad-cores @ 2.53 GHz
64 GB ECC Registered DDR3
2 x 100 GB Dell Enterprise SATA SSDs in RAID-1 (OS)
6 x 146 GB 10k SAS in RAID-6 (VM storage)
8 x 500 GB SAS in RAID-50 (data storage) in an MD1000 direct-attached array
4 TB Buffalo USB 3.0 external drive for backups three times a week
FreeNAS Corral 10.04
Plex in a Docker container running on a boot2docker VM
Windows Server 2016 in a VM

Runs about 220 watts at full load. Most of this was migrated off a Dell 2950 that was sucking down almost one and a half times the power with half the performance. CommieGIR fucked around with this message at 17:47 on Apr 13, 2017
# ¿ Apr 13, 2017 17:25 |
|
TTerrible posted:Where is the person that shat on me for saying I didn't like the way 10 was going? Glad to see IX listening and rethinking this. It was more that the Lead Developer left and nobody could fill the vacuum. That, and the JavaScript framework that does the GUI is not being developed anymore. I can't wait to see what they replace it with, but I'm still happy with it overall.
|
# ¿ Apr 13, 2017 18:17 |
|
DrDork posted:Joke's still on you, then: other than the UI (presumably), the vast majority of 10's feature set will be getting back-ported to 9.x. I think they made mention of intending to put Docker support into 9.10.3 or somesuch, though I didn't catch if that will also be tied with dropping legacy jail support, or will be in addition to jails. They say currently bhyve will be included in the May release, as well as Docker container support via bhyve, and a new UI. So yeah, a lot of stuff from Corral is getting back-ported, but I know they are not reusing the MontageJS UI due to lack of development.
|
# ¿ Apr 13, 2017 22:37 |
|
Matt Zerella posted:You can pay for unraid and get pretty great features, smooth as hell upgrades, and non emulated docker FreeNAS is not 'emulating' Docker, it's running it in a boot2docker VM. It's a full Docker install on a lightweight Linux distro. http://boot2docker.io/
|
# ¿ Apr 14, 2017 01:32 |
|
TTerrible posted:I'm appreciating this post a lot. Man, you are such a buzzkill. Some of us like it.
|
# ¿ Apr 14, 2017 02:40 |
|
TTerrible posted:I can understand liking it. I love the idea of the UI not locking up on operations and stuff. I quoted that post because I like the idea of someone in iX just slipping the (NOT FOR PRODUCTION) on the link while a heated debate was going on about what to do. I haven't had my UI lock up on operations yet. The only major bug I've dealt with was that the installer just doesn't work. The only way I got it to install was to install FreeNAS 9.0, then change the update path and let it self-update to Corral. Otherwise, it just wouldn't install.
|
# ¿ Apr 14, 2017 02:44 |
|
TTerrible posted:Sorry I meant the middleware thing where you could set something going on the admin pages and then navigate away and the operation would continue. That's what I meant. I can start an operation (or an operation can be kicked off via cron) and it'll show up on the list as in progress even if I leave the page.
|
# ¿ Apr 14, 2017 02:57 |
|
TTerrible posted:Yes, this. Oh okay, my mistake!
|
# ¿ Apr 14, 2017 03:19 |
|
caberham posted:I actually have an empty hardware box with 6 x 4TB hard drives on a supermicro board and octo CPU. But I'd rather learn how to run everything in a VM before I start tinkering with the actual box Learning, sure. Production? You want the NAS OS to have hardware-level access. CommieGIR fucked around with this message at 21:15 on Apr 14, 2017
# ¿ Apr 14, 2017 18:19 |
|
IOwnCalculus posted:That autocorrect It's high speed, and crashes often
|
# ¿ Apr 14, 2017 21:16 |
|
IOwnCalculus posted:Applies to my NAS right now Sometime in the last few days it stopped responding on HTTP, SSH, and CIFS, but it's still working on NFS... which is what my docker box uses so everything else is still working. My NAS is stable, but my UPS batteries have kicked the bucket, so while the OS drives are RAID-1'ed, I try to keep current backups in case we get a hiccup and it scrambles the drives. Stutes posted:I was ready to upgrade my DS1512+ on day one if they had launched with the new Atom processors. Why would they launch a model with a 4-year-old known defective processor? I wish Intel/NAS manufacturers would push low-voltage Xeons instead of Atoms. http://www.anandtech.com/show/8357/exploring-the-low-end-and-micro-server-platforms/17 CommieGIR fucked around with this message at 15:52 on Apr 15, 2017
# ¿ Apr 15, 2017 15:49 |
|
IOwnCalculus posted:FreeNAS 10, of course Yeah, if you plan to run off a USB drive, always ensure it's going to set up and run from a RAMdisk after booting to prevent excessive reads/writes.
|
# ¿ Apr 16, 2017 03:36 |
|
necrobobsledder posted:At work we have a 5 Petabyte HDFS cluster that is dying and there's a lot of bad blocks it had that we didn't discover until we went to go read files. If we had ZFS, we'd have saved ourselves a whole heck of a lot of effort trying to figure out which files were corrupted. I want to know more, if you can, because most hardware RAID conducts block-level checks and will alert you. What happened?
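A toy illustration of the ZFS property invoked above: store a checksum with every block and verify it on every read, so silent corruption surfaces the moment a block is touched instead of when you finally go hunting for bad files. The block size and the injected "bitrot" are contrived for the example:

```python
import hashlib

BLOCK = 4096

def write_blocks(data):
    """Store each block alongside its checksum, ZFS-style."""
    blocks = [data[i:i + BLOCK] for i in range(0, len(data), BLOCK)]
    return [(b, hashlib.sha256(b).hexdigest()) for b in blocks]

def read_blocks(stored):
    """Verify every block on read; report any silent corruption."""
    out, bad = b"", []
    for i, (block, digest) in enumerate(stored):
        if hashlib.sha256(block).hexdigest() != digest:
            bad.append(i)  # a checksumming FS would repair from a mirror here
        out += block
    return out, bad

stored = write_blocks(b"x" * 10000)
stored[1] = (b"y" * BLOCK, stored[1][1])  # simulate bitrot in block 1
_, bad = read_blocks(stored)
print(bad)  # [1]
```

The key design point is that the checksum lives with the metadata, not the block itself, so a stale or corrupted block can't vouch for its own integrity.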
|
# ¿ Apr 23, 2017 16:16 |
|
DrDork posted:"I lost a disk and spent two years with no redundancy" vv Mine is set up to send an email alert if a disk fails, so I'm not used to not having redundancy.
|
# ¿ Apr 24, 2017 16:32 |
|
redeyes posted:I just ran into an issue with my Dell Poweredge T410. System has 6 SATA ports and I have 6 HD's plus a boot SSD. Figured, no problem I'll stick in a cheapo Asmedia SATAx2 PCI-E card. Well the drat T410 won't complete post with it in the system AND a HD hooked to it. If I stick the controller in the system and boot, no problem, boots fine. Hook one HD to it and the Dell SATA AHCI BIOS freaks out and says all the drives are uninitialized at post and then you get nothing. Try disabling the built-in SATA controller and then installing the card.
|
# ¿ Apr 26, 2017 01:42 |
|
Smashing Link posted:Very quick question: are iSCSI LUNs inherently less stable than typical shared folders on a NAS? Any other downsides? My current use case is a mac mini syncing my Dropbox account to an iSCSI LUN on a DS1515+, and the LUN is then backed up weekly to a 4TB MyPassport and another 1515+ at my work. iSCSI is going to be more stable, because it's usually used for actual boot devices like VMs.
|
# ¿ May 9, 2017 18:37 |
|
I'm still running Corral until they release 11 with VM support and Docker again. I'm quite happy where I am. And FreeBSD is the bee's knees. CommieGIR fucked around with this message at 01:26 on May 24, 2017
# ¿ May 24, 2017 01:23 |
|
So, I got my hands on some RAM and maxed out my home PowerEdge R710. Then boot2docker went nuts on my FreeNAS/FreeBSD install and messed up the networking; had to fix that.
|
# ¿ Aug 6, 2017 03:09 |
|
necrobobsledder posted:300 GB of RAM... are you running a Hadoop cluster or a home server? The RAM's power draw alone might rival my 8-disk NAS box. It's running a VM cluster and Docker containers for Plex, has a PERC 6/E connected to an MD1000 array, FreeNAS is running on two 100GB SSDs in RAID1, and the VMs are stored on a 500GB RAID6 array of six 146GB SAS disks.
|
# ¿ Aug 7, 2017 23:55 |
|
Saukkis posted:Sounds like instead of spending money to move the drives to your garage you should buy a pair of cheaper 500GB SSDs to replace the RAID6 array. I'd love to, but all the drives are recycled; I didn't spend a dime. If I had the money, I'd replace all the RAID6 with SSDs like you said. Paul MaudDib posted:Can I use regular SATA cables between a SAS device and a SAS controller? Which SAS controller? Chances are: no, unless you have an adapter. Some SAS controllers do break out into SATA-style connectors, but they generally still carry extra pins for the SAS addressing. I'd suspect not. SAS to SATA? Probably. SAS to SAS? Likely not. You'll be missing key pins. CommieGIR fucked around with this message at 13:32 on Aug 8, 2017
# ¿ Aug 8, 2017 13:28 |
|
Saukkis posted:I meant that if you want to put the drives in the garage you will have to buy new hardware, so it's better to buy SSDs instead. I think you have me mixed up. I'm not putting drives in my garage.
|
# ¿ Aug 8, 2017 14:10 |
|
redeyes posted:No just total bandwidth. I have 7x Hitachi 7200RPM NAS drives. They can hit between 150-200MB/s in general for a single drive and that is just fine. I think for that to work I would need 10G ethernet? How many devices do you plan to feed connections to for that? I mean, unless you are building a SERIOUS VM lab with 25-50 machines on it, just Gigabit will probably be fine, if you even saturate that. I second the call for iSCSI.
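Rough numbers behind the 10G question above: gigabit tops out around 118 MB/s of usable payload, so a single one of those drives can already saturate it, and the full array could in principle feed roughly ten gigabit clients at once. The throughput figures below are ballpark assumptions based on the post:

```python
# Does a 7-drive array justify 10GbE? Depends how many clients you feed.
# Sequential throughput numbers are rough assumptions, not benchmarks.
drives, mb_per_drive = 7, 175     # midpoint of the quoted 150-200 MB/s
gige_mb, tengig_mb = 118, 1180    # approx. usable payload, 1GbE vs 10GbE

array_mb = drives * mb_per_drive  # best-case aggregate sequential read
print(array_mb)                   # 1225 MB/s: more than even 10GbE
print(round(array_mb / gige_mb, 1))  # ~10.4 gigabit clients to saturate it
```

Which is the point being made: unless many clients hammer the box simultaneously, gigabit is the practical ceiling per client anyway.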
|
# ¿ Aug 8, 2017 15:56 |
|
Paul MaudDib posted:LSI something, and I can confirm it's pinned out into four SATA-style connectors (right now I just use it as a SATA controller). I don't know if that'll work; the connector you linked is for hooking a SATA drive to a SAS controller.
|
# ¿ Aug 9, 2017 00:18 |
|
As long as it has the SAS connectors on the board, yes, that'll work just fine. But if it only has SATA, then it's a different story.
|
# ¿ Aug 9, 2017 03:18 |
|
|
Please use ECC RAM. It really does make a significant difference to data integrity and will prevent downtime. I've had machines run for 5-6 years without error thanks to it, whereas I've had desktops crash and burn repeatedly due to errors that ECC would have caught. That's also before you get into the fact that boards designed for ECC RAM are generally designed to be fault-tolerant of DIMM failures as well. If data integrity is not a big thing for your home NAS, fine, it's up to you, but frankly it's a worthwhile investment. As for a datacenter environment: it used to be you could get away without ECC because you had more metal available to fail over to, but with increasing consolidation through virtualization, that's not as common anymore. CommieGIR fucked around with this message at 13:21 on Aug 10, 2017
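For anyone curious what ECC actually does under the hood, here's a minimal sketch of the idea using a Hamming(7,4) code: three parity bits protect four data bits, and the parity checks combine into a "syndrome" that points at the flipped bit. Real ECC DIMMs use wider SECDED codes over 64-bit words, but the correction mechanism is the same in spirit:

```python
# Hamming(7,4): corrects any single flipped bit in a 7-bit codeword.
# Bit positions 1..7 hold p1, p2, d1, p3, d2, d3, d4.
def encode(d):                      # d: list of 4 data bits
    p1 = d[0] ^ d[1] ^ d[3]         # covers positions 1,3,5,7
    p2 = d[0] ^ d[2] ^ d[3]         # covers positions 2,3,6,7
    p3 = d[1] ^ d[2] ^ d[3]         # covers positions 4,5,6,7
    return [p1, p2, d[0], p3, d[1], d[2], d[3]]

def decode(c):                      # c: 7-bit codeword, possibly corrupted
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3  # 1-based index of the flipped bit, 0 if none
    if syndrome:
        c = c[:]
        c[syndrome - 1] ^= 1         # correct the flip; this is the crash ECC avoids
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = encode(word)
code[4] ^= 1                        # simulate a cosmic-ray bit flip
print(decode(code) == word)         # True
```

A non-ECC machine would simply consume the flipped bit, which is how you end up with the mystery crashes and silent corruption described above.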
# ¿ Aug 10, 2017 13:09 |