|
Is a dual-core Atom sufficient to run a large (12TB or so) raid-z2 server? Or does the Atom's RAM limitation prevent acceptable performance? This would just be for a general media server infrequently accessing mostly sequential data. Is raid-z2 even worth it for that usage scenario? Does lightning strike twice THAT often?
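For what it's worth, the capacity math on a build like this is quick back-of-the-envelope stuff (a sketch — the 6x 3TB layout is an assumed example, not a specific build):

```shell
# raidz2 spends two disks' worth of space on parity, whatever the disk count.
disks=6
size_tb=3
usable=$(( (disks - 2) * size_tb ))
echo "${usable} TB usable before filesystem overhead"   # 12 TB
```

The oft-quoted "1GB of RAM per TB of storage" ZFS guideline is really aimed at dedup-heavy workloads; a sequential media server gets by with much less.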
|
# ¿ Mar 13, 2012 17:13 |
|
Would it be possible (with a 4-port NIC) to run RAIDZ2 in FreeNAS and something like m0n0wall in VirtualBox, so the same machine can operate as both a NAS and a router?
|
# ¿ Apr 20, 2012 19:03 |
|
UndyingShadow posted:You'd need bare metal VM software, and I know there's a free version of VMware that'll do it. You'll likely need a RAID card that can be "passed through" to the FreeNAS VM, as I don't think you can directly access storage volumes through VMware. Also, I don't know why you'd need a 4-port NIC; 2 should be enough. Maybe I'm misunderstanding this, but I was intending to have FreeNAS be the "host" OS and run the VM on top of it so I could use ZFS. Is this stupid/impossible? I'm assuming that since you can install VirtualBox on FreeBSD, you can install it on FreeNAS. I'm aware this throws the "boot from a thumb drive" proposition out the window, but since I was intending to also use it for SABnzbd and Sickbeard, I was going to need an OS drive anyways.
|
# ¿ May 2, 2012 08:08 |
|
UndyingShadow posted:FreeNAS is designed to be as appliance-like as possible, which means that if you install the FreeNAS image on a larger hard drive, it just won't see any of the space past 2 gig. I have no idea what would be involved in running a VM on top of FreeNAS, but it really, really wasn't designed to be used this way, and since the FreeNAS support forums are all but useless for even basic things, I bet configuring it and getting it working would be a nightmare. So it looks like I'm after more of a general-purpose server, then. The secondary purpose is of course to learn (by doing) FreeBSD/networking/VMs. Thanks for your input!
|
# ¿ May 2, 2012 20:26 |
|
Would someone mind taking a look at this build for a 12TB (with raidz2) FreeNAS setup? Thanks in advance! http://secure.newegg.com/WishList/PublicWishDetail.aspx?WishListNumber=20731586
|
# ¿ Jul 2, 2012 01:20 |
|
So I bought this today:
Is the CIFS/Samba support really as bad as people say it is? All of my computers are Windows and only 1 of them has Pro, so NFS wouldn't do me much good. Does the NAS4Free SMB implementation work any better?
|
# ¿ Dec 7, 2012 04:49 |
|
LmaoTheKid posted:Why not give Solaris/OpenIndiana and napp-it a shot? Their CIFS is kernel mode and apparently really good. I think I will do this! This sounds pretty painless and will force me to actually brush up on my UNIX(-like) skills. Any preference on Solaris 11 vs. OpenIndiana? Thanks! EDIT: Well, I should have just read napp-it's manual; he recommends OpenIndiana. forbidden dialectics fucked around with this message at 06:00 on Dec 7, 2012 |
# ¿ Dec 7, 2012 05:13 |
|
Christmas came a little early this year... forbidden dialectics fucked around with this message at 16:14 on Dec 14, 2012 |
# ¿ Dec 14, 2012 07:38 |
|
movax posted:I know you're super excited but no need for us to see what you're copying to and from places. *cough* It's in this:
|
# ¿ Dec 14, 2012 16:15 |
|
I've had 6 Reds from B&H photo running for about 6 months, all's well.
|
# ¿ May 23, 2013 23:32 |
|
I've had a large 6-drive RAID-Z2 array running on OpenSolaris for over a year now. Is there any maintenance that needs to be done, software wise? I haven't noticed any decline in speed and there are no hardware or software errors on the drives. SMART readout looks perfect.
|
# ¿ Feb 17, 2014 05:11 |
|
IOwnCalculus posted:Scrub the pool once in a while, I do it weekly via crontab. Not much else to do. Set a job to scrub once a month. Thanks!
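For anyone else setting this up, a scheduled scrub is a one-line crontab entry (a sketch — `tank` is a placeholder pool name; check yours with `zpool list`):

```
# /etc/crontab — scrub the pool "tank" at 03:00 on the 1st of every month
0 3 1 * * root /sbin/zpool scrub tank
```

`zpool status tank` shows scrub progress and the result of the last run.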
|
# ¿ Feb 17, 2014 06:28 |
|
The catch this season was pretty good, boys. Testing... (definitely not going to burn the house down). Doing it right this time, with, like, actual server hardware and not crap that my neighbor used to mine bitcoins with. Had to split the testing across 2 different computers, and the one with only 2 drives attached finished much quicker (the write/read speeds decrease with more drives attached, probably saturating whatever link the USB controller is connected to). The first batch was good, so let's shuck! These are HGST air-filled DC HC320 drives that have been de-tuned to 5400 RPM. While maybe not quite as nice as the helium-filled Reds people were getting from shucking other enclosures, $150 for an 8TB datacenter drive is pretty goddamn good. Eventually I'm going to migrate the 6 3TB Reds from my current NAS over to this enclosure (so, 6x 8TB drives + 6x 3TB drives, running in separate RAIDZ2 pools). They're nearing 6 years old but still don't throw any errors, so I'll keep using them for low-value storage (trashy reality shows, etc.). And yeah, yeah, a CLC on a server is stupid as hell, but it was just laying around and I had nothing else to use it on, so whatever.
|
# ¿ Oct 28, 2018 00:45 |
|
taqueso posted:Is the buttons trackpad actually better at tracking or is it just for the buttons? The trackpad actually sucks rear end, use the nib, it's the only way to fly
|
# ¿ Oct 28, 2018 01:05 |
|
Atomizer posted:What software are you using / what's your testing process? Since I didn't want to have to set up a dual-boot partition with Linux (and use smartctl and badblocks), I used GSmartControl to do a short SMART test (2 minutes) before starting anything, then used h2testw to write/verify each drive (~30 hours). Then I ran the "long" SMART test from GSmartControl (~15 hours). Because it's Windows, you're only running it on the filesystem, so the last couple hundred reserved MB doesn't get tested...but I'll take those odds. Without the fans, the drives were getting up to 55C, which is way too hot. Added the fans and they held at around 38C, which is perfect.
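The Linux route I skipped looks roughly like this (a sketch — `/dev/sdX` is a placeholder, and `badblocks -w` is destructive, so only run it on drives with nothing on them):

```shell
# Quick SMART health check, then kick off the long self-test
smartctl -H /dev/sdX
smartctl -t long /dev/sdX        # test runs on the drive itself, in the background
smartctl -a /dev/sdX             # read attributes/results when it finishes

# Destructive four-pass write/verify of every sector (wipes the drive!)
badblocks -wsv -b 4096 /dev/sdX
```

Unlike h2testw, this hits the whole device rather than just the filesystem, so those reserved MB get covered too.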
|
# ¿ Oct 28, 2018 03:53 |
|
I heard you guys like hard drives. 6x 8TB in one RAIDZ2 pool, 6x 3TB in another. About 40 TiB usable (for now). Running ProxMox with separate LXC containers for Turnkey Fileserver, Plex, SABnzbd, Sonarr, etc. ZFS pools are handled by the host and mounted as bind points into the containers.
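The bind-mount part is a couple of lines per container (a sketch — container ID 101 and the paths are made-up examples, not my actual layout):

```
# /etc/pve/lxc/101.conf — expose a host ZFS dataset inside the container
mp0: /tank/media,mp=/mnt/media
```

Or equivalently from the host shell: `pct set 101 -mp0 /tank/media,mp=/mnt/media`. The container just sees a normal directory; the host keeps full control of the pool.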
|
# ¿ Oct 31, 2018 20:06 |
|
nerox posted:What case is that? Lian-li A75. Kind of a disappointing case from Lian-li in terms of quality/finish, but it was quite cheap and is literally the only tower case I've found with 12 bays.
|
# ¿ Oct 31, 2018 20:46 |
|
Farmer Crack-rear end posted:Nice, I've got one of those cases myself. Your picture threw me off at first because I saw that tower PC in the background as an open door or something and was a bit confused. "that looks so much like my file server's case, but what's with the door?" Yeah that's a pc-v3000 in the back all torn apart for my other project.
|
# ¿ Oct 31, 2018 21:41 |
|
Sorry, I just transitioned to a ReiserFS system, please don't tell my wife.
|
# ¿ Nov 6, 2018 00:37 |
|
H110Hawk posted:RAID is not backup. If you haven't touched enough computers to see a raid controller poo poo the bed and write garbage to all of your mirrored replicas then please just trust in the mantra of raid is not backup. I backup all of my critical files onto a striped 1000-drive 1.44MB floppy array.
|
# ¿ Nov 8, 2018 02:52 |
|
Rusty posted:Okay, I have another question. I am not sure unraid is for me, and freenas just looks like a fancy web UI on top of FreeBSD. Is there any reason I shouldn't just use a Linux Distro for a NAS box? Are the dedicated NAS stuff better at memory management? Are there features I need like ZFS? I just want a dedicated box for storage and Plex. I'm thinking of just using CentOS with mdadm. Look into ProxMox, it's what I used for my new setup and it's pretty dang cool. It does ZFS if you want it, but otherwise whatever Debian supports if you don't. Run everything in LXC containers, with one of them being Turnkey Fileserver (there's a default template in the ProxMox library). Then one for Plex, SABnzbd, etc.
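If you do go the CentOS + mdadm route, the core of it is only a few commands (a sketch — the device names are placeholders, and this destroys whatever is on them):

```shell
# 6-drive RAID6 (two-disk redundancy, roughly analogous to raidz2)
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[b-g]
mkfs.xfs /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf   # persist the array across reboots
```

You lose ZFS's checksumming and snapshots, but it's dead simple and any Linux box can assemble the array.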
|
# ¿ Nov 13, 2018 23:24 |
|
One thing that was a consideration for me was that Plex does not support hardware transcoding on FreeBSD; it's Windows- or Linux-only. If your processor supports Quicksync, this can be really useful if you and your spouse have 4k (or even 1080p) media that you want to watch at the same time on your phones on a plane (or in other bandwidth-limited situations), for example. It is technically "worse" than the software transcoder and does require a Plex Pass ($$). Just a data point to consider. It does work perfectly using GPU passthrough in an LXC container.
|
# ¿ Nov 15, 2018 00:37 |
|
Sub Rosa posted:How hard is it to get hardware transcoding working in plex if plex is in a docker? I would need to find a low profile GPU I think too, but I would love to add something like this to my N54L. Not hard at all. Basically the same process as LXC, install the drivers on your base OS and use passthrough to the container. If your CPU supports Quicksync you might see additional benefit with a discrete GPU but it's probably not required unless you're doing a bunch of 4k HEVC transcodes.
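On the LXC side, passthrough mostly boils down to letting the container see `/dev/dri` (a sketch from my notes — container ID 101 is a made-up example, and the exact cgroup key varies between LXC versions):

```
# /etc/pve/lxc/101.conf
lxc.cgroup.devices.allow: c 226:* rwm          # DRM devices are char major 226
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

With the host drivers installed, Plex inside the container picks up the render node and Quicksync just works.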
|
# ¿ Nov 15, 2018 16:58 |
|
Sniep posted:tl;dr: WD purples are basically the same as the reds I believe the Seagate "archive" drives use SMR which is really bad for NAS type workloads.
|
# ¿ Nov 28, 2018 04:40 |
|
Tivac posted:Pulling the trigger on the Synology 918+, also gonna grab a few drives. WD Elements 8TB are even cheaper and are usually HGST DC HC320s.
|
# ¿ Dec 1, 2018 03:33 |
|
EVIL Gibson posted:Please. Don't use USB. A USB flash drive doesn't have SMART info to let you know it's beginning to die. There are a few that do, but they're weird. I never understood this recommendation. People were still making it well, well after you could get 256GB SSDs for like $50.
|
# ¿ Jul 25, 2020 02:33 |