|
That money pit of a table server is the most glorious thing I have seen in quite a while, and totally made up for the otherwise crappy Monday I am having.
|
# ¿ Dec 22, 2014 19:53 |
|
|
I have 6 3TB WD Reds in a RAIDZ2 on FreeNAS 9.3 in my current 'production' configuration at home. I also threw an old Intel 80GB SSD in as cache. Mine is a VM with 8GB as well. Since I added the SSD cache, I sustain 120MB/s across my gigabit network to my desktops. What kind of network cards / interfaces are you using? If you are serious about using something like this, I STRONGLY recommend getting some Intel PCI-E NICs for your server. The Realtek ones on most motherboards are garbage when it comes to actual performance. I have an HD HomeRun Prime set up for CableCARD tuners across the network, and I have never been able to get my DVRs to work reliably with crappy onboard networking. Another point about RAIDZ1 on FreeNAS: don't do it for data you care about. They scream up and down about how it's deprecated and no one should use it. IMHO, no one should use RAID5 for data they care about.
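If you want to rule the NIC in or out before blaming the disks, test the raw network path with iperf3. A minimal sketch (the IP address is a placeholder for your NAS):

```shell
# On the NAS (or any box on the far side of the link), start a server:
iperf3 -s

# On the desktop, run a 30-second test against it:
iperf3 -c 192.168.1.50 -t 30
```

A healthy gigabit link should report somewhere around 940 Mbit/sec. If iperf3 is fine but file copies are slow, look at the disks or the sharing protocol; if iperf3 itself is slow, suspect the NIC or its driver.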
|
# ¿ Dec 24, 2014 14:51 |
|
Sasquatch! posted:I take it you're decently happy with Crashplan? I've been using CrashPlan+ for years and love it. My family's computers back up to both my server here and to the ~~CLOUD~~. If I am going to restore 50GB of data, I'd rather throw it on a USB drive and roll vs waiting weeks to pull it down.
|
# ¿ Dec 25, 2014 04:49 |
|
DNova posted:I'm a big fan of ZFS RAIDZ. I did that until I was scared by everyone saying how bad it is if a drive fails. I bought 2 more drives and now am doing RAIDZ2.
|
# ¿ Jan 20, 2015 22:13 |
|
UndyingShadow posted:I'm getting ready to build a hybrid VMware ESXi (never used it before, but I need to increase my server OS knowledge and this seems like a good platform for it) and NAS box. Here's what I have so far: I run an almost identical setup to what you are looking to build. If you would like to do ESXi, I would strongly recommend you get a board with a supported chipset (Opteron/Xeon) from VMware's HCL. As for the networking, you can get away with using only 2 NICs for your setup. I have two vSwitches: WAN and LAN. Each vSwitch is connected to one of my onboard Intel NICs and then to the cable modem and a gigabit switch, respectively. pfSense gets two vmxnet3 NICs (or E1000, which is easier but uses WAY more CPU cycles under load) that bridge the WAN and LAN vSwitches. I then have other gigabit switches and an 802.11ac router in bridge mode for wireless. I am also running FreeNAS and passing through an LSI 9211-8i for an HBA. Having everything on vmxnet3 and local to VMware is awesome because it is a 10Gb connection, and I've moved data from VM to VM from the FreeNAS store at 129MB/sec.
|
# ¿ Jan 22, 2015 01:25 |
|
UndyingShadow posted:Noted on the networking. Trying to find a server motherboard seems like it'd be super expensive. Is there something I'm missing? I have the Sandy Bridge version of this board and a Xeon E3 1230 processor. http://www.newegg.com/Product/Product.aspx?Item=N82E16813132014 It has a C226 chipset that supports a Xeon or certain i3 processors. The added bonus is there is a driver pack for ESXi 5.5 that enables support for the onboard Intel i210 NICs.
# ¿ Jan 22, 2015 02:28 |
|
Boner Wad posted:You're running a virtualized router? Have you had any issues with that setup? I've been thinking about switching ISPs and I'd go from 30/3 to 100/5. I'd think this would keep up possibly. Sorry, I had the week from hell at work. I am running a basic Xeon workstation board with a chipset (C206) that is supported on VMware's HCL. The good news is that pfSense 2.2 recently dropped, and I spent some time updating to that this weekend. The big change, particularly for a 0.1 release, is the change from FreeBSD 9.x to FreeBSD 10.x. The most important part of the changelog for me is native kernel support for vmxnet3, which absolutely owns, because I spent a few hours trying to get VMware Tools to install before and it was painful. My original install was the x86 version, and I later found out that you are not supposed to run the x86 version on an x86-64 capable processor, so the reinstall this weekend was to x86-64 and it is a bit more performant.
|
# ¿ Feb 2, 2015 15:07 |
|
phosdex posted:Pfsense actually updated from FreeBSD 8, so it's a pretty big jump. Oh right. I knew that but was still catching my breath from digging my car out this morning.
|
# ¿ Feb 2, 2015 18:43 |
|
I've found that if you are not using a discrete Intel NIC, you will be disappointed.
|
# ¿ Mar 15, 2015 00:06 |
|
SopWATh posted:What does this mean? I've been buried with work and haven't responded to this yet. From my experience, almost any onboard consumer NIC is going to suck if you are looking at high performance on a gigabit network. I have a SiliconDust HDHomeRun that takes a CableCARD and streams it across the network. I've tried onboard Realtek and Intel and both will drop frames and lose the buffer on live TV playback regardless of the CPU or chipset in the box. They also dramatically limited throughput when I used the box as a Windows fileserver, to the tune of like 30-40MB/sec. Changing it out for a discrete Intel NIC like this one: http://www.newegg.com/Product/Product.aspx?Item=N82E16833516167&Tpk=N82E16833516167 alleviated my network issues with live TV and slow file transfers. My desktop has an Intel NIC built in (Asus Z68) and I can get 110MB/sec to the file server to/from my SSDs.
|
# ¿ Mar 22, 2015 15:24 |
|
Nam Taf posted:E: though I'm starting to reach the point where single drives of 8TB is a *lot* of chances at very small probability failures and that worries me. Maybe raid6 is the best next step. ZFS does own, but it can't eliminate the chance of a drive failing (or throwing an unrecoverable read error) during a rebuild of a RAID5-style volume, particularly as the size of the individual drives goes up. RAIDZ2 for life.
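That worry can be put in rough numbers. Consumer drives are commonly rated at about one unrecoverable read error (URE) per 10^14 bits, and a single-parity rebuild has to read every surviving bit, so the chance of hitting at least one URE is about 1 - e^(-p*n). A back-of-the-envelope sketch (the rates are the usual spec-sheet numbers, not measurements):

```shell
# p = rated URE probability per bit (1e-14 is the common consumer spec)
# n = bits that must be read to rebuild ~8 TB of data (8e12 bytes * 8)
awk 'BEGIN { p = 1e-14; n = 8e12 * 8; printf "%.2f\n", 1 - exp(-p * n) }'
# prints 0.47 -- nearly a coin flip of hitting a URE on one full pass
```

With double parity (RAID6/RAIDZ2) a single URE during rebuild is survivable, which is the whole argument.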
|
# ¿ Apr 21, 2015 13:43 |
|
iSCSI and FC can add storage to a server that is backed with more disk, has SSD cache, is centrally managed, etc. I think the most common use of iSCSI now is to provide storage to virtualization hosts. A key distinction is that iSCSI / FC is block level storage that presents the server with what looks like another local disk. So in the case of a Windows file server, an iSCSI LUN is attached to the server, and you would need to format it NTFS and then share it out from there. While you can move iSCSI LUNs between servers, it is generally not a good idea.
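On Linux, the attach-then-format flow looks roughly like this with open-iscsi (the portal IP and target IQN are placeholders):

```shell
# Discover targets advertised by the NAS
iscsiadm -m discovery -t sendtargets -p 192.168.1.50

# Log in; the LUN appears as a new block device (e.g. /dev/sdb)
iscsiadm -m node -T iqn.2015-04.org.example:tank0 -p 192.168.1.50 --login

# Like any local disk it arrives unformatted -- this is the block-vs-file
# distinction in practice: the server, not the NAS, owns the filesystem
mkfs.ext4 /dev/sdb
mount /dev/sdb /mnt/lun0
```

The Windows equivalent is the built-in iSCSI Initiator plus formatting the new disk NTFS in Disk Management.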
|
# ¿ Apr 22, 2015 00:13 |
|
I had 4x3TB Red drives in a RAIDZ1 and it maxed out my gigabit connection on reads and came in just under on writes. An SSD cache will help with reads. Now I have 6x3TB Reds in a RAIDZ1 with an 80GB SSD cache and I max out gigabit on pretty much everything.
|
# ¿ Apr 22, 2015 20:24 |
|
Desuwa posted:I run Crashplan on my Windows desktop and have it back up both to and from my NAS and to their cloud. The backup to the NAS (including files from the NAS to itself) covers my desktop dying or reasonable file recovery needs. It's possible to make Crashplan run on FreeBSD/FreeNAS but it requires Linux emulation, and that's one more layer of stuff I really don't want to be blocking me if something breaks and I need to do a restore. I had problems with this until I just manned up and presented an iSCSI LUN from FreeNAS to the 2012 R2 VM running Crashplan. I have all of my and my family's machines backing up to my server and the cloud for about 4 years now and can't say enough good about it.
|
# ¿ May 20, 2015 15:17 |
|
devilmouse posted:What's the current "best" platform for ZFS? It's finally time I upgrade my old x86 Solaris Express install from back in '09 or '10 but I haven't paid much attention to ZFS in the meantime. The only real requirement is that I can run Linux VMs on it. FreeNAS has been great for me. The 9.x release was a bit spotty at first, but has been rock solid for a while now.
|
# ¿ Jun 5, 2015 14:06 |
|
Sounds like they have a custom firmware that prevents them from working like standard drives.
|
# ¿ Jun 6, 2015 16:31 |
|
FreeNAS 9.3 folks: a recent update killed my sustained throughput on CIFS shares, so transfers would go full speed for a few seconds, crater to zero, go full speed, crater, etc. I found this thread on the FreeNAS forums: https://forums.freenas.org/index.php?threads/cifs-directory-browsing-slow-try-this.27751/ In particular, the section "#4 Disable DOS attributes" resolved my issues by putting the following settings in the Auxiliary parameters section of the CIFS service config: code:
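The code block didn't survive the archive. The "#4 Disable DOS attributes" settings from that thread look roughly like the following Samba auxiliary parameters (reconstructed from the linked post, so verify against it before pasting):

```
ea support = no
store dos attributes = no
map archive = no
map hidden = no
map readonly = no
map system = no
```

These stop Samba from round-tripping DOS attribute bits through ZFS extended attributes on every file operation, which is where the stalls came from.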
|
# ¿ Jul 13, 2015 18:20 |
|
Desuwa posted:RAID is not a backup. 1000x this. I have my data in a RAIDZ2, but still make monthly backups to external hard drives, and anything I really care about gets backed up to the CrashPlan cloud. Relying on RAID as a backup, even with replication to another system, is playing with fire.
|
# ¿ Jul 15, 2015 04:03 |
|
In case someone else has a similar config: FreeNAS 9.3.1 moves the driver from supporting FW16 to FW20 for the LSI 9211-8i and the other rebranded cards. After updating, it gives a yellow warning that there is a driver and firmware mismatch. I ended up just passing the card through to my Server 2012 R2 VM and updating the firmware there, and everything was golden.
|
# ¿ Aug 27, 2015 20:21 |
|
UndyingShadow posted:So you're saying if I have an IBM M1015 or similar I need to update the firmware on the card to FW20? If you move to 9.3.1 you do. I've been doing the updates since 9.3.0, and the 9.3.1 roll-up was when they implemented the driver change after a long soak on the beta path. FW20 came out like last year, so it's not bleeding edge or anything.
|
# ¿ Aug 27, 2015 21:05 |
|
sleepy gary posted:To further elaborate, what I am envisioning is a laptop and a handful of 2.5" USB hard drives, probably in RAIDZ-2. Frequent power loss is probable, so everything will have backup power (the laptop inherently plus something I will probably have to build for the hard drives*). So far this is the best I've come up with that meets the requirements (very small/portable, serviceable, ability to use laptop as a normal computer with the array and disks unmounted). This sort of setup will probably only cause you pain. Look at a small NAS that can do iSCSI and use an Ethernet crossover cable and save some sanity.
|
# ¿ Oct 22, 2015 14:25 |
|
sleepy gary posted:Not sure why you guys are getting hostile. I am asking you for pitfalls I haven't considered but nobody has given me any other than what I already knew, which is that it will be a little unwieldy when everything is attached. You can't plug drives in in the wrong order for ZFS arrays. The responses to your questions are less hostility, and more warnings that what you are planning is really stupid and will fail spectacularly at some point, taking some or all of your data with it. You come to a forum with experts, ask questions, and then refute the answers. But what do I know? I am just the technical lead for a prominent consumer/corporate order company that manages hundreds of terabytes of enterprise grade storage. Go right ahead with your plan!
|
# ¿ Oct 23, 2015 13:33 |
|
red19fire posted:Final question: is there a way to connect the NAS directly to the desktop? I've tried before and couldn't even find the drive. I don't even need 90% of the NAS features, it's mainly a file archive for my one-man operation. Technically you can use iSCSI to directly attach a LUN from a NAS, or use NFS to share directly as well. However, I would not recommend either of those things; stick to SMB/CIFS rather than AFP if possible.
|
# ¿ Nov 5, 2015 22:39 |
|
necrobobsledder posted:There are some consequences for using SMB / CIFS if you're a fairly hardcore Mac user - no Spotlight indexing and other metadata like tagging on those network drives. This has led me down a dark path of trying to get the netatalk service on FreeNAS 9.x to support all the filesystem metadata structures I'd need to make AFP work fine, but ultimately I've come to the conclusion that if you want most of the Mac features, you're just going to have to give a raw block device to an OS X machine and let it do its HFS+ thing. Now I'm experimenting with iSCSI targets for an OS X El Capitan VM and after my trial so far I'm about to bite the bullet and pay for an iSCSI initiator on OS X because the free ones basically suck. Oh, I get it. I am using FreeNAS for Time Machine backups for that same reason. The ATTO and globalSAN initiators suck. However, this appears to be a thing now: http://www.kernsafe.com/product/macos-iscsi-initiator.aspx
|
# ¿ Nov 5, 2015 23:41 |
|
Yeah, I had the same issue updating 9.3 over the summer with my LSI 9211-8i HBA. Flashing it to the most recent firmware will fix it.
|
# ¿ Dec 30, 2015 20:22 |
|
I have zero sympathy, because RAID5 in a workplace is a really bad idea; it really should be RAID6 or RAID10. What it comes down to is a lack of understanding of what the actual risks are and where your points of failure lie. Given 3 RAID cards, the solution is not to stripe them in RAID0, especially with consumer grade SSDs. If he had put those cards into HBA mode and used a software RAID that was NOT Windows, like ZFS, even just a RAIDZ1, he would have had a much better experience. Hell, even Windows Storage Spaces on top of HBA mode would have been better. It is a prime example of how someone 'good with computers' trying to do a business class (not even enterprise) implementation on those principles is basically doomed to fail.
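For anyone curious what the HBA-mode plus software RAID route looks like: once the controller presents raw disks, building a ZFS pool is a one-liner. A sketch (device names are placeholders; they'd be da0-style on FreeBSD, sd*-style on Linux):

```shell
# Three SSDs presented raw by the HBA, pooled as RAIDZ1 (single parity)
zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd

# ZFS checksums every block and can self-heal from parity; check with:
zpool status tank
```

The point is that ZFS sees the real disks, so it knows which copy of a block is bad, which a hardware RAID0 across three controllers never can.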
|
# ¿ Jan 8, 2016 18:48 |
|
Vidaeus posted:Do you mean the cabling setup? It goes like this: Realtek NICs can give really poor performance in NAS configs. A discrete Intel NIC can make a huge difference. http://m.newegg.com/Product/index?i...uQ&gclsrc=aw.ds
|
# ¿ Feb 4, 2016 00:31 |
|
PerrineClostermann posted:So I installed FreeNAS on my old Core 2 Duo machine, with 6gb RAM. my drives are plugged into SATA2 ports. The pool is configured in RAIDZ2. I'm getting 30MB/s transfer speeds, would that be pretty typical for this kind of setup? What NIC are you using? Crappy onboard Realtek devices don't handle a lot of throughput well. I bought Intel NICs for all my devices for this reason.
|
# ¿ Feb 24, 2016 14:03 |
|
Call Me Charlie posted:WD released an 8TB version of the My Book Desktop external drive. Has anybody heard anything about it? My approach is getting an 8TB internal/external drive to back things up to. Putting all of your eggs into a single cheap USB enclosure is very risky. I've seen a lot of them fail.
|
# ¿ Mar 9, 2016 13:49 |
|
priznat posted:AsRock Rack makes good stuff, buy with confidence. I bought a Supermicro board for my new server build and IPMI works out of the box. There is even an iPhone app!
|
# ¿ Mar 13, 2016 04:55 |
|
Has anyone upgraded from FreeNAS 9.3 to 9.10 yet?
|
# ¿ Apr 1, 2016 15:37 |
|
Anime Schoolgirl posted:I tried an LSI 9211-8i and it wasn't recognized by anything I had, nor could I flash it to "IT mode" (ie the only thing useful for one of these, and none of those being sold come flashed like that) It might have been a DOA card, in which case my luck is amazing. You are trying to use a server part on a consumer OS with Windows 10, so I am not surprised it doesn't work. The server equivalent of Windows 10 is not out yet, so there are no drivers that would be likely to work. Ubuntu will probably need drivers too. I have had that exact card running on ESXi, passed through to my FreeNAS VM, for 3 years and have never had a problem with it.
|
# ¿ Aug 16, 2016 16:27 |
|
Has anyone else had issues with the most recent FreeNAS 9.10 update and streaming video playback? I had to revert to 201606270534 and everything works fine again. I am using OpenElec and Windows 10 as clients and both had issues maintaining an SMB connection. I've been working on a couple of projects people might be interested in: I wrote an OwnCloud guide using FreeNAS and Ubuntu with iSCSI for storage. I've been really happy with it, and using Let's Encrypt for the SSL certificate is awesome because you don't have to deal with self-signed certs. www.neckbeard.org I wanted another project for my RPi 3 and I found http://www.runeaudio.com/ which is a badass headless audio player. My primary goal was AirPlay, but it does Spotify and a number of other streaming services too. An added bonus is that you can mount an SMB share and have it play files from your NAS too.
|
# ¿ Aug 24, 2016 17:23 |
|
During my Vista years, my boot volume was a WD Raptor RAID0. That was WAY fast, but only like 72GB, and then there was the whole RAID0 thing. I never lost any data, but I was damn sure to keep my actual personal data on a 2nd drive.
|
# ¿ Oct 21, 2016 15:24 |
|
What are you using for your 10g switch? I would check that your firmware and drivers line up. Are Server 2016 and Windows 10 actually supported for the 10g HBA? From the HDD perspective, 4 disks, even in RAID10, isn't a ton of spindles.
|
# ¿ Oct 22, 2016 15:19 |
|
Walked posted:Like I said; to eliminate HDD as the bottleneck I'm going from SSD (850pro 1tb) --> SSD RAID 0 (2x 850pro on hardware RAID0, with battery and 5112mb cache, write-back enabled/forced); I should very easily be doing more than 2gbit/sec; maybe not maxing out 10gbe, but notably better than what I'm seeing. You didn't mention VMware before, and that complicates things a lot. The HBA you are using is supported in ESXi 6.0, but it looks like you need to download the driver from VMware: VMware Download Also, you should check whether your server is on the compatibility list and whether it needs additional drivers. The vanilla ESXi image will 'work' until it doesn't.
|
# ¿ Oct 22, 2016 16:17 |
|
Meant to edit but hit reply. Was your Windows VM using E1000 or VMXNET3?
|
# ¿ Oct 22, 2016 16:21 |
|
I don't feel 10g is worth the cost yet unless you want a baller home lab with separate storage and hypervisor, and even then you can accomplish a lot of that with LACP on a managed switch. Unless you have a very high performing storage system, you are really not going to push past the limits of gigabit in the home. Even at the entry level of the enterprise, 10g isn't that necessary unless you have a lot of load on the storage; a lot of storage fabric is 4g/8g FC connections. Our production ESXi hosts are 2 x 10g twinax for networking and 2 x 8g FC for storage. They host anywhere from 20-50 VMs and, though I'd have to look, I doubt they really push the storage that much. The dual connections are really for redundancy rather than aggregate bandwidth. The benefit of having everything on VMXNET3 on a single ESXi host is that everything is 10g internally. Of course, you can't put your storage VM on that storage, but stuff like FreeNAS is supposed to boot from a USB/SD card anyway on bare metal.
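The LACP option, on FreeBSD (which FreeNAS sits on), is a lagg interface. A sketch, assuming two Intel ports igb0/igb1 and a switch-side port group already configured for LACP (interface names and the address are placeholders):

```shell
# Bond two gigabit ports into one logical interface using LACP
ifconfig lagg0 create
ifconfig lagg0 laggproto lacp laggport igb0 laggport igb1
ifconfig lagg0 inet 192.168.1.50/24 up
```

Worth knowing: a single TCP stream still tops out at one link's speed; LACP buys aggregate bandwidth across multiple clients, not a faster single copy.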
|
# ¿ Oct 22, 2016 17:14 |
|
AppleTalk was Apple's networking protocol before TCP/IP took over in the late '90s. AFP was a file access/sharing protocol that sat on top of AppleTalk.
|
# ¿ Dec 8, 2016 20:29 |
|
|
Storing hard-to-replace / irreplaceable data on consumer grade storage is asking for trouble, especially given your large capacity requirement. There is a very good reason that enterprise storage systems are expensive, and it is mostly performance and reliability. One giant RAID6 volume is a very bad idea; this is where ZFS and multiple vdevs will help you spread the risk out over more disks. You should look at iXsystems for TrueNAS gear. Whatever method you choose, you need to factor in backup as well, which will probably be tape based for a data set that size.
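Concretely, "spread it out over more disks" means building the pool from several smaller double-parity vdevs instead of one wide RAID6, so a rebuild only hammers one vdev's worth of drives and the pool stripes across them for speed. A sketch (disk names are placeholders):

```shell
# One pool, two 6-disk RAIDZ2 vdevs; ZFS stripes data across the vdevs,
# and each vdev can lose any two of its six disks without data loss
zpool create tank \
  raidz2 da0 da1 da2 da3 da4 da5 \
  raidz2 da6 da7 da8 da9 da10 da11
```

Adding capacity later is another `zpool add tank raidz2 ...` with a fresh set of disks, rather than reshaping one giant volume.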
|
# ¿ Dec 28, 2016 00:54 |