|
thebigcow posted:Does that 140 number include monitors? That said, given past benchmarks, a Sandy Bridge does seem to idle at around 100W.

dorkanoid posted:I've been looking for one of those - which one did you get?
|
# ? Sep 15, 2014 17:30 |
|
|
|
I watched my Sandy Bridge Xeon system boot up connected to a Kill-A-Watt 30 minutes ago and it never went north of 105 watts, and I'm on here doing random desktop stuff and it's reading a fairly steady 64W. I'm running:
I could understand being off about 10% or so, but I'm at least 25% off of your figures. Care to describe your software and hardware setup better?

What's kind of hilarious is my NAS uses more power at "idle" than my desktop, and that's the one that's running a Haswell i3. That runs at 78W with 8 5400 RPM drives that never spin down, but it was about 54W or so when I had the drives spin down (it's a mini ITX ASRock C224 board). The PSU is a 300W 1U PSU that's supposed to be pretty OK, but I dunno how efficient it really is. That's what the UPS reports, and the Kill-A-Watt gave me that figure when I had it hooked up to that as well, so I figure they're pretty close to the right measurements. The Kill-A-Watt gave me about 12W for my Mac Mini, which puts it real close to Apple's specs for reference, so I don't distrust my meters even for low power measurements. I could match your current power figure with my desktop and rather piggy NAS together.
|
# ? Sep 16, 2014 02:38 |
|
My hardware is this:
- Intel Core i7-2600
- Asus P8P67 EVO
- 4x Corsair XMS3 DDR3-1600 4GB
- EVGA GeForce GTX 780
- Soundblaster ZxR
- Samsung 840 Pro SSD
- 2x WD Caviar RE4 2TB
- Corsair AX860i PSU
- Arctic Freezer 13 CPU cooler
- 2x Corsair 200mm case fans
- Oculus Rift powered down

When the disks stop spinning, it drops to 128W. The Corsair PSU is the variant with the digital voltage control, should that have an influence on the measurement? Also, I've disabled the Hyper-V hypervisor for giggles, but that didn't change a thing.
|
# ? Sep 16, 2014 11:04 |
|
I finally decided to ditch my tower full of drives for a Drobo 5N. I wish I'd done it from the start. This thing loving rules. It's also got a great developer network, so things like sickbeard, rutorrent, etc can all run on the box as well, so I'm really not losing anything in the way of functionality.
|
# ? Sep 16, 2014 16:37 |
|
Combat Pretzel posted:My hardware is this: Have you tried removing components one by one? I don't have integrated graphics on my CPU, but you do, so you could pull the GTX 780 and check that? Thing is, everyone's measured the idle power consumption of that card to be slightly lower than my GTX 680, so that'd be peculiar if your GPU is the primary difference. Beyond the GPU, I can only guess it's the RE4 drives, RAM, cooler, the extra case fan, and your motherboard being ATX with lots of goodies on it combined with the higher wattage rating PSU that can make up the difference, but even still I doubt you'd get another 60 watts higher idle on the hardware side. As for the software side of the equation, I fired up a CentOS 6.5 USB stick and it's running at 74 watts right after booting up and is around 70 watts after several minutes of idling, so it's within 10% of the power efficiency of Windows 7. I watched a Youtube video (granted, it's not running with Flash, which may be the bigger problem) and with the nouveau driver I'm still only seeing it go up to 79 watts in Firefox.
|
# ? Sep 16, 2014 17:11 |
|
I guess I have to go through pulling everything. Part of the power draw may be the graphics card tho. I've found an Excel sheet where I noted some things down that I'd measured with my current clamp a long while ago, and the only different parts were an older, more inefficient PSU and an older graphics card. The system drew 100W back then, and that possibly with apparent instead of real load, since clamps aren't supposed to be good at that.

For something else, any recommendations in regards to running NTFS on a ZVOL shared via iSCSI? Especially in relation to compression, since LZ4 seems to be the hot poo poo? I figured default 8KB volblocksize (to give the compression some data to work with) with 8KB clusters on top (to avoid two unrelated 4KB clusters hogging a block and possibly loving with the compression), and 4KB iSCSI sectors for it to haul rear end over the network (512B vs 4KB in my VM testbed makes like a 30% difference).
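To make the cluster-size reasoning concrete, here's a trivial sketch of the arithmetic (the sizes are just the ones from the post above, nothing authoritative):

```python
VOLBLOCKSIZE = 8192  # the default 8KB ZVOL block size discussed above

def clusters_per_block(cluster_size):
    """How many NTFS clusters land inside one ZVOL block."""
    return VOLBLOCKSIZE // cluster_size

# With 4KB clusters, two unrelated clusters share each 8KB block, so a
# copy-on-write rewrite or compression of one cluster drags its
# neighbour along with it.
print(clusters_per_block(4096))  # 2
# With 8KB clusters, every ZVOL block maps to exactly one cluster.
print(clusters_per_block(8192))  # 1
```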
|
# ? Sep 19, 2014 00:54 |
With NAS4Free how do I do an NFS share to any 192.168.1.* address on my network?
|
|
# ? Sep 19, 2014 06:30 |
|
fletcher posted:With NAS4Free how do I do an NFS share to any 192.168.1.* address on my network? Authorised network should be set to 192.168.2.0 /24.
|
# ? Sep 19, 2014 06:43 |
IOwnCalculus posted:Authorised network should be set to 192.168.2.0 /24. Ok, what is that wizardry, and why does it still not work? code:
fletcher fucked around with this message at 07:34 on Sep 19, 2014 |
|
# ? Sep 19, 2014 07:20 |
I think IOwnCalculus typo'ed as you want to use 192.168.1.0/24. If you want to know more, look up CIDR notation - and be glad you don't have to learn to calculate IPv6 CIDR in your head. BlankSystemDaemon fucked around with this message at 09:56 on Sep 19, 2014 |
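For anyone who wants to poke at CIDR notation without doing the binary math by hand, Python's standard ipaddress module will do it for you - a quick sketch:

```python
import ipaddress

# A /24 means the first 24 bits are the network part, leaving 8 bits
# (256 addresses) for hosts on that subnet.
net = ipaddress.ip_network("192.168.1.0/24")

print(ipaddress.ip_address("192.168.1.42") in net)   # True - on the subnet
print(ipaddress.ip_address("192.168.2.100") in net)  # False - a different /24
print(net.num_addresses)                             # 256
```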
|
# ? Sep 19, 2014 09:52 |
D. Ebdrup posted:I think IOwnCalculus typo'ed as you want to use 192.168.1.0/24. Ah! That makes more sense. Learned something useful in the process now too. But it still didn't work. code:
|
|
# ? Sep 19, 2014 10:44 |
|
I went and bought the N54L along with 4 disks in RAIDZ2 on FreeNAS. It only came with 4GB of RAM, but I tested it while waiting for the extra RAM to arrive. I was only able to achieve sequential write speeds of around 65 MB/s. Now, with a total of 12GB of RAM, it easily hits 100 MB/s. Does anyone know if the MiniDLNA plugin is supposed to scan for updated files automatically? Right now I have to disable and re-enable the plugin for it to register, and I'm getting tired of that.
|
# ? Sep 19, 2014 12:23 |
|
fletcher posted:Ah! That makes more sense. Learned something useful in the process now too. Have you checked the permissions on the directory you are sharing or the logs on the nfs server?
|
# ? Sep 19, 2014 16:30 |
thebigcow posted:Have you checked the permissions on the directory you are sharing or the logs on the nfs server? /mnt/my_stuff is 0777. I'm not seeing any errors in /var/log/system.log for my failed mount attempts anymore though. Not sure why.
|
|
# ? Sep 19, 2014 19:17 |
Also, and I don't know if this matters, but the nfs client is a VirtualBox VM with NAT networking. Maybe I need to forward some ports to the VM or something?
|
|
# ? Sep 19, 2014 21:58 |
Since you have NAT for your guest OSes, you might run into problems with regard to what IP NFS on the client side reports to the server. Can't you use network bridging instead?
BlankSystemDaemon fucked around with this message at 23:35 on Sep 19, 2014 |
|
# ? Sep 19, 2014 23:25 |
D. Ebdrup posted:Since you have NAT for your guest OS', you might run into problems with regard to what IP nfs on the client side reports to the server. Can't you use network bridging instead? Ah that's a good point, I'll give it a shot with a bridged network after work tonight. Thanks!
|
|
# ? Sep 19, 2014 23:28 |
|
D. Ebdrup posted:I think IOwnCalculus typo'ed as you want to use 192.168.1.0/24. Yes, I was reading my own configuration too closely and put in my own subnet instead. To rule the VM networking in/out, I'd say see if you can mount the NFS share directly from the host system. I have mine set to 192.168.2.100/32 since 192.168.2.100 is the only client that connects to it, but if I did 192.168.2.0/24 then anything on that subnet should be fair game. What firmware version are you on? IOwnCalculus fucked around with this message at 00:24 on Sep 20, 2014 |
# ? Sep 20, 2014 00:20 |
Woot, bridged networking did the trick. Thanks guys! I'm on NAS4Free 9.1.0.1 (431). Should I upgrade? I'm always scared to...
|
|
# ? Sep 20, 2014 04:32 |
|
I would, pretty sure there are some significant security updates between there and the current 9.2.0.1. The closest thing I ever had to a problem with updating N4F was when I was running it in a VM and encountered some really loving weird MSI interrupt errors that would halt the boot; I just had to manually edit the boot scripts each time I updated to disable MSI/MSI-X.
|
# ? Sep 20, 2014 05:22 |
|
Eh, apparently ZVOLs with default volblocksize on RAIDZ with 4K sector disks are pretty wasteful. So if you're using iSCSI or something, you might want to consider redoing the volumes backing the extents. http://zfsblog.com/2013/07/why-a-zfs-volume-references-more-space-than-refreservation/
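The allocation behaviour the article describes can be approximated in a few lines. This is my back-of-the-envelope reading of the RAIDZ rules (per-row parity plus padding to a multiple of parity+1), not actual ZFS code, so treat it as a sketch:

```python
import math

def raidz_alloc_sectors(data_bytes, width, parity, ashift=12):
    """Rough sector count a RAIDZ vdev allocates for one block."""
    sector = 1 << ashift                       # 4K sectors with ashift=12
    data = math.ceil(data_bytes / sector)      # data sectors needed
    rows = math.ceil(data / (width - parity))  # stripe rows touched
    total = data + rows * parity               # add per-row parity sectors
    total += (-total) % (parity + 1)           # pad to a multiple of p+1
    return total

# An 8K volblocksize block on an 8-wide RAIDZ2 of 4K-sector disks:
print(raidz_alloc_sectors(8192, width=8, parity=2))    # 6 sectors = 24K for 8K of data
# A 128K block amortises the parity far better:
print(raidz_alloc_sectors(131072, width=8, parity=2))  # 45 sectors = 180K for 128K
```

With numbers like that, it's easy to see why small-volblocksize ZVOLs on RAIDZ can reference so much more space than you'd expect.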
|
# ? Sep 21, 2014 23:15 |
|
I'm getting only ~15-20 MB/s transfer from NAS to client on a wired gigabit network. NAS is a NAS4Free with 8 x 3TB WD Red drives in a RAIDZ2 and a total size ~15TB. Drives are connected via SATA to an ASRock H77 Pro4/MVP Mainboard. Network is off an Intel Pro 1000/GT PCI NIC. In order to increase performance am I better off looking at my networking setup or looking to move the drives off the SATA connectors and onto a dedicated RAID card?
|
# ? Sep 22, 2014 04:28 |
|
Nystral posted:I'm getting only ~15-20 MB/s transfer from NAS to client on a wired gigabit network. NAS is a NAS4Free with 8 x 3TB WD Red drives in a RAIDZ2 and a total size ~15TB. Drives are connected via SATA to an ASRock H77 Pro4/MVP Mainboard. Network is off an Intel Pro 1000/GT PCI NIC. That is very, very low and there is something wrong with your configuration.
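For a sense of how low: a quick sanity check of what wired gigabit should deliver (the ~6% framing/TCP overhead figure is a rough rule of thumb, not a measurement):

```python
LINK_BPS = 1_000_000_000  # gigabit Ethernet line rate
OVERHEAD = 0.06           # rough Ethernet/IP/TCP framing overhead assumption

raw_mbs = LINK_BPS / 8 / 1e6
print(raw_mbs)                   # 125.0 MB/s raw
print(raw_mbs * (1 - OVERHEAD))  # ~117 MB/s realistic ceiling; 15-20 MB/s is well under 20% of that
```

Something like iperf between the client and the NAS would tell you whether it's the network path or the disk/CIFS side that's the bottleneck.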
|
# ? Sep 22, 2014 05:26 |
|
What's the client machine?
|
# ? Sep 22, 2014 05:54 |
|
The test client machine is a Win 7 Dell laptop with three spinning disks and 1 SSD. Other machines are a mix of Dell and Apple laptops, but they're mostly wireless, so speed is less of a concern there. Route is NAS -> Netgear R7000 (running TomatoUSB) -> Cisco 8-port SOHO switch -> laptop. My configuration is 8 x 3TB WD Red drives in a RAIDZ2 connected via SATA. There is no caching drive. NAS4Free is serving up the share via AFP, CIFS/SMB2, and NFS. The Dell is obviously going via CIFS/SMB2.
|
# ? Sep 22, 2014 14:03 |
|
Are there any options for a very tiny 4-disk (2.5") NAS but with a real motherboard with ECC support? DS414slim is perfect except for the fact that I want ZFS and not SHR.
|
# ? Sep 22, 2014 23:53 |
|
DNova posted:Are there any options for a very tiny 4-disk (2.5") NAS but with a real motherboard with ECC support? DS414slim is perfect except for the fact that I want ZFS and not SHR. The Cooler Master Elite 110 and the Silverstone ML06 are two mITX cases that support 4x2.5" drives. But I don't think you'll ever be able to get something that's truly size comparable to the slim since you're constrained by the size of the mITX motherboard.
|
# ? Sep 23, 2014 00:28 |
|
DNova posted:Are there any options for a very tiny 4-disk (2.5") NAS but with a real motherboard with ECC support? DS414slim is perfect except for the fact that I want ZFS and not SHR.
|
# ? Sep 23, 2014 00:35 |
|
Nystral posted:My configuration is 8 x 3TB WD Red drives in a RAIDZ2 connected via SATA. There is no caching drive. Also remember that some operations are just slow. If you're pushing over 1GB worth of 25KB JPEGs, that's gonna transfer a lot slower than a single 1GB AVI or something.
|
# ? Sep 23, 2014 14:41 |
|
Someone with access to a decent sample size says that, among others, some WD Red 3TB drives fail at an above average rate. Short read, graphs and specific model numbers behind the link. Thought this was relevant to this thread.
|
# ? Sep 24, 2014 22:05 |
|
Synology just announced their new 4-bay high-end product (replacing the DS412+). The CPU is now a 2.4GHz Atom and RAM is bumped to 2GB. https://www.synology.com/en-us/products/DS415+
|
# ? Sep 24, 2014 22:12 |
|
Flipperwaldt posted:Someone with access to a decent sample size says that, among others, some WD Red 3TB drives fail at an above average rate. Oh I've got about 5 of those here, in various computers/servers. Time to check model numbers.
|
# ? Sep 25, 2014 06:44 |
|
dorkanoid posted:Oh I've got about 5 of those here, in various computers/servers. Time to check model numbers. Here's the table and graph, for those unwilling to click a link:
|
# ? Sep 25, 2014 10:38 |
|
great, i've got nine of the wd red 3TB drives between two servers. how about those low failure rates on the hitachis tho? did not expect that.
|
# ? Sep 25, 2014 13:14 |
|
Does anyone have any experience with Asustor? I've been looking at a new 4 bay Synology but Asustor has a 6 bay for not much more. Specs are a lot better too.
|
# ? Sep 25, 2014 14:17 |
|
Nulldevice posted:great, i've got nine of the wd red 3TB drives between two servers. how about those low failure rates on the hitachis tho? did not expect that. That's pretty much how their last HDD analysis played out, too. In fact, if I remember correctly, they closed the article by saying if they could get them at a good price reliably, they'd opt for nothing but Hitachis--but that "at a good price reliably" is apparently a lot harder for those drives.
|
# ? Sep 25, 2014 18:30 |
|
Quick question. Say I have a dozen WD Red 4TB drives and I want to make a Raid-Z array for storing digital multimedia files that will be streamed to maybe 3 devices simultaneously. Am I better off going with a single Raid-Z2 array, or with a pair of Raid-Z1 arrays? I'm leaning towards Z2 just because of that second drive of parity information. The array size should be pretty much the same with a single array versus 2 arrays with half the drives, right? Or is there some more overhead with larger arrays?
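Back-of-the-envelope, the raw capacity does come out the same either way - a quick check, ignoring ZFS metadata/padding overhead and assuming the dozen 4TB drives from the post:

```python
DRIVES = 12
DRIVE_TB = 4

def usable_tb(vdevs, parity_per_vdev):
    """Raw usable capacity across equal-width RAID-Z vdevs."""
    width = DRIVES // vdevs
    return vdevs * (width - parity_per_vdev) * DRIVE_TB

print(usable_tb(1, 2))  # one 12-wide RAID-Z2  -> 40 TB usable
print(usable_tb(2, 1))  # two 6-wide RAID-Z1s  -> 40 TB usable
print(usable_tb(2, 2))  # two 6-wide RAID-Z2s  -> 32 TB usable
```

Same raw space for one Z2 as for two Z1s; the trade-off is which drive-failure combinations you survive and how long scrubs and resilvers take.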
|
# ? Sep 25, 2014 18:50 |
|
Mthrboard posted:Quick question. Say I have a dozen WD Red 4TB drives and I want to make a Raid-Z array for storing digital multimedia files that will be streamed to maybe 3 devices simultaneously. Am I better off going with a single Raid-Z2 array, or with a pair of Raid-Z1 arrays? I'm leaning towards Z2 just because of that second drive of parity information. The array size should be pretty much the same with a single array versus 2 arrays with half the drives, right? Or is there some more overhead with larger arrays? Don't do a single array. One of the reasons is that scrubbing will take forever and a day.
|
# ? Sep 25, 2014 19:20 |
|
I'd do two Z2s or four Z1s, but that's just me.
|
# ? Sep 25, 2014 19:23 |
|
|
|
I have mine set up in two Z1 sets of 4 drives each. A full scrub takes about 12 hours, with about 75% of the pool used. That's only 18TB capacity though, since I've got 2 and 4TB drives. I think there's a technical reason you want an odd number of drives in a Z1, but I forget exactly what it is.
|
# ? Sep 25, 2014 20:05 |