|
I've been rolling my own for a while now (and I'm also an idiot) but thread consensus seems to be Unraid. Why that over FreeNAS?
|
# ¿ Oct 2, 2018 09:23 |
|
|
Thanks, Gunslinger and Miniboss. That makes sense, but I already understand FreeNAS, and I've got enough hardware to throw it into overkill mode, so I'm good with it: Xeon E5-2620 v3, 64GB RAM, 256GB NVMe L2ARC, 256GB SSD SLOG, one RAID-Z1 of 4x 4TB shucked drives, one RAID-Z1 at ~2TB total (forget the individual capacities), plus a Dell MD1000 with 14x 1TB 6Gb SAS drives as an rsync backfill, all tied together with 10GbE.
|
# ¿ Oct 3, 2018 05:17 |
|
Atomizer posted:
|
# ¿ Feb 9, 2019 11:33 |
|
BobHoward posted:LMAO. Before HGST was Hitachi it was IBM's HDD business unit. IBM included the San Jose campus in the sale, so HGST R&D and corporate HQ never moved during the entire time Hitachi operated it as a wholly-owned subsidiary. (I don't know if manufacturing moved around but I'm sure it was already outside the US at the time of sale.) They appear to have finally moved after getting sold to WD... to WD's campus... which is also in San Jose.

I mean, yes, I'm fundamentally aware of this, but my company is also a "made in USA" processor board manufacturer, so when we tout that and then clients bitch about commodity components, it's, like, erm, you do realize every processor, NIC, and every other IC on this board was surface-mounted in the US but didn't originate here, right? tl;dr: the military is dumb and reactionary. The Supermicro story, true or not, has them shook.
|
# ¿ Feb 9, 2019 12:26 |
|
necrobobsledder posted:The US military is scared of supply chain infiltration because the US itself practices supply chain compromises and any other nation state could do the same to us and has absolutely done so in the past, specifically China but not so much Russia because Soviet manufacturing was never as globalist as Asian manufacturing. Because the US has such a poor manufacturing climate for commodity but high-tech goods like motherboards, ICs, etc. it is basically impossible to "buy American" for a full supply chain from top to bottom (see: Apple's vain efforts to try to manufacture various products in the US). It'll pay so much for this security that it's really hurting the Pentagon's ability to fund a lot of tech projects, and it is therefore looking outward to companies like Google and Amazon to do what defense contractors used to do for several decades (because those contractors were built primarily to service the government and military officers' retirement plans to the T no matter how nonsensical, not to actually build efficient, world-class tech as a primary mission). This is driving up revenue for the big tech companies and keeping DC beltway bandit pay down substantially.

/this is not an empty quote because I hate working with military prime contractors, and it is absolutely true. They'll swear up and down they suck the dick of the "warfighter" but, at the end of the day, they're in it for profit. Profit means commodity/COTS product.
|
# ¿ Feb 9, 2019 22:41 |
|
Yeah, they've discontinued a lot of the older HBAs over the years but the 9207 just keeps trucking along. The 9207-8i is what I use for my internal drives with FreeNAS and an expander backplane, zero complaints. Been rock solid for many years.
|
# ¿ Feb 13, 2019 01:46 |
|
The best, lowest power option is a Quadro P2000.
|
# ¿ Mar 5, 2019 14:40 |
|
Volguus posted:That's a holy-moly expensive card.

I said, "best." It's single-slot, it's a Quadro, and it doesn't require external power, which makes it usable in a lot of COTS/retired rackmount server environs, as well as playing nice with passthrough on a lot of hypervisors, which Nvidia tries to limit for the GTXs. I just happen to like it, additionally, because I have access to them for testing purposes through work and have a lot of experience with them. edit to say: We're buying them for, well, less than what I see as street price in not-super-astronomical volume, so I can see why you'd react that way looking at the Amazon price. If you're paying more than 350 USD you're paying *way* too much. Crunchy Black fucked around with this message at 04:52 on Mar 7, 2019 |
# ¿ Mar 7, 2019 04:50 |
|
*sees 50 new posts in this thread, thinks, well this is gonna go one of two ways...* *click*
|
# ¿ Mar 28, 2019 17:42 |
|
Beaucoup Haram posted:How good is GPU encoding? Could it do 10 1080p streams simultaneously? If I could cut down on the resource requirements I agree it would be a lot easier to find something that fit.

Quadro P2000.
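For a ballpark of what one of those streams looks like in practice, here's a sketch of a single hardware transcode with ffmpeg's NVENC encoder. Filenames and bitrate are placeholders, the exact hwaccel flags vary by ffmpeg version, and Plex/Emby drive NVENC/NVDEC through their own pipelines rather than a command line like this:

```shell
# One 1080p hardware transcode session via NVENC. The P2000, being a
# Quadro, has no artificial cap on concurrent NVENC sessions, so you
# can run several of these in parallel -- consumer GTX cards are
# driver-limited to a small number of sessions.
ffmpeg -hwaccel cuda -hwaccel_output_format cuda \
    -i input.mkv \
    -c:v h264_nvenc -preset fast -b:v 8M \
    -c:a copy \
    output.mkv
```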
|
# ¿ Apr 10, 2019 13:16 |
|
Moving my total situation from a discombobulated Bulldozer with local drives and a dual Xeon E5-2603 v3 with a Dell MD1000 to a single 2603 v3 with some added NVMe. Should cut down on power and make things a little more manageable. Hopefully the arrays come along from Corral as I plan on upgrading in place. Cross your fingers.
|
# ¿ Apr 29, 2019 00:04 |
|
IOwnCalculus posted:The most picky it should get is if it complains about not exporting the zpool first.

Thanks IOC! Came home from dinner a little more lit than I was planning on, so postponing for now, but I feel better about it overall.
|
# ¿ Apr 29, 2019 02:52 |
|
Okay, the fight to get my pools migrated over to 11.x from Corral does not go well. Basically, I had a motherboard that was hosting a Dell MD1000 DAS enclosure that was its own pool with some NVMe cache. I've inserted that motherboard into the 2U chassis that was hosting 2 other pools with internal drives and some SATA cache.

This is worrisome because it sees the pools that that particular hardware didn't create and has imported them successfully, but it does not see the pool in the MD1000 as available to import. It is showing the disks in the Disks/Import Disk dialogue but no active pool with them. Is it safe/necessary to import the drives the pool is on in this instance, since it's not auto-grabbing the whole pool? Based on the outputs I saw during install, I thought it had seen it when it did its initial queries for pools, and I didn't have to do it for the two others. Crunchy Black fucked around with this message at 01:14 on Apr 30, 2019 |
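For posterity, the CLI side of what I'm attempting looks roughly like this (a sketch only; "mdpool" is a placeholder for whatever the Corral install named the MD1000 pool, and be sure of your backups before forcing anything):

```shell
# Show pools ZFS can detect on attached disks but hasn't imported.
zpool import

# If the MD1000 pool shows up by name, import it; -f forces the
# import when the pool wasn't cleanly exported by the old install.
zpool import -f mdpool

# Confirm the data vdevs plus the cache/log devices all came along.
zpool status mdpool
```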
# ¿ Apr 29, 2019 23:19 |
|
Anyone have any thoughts? Going to try importing the disks tonight if I don't have any feedback saying, "wait, don't!"
|
# ¿ May 1, 2019 22:35 |
|
Yeah as others have stated, the Purples have firmware that is specific for sustained, streaming writes. Probably not ideal in a typical consumer NAS situation, at least insofar as this thread is concerned.
|
# ¿ May 3, 2019 20:50 |
|
Atomizer posted:The last time I inquired about that in this thread, the answer included the clarification that the video/streaming behavior of those specific drives only applied in that "multiple video stream" scenario, so they function as regular drives in other applications.

I mean, that may be true, but I can't claim to understand the mechanism by which the drive would know that it was in that situation and optimize for it?
|
# ¿ May 6, 2019 17:58 |
|
H110Hawk posted:There are SCSI commands to handle the stream provisioning, update its status, etc. You then write your data into that SCSI stream. Pry open a HDD-based DVR and you will find these disks; put in an otherwise equivalent or better non-DVR disk and the device may simply reject it for being unable to issue those commands.

I know some ex-Cisco guys that would find this post really funny, but it makes sense, thanks!

D. Ebdrup posted:Harddrives are a black box, but presumably they use some sort of processing in the firmware to look at the contents of the cache and if it matches a particular set of heuristic/magic datapoints then the specific features kick in?

Having the cache of a drive actively parsing its own contents for file attributes seems incredibly taxing and unnecessary, no?

e: so what's the common knowledge when you import a pool into FreeNAS and you can't see any file structure in the share? Am I missing something here? I hope... Crunchy Black fucked around with this message at 04:49 on May 7, 2019 |
# ¿ May 7, 2019 04:47 |
|
H110Hawk posted:If you could drag them out back and beat them for those ATA Flash disks which were totally not just off-the-shelf parts with some secret added to it I would appreciate it. Make sure to use a Louisville Slugger® brand bat, accept no substitutes.

I'd love to know the situation that has you quite so in stitches but, unfortunately, I can't quite elaborate any further due to confidentiality concerns.
|
# ¿ May 7, 2019 05:07 |
|
Hadlock posted:
|
# ¿ May 10, 2019 16:54 |
|
Okay, finally figured it out. I had NFS shares trying to use the same pathnames as my SMB shares. BSD does not like this! But, yay! Found the data. Glad I didn't give up; the only reason I didn't is that, when trying to create a share, the FreeNAS web interface could still enumerate the file structure, which was my clue that the data was there but also that I am an idiot.
|
# ¿ May 10, 2019 22:49 |
|
ProjektorBoy posted:"This Guy Shucks" - The Consumer NAS/storage megathread

Inshallah. Everything is back up and running on the stable tree. More RAM is sitting on the bench, just need to bring it down to install. Probably get to that this morning.
|
# ¿ May 12, 2019 15:32 |
|
Schadenboner posted:How do I feel about this guy's build list: https://blog.briancmoses.com/2019/03/diy-nas-2019-edition.html ?

The vast, vast majority of ruggedized edge-of-cloud/IoT devices use 2.5" disks with no problem. I've used those Seagates he specs in a lot of stuff like that with no issues and would recommend them. If you like his form factor and don't want to rackmount eventually, I find no fault in his logic. I'd probably quadruple the RAM, but you can see the specs I'm running at home on the last page just for personal use; I like to overkill things.
|
# ¿ May 13, 2019 02:28 |
|
D. Ebdrup posted:A Xeon E5 at up to 3.2GHz with 12 threads, 32GB memory and a Mellanox NIC? That sounds like a rad machine to run FreeBSD on.

ahem, *2 Mellanox 10GbE NICs* just don't have the other interface up and teamed yet since, bizarrely, FreeNAS doesn't support more than one interface with DHCP?
|
# ¿ May 13, 2019 17:30 |
|
That thing is cool as poo poo.

D. Ebdrup posted:My apologies, it's an even cooler setup.
|
# ¿ May 14, 2019 00:56 |
|
Got an email yesterday from my Seagate contact. Exos X16 16TB drives should be hitting channel sales by July.

e: Paul MaudDib posted:CPUs

I'm actually pretty sure Westmere was never designed to be drop-in compatible with Bloomfield. e2: my bad, they should be pin-compatible; just my 2c, though, there were some... interesting incompatibilities that came up in those early tick/tocks w/r/t BIOS, etc. Crunchy Black fucked around with this message at 08:41 on May 17, 2019 |
# ¿ May 17, 2019 08:35 |
|
Hoobastank4ever97 posted:https://www.amazon.com/gp/product/B07CMH78R5

Already sold out and back up to $234 for me.
|
# ¿ May 19, 2019 14:01 |
|
redeyes posted:The 16TB dual drive units are about $167 right NOW.

So how does the controller work on this board? It says RAID0 ready... could I shuck it and put two lower-capacity drives in it to RAID0 them automatically, or?
|
# ¿ May 19, 2019 16:07 |
|
Alzabo posted:I have a spare Intel x79 system laying around, converting it into a FreeNAS box doesn't seem like a bad idea?

Basically any LSI/Avago/Broadcom HBA that's in "IT mode." e: if you held a gun to my head, here you go: https://www.ebay.com/itm/New-LSI-Me...XAAAOSwdGFYwCX-
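If anyone picks up one of these cards in IR (RAID) mode instead, cross-flashing to IT firmware is usually a couple of sas2flash commands. This is only a sketch from memory: the firmware filenames below are from LSI's 9211-8i package, so use the package for your exact card, note your SAS address first, and do not power off mid-flash.

```shell
# See which controllers sas2flash can find and what they're running.
sas2flash -listall

# Erase the existing flash, then load the IT firmware and (optional)
# boot ROM. 2118it.bin / mptsas2.rom are the 9211-8i filenames.
sas2flash -o -e 6
sas2flash -o -f 2118it.bin -b mptsas2.rom
```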
|
# ¿ May 24, 2019 05:17 |
|
RIP power bill.
|
# ¿ May 27, 2019 12:52 |
|
Atomizer posted:good advice

Also, just in case someone hasn't seen it, Lee Hutchinson of Ars did an incredible deep dive, as he is wont to do, on building a Steam cache server to alleviate a lot of headache if you already have a NAS and ever suffer some sort of failure. https://arstechnica.com/gaming/2017/01/building-a-local-steam-caching-server-to-ease-the-bandwidth-blues/
|
# ¿ Jun 1, 2019 13:05 |
|
JESUS. I can't remember the last time I saw a legitimate full-length card. What the gently caress host are you going to put that in? Cool pickup, though; mind if I ask price?
|
# ¿ Jun 3, 2019 22:41 |
|
Nice! Yeah those drives would be power hogs, but that seems like a solid plan. Probably not the most dense option but a poo poo pile of IOPS!
|
# ¿ Jun 3, 2019 23:39 |
|
CommieGIR posted:I don't know if the P800 Controller supports SSD caching, but I may split the array into half SSDs and half 1-2TB SATA for density. HP claims the P800 can handle up to 900GB SAS per drive, and others have claimed up to 2TB per disk. We'll see.

Holy hell, you changed your AV, didn't realize it was you, Commie! Hope all is well. Been following your snapchat of various Audi shenanigans. Crunchy Black fucked around with this message at 23:55 on Jun 3, 2019 |
# ¿ Jun 3, 2019 23:53 |
|
IOwnCalculus posted:Looks like it's a LSI SAS1078, so unless HP hosed something up, it should support up to (but not above) 2TB per drive.

Could you not technically go with any other external SAS HBA to get around this limitation?
|
# ¿ Jun 3, 2019 23:58 |
|
CommieGIR posted:Reporting back: ZFS and iSCSI are not friends. At all. Going to have to move to NFS.

Yeah, I have had a similar experience, personally and professionally.
|
# ¿ Jun 16, 2019 09:33 |
|
CommieGIR posted:Yeah, I'm gonna see how many 128 or 240 GB SSDs I can pick up for slog

Man, you guys up by the dam must get your power for nothing!
|
# ¿ Jun 16, 2019 23:32 |
|
Are you really having to reboot your storage server that often?
|
# ¿ Jun 17, 2019 20:37 |
|
Can anyone explain to me why FreeNAS refuses to have more than one interface on DHCP? Is it a limitation of BSD or is there some other constraint? I don't ultimately care, I can set up manual IPs and reservations; it's just shocking to me that it's a restriction.
|
# ¿ Jun 22, 2019 15:32 |
|
D. Ebdrup posted:The real question is, after you asked this last time, why haven't you setup LACP?

Sorry, missed that one and just circled back around to this problem. /might've also been a bit impaired when I wrote the last post... Thanks!
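Following up for anyone who finds this later: on the FreeBSD side, LACP is just a lagg interface, and the aggregate gets the single DHCP lease, which sidesteps the one-DHCP-interface restriction. In FreeNAS you'd configure this through the Network -> Link Aggregations UI rather than editing rc.conf by hand; the interface names below are from my Mellanox cards (mlxen), and your switch needs a matching LACP port-channel.

```shell
# /etc/rc.conf fragment -- bond two 10GbE ports with LACP.
# lagg0 becomes the only interface asking for a DHCP lease.
ifconfig_mlxen0="up"
ifconfig_mlxen1="up"
cloned_interfaces="lagg0"
ifconfig_lagg0="laggproto lacp laggport mlxen0 laggport mlxen1 DHCP"
```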
|
# ¿ Jun 25, 2019 23:18 |
|
|
yeah for VM stuff these days, *ESPECIALLY* ephemeral stuff for CI/CD builds etc., just get 2 SSDs and enjoy
|
# ¿ Jul 6, 2019 20:00 |