|
mayodreams posted:Why are our VMs so slow!?!
|
# ? Feb 18, 2016 02:41 |
|
adorai posted:AHHAAHAHAHAH. My end users have been known to complain if we hit 10ms for more than a few seconds. Well, to be fair, 10ms of write latency is very different from 10ms of read latency and is more likely to cause noticeable application performance changes.
|
# ? Feb 18, 2016 03:00 |
|
Can someone point me in the right direction of some decent NetApp 101 resources? All of my storage experience has been with Compellent and a little bit of EMC, and I just threw my hat in the ring for a position that uses UCS/NetApp for their VMware clusters. I'd like to be able to talk somewhat intelligently about it should I get called: "I haven't used NetApp, but <compellent concept> translates to <netapp concept>," etc.
|
# ? Feb 18, 2016 19:04 |
|
Trying to get a general Nimble vs. NetApp price comparison, but I can't find solid NetApp numbers. Can someone give me a general idea of a FAS2520 with 8ish TB? Our Nimble quote is $25k and I really want that, but my boss wants to see how much a NetApp would be.
|
# ? Feb 26, 2016 23:27 |
|
NetApp is priced purely on a "How much can I squeeze from your company" basis. You need an actual quote from a vendor. I could write a number here but it would have no meaning.
|
# ? Feb 26, 2016 23:53 |
|
I can give you a ballpark list price on a bundle in GBP. I can't share partner purchase pricing.
|
# ? Feb 27, 2016 00:07 |
|
Yea, there's really no way to give useful pricing to someone else, since it's based on partner level, promos, how deep the NetApp AM is willing to go, how deep the partner AM is willing to go, and how deep the competition has gone. But NetApp can certainly be price competitive with Nimble if they choose. Nimble has been suffering from some financial ills of late; it will be interesting to see if they are willing to discount deep to get customers or if they start feeling pressure to actually make money at some point.
|
# ? Feb 27, 2016 05:20 |
|
Has anyone done an onsite demo of Qumulo or, less likely, run it in production? If so, send me a PM. I have some questions that are probably best answered outside of the forum.
|
# ? Mar 1, 2016 23:48 |
|
Company is considering putting in a whole lot of security cameras and needs to spec ~300 TB of usable space for video footage. Any recommendations for a good mixture of enterprise support and low performance/low cost? My first thought was a Cisco UCS 3160 or 3260 with a bunch of 7200 RPM 6TB drives in RAID 6, but I don't usually deal with arrays that large. Since this is video footage that will be rarely accessed unless needed, reliability and retention are the main considerations.
|
# ? Mar 2, 2016 08:09 |
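For the RAID 6 option above, the drive-count math is quick to sketch. This assumes 8+2 RAID 6 groups and decimal-TB accounting with no filesystem overhead — both my assumptions, not from the post:

```python
import math

# Rough drive-count math for ~300 TB usable on 6 TB disks in RAID 6.
USABLE_TB_NEEDED = 300
DRIVE_TB = 6
GROUP_DATA = 8    # data drives per RAID 6 group (assumed 8+2 layout)
GROUP_PARITY = 2  # parity drives per RAID 6 group

usable_per_group = GROUP_DATA * DRIVE_TB                 # 48 TB per 10-drive group
groups = math.ceil(USABLE_TB_NEEDED / usable_per_group)  # round up to whole groups
total_drives = groups * (GROUP_DATA + GROUP_PARITY)

print(f"{groups} RAID 6 groups, {total_drives} x {DRIVE_TB} TB drives, "
      f"{groups * usable_per_group} TB usable")
```

So hitting 300 TB usable this way means on the order of seventy spindles before hot spares, which is worth knowing before picking a chassis.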
|
Seeking opinions on a DIY FreeNAS/ZFS (or EXT4) RAID 10 of Samsung 850 EVO/Pro SSDs versus what vendors with optimized file systems might offer.
|
# ? Mar 2, 2016 11:55 |
|
Vendors selling you managed storage will always be more expensive, if that is what you are asking. Do you intend to do a software mirror-stripe within the ZFS pool as opposed to hardware RAID 10? Finally, I have a hypervisor at home backed by an 850 EVO for production, an old big HDD for local backup and snapshots, and Google Cloud Nearline for archiving. I intend to have the SSD last the five years needed until the next big storage revolution becomes cheaper (probably going to be Intel's RAM-like SSD). I host Steam servers, Minecraft, music streaming, and a few sites. Even under load, I am not projected to go over the EVO's write volume warranty before then. Are you dead sure you want the Pro? Potato Salad fucked around with this message at 13:35 on Mar 2, 2016 |
# ? Mar 2, 2016 13:25 |
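On the software mirror-stripe question above: ZFS's equivalent of RAID 10 is just a pool of mirror vdevs, which ZFS stripes writes across automatically. A minimal sketch, with hypothetical FreeBSD-style device names (substitute your own; this is admin config that needs real disks):

```shell
# Create a mirror-stripe ("RAID 10") pool from four disks:
# two 2-way mirror vdevs, striped by ZFS automatically.
zpool create tank mirror da0 da1 mirror da2 da3

# Verify the layout and pool health.
zpool status tank
```

No hardware RAID controller is involved; ZFS wants raw disks so it can manage cache flushes and checksums itself.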
|
Asimov posted:Company is considering putting in a whole lot of security cameras and needs to spec ~300 TB of usable space for video footage. Any recommendations for a good mixture of enterprise support and low performance/low cost? My first thought was a Cisco UCS 3160 or 3260 with a bunch of 7200 RPM 6TB drives in RAID 6, but I don't usually deal with arrays that large. Since this is video footage that will be rarely accessed unless needed, reliability and retention are the main considerations. Can't answer this without the I/O requirements or some basic idea thereof. How many cameras writing concurrently? What framerate, resolution, codec? Do they stream video directly to the disk or do they buffer files locally and burst-dump the whole thing at once?
|
# ? Mar 2, 2016 15:00 |
|
Vulture Culture posted:Can't answer this without the I/O requirements or some basic idea thereof. How many cameras writing concurrently? What framerate, resolution, codec? Do they stream video directly to the disk or do they buffer files locally and burst-dump the whole thing at once? Yeah, I was about to say: for an installation that large, your camera installer/VAR should be doing the legwork for you, and should at least be able to provide you with IOPS requirements, etc. If they're not going to, or can't, provide you with a turnkey system or solid specifications for the archiving portion of the storage, then get another VAR.
|
# ? Mar 2, 2016 15:27 |
|
Panda, for some reason I thought I was in the SSD thread, so I answered as though you were building a home system. What is your budget like, and what kind of capacity do you need? Is this going to back any systems like hypervisors or heavy-load databases? Potato Salad fucked around with this message at 23:57 on Mar 2, 2016 |
# ? Mar 2, 2016 15:29 |
|
Panda Time posted:Seeking opinions on a DIY FreeNAS/ZFS (or EXT4) RAID 10 of Samsung 850 EVO/Pro SSDs versus what vendors with optimized file systems might offer. It will be cheaper, and also probably slower, not fully redundant, and unsupported, so you will become 24/7 support for a system you probably don't fully understand, at least assuming there is any important data on there.
|
# ? Mar 2, 2016 19:26 |
|
devmd01 posted:Yeah, I was about to say: for an installation that large, your camera installer/VAR should be doing the legwork for you, and should at least be able to provide you with IOPS requirements, etc. If they're not going to, or can't, provide you with a turnkey system or solid specifications for the archiving portion of the storage, then get another VAR. Good points, I'll bring it up with the VAR. It's quite early in the project, so we were just spitballing. 200 cameras with a mix of models works out to a 1500 Mbps network datarate and a 1250 Mbps storage datarate when I run it through a few online calculators. I'm sure the reseller has some storage people I can talk to. Thanks guys.
|
# ? Mar 2, 2016 21:01 |
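Taking the 1250 Mbps storage datarate above at face value, the retention math for ~300 TB usable is a one-liner. Continuous recording is my assumption; motion-triggered recording would stretch this considerably:

```python
# Back-of-envelope: how long ~300 TB lasts at 1250 Mbps sustained ingest.
STORAGE_MBPS = 1250   # megabits per second, from the post
USABLE_TB = 300       # target usable capacity, decimal TB

bytes_per_day = STORAGE_MBPS * 1e6 / 8 * 86_400  # Mb/s -> bytes/day
tb_per_day = bytes_per_day / 1e12                # decimal TB/day

print(f"{tb_per_day:.1f} TB/day -> ~{USABLE_TB / tb_per_day:.0f} days of retention")
```

That puts retention at roughly three weeks, which is the number to check against whatever the retention policy actually requires.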
|
Potato Salad posted:What is your budget like, and what kind of capacity do you need? Is this going to back any systems like hypervisors or heavy-load databases? This is for 4K video with a 10G LAN. The budget is 'can I convince management this is legit for x price and y value versus our current setup' ($50k max?). I'm considering a smaller SSD hardware RAID 10 of about 2-10TB, along with a few pools of enterprise HDDs totaling 100TB of usable space, in some mix of RAID 6 or RAID 10. I'm a little uncertain about the performance returns versus cost of SAS drives versus SATA for this purpose. Turnkey vendor hardware and custom file systems look to be upwards of $1000/TB, and that's for HDDs, not SSDs. NippleFloss posted:It will be cheaper, and also probably slower, not fully redundant, and unsupported, so you will become 24/7 support for a system you probably don't fully understand, at least assuming there is any important data on there. Assuming we buy a few extra duplicate RAID cards and drives and do backups outside of this system, I think my main concern is any eventual software/file system/electronic level quagmire fuckaroos that might bite me?
|
# ? Mar 3, 2016 01:17 |
|
If you can get 100TB usable with a bit of SSD from an actual vendor for 50k then I would be more than surprised.
|
# ? Mar 3, 2016 01:27 |
|
Thanks Ants posted:If you can get 100TB usable with a bit of SSD from an actual vendor for 50k then I would be more than surprised.
|
# ? Mar 3, 2016 02:27 |
|
Thanks Ants posted:If you can get 100TB usable with a bit of SSD from an actual vendor for 50k then I would be more than surprised. SSDs aren't especially useful in high-throughput applications like this. Spinning media does fine writing big sequential data streams. Panda Time posted:Assuming we buy a few extra duplicate RAID cards and drives and do backups outside of this system, I think my main concern is any eventual software/file system/electronic level quagmire fuckaroos that might bite me? ZFS is a very mature file system; the likeliest cause of data loss is going to be misconfiguration on your part or misbehaving commodity hardware. Part of what you pay for when you buy from a vendor is QA on all of the different hardware and firmware pieces, and a comfortable assurance that they will behave as expected. For example, how do you know that the disks you buy, or the RAID controller (that you won't even be using with ZFS), will actually obey a cache flush request properly? The other half is that storage performance problems can be really tricky to troubleshoot, and you're going to be on the hook for that. A system that is serving data too slowly is exactly as useful as a system that is down. And if it's down long enough, that's functionally equivalent to a data loss event.
|
# ? Mar 3, 2016 03:15 |
|
NippleFloss posted:SSDs aren't especially useful in high-throughput applications like this. Spinning media does fine writing big sequential data streams. OTOH, enough sequential data streams writing concurrently and you now have a really big random data stream, so the devil's in the details of exactly what's writing and how.
|
# ? Mar 3, 2016 05:14 |
|
Vulture Culture posted:OTOH, enough sequential data streams writing concurrently and you now have a really big random data stream, so the devil's in the details of exactly what's writing and how Yea, but these days most storage, even if you roll your own with ZFS, will journal that in a very small amount of flash and turn it back into a number of sequential streams. Basically the benefit of anything more than a small amount of SSD as a journal will be completely wasted because there will be no appreciable read caching and he can't buy nearly enough SSD to fit all of the workload on it, and it probably wouldn't help too much anyway. To the OP I'd probably look at NetApp E-series since it's pretty much build for high ingest rate applications and isn't too expensive, though I doubt anything worth buying can be had for ~$500/TB. Isilon would work too, but would probably be overkill.
|
# ? Mar 3, 2016 05:35 |
|
NippleFloss posted:Yea, but these days most storage, even if you roll your own with ZFS, will journal that in a very small amount of flash and turn it back into a number of sequential streams. Basically the benefit of anything more than a small amount of SSD as a journal will be completely wasted because there will be no appreciable read caching and he can't buy nearly enough SSD to fit all of the workload on it, and it probably wouldn't help too much anyway. e: I agree that spinning disk should be fine but if possible I'd probably go with RAID-10 over something with parity Vulture Culture fucked around with this message at 06:03 on Mar 3, 2016 |
# ? Mar 3, 2016 05:54 |
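The RAID-10-vs-parity tradeoff mentioned above, in rough numbers: usable fraction of raw capacity against the classic small-write penalty (I/Os per host write). These are textbook illustrative figures, not measurements from any particular array:

```python
# Compare mirror vs double-parity layouts on the same raw capacity.
def raid_summary(name, usable_fraction, write_penalty, raw_tb):
    """Format usable capacity and the classic small-write penalty."""
    return (f"{name}: {raw_tb * usable_fraction:.0f} TB usable of "
            f"{raw_tb} TB raw, write penalty {write_penalty}x")

RAW_TB = 120  # e.g. twenty 6 TB drives (hypothetical shelf)

# RAID-10: half the raw space, 2 physical writes per host write.
print(raid_summary("RAID-10", 0.5, 2, RAW_TB))
# RAID-6 in 8+2 groups: 80% usable, up to 6 I/Os per small random write
# (read data + both parities, write data + both parities).
print(raid_summary("RAID-6 (8+2)", 0.8, 6, RAW_TB))
```

For a mostly-sequential video workload the parity penalty matters less than it would for random I/O, which is part of why the thread splits on this.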
|
Vulture Culture posted:e: I agree that spinning disk should be fine but if possible I'd probably go with RAID-10 over something with parity
|
# ? Mar 3, 2016 06:25 |
|
Whatever you end up doing, don't forget about being able to read files while all that stuff is being written. Having the whole thing fall apart because the video you need isn't in cache anymore and the read requests mess up streaming all those writes is no good.
|
# ? Mar 3, 2016 18:28 |
|
If you're using storage that offers CIFS/SMB access, be sure to poke your vendor about Badlock. This looks like it's a protocol bug, and if it is, your storage OS is likely vulnerable. I'm going to be bugging my NetApp reps ASAP and you should too.
|
# ? Mar 24, 2016 17:46 |
|
Number19 posted:If you're using storage that offers CIFS/SMB access, be sure to poke them about BadLock. This looks like it's a protocol bug and if it is your storage OS is likely vulnerable. I'm going to be bugging my NetApp reps ASAP and you should too. I poked EMC/Isilon about it last week and they're aware of the bug and are investigating... I suppose we'll know more on the 12th.
|
# ? Mar 29, 2016 18:38 |
|
Amandyke posted:I poked EMC/Isilon about it last week and they're aware of the bug and are investigating... I suppose we'll know more on the 12th. I got a similar reply from my account team at NetApp. It doesn't instill a lot of confidence in me that they will have something ready.
|
# ? Mar 29, 2016 20:26 |
|
The illumos guys have indicated that illumos is not vulnerable. That gives me hope for our Oracle ZFS appliances and, honestly, any other NAS with a proprietary CIFS stack as well.
|
# ? Mar 30, 2016 03:17 |
|
Our current stuff is a pair of NetApp 2240s with 5TB of 15k drives for VM datastores and 40TB of nearline. We need ~200TB of nearline and ~15TB for VM datastores, plus NAS head(s) for at least SMB (the current datastore is on NFS, but I'm not married to that), on 10GbE. Our current solution is mainly bottlenecked at the controllers (WAFL writes/CIFS-SMB CPU usage). We ingest a bunch of raw sequencing data on the regular, so being able to scale out to around a PB before EOL would be cool. I like how the NetApp boxes can just serve all the protocols we need, but I hate how CPU bottlenecked we are on WAFL writes. Also, I'm a grumpy fart and know nothing of cDOT. Anyone in particular we should talk to? NetApp will show us their plan, Dell is coming soon. Tegile could do what we want but I don't know how sustainable they are. Pure is flash only. Nimble is iSCSI only.
|
# ? Apr 12, 2016 10:27 |
|
It's NetApp's year end, so push them. You can probably get big enough controllers to give you tons of CPU headroom if you squeeze them hard enough. On another note: still no word from NetApp on Badlock. Sit on your storage vendors for info once the embargo breaks if this poo poo is bad. Anyone who does CIFS/SMB is likely affected and will require immediate patching.
|
# ? Apr 12, 2016 16:49 |
|
Number19 posted:It's NetApp's year end so push them. You can probably get big enough controllers that will give tons of CPU headroom if you squeeze them hard enough. Microsoft hasn't released anything either. The Badlock page itself hasn't been updated since 3/31. Whole lot of 0 information going around.
|
# ? Apr 12, 2016 16:55 |
|
That means it's either not a huge deal and this is all hype.

Or the world burns in 45 minutes.

Get yo popcorn I guess.
|
# ? Apr 12, 2016 17:15 |
|
evil_bunnY posted:Our current stuff is a pair of NetApp 2240s with 5TB of 15k drives for VM datastores and 40TB of nearline. Nimble has Fibre Channel: https://www.nimblestorage.com/blog/introducing-our-fibre-channel-san/
|
# ? Apr 12, 2016 18:07 |
|
We want 10GbE since that's what we have now.
|
# ? Apr 12, 2016 18:10 |
|
Badlock is a wet fart, but your storage vendors still likely rely on Samba for things, and it looks like there are lots of Samba bugs that need to be fixed. This might be worse for them than it is for Microsoft. Time will tell, I guess.
|
# ? Apr 12, 2016 18:36 |
|
Number19 posted:Badlock is a wet fart Wait, you don't have file shares out facing the wide open internet??????
|
# ? Apr 12, 2016 19:07 |
|
How else are people supposed to get to their documents from out of the office?
|
# ? Apr 12, 2016 19:10 |
|
https://kb.netapp.com/support/index?page=content&id=9010080 So NetApp stuff will need patches. Moey posted:Wait, you don't have file shares out facing the wide open internet?????? It sounded like an RCE against SMB, which would have been super exploitable in awful ways. Even if they're not internet facing, it's easy for a malware author to drop something in your network and do terrible things, like use access to these systems to launch ransomware attacks or other data exfiltration. As it is, the MITM can be used to change file permissions on Samba servers (which most NAS storage vendors implement), meaning that some very bad things can still happen from this. For Windows this is boring. For Samba integrators? It might be a lot worse.
|
# ? Apr 12, 2016 19:15 |
|
|
Glad my NetApp deployment is happening tomorrow and Thursday and not before this patch. I'm sure it's easy enough to do but I'm getting into this gently.
|
# ? Apr 12, 2016 20:11 |