|
Wicaeed posted:God help me, our company keeps deciding to go with this all-flash storage vendor named Kaminario. Well, InfoSight is relatively new to HPE, being a Nimble product.
|
# ? Apr 10, 2019 17:45 |
|
|
YOLOsubmarine posted:Kaminario has been around for a while. They’re still a little niche, but it’s not bad stuff. Very fast, relatively simple. Are you working with their stuff currently? I am going to be migrating off some Nimble stuff now that HPE is forcing us to dump the older Supermicro hardware. I assume Pure is going to be our best option, but am going to look at EMC Unity as well. Moey fucked around with this message at 18:52 on Apr 17, 2019 |
# ? Apr 17, 2019 18:32 |
|
I wouldn't wish EMC gear on my worst enemies.
|
# ? Apr 17, 2019 19:24 |
|
I don't mind EMC, but yeah Nimble is far better.
|
# ? Apr 17, 2019 19:30 |
|
EMC Unity is fantastic. You should not judge it by prior VNX experience. Whole new system. Pricing is pretty good too.
|
# ? Apr 17, 2019 19:35 |
If you're going to look at Unity, wait until Dell EMC World or whatever the upcoming show is, coming in a few weeks. Unity is due for a hardware refresh.
|
|
# ? Apr 17, 2019 19:59 |
|
Digital_Jesus posted:EMC Unity is fantastic. You should not judge it by prior VNX experience. Whole new system. Pricing is pretty good too. Lol, that's what they said about VNX. That experience and their awful support was enough to sour me on them permanently.
|
# ? Apr 17, 2019 20:13 |
|
Having never worked with EMC arrays, what are the pain points? I am really just looking for some all flash storage block iSCSI storage. I would like to keep the Veeam SAN snapshot based backups that Nimble offers.
|
# ? Apr 17, 2019 22:23 |
|
NetApp EF-Series is pretty cheap and works with Veeam
|
# ? Apr 17, 2019 22:24 |
|
Didn't see the EF series on Veeam's compatibility sheet. https://www.veeam.com/backup-from-storage-snapshots.html I'll probably reach out and get some NetApp pricing too. Is the EF series block storage? Or still NAS? I have never kept up with their offerings. Edit: Looks like FAS and AFF are the NAS style lines, and E-Series and SolidFire are just block storage. Moey fucked around with this message at 23:19 on Apr 17, 2019 |
# ? Apr 17, 2019 22:55 |
|
https://www.netapp.com/us/media/tr-4471.pdf E-series is just block storage, it's the result of the Engenio acquisition I think Edit: I think I might have misread your initial post and thought you were talking about direct SAN backups. Thanks Ants fucked around with this message at 23:44 on Apr 17, 2019 |
# ? Apr 17, 2019 23:36 |
|
Moey posted:Having never worked with EMC arrays, what are the pain points? As someone that worked with VNX/Clariion stuff, the old interfaces were atrocious and legitimately a pain in the rear end to use. Unity has moved to all HTML5 management interfaces and they integrate well with vCenter. I have no complaints with them and, at least in my area, EMC support is top notch and I have never had an issue in my ~3 years or so of selling them. They're also pretty drat cheap for a "small" all-flash array. Nimble pricing has routinely been 1.5-2x more for the same storage capacity.
|
# ? Apr 18, 2019 02:27 |
|
It's been about a year since the last "where to go for NAS that's not Isilon" discussion. Anyone got something cool they've seen recently? We're looking for a single namespace in the PB range. Our app requires SMB/CIFS/NTFS ACLs and writes billions of 16-32 KB files. Tiering would be great as the data gets used hot and heavy at first but then cools rapidly. We've looked at Gluster but the Red Hat support team was an absolute joke when it came to supporting SMB. In 6 weeks they couldn't get it to support Active Directory. Scale out would be nice, but really just looking to manage large-scale app-target NAS (limited client count).
|
# ? Apr 19, 2019 18:33 |
|
Qumulo at a guess, product looks good but was too pricey for us.
|
# ? Apr 19, 2019 19:30 |
|
Digital_Jesus posted:EMC Unity is fantastic. You should not judge it by prior VNX experience. Whole new system. Pricing is pretty good too.
|
# ? Apr 19, 2019 22:57 |
|
Not sure why everyone is bitching about EMC support. I've had very few bad experiences, and they were in the olden days (Clariion). We've got VNX/Unity/VMAX/Isilon/Centera/Data Domain, support handles all our code upgrades, and hardware faults are rectified quickly. Compared with other vendors like HP 3PAR (urrrgh) and VMware, their support has been great.
|
# ? Apr 20, 2019 02:19 |
|
what the gently caress happened to VMware and Dell support? did team merges really gently caress them up so badly?
|
# ? Apr 20, 2019 05:31 |
|
I dunno, but we've had a VMware support case open for over a year with no end in sight. They just keep asking for logs over and over again. Our TAM has even been kicking them and it's still going nowhere. Luckily it's just an annoyance and not causing us major issues.
|
# ? Apr 20, 2019 05:47 |
|
KennyG posted:It's been about a year since the last "where to go for NAS that's not Isilon" discussion. Anyone got something cool they've seen recently? Cohesity, Qumulo, Cloudian, Vast Data Also, on the subject of Unity, I’m not really sure what its selling point is supposed to be beyond “it's EMC and they’ll give it away.” I’m sure it’s fine, but I’ve never met someone who worked with both Pure and Unity and preferred the Unity.
|
# ? Apr 20, 2019 23:28 |
|
KennyG posted:Our app requires SMB/CIFS/NTFS-ACLS and writes billions of 16-32kb files.
|
# ? Apr 21, 2019 15:41 |
|
KennyG posted:We're looking for a single namespace in the PB range. Our app requires SMB/CIFS/NTFS-ACLS and writes billions of 16-32kb files. You're basically the worst case scenario. If you can stripe those into megabyte or larger chunks your app and storage needs would simplify dramatically.
|
# ? Apr 21, 2019 20:54 |
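The "stripe those into megabyte or larger chunks" suggestion above can be sketched out in a few lines. This is purely illustrative (the function names, chunk size, and the idea of a per-blob offset index are assumptions, not anything a specific product does): many small files get concatenated into large blobs, and a small index maps each logical path to an (offset, length) within its blob, so the backing store only ever sees a handful of big objects.

```python
import io

# Illustrative sketch: pack many small files into megabyte-scale blobs
# with a byte-offset index, so the backend handles a few large objects
# instead of billions of 16-32 KB files.

CHUNK_TARGET = 8 * 1024 * 1024  # aim for ~8 MiB per packed blob (arbitrary)

def pack(files):
    """files: iterable of (logical_path, bytes).
    Yields (blob_bytes, index) pairs, where index maps
    logical_path -> (offset, length) within that blob."""
    buf, index = io.BytesIO(), {}
    for path, data in files:
        # Start a new blob once the current one would exceed the target.
        if buf.tell() and buf.tell() + len(data) > CHUNK_TARGET:
            yield buf.getvalue(), index
            buf, index = io.BytesIO(), {}
        index[path] = (buf.tell(), len(data))
        buf.write(data)
    if buf.tell():
        yield buf.getvalue(), index

def unpack(blob, index, path):
    """Read one logical file back out of a packed blob."""
    off, length = index[path]
    return blob[off:off + length]
```

The real pain, as the thread notes, is that the index (and the ACL metadata) then has to live somewhere consistent and fast, which is exactly the part the app would have to own.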
|
Vulture Culture posted:Are you looking for any kind of replication? Because that's going to be an absolute shitshow on any platform with these file counts. Transitioning off of our ZFS-based snapshot delta replication onto an enterprise replication has been an experience...
|
# ? Apr 21, 2019 23:06 |
|
H110Hawk posted:You're basically the worst case scenario. If you can stripe those into megabyte or larger chunks your app and storage needs would simplify dramatically. Story of my life. As the app itself doesn't do that and we don't own the app, does anyone have any suggestions of a gateway/tool that would do that and not completely poo poo itself at this scale? It seems like such an easy thing to do to take a SQL/noSQL database and store the NTFS ACL/POSIX data (POSIX paths should be so loving easy to implement!), front it with some NVMe/cache on a small cluster of compute/cache nodes, and bundle the data and store it on the back end in Ceph/Gluster/S3/Swift. If we were doing it in the exabyte range at ludicrous scale I'd hire 25 devs and roll our own. However, hiring 0.1 devs isn't really feasible to get anything done. At this scale it seems just as ludicrous to run Gluster/XFS unsupported on CentOS and pray for the best as it does to roll your own NAS gateway. The closest we found to this in a commercial product is StrongLink (https://www.strongboxdata.com/stronglink) and to a lesser extent StarFish, but neither of them really give you the confidence that they can handle the use case. KennyG fucked around with this message at 14:57 on Apr 23, 2019 |
# ? Apr 23, 2019 14:53 |
|
KennyG posted:It seems like such an easy thing to do to take a SQL/noSQL database and store the NTFS ACL/Posix data (posix paths should be so loving easy to implement!), front it with some NVME/Cache on a small cluster of compute/cache nodes and bundle the data and store it on the back end in CEPH/Gluster/S3/Swift. If we were doing it in the Exabyte range at ludicrous scale I'd hire 25 devs and roll our own. However, hiring .1 devs isn't really feasible to get anything done. At this scale it seems just as ludicrous to run Gluster/XFS unsupported on Centos and pray for the best as it does to roll your own NAS gateway. This is essentially what the newer commercial products in this space do. Cohesity uses a distributed NoSQL db for metadata and an extent store on the back end that just distributes data across nodes using replication factors or erasure coding.
|
# ? Apr 23, 2019 19:52 |
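The layout described above (a metadata database in front of a dumb extent store) can be sketched as follows. This is a toy illustration, not how Cohesity or any other product actually implements it: SQLite stands in for the distributed NoSQL metadata store, a dict stands in for the replicated/erasure-coded extent store, and every name and schema here is made up.

```python
import sqlite3

# Toy sketch of 'metadata DB + extent store': paths and ACL blobs live in
# a database; the extent store only sees opaque (extent_id, offset, length)
# addresses. SQLite and a dict are stand-ins for illustration only.

db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE files (
    path      TEXT PRIMARY KEY,
    acl       BLOB,        -- serialized NTFS ACL / POSIX mode bits
    extent_id TEXT,
    offset    INTEGER,
    length    INTEGER)""")

extent_store = {}  # extent_id -> bytes (would be Ceph/S3/Swift objects)

def put(path, acl, data, extent_id="extent-0"):
    """Append file data to an extent and record its address + ACL."""
    blob = extent_store.get(extent_id, b"")
    db.execute("INSERT OR REPLACE INTO files VALUES (?,?,?,?,?)",
               (path, acl, extent_id, len(blob), len(data)))
    extent_store[extent_id] = blob + data

def get(path):
    """Resolve a path through the metadata DB and read the extent."""
    eid, off, length = db.execute(
        "SELECT extent_id, offset, length FROM files WHERE path=?",
        (path,)).fetchone()
    return extent_store[eid][off:off + length]
```

In a real system the metadata tier is where all the hard problems live (distributed consistency, ACL evaluation, rename semantics), which is presumably why "it seems like such an easy thing to do" keeps turning into a commercial product instead of a weekend project.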
|
Goodbye VNX5400! We are running the disk wipe and then it is getting yanked out of the production datacenter ASAP.
|
# ? Apr 23, 2019 20:59 |
|
Digital_Jesus fucked around with this message at 21:04 on Apr 23, 2019 |
# ? Apr 23, 2019 21:02 |
|
CURSED. IMAGE.
|
# ? Apr 23, 2019 22:13 |
|
It burns
|
# ? Apr 24, 2019 18:47 |
|
Would high read/write latency (100,000 ms) generally equate to physical connection issues? There's no way the servers I have on the storage device are pumping out that much data.
|
# ? Apr 25, 2019 03:54 |
|
100,000 ms is 100 seconds. Did someone trip over the network cables for your SAN, then plug them back in a minute and a half later? What's the storage unit? Can you check your switch interface for statistics? Some more details are needed, because something is seriously jacked up.
|
# ? Apr 25, 2019 04:30 |
|
lol internet. posted:Would high read/write latency (100,000 m/s) generally equate to physical connections issues?
|
# ? Apr 25, 2019 05:24 |
|
I'm actually shocked whatever tool you're using will count that high. Are you bouncing that signal off the moon using radio?
|
# ? Apr 25, 2019 05:44 |
|
I think this is the right place to ask this: Is there a software that force-caches all the metadata on a SMB share on the server so I don't have to wait 2 minutes for the file server to iterate through a 40k files and puke up the list I need to run a powershell script against? I don't need it to cache the actual files, I just want dir to run fast as possible without needing to bump everything to flash.
|
# ? Apr 27, 2019 08:09 |
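One poor-man's workaround for the question above (a sketch only, assuming read access to the share and a scheduled off-hours run; paths and schema are illustrative): walk the tree once, store name/size/mtime in a local SQLite file, and point the scripted queries at the cache instead of hammering the live share for directory listings.

```python
import os
import sqlite3

# Sketch: snapshot directory metadata from a slow share into SQLite so
# later queries (the kind a PowerShell script would make) hit the local
# cache instead of enumerating the share every time.

def build_cache(root, db_path=":memory:"):
    """Walk `root` once and record path/size/mtime for every file."""
    db = sqlite3.connect(db_path)
    db.execute("CREATE TABLE IF NOT EXISTS entries "
               "(path TEXT PRIMARY KEY, size INTEGER, mtime REAL)")
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            try:
                st = os.stat(p)
            except OSError:
                continue  # file vanished mid-walk; skip it
            db.execute("INSERT OR REPLACE INTO entries VALUES (?,?,?)",
                       (p, st.st_size, st.st_mtime))
    db.commit()
    return db

def find_newer_than(db, cutoff):
    """Example query: every cached path modified after `cutoff`."""
    return [p for (p,) in db.execute(
        "SELECT path FROM entries WHERE mtime > ?", (cutoff,))]
```

The obvious catch is staleness: the cache is only as fresh as the last walk, so it suits "find the file among 20k siblings" lookups, not anything that needs live state.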
|
Methylethylaldehyde posted:I think this is the right place to ask this: Is there a software that force-caches all the metadata on a SMB share on the server so I don't have to wait 2 minutes for the file server to iterate through a 40k files and puke up the list I need to run a powershell script against? I don't need it to cache the actual files, I just want dir to run fast as possible without needing to bump everything to flash. Why are you listing all files on the share to find the one you need?
|
# ? Apr 27, 2019 18:11 |
|
Pile Of Garbage posted:Why are you listing all files on the share to find the one you need? Because users are retarded, more or less. Also we have a folder with like 20k files in it we can't move or change, and navigating it is an exercise in tedium.
|
# ? Apr 28, 2019 03:19 |
|
Methylethylaldehyde posted:Because users are retarded, more or less. Also we have a folder with like 20k files in it we can't move or change, and trying to navigate it an exercise in tedium. In my experience I've only ever seen issues with SMB when operating over WAN links with high latency like VSAT (1s minimum RTT). In these situations we deploy WAN optimisers which accelerate SMB by caching requests and files (Supports signing+sealing by doing MITM with an intermediate signing cert). This would probably work internally as well however it sounds like your SMB server is the bottleneck. Is it a NAS or just a Windows Server machine with shared folders configured?
|
# ? Apr 28, 2019 13:54 |
|
Yeah, in my experience SMB 3 works a lot better than previous versions with higher latencies, but it's still not a WAN protocol.
|
# ? Apr 28, 2019 14:51 |
|
Pile Of Garbage posted:In my experience I've only ever seen issues with SMB when operating over WAN links with high latency like VSAT (1s minimum RTT). In these situations we deploy WAN optimisers which accelerate SMB by caching requests and files (Supports signing+sealing by doing MITM with an intermediate signing cert). This would probably work internally as well however it sounds like your SMB server is the bottleneck. Is it a NAS or just a Windows Server machine with shared folders configured? Server 2012 reading huge piles of crap off a set of 10 HDDs over iSCSI. It's still local network, but if the folder metadata isn't in the cache, it takes a longass time for the files to show up completely.
|
# ? Apr 28, 2019 15:11 |
|
Methylethylaldehyde posted:Server 2012 reading huge piles of crap off a set of 10 HDDs over iSCSI. It's still local network, but if the folder metadata isn't in the cache, it takes a longass time for the files to show up completely. Wait, so is the Server 2012 machine physical or virtual? And are the "10 HDDs over iSCSI" separate drive mounts from a single target or multiple targets? Either way it sounds like you've outgrown your setup and should probably look into getting a dedicated NAS appliance that supports SMB. Not to endorse them or anything but at my last gig we had a NetApp NAS with SMB exports handling huge dumb file structures much bigger than yours without issues.
|
# ? Apr 28, 2019 18:35 |
|
|
ONTAP had (still has?) a per-volume inode limit on shared volumes that you can raise, but you need to be careful if you have a workload that creates billions of files.
|
# ? Apr 28, 2019 19:18 |
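For reference, checking and raising the file (inode) count on an ONTAP volume looks roughly like this. These commands are from memory of the ONTAP 9 CLI and the vserver/volume names are placeholders, so verify the exact syntax against the docs for your version before running anything:

```
::> volume show -vserver svm1 -volume vol1 -fields files,files-used
::> volume modify -vserver svm1 -volume vol1 -files <new-max>
```

Raising the limit is generally the easy direction; lowering it back down is restricted once the inodes are in use, and each inode consumes a sliver of metadata space, so size it deliberately rather than just cranking it to the maximum.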