|
Internet Explorer posted:How's LeftHand doing these days? Nutanix is the biggest. vSAN is probably next. But there's no need to rush to get to hyperconverged. Traditional architectures still work fine.
|
# ? Mar 7, 2017 18:56 |
|
|
Has anyone bought anything through http://www.enterasource.com/ ? I'm looking at some secondary storage, as our primary is a pair of Tegile units. This would be for storing video and video projects that our marketing department creates, so the Tegile isn't the best suited for it. I'm OK with the fact it's refurbed; parts support is through them. I just wanted to see if anyone had experience with them. Below is the quote if anyone wants to see; they are MD1200s sitting under PE R710s.

Dell PowerVault MD1200 (12x 3.5" LFF Hard Drive Option)
12x 3TB 7.2K RPM NL SAS 6Gbps Hot Swap Drives
2x Controllers
2x 600W Power Supplies
Rail Kit
Front Bezel
1x Mini SAS to Mini SAS Cable
2x Power Cords
1x Dell PERC H810 1GB RAID Controller (Full Height)

Purchase Price - $2,425/ea
Total Purchase Price - $4,850 + $100 Shipping
|
# ? Mar 8, 2017 15:25 |
|
Never purchased from that site before, but I have gotten some used servers/storage from the Dell Enterprise Outlet as well as stikc.com; never had any issues.
|
# ? Mar 8, 2017 20:24 |
|
Over the weekend I had to replace a controller for a Compellent SC4020. The physical swap was easy enough but the controller that was sent out had a REALLY old OS/firmware loaded. What should have been a 2-3 hour event ended up being 8 hours as I had to work with Dell support to manually upgrade the controller through the various stages...
|
# ? Mar 8, 2017 20:28 |
|
Dell never really struck me as a company that has given a poo poo about any of the storage products it has developed or bought in.
|
# ? Mar 8, 2017 20:47 |
|
Well, the original controller hadn't "failed" yet, but they suspected it might soon, so they suggested a replacement. Apparently it was a fluke that we got a controller with an older OS. Overall, though, we've had very few issues with the SC4020 and I have found dealing with their support to generally be fine. One thing I found out during the upgrade process was that the controllers are running FreeBSD.
|
# ? Mar 8, 2017 21:24 |
|
bigmandan posted:Over the weekend I had to replace a controller for a Compellent SC4020. The physical swap was easy enough but the controller that was sent out had a REALLY old OS/firmware loaded. What should have been a 2-3 hour event ended up being 8 hours as I had to work with Dell support to manually upgrade the controller through the various stages... LMAO, how is your reaction not "you're welcome tomorrow at 9AM with an updated controller"
|
# ? Mar 8, 2017 21:35 |
|
evil_bunnY posted:LMAO, how is your reaction not "you're welcome tomorrow at 9AM with an updated controller" It was the weekend, I had already swapped the controller out, and at that point I just wanted to get it over with and not have to open up another maintenance window.
|
# ? Mar 9, 2017 18:08 |
|
Fellow Isilon admins who use SnapshotIQ- are there any methods for calculating storage requirements based on retention policies? I've been asked to enable it on our cluster, but we're constantly running between 80-90% capacity.
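For the capacity question, OneFS doesn't publish a simple sizing formula that I know of, but a common first-order estimate is that snapshot overhead is roughly your daily change rate times the retention window (overlapping rewrites make the real number lower). A rough sketch of that arithmetic, with every input being a made-up assumption you'd replace with your own cluster's numbers:

```python
# Rough first-order estimate of SnapshotIQ space overhead.
# NOT an Isilon-published formula: assumes snapshots consume roughly
# the blocks changed while retained, i.e. change_rate * retention_days,
# and ignores overlapping rewrites (which only help you).

def snapshot_overhead_tb(used_tb, daily_change_pct, retention_days):
    """Estimate extra TB consumed by snapshots over the retention window."""
    return used_tb * (daily_change_pct / 100.0) * retention_days

def headroom_ok(cluster_tb, used_tb, daily_change_pct, retention_days,
                max_fill_pct=90.0):
    """Check whether enabling snapshots keeps the cluster under a fill target."""
    overhead = snapshot_overhead_tb(used_tb, daily_change_pct, retention_days)
    projected_pct = (used_tb + overhead) / cluster_tb * 100.0
    return projected_pct <= max_fill_pct, projected_pct

# Hypothetical example: 500 TB cluster already 85% full,
# 0.5%/day change rate, 14-day retention.
ok, pct = headroom_ok(cluster_tb=500, used_tb=425,
                      daily_change_pct=0.5, retention_days=14)
print(f"projected fill: {pct:.1f}% -> {'ok' if ok else 'too tight'}")
```

At 80-90% full already, even a modest change rate pushes you over a 90% fill target, which is the real problem here regardless of the exact formula.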
|
# ? Mar 9, 2017 23:10 |
|
Moey posted:Never purchased from that site before, but have gotten some used servers/storage from the Dell Enterprise Outlet before as well as stikc.com, never had any issues. Well, I have one sitting on my bench and it seems to work just fine. The product came packaged well, drives in a separate shipping box, and it fired up with no problems. Guess we will see how long they last.
|
# ? Mar 17, 2017 16:51 |
|
Mr-Spain posted:Has anyone bought anything through http://www.enterasource.com/ ? Provided you can still get a support agreement from the primary vendor (Netapp/EMC/Whoever) I don't see the problem.
|
# ? Mar 17, 2017 16:59 |
|
Rhymenoserous posted:Provided you can still get a support agreement from the primary vendor (Netapp/EMC/Whoever) I don't see the problem. This is an important point. About a year ago we were in the process of purchasing new FC switches for our HP blade systems. Then one guy realized that the only thing that matters is that the blade chassis has a support contract, and HP will swap any parts, no questions asked. So instead of buying new switches at $4k apiece we could just eBay used switches for 1/10th the price.
|
# ? Mar 17, 2017 23:55 |
|
Saukkis posted:This is an important point. About a year ago we were in the process of purchasing new FC switches to our HP blade systems. Then one guy realized that the only thing that matters is that the blade chassis has a support contract and HP will swap any parts no questions asked. So instead of buying new switches for 4k a piece we could just eBay used switches for 1/10th the price. I mean, I'd still use the used equipment. If I have an ironclad support agreement from the primary vendor I'm happy. That's where they make all the money anyway.
|
# ? Mar 20, 2017 17:59 |
|
Rhymenoserous posted:I mean I'd still try to use the used equipment but still. If I have an ironclad support agreement from the primary vendor I'm happy. That's where they make all the money anyways. Nah, they aren't car dealers, they make money on product.
|
# ? Mar 20, 2017 18:49 |
|
big money big clit posted:Nah, they aren't car dealers, they make money on product. I mean, EMC does but that's because they charge an arm and a leg.
|
# ? Mar 20, 2017 22:11 |
|
Rhymenoserous posted:I mean, EMC does but that's because they charge an arm and a leg. Everybody does. Support is more expensive than development, and supporting aging hardware is particularly expensive, so they want you to replace your hardware frequently. Support costs in later years get higher to incentivize buying new hardware. I've worked for both a vendor and a VAR, and trust me, they'd love it if you bought something new at the end of every support term rather than extending.
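The renewal-vs-refresh math is easy to sketch. All the prices below are made-up illustrative assumptions, not real vendor list prices; the point is just that a compounding per-year uplift on out-of-warranty support closes the gap to a new purchase fast:

```python
# Illustrative only: every dollar figure here is an assumption made up
# to show the incentive structure, not a real quote.

def extend_cost(base_support, years, uplift_pct):
    """Total support cost when each additional year is uplifted by uplift_pct."""
    total = 0.0
    for year in range(years):
        total += base_support * (1 + uplift_pct / 100.0) ** year
    return total

# Hypothetical: years 4-6 on old hardware at $20k/yr base with a 25%/yr
# uplift, vs. a new array whose first 3 support years are bundled in.
old = extend_cost(base_support=20_000, years=3, uplift_pct=25)
new_array = 90_000  # hypothetical street price incl. 3 years of support
print(f"extend 3 more years: ${old:,.0f}  vs  refresh: ${new_array:,.0f}")
```

With those (invented) numbers, three extension years cost about $76k against a $90k refresh that also gets you newer, denser hardware, which is exactly the comparison the vendor wants you to make.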
|
# ? Mar 21, 2017 00:31 |
|
Rhymenoserous posted:Provided you can still get a support agreement from the primary vendor (Netapp/EMC/Whoever) I don't see the problem. Agreed, however for what it's doing I'm fine with just parts support from the reseller. If it was anywhere near our main production stuff I wouldn't have considered it.
|
# ? Mar 21, 2017 16:54 |
|
I have a netapp that is on its 8th year, maintenance from netapp was nothing short of outrageous. It's about 25% as much from a third party, but obviously only covers hardware.
|
# ? Mar 23, 2017 00:43 |
|
Did nobody run the numbers of trading it in or is there a reason you're keeping it around?
|
# ? Mar 23, 2017 01:25 |
|
adorai posted:I have a netapp that is on its 8th year, maintenance from netapp was nothing short of outrageous. It's about 25% as much from a third party, but obviously only covers hardware. Well, if it's that old it's probably at or near end of software support as well, so hardware is all you'll get from NetApp after that. No doubt it's much cheaper to just buy a new one.
|
# ? Mar 23, 2017 02:34 |
|
I have yet to own a piece of hardware where the support costs were tenable past year 5. With most vendors it's 50/50 past year 3 unless you're aggressive about locking in pricing when you commit to the sale.
|
# ? Mar 23, 2017 05:06 |
|
adorai posted:I have a netapp that is on its 8th year, maintenance from netapp was nothing short of outrageous. It's about 25% as much from a third party, but obviously only covers hardware. We only buy third party contracts from Zerowait and they're great. You can get a decent amount of pro-serve style chatting with them during normal business hours, best practices advice, etc. We use it instead of me pretending I know things about netapps. Yes I do largely only drop in here to be a Zerowait fanboy but they've done right by me for over a decade now.
|
# ? Mar 23, 2017 17:13 |
|
Vulture Culture posted:locking in pricing when you commit to the sale. This is the important part.
|
# ? Mar 24, 2017 13:55 |
|
evil_bunnY posted:This is the important part. It's never been an issue for us to get year 4 and 5 basically for free (like maybe 10%) but if you call them up at the end of year 3 looking to extend wooooo boy you're getting taken for a ride. Most vendors just won't do more than 5, and that's how often we refresh stuff that's not cisco switches.
|
# ? Mar 24, 2017 15:00 |
|
The storage market has mostly moved to flat rates for the first five years, driven by serious competition from startups. Too bad the rest of the hardware market isn't as competitive.
|
# ? Mar 24, 2017 15:42 |
|
Who should I be looking at for cost effective storage these days? Just took a new position that needs a relatively cost-conscious virtualization storage system. What's the current hotness in that space?
|
# ? Mar 24, 2017 18:54 |
|
Walked posted:Who should I be looking at for cost effective storage these days? Just took a new position that needs a relatively cost-conscious virtualization storage system. What's the current hotness in that space? Can you be more specific about your requirements? Size and workload?
|
# ? Mar 24, 2017 19:57 |
|
Moey posted:Can you be more specific about your requirements? Size and workload? Sure; we're aiming for something in the 20-30 TB range for a very generic "virtualization" workload: no particularly high-IO workloads, but significant VM usage. Clustered environment. I've used the Equallogic PS6000 series in the past and it's handled this workload well, but I want to see if there's another option to consider this time through, as cost is a bigger consideration here.
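Whatever vendor you land on, it's worth doing the growth and reduction math before taking quotes, since "20-30 TB today" is not what you should buy. A sketch of that arithmetic, where the growth rate and the 2:1 data-reduction ratio are assumptions to replace with your own numbers (vendors will quote effective capacity off their own assumed ratios):

```python
# Back-of-the-envelope capacity sizing. Growth rate, reduction ratio,
# and fill target are all assumptions, not vendor specs.

def required_usable_tb(current_tb, annual_growth_pct, years):
    """Usable capacity needed after compounding annual growth."""
    return current_tb * (1 + annual_growth_pct / 100.0) ** years

def required_raw_tb(usable_tb, data_reduction=1.0, fill_target=0.8):
    """Raw (pre-reduction) capacity, leaving headroom below fill_target."""
    return usable_tb / data_reduction / fill_target

# Hypothetical: start at 30 TB, 20%/yr growth, size for 3 years,
# assume ~2:1 dedupe/compression on a generic VM workload.
usable = required_usable_tb(current_tb=30, annual_growth_pct=20, years=3)
raw = required_raw_tb(usable, data_reduction=2.0)
print(f"need ~{usable:.1f} TB usable, ~{raw:.1f} TB raw at 2:1 reduction")
```

Running it with different reduction ratios also gives you a feel for how exposed you are if a vendor's assumed dedupe ratio doesn't materialize.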
|
# ? Mar 24, 2017 20:24 |
|
Curious if anyone has any thoughts why this happened: I work for a company that resells software/equipment (in a field small enough you could narrow me down if I was more specific). We have a support/maintenance contract with one of our local customers and upgraded them to the newest version a few months ago. While we did this, we transitioned from physical to virtual servers. We have one server that hosts our application and one that hosts MSSQL. The customer's IT department installed and prepped the SQL server (we finalized the install), and they put the SQL instance datastores on an iSCSI drive hosted on a Nimble controller.

Yesterday at about 11:23 AM, the iSCSI drive suddenly died. Windows reported it as a RAW partition, disconnecting and reconnecting did nothing, and... well, let me post the description the IT Manager gave to the IT director: quote:Description of Event

So his boss tells [Division Director] that they'll look into it, but who really needs to look into it is us, because they don't know our program (keep in mind we have nothing to do with the storage solution or how it was set up, that was all their IT department, and this server only hosts MSSQL). So I did some investigating and found two things in Event Viewer that kicked off when it started and just kept repeating afterwards:

Nimble service: I/O Complete. Serial Number 1C4101B848DC3A536C9CE90097376601. Address 01:00:01:00. Path ID 0x77010001. Path Count 2. CDB 2A 00 01 EE 3D D7 00 00 01 00 -- -- -- -- -- --. Sense Data -- -- --. SRB Status 0x04. SCSI Status 0x18. Queue Status 0, 1. ALUA State: Active Optimized. Device State: Active Optimized.

Ntfs service: The system failed to flush data to the transaction log. Corruption may occur in VolumeId: E:, DeviceName: \Device\HarddiskVolume6. ({Device Busy} The device is currently busy.) (repeats every few seconds until the iSCSI connection was terminated)

Any ideas? Anyone ever seen this before?

Tl;dr: The customer's Nimble iSCSI share suddenly kicked the bucket, only on our instance, and it's up to me to figure out why.
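For what it's worth, the status bytes in that Nimble MPIO event are decodable. If I'm reading them right against the SCSI SAM status codes and the Windows storport SRB status values, `SCSI Status 0x18` is a reservation conflict on a write (CDB opcode 0x2A is WRITE(10)), which would point at another initiator holding a reservation on that LUN rather than at the application. A small decoder sketch (mappings from the public specs; treat this as a reading aid, not vendor diagnostics):

```python
# Decode the status bytes from a Windows MPIO/storport event like the
# Nimble log line above. Mappings taken from the SCSI SAM status codes
# and Windows srb.h; partial tables, rough aid only.

SCSI_STATUS = {
    0x00: "GOOD",
    0x02: "CHECK CONDITION",
    0x08: "BUSY",
    0x18: "RESERVATION CONFLICT",
    0x28: "TASK SET FULL",
}

SRB_STATUS = {
    0x01: "SRB_STATUS_PENDING",
    0x04: "SRB_STATUS_ERROR",
    0x05: "SRB_STATUS_BUSY",
    0x0A: "SRB_STATUS_SELECTION_TIMEOUT",
}

def decode(srb_status, scsi_status):
    """Map raw SRB/SCSI status bytes to their symbolic names."""
    return (SRB_STATUS.get(srb_status, f"unknown 0x{srb_status:02X}"),
            SCSI_STATUS.get(scsi_status, f"unknown 0x{scsi_status:02X}"))

# The event reported "SRB Status 0x04. SCSI Status 0x18."
print(decode(0x04, 0x18))
```

If that decode holds, the question to put to the customer's IT team is which other host (cluster node, backup proxy, etc.) grabbed a persistent reservation on that volume at 11:23.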
|
# ? Mar 24, 2017 20:53 |
|
Walked posted:Sure; we're aiming for something 20-30tb for a very generic "virtualization" workload; no particular high io workloads; but significant VM usage. Clustered environment. If you're fully virtualized Tintri would be my pick. All flash has gotten pretty inexpensive if your reduction ratios are good. Pure and NetApp have good offerings there, with Pure being my preference if you're okay with block only.
|
# ? Mar 24, 2017 21:29 |
|
Walked posted:Sure; we're aiming for something 20-30tb for a very generic "virtualization" workload; no particular high io workloads; but significant VM usage. Clustered environment. Probably worth talking to Tegile, they can hit fairly low end pricing as well.
|
# ? Mar 24, 2017 23:56 |
|
Maneki Neko posted:Probably worth talking to Tegile, they can hit fairly low end pricing as well. We've had a number of Tegile customers who have had performance issues due to cache exhaustion on their arrays. It falls over HARD when your dedupe table and metadata don't all fit in RAM and SSD.
|
# ? Mar 25, 2017 00:28 |
|
maxallen posted:Curious if anyone has any thoughts why this happened: I don't think this is something you're going to be able to troubleshoot over the forums. I will say that if you guys are not responsible for the virtualization layer, the networking layer, or the storage layer, you also shouldn't be the ones troubleshooting it. Your software didn't cause this. They should probably open a ticket with the virtualization vendor (VMware?) and start from there. The first thing that comes to mind for me is lack of CHAP authentication on the LUNs and a Windows machine connecting to the VMware datastore and initializing the drive, but that is a pretty wild guess. I'd also take a look at their backup software and whether it was running at the time.
|
# ? Mar 25, 2017 01:06 |
|
Walked posted:Sure; we're aiming for something 20-30tb for a very generic "virtualization" workload; no particular high io workloads; but significant VM usage. Clustered environment. I have a pretty similar environment, running a handful of Nimble arrays here, nothing all flash, just hybrid. Great balance of performance and space for the price. Config/management is stupidly simple.
|
# ? Mar 25, 2017 01:15 |
|
With HP buying them I'm a little less excited to replace my current hardware with Nimble gear.
|
# ? Mar 25, 2017 01:34 |
|
Internet Explorer posted:With HP buying them I'm a little less excited to replace my current hardware with Nimble gear. Yeah, I'll let the thread know if everything goes to poo poo. Spoke with our reps recently and they claimed "nothing will change." If it goes to hell, I wouldn't mind trying out Pure.
|
# ? Mar 25, 2017 01:55 |
|
Internet Explorer posted:With HP buying them I'm a little less excited to replace my current hardware with Nimble gear. Nimble is out of the running for us as an HP company.
|
# ? Mar 25, 2017 02:08 |
|
Moey posted:Yeah, I'll let the thread know if everything goes to poo poo. Spoke with our reps recently and they claimed "nothing will change." New place has a pure in prod, just set up another on DR, it took us 5 minutes to set up replication today, 2 of that was plugging in the cables.
|
# ? Mar 25, 2017 02:11 |
|
Nimble was fine, but I won't recommend them post-HP purchase. Pure is pretty simple, but Tintri is even more so. If you're fully virtualized you should really check it out; it's the lowest-touch storage imaginable and a good, consistent performer in hybrid configs. If you have physical hosts that need to be connected, Pure is a good option, but they size based on expected data reduction, and if they get it wrong you can run out of space well before you'd expect.
|
# ? Mar 25, 2017 02:46 |
|
|
Internet Explorer posted:I don't think this is something you're going to be able to troubleshoot over the forums. I will say that if you guys are not responsible for the virtualization layer, the networking layer, or the storage layer, you also shouldn't be the ones troubleshooting it. Your software didn't cause this. They should probably open a ticket with the virtualization vendor (VMware?) and start from there. The first thing that comes to mind for me is lack of CHAP authentication on the LUNs and a Windows machine connecting to the VMware datastore and initializing the drive, but that is a pretty wild guess. I'd also take a look at their backup software and if it was running at the time. Thanks. I was hoping maybe someone could steer me in the direction of "oh, that's a known issue; if you don't have hotfix blahblahblah installed there's a small chance Windows will utterly murder an NTFS iSCSI target" or some sort of known-issue thing. FYI, it's running on Hyper-V, but the guest is the one connecting to the iSCSI target.

Yeah, I know it shouldn't be us troubleshooting it, but I don't think their IT director likes us very much, and I have a feeling he said "oh, it's one of their servers, let them figure it out." Also, considering they have a relationship with Nimble, maybe they should make Nimble figure it out.

I sent my response to my boss to forward on to the customer regardless, where I explained iSCSI like I was talking to a child and why you'd use it, and that I have no idea what caused it, only what happened and what I found in the logs.
|
# ? Mar 25, 2017 03:56 |