Spring Heeled Jack
Feb 25, 2007

If you can read this you can read

TigCobra posted:

I'll bite. Curious what issues you are having?

A bit late on the reply, but:

- Ran into an issue with available storage space (aka the SAN went down on a Sunday) due to the way it handles post-process dedupe, combined with the way our dev team handles database refreshes.
- Having to deal with RAID tiering at all
- Getting poor dedupe numbers versus what Dell estimated, and having to buy extra drives to meet space requirements
- Manual whitespace reclamation in vSphere because our block size was set wrong

Some of this stuff is the fault of the install tech Dell sent us, but ultimately I blame my boss' boss for going with this unit over Pure or an AF Nimble for some ungodly reason.
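For anyone hitting the same whitespace issue: on VMFS 5, reclamation has to be kicked off by hand from the host. A minimal sketch, assuming SSH access to an ESXi shell; the datastore label is a placeholder:

```shell
# List mounted VMFS volumes to find the datastore label.
esxcli storage filesystem list

# Issue SCSI UNMAP against the datastore. -n is the number of VMFS
# blocks reclaimed per pass; keep it modest to limit load on the array.
esxcli storage vmfs unmap -l Datastore01 -n 200
```

VMFS 6 datastores can run automatic space reclamation instead, which is part of why a wrong block-size choice hurts here.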


fatman1683
Jan 8, 2004
.

SlowBloke posted:

Any QNAP with an x86 CPU is flash-aware (even ones with Atoms, but those tend to have very few PCIe lanes) and supports the vStorage APIs. You can find chassis starting at two grand with Ryzens (TS-877-1600-8G) or a Core i3 (TVS-872XU-i3-4G) if you want plenty of PCIe 3.0 lanes. I've had plenty of bad experiences doing homelabs with ghetto-rigged storage, so I now prefer to have storage on its own chassis, even if it's slightly more expensive than embedding it in the ESXi servers.

If you still want to go homegrown, remember to check whether the U.2 backplane/controller includes the cables. I've found U.2 cables to be hilariously expensive in my area.

Yeah, I'm going to be breaking out the storage into its own chassis in the future; this is an intermediate step until I have the money to build a second server just to run the hypervisor. I'm going to pass through the HBA, so I should be able to just move the disks over and restore the config onto the new setup when I get to that point.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Any opinions on an IBM Storwize V7000 for a home lab?

Thanks Ants
May 21, 2004

#essereFerrari


Seems like a good way to turn money into heat and noise

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Thanks Ants posted:

Seems like a good way to turn money into heat and noise

Yeah, that's what most home labs do. I'm more curious how it'd fare, and whether it supports aftermarket drives like the Dell PowerVaults do.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CommieGIR posted:

Yeah, that's what most home labs do. I'm more curious how it'd fare, and whether it supports aftermarket drives like the Dell PowerVaults do.

Since when do PowerVaults take non-Dell-branded drives? Unless you're just going SAS expansion to a non-PowerVault controller.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

Since when do PowerVaults take non-Dell-branded drives? Unless you're just going SAS expansion to a non-PowerVault controller.

I've been able to use reformatted NetApp drives in PowerVaults before, and some even accept consumer-grade SATA.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CommieGIR posted:

I've been able to use reformatted NetApp drives in PowerVaults before, and some even accept consumer-grade SATA.

The MD line? Would love to fill an MD3200i with cheap shucked drives for some home packratting.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Moey posted:

The MD line? Would love to fill an MD3200i with cheap shucked drives for some home packratting.

Yup! I haven't really found drives they won't accept, other than weird issues with drives over 2TB.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

CommieGIR posted:

Yup! I haven't really found drives they won't accept, other than weird issues with drives over 2TB.

Hmmm, I've got some testing to do. Looking at their support sheet, they don't list any large WD drives as supported.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Well, since no one knows, I'm just gonna jump on this V7000 and see what I can do.

Crunchy Black
Oct 24, 2017

by Athanatos
I'm pretty pleasantly surprised by the prices the SAS models of those can be had for. Maybe later this year the MD1000 will get replaced after all - I'm running out of space in the 25U.

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

CommieGIR posted:

Well, since no one knows, I'm just gonna jump on this V7000 and see what I can do.

Well, if you want my opinion, it seems overkill. I'm assuming someone surplused it to you? Upgrade the code before the SWMA ends. Best home config is probably iSCSI with DRAID 6 on the back end.

No clue if it would accept untested drives, but since it's probably out of warranty you could try.
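If it does end up configured as DRAID 6, the Spectrum Virtualize CLI can build the array in one command. A sketch over SSH; the drive class, drive count, and pool name are placeholders, and the exact syntax should be checked against your code level:

```shell
# From the V7000 CLI (ssh superuser@<cluster-ip>):
lsdriveclass                    # see which drive classes the box reports
mkdistributedarray -level raid6 -driveclass 0 -drivecount 8 Pool0
lsarray                         # confirm the distributed array built
```

DRAID spreads rebuild areas across all members, so rebuilds after a drive failure run much faster than traditional RAID 6 on the same hardware.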

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

paperchaseguy posted:

Well, if you want my opinion, it seems overkill. I'm assuming someone surplused it to you? Upgrade the code before the SWMA ends. Best home config is probably iSCSI with DRAID 6 on the back end.

No clue if it would accept untested drives, but since it's probably out of warranty you could try.

It's largely for this guy:

Pile Of Garbage
May 28, 2007



Almost missed V7000 chat. Is it FC or FCoE, and if the former, do you actually have all the necessary switches and HBAs? What are you actually getting - just one management/disk tray, or additional disk trays? If the latter, do you have the required SAS cables to chain them?

You'll want it on latest possible code, I've seen some insane bugs on lower firmware levels.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Pile Of Garbage posted:

Almost missed V7000 chat. Is it FC or FCoE, and if the former, do you actually have all the necessary switches and HBAs? What are you actually getting - just one management/disk tray, or additional disk trays? If the latter, do you have the required SAS cables to chain them?

You'll want it on latest possible code, I've seen some insane bugs on lower firmware levels.

Blessed username/post combo.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Pile Of Garbage posted:

Almost missed V7000 chat. Is it FC or FCoE, and if the former, do you actually have all the necessary switches and HBAs? What are you actually getting - just one management/disk tray, or additional disk trays? If the latter, do you have the required SAS cables to chain them?

You'll want it on latest possible code, I've seen some insane bugs on lower firmware levels.

It's just a single disk controller, it's FCoE, and I do have the necessary interfaces and cables.

I'll see if I can get the firmware, but it's been off and racked for some time, so it's likely well out of support.

greatapoc
Apr 4, 2005
We're slowly getting closer to dropping a stack of cash on a NetApp HCI solution to migrate away from our HP c7000 blade enclosure, which is running 4 Hyper-V nodes. The c7000 is connected to an AFF200 by FC, and we have a FAS at a second site for SnapMirror.

NetApp has proposed a 4-compute-node, 4-storage-node cluster, but our vendor is extremely cautious about selling it to us because of their lack of proof and experience with Hyper-V. NetApp gave us an online demo with an engineer in the US (we're in Australia) which showed him installing Windows Server just fine, but everything in the background was all VMware. He was using vSphere to move around the various nodes in his demo cluster, and he even had to install ESXi before nuking it to load up the Windows ISO. They didn't show anything like VMM or Hyper-V Manager, so we're still left wondering if it really works properly.

They've offered to sell it to us with a 90-day buy-back if it doesn't work. I imagine that contract would have to be iron-clad, and no doubt heavily weighted in their favour. I love the concept of it in theory: solving our storage issues and compute requirements in one hit, as well as being able to snapshot directly to our DR site, sounds rather appealing. We're just unsure about being a guinea pig for running Hyper-V.

Another thing is how we actually handle the migration from the blades. The HCI only does iSCSI and we're currently entirely FC. All the VMs somehow need to be migrated from the AFF onto the SolidFire and then run from the compute nodes. The current line of thought is that we could connect the blades to the HCI over iSCSI and present it as a storage device to them, connect the HCI to the AFF over iSCSI as well, and then somehow do a Hyper-V live migration. Has anyone dealt with anything like that before?
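That iSCSI-bridge idea maps onto standard Hyper-V tooling: present a SolidFire LUN to the existing blades as a cluster shared volume, storage-migrate each VM onto it while it runs, then live-migrate compute afterwards. A sketch in PowerShell; the VM, CSV, and host names are all hypothetical:

```powershell
# On a blade that can see both the old FC storage and the new iSCSI CSV:
# storage-only live migration - the VM keeps running on the same host.
Move-VMStorage -VMName "AppVM01" `
    -DestinationStoragePath "C:\ClusterStorage\SolidFireCSV\AppVM01"

# Once the storage has moved, shift compute onto the new nodes
# (same cluster/domain and compatible processors assumed).
Move-VM -Name "AppVM01" -DestinationHost "HCI-Compute01"
```

Whether the HCI compute nodes can join the existing cluster long enough to receive the live migration is the part worth testing before signing anything.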

Thanks Ants
May 21, 2004

#essereFerrari


I'd want more than 90 days as that time is going to disappear pretty quickly

greatapoc
Apr 4, 2005

Thanks Ants posted:

I'd want more than 90 days as that time is going to disappear pretty quickly

Yeah, for sure. Honestly I feel there's an approximately 0% chance the kit is ever leaving the rack, no matter what guarantee they have. They weaseled their way out of the 3:1 guarantee on the AFF by claiming we had too much pre-compressed data. However, we also sort of feel like they really want to prove it and will do everything in their power to make it work. The problem is we just have no capacity for this to be an experiment; it needs to work, and continue to work, from day 1.

Thanks Ants
May 21, 2004

#essereFerrari


Do they have any reference customers they can put you in touch with? If there are no other NetApp HCI users in your city/state running Hyper-V, then it's probably also worth considering how that affects being able to employ staff/contractors with the required skills.

Everything I can see points to NetApp's solution being a VMware one - the deployment guide makes no mention of Hyper-V at all, and the nodes come with vSphere preinstalled. I don't think some presales guy installing Windows Server into VMware counts as showing you Hyper-V working on their product.

Thanks Ants fucked around with this message at 13:46 on Feb 12, 2020

greatapoc
Apr 4, 2005
We've asked them three times now, most recently today, to give us some references we can talk to candidly about their experiences with Hyper-V, and they've said they'll get back to us.

Indeed, we've seen the same with all of their documentation regarding VMware. Hell, the second step of the installation process is pointing it at your vCenter. The hardware demo involved him fully installing ESXi in trial mode, then opening the IPMI to point it at the Windows Server ISO to erase it. We understand there's tight integration and fancy management within vSphere, and that's fine - we have no issue managing it from the web browser. We just need assurances that we're not one patch away from Hyper-V totally making GBS threads itself, or just plain not working with our existing setup.

We're getting pretty sick of the runaround, but if it does everything it says it can do on paper, it's really the best solution for our needs. We want it to work, or we're in for a fairly major infrastructure overhaul.

Thanks Ants
May 21, 2004

#essereFerrari


There's nothing in their own documentation about anything other than VMware

https://docs.netapp.com/hci/topic/com.netapp.doc.hci-ude-17P1/GUID-53A3D8A9-71FF-40A4-B236-8CCDB0E36A67.html

I wouldn't feel comfortable deploying it. Perhaps if the 90-day buyback comes with their own people on site deploying it for you and you raise some test cases to their support team during that time to see if they just get stuck at the first sign of something that isn't VMware running on it, but nothing I can see hints at this being a Hyper-V platform outside of the head of whoever wants to sell you it.

Thanks Ants fucked around with this message at 14:04 on Feb 12, 2020

Docjowles
Apr 9, 2009

+1 for being extremely skeptical of this. Reeks of a sales team trying to land a deal with zero concern for it actually being an appropriate solution. And once your production workloads are migrated, if it’s poo poo are you seriously going to go through the pain of rushing everything back off to return the thing before the deadline?

That said, I did have an old boss with a knack for capitalizing on situations like this. “Ok, you’re obviously desperate to show off this use case and open a new market. But we would literally be customer 1 or 2. We would consider deploying it and give all the references you want, if you knock 90% off that quote you just gave me. And commit some engineers to helping support it so that the references we end up giving are positive ones.” This got us some baller F5 and Cisco/NetApp FlexPod gear we never would have ordinarily been able to afford :v: (and a poo poo ton of bugs to work through) Definitely a gamble.

Docjowles fucked around with this message at 14:44 on Feb 12, 2020

Vanilla
Feb 24, 2002

Hay guys what's going on in th
It's common for Hyper-V to be lower down the priority list with all vendors. Expect fewer plugins, less interoperability, fewer features, and less prevalence in demo centers - all simply because VMware has the lion's share of the market. However, this is not your fault - you have to go with whatever works best for your chosen stack.

You could always ask for a proof of concept: no payment until it's working against a set of criteria. Quite often these still get weighted heavily towards the vendor... but this way around is a lot easier than trying to get your money back off a vendor. It also shows their confidence in the solution - will they put kit on the ground for 60 days? Will they have an engineer at HQ available to help, even just a few calls or a WebEx when you get a bit stuck?

Given zero references I think you're well within your rights to ask for a POC.

Edit; just seen your username. Great A POC. Fitting.

Vanilla fucked around with this message at 20:22 on Feb 12, 2020

H110Hawk
Dec 28, 2006
Tell your reseller it's a try-and-buy that you can back out of for any reason. Accept nothing less. Beat it up the day you get it - rip out cables mid-Hyper-V-migration with synthetic load generators pegging out the CPUs and disks. Pull a power supply. Pull all of the power supplies. See what happens. If it's hot-swappable, swap it: disks, controllers, cards, cables, etc. Make mistakes. Constantly. Write it down as you go.

If your reseller and vendor aren't actively nervous at your test plan, either you are not testing it hard enough or they are 100% confident in their product. By the end of testing you should be 100% confident in how the device is going to perform. Some of your testing should induce fault conditions, and that's fine - how the device recovers from them is extremely useful information.
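For the synthetic load generators in that plan, fio is the usual tool. A sketch that hammers a LUN with mixed random I/O, assuming it's mounted at /mnt/test - point it only at scratch storage:

```shell
# 8 workers, 70/30 random read/write at 8k, queue depth 32, one hour.
fio --name=beatup \
    --directory=/mnt/test --size=10G \
    --rw=randrw --rwmixread=70 --bs=8k \
    --ioengine=libaio --iodepth=32 --direct=1 \
    --numjobs=8 --group_reporting \
    --time_based --runtime=3600
```

Pull cables while that runs and watch the latency percentiles in fio's output; how long the array takes to return to baseline after each fault is the number that matters.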

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
I just imagine some salesman is asking their support staff if they can run Hyper-V in VMware right this minute. :eng99:

SlowBloke
Aug 14, 2017

Axe-man posted:

I just imagine some salesman is asking their support staff if they can run Hyper-V in VMware right this minute. :eng99:

You actually can (but not the other way around).

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

I think I've made this post before and I think it was in this thread, but NetApp HCI is not hyperconverged. There are compute nodes that are just 1/2RU blades that fit into the enclosure and have CPU and memory, and there are storage nodes which are just 1/2RU blades that fit into the enclosure and create a Solidfire cluster. The compute nodes form a cluster through the normal means of the hypervisor, the storage nodes create a completely distinct storage cluster, and the compute talks to the storage cluster over the network.

From a storage access perspective it is no different than if you bought solidfire and some servers and built a cluster yourself.

This means that you can run basically anything on it because there's no shared storage layer that is dependent on the hypervisor. If the compute nodes meet the hardware requirements for the OS/hypervisor then you can run it. You can run physical servers if you want.

The HCI-like capabilities are really just external orchestration of building and expanding the cluster, performing updates, and provisioning storage. That stuff is built out for VMware, but mostly not for anything else you want to run on there. Hyper-V will still run, but it will be a much more manual process to build and moderately more manual to manage.

If it's real cheap it may still be worth doing because it's no worse than buying servers and storage separately. If you want true hyperconverged with Hyper-V you need to look at Nutanix or Hyperflex. I don't think most customers really benefit from HCI though.

greatapoc
Apr 4, 2005

YOLOsubmarine posted:

I think I've made this post before and I think it was in this thread, but NetApp HCI is not hyperconverged.

Yes you have, and it was to me, and I thank you for it - it's great information. Everything you say is pretty much what we've been told by NetApp, and it's what I would like to believe; we would just like to see it in action. What they can't properly answer is what we "lose" by using Hyper-V, since everything is so tightly integrated with VMware, and whether there's any potential for an update from either NetApp or Microsoft to break the whole thing. We get the impression that all of their customers are using VMware and will be well supported with any issues, while Hyper-V will be best-effort.

They've found one customer running Hyper-V that we're going to have a phone hookup with early next week, so it'll be interesting to hear their experiences.

Digital_Jesus
Feb 10, 2011

Axe-man posted:

I just imagine some salesman is asking their support staff if they can run Hyper-V in VMware right this minute. :eng99:

You absolutely can, and I run a nested Hyper-V cluster on VMware for vendor-support reasons on some very-expensive-to-replace software.

Replication/failover works fine. Veeam backs up both the Hyper-V host OS at the VMware level and the Hyper-V guests as well.

Just gave the Hyper-V VMware guests some dedicated storage to talk to and it was off to the races.
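For anyone wanting to replicate that setup, the one prerequisite on the VMware side is exposing hardware-assisted virtualization to the guest - a per-VM checkbox in the vSphere client, or one line in the VM's .vmx file while it's powered off:

```
vhv.enable = "TRUE"
```

Without it, the Hyper-V role installs but the hypervisor won't launch any guests.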

Axe-man
Apr 16, 2005

The product of hundreds of hours of scientific investigation and research.

The perfect meatball.
Clapping Larry
If it has the hardware to support it, it makes sense. I'd be surprised if that doesn't give you weird issues, so if it's successful that's pretty neat - it would certainly allow for some vSphere solutions. I remember Hyper-V being way more finicky, but that might just be my limited experience.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

I need Ceph help.

What distribution of Linux should I run? What version of Ceph itself? I want it to be a straightforward install.

To go with that, what tutorial should I be reading? The getting-started guide on the Ceph site didn't help much with Debian 10, or with CentOS 7 after that.
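Not a distro recommendation, but the upstream docs have since standardized on cephadm, which sidesteps most of the distro-packaging pain by running everything in containers. A minimal single-node bootstrap sketch; the IP and device path are placeholders, and a systemd distro with Docker or Podman already installed is assumed:

```shell
# Fetch the cephadm bootstrap script (pin a release branch in real life).
curl -O https://raw.githubusercontent.com/ceph/ceph/master/src/cephadm/cephadm
chmod +x cephadm

# Bootstrap a one-node cluster: starts a mon and mgr in containers
# and prints dashboard credentials.
./cephadm bootstrap --mon-ip 192.0.2.10

# Hand a raw disk to Ceph as an OSD.
./cephadm shell -- ceph orch daemon add osd "$(hostname):/dev/sdb"
```

For a straightforward first install this is a much shorter path than the old ceph-deploy walkthroughs the getting-started guides were written around.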

Aunt Beth
Feb 24, 2006

Baby, you're ready!
Grimey Drawer

YOLOsubmarine posted:

I think I've made this post before and I think it was in this thread, but NetApp HCI is not hyperconverged. There are compute nodes that are just 1/2RU blades that fit into the enclosure and have CPU and memory, and there are storage nodes which are just 1/2RU blades that fit into the enclosure and create a Solidfire cluster. The compute nodes form a cluster through the normal means of the hypervisor, the storage nodes create a completely distinct storage cluster, and the compute talks to the storage cluster over the network.
We are transitioning to NetApp HCI from Cisco UCS, and holy gently caress is the HCI a simpler product if you're not some giant fuckoff shop with a dedicated UCS guy/guys.

That said, NetApp HCI is not true HCI, yes. They're actually defining a new segment, "disaggregated HCI," meaning well-orchestrated, wizard-driven deployment of servers and storage - and I actually really like the simplicity of a wizard to manage the environment without being forced to buy storage when I need compute, or compute when I need storage.

Thirdly, to whoever was getting a V7000: I maintained them in the field when I worked for IBM, and may god have mercy on you. I hope their code has improved in the past 3 years, but I doubt it, since IBM couldn't care less about hardware now that they can't directly use it to manipulate their stock price.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Aunt Beth posted:

We are transitioning to NetApp HCI from Cisco UCS, and holy gently caress is the HCI a simpler product if you're not some giant fuckoff shop with a dedicated UCS guy/guys.

That said, NetApp HCI is not true HCI, yes. They're actually defining a new segment, "disaggregated HCI," meaning well-orchestrated, wizard-driven deployment of servers and storage - and I actually really like the simplicity of a wizard to manage the environment without being forced to buy storage when I need compute, or compute when I need storage.

Thirdly, to whoever was getting a V7000: I maintained them in the field when I worked for IBM, and may god have mercy on you. I hope their code has improved in the past 3 years, but I doubt it, since IBM couldn't care less about hardware now that they can't directly use it to manipulate their stock price.
What, you're not a fan of Dojo Toolkit?

Aunt Beth
Feb 24, 2006

Baby, you're ready!
Grimey Drawer

Vulture Culture posted:

What, you're not a fan of Dojo Toolkit?
Is that what the V7000 UI is developed in?

Also, just because I'm feeling ranty: what they really should have done in storage, instead of mashing SVC tech into a lovely low-end array and calling it Storwize, is add more compute capability to XIV and get well ahead of the HCI curve. XIV's hardware architecture is/was amazingly well-suited to evolving into HCI. Alas, stock buybacks were more important than R&D.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Aunt Beth posted:

Is that what the V7000 UI is developed in?

Also, just because I'm feeling ranty: what they really should have done in storage, instead of mashing SVC tech into a lovely low-end array and calling it Storwize, is add more compute capability to XIV and get well ahead of the HCI curve. XIV's hardware architecture is/was amazingly well-suited to evolving into HCI. Alas, stock buybacks were more important than R&D.
XIV was groundbreaking when it was released, but the implementation was too feature-poor to compete with IBM's own SVC, and far too expensive to match Isilon's much more affordable attempt at the exact same architecture.

Aunt Beth
Feb 24, 2006

Baby, you're ready!
Grimey Drawer

Vulture Culture posted:

XIV was groundbreaking when it was released, but the implementation was too feature-poor to compete with IBM's own SVC, and far too expensive to match Isilon's much more affordable attempt at the exact same architecture.
As I understood it, it was never really meant to compete with SVC; it was a high-performance block storage device made of off-the-shelf x86 parts. SVC could front-end storage from anyone and everyone to smartly handle tiering, migrations, etc.
As a former IBMer I drank the Kool-Aid and am incapable of seeing the value in any EMC storage product, so I have no experience with Isilon other than seeing them in some data centers.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Just racked a Dell/EMC Unity XT 480F to demo.

I'll report back next week if it functions as intended or if it is a hot dumpster fire.


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Aunt Beth posted:

As I understood it, it was never really meant to compete with SVC; it was a high-performance block storage device made of off-the-shelf x86 parts. SVC could front-end storage from anyone and everyone to smartly handle tiering, migrations, etc.
As a former IBMer I drank the Kool-Aid and am incapable of seeing the value in any EMC storage product, so I have no experience with Isilon other than seeing them in some data centers.
I must have brain-farted writing that. I meant SVC+DS8000, which is the architecture a lot more clients in need of top-end performance ended up on.
