goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
I'm looking at implementing two EMC VNXe3150s paired with EMC Data Domain DD620s. This is in a VMware 5.0 Windows environment split between our main office in Seattle and a smaller office in Portland.

Our main office runs an IBM DS3300 SAN paired with one newer IBM x-series server and three older ones; the Portland office has just one IBM x-series server. Lovely Netgear switches, no VLANing. Backups are currently done at the file system level inside the VMs, which makes me extremely nervous. On top of that, we're paying some local IT company over a grand a month to maintain the BDRs and provide offsite backups. Not exactly an ideal setup.

I'm looking at replacing basically our entire infrastructure (servers, SANs, switches, backup appliances), but my questions are about the SAN/backup pieces. I want to use each office as the other's primary off-site backup location: replicate backups of Seattle to Portland and vice versa. That way I can eliminate the monthly charge we're paying for offsite backups and also have backups at the VM level.

Any thoughts on the hardware, implementation, pitfalls, etc. would be appreciated. Also, anyone have a recommendation for a cheap cloud service for dumping the backups to as a secondary off-site copy?


goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Not exactly SAN related, but I need some opinions. I'm in the early stages of an infrastructure refresh for our two offices. After my CFO balked at an initial ~$130k cost to implement new backups, storage, servers and networking all at once, I've broken the project down into stages, the first being backups. We're paying $2,300/mo for lovely backups that aren't at the VM level, and we need to get rid of that cost. I want to get our backups at the VMDK level, then replicate between our two offices.

After I threatened to go with an EqualLogic/PowerVault SAN paired with Veeam, EMC lowered their price to $30k for a pair of 12TB DD620s. I was just about ready to pull the trigger when Dell made an offer worth considering. They're trying to sell me a couple of VRTX boxes with two M520 blades and 12TB (I think) of disk, paired with AppAssure, as a backup solution with the potential to do more than just backup.

The original idea was to replace our backups, then move on to replacing our core switching and SAN (possibly implementing one in Portland as well) and grabbing a couple of R720s or something similar for both offices. However, this whole VRTX "shared infrastructure" thing at least sounds nice, and it sounds like a really simple way to tackle this huge project. The idea of having one hardware vendor is also enticing. Should I be running away as fast as I can, or is this a viable solution for a production environment? Is AppAssure a complete piece of poo poo? Should I be looking at two VRTX chassis in each office for more of a redundant setup?

Here's an idea of our Windows environment:

The main office has:
3x IBM x-series servers running ESXi 5.0
IBM DS3300 w/ EXP3000 iSCSI SAN (~9TB raw)
14 VMs including Exchange 2010, several DCs, several SQL application servers, and various file servers.

Smaller 2nd office has:
1x IBM x-series server running ESXi 5.0
4 VMs which, including the DC, are pretty much all just file servers.

The MPLS connection has a maximum throughput of 12Mbps between the offices, but it's affected by internet and site-to-site network usage.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Anyone ever use the ASM/VE (Auto-Snapshot Manager) software that's packaged with EqualLogic SANs for DR purposes?

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
So I've had this exact box (HP N54L running FreeNAS 9.2.1.2) connected to various ESXi hosts of different versions (4.0, 5.0 and 5.1) over the past several months. Each time it seemed rather finicky to get the thing connected, and I stupidly never kept track of what exactly I did to get it working, mainly because I wasn't working with production data. I basically changed iSCSI settings here and there and rescanned from VMware until it connected. Can't seem to get it connected to a host at the moment.

Anyway, I did a factory reset of FreeNAS, configured an IP address, DNS and gateway, and set up iSCSI by...

1) Creating a portal on the IP of the box, 192.168.0.32:3260
2) Setting up an initiator. I left the default of allowing all initiators and authorized networks; I later set the authorized network to 192.168.0.0/24.
3) Creating a file extent at /mnt/ZFS36/extent with a size of 3686GB. (I browsed to the directory; the file exists and is 3.6TB.)
4) Creating a target, then a target/extent association.

On the VMware side, I created a software iSCSI adapter, added a NIC and IP, and pointed it at the portal address. VMware picks up the target name but doesn't connect. There's got to be something simple here I'm overlooking...
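
For reference, here's roughly the VMware-side sequence as a PowerCLI sketch rather than my exact commands - the vCenter and host names are made up, and the filter for the software adapter is an assumption:
code:
# PowerCLI sketch: enable the software iSCSI adapter, point it at the FreeNAS portal, rescan.
# 'vcenter.local' and 'esxi01.local' are hypothetical names.
Connect-VIServer -Server vcenter.local

$vmhost = Get-VMHost -Name 'esxi01.local'
Get-VMHostStorage -VMHost $vmhost | Set-VMHostStorage -SoftwareIScsiEnabled $true

# Grab the software iSCSI HBA (assumes its model string contains 'Software') and add the portal as a send target
$hba = Get-VMHostHba -VMHost $vmhost -Type iScsi | Where-Object { $_.Model -match 'Software' }
New-IScsiHbaTarget -IScsiHba $hba -Address '192.168.0.32' -Port 3260 -Type Send

# Rescan; if the target/extent mapping on the FreeNAS side is right, the LUN should show up
Get-VMHostStorage -VMHost $vmhost -RescanAllHba | Out-Null
Get-ScsiLun -VmHost $vmhost -LunType disk | Select-Object CanonicalName, CapacityGB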

goobernoodles fucked around with this message at 21:46 on Jun 20, 2014

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Anyone know how to get IBM on the phone for a SAN issue without waiting for a callback? loving waiting for a callback.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
So it looks like Veeam backups last night filled up our SAN and locked up our hosts. Trying to get IBM on the horn, but has anyone dealt with this situation before? How can I delete these failed snapshots to get the hosts responsive again? :supaburn:
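
e: For posterity, this is the sort of thing I was poking at from PowerCLI to find and clear the leftover Veeam snapshots - just a sketch, assuming vCenter still responds, and the VM/snapshot names below are only examples. Removing a snapshot consolidates the deltas, which itself needs free space on the datastore, so clear something else off first if it's genuinely full.
code:
# List snapshots by size to spot the leftovers from last night's jobs
Get-VM | Get-Snapshot |
    Sort-Object SizeGB -Descending |
    Select-Object VM, Name, Created, SizeGB |
    Format-Table -AutoSize

# Remove a specific stuck snapshot once identified (names here are examples only)
Get-VM 'EXCH01' | Get-Snapshot -Name 'VEEAM BACKUP TEMPORARY SNAPSHOT' |
    Remove-Snapshot -Confirm:$false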

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Nice avatar.

It doesn't look like the SAN actually filled up. I think one of the LUNs changed its preferred path, the hosts weren't able to see it on the new path, and they became unresponsive. Ran IBM's "Redistribute Logical Drives" utility and everything came back up. God drat, I forgot how much I loving hate IBM support. They look for a reason to get off the phone from the moment you start talking to them.

Don't get me started on IBM and their firmware updates.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

cheese-cube posted:

What model are you using if you don't mind me asking?
DS3300 with an EXP3000 expansion. Full of 300GB and 600GB SAS drives.

The_Groove posted:

We run firmware on a ton of netapps that's approved/tested by IBM's cluster team, but is generally old enough that their website for firmware downloads has aged off the version we need by the time it's been approved.
When my old boss quit and I inherited this lovely infrastructure, the firmware on everything was waaaaaaay out of date. IBM gave me instructions on doing the upgrade. They didn't mention that the firmware was so old I needed to hop versions first. That bricked one of our servers.

It happened a second time. This time I told them what happened on the other server and specifically asked if we needed to hop versions. The dude assured me we didn't have to. Bricked the primary UEFI*. Ugh.

goobernoodles fucked around with this message at 04:02 on Jul 30, 2014

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Not sure if I should post this in the Windows server thread or here, but... anyone have a software recommendation or method to bidirectionally sync a couple of shares from a Windows file server to another SMB share? I'm looking into potentially implementing Egnyte, which requires your files to be on their pre-built VM to sync to their cloud location. Since not everyone in my company is going to need this, I was thinking of getting a relatively cheap ReadyNAS and syncing our file server shares to it, which would in turn sync to the cloud location. I just need something to keep the existing file server and the NAS in sync locally.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

cheese-cube posted:

Any reason why you can't use DFSR?
Haven't played with DFSR in a few years - can a replication target be a non-Windows server?

e: Definitely should have posted this in the Windows thread. Whoops.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Can someone recommend a 24-port switch that will be dedicated to iSCSI traffic? Pretty small VMware environment of 3 IBM hosts (QLogic HBAs) and an IBM DS3300 SAN running about 20 VMs.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Moey posted:

Are your hosts just connecting at 1 gig? What kind of switches do you currently use? You will probably want to keep these a similar brand just so working on them is similar. Make sure to budget for two switches so you have redundancy as well.

You will be able to find something from each vendor, so it really boils down to personal preference. I am really liking working with Juniper's stuff for the past year and a half.

A pair of the 24-port Juniper EX3300s would probably fit the bill (with 1GbE connections). Throw them into a Virtual Chassis and then you can manage both units as one logical switch.
Yeah, just 1Gb. I currently have Netgear GSM7248 and GS724AT switches on the rack that the hosts, iSCSI, and other things plug into. I'm hoping to get a full infrastructure refresh (servers, switches, SAN) into the budget for next year, but I want to go through our existing setup and redo the iSCSI networking as a stopgap until then. We've had occasional, ongoing issues with this setup and I really want to get the iSCSI traffic segregated.

On another note, are there any relatively cost-effective SANs that allow mixing flash and mechanical drives that I should look into? I'd like to be able to put the SQL databases for an application server or two and an RDS server on flash, and put the rest on cheaper disks.

goobernoodles fucked around with this message at 23:28 on Nov 14, 2014

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Cross posting this from the virtualization thread. Probably belongs here anyway:

goobernoodles posted:

One of my two offices has only one host on local storage, running a DC and some file, print, and super low-end application servers. It's a small office with about 20-30 people. The long-term plan is to replace the core server and storage infrastructure in our main office, then potentially bring the SAN and servers down to the smaller office to improve their capacity and give them enough resources to act as a DR site. Until then, though, I was planning on loading a spare host up with 2.5" SAS or SATA drives to get some semblance of redundancy down there, as well as to be able to spin up new servers to migrate the old 2003 servers to 2012. Right now there's ~50GB of free space on the local datastore; I'm looking for at least 1.2TB of space on the server I take down. I'm trying to decide what makes the most sense from a cost, performance, resiliency and future usability standpoint, while keeping everything under a grand.

The spare x3650 I have has 8 total 2.5" bays (I have 3x 73GB 10K SAS drives handy), but the downside is that 2.5" SAS drives are pretty spendy from what I've found so far. At least IBM drives are, anyway.

I've been considering grabbing another IBM x3650 with 3.5" trays for about $130 from a place a few blocks away, since for some reason I have 4x 500GB IBM 7.2K SATA drives lying around. No idea why; we don't have any IBM servers with 3.5" bays. :iiam: At that point, though, if I chose to go SATA, I might as well load the thing up with much larger drives since they're so cheap.

I was thinking of installing either ESXi or FreeNAS, though I'm open to trying something else to present the storage. I also have a spare SAS controller, plenty of memory and a couple of HBAs. I've never actually tried it - you can mix SAS and SATA drives on the same controller, right, assuming they're in different RAID arrays?

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Thanks Ants posted:

Egnyte has quite a few customers but it gets really expensive once you put enough features on to make it workable in an AD environment.
Egnyte is one of my long-term potential projects/solutions that I haven't gotten to yet. I'm considering buying a cheap NAS or some other form of storage to run the proprietary Egnyte file server VM off of, the idea being to replicate from our existing file server to the Egnyte one. The point would be to avoid having our file server entrenched in a subscription service and to limit licensing costs to only the people who need to access files remotely. Not sure how I'll replicate between the two without problems, though. Robocopy?
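
Something along these lines is what I'm picturing for the replication - a rough sketch only, with hypothetical paths, and note robocopy is strictly one-way (existing file server to the Egnyte VM), not a true bidirectional sync:
code:
# One-way mirror from the existing file server to the Egnyte VM's share. Paths are made up.
# /MIR deletes destination files that no longer exist on the source, /FFT tolerates coarser
# SMB/NAS timestamps, /R and /W keep retries short, /NP keeps the log readable.
$source       = '\\FILESERVER\Shares'
$dest         = '\\EGNYTE-VM\Shares'
$robocopyArgs = "$source $dest /MIR /COPY:DAT /FFT /R:2 /W:5 /NP /LOG+:C:\Logs\egnyte-sync.log"

# Register it as a nightly scheduled task (Server 2012+ cmdlets); add credentials/principal as needed
$action  = New-ScheduledTaskAction -Execute 'robocopy.exe' -Argument $robocopyArgs
$trigger = New-ScheduledTaskTrigger -Daily -At 1am
Register-ScheduledTask -TaskName 'Egnyte share mirror' -Action $action -Trigger $trigger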

...Anyone using FreeNAS for production file servers?

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
I'm going to have to read into that one a little more. If files copied to the share via robocopy or something along the lines of rsync won't be synchronized to the ~*~ cloud ~*~ then that definitely won't work for me.

On another note... has anyone paired servers with direct-attached storage and created an extremely cheap "SAN" using FreeNAS? Up to this point I've only used FreeNAS with the server's built-in capacity. I was thinking of using a PowerEdge 2900 paired with an MD1000 or MD3000 to really give me room to add capacity and redundancy. Anyone know if DAS enclosures only work with their own manufacturer's servers? Could I use an IBM EXP3000 with a Dell server, or an MD1000 with an IBM server? I'd only be using the storage these servers provide for archive file servers that are backed up elsewhere as well, and for a test lab environment.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Rhymenoserous posted:

If you are using Server 2012, BranchCache is a possibility: https://technet.microsoft.com/en-us/library/dd425028.aspx
Honestly, DFS might work, but BranchCache could be a good alternative if DFS ends up eating too much bandwidth. I'm getting 50Mbps fiber (up from 20Mbps EoC) this month, god willing, paired with 100Mbps microwave (burstable to 1Gbps) as a secondary connection (whenever they finally put up the equipment on the building we relay off of...), so the bottleneck should be the job site connections. I was thinking of having a cheap server like an HP N54L running 2012 R2 Essentials or Standard act as a DFS target for our network shares (or perhaps just the job folders they need on-site), as well as replicating my WDS deployment share so I can reimage computers at the sites. I could potentially save a couple hundred bucks and set it up as a router and firewall with an IPsec tunnel back to the office, I suppose - or would that be a massive security risk? The alternative would be a Sophos RED 10, which runs about $200-250 IIRC.
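
For the DFS piece, this is roughly what I'd set up with the Server 2012 R2 DFSR cmdlets - a sketch only, with made-up server, group, folder and path names (and it assumes the DFS Replication role is installed on both boxes):
code:
# Two-member DFS Replication group between the office file server and the job-site N54L.
Import-Module DFSR

New-DfsReplicationGroup -GroupName 'JobSiteShares'
New-DfsReplicatedFolder -GroupName 'JobSiteShares' -FolderName 'Projects'
Add-DfsrMember -GroupName 'JobSiteShares' -ComputerName 'FILESERVER', 'SITE-N54L'

# Connections are created in both directions by default
Add-DfsrConnection -GroupName 'JobSiteShares' -SourceComputerName 'FILESERVER' -DestinationComputerName 'SITE-N54L'

# Point each member at its local copy; the office server holds the authoritative copy for the initial sync
Set-DfsrMembership -GroupName 'JobSiteShares' -FolderName 'Projects' -ComputerName 'FILESERVER' -ContentPath 'D:\Shares\Projects' -PrimaryMember $true -Force
Set-DfsrMembership -GroupName 'JobSiteShares' -FolderName 'Projects' -ComputerName 'SITE-N54L' -ContentPath 'D:\Shares\Projects' -Force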

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Gwaihir posted:

That reminds me that I have a shitload of IBM DAS boxes to test out on some old R710s too! 144 * 139gig 15k disks isn't exactly the latest and greatest, but it's not like I care about the power or cooling bill :v:
:getin:

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

NippleFloss posted:

I work for a VAR, excitement about technology is a big factor in driving sales. Things like Pure or Solidfire or Nutanix where you can wow people with a technology presentation or a whiteboard or have an engineer explain all of the cool and unique stuff that they are doing generate excitement from the technical folks in the room, and those are often the ones making the recommendations or controlling the direction the purchasing conversation takes. And those people go out and evangelize to other people that they know in the industry and suddenly they want to know about whatever the cool technology of the month is and they want to buy it when their next purchase cycle comes around. Their actual needs are often a much lower priority than being sold on cool technology, or features they don't actually need.

The fundamental problem that Nimble has right now is that their pitch is "it's simple and has flash!" which is true of a whole lot of other things out there now and those things might also do file or object protocols or have a much more polished ecosystem or simply seem more interesting from a technical standpoint. We had a customer that picked Tegile over Tintri (probably my favorite storage array out there, and it's radically simple and low touch) BECAUSE it didn't have enough knobs and dials. They wanted something they could tinker with and "tune" rather than something that they could plug in and promptly forget the password to because they'd never need to touch until it was time to retire it. Some people don't want "thing that just works" they want something *interesting*.

If you compare them feature to feature against competing products Nimble is lacking. No deduplication, inline or post-process. No file protocols. Limited scale out. Active/Passive design. No QoS. Read cache only on SSD. Years late to the all flash game. Limited 3rd party integration.

The fact that it works really well at doing the basic job of a storage array, serving up storage in a consistent and performant manner and protecting data integrity, doesn't really show up on a comparison sheet. This is partly because it's hard to market "it actually works" because everyone says that and also because for the most part all of their competitors also work. I think Nimble is, dollar for dollar, more performant than any other hybrid array out there (Tintri is really fast, but starts at a higher price point), but any of them can perform well enough for any customer and pricing can be massaged to make it competitive, generally. So what ultimately wins the sale is the whiz bang factor, or the feature list, or the comfort level of going with an established provider that isn't a startup with a $7/share stock price. Five years ago, when Nimble was founded, they were a unique offering, but they haven't changed much in five years, and the industry has. Hybrid storage isn't disruptive, it's normal. Tens of thousands of IOPs in a small array isn't exceptional, it's normal, because SSD is really cheap.

And, like I said earlier, the enterprise storage market is shrinking. There are fewer deals out there and more competitors and Nimble just isn't well positioned to win a lot of those deals. They NEED to make money to stay afloat, which is something they've never done, and they've got to do it in an incredibly competitive and shrinking market where they don't have the most interesting technology and they have less money than all of their publicly traded competitors. They're in a bad spot, as a company. That's no reflection on the quality of the arrays, which are very solid, but sometimes good products still fail.
Great post, and I'm in the middle of trying to pick the best route for our infrastructure. I made a post two weeks ago at the end of a day when my brain was fried:

goobernoodles posted:

Anyone have any strong opinions on which is “best” out of these options?

Option 1 ($58k) – HP DL360 G9s paired with an HP MSA 2040 connected directly with 12Gbps SAS cables. 384GB of RAM, 48 physical cores, and 2x 10Gb NICs and 2x SAS HBAs per server. 14TB usable capacity, 2x 400GB SSDs for read cache. I guess you can pay for a license that adds automatic data tiering and enables write caching to the SSDs.

Option 2 ($66k, probably more like $70-80k) – 3-node Nutanix solution with 24TB raw capacity, 384GB memory, 36 physical cores. They claim 50% usable capacity, which puts us at 12TB usable - cutting it too close. This was quoted at $65k and it sounded like they were going to drop it further. Sounds like if I wanted one for Portland as well, I might be able to get two for around $100k. Waiting on a quote for the next bundle up with 32TB raw, 8-core procs and more memory; hopefully that's about $70-80k.

Option 3 ($64k) – The HP DL360s mentioned above, but paired with a Nimble CS235 with 24TB raw. They claim 24TB usable due to compression.

I'm going to dedicate some time to reading into all of these options (I was also quoted a NetApp SAN for around the cost of a Nimble), but as of right now it seems like both the Nutanix and HP/Nimble options are pretty compelling. Nutanix's pricing is apparently the lowest it's ever been because their quarter ends at the end of the month, and it's their last quarter before they go public. Both sound great from a support standpoint, although Nutanix has a leg up there since it's just one vendor - a huge plus for me, since it's currently just me here and I'm getting buried alive. If you want to continue reading for more info on the environment, feel free;

Commence rambling:

I've been hoping to replace my company's primary servers and storage for years, but due to budget constraints, a poo poo ton of major issues that needed to be ironed out, and long-overdue big projects, it's been continually pushed back. I'm also completely overloaded and actively trying to find a helpdesk guy to shield me from being interrupted every goddamn 5 minutes. It looks like it's finally going to happen here pretty soon. The company has a little over 200 total employees, with about 110 workstations. The Seattle office has 3 IBM x3650 servers (original, M2 and M3 versions) in production, paired with an IBM DS3300 iSCSI SAN with 7.1TB usable capacity - 28 physical cores with 172GB of memory, ~30 VMs, and Veeam backups to a Quantum dedupe appliance (DXi4601). The Portland office has a single, ancient IBM x3550 M2 and Veeam backups to another DXi. CPU isn't a concern; we're typically at about 25-35% utilization. Memory is at 90-95% all day, every day. I'm using more storage than I have on our SAN, in production, only because I've shifted some low-importance production servers, test machines, etc. to FreeNAS storage. I have about 9TB of crap storage I can use in a pinch in each office.
For background, we recently got a 1Gbps Comcast layer 2 fiber connection between the two offices, along with a 50Mbps fiber internet connection in Seattle and a microwave connection that should be upgraded within the month to be capable of bursting to 1Gbps. I also recently got a Comcast business coax connection installed in Portland for web traffic and as an additional backup connection that I could bring a RED tunnel or IPsec tunnel up on if the fiber were to go down. Network hardware consists of Sophos UTMs at each site and HP ProCurve switches - Seattle has a 5412R and Portland a 2920, IIRC. UniFi APs.

Seattle

• Web app for accounting, cost projections, AP, HR, etc. SQL-based, Tomcat front end. Max 21 concurrent users.
• Another server that runs reports off of that SQL DB.
• Exchange 2010 with a mailbox database that just surpassed 1TB. That number should drop massively once I have the storage to create new mailbox DBs to migrate mailboxes into (see the sketch after this list). No quotas were ever implemented, and now the database is too big to offline defrag without it taking forever.
• Another SQL-based application server – max 10 users.
• Handful of other small application servers with 5-10 users each.
• DCs, DHCP, print, a couple of file servers, WDS/MDT server, WSUS...
• RDS gateway and broker server and a separate session host server. I'm currently piloting this and holding off until we have the new hardware before opening it up to all staff. If usage takes off, it could be a resource hog.
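
The mailbox database split I keep referring to is basically this, from the Exchange 2010 Management Shell - a sketch with hypothetical database names and paths, not the exact plan:
code:
# Create a new mailbox database and move a batch of mailboxes into it so the bloated
# 1TB database can eventually be retired. Names and paths below are made up.
New-MailboxDatabase -Name 'MBX-DB02' -Server 'EXCH01' -EdbFilePath 'E:\ExchangeDatabases\MBX-DB02\MBX-DB02.edb' -LogFolderPath 'F:\ExchangeLogs\MBX-DB02'
Mount-Database 'MBX-DB02'

# Move mailboxes out of the old database in batches; move requests run in the background
Get-Mailbox -Database 'MBX-DB01' -ResultSize 200 | New-MoveRequest -TargetDatabase 'MBX-DB02'
Get-MoveRequest | Get-MoveRequestStatistics | Select-Object DisplayName, Status, PercentComplete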

Portland is a babby.

They have a couple of dinky application servers, but I fully plan on consolidating those into servers in Seattle as I migrate the Seattle applications to 2012 R2. There's no real need to keep them segregated when we have a 1Gbps connection to Portland. I also plan on combining the Portland file server with the Seattle one, which adds about 1.5TB to the capacity needs before taking into account any compression or deduping. Basically, in the long run, there isn't going to be much in Portland other than supporting servers: a DC, DHCP, a DFS copy of the Seattle file server, downstream WDS, WSUS and (maybe) Kaspersky servers, and a Veeam proxy server if it makes sense.

Basically looking for any different perspectives. The potential for implementing a similar solution at both sites (such as two Nutanix setups) is there, and I need to at least pitch the idea. I didn't really think it was a possibility until I had an informal 30-minute chat with the CFO about where I am with looking into this poo poo. I haven't really thought too much about what exactly that would get us. DR site, what else? The original plan was to move our existing IBM garbage to Portland after implementing the new gear in Seattle.

...Jesus christ. I really need to go home now.
I'm not sure if I caught the CFO on a good day today or what, but I basically got a blank check to pull the trigger on any of the 3 solutions we've been looking at - for both offices. I know this has been a fairly good year for us and she's got some budget to work with. Sounds like if I keep it under $150k she'll pretty much sign off on the purchase.

As of this morning I was leaning towards Tegile or Nutanix. Tegile seems to have a leg up on Nimble, at least in my mind right now, due to far more usable capacity for a bit less money. When you factor in the additional protocol support that I initially didn't put much weight on, it really looks like a pretty flexible option. It would mean I could do oddball backup jobs, like our email archive server that requires an SMB share, directly to the SAN. I'm thinking I could vastly decrease the RPO by using snapshots for day-to-day backups, as well as decrease the RTO for those scenarios where we need to revert a VM or recover files. Right now that's a bit of a chore just due to having to wait 1-5 minutes for Veeam to mount a backup before I can recover a file. Also, I'm hoping that whatever solution we move forward with will let me eliminate the support costs for the Quantum DXi4601s that currently act as our Veeam storage targets. We have two, with Veeam backup replication jobs taking care of the replication between our two sites. Support is over $8k annually, which blows my goddamn mind. I need to confirm they're not going to just turn into bricks if they're out of support, but I figure I can relegate them to much longer-term backups with site-to-site replication. This project has blown up into one that's vastly larger than I anticipated now that the CFO is open to opening up the wallet for both sites, and I'm scrambling to make sure I'm not shooting myself in the foot with regard to BCDR with any of these options.

We can do storage-level replication with any of the 3, plus Veeam replication, VMware replication... is there a reason that decision should be made before choosing one of these? Sounds like there are some relatively minor differences between the options as far as granularity, but at least from the storage side of things they're all effectively pretty similar. I could ramble on incoherently for a while about all of the other things we could potentially do with each solution, but I simply don't have the time to figure out every single possibility and what's the best fit for us. Unless I'm missing something, the introduction of flash has really made finding the "best" solution less about sheer by-the-book performance numbers, since there's no real way to know how a proprietary file system will behave for any given workload, no? Going down that rabbit hole thus far has led to a circle-jerk of counter-arguments, usually coming down to "WELL OUR FILE SYSTEM IS BETTER, YOU'LL SEE" :smugbird:

I'm waiting on Nimble to quote a CS300 with around 36TB of usable capacity, even though that's way more storage than I was aiming for originally. I was thinking 15-20TB would be a good place to start.

I can put a Nutanix cluster with 96 logical cores, 768GB RAM, and ~18TB usable in place in Seattle, alongside a smaller cluster with 72 cores, 384GB RAM, and ~12TB usable in Portland. The comparable solutions with HP servers and Nimble/Tegile arrays, combined, come in roughly $10-15k less. When you factor in the potential for increased consulting costs with any of the server/SAN options, it's pretty much a wash as far as cost goes.

The #1 question that I have no real idea how to answer is which one is fundamentally the strongest from a storage perspective. Logically, it seems like the Nutanix approach of trying to localize data to the host running the VM may produce the best performance, since most reads and writes go to direct-attached storage, eliminating a lot of "hops." While the main argument other vendors make is that you've got to carve out CPU and memory from the nodes for the virtual storage controller, that also gives us a good deal of flexibility to increase RAM/CPU if necessary. The biggest question mark for me is whether a virtual storage controller hitting direct-attached storage will actually perform better than the SANs. That, and Nutanix has SATA drives whereas the Tegile and Nimble are... SAS? Not sure on those - I just shot those questions off to the vendors. It's a poo poo-ton of money to just throw a dart at a wall.

It's hilarious that I'm posting an increasingly similar (incoherent) post to my last one at nearly the exact same time of day, but I need to leave to go to the Space Needle or something. Holy gently caress, trying to write this entire post while people sand a conference room table with a sander made for floors was a bad idea.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

NippleFloss posted:

Which is best from a storage perspective depends on what your main criteria are for data storage. They are all better in some areas than others.

Nutanix - No storage management. Once it's up and running (which takes a few hours at most) you aren't managing storage at all, you're simply creating VMs on a datastore. There's no provisioning LUNs or managing VMFS filesystems. Of course, you can still take snapshots and configure replication, but datastore management is non-existent. You also have per VM granularity for those operations, so per VM snapshots and replication, which can be really useful when replicating for DR since you don't have to replicate a whole set of VMs that reside in a volume just so you can turn one of them on at the remote site. The downside of Nutanix that we've seen is that it can struggle with monolithic workloads (think large SQL servers, i.e. anything that drives a lot of IO from a single VM) and the cache really needs to be sized correctly to ensure that you have a sufficient amount for hot data. Data locality (having cache on the servers) is nice in theory, but in practice the interconnect is only a small portion of overall latency in an IO, so having the data directly on the node is only noticeable for very high IO operations where the difference between a quarter second and a half second is tens of thousands of IOPs. The IO also still has to traverse the drive bus, which is slower than NVMe or direct memory access.

Nimble - Rock solid, very performant, and a good ratio of raw to usable storage before data reductions. If you're comfortable with iSCSI already it's a very good option that's pretty low touch and that you probably won't ever have to worry about too much. It is limited from a features perspective, though, so if you want to run file protocols, or you like the per VM management features of the Nutanix, it's going to leave you wanting more. But the basic feature set works very well, and it now has VEEAM support, which is big if you want to use VEEAM. In place controller and cache upgrades are very easy on anything above the CS215. You get more usable cache since it isn't protected, but if you lose a cache drive you lose all of that data and performance can go south very quickly. Double edged sword.

Tegile - Very flexible, lots of features, fairly easy to manage. We've seen some performance issues here as well and it's very sensitive to proper cache sizing, especially the metadata cache. Tends to fall over if you overrun the metadata cache since that's also used to store the inline deduplication table. If your performance requirements are moderate and you want to be able to do block and file from the array it's a good option. It will be more complicated to manage than either of the other two, largely due to the additional features, but also because they hide a lot of the complexity behind professional services and it will occasionally become obvious if you need to engage them for a case. Tegile also doesn't support in place controller upgrades at this time, so if you need to refresh down the road you're looking at a forklift upgrade right now. That may change in the future.

Were it me I'd probably do Nutanix if my workload was mostly distributed across a number of moderately busy VMs, and Nimble if my workload was a few large, heavy hitting VMs. Tegile only if I really liked the multi-protocol feature set. But you'll probably be fine and happy with any of them, because most people are.
Thanks for this. Very helpful.

What would you guys recommend if I wanted to say... supplement a Nutanix cluster with a cheap SAN for archive storage?

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Thanks Ants posted:

NetApp E-Series or IBM v3700
Staying the gently caress away from IBM SANs for the rest of time. That's the second recommendation for the E-Series in the last day. I'll check out the pricing. Thanks.

goobernoodles fucked around with this message at 16:16 on Jun 2, 2016

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Welp, pulled the trigger on 3x Cisco UCS C220 M4 servers with 2630 v3s and 1.2TB of RAM, plus a Tegile T3530, along with 2x servers and a T3100 array for our 2nd office. Cha-ching.

e: Well, that and an HP 5406R for the 2nd office and 10Gb SFP+ modules for both offices. Should be quite the upgrade from our 8-year-old x3650s and DS3300.

goobernoodles fucked around with this message at 04:28 on Jun 11, 2016

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Docjowles posted:

Why the Cisco C series vs rando rack mount boxes from HP/Dell, out of curiosity? Did they come in with a super competitive price? UCS is cool but the extreme flexibility seems kinda wasted, and like it's just adding complexity/points of failure on a three node deployment.

Not trying to be a nitpicky rear end in a top hat; genuinely interested :)
Waaaaay better pricing than the HPs we were looking at. It was something like 5 grand less, or more, for an apples-to-apples comparison with 3x hosts and 256GB of memory. They offered a cheap 3rd-party memory option which could have reduced the price further. Regardless, I was able to bump up to 368GB of memory per host and still stay lower than HP. (I think)

I started talking to this VAR for network consulting. Since they're also in the server/storage market, I heard them out even though I was pretty far into the conversation with Nimble and Nutanix. It turned out the owner of the company is a client of ours - and he owes us a bit of money. While our CEO definitely gave me a little hint that he wanted me to go with these guys if everything was apples to apples, I was hesitant until they came back with a T3530 with a 3.5TB all-flash tray. That pretty much took performance out of the equation.

Mr-Spain posted:

That's the switch I rolled for my backend, how did you get it configured?
Are you just talking about the physical configuration? Dual management controllers and PSUs, one 8x SFP+ module for servers and storage, and 3x 20-port 1Gb modules with 2x SFP+ each. I got another 8x SFP+ module and a few more of the 20-port, 2x SFP+ modules to put into our 5412R at the main office as well. I wanted some more 1G ports in both offices, plus a few more SFP+ ports than I need right now, for flexibility down the road. I think all of that came out to $14k - the switch is HP Renew. I forget if the modules and whatnot are new or not, but I guess it doesn't really matter with the whole lifetime warranty thing. There's a company I go through in Seattle that has consistently had the best pricing on HP gear; I can share the name if anyone is interested.

goobernoodles fucked around with this message at 17:32 on Jun 13, 2016

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Being in the construction industry, our file servers are about 70% images. While I've done one-off image resizing and compression runs, I have to talk with departments before and after in order to avoid situations where images get resized to something too small - they need to be able to zoom in and see minor details.

Does anyone have a recommendation for, preferably, a command-line utility that I could set up to compress photos at 90% quality on a schedule? I want to set it up so it runs every day or week, whatever makes sense. It looks like there are plenty of options; I'm looking for first-hand experience.
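
Something like this is what I'm picturing - a rough PowerShell sketch using System.Drawing rather than any particular utility; the share path and size threshold are made up, and since it re-encodes JPEGs in place I'd want to test it against copies first:
code:
# Re-save JPEGs above a size threshold at ~90% quality. Hypothetical path/threshold.
Add-Type -AssemblyName System.Drawing

$share    = '\\FILESERVER\Projects'   # hypothetical share
$quality  = [long]90                  # JPEG quality to re-encode at
$minBytes = 2MB                       # skip files that are already small

# JPEG encoder plus a quality parameter for Image.Save()
$codec   = [System.Drawing.Imaging.ImageCodecInfo]::GetImageEncoders() | Where-Object { $_.MimeType -eq 'image/jpeg' }
$qualEnc = [System.Drawing.Imaging.Encoder]::Quality
$params  = New-Object System.Drawing.Imaging.EncoderParameters -ArgumentList 1
$params.Param[0] = New-Object System.Drawing.Imaging.EncoderParameter -ArgumentList $qualEnc, $quality

Get-ChildItem $share -Recurse -Include *.jpg, *.jpeg |
    Where-Object { $_.Length -gt $minBytes } |
    ForEach-Object {
        $img = [System.Drawing.Image]::FromFile($_.FullName)
        $tmp = "$($_.FullName).tmp"
        $img.Save($tmp, $codec, $params)    # write the re-encoded copy
        $img.Dispose()                      # release the lock on the original
        Move-Item -Path $tmp -Destination $_.FullName -Force
    }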

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Internet Explorer posted:

I really don't know much about image compression and haven't worked in that kind of environment, so there goes the first hand experience requirement, but...

There was a really interesting article recently about a compression that Dropbox developed that they just recently open sourced. May be worth a look? https://blogs.dropbox.com/tech/2016/07/lepton-image-compression-saving-22-losslessly-from-images-at-15mbs/
Interesting, I'll read into it. Thanks.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

GrandMaster posted:

Hah, ours sounds the same - tons of civil engineering related photos, saved at the highest possible resolution.

I use this powershell script:
http://poshcode.org/621

You set the minimum horizontal resolution; anything smaller gets left alone and anything larger gets resized down to that minimum. Reduced our total image storage usage by about 70%.
Hell yes, that's what I'm talking about. Thanks man!


goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
Anyone have any experience with HP's StoreVirtual VSA? I'm only looking at it because I need cheap servers and shared storage for a site that was previously set up with no budget. They're running on a $50 PowerEdge 2950 right now, and I'm trying to get approval to at least buy some refurb servers and shared storage of some sort. Open to any suggestions.
