|
Very nice, thanks! We're going to be moving our server lineup to ESX 3i, and we're trying to determine whether we can get by with DAS or if we need a SAN. We're a small company, but we have a couple of DB-centric apps. Here's the setup:

- 1 SBS 2003 box, serving ~5 users
- 1 web-based management system with a SQL backend
- 1 ticketing system with a SQL backend
- 1 Server 2k3 box as a backup DC

We'd create a new VM to host SQL for both the management and ticketing systems, so the load would go down on those VMs but another one would be picking up the slack. The new server would be a DL360 G5 with dual quad-core Xeons and 16GB RAM. Our current switch is a POS Linksys 16-port managed gigabit job - it's fine for what we're doing now, but I'm not sure about iSCSI traffic. Our storage and I/O needs aren't going to change a whole lot in the next few years.

The DAS we're looking at is an MSA60 with ~900GB usable space (12x 146GB SAS drives in RAID 10). For a SAN, we'd be looking at the MSA 2000 series running iSCSI, probably with a similar drive setup. Any thoughts on the best bang for the buck? The SAN alone is $15k+, while DAS + the server comes in at $11k.

edit: we're an HP shop.

Richard Noggin fucked around with this message at 12:51 on Aug 29, 2008
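As a quick sanity check on the quoted usable space - a sketch that assumes RAID 10 simply halves raw capacity and ignores formatting overhead:

```python
# RAID 10 mirrors every drive, so usable space is half the raw total.
def raid10_usable_gb(drive_count: int, drive_gb: float) -> float:
    assert drive_count % 2 == 0, "RAID 10 needs an even number of drives"
    return drive_count * drive_gb / 2

print(raid10_usable_gb(12, 146))  # 876.0 -- the quoted "~900GB usable"
```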
# ¿ Aug 29, 2008 12:49 |
|
|
I suspect the best use case of vSAN is extending the value of previous capex by repurposing hardware that already meets the requirements, not buying new.
|
# ¿ Jun 27, 2014 20:08 |
|
sudo rm -rf posted: I've got a budget of ~10k. Need to get an iSCSI solution for a small VMware environment. Right now we're at 2-3 hosts, with about 30 VMs. I can do 10GE, as it would hook into a couple of N5Ks.

8x 2TB NL-SAS drives for 30 VMs? Have you done any IOPS analysis? I've seen bigger setups fail with more drives and fewer VMs.
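To put rough numbers on the IOPS question - a back-of-envelope sketch where the per-drive IOPS and the RAID 10 write penalty are rules of thumb, not vendor figures:

```python
# Effective front-end IOPS for a RAID group, given a read/write mix.
# Assumed: ~75 IOPS per 7.2k NL-SAS spindle, write penalty of 2 for RAID 10.
def raid_iops(drives: int, per_drive_iops: int,
              read_fraction: float, write_penalty: int) -> float:
    raw = drives * per_drive_iops
    return raw / (read_fraction + (1 - read_fraction) * write_penalty)

# 8 NL-SAS drives at a 70/30 read/write mix:
print(round(raid_iops(8, 75, 0.70, 2)))  # ~460 IOPS, or ~15 per VM for 30 VMs
```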
|
# ¿ Jun 28, 2014 00:02 |
|
sudo rm -rf posted: No. This is all new to me (a theme for most of my posts in these forums, haha). How is that usually done?

Then why get a SAN?
|
# ¿ Jun 29, 2014 16:36 |
|
sudo rm -rf posted: Because our Domain Controllers, DHCP Server, vCenter Server, AAA Server and workstations are also VMs, and at the moment everything is hosted on a single ESXi host. I can get more hosts, their cost isn't an issue - but additional hosts do not offer me much protection without being able to use HA and vMotion. That's a reasonable need, yeah?

See, that changes the game. You said earlier that you needed the storage for 30 infrequently used VMs that were basically jumpboxes. Without knowing the rest of your environment, I'd be inclined to tell you to put the workstation VMs on local storage and your servers on the SAN, but at least get 10K drives. As far as general cluster design goes, our standard practice is two-host clusters: we add more RAM or a second processor if need be before adding a third host.
|
# ¿ Jun 29, 2014 21:40 |
|
Moey posted: I still prefer a minimum of 3 hosts. That way if I put one into maintenance mode, I can still have another host randomly fail and poo poo will still restart. Not too likely, but servers are pretty cheap nowadays.

The amount of time that a host spends in maintenance mode is very, very small. Wouldn't you rather take that other $5k you would have spent on a server and put it into better storage/networking? Just to give everyone an idea, here's our "standard" two-host cluster:

- 2x Dell R610, 64GB RAM, 1 CPU, 2x quad-port NICs, ESXi on redundant SD cards
- 2x Cisco 3750-X 24-port IP Base switches
- EMC VNXe3150, drive config varies, but generally at least 6x 600GB 10k SAS on dual SPs
- vSphere Essentials Plus
|
# ¿ Jun 30, 2014 22:56 |
|
KS posted: I tend to agree that running multiple small clusters is non-optimal. You have to reserve 1 host's worth -- or 1/n of the workload, where n = number of hosts -- of capacity in the cluster in case of failure. The bigger your clusters get, the more efficient you are. It is not efficient to run a bunch of 2-node clusters at 50% util, compared to one big cluster at (n-1)/n percent util.

We don't run multiple small clusters. This would be per-client. Our clients, just like a lot of posters here, don't have unlimited resources, so it's a bang-for-the-buck type of thing. The choice between better hardware and a server that sits idle all day long is pretty much a no-brainer.
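KS's reserve math, as a quick sketch (assumes identical hosts and N+1 protection):

```python
# Fraction of total cluster capacity you can safely use while still
# absorbing one host failure (identical hosts, N+1 protection assumed).
def max_safe_utilization(hosts: int) -> float:
    return (hosts - 1) / hosts

for n in (2, 3, 4, 8):
    print(f"{n} hosts: {max_safe_utilization(n):.0%} usable")
```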
|
# ¿ Jul 1, 2014 12:35 |
|
NippleFloss posted: If you don't have the equivalent, resource-wise, of one server sitting idle, then your cluster isn't going to survive a host failure gracefully. The problem is that in your cluster that means 50% of resources need to be reserved, whereas for larger clusters it's a smaller percentage for n+1 protection.

In large environments, yes, the design does not make sense. The majority of our clients don't fall into that segment. Given a fixed budget for a high-availability setup, we have simply chosen to go quality over quantity. The workload can happily run on one host, so we have redundancy covered. For workloads that can't, then yes, three hosts make sense.
|
# ¿ Jul 1, 2014 18:51 |
|
Cavepimp posted: My VNX 5200 showed up yesterday and I already wish I had a 2nd one to replace our VNXe 3300. The VNXe was easier to set up initially, but the stripped-down interface started to bother me over time. Especially the missing performance-related info/reporting.

Did you purchase the monitoring suite for the VNX? Without it you don't get that info. I also really hate how EMC charges an arm and a leg just to be able to view performance info.
|
# ¿ Jul 10, 2014 15:55 |
|
adorai posted: We discovered recently that our Oracle array has disks which are consistently at or above 95% utilization. It's amazing that the array was able to mask this kind of problem from us for long enough to get to this point. To make sure I wasn't lying to my users I've been running in VDI since I noticed the issue while I work on quotes for more storage, and other than the big logon push in the morning the system stays pretty drat usable.

We've been burned a few times by Oracle storage (7120), and it got sent down to the minors where it can be a non-critical unit. We have a VNX 5200 instead, and love it. A shame too - the 7120 was ridiculously expensive for what we got.
|
# ¿ Jul 10, 2014 23:06 |
|
Martytoof posted: What would you guys do if you were given an HP server with 8 empty 2.5" bays and asked to create a Veeam server on a tight budget? I'm thinking of going with 8 of these:

We have Veeam repos on slower storage than that. Your bottleneck will most likely be the 1Gb ethernet connection, not disk. Other factors aside, I'd say you could expect 70-90MB/s.
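The 1GbE ceiling works out roughly like this - the overhead percentages here are guesses chosen to bracket the estimate, not measurements:

```python
# Effective throughput over a GbE link after protocol/processing overhead.
def effective_mb_s(link_gbps: float, overhead_fraction: float) -> float:
    line_rate_mb_s = link_gbps * 1000 / 8  # 1 Gb/s is ~125 MB/s on the wire
    return line_rate_mb_s * (1 - overhead_fraction)

# Bracketing with 25-45% assumed overhead gives the quoted 70-90MB/s range:
print(effective_mb_s(1.0, 0.45), effective_mb_s(1.0, 0.25))
```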
|
# ¿ Aug 14, 2014 21:33 |
|
Internet Explorer posted: On the Dell MD3200 series you can actually connect up to 4 hosts, shared storage, with active/active controllers. For someone just looking at a small VMware Essentials Plus network with no frills needed, it's great.

I'll offer a counter-opinion: Dell MD32xx series arrays suck. They're overpriced, offer a crappy management solution (really? a 1GB download just to get management software, PLUS having to upgrade said 1GB package with every firmware release?), and only offer block. For the same or less money you can get a VNXe3150 or 3200 with unified storage and built-in management. From casual observation, the EMCs perform better than the Dells too. Oh - did I mention that Dell acknowledges there is a very real possibility of data corruption during firmware updates? Yeah...not impressed.
|
# ¿ Sep 12, 2014 00:11 |
|
skooky posted: Pretty much everything you posted just then is wrong.

Interesting. Every one I've done has resulted in having to upgrade the array manager afterward. I've had several Dell techs speak adamantly of the "possibility", with enough conviction that we don't even think about doing the upgrade with any sort of IO to the SPs. EMC is perfectly happy with active upgrades. But hey, it's really just Ford vs. Chevy.
|
# ¿ Sep 12, 2014 16:20 |
|
PCjr sidecar posted: nah just that operating near capacity brings most sans to their knees

They're just not meant to handle a load like that.
|
# ¿ Oct 12, 2014 00:58 |
|
We use Catalyst 3750-X switches in small storage environments with great success. We even break the rules and share duties with L3 routing.
|
# ¿ Nov 16, 2014 23:32 |
|
devmd01 posted: I refuse to support any SAN in a production environment without a maintenance contract in place, unless its something they really, really don't give a poo poo about on there. Ultimately its my rear end on the line if something goes south, and I want that contract in my back pocket to call up and make them fix it when there's a goddamn production outage.

This, 1000x this. Unless you have a team with vendor-level knowledge of the product and you keep spares, this is gospel.
|
# ¿ Nov 25, 2014 15:21 |
|
What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.
|
# ¿ Feb 23, 2015 22:37 |
|
Yup, 1GbE. We do have the option of going host-->controller and bypassing switching altogether with the VNXe3200s we usually deploy, but I'm not sure if that's a good idea.
|
# ¿ Feb 23, 2015 23:25 |
|
NippleFloss posted: 4900 series switches will handle storage traffic better than 3750s owing to a much larger shared port buffer space. Bursty traffic or mismatched egress/ingress rates (all common for network storage) can overload the relatively small buffers on the 3750s and lead to packet drops.

Yeah, that I know - but at 4x the cost.
|
# ¿ Feb 25, 2015 20:10 |
|
NippleFloss posted: A refurbished 4948 is a few hundred bucks more than a refurb 3750X-24T.

Interesting. I'll have to check it out. List is like 2x, I think.
|
# ¿ Feb 26, 2015 14:48 |
|
AFAIK EMC SANs (and probably the vast majority of SANs in general) use custom firmware. Chances are the controller won't even identify the drive.
|
# ¿ Mar 30, 2015 14:44 |
|
I don't see Meraki going anywhere. It's already rebranded as Cisco Meraki, and fills a nice gap in their product line allowing them to compete with the Aerohives of the world.
|
# ¿ Jul 30, 2015 14:00 |
|
Docjowles posted: You already got some good answers, but another decent tool I ran across recently is Microsoft's DiskSpd (successor to SQLIO).

This is what Veeam recommends for testing performance with their app.
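DiskSpd is the real tool on Windows; purely as an illustration of what it measures, here's a hypothetical cross-platform Python sketch of the same idea (timed random reads against a scratch file). The OS page cache will inflate these numbers enormously, so treat it as a toy, not a benchmark:

```python
# Toy random-read IOPS counter: not a DiskSpd replacement (no queue depth,
# no write mix, no cache bypass), just the shape of the measurement.
import os
import random
import tempfile
import time

def quick_random_read_iops(size_mb: int = 64, block: int = 4096,
                           seconds: float = 1.0) -> float:
    """Count random block-sized reads completed in `seconds`."""
    with tempfile.NamedTemporaryFile(delete=False) as f:
        f.write(os.urandom(size_mb * 1024 * 1024))
        path = f.name
    ops = 0
    deadline = time.monotonic() + seconds
    try:
        with open(path, "rb") as f:
            max_off = size_mb * 1024 * 1024 - block
            while time.monotonic() < deadline:
                f.seek(random.randrange(max_off))
                f.read(block)
                ops += 1
    finally:
        os.unlink(path)
    return ops / seconds

print(round(quick_random_read_iops(size_mb=8, seconds=0.5)))
```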
|
# ¿ Aug 31, 2016 14:20 |
|
HP just picked up SimpliVity for $650m. It will be interesting to see where they take the line, as I believe SimpliVity's stuff is built on top of both Dell and Cisco hardware.
|
# ¿ Jan 18, 2017 16:30 |
|
Anyone have any experience with Nimble's Storage-On-Demand offering? If so, what's your environment like?
|
# ¿ Jul 12, 2017 14:34 |
|
adorai posted: 1) I would buy another Nimble today, no problem. I don't care who owns them, the product is great.

This. Just got an AF1000 with 24 disks last week and have been putting it through its paces. I had to deal with support already (weird issue with a disk dropping off at every reboot), but they were fantastic and US-based. Also, 35K IOPS with our normal 75/25 workload in iometer is just
|
# ¿ Jun 15, 2018 15:22 |
|
evil_bunnY posted: Does anyone make a good continuous backup appliance I can point at a bunch of SMB shares? About 30TB worth

A good friend of mine is in sales at Rubrik. PM me and I can give you his email if you want.
|
# ¿ Jun 19, 2018 17:25 |
|
Spring Heeled Jack posted: Wow, HPE is about to lose a sale to Dell because CDW can’t manage to get pricing we’ve been asking for like two weeks. I really liked the AF40 but holy hell get your poo poo together.

I can't stand CDW. Perhaps it's because we're not a mega-enterprise, but their pricing has been abysmal and we've seen high AM turnover. SHI is now our preferred VAR.
|
# ¿ Jun 19, 2018 19:52 |
|
YOLOsubmarine posted: Also, Tintri’s new CEO quit after just three months. The local SE is telling us that they can’t get any answers from management about what to tell customers about the future of their support contracts. We’re gonna have a few real unhappy customers, including some pretty big IT companies.

They're about to be delisted from NASDAQ, and they'll be out of cash by the end of the month if they don't get help. Don't hold your breath.
|
# ¿ Jun 21, 2018 00:33 |
|
YOLOsubmarine posted: They’ve got no chance of getting acquired until they go through bankruptcy. The question is whether they can manage a bankruptcy, restructuring, and acquisition in a way that allows them to maintain their support organization to honor support contracts.

I don't see any value for an investor. They weren't able to keep the product afloat, and that has essentially poisoned the entire line. Would anyone really buy Tintri after this? I sure as hell wouldn't.
|
# ¿ Jun 21, 2018 13:43 |
|
|
1000101 posted: One nice thing about Pure is their Evergreen support program. Basically, if you stay current they'll keep your gear current without having to do hardware refreshes. It sounds insane but it ends up being a great way to keep customers giving you money for your product.

We have a similar deal with Nimble - we get free controller upgrades every three years, provided we keep maintenance on it. I don't have the specifics in front of me, but I believe we need to renew for a minimum of three years at a time to be eligible.
|
# ¿ Sep 16, 2019 15:39 |