|
theperminator posted:We've lost faith in Dell and their Equallogic line, and we're thinking of switching over to something else. HP Storage is a loving clusterfuck, much like the rest of the company. I love our EMC VNXe units for small/mid deployments. I would check them out if they meet your needs. We've standardized as a company on EMC storage though. The big project next year is to retire an IBM V7000 setup and move it over to a VNX at one of our data centers. The other 2 main data centers are already running VNX systems.
|
# ? Oct 29, 2014 19:24 |
|
Does anyone have experience with the Intel Modular Server's storage system, particularly its "Shared LUN" feature? I'm under the impression that this would be a better choice for providing shared storage to the Hyper-V blades housed within, compared to running a *nix storage distro of some sort on one of the blades and serving iSCSI from there. The previous administrators of this machine built a whole bunch of small LUNs, one per VHD, and attached them to the individual blades, so changing anything is a real pain in the rear end right now, and any kind of failover or migration between blades is just a dream. I'm usually the one pushing for random *nix appliances instead of licensed features, but since we already have the hardware and it's only a few hundred bucks to license the feature, it seems like the obvious solution unless it has some fatal flaw.
|
# ? Oct 29, 2014 20:26 |
|
So we've got a Pure FA420, Tintri T540, and an all flash NetApp 8060 in our lab if anyone is curious about any of those arrays. I've spent the past day getting the NetApp set up and running some benchmarks against it, but I haven't played with the others too much yet.
|
# ? Oct 31, 2014 20:15 |
|
Hopefully you can get cdot 8.3 rc1 on that 8060 as the boost to AFF performance looks great.
|
# ? Oct 31, 2014 22:30 |
|
bigmandan posted:We're pretty comfortable with racking and cabling the equipment, but the setup services include config of the storage units on 4 new hosts we are getting as well. This will be our first SAN in our environment, so having the setup and configuration done for us will be good. How much of each tier did you buy? Make sure you have them explain auto-tiering and storage profiles; it's pretty straightforward.
|
# ? Nov 1, 2014 20:09 |
|
No one likes to talk actual costs paid, but I'm trying to figure out how far I can push our VAR, or whether it's even worth the effort. For a production NFS/CIFS appliance at 500+ TB scale, what is a reasonable cost per gig? 40-50 apps driving 100-200 IOPS each - call it 10k total, tops. I have some wildly different quotes. Last year we did a deal at $1.15/GB at about 120 TB. I have a quote from Dell for a new deployment that's $0.25, yet EMC's is $2.02. I have talked to EMC about Dell's quote and they seem uninterested in changing their pricing to meet the market. I like EMC more than Dell, but not 8x more. From where I sit, looking at the market, disks in the capacities I'm looking at should be ~$0.50 for HDD and approaching $2.50 per effective flash gig. I know everyone has strengths and weaknesses, but I can't help but think they are trying to exploit what they incorrectly believe is imperfect market information. Anyone run Isilon, or 0.75+ PB Compellent arrays behind FluidFS?
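To put those quotes side by side: the per-GB figures are from the post above, while the 500 TB target and the decimal-TB convention are assumptions for illustration.

```python
# Rough $/GB comparison at a hypothetical 500 TB target,
# using the figures quoted in the post above.
TB = 1000  # GB per TB (decimal, the way vendors usually quote)

quotes = {
    "last year's deal": 1.15,  # $/GB, actual, at ~120 TB
    "Dell": 0.25,              # $/GB, quoted
    "EMC": 2.02,               # $/GB, quoted
}

target_gb = 500 * TB
for vendor, per_gb in sorted(quotes.items(), key=lambda kv: kv[1]):
    print(f"{vendor:>16}: ${per_gb:.2f}/GB -> ${per_gb * target_gb:,.0f} total")

print(f"EMC/Dell multiple: {quotes['EMC'] / quotes['Dell']:.1f}x")
```

At 500 TB that's roughly a $125k quote against a $1.01M quote for the same raw capacity, which is why the ~8x multiple is hard to swallow.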
|
# ? Nov 2, 2014 23:06 |
|
Anyone here run a vSAN in production? I find it funny when people don't understand distributed RAID and the underlying architecture of vSANs. It's like, wtf did you think it was? They're all hyped for it till I explain what a vSAN is and its limitations.
|
# ? Nov 2, 2014 23:32 |
|
Dilbert As gently caress posted:Anyone here run a vsan in production? I find it funny when people don't understand distributed raid and the under lying arch of vsans. It's like wtf did you think it was? If by vsan you mean a distributed block device and not vSAN®, then yes. We run Ceph RBD in production, and Swift/Cinder too, though those are less comparable to vSAN than Ceph is. We probably also have Nutanix in production somewhere, but I haven't seen it. People here mostly don't care where their storage comes from as long as it's fast and it doesn't poo poo the bed. I'm not sure I'd even try to explain loosely-coupled scale-out storage to non-technical people. Even then, I imagine it's the same hype as . People get excited about implementing it, but have no idea how to make it do useful stuff once they have it.
|
# ? Nov 3, 2014 01:19 |
|
Dilbert As gently caress posted:Anyone here run a vsan in production? I find it funny when people don't understand distributed raid and the under lying arch of vsans. It's like wtf did you think it was? I'm not sure what you'd say about vSAN specifically that doesn't apply to a bunch of scale-out SAN solutions like Isilon, XtremIO, Nimble, Equallogic, Nutanix, Simplivity, etc... The biggest problem with vSAN is immaturity, but there's nothing inherently wrong with the architecture.
|
# ? Nov 3, 2014 02:43 |
|
Our cluster uses the Isilon filesystem, and we sometimes see a slight delay before files become fully accessible to other nodes in the cluster; the files appear truncated in the intervening period. Is anyone aware of a fix for this behavior?
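Not a fix on the Isilon side, but a generic client-side stopgap is to refuse to consume a file until its size stops changing. A minimal sketch - the poll interval, the timeout, and the assumption that the writer never pauses mid-write are all arbitrary:

```python
import os
import time

def wait_for_stable_file(path, expected_size=None, timeout=30.0, interval=0.5):
    """Poll a file on shared storage until its size stops changing
    (or matches an expected size), then return the final size.

    Generic workaround for eventual-visibility delays on clustered
    filesystems; not Isilon-specific."""
    deadline = time.time() + timeout
    last_size = -1
    while time.time() < deadline:
        size = os.path.getsize(path)
        if expected_size is not None:
            if size == expected_size:
                return size
        elif size == last_size and size > 0:
            return size  # unchanged across two polls: assume it has settled
        last_size = size
        time.sleep(interval)
    raise TimeoutError(f"{path} did not settle within {timeout}s")
```

If the producer can publish the expected byte count out of band (a sidecar manifest, for instance), passing `expected_size` is much more reliable than the two-polls-unchanged heuristic.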
|
# ? Nov 3, 2014 17:56 |
|
KennyG posted:No one likes to talk actual costs paid but I'm trying to figure out how far I can push our VAR or if it's even worth the effort. In the realm of production NFS/CIFS appliance on a 500+TB scale what is a reasonable cost per gig? 40-50 apps driving 100-200 iops each - call it 10k total tops. I have some wildly different quotes. Last year we did a deal at $1.15/gb at about 120tb. I have a quote from Dell for a new deployment that's $.25, yet EMCs is $2.02. I have talked to EMC about dells quote and they seem uninterested in changing the pricing to meet the market. I like EMC more than dell but not 8x. I definitely like to talk actual costs paid, because it shifts the power away from the vendors and towards the consumers. We're not signing NDAs here. Compellent is decent block storage, but layering a NAS head in front of it doesn't put it in the same class as Isilon or Netapp for scale out NAS. That's probably a big reason for the price gap. That said, it doesn't look like you have high IO requirements or the need to scale, and you might be fine with the cheaper solution. I've never seen Dell's NAS head in the wild. I'd guess at 500+ TB you'd be one of their bigger customers -- might want to set up reference calls with other installations that size.
|
# ? Nov 3, 2014 18:31 |
|
devmd01 posted:How much of each tier did you buy? Make sure you have them explain auto-tiering and storage profiles, its pretty straightforward. Dell Compellent SC4020:

- 6x 400GB SLC WI (one hot spare) - 1TB usable, RAID 10
- 6x 1.6TB eMLC RI (one hot spare) - 6.4TB usable, RAID 5-5 (7.4TB flash, 29.6% of capacity)
- 24x 1TB 7.2K NLS (two hot spares) - 17.6TB usable, RAID 5-9

The auto-tiering and storage profiles are pretty straightforward. What I was actually asking about was Live Volumes (replication); specifically, I'm interested in HA synchronous Live Volumes.
|
# ? Nov 4, 2014 17:52 |
|
Signed off on our Nimble purchase last week. One of the big reasons I didn't want another EMC is because their whole support experience is awful. Today I have a faulty drive on the old EMC array and getting the SR in and taken care of has made me so happy that I made the choice I did. To start with, the loving 'attach' button does nothing on their site, in any browser. Even that used to work.
|
# ? Nov 4, 2014 21:23 |
|
I know there have been some book recommendations in this thread, but frankly it's long and I can't seem to find the exact titles. I currently work in a T2-ish role at an F200 and would like to specialize in storage because, frankly, I think it's the tits. My real-world experience is fairly limited, but I need to start somewhere; I'd ultimately like to land some EMC certs and start job hunting for a junior storage role. Any book recommendations would be appreciated, as would any general advice in this particular specialty.
|
# ? Nov 5, 2014 03:53 |
|
Something Awesome posted:I know there have been some book recommendations in this thread but frankly it is long and I can't seem to find the exact titles. I currently work in a T2'ish role at an F200 and would like to specialize in storage because frankly I think it's the tits. My real world experience is fairly limited but I need to start somewhere, I would ultimately like to land some EMC certs start job hunting for a junior storage role. Any book recommendations would be appreciated as would any general advice in this particular specialty. My general advice would be to spend a year in a junior admin position as a generalist that gets to do a little of everything (DBA, storage, network, virt, sysadmin, scripting) before you specialize. Failing that, it's hard to beat the EMC education stuff, as long as you have a general overview of NFS, LUNs, fabrics, etc.
|
# ? Nov 5, 2014 05:12 |
|
Something Awesome posted:I know there have been some book recommendations in this thread but frankly it is long and I can't seem to find the exact titles. I currently work in a T2'ish role at an F200 and would like to specialize in storage because frankly I think it's the tits. My real world experience is fairly limited but I need to start somewhere, I would ultimately like to land some EMC certs start job hunting for a junior storage role. Any book recommendations would be appreciated as would any general advice in this particular specialty. If you give me an e-mail address I can send some free online NetApp training your way, that's all I got.
|
# ? Nov 5, 2014 07:43 |
|
I see a lot of Nimble users here. I have an opportunity to work for them. How do you find the kit? The people? Any bad things you've seen as a customer? Who would you consider their competitors? Did you look at Pure? Thanks
|
# ? Nov 5, 2014 18:58 |
|
Vanilla posted:I see a lot of Nimble users here. I have an opportunity to work for them. People have been super nice. Their techs have been responsive and proactive. We hear from our account rep all the time; she uses us a lot to answer questions for new clients. Nothing bad from a customer perspective. Nimble was a lot cheaper for us to get into, and we only require iSCSI block storage, so Pure's flexibility was overkill. I also see Nimble sticking around longer than Pure, as they seem to be gaining traction a lot faster. Out of curiosity, what state/area and role are you looking at taking?
|
# ? Nov 5, 2014 19:24 |
|
I haven't received my array yet, but I'll answer the applicable questions.

Vanilla posted:The people?
quote:Any bad things you've seen as a customer?
quote:Who would you consider their competitors?
quote:Did you look at Pure?

*edit: To be clear, by feature parity I mean what they do, not how they do it. Both have high IOPS, some amount of capacity, thin snapshots, compression, replication, etc. Obviously how they accomplish these things can be wildly different.

Erwin fucked around with this message at 20:23 on Nov 5, 2014
# ? Nov 5, 2014 19:32 |
|
I'm working with Nimble on putting together a deal for a customer and they seem pretty good so far. Easy enough to deal with, and fairly quick turnaround on getting quotes back. It's a pretty small deal, but they're still putting in the effort to win it.

The VAR I'm working with now has traditionally partnered exclusively with NetApp and really only sold other storage if the customer was adamant about getting it. So we've sold some Pure, Tintri, and Tegile, but NetApp is really what we pitch. We've recently added Nimble as a second preferred storage partner because they are easy to work with, the product is easy to use, and you get very good performance for the price. They've carved out a pretty solid niche for themselves. I'm still not sold on their financials, but they seem to be growing quickly enough that it won't be a problem.

Pure isn't really competitive due to the extreme cost difference. I also have my doubts about Pure's longevity, after talking to some of their people recently and seeing how hard it has been for them to land accounts, at least on the west coast.

Tintri is good, easy, fairly fast VMware-only storage. It's even simpler than Nimble, and has some nice integration with VMware. It's probably a good choice if you want VM-only storage that is dead simple to manage, though I wonder about their longevity as well; their feature roadmap doesn't have anything very compelling on it.

Tegile is just sort of there.

If you're in the PacNW and looking for a good partner to work with on a Nimble purchase, I can point you to one of our sales reps.
|
# ? Nov 5, 2014 20:35 |
|
NippleFloss posted:I'm not sure what you'd say about vSAN specifically that doesn't apply to a bunch of scale out SAN solutions like Isilon, ExtemeIO, Nimble, Equallogic, Nutanix, Simplivity, etc... vSAN has a long way to go, but it's just funny when you go to a meeting, explain the underlying architecture to your director, and he goes "oh". I get that there's a lot more QA, support, and assurance for a business when purchasing a name brand, but it's kinda funny when people go "wait, it's not black magic!?!"
|
# ? Nov 6, 2014 02:21 |
|
Rated PG-34 posted:Our cluster use the Isilon filesystem and we sometimes get a slight delay for files to be fully accessible to other nodes in the cluster. The files appear as truncated in the intervening period. Is anyone aware of a fix for this sort of behavior? What version of OneFS are you running?
|
# ? Nov 7, 2014 08:44 |
|
The rest of the items for our SC4020s arrived today! I decided to check everything out before we bring it over to our data centre, and I was quite surprised at how heavy the SSD drives are compared to consumer drives. One thing I found interesting: the platter drives in the disk shelf came pre-installed, but the SSDs for the controller head did not.
|
# ? Nov 10, 2014 18:37 |
|
Amandyke posted:What version of OneFS are you running? Is there a way to determine this without direct access to the isilon server? I'm not a system admin.
|
# ? Nov 11, 2014 00:22 |
|
On the subject of costs: in Q4, $3.7M will buy you 4 VPLEX engines, 4 RecoverPoint appliances, 4 XtremIO 20TB bricks, 12 Isilon X nodes @ 134TB with 128GB of RAM and 1.6TB of SSD each, a pair of maxed-out VNX 5400s @ ~200TB, 4 FC switches, and a crapload of licensing, with everything encrypted. This is gear for 2 sites... Xmas is going to loving rock. Pics next January of the install fest.
|
# ? Nov 11, 2014 02:43 |
|
KennyG posted:On the subject of costs, in q4 $3.7m will buy you 4 Vplex engines, 4 recoverpoints, 4 XTREMIO 20tb bricks, 12 isilon x nodes @134tb with 128gigs of ram and 1.6tb ssd each and a pair of maxed out vnx 5400s @~200tb, 4 FC switches and a crap load of licensing and everything is encrypted. This is gear for 2 sites... Holy poo poo, nice. I wish my company was selling you that
|
# ? Nov 11, 2014 13:58 |
|
Rated PG-34 posted:Is there a way to determine this without direct access to the isilon server? I'm not a system admin. You'd likely have to ask a storage admin then.
|
# ? Nov 13, 2014 15:48 |
|
Holy mother of gently caress, EMC licensing. We buy a product a month ago and get an activation email; we have to go to their LAC website to activate our entitlements; we get sent an email with a loving certificate saying we can use their software; and then we have to call their licensing support rep, only to be told it's a 48-hour turnaround to actually claim the license.
|
# ? Nov 13, 2014 22:09 |
|
Wicaeed posted:Holy mother of gently caress EMC licensing Welcome to doing anything with EMC. Good luck with their support. Look on the bright side, they used to snail mail you the certificate.
|
# ? Nov 13, 2014 22:58 |
|
Amandyke posted:You'd likely have to ask a storage admin then. Okay, I asked a sysadmin and it's version 7.0.1.10.

$ isi uname -a
Isilon OneFS v7.0.1.10 B_7_0_1_233(RELEASE): 0x700015000A000E9:Tue
|
# ? Nov 14, 2014 00:08 |
|
For those of you who have been involved in a new SAN evaluation, selection, and migration, how did that process work for you? I'm going to an interview on Monday where my key role in the first year would be doing just that, migrating them away from a 5+ year old EMC, and I want to be able to talk intelligently about the process. I have enough hands-on experience with Compellent and fiber channel administration, but always on existing systems, or I wasn't involved in the selection/migration process.

First step in my head is data, data, data: how many servers, expected growth, aggregate disk space needed, 95th-percentile/average IOPS per LUN, what TYPE of IOPS/applications are using the SAN, an end-to-end evaluation of the existing storage fabric, as well as 3-5 year business initiatives such as DR or 100% virtualization and expected natural growth.

Next step is compiling it into a usable RFP that we could hand to vendors. Sure, any SAN vendor would be happy to assist you with gathering the data and telling you what you need, but I think it's important to have an idea of what those numbers should look like ahead of time. Once that's compiled, start researching not only the incumbent's existing offerings but key competitors. Once you have an idea of what the market looks like, start the vendor contact game and the process of getting as many free lunches/sports tickets as possible. Get the proposals, evaluate them side by side, narrow to 2-3, ask for demos, and start the bidding wars, professional services for the install included of course.

Once the SAN is in, hook up a test server and hammer away at it for a bit before beginning migration. Anyone have additional thoughts or ideas on how to go about this?
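The "data, data, data" step can be sketched as a simple roll-up like this; the server names, capacities, IOPS, and the 20% growth rate are placeholder numbers you'd pull from your own monitoring:

```python
# Hypothetical sizing roll-up for an RFP: aggregate current capacity and
# 95th-percentile IOPS, then project capacity over the array's lifetime.
servers = [
    # (name, used_tb, p95_iops, workload type)
    ("sql01",     4.0, 3500, "oltp"),
    ("fileserv", 12.0,  400, "smb"),
    ("vmhost1",   8.0, 1800, "mixed vm"),
]

annual_growth = 0.20  # assumed 20%/yr capacity growth
years = 5             # planning horizon

total_tb = sum(used for _, used, _, _ in servers)
total_iops = sum(iops for _, _, iops, _ in servers)
projected_tb = total_tb * (1 + annual_growth) ** years

print(f"Current capacity:   {total_tb:.1f} TB")
print(f"Aggregate p95 IOPS: {total_iops}")
print(f"Capacity in {years} yrs @ {annual_growth:.0%}/yr: {projected_tb:.1f} TB")
```

Having the compound-growth projection in the RFP, rather than letting the vendor derive it, keeps everyone quoting against the same target.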
|
# ? Nov 14, 2014 02:52 |
|
devmd01 posted:Storage refresh

When sizing for performance, break things down into application groups and give that to the vendors. They will generally have tools for sizing things like SQL, Exchange, file services, Oracle, SAP, etc. Knowing what type of application it is will give them a good idea of how the workload breaks down beyond just raw IOPS numbers, and will also let them estimate things like compression or deduplication savings. It's generally more important to size performance correctly, because capacity is fairly easy to get with large disk sizes, but unless you're heavily leveraging flash you can end up without enough spindles to support it. Generally, though, the vendor will be pretty good at solving the performance and capacity equations to get what you need.

You should focus on how you want to do things. Do you want to leverage array-based snapshots for backup? How easy is it to recover the data? Do you want things like zero-space cloning? If so, is getting a clone up a very manual process, or can it be done easily? Can it be handed off to someone like a DBA, so they can clone their own test databases? If you aren't using snapshots for backup, how will you manage it? If it's something like Veeam or Commvault or NetBackup, you should investigate how the different competitors integrate with those tools.

Do you want de-duplication? Compression? Are there benefits to either or both for your workloads? Does your environment have off-peak hours when post-process scans can run, or would inline be a better fit?

Will you be leveraging array-based replication for DR? If so, how much work would be involved to stand up your DR site? Do they have an SRA for SRM? Are you leveraging things like Exchange 2010 DAGs and SQL 2012 Always-On availability groups that obviate the need for smarter storage?

What's the management overhead like? Will you be creating a bunch of different RAID groups of different types to house different classes of data, or will you just get one pool of storage for everything? Do you prefer the flexibility or the simplicity? How many off-box tools are available to manage the array and automate tasks? PowerShell integration? Single-pane-of-glass monitoring for the whole storage environment? Alerting and reporting?

If you're leveraging VMware, what does their integration look like? Which VAAI primitives do they support? Will they have VVOL support? A VASA provider? Do they have a vCenter plugin, and, if so, does it do anything useful?

How easy is it to add capacity? How easy is it to add performance? What types of things can be done non-disruptively? What types of things require an outage? What does your refresh process 3-5 years down the road look like? Rip and replace, or can they rotate new stuff in/out easily by, say, just replacing controllers live? How easy will it be to evacuate data when it's time for a refresh?

There are a lot of questions here, and they really tie into how you want your infrastructure to look generally. You can just treat storage as dumb blocks of IOPS and capacity, but most vendors have a lot of value-add on top of that, and you need to decide which of those things are important to you.
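One low-tech way to turn a question list like that into a vendor decision is a weighted scoring matrix; the criteria, weights, and scores below are all made-up placeholders:

```python
# Hypothetical weighted scoring matrix for comparing storage vendors.
criteria = {
    # criterion: weight (weights sum to 1.0)
    "snapshot/backup integration": 0.25,
    "replication & DR tooling":    0.20,
    "VMware integration":          0.20,
    "management overhead":         0.20,
    "non-disruptive upgrades":     0.15,
}
assert abs(sum(criteria.values()) - 1.0) < 1e-9

# Scores 1-5 per vendor, in the same order as the criteria above.
scores = {
    "Vendor A": [4, 5, 3, 4, 5],
    "Vendor B": [5, 3, 5, 3, 2],
}

for vendor, vals in scores.items():
    weighted = sum(w * s for w, s in zip(criteria.values(), vals))
    print(f"{vendor}: {weighted:.2f} / 5")
```

The weights force the "which of these actually matter to us" conversation before the sales pitches start, which is the real point of the exercise.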
|
# ? Nov 14, 2014 03:48 |
|
Internet Explorer posted:Welcome to doing anything with EMC. Good luck with their support. Look on the bright side, they used to snail mail you the certificate. In a box the size of a TV
|
# ? Nov 14, 2014 10:07 |
|
NippleFloss posted:right click save as Perfect, this is exactly what I needed, thanks!
|
# ? Nov 14, 2014 12:45 |
|
Vanilla posted:In a box the size of a TV Haha yes! EMC sent me a physical entitlement certificate for a classroom training course in a giant box, padded with bubble wrap. I then had to jump online and use the code on the certificate to register. Ridiculous.
|
# ? Nov 14, 2014 13:58 |
|
Can someone recommend a 24-port switch that will be dedicated to iSCSI traffic? Pretty small VMware environment: 3 IBM hosts (QLogic HBAs) and an IBM DS3300 SAN running about 20 VMs.
|
# ? Nov 14, 2014 20:49 |
|
Are your hosts just connecting at 1 gig? What kind of switches do you currently use? You will probably want to stay with a similar brand so working on them is familiar. Make sure to budget for two switches so you have redundancy as well. You will be able to find something from each vendor, so it really boils down to personal preference. I have really liked working with Juniper's stuff for the past year and a half. A pair of the 24-port Juniper EX3300s would probably fit the bill (with 1GbE connections). Throw them into a virtual chassis and then you can manage both units as one logical switch.
|
# ? Nov 14, 2014 20:56 |
|
Moey posted:Are your hosts just connecting 1 gig? What kind of switches do you currently use? You will probably want to keep these a similar brand just so working on them is similar. Make sure to budget for two switches so you have redundancy as well. On another note, are there any relatively cost-effective SANs that allow for mixing flash and mechanical drives that I should look into? I'd like to be able to put the SQL databases for an application server or two and an RDS server on flash, and put the rest on cheaper disks. goobernoodles fucked around with this message at 23:28 on Nov 14, 2014
# ? Nov 14, 2014 23:20 |
|
Dell's MD3 series can do that, though I'm not sure if they have any of the tiering features that generally make SSDs worth having.
|
# ? Nov 14, 2014 23:33 |