|
I'm looking at implementing two EMC VNXe3150s, each paired with an EMC Data Domain DD620. This is in a VMware 5.0 Windows environment split between our main office in Seattle and a smaller office in Portland. The main office runs an IBM DS3300 SAN paired with one newer IBM x-series server and three older ones; the Portland office has just one IBM x-series server. Lovely Netgear switches, no VLANing. Backups are currently at the file system level inside the VMs, which makes me extremely nervous. On top of that, we're paying a local IT company over a grand a month to maintain the BDRs and provide offsite backups. Not exactly an ideal setup. I'm looking at replacing basically our entire infrastructure (servers, SANs, switches, backup appliances), but my questions are about the SAN/backup pieces. I want to use each office as the other's primary off-site backup location and replicate backups of Seattle to Portland and vice versa. That way I can eliminate the monthly charge we're paying for offsite backups as well as have backups at the VM level. Any thoughts on the hardware/implementation/pitfalls, etc. would be appreciated. Also, anyone have a recommendation for a cheap cloud service for dumping the backups as a secondary off-site copy?
|
# ¿ Apr 15, 2013 19:56 |
|
Not exactly SAN related, but I need some opinions. I'm currently in the early stages of an infrastructure refresh for our two offices. After my CFO balked at an initial ~$130k cost to implement new backups, storage, servers and networking at once, I've broken the project down into stages, the first stage being backups. We're paying $2,300/mo for lovely backups that aren't at the VM level, and we need to get rid of that cost. I want to get our backups at the VMDK level, then replicate between our two offices. After I threatened to go with an EqualLogic/PowerVault SAN paired with Veeam, EMC lowered their price to $30k for a pair of 12TB DD620s. I was just about ready to pull the trigger when Dell made an offer worth considering: they're trying to sell me a couple of VRTX boxes, each with two M520 blades and 12TB (I think) of disk, paired with AppAssure as a backup solution with the ability to do more than just backup. The original idea was to replace our backups, then move on to replace our core switching and SAN (possibly implement one in Portland as well) and grab a couple of R720s or something similar for both offices. However, this whole VRTX "shared infrastructure" thing at least sounds nice, and sounds like a really simple way to tackle this huge project. The idea of having one hardware vendor is also enticing. Should I be running away as fast as I can, or is this a viable solution for a production environment? Is AppAssure a complete piece of poo poo? Should I be looking at two VRTX chassis in each office for more of a redundant setup? Here's an idea of our Windows environment. Main office: 3x IBM x-series servers running ESXi 5.0; IBM DS3300 w/ EXP3000 iSCSI SAN (~9TB raw); 14 VMs including Exchange 2010, several DCs, several SQL application servers, and various file servers. Smaller second office: 1x IBM x-series server running ESXi 5.0; 4 VMs which, including the DC, are pretty much all file servers.
The MPLS connection between the offices has a maximum throughput of 12Mbps, and that's shared with internet and site-to-site network usage.
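For back-of-the-envelope planning of the cross-site replication, here's a rough sketch of how long a nightly replication window would take over that 12Mbps link. The change-rate and efficiency numbers are made-up assumptions, not measurements from our environment:

```python
def replication_hours(gb_changed: float, link_mbps: float = 12.0,
                      efficiency: float = 0.7) -> float:
    """Hours to push gb_changed GB over a WAN link, assuming we only
    get `efficiency` of the nominal rate (the link is shared with
    internet and other site-to-site traffic)."""
    bits = gb_changed * 8 * 1000**3            # decimal GB -> bits
    usable_bps = link_mbps * 1000**2 * efficiency
    return bits / usable_bps / 3600

# e.g. 50 GB of changed data per night:
# replication_hours(50)  -> roughly 13 hours
```

At a hypothetical 50GB/night of changed data that's around 13 hours of wire time, which is why dedupe/compression before the data hits the link (Data Domain's whole pitch) matters so much here.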
|
# ¿ Sep 25, 2013 17:00 |
|
Anyone ever use the ASM-VE (Auto Snapshot Manager) software that's packaged with Equalogic SANs for DR purposes?
|
# ¿ Oct 1, 2013 22:59 |
|
So I've had this exact box (an HP N54L running FreeNAS 9.2.1.2) connected to various ESXi hosts of different versions (4.0, 5.0 and 5.1) over the past several months. Each time it was rather finicky to get connected, and I stupidly never kept track of exactly what I did to get it working, mainly because I wasn't working with production data; I basically changed iSCSI settings here and there and rescanned from VMware until it connected. Can't seem to get it connected to a host at the moment. Anyway, I did a factory reset of FreeNAS, configured an IP address, DNS and gateway, and set up iSCSI by... 1) Creating a portal on the IP of the box, 192.168.0.32:3260. 2) Setting up an initiator. I left it at the default of allowing all initiators and authorized networks; I later set the authorized network to 192.168.0.0/24. 3) Creating a file extent at /mnt/ZFS36/extent with a size of 3686GB (I browsed to this directory and the file exists and is 3.6TB). 4) Creating a target, then a target/extent association. On the ESXi side, I created a software iSCSI adapter, added a NIC and IP, and pointed it at the portal address. VMware picks up the target name but doesn't connect. There's got to be something simple here I'm overlooking... goobernoodles fucked around with this message at 21:46 on Jun 20, 2014
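For what it's worth, since discovery sees the target name but the session won't establish, the first thing worth ruling out is basic TCP reachability of the portal. A minimal stdlib check along these lines (the IP is just the one from my setup; this only proves something is listening, not that the target/extent mapping is right):

```python
import socket

def portal_reachable(host: str, port: int = 3260, timeout: float = 3.0) -> bool:
    """True if something accepts TCP connections on the iSCSI portal
    address. Validates only the listener, not the target itself."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# portal_reachable("192.168.0.32")  # should be True if the portal is up
```

If that comes back False the problem is network or portal config; if True, the next suspects are the initiator name filter and the authorized-network setting on the FreeNAS side.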
# ¿ Jun 20, 2014 21:43 |
|
Anyone know how to get IBM on the phone without waiting for a callback for a SAN issue? loving waiting for a callback.
|
# ¿ Jul 29, 2014 17:46 |
|
So it looks like Veeam backups last night filled up our SAN and locked up our hosts. Trying to get IBM on the horn, but has anyone dealt with this situation before? How can I delete these failed snapshots to get the hosts responsive again?
|
# ¿ Jul 29, 2014 18:12 |
|
Nice avatar. It turns out the SAN didn't actually fill up. I think one of the LUNs changed its preferred path and the hosts weren't able to see it on the new path and became unresponsive. Ran IBM's "Redistribute Logical Drives" utility and everything came back up. God drat, I forgot how much I loving hate IBM support. They look for a reason to get off the phone from the moment you start talking to them. Don't get me started on IBM and their firmware updates.
|
# ¿ Jul 29, 2014 20:04 |
|
cheese-cube posted:What model are you using if you don't mind me asking? The_Groove posted:We run firmware on a ton of netapps that's approved/tested by IBM's cluster team, but is generally old enough that their website for firmware downloads has aged off the version we need by the time it's been approved. It happened a second time. This time I told them what happened on the other server and specifically asked if we needed to hop versions. The dude assured me we didn't have to. Bricked the primary UEFI*. Ugh. goobernoodles fucked around with this message at 04:02 on Jul 30, 2014 |
# ¿ Jul 30, 2014 00:24 |
|
Not sure if I should post this in the Windows server thread or here, but... anyone have a software recommendation or method to bidirectionally sync a couple of shares from a Windows file server to another SMB share? I'm looking into potentially implementing Egnyte, which requires your files to be on their pre-built VM to sync to their cloud location. Since not everyone in my company is going to need this, I was thinking of getting a relatively cheap ReadyNAS and syncing our file server shares to it, which would in turn sync to the cloud location. I just need something to keep the existing file server and the NAS in sync locally.
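The core "newer file wins" behavior I'm after is simple enough to sketch with the stdlib. This is just the shape of the idea, not a replacement for a real sync tool: no conflict resolution, no deletion handling, no file locking, and a one-second mtime fudge because SMB/FAT timestamps are coarse:

```python
import shutil
from pathlib import Path

def sync_newer_wins(a: Path, b: Path) -> None:
    """Two-way mirror of regular files between two directory trees:
    whichever copy has the newer mtime overwrites the other, and files
    missing on one side get copied over."""
    for src_root, dst_root in ((a, b), (b, a)):
        for src in src_root.rglob("*"):
            if not src.is_file():
                continue
            dst = dst_root / src.relative_to(src_root)
            # +1s slack: SMB/FAT mtime granularity can be a full second
            if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime + 1:
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)  # copy2 preserves the mtime
```

In practice I'd rather use something with real conflict handling (robocopy is one-way only, DFSR needs domain-joined Windows boxes on both ends), but anything that does this job is doing roughly the above on a loop.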
|
# ¿ Aug 25, 2014 18:10 |
|
cheese-cube posted:Any reason why you can't use DFSR? e: Definitely should have posted this in the Windows thread. Whoops.
|
# ¿ Aug 25, 2014 18:51 |
|
Can someone recommend a 24-port switch that will be dedicated to iSCSI traffic? Pretty small VMware environment of 3 IBM hosts (QLogic HBAs) and an IBM DS3300 SAN running about 20 VMs.
|
# ¿ Nov 14, 2014 20:49 |
|
Moey posted:Are your hosts just connecting at 1 gig? What kind of switches do you currently use? You will probably want to keep these a similar brand just so working on them is similar. Make sure to budget for two switches so you have redundancy as well. On another note, are there any relatively cost-effective SANs that allow mixing flash and mechanical drives that I should look into? I'd like to be able to put the SQL databases for an application server or two and an RDS server on flash and put the rest on cheaper disks. goobernoodles fucked around with this message at 23:28 on Nov 14, 2014
# ¿ Nov 14, 2014 23:20 |
|
Cross posting this from the virtualization thread. Probably belongs here anyway:goobernoodles posted:One of my two offices has only one host on local storage running a DC and some file, print, and super low-end application servers. It's a small office with about 20-30 people. The long-term plan is to replace the core server and storage infrastructure in our main office, then potentially bring the SAN and servers to the smaller office to improve their capacity as well as have enough resources to act as a DR site. Until then, though, I was planning on loading up a spare host with 2.5" SAS or SATA drives in order to get some semblance of redundancy down there, as well as being able to spin up new servers to migrate the old 2003 servers to 2012. Right now, there's ~50GB of free space on the local datastore. I'm looking for at least 1.2TB of space on the server I take down. I'm trying to decide what makes the most sense from a cost, performance, resiliency and future-usability standpoint, while keeping everything under a grand.
|
# ¿ Apr 15, 2015 04:16 |
|
Thanks Ants posted:Egnyte has quite a few customers but it gets really expensive once you put enough features on to make it workable in an AD environment. ...Anyone using FreeNAS for production file servers?
|
# ¿ Jun 21, 2015 17:43 |
|
I'm going to have to read into that one a little more. If files copied to the share via robocopy or something along the lines of rsync won't be synchronized to the ~*~ cloud ~*~ then that definitely won't work for me. On another note... has anyone paired servers with direct-attached storage and created an extremely cheap "SAN" using FreeNAS? To this point I've only used FreeNAS with the server's built-in capacity. I was thinking of using a PowerEdge 2900 paired with an MD1000 or MD3000 to really give me room to add capacity and redundancy. Anyone know if DAS enclosures only work with their own manufacturer's servers? Could I use an IBM EXP3000 with a Dell server, or an MD1000 with an IBM server? I'm only using the storage these provide for archive file servers that are backed up elsewhere as well, and for a test lab environment.
|
# ¿ Jul 1, 2015 04:02 |
|
Rhymenoserous posted:If you are using Server 2012 Branchcache is a possibility: https://technet.microsoft.com/en-us/library/dd425028.aspx
|
# ¿ Jul 1, 2015 15:50 |
|
Gwaihir posted:That reminds me that I have a shitload of IBM DAS boxes to test out on some old R710s too! 144 * 139gig 15k disks isn't exactly the latest and greatest, but it's not like I care about the power or cooling bill
|
# ¿ Jul 1, 2015 20:01 |
|
NippleFloss posted:I work for a VAR, excitement about technology is a big factor in driving sales. Things like Pure or Solidfire or Nutanix where you can wow people with a technology presentation or a whiteboard or have an engineer explain all of the cool and unique stuff that they are doing generate excitement from the technical folks in the room, and those are often the ones making the recommendations or controlling the direction the purchasing conversation takes. And those people go out and evangelize to other people that they know in the industry and suddenly they want to know about whatever the cool technology of the month is and they want to buy it when their next purchase cycle comes around. Their actual needs are often a much lower priority than being sold on cool technology, or features they don't actually need. goobernoodles posted:Anyone have any strong opinions on which is “best” out of these options? As of this morning, I was leaning towards Tegile or Nutanix. Tegile seems to have a leg up on Nimble, at least in my mind right now, due to far more usable capacity for a bit less money. When you factor in the additional protocol support that I initially didn't put much weight on, it looks like a pretty flexible option. That would mean I could point odd-ball backup jobs, like our email archive server, which requires an SMB share, directly at the SAN. I'm thinking I could vastly decrease the RPO by using snapshots for day-to-day backups, as well as decrease the RTO for those scenarios where we need to revert a VM or recover files. Right now it's a bit of a chore just due to having to wait 1-5 minutes for Veeam to mount a backup before I can recover a file. Also, I'm hoping that whatever solution we move forward with will let me eliminate the support costs for our Quantum DXi4601s, which currently act as our Veeam storage targets. We have two, with Veeam backup replication jobs taking care of replication between our two sites.
Support is over $8k annually, which blows my goddamn mind. I need to confirm they're not going to just turn into bricks or something if out of support, but I figure I can relegate them to much longer-term backups with site-to-site replication. This project has blown up into one vastly larger than I anticipated now that the CFO is open to opening up the wallet for both sites, and I'm scrambling to make sure I'm not shooting myself in the foot with regards to BCDR with any of these options. We can do storage-level replication with any of the three, Veeam replication, VMware replication... is there a reason that decision should be made before going with any of them? Sounds like there are some relatively minor differences between the options as far as granularity, but at least from the storage side of things they're all effectively pretty similar. I could ramble on incoherently for a while about all the potential other things we could do with each solution, but I simply don't have the time to figure out every single possibility and what's the best fit for us. Unless I'm missing something, the introduction of flash has really made finding the "best" solution less about sheer by-the-book performance numbers, since there's no real way to know how a proprietary file system will handle any given workload, no? Going down that rabbit hole thus far has led to a circle-jerk of counter-arguments, usually coming down to "WELL OUR FILE SYSTEM IS BETTER, YOU'LL SEE." I'm waiting on Nimble to quote a CS300 with around 36TB of usable capacity, even though that's way more storage than I was aiming for originally; I was thinking 15-20TB would be a good place to start. I can put a Nutanix cluster with 96 logical cores, 768GB RAM, and ~18TB usable in place in Seattle alongside a smaller cluster with 72 cores, 384GB RAM, and ~12TB usable in Portland.
The comparable solutions with HP servers and Nimble/Tegile arrays combined are roughly $10-15k less. When you factor in the potential for increased consulting costs with any of the server/SAN options, it's pretty much a wash as far as cost goes. The biggest question, which I have no real idea how to answer, is which one is fundamentally the strongest from a storage perspective. Logically, it seems like the Nutanix approach of trying to localize data to the host running the VM may produce the best performance, since most reads/writes go to direct-attached storage, eliminating a lot of "hops." While the main argument other vendors make is that you've got to carve out CPU and memory from the nodes for the virtual storage controller, it does give us a great deal of flexibility to increase RAM/CPU if necessary. The biggest question mark for me is whether a virtual storage controller hitting direct-attached storage will perform better than the SANs. That, and Nutanix has SATA drives whereas the Tegile and Nimble are... SAS? Not sure on those - I just shot those questions off to the vendors. It's a poo poo-ton of money to just throw a dart at a wall. It's hilarious that I'm posting an increasingly similar (incoherent) post to my last one at nearly the same exact time, but I need to leave to go to the Space Needle or something. Holy gently caress, trying to write this entire post while people are sanding a conference room table with a floor sander was a bad idea.
|
# ¿ May 6, 2016 02:30 |
|
NippleFloss posted:Which is best from a storage perspective depends on what your main criteria are for data storage. They are all better in some areas than others. What would you guys recommend if I wanted to, say... supplement a Nutanix cluster with a cheap SAN for archive storage?
|
# ¿ Jun 2, 2016 02:40 |
|
Thanks Ants posted:NetApp E-Series or IBM v3700 goobernoodles fucked around with this message at 16:16 on Jun 2, 2016 |
# ¿ Jun 2, 2016 16:13 |
|
Welp, pulled the trigger on 3x Cisco UCS C220 M4 servers w/ 2630 v3s and 1.2TB of RAM, plus a Tegile T3530, along with 2x servers and a T3100 array for our 2nd office. Cha-ching. e: Well, that and an HP 5406R for the 2nd office and 10Gb SFP+ modules for both offices. Should be quite the upgrade from our 8-year-old x3650s and DS3300. goobernoodles fucked around with this message at 04:28 on Jun 11, 2016
# ¿ Jun 11, 2016 04:07 |
|
Docjowles posted:Why the Cisco C series vs rando rack mount boxes from HP/Dell, out of curiosity? Did they come in with a super competitive price? UCS is cool but the extreme flexibility seems kinda wasted, and like it's just adding complexity/points of failure on a three node deployment. I started talking to this VAR for network consulting. Since they're also in the server/storage market, I heard them out even though I was pretty far into conversations with Nimble and Nutanix. It turned out the owner of the company is a client of ours - and he owes us a bit of money. While our CEO definitely gave me a little hint that he wanted me to go with these guys if everything was apples to apples, I was hesitant until they came back with a T3530 with a 3.5TB all-flash tray. That pretty much took performance out of the equation. Mr-Spain posted:That's the switch I rolled for my backend, how did you get it configured? goobernoodles fucked around with this message at 17:32 on Jun 13, 2016
# ¿ Jun 13, 2016 17:29 |
|
Being in the construction industry, our file servers are about 70% images. While I've done one-off image resizing and compression runs, I have to talk with departments before and after in order to avoid situations where images get resized too small; people need to be able to zoom in and see minor details. Does anyone have a recommendation, preferably for a command-line utility, that I could set up to compress photos at 90% quality on a schedule? I want it to run every day/week, whatever makes sense. Looks like there are plenty of options; I'm looking for first-hand experience.
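The scheduled run I have in mind is roughly: walk the shares, pick out JPEGs big enough to be worth touching, and shell out to ImageMagick's `mogrify -quality 90` on each. The `mogrify` call assumes ImageMagick is installed; the size floor and extensions are my own made-up thresholds, and the selection logic is the part sketched here:

```python
import subprocess
from pathlib import Path

JPEG_EXTS = {".jpg", ".jpeg"}

def find_jpegs(root: Path, min_bytes: int = 500_000) -> list[Path]:
    """JPEGs large enough to be worth recompressing; tiny files are
    likely thumbnails or already heavily compressed."""
    return sorted(p for p in root.rglob("*")
                  if p.suffix.lower() in JPEG_EXTS
                  and p.is_file()
                  and p.stat().st_size >= min_bytes)

def recompress(path: Path, quality: int = 90) -> None:
    # Assumes ImageMagick is on PATH; mogrify rewrites the file in place.
    subprocess.run(["mogrify", "-quality", str(quality), str(path)],
                   check=True)
```

Dropped into Task Scheduler as a nightly job, the size floor plus quality 90 is the guard against the "zoomed in and the details are mush" complaints, though I'd still test a sample set with each department first.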
|
# ¿ Jul 23, 2016 23:11 |
|
Internet Explorer posted:I really don't know much about image compression and haven't worked in that kind of environment, so there goes the first hand experience requirement, but...
|
# ¿ Jul 24, 2016 02:41 |
|
GrandMaster posted:Hah, ours sounds the same - tons of civil engineering related photos, saved at the highest possible resolution.
|
# ¿ Jul 29, 2016 14:32 |
|
Anyone have any experience with HP's StoreVirtual VSA? I'm only looking at it because I need cheap servers and shared storage for a site that was set up with no budget previously. They're running on a $50 PowerEdge 2950 right now, and I'm trying to get approval to at least buy some refurb servers and shared storage of some sort. Open to any suggestions.
|
# ¿ Aug 3, 2016 16:53 |