|
I've become more curious now. This Xen deployment isn't going to get very large, only 2 hosts and maybe 20 total VMs. However, having a separate storage repository for each VM is sort of annoying. I think it's time to do some googling.
|
# ¿ Jun 16, 2011 02:51 |
|
So I should still be able to do Xenmotion and all the other bells and whistles by putting all my vdisks on a single SR?
|
# ¿ Jun 16, 2011 13:58 |
|
I dunno. It was what was shipped to me. So far though I like it a lot. More features than Hyper-V, for sure.
|
# ¿ Jun 16, 2011 20:51 |
|
What would be a good option for an SMB/NFS virtual machine/appliance with deduplication? I am looking to run something on top of ESXi that I can pass iSCSI LUNs to, which can act as a file server for my network. You can only get Windows Storage Server as an OEM product, and I would much rather run the gateway as a VM so I could take advantage of vMotion and HA.
|
# ¿ Jun 24, 2011 16:27 |
|
I am pretty curious about this too. Based on a pointer just a few posts up I started researching NexentaStor, and on paper the concept looks fantastic, like a perfect fit for an upcoming project.
|
# ¿ Jun 28, 2011 16:07 |
|
Certainly a concern but I am not sure how big of one it is at this point. The product still seems to be trucking along even almost a year after the announcement.
|
# ¿ Jun 28, 2011 17:29 |
|
InferiorWang posted:Speaking of LeftHand, for those with a P4300/4500, how do you set up the NICs on your equipment? Do you bond the NICs on each shelf/unit? Yes, we bonded the NICs, and we went further: each NIC in the bond plugs into a separate switch stack member.
|
# ¿ Jul 15, 2011 14:06 |
|
InferiorWang posted:So you did the adaptive load balancing then. I'm debating whether to do that or just go with LACP bonding to one switch. Switch one is a Cisco 4507 while switch two is a 2960G. I wonder if there is any performance difference between the two options.
|
# ¿ Jul 15, 2011 14:58 |
|
Nomex posted:
We never could make disk-to-disk as our only source of backups make sense for us as a company. We dump initial backups to disk, yes, but we archive off weekly and monthly sets to LTO4. We haven't found any solution yet for storing the entire set of company data that beats a turtle of LTO5 tapes in an offsite safe to protect against mega disaster, insane admin, or both.
|
# ¿ Jul 18, 2011 17:30 |
|
I have a customer that ordered an expansion shelf for an AX4 and asked me to put it in. I have zero experience with EMC, much less the CLARiiON line, and it looks like I need some sort of login to even get to the documentation. Is adding an expansion shelf a pretty easy prospect? Is it hot-add? Or am I going to have to spin down the array?
|
# ¿ Dec 12, 2011 21:38 |
|
OK, I have a question about iSCSI offload. I have a couple of R710s with 4 onboard Broadcom NICs that have the option to do iSCSI offload, plus a quad-port Intel gigabit NIC. Broadcom's documentation claims big CPU savings under high I/O loads when using their hardware iSCSI initiator. Intel's documentation says they see no discernible difference in CPU usage when using their NICs with a software initiator, and they argue the software initiator is better since it doesn't break the standard OS model (whatever that means). Who is correct? Have any of you done any testing at all in the real world with these scenarios?
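For what it's worth, the competing claims are easy enough to sanity-check yourself. A crude sketch (the path and sizes below are placeholders, not anything from either vendor's docs): generate a sequential write load against a file on the LUN under test and compare wall-clock time to CPU time, once with the LUN attached via the hardware initiator and once via the software initiator.

```python
# Crude A/B probe: time a sequential write burst and report how much CPU
# this process burned relative to wall-clock time. Run it against a file
# on the LUN attached each way and compare the ratios.
import os
import time

def io_cpu_profile(path: str, block: int = 1 << 20, blocks: int = 64):
    """Write `blocks` blocks of `block` bytes, fsync, and return
    (wall-clock seconds, process CPU seconds)."""
    buf = os.urandom(block)
    wall0, cpu0 = time.perf_counter(), time.process_time()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    return time.perf_counter() - wall0, time.process_time() - cpu0

wall, cpu = io_cpu_profile("/tmp/iscsi_probe.bin")  # placeholder target path
print(f"wall {wall:.2f}s, cpu {cpu:.2f}s, cpu/wall {cpu / wall:.1%}")
```

Caveat: process CPU time misses kernel work done on other threads, which is where much of the initiator overhead actually lives, so for a real comparison watch the system-wide processor counters in Perfmon while this runs. The A/B shape of the test stays the same either way.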
|
# ¿ Jan 20, 2012 18:08 |
|
With all the bad press I have seen flying around recently concerning EMC, I would default to the NetApp based on that alone. Pricing being cheaper is just icing on the cake.
|
# ¿ Jan 23, 2012 21:08 |
|
Yes, that's a super open-ended question. What's your budget, feature requirements, etc., etc.?
|
# ¿ Feb 14, 2012 21:55 |
|
FISHMANPET posted:Since we're on a push lately for redundant everything, I've been thinking about making everything redundant. .... For Windows file sharing I can use DFS. (and probably others). That's not entirely accurate. While DFS can give you higher availability than a single file server, the service was primarily designed to solve the problem of needing file access between two sites with low bandwidth between them. DFS relies heavily on Active Directory replication, which by definition only provides loose convergence. It also has some very big caveats around file sizes, replication times, and a few other things I can't remember off the top of my head. If you want true high availability from your Windows file servers, the answer is the same as it has been for a decade: an HA cluster.
|
# ¿ Feb 29, 2012 19:31 |
|
FISHMANPET posted:Thanks, I guess that's another thing to put on my list. Here's the good news: if you already have a SAN and are virtualizing, then it's really trivial nowadays to put together a Windows cluster.
|
# ¿ Feb 29, 2012 21:00 |
|
the spyder posted:Does anyone have experience with 1+ PB storage systems? We have a project that may require up to 7.5TB of data collection per day. I have nothing of value to offer other than I would love to know the details of the system you end up using. I have long been fascinated with storing immense amounts of data and the technologies required to do so.
|
# ¿ May 3, 2012 18:18 |
|
Just to reinforce a point that has been made in this thread over and over again: we have a backup server populated with 12x 2TB 7.2K RPM drives. When we purchased it several years ago we had it configured by the vendor as a RAID 5 array. Mistake. We had a drive fail yesterday. No problem, usually. We pulled the bad drive, hot-swapped a spare in, and popped into the server manager to make sure the rebuild was taking place. Holy smokes. At the speed it is currently going, the array will be finished rebuilding this time next week. Now here we are crossing our fingers that another drive doesn't fail in the next 6 days. ALWAYS configure huge arrays as RAID 6 or similar. Lesson learned.
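The back-of-the-envelope math on that rebuild window is sobering. A rough sketch, assuming independent drive failures and a made-up 3% annualized failure rate (the post doesn't give one):

```python
# Estimate the chance that a second drive dies before a RAID 5 rebuild
# finishes. The 3% AFR is an illustrative assumption, not measured data.

def second_failure_risk(remaining_drives: int, rebuild_days: float,
                        afr: float = 0.03) -> float:
    """Probability that at least one surviving drive fails during the
    rebuild window, assuming independent failures at a constant rate."""
    p_single = afr * rebuild_days / 365.0  # per-drive chance in the window
    return 1.0 - (1.0 - p_single) ** remaining_drives

# 12-drive array, one dead, 6-day rebuild as described in the post
risk = second_failure_risk(remaining_drives=11, rebuild_days=6)
print(f"Chance of a second failure mid-rebuild: {risk:.2%}")
```

Roughly half a percent per incident under these assumptions, and it scales with both the rebuild time and the drive count, which is exactly why big slow arrays want the second parity drive.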
|
# ¿ May 4, 2012 13:37 |
|
FISHMANPET posted:I'm about ready to tear my hair out with DFS. I'm trying to create the first Namespace, and I get the error "RPC server not available." I have no idea what RPC server it's trying to contact, I can't find any failures in any logs, and googling hasn't come up with any useful solutions. I did this on another domain and it worked just fine, so I don't know what's different here. It's querying for a DC, and the one it is attempting to connect to either no longer exists or is not reachable. Make sure all your DCs are online. Make sure you do not have a DC still listed in AD which has been taken offline without proper demotion. If you do, use ntdsutil to get rid of all references to it.
|
# ¿ May 5, 2012 00:10 |
|
I am sorry, this is a tiny bit off subject and probably better suited to the backup thread, but I think it will get better eyes here. 99% of my environment is backed up using Veeam, which we aren't entirely happy with, but that's a different story. What I am asking about is my file server cluster. I have a 2-node Windows cluster serving files. The 2 nodes are both VMs, and the cluster resource is a LUN sitting out on the SAN. Veeam backs up the cluster nodes but of course does not back up the cluster resource LUN. I am trying to come up with ideas for the best way to do backups of the cluster resource. Any suggestions from anyone doing anything similar?
|
# ¿ May 14, 2012 14:25 |
|
Misogynist posted:What are you doing with your Veeam backup repository? The Veeam backup repository is direct attached storage on the backup server proper.
|
# ¿ May 14, 2012 14:46 |
|
I've got a question for those of you who have messed around with HP LeftHand gear. I have to upgrade the switches this kit is plugged into. What will happen if I just start unplugging from the old switches and replugging into the new? Am I going to end up with some sort of split-brain cluster? Or will it reconverge on its own?
|
# ¿ Aug 21, 2012 22:03 |
|
Nomex posted:Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long. That's the easy part. What I am wondering is whether I am going to need to shut down the array, or if it can survive the cluster being down for about five minutes.
|
# ¿ Aug 22, 2012 01:28 |
|
This will be the same subnet, but what we are doing is upgrading to 10 gig. These are top-of-rack switches, so I am actually doing a one-to-one swap. Just about everything in these racks has storage on this array, so I know I am going to need to shut down the servers. The more I think about it, the more I figure I am going to have to power down the array as well to avoid any potential problems. I was just trying to avoid that. I am not sure why it strikes me as scary.
|
# ¿ Aug 22, 2012 12:25 |
|
Number19 posted:
Zinger! FOM has been running on the array itself since we put it in. Now's as good a time as any to switch her up!
|
# ¿ Aug 22, 2012 16:26 |
|
You're probably not going to have an import feature with onboard RAID. If you reinitialize, your data is gone. I think you're probably hosed.
|
# ¿ Aug 27, 2012 20:42 |
|
Boogeyman posted:
If this is an MS SQL server, you can verify this pretty trivially by looking at your cache hit ratio.
|
# ¿ Aug 29, 2012 21:04 |
|
Boogeyman posted:I should have thought of that earlier (SQLServer:Buffer Manager - Buffer cache hit ratio, right?). I watched it for about five minutes and didn't see it drop below 99.99%. Yeah dude, you're totally answering all your queries out of RAM.
|
# ¿ Aug 29, 2012 22:07 |
|
I have a LUN presented via iSCSI from a LeftHand array that I have attached to a two-node file server cluster. At some point in the near future I need to migrate that LUN (or all the data on it) to an EqualLogic array. I have a few ideas on how I am going to go about it, but if anyone has done this and has an easy method I would sure love to hear it.
|
# ¿ Sep 4, 2012 19:54 |
|
NTFS file shares.
|
# ¿ Sep 4, 2012 21:13 |
|
That's where my logical thought keeps going. I was sort of hoping, though, that there was some magical tool I had not heard about yet that could replicate a LUN between two arrays from different vendors... and it's free... and it's easy to use... and uhh, it gives me 20 dollars when I double-click on it.
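In lieu of the magical tool, the usual answer really is a plain file-level copy with verification (robocopy /MIR if the NTFS ACLs need to come along). A sketch of the copy-and-verify idea in Python, with placeholder paths and no ACL handling:

```python
# Copy every file from the old LUN's mount point to the new one, preserving
# the directory layout, then confirm each copy by SHA-256 checksum.
import hashlib
import shutil
from pathlib import Path

def sha256(path: Path) -> str:
    """Hash a file in 1 MB chunks so large files don't blow up memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def migrate(src: Path, dst: Path) -> list:
    """Copy the tree under src to dst and return any files whose
    checksums do not match after the copy (empty list = clean run)."""
    mismatches = []
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # copies data and timestamps
            if sha256(f) != sha256(target):
                mismatches.append(target)
    return mismatches
```

With both LUNs mounted on one cluster node you would point src at the LeftHand volume and dst at the EqualLogic one, then cut the share over once `migrate` comes back clean.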
|
# ¿ Sep 4, 2012 21:25 |
|
Xenomorph posted:I just watched some videos that showed a user making a virtual disk on the target system, and then that virtual disk gets mounted on the initiator. So there is still overhead (NTFS -> virtual disk -> ext4) when writing/reading. Anyway, you don't understand how iSCSI works. It doesn't 'create' anything; it is simply the protocol used to transfer the data. There is no more overhead than there is in accessing any file system. Syano fucked around with this message at 02:30 on Sep 7, 2012 |
# ¿ Sep 7, 2012 02:24 |
|
Our environment is small enough that I can still afford dedicated switches for our SAN. I have a buddy who is an admin over on the other side of town who constantly rides me about wasting ports by dedicating switches just to SAN traffic, and how I should be running all the traffic through the same top-of-rack switch and VLANing from there. Who cares, I say: as long as I can afford it, I am going to keep it simple and keep layer 3 off my SAN switches.
|
# ¿ Sep 11, 2012 00:33 |
|
adorai posted:Dedicating a switch (and NICs) to iSCSI is a waste.
|
# ¿ Sep 11, 2012 03:03 |
|
FISHMANPET posted:The thing that's pissing me off about it right now is trying to share a single LUN to multiple machines. When I do this, it says I probably want to create a server cluster and share the LUN to that instead. Which makes sense. Except, as far as I can tell, there's no way to do that in CMC. You may need to update CMC then. In all recent versions you right-click 'Servers' and choose 'New Server Cluster...'. You can even add your new member servers from that screen. You will need to do that before you can present a LUN to multiple initiators at once.
|
# ¿ Sep 12, 2012 16:00 |
|
This is as good a place as any. Discuss away and I am sure people will chime in.
|
# ¿ Oct 16, 2012 13:38 |
|
I am still trying to wrap my head around what Microsoft's strategy is with Server 2012, SMB 3.0, and scale-out file servers for applications. Everything starts out looking awesome. Storing SQL and Exchange databases on your file servers is a pretty neat option. Having your Hyper-V stores on an SMB share is pretty awesome too. So you move from that to thinking about high availability at the file server level, and you start reading about scale-out file servers. At this point things start looking fantastic: unified storage for my MS shop on an active-active file server cluster. Then it hits you... you still have to have shared storage for all this to work. So is Microsoft's strategy for me to build this out with the file server cluster acting as the filer and all the back-end storage still being done via third-party iSCSI or Fibre Channel kit? Or heck, by shared SAS shelves? And if that is the strategy, why would I not just cut out the middleman and connect my Hyper-V, SQL, and Exchange application servers directly to the iSCSI targets? I think I am missing something here.
|
# ¿ Nov 8, 2012 17:27 |
|
Corvettefisher posted:
Huh? How the crap do you get "drinking the MS Kool-Aid" out of me asking if someone can shed some light on the MS storage strategy for Server 2012?
|
# ¿ Nov 8, 2012 18:03 |
|
cheese-cube posted:
This is par for the course with Microsoft. That being said, it starts to become a neat solution when you begin putting all the pieces together... that is, until you address the shared storage for the cluster. I could see this as a wicked solution if you could take the local storage of servers and pool it together, akin to what the VSA does, but if you still have to have iSCSI or Fibre Channel targets on the back end then I sort of don't see the point. Maybe this is just a natural progression of things... i.e., Microsoft puts out Server 2012 with SMB 3.0 support, the idea being that third parties like NetApp/EMC/etc. pick up and implement SMB 3.0 support soon, and that's their idea of the end-to-end solution.
|
# ¿ Nov 8, 2012 20:15 |
|
Scale Computing does something sort of like what you guys are talking about, I believe. I have no idea how it runs or what technologies it is built on. I just had someone tell me about it once and I looked at the web page once. Here it is if you want to read more... I think I will when I get a free moment: http://www.scalecomputing.com/products/hc3/features-and-benefits
|
# ¿ Nov 8, 2012 20:57 |
|
Moey posted:That's pretty strange that the Equallogic line cannot expand with just shelves. So every time you want to grow your storage, you are pretty much buying an entire new SAN? Check out Scale Computing HC3. They are doing exactly what we are talking about; they call it hyperconvergence. I have no clue what their hypervisor is, though. EDIT: I did some reading on the Scale HC3 solution last night. It looks neat in theory. You buy a cluster of nodes that serves both your compute needs and your storage needs. If you need to expand, you just buy another node and it adds to everything: more storage, more memory, more compute power, more network capacity. If you need just storage, you can buy storage-only nodes. Of course they aren't very forthcoming on their site about the technology under the hood. Still, if it works, it would be sort of neat. Syano fucked around with this message at 14:30 on Nov 9, 2012 |
# ¿ Nov 9, 2012 05:15 |