|
I've been trying to set up oVirt on my home lab for a few days now and haven't been able to get it to stand up. I'm thinking that perhaps my storage choices are the issue. Clearly I don't know what I'm doing, which is why I'm trying to stumble through this and learn as I go, so forgive me if I'm just being dumb here. Can a single host act as both an iSCSI target and initiator? I've been trying to make it work that way and it's not going well, but I could be doing a million other things wrong.

My current setup is:

- 4 x 256GB SSD drives in RAID 5 as sda (yeah I know, but it's fine for a home lab)
- 4 x 146GB 10k drives in RAID 5 as sdb

sdb is partitioned for the host CentOS install, with about 150GB left over that I planned on eventually making an iSCSI block backstore for the ISO storage domain. sda has no file system or partitions and is set up as an iSCSI block backstore for the main oVirt storage domain.

I'm trying to set up the oVirt hosted engine (4.1 on CentOS 7.3). During the hosted-engine setup script it connects to the iSCSI target just fine, but when it actually tries to stand up the engine as a VM I get a variety of exotic errors with little info out there on Google to help me. I know I could just do this with NFS, which might make more sense, but I had never touched iSCSI before and wanted to give it a shot. Like I said, I know a ton of things could be going wrong here, but I thought I should at least check that it's possible to have a single host as both the initiator and target - everything that comes up when I try to Google it is talking about two different initiators hitting a single target, which I know is bad.
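For reference, loopback iSCSI should work in principle - the LIO target and the open-iscsi initiator are separate pieces talking over TCP, even on 127.0.0.1. A rough sketch of the kind of setup I'm attempting with targetcli and iscsiadm (the IQNs and backstore name here are placeholders, not my exact config):

```shell
# Target side (LIO via targetcli), run as root.
# Export the raw RAID device as a block backstore.
targetcli /backstores/block create name=ovirt_data dev=/dev/sda

# Create the target (IQN is a made-up placeholder).
targetcli /iscsi create iqn.2017-07.lab.home:ovirt

# Map the backstore to a LUN on the default portal group.
targetcli /iscsi/iqn.2017-07.lab.home:ovirt/tpg1/luns create /backstores/block/ovirt_data

# Allow this host's own initiator IQN (check /etc/iscsi/initiatorname.iscsi).
targetcli /iscsi/iqn.2017-07.lab.home:ovirt/tpg1/acls create iqn.1994-05.com.redhat:myhost

# Persist the config.
targetcli saveconfig

# Initiator side, on the same host, pointed at loopback:
iscsiadm -m discovery -t sendtargets -p 127.0.0.1
iscsiadm -m node -T iqn.2017-07.lab.home:ovirt -p 127.0.0.1 --login
```

These are privileged config commands, so treat them as a sketch rather than something to paste blindly - in particular the ACL has to match whatever initiator IQN the host actually uses.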
|
# ¿ Jul 12, 2017 00:03 |
|
|
Vulture Culture posted:
"It's just TCP, so there shouldn't be any roadblocks here. Are you using LIO as your target/initiator? How are you configured? What errors are you seeing?"

Thanks for the reply. I am indeed using LIO, but I waffle back and forth on whether storage is the issue. I can connect just fine from my Ubuntu laptop if I set it up as an initiator.

The first go-around I was just getting an error along the lines of "Unable to mount storage" - it was an oVirt script error, so not very descriptive, and I didn't think to go down to the logs and get the real error. The best I could figure is that multipath was fighting with the LVM volumes over the iSCSI path. I nuked everything and started over without any LVMs on the host. I got closer that time, but got an error from oVirt about the system being unstable and never able to launch. I did dig into the logs there; it was a permission issue in a weird place with the kvm user. I went down that rabbit hole for a while but never got anywhere. I started a third time and actually got the hosted VM to launch, but never got the engine appliance up on it. After a reboot the hosted-engine can't access the storage, despite it still being online, available in multipath, and showing up in lsblk.

I think it's just oVirt being complicated and me still learning. I'm starting from scratch (again) and we'll see if I get any farther. If I don't get it on this shot I'm going to try NFS, I guess. At first the oVirt documentation seemed really good, but I keep coming up with questions that aren't addressed anywhere. If I ever make it work I'm definitely going to write up a guide for anyone else foolish enough to go down this road.

It's also frustrating working with on-prem bare metal again. I'm almost tempted to delay setting this up and learn Puppet basics first, so when I want to nuke it and start over I can get back to a base install quicker.
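In case it helps anyone following along: the multipath-vs-LVM fight I suspected can apparently be tamed by telling the host-side LVM and multipath to leave the exported device alone, so only the iSCSI session's path gets claimed. Something like the following - the device path and WWID are placeholders, not my actual values:

```shell
# In /etc/lvm/lvm.conf, stop the host's LVM from scanning the raw
# device that's exported as an iSCSI backstore (path is a placeholder):
#
#   global_filter = [ "r|^/dev/sda$|" ]

# In /etc/multipath.conf, blacklist the local RAID device so multipath
# only builds a map over the iSCSI path (WWID is a placeholder):
#
#   blacklist {
#       wwid "3600508b1001c0123456789abcdef0123"
#   }

# Then tell multipathd to re-read its config:
multipathd -k"reconfigure"
```

I haven't fully verified this fixes my case, but the general idea of filtering the backing device out of host LVM and multipath seems to be the standard advice for this layered setup.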
|
# ¿ Jul 13, 2017 00:32 |
|
Will posting in this thread let me read it? Stay safe ghost posts!
|
# ¿ Jan 8, 2018 22:04 |