Potato Salad
Oct 23, 2014



Saukkis posted:

Another fun mess. Oracle DB cluster with vSAN. Really tight schedule, need to limit core count, very little experience with vSAN. Original plan was for three 16-core nodes, until we realized it should really be 4 nodes. So if we drop to 12-core nodes we can choose pathetically weak processors, unnecessarily powerful and expensive CPUs, or a single 12-core CPU.

The 12-core CPU looks like the most practical option, but a single-CPU VMware node feels like a bad idea. Will it be able to handle all the PCIe devices we need? At least Gartner considers 1-CPU a nifty trick for budget-conscious CIOs.

A single socket isn't a bad idea. Make sure you've got all the memory channels populated correctly and you're good to go.

I am not a lawyer, nor am I able to provide opinions about Oracle licensing, but go take a second look at which specific component in a hypervisor requires Oracle RDBMS licenses. That should help steer how you extract value from this purchase.

Potato Salad fucked around with this message at 22:14 on Mar 20, 2019


Zorak of Michigan
Jun 10, 2006

Saukkis posted:

Another fun mess. Oracle DB cluster with vSAN. Really tight schedule, need to limit core count, very little experience with vSAN. Original plan was for three 16-core nodes, until we realized it should really be 4 nodes. So if we drop to 12-core nodes we can choose pathetically weak processors, unnecessarily powerful and expensive CPUs, or a single 12-core CPU.

The 12-core CPU looks like the most practical option, but a single-CPU VMware node feels like a bad idea. Will it be able to handle all the PCIe devices we need? At least Gartner considers 1-CPU a nifty trick for budget-conscious CIOs.

Six blades, each with two 4-core CPUs.

Agrikk
Oct 17, 2003

So migration to the new cluster took about 120 seconds of actual outage time for the six VMs that I migrated. FWIW, these were Windows Server 2016 VMs.

I mounted the old datastores in the new cluster, powered all of the VMs off, migrated them, and powered them back on; they came right back up, no additional reboot required. Having migrated compute, I then migrated storage to the new cluster's storage, which took about fifteen minutes total (but was transparent to the end user).

I had a four-hour outage window on the books, so being able to tell folks that the change was complete after thirty minutes was p. cool. :feelsgood:


Thanks Potato Salad, BangersInMyKnickers, Vulture Culture, DevNull, and Moey for your advice!

YOLOsubmarine
Oct 19, 2004


Saukkis posted:

Another fun mess. Oracle DB cluster with vSAN. Really tight schedule, need to limit core count, very little experience with vSAN. Original plan was for three 16-core nodes, until we realized it should really be 4 nodes. So if we drop to 12-core nodes we can choose pathetically weak processors, unnecessarily powerful and expensive CPUs, or a single 12-core CPU.

The 12-core CPU looks like the most practical option, but a single-CPU VMware node feels like a bad idea. Will it be able to handle all the PCIe devices we need? At least Gartner considers 1-CPU a nifty trick for budget-conscious CIOs.

If you’re running Enterprise Plus then you can use host anti-affinity rules to limit Oracle workloads to a subset of hosts in the cluster to keep the license count down. You can even use CPU affinity to further restrict it, though that has its own challenges.

Potato Salad
Oct 23, 2014



Affinity isn't enough; there is no way to prove to a judge that the relevant DB VMs never ran on unlicensed silicon within the same hypervisor cluster. Only a separate cluster is a strong enough legal separation. Affinity has been defeated in public court.

I did not say this. I was not here.

Potato Salad
Oct 23, 2014



There is a minimum allowable cooldown period before you may alter which silicon you've decided to bestow thy licenses upon.

Don't invite Oracle audits; read up on VMware + Oracle RDBMS licensing extensively.

Agrikk
Oct 17, 2003

Is there a way I can wipe a disk from within ESXi?

I have shoved a previously used disk into an ESXi 6.5 box for more storage, but it turns out that I've previously used the disk for an ESXi installation so there are old partitions on it.

code:
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
gpt
77825 255 63 1250263728

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 1250263694 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Any attempt to partedUtil delete fails with

code:
Error: Read-only file system during write on /dev/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
Unable to delete partition 3 from device /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
Goddamn ESX and Linux for recognizing old partitions and making them goddamn bulletproof.


I tried

code:
dd if=/dev/null of=/vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039 bs=512 count=1
and received
code:
dd: can't open '/vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039': Function not implemented

Methanar
Sep 26, 2013


Agrikk posted:

Is there a way I can wipe a disk from within ESXi?

I have shoved a previously used disk into an ESXi 6.5 box for more storage, but it turns out that I've previously used the disk for an ESXi installation so there are old partitions on it.

I recently did this. I thought it was extremely frustrating too.

List the device nodes under /dev/disks (or use esxcli storage core device list) to figure out which labels refer to which storage backend. Then you'll want to delete the unnecessary partitions.

code:
[root@esxi:/dev/disks] ls
mpx.vmhba32:C0:T0:L0
mpx.vmhba32:C0:T0:L0:1
mpx.vmhba32:C0:T0:L0:5
mpx.vmhba32:C0:T0:L0:6
mpx.vmhba32:C0:T0:L0:7
mpx.vmhba32:C0:T0:L0:8
mpx.vmhba32:C0:T0:L0:9
naa.6848f690ef0b72001f6320c182e69de9
naa.6848f690ef0b72001f6320c182e69de9:1
naa.6848f690ef0b720023fe71f648ee744e
naa.6848f690ef0b720023fe71f648ee744e:1
naa.6848f690ef0b720023fe71f648ee744e:2
naa.6848f690ef0b720023fe71f648ee744e:3
naa.6848f690ef0b720023fe71f648ee744e:4

[root@esxi:/dev/disks] rm naa.6848f690ef0b720023fe71f648ee744e:1 naa.6848f690ef0b720023fe71f648ee744e:2 naa.6848f690ef0b720023fe71f648ee744e:3 naa.6848f690ef0b720023fe71f648ee744e:4

[root@esxi:/dev/disks] partedUtil mklabel /dev/disks/naa.6848f690ef0b720023fe71f648ee744e msdos

[root@esxi:/dev/disks] vmkfstools -C vmfs3 -b 8m -S datastore1 /vmfs/devices/disks/naa.6848f690ef0b720023fe71f648ee744e:1
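
If you're on ESXi 6.5 and the table refuses to budge (the "Read-only file system" error from earlier in the thread), something like the sketch below should also work entirely from the ESXi shell. Heavily hedged: it assumes the table is locked because ESXi is still using one of the old partitions (the vmkDiagnostic coredump partition is a common culprit), it reuses the device name, end sector, and VMFS GUID from Agrikk's getptbl output as stand-ins, and "datastore2" is just a made-up label.

code:
# See which device you're dealing with (alternative to ls /dev/disks)
esxcli storage core device list

# If the old vmkDiagnostic partition on this disk is the active coredump
# target, release it first or the partition table stays read-only
esxcli system coredump partition get
esxcli system coredump partition set --unconfigure

# Write a fresh empty GPT label, which throws away the old partitions
partedUtil mklabel /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039 gpt

# Create one VMFS partition spanning the usable sectors, then format it.
# The end sector 1250263694 comes from the original table / getUsableSectors.
# (If you'd rather zero the first sectors with dd, the source is /dev/zero, not /dev/null.)
partedUtil getUsableSectors /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
partedUtil setptbl /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039 gpt "1 2048 1250263694 AA31E02A400F11DB9590000C2911D1B8 0"
vmkfstools -C vmfs6 -S datastore2 /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039:1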

Methanar fucked around with this message at 05:22 on Mar 22, 2019

SlowBloke
Aug 14, 2017

Agrikk posted:

Is there a way I can wipe a disk from within ESXi?

I have shoved a previously used disk into an ESXi 6.5 box for more storage, but it turns out that I've previously used the disk for an ESXi installation so there are old partitions on it.

code:
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
gpt
77825 255 63 1250263728

5 8224 520191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
6 520224 1032191 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
7 1032224 1257471 9D27538040AD11DBBF97000C2911D1B8 vmkDiagnostic 0
2 1843200 10229759 EBD0A0A2B9E5443387C068B6B72699C7 linuxNative 0
3 10229760 1250263694 AA31E02A400F11DB9590000C2911D1B8 vmfs 0
Any attempt to partedUtil delete fails with

code:
Error: Read-only file system during write on /dev/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
Unable to delete partition 3 from device /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039
Goddamn ESX and Linux for recognizing old partitions and making them goddamn bulletproof.


I tried

code:
dd if=/dev/null of=/vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039 bs=512 count=1
and received
code:
dd: can't open '/vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039': Function not implemented

I've used diskpart on a Windows workstation to clean the partition table on ESXi disks without major issues.

Open an admin CMD shell:

- diskpart
- list disk
- select disk N (where N is the number of the ESXi disk)
- clean

It won't guarantee secure data removal or a full wipe, but the partition layout will be wiped clean.
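
If the disk was cleaned from a Windows box like this, a quick hedged follow-up from the ESXi shell to confirm the host now sees an empty table before you create the datastore (device name reused from the earlier posts as a stand-in):

code:
# Rescan all adapters so ESXi picks up the changed partition table
esxcli storage core adapter rescan --all

# Confirm the old partitions are gone
partedUtil getptbl /vmfs/devices/disks/t10.ATA_____WDC_WD6400AAKS2D00A7B2________________________WD2DWCASYE288039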

BangersInMyKnickers
Nov 3, 2004


Potato Salad posted:

Affinity isn't enough; there is no way to prove to a judge that the relevant DB VMs never ran on unlicensed silicon within the same hypervisor cluster. Only a separate cluster is a strong enough legal separation. Affinity has been defeated in public court.

I did not say this. I was not here.

Yeah, I went through this poo poo years ago when VMware was first getting widespread adoption, and their line was essentially that virtualization is a Schrödinger's cat where their code was touching all and none of the silicon in a host all at once, so all individual cores had to be licensed as their own socket. Not surprised it got worse since then. The org ended up shelling out for a site license to not deal with Larry's poo poo.

Don't buy Oracle products.

Vulture Culture
Jul 14, 2003

I once had an Oracle rep threaten to sue me because I referred to "the user" of a specific Solaris server while discussing a support extension; she started screaming and accused me of illegally renting Oracle's technology to other people.

evil_bunnY
Apr 2, 2003

BangersInMyKnickers posted:

The org ended up shelling out for a site license to not deal with Larry's poo poo.

Vulture Culture posted:

I once had an Oracle rep threaten to sue me because I referred to "the user" of a specific Solaris server while discussing a support extension; she started screaming and accused me of illegally renting Oracle's technology to other people.

I like how every time we discuss Larry's hobby, new horror stories come out.

Digital_Jesus
Feb 10, 2011

BangersInMyKnickers posted:

Virtualization is a Schrödinger's cat where their code was touching all and none of the silicon in a host all at once.

I'm gonna use this line in a board meeting.

BangersInMyKnickers
Nov 3, 2004


Digital_Jesus posted:

I'm gonna use this line in a board meeting.

Their logic on this was some extremely tortured bullshit. Our hosts were dual-socket quad cores at the time and we were allocating 2 vCPUs to this single VM running their middleware stack. This was all extremely early on, so we bought sufficient licensing to cover both sockets of the host the VM would be on and then configured affinity rules to keep the VM contained to that host by default, in an attempt to comply with the language of their licensing. Even that was a stretch in my mind, because I knew the hypervisor was going to be mapping both vCPUs to the same socket since you don't split NUMA boundaries unless you have to.

Then their sales shithead said this wasn't good enough, because when you apply their processor licenses to a VM guest they only cover the individual cores instead of the socket, so we would need 8 licenses to cover the whole host. This was because paravirtual drivers offload "Large Amounts" of CPU cycles from the guest to the hypervisor, where those are executed on the other cores on the host outside the bounds of the VM, and this means you're stealing money from Larry. Oh and you have HA/vMotion enabled? Well you're going to need to license all 3 hosts in the cluster too because it could potentially run there eventually.

Or you could just do a bare-metal install with those same 2 licenses and be 100% compliant??

Or, as their sales dipshit helpfully pointed out, we could use Oracle's dogshit virtualization stack and these problems would magically go away!

YOLOsubmarine
Oct 19, 2004


Potato Salad posted:

Affinity isn't enough; there is no way to prove to a judge that the relevant DB VMs never ran on unlicensed silicon within the same hypervisor cluster. Only a separate cluster is a strong enough legal separation. Affinity has been defeated in public court.

I did not say this. I was not here.

I’m unaware of any US court case on this issue other than Mars v. Oracle, which was settled long before going to trial.

The only contractually impactful language around licensing is in the license agreement, which doesn’t clarify anything around virtual environments. Oracle’s own (non-contractual) statements around what constitutes a processor where Oracle is “installed and/or running” are expansive enough to require you to license every ESX host managed by every vCenter in your environment, since cross-vCenter migration makes them all potential landing spots for VMs containing Oracle bits. Even a cluster would not be a sufficient boundary.

“Oracle programs are installed on any processors where the programs are available for use. Third party VMware technology specifically is designed for the purpose of allowing live migration of programs to all processors across the entire environment. Thus, Oracle Enterprise Edition is installed and available for use on all processors in a V-Center.”

In reality you can leverage DRS along with maintaining appropriate logs for the requisite audit term showing that the Oracle VMs have not visited non-licensed hypervisor hosts. This requires tighter administrative control over your environment but is certainly valid. You can even have your MSA with Oracle amended to account for this if you’re willing to do the necessary work up front to get sign-off. Usually this involves documenting and describing the architecture, your operational controls, and your logging facilities.
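
As a rough illustration of the "maintain appropriate logs" part: vCenter's task and event history is the real record of where a VM has run, but the per-host logs can be archived as supporting evidence too. This is only a sketch, the exact log strings vary by ESXi version, and syslog.example.com is a made-up target, so treat it as an assumption rather than a recipe.

code:
# On an ESXi host: pull migration-related lines out of the local logs so they
# can be shipped to whatever archive you would show an auditor
grep -i vmotion /var/log/hostd.log /var/log/vmkernel.log

# Better: point every host at a central syslog server and keep that archive
# for the whole audit term
esxcli system syslog config set --loghost='tcp://syslog.example.com:514'
esxcli system syslog reload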

Whether it’s worth the effort vs just buying more licenses is going to vary from case to case, but generally Oracle is banking on customers being too scared or lazy to push back.

The most technically correct solution is to pitch your Oracle software and account team off a bridge.

Zorak of Michigan
Jun 10, 2006

YOLOsubmarine posted:

The most technically correct solution is to pitch your Oracle software and account team off a bridge.

This is good advice but also sad because it deprives you of someone you can vent your spleen on. I pride myself on being a nice guy (when acting in my professional capacity) and treating people in such a way that they will want to work with my company in the future. I realized long ago that those rules don't apply to Oracle. Even if we found an Oracle rep who knew their poo poo and wanted to help us, they would just reassign the poor bastard, or drive them to quit. Now I use the slightest Oracle-related inconvenience as an excuse to vent all my accumulated frustration all at once on whoever's currently assigned to make their pathetic excuses to us. No, I do not regard the ticket as closed. No, "must have been cosmic rays" is not an acceptable root cause analysis. No, it is not acceptable that your stupidly overpriced hardware crashes more often than a ten year old Dell. Yes, these meetings are going to continue. Tell your VP I said "suck it" but also that he should tell his Mom hello for me.

Digital_Jesus
Feb 10, 2011

BangersInMyKnickers posted:

Oh and you have HA/vMotion enabled? Well you're going to need to license all 3 hosts in the cluster too because it could potentially run there eventually.

TBF this is how MS does all of their licensing for server poo poo, sooooooo that's not unique.

BangersInMyKnickers
Nov 3, 2004


I'll admit that my need for MS software was generally limited to OS licensing and SQL, but no, that was never the case.

Internet Explorer
Jun 1, 2005





That's how it works for Windows Server OSes unless you have Datacenter licensing. You have to license each host for the maximum number of Windows Server VMs that could run on it.

Methanar
Sep 26, 2013

lol if you pay for software

TheFace
Oct 4, 2004

I always wondered why my last job kept all their Oracle poo poo on relatively cheap, somewhat random, physical hardware, instead of going virtual. Now I know... gently caress oracle licensing. Holy hell

Internet Explorer
Jun 1, 2005





TheFace posted:

I always wondered why my last job kept all their Oracle poo poo on relatively cheap, somewhat random, physical hardware, instead of going virtual. Now I know... gently caress oracle licensing. Holy hell

they don't call it "One Rich rear end in a top hat Called Larry Ellison" for nothing

Methanar
Sep 26, 2013

by the sex ghost

Internet Explorer posted:

they don't call it "One Rich rear end in a top hat Called Larry Ellison" for nothing

https://www.youtube.com/watch?v=-zRN7XLCRhc&t=2307s

BangersInMyKnickers
Nov 3, 2004


Internet Explorer posted:

That's how it works for Windows Server OSes unless you have Datacenter licensing. You have to license each host for the maximum number of Windows Server VMs that could run on it.

Since when? Last time I did this, Standard gave you two guests regardless of underlying hardware, Enterprise was 4, and Datacenter was unlimited by hypervisor socket.

Thanks Ants
May 21, 2004



BangersInMyKnickers posted:

Don't buy Oracle products.

Internet Explorer
Jun 1, 2005





BangersInMyKnickers posted:

Since when? Last time I did this, Standard gave you two guests regardless of underlying hardware, Enterprise was 4, and Datacenter was unlimited by hypervisor socket.

Maybe we're misunderstanding each other?

Say you have 18 VMs spread across 3 hosts in the same cluster. You need enough licenses to cover 54 server instances: 18 on each of the hosts.

Digital_Jesus
Feb 10, 2011

BangersInMyKnickers posted:

Since when? Last time I did this, Standard gave you two guests regardless of underlying hardware, Enterprise was 4, and Datacenter was unlimited by hypervisor socket.

Since 2012.

Server Standard - 2 guest OSes per complete licensing of all sockets in the host. Minimum of 2 sockets required to be licensed.
Server Datacenter - Unlimited guest OSes per complete licensing of all physical sockets in the host.

Starting with 2016 they changed to per-core models instead of per-socket.

On VMware / Hyper-V you're required to license all sockets/cores in the cluster as well, and Standard only lets you run 2 VMs per set of licenses.

Been that way for a loooooong time.

Enterprise died with Server 2008R2 as a licensing concept.

E: Server 2012 Standard was 2-sockets per license per 2 guest VMs. My bad.

Digital_Jesus fucked around with this message at 23:38 on Mar 22, 2019

BangersInMyKnickers
Nov 3, 2004


Digital_Jesus posted:

Since 2012.

Server Standard - 2 guest OSes per complete licensing of all sockets in the host. Minimum of 2 sockets required to be licensed.
Server Datacenter - Unlimited guest OSes per complete licensing of all physical sockets in the host.

Starting with 2016 they changed to per-core models instead of per-socket.

On VMware / Hyper-V you're required to license all sockets/cores in the cluster as well, and Standard only lets you run 2 VMs per set of licenses.

Been that way for a loooooong time.

Enterprise died with Server 2008R2 as a licensing concept.

E: Server 2012 Standard was 2-sockets per license per 2 guest VMs. My bad.

That makes sense. 2012 was around the last time I had to deal with this nonsense, and that's when we hit the breaking point where it was more cost-effective to just license the sockets in the cluster with datacenter.

Internet Explorer
Jun 1, 2005





BangersInMyKnickers posted:

just license the sockets in the cluster with datacenter.

forever and ever, amen.

Digital_Jesus
Feb 10, 2011

I sat down and did the math at some point and there's definitely a breakpoint in the Essentials+ / Hyper-V 3-hosts-and-a-SAN styled model where Datacenter is more effective, but it's juuuuuuuuuuuuuust outside the realm of most companies saying "gently caress it, buy it when we get audited".

Thanks Ants
May 21, 2004



I did this a couple of years back when the 2012 core licensing came in and you have to run such tiny quantities of VMs for Datacenter to not make sense vs. Standard that you probably are never going to fully utilise the hardware anyway.

apropos man
Sep 5, 2016

One of the guys at work just bought a low-end Nvidia GPU and he's planning to use it for VDI.

I've got a spare, el-cheapo, Nvidia 710 card that I use on my headless boxes if I need to get a screen up.

I'm thinking that maybe I could use my card for VDI in my ESXi host or something.

What are the advantages and how is it done? Can I expect a Windows 10 RDP session through GPU VDI that is comparable to sitting in front of a Windows machine?

I realise that there will be network latency, so what are the advantages of adding a dedicated GPU and what platform is it used on? ESXi? KVM?

Digital_Jesus
Feb 10, 2011

There's no real advantage to having a GPU attached to your VDI instance unless you're gonna do something that requires a GPU.

apropos man
Sep 5, 2016

Ah. Maybe my mate is thinking about using multiple monitors or some weird, advanced virtualization poo poo then. I'll ask him tomorrow.

Moey
Oct 22, 2010

He really needs a Grid card if he wants to do the vGPU stuff. An older K1 card can be had via eBay cheaply enough and doesn't require licensing like the newer cards do.

some kinda jackal
Feb 25, 2003

 
 
Is there anything you guys can think of which can hit all of the following:

- provide PCI passthrough
- run a Windows NT guest
- type 2 hypervisor preferred, so RDP or VNC are not required to interact.

I have some fairly esoteric software that only runs on NT, requires specific PCI cards to operate some lab equipment, and is interactive, so I’d prefer not to muddy it up with remote connections. I’m getting tired of the hardware this runs on failing, so I’m hoping to virtualize it on commodity new hardware.

some kinda jackal fucked around with this message at 02:27 on Apr 5, 2019

Vulture Culture
Jul 14, 2003


Martytoof posted:

Is there anything you guys can think of which can hit all of the following:

- provide PCI passthrough
- run a Windows NT guest
- type 2 hypervisor preferred, so RDP or VNC are not required to interact.

I have some fairly esoteric software that only runs on NT, requires specific PCI cards to operate some lab equipment, and is interactive, so I’d prefer not to muddy it up with remote connections. I’m getting tired of the hardware this runs on failing, so I’m hoping to virtualize it on commodity new hardware.

You're not going to find "commodity new hardware" with PCI slots. I wouldn't expect the boutique add-in cards shipped with high-end microscopy setups to work with an IOMMU, in the first place, but a PCIe->PCI adapter is only going to complicate things. Your best option might be to try an old Yorkfield Core 2-era setup with both VT-d support and a hardware PCI slot, but stuff might be funky with the IOMMU groups on those motherboards.

Honestly, you're probably going to have an easier time just building a new box of old poo poo.
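
If you do go down the VT-d route, it's worth checking how the IOMMU groups fall out on a candidate board before committing. A minimal sketch, run from any Linux environment on that box (standard sysfs paths, nothing board-specific); the legacy PCI bridge and anything behind it will typically land in a single group:

code:
# List every IOMMU group and the PCI devices in it
for group in /sys/kernel/iommu_groups/*; do
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        lspci -nns "${dev##*/}"
    done
done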

apropos man
Sep 5, 2016

My Linux server motherboard has a mixture of PCIe and PCI slots, if that's what he's after.

I know this because I was doing some work on it a few weeks ago, in the darkness of my cabinet with a torch in my mouth, and thought "why isn't this GPU fitting into this slot?". On closer examination I found out that I had been (carefully) trying to insert a modern GPU into a PCI slot!

My motherboard is a Gigabyte X150M-ECC. I think it supports IOMMU/passthrough, but I haven't tried it on that particular board. I just run a bunch of KVM Linux boxes and Docker on it.

Wibla
Feb 16, 2011

Sigh, I need a compact ESXi host with room for 4-6 3.5" drives and 16ish GB of RAM, but the MicroServer Gen8 is out of production and the Gen10 is apparently garbage?


some kinda jackal
Feb 25, 2003

 
 

Vulture Culture posted:

You're not going to find "commodity new hardware" with PCI slots. I wouldn't expect the boutique add-in cards shipped with high-end microscopy setups to work with an IOMMU, in the first place, but a PCIe->PCI adapter is only going to complicate things. Your best option might be to try an old Yorkfield Core 2-era setup with both VT-d support and a hardware PCI slot, but stuff might be funky with the IOMMU groups on those motherboards.

Honestly, you're probably going to have an easier time just building a new box of old poo poo.

All good points. Commodity hardware may have been a stretch since I can get PCI slots, just not on most common motherboards. In any event, I have a stockpile of old equipment to lean on, and I will continue to help service this thing until the pile runs out, at which point I hope they stop calling me :D

I'll keep fiddling with it but it is looking like a pain in the rear end.
