  • Locked thread
World z0r Z
May 26, 2013

my lab... pic sucks.



from top down

Juniper SA-2500 SSLVPN
Palo Alto PA-200 firewall
Juniper SRX210
Juniper SRX210
Avocent ACS6016 console server
Cisco 3750 Switch
Cisco 3550 Switch
Cisco 3550 Switch
power strip
Cisco 3650 Switch
Juniper J4300 Router
Cisco 3845 ISR Router
Juniper J6300 Router
Cisco 2811 ISR router with NAM
Juniper M7i router
HP DL360 G7 with 8x 1G NICs for vSphere

I'm an ex-WAN guy (for now) so my lab was mostly WAN-centric until I added SSLVPN, firewalls and vSphere this year after phasing out all my 2500 and 2600 devices. The M7i is logically split into 16 routers using "logical systems", which is similar to Nexus VDC. So that is my "WAN" - everything else breaks out into remote sites off the WAN. CallManager Express on the 2811, and CallManager on a VM in vSphere, for some 7960s I also have. I typically don't have this whole rack turned on at once; I only configure what I want to train on: WAN, LAN, VoIP, firewalls, ESX and server-related tech, etc. For quick IOS L3 stuff I fire up a Cisco IOU VM and build things virtually.
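For anyone curious what that M7i carve-up looks like: each logical system gets its own config tree under the `logical-systems` hierarchy in JunOS, with its own interfaces and protocols. A rough sketch (the names, units, and addresses here are made up):

```
logical-systems {
    wan-r1 {
        interfaces {
            ge-0/0/0 {
                unit 101 {
                    family inet {
                        address 10.0.0.1/30;
                    }
                }
            }
        }
        protocols {
            ospf {
                area 0.0.0.0 {
                    interface ge-0/0/0.101;
                }
            }
        }
    }
    wan-r2 {
        /* ...and so on, up to 16 routers on the one chassis */
    }
}
```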


Drighton
Nov 30, 2005

Last time I tried to install vSphere 4 on a non-server I discovered the driver problem. What should we look for when selecting a consumer motherboard for a white box lab?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Drighton posted:

Last time I tried to install vSphere 4 on a non-server I discovered the driver problem. What should we look for when selecting a consumer motherboard for a white box lab?

I use supermicro, but really google is your friend.

Return Of JimmyJars
Jun 24, 2006

by FactsAreUseless
I'm going to be the voice of dissent and say that as long as you power down your lab when you're done, getting that eBay Dell is fine. The idea behind the lab is to build experience, and you're not going to get familiar with how a real bare-metal server is set up by building a generic beige box. Server hardware and the consumer hardware people on here are recommending are radically different. You can also tuck your lab into the garage or basement if the noise is that obtrusive.

evol262
Nov 30, 2010
#!/usr/bin/perl

Return Of JimmyJars posted:

I'm going to be the voice of dissent and say that as long as you power down your lab when you're done, getting that eBay Dell is fine. The idea behind the lab is to build experience, and you're not going to get familiar with how a real bare-metal server is set up by building a generic beige box. Server hardware and the consumer hardware people on here are recommending are radically different. You can also tuck your lab into the garage or basement if the noise is that obtrusive.

If he doesn't have a garage or basement, he's screwed.

A "generic beige box" is still a bare-metal setup. It's not ESXi on ESXi.

Server hardware and consumer hardware differ very, very little these days, unless you think "ECC instead of non-ECC; SAS instead of SATA" is "drastic". The big difference you'd see is using an OEM-customized ESXi image that has the drivers built in. Big whoop. The VMware experience is just the same. A lot of whiteboxes will do hardware monitoring out of the box with ESXi.

This whole "just turn it off" thing is insane. Once you get reasonably used to having an AD environment, you're going to tie it into the rest of your network. Then what? Leave your 1U running all the time?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I think the most important thing in a lab isn't about installing it on a Dell/HP/etc because at the end of the day installing ESXi/Citrix/Hyper-v on x86 is pretty much the same all around. The important thing to take away from it is the knowledge of the workings of the systems such as esxi, GNS3, Windows/Linux, etc.

Getting some vendor hardware is great but not always the main goal. Yes, there are some things like Cisco UCS configuration that are sometimes daunting, but there are bunches of simlabs for that which cover it without spending a bunch on rebranded HP servers.

Probably the only case where buying vendor HW for a lab is really useful is storage, but even then there are so many Java programs that simulate the environment, which vendors usually give out for little to nothing.

Turnquiet
Oct 24, 2002

My friend is an eloquent speaker.

I am starting small and keeping VM instances limited to VirtualBox on a MBA. Can I setup three VMs in Vbox, build a domain controller on one, a web server on another, and a federation server on a third? Or will all my VMs be perfect little islands without any networking hardware?

AreWeDrunkYet
Jul 8, 2006

Turnquiet posted:

I am starting small and keeping VM instances limited to VirtualBox on a MBA. Can I setup three VMs in Vbox, build a domain controller on one, a web server on another, and a federation server on a third? Or will all my VMs be perfect little islands without any networking hardware?

You can create a virtual network for your VMs, and you can bridge that to a physical NIC.
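For example (a sketch using VirtualBox's `VBoxManage` CLI; the VM names and host adapter are placeholders): put all three VMs on the same internal network so they can see each other, and optionally bridge one to the physical NIC so it can reach the rest of your LAN.

```
# same host-internal network for all three -- no hardware involved
VBoxManage modifyvm "dc01"   --nic1 intnet --intnet1 "labnet"
VBoxManage modifyvm "web01"  --nic1 intnet --intnet1 "labnet"
VBoxManage modifyvm "adfs01" --nic1 intnet --intnet1 "labnet"

# or bridge a NIC to the MBA's physical adapter to reach the real LAN
VBoxManage modifyvm "web01" --nic2 bridged --bridgeadapter2 "en0"
```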

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Turnquiet posted:

I am starting small and keeping VM instances limited to VirtualBox on a MBA. Can I setup three VMs in Vbox, build a domain controller on one, a web server on another, and a federation server on a third? Or will all my VMs be perfect little islands without any networking hardware?

They'll be networked virtually in RAM; VirtualBox has a nice network manager, and as long as they're all on the same internal network ("vlan") they'll talk to each other.

Just make sure you only give them the RAM/CPU they need.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

evol262 posted:

This whole "just turn it off" thing is insane. Once you get reasonably used to having an AD environment, you're going to tie it into the rest of your network. Then what? Leave your 1U running all the time?

I hear windows for workgroups works nice.

World z0r Z
May 26, 2013

For what it's worth the G7 and up HP proliant 2U boxes like a DL380 are really really quiet for what they are. They might spin up for 5 seconds on boot but they are no noisier than a video card playing games.

evol262
Nov 30, 2010
#!/usr/bin/perl

World z0r Z posted:

For what it's worth the G7 and up HP proliant 2U boxes like a DL380 are really really quiet for what they are. They might spin up for 5 seconds on boot but they are no noisier than a video card playing games.

Terrible-config DL380 G7s are still 3 times as expensive as a Haswell i5 build.

Ron Burgundy
Dec 24, 2005
This burrito is delicious, but it is filling.
Still not possible to nest 64-bit guests under VMWare Player or Virtualbox with ESXi hey? Guess that rules out Server 12, seems to only come in 64 flavour.

evol262
Nov 30, 2010
#!/usr/bin/perl

Ron Burgundy posted:

Still not possible to nest 64-bit guests under VMWare Player or Virtualbox with ESXi hey? Guess that rules out Server 12, seems to only come in 64 flavour.

What? Yes, it is. You can nest virtualization-capable guests in Player or KVM. I don't think you can in VirtualBox.

Ron Burgundy
Dec 24, 2005
This burrito is delicious, but it is filling.
Well gently caress. It never worked for me in VBox so I Googled around a bit and saw some outdated threads about it being the same on everything except Workstation which is like a million dollars AUD :australia:

Maybe I should have actually tried it...

klosterdev
Oct 10, 2006

Na na na na na na na na Batman!
I've got a copy of Packet Tracer lying around due to some Cisco classes I took a while back. Out of curiosity, is it illegal to distribute the software? I'm asking because PT has no DRM to speak of.

evol262
Nov 30, 2010
#!/usr/bin/perl

klosterdev posted:

I've got a copy of Packet Tracer lying around due to some Cisco classes I took a while back. Out of curiosity, is it illegal to distribute the software? I'm asking because PT has no DRM to speak of.

Are you daft?

See here.

quote:

The Packet Tracer software is available free of charge ONLY to Networking Academy instructors, students, alumni, and administrators that are registered Academy Connection users.

"It has no DRM so it must be free for distribution" is an incredible argument.

ate shit on live tv
Feb 15, 2004

by Azathoth
Just :filez: it doesn't matter.

Swink
Apr 18, 2006
Left Side <--- Many Whelps
Can I use openfiler or similar to learn about storage? I know virtually nothing and dont know where to start.



Ron Burgundy posted:

Still not possible to nest 64-bit guests under VMWare Player or Virtualbox with ESXi hey? Guess that rules out Server 12, seems to only come in 64 flavour.

I was trying to find a way to nest Win7 > Vmware Workstation > HyperV > Win8 VM. I found a blog where some guy did it but only with a brand new CPU and some editing of config files. I cant find the blog right now but it might be possible.

edit: Here's the blog - http://www.veeam.com/blog/nesting-hyper-v-with-vmware-workstation-8-and-esxi-5.html

The dude reckons you need an i7 Nehalem core.

Swink fucked around with this message at 01:22 on Aug 13, 2013

Ron Burgundy
Dec 24, 2005
This burrito is delicious, but it is filling.
I actually managed to do what I needed to do with VMWare Player 5.0.2 and ESXi 5.1.0. Server 2012 and Windows 8 64-bit work fine inside.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Swink posted:

Can I use openfiler or similar to learn about storage? I know virtually nothing and dont know where to start.
Openfiler or OpenIndiana (or some other illumos distro) are good places to learn storage basics. NetApp also has a VM; you might be able to get a demo of it.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Swink posted:

Can I use openfiler or similar to learn about storage? I know virtually nothing and dont know where to start.

Openfiler, FreeNAS and Microsoft's iSCSI target are all solid ways to get into centralized storage using iSCSI.

evol262
Nov 30, 2010
#!/usr/bin/perl

Swink posted:

The dude reckons you need an i7 Nehalem core.
:hurr:
gently caress that guy. It's not even remotely true on KVM, and I don't see why it would be on VMware, either. It performs better with EPT (and the list of processors with EPT includes loving Celerons), but it's not a requirement. VMware has a checkbox now. You don't need to manually edit configs.
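For reference, that checkbox ("Virtualize Intel VT-x/EPT or AMD-V/RVI" in recent Workstation/Player) corresponds to the one-liner you used to have to add to the guest's .vmx by hand:

```
# .vmx fragment: expose hardware virtualization to this guest
vhv.enable = "TRUE"
```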

BlueBlazer
Apr 1, 2010
My lab is a cluster of stuff I bribed the recycler out of over the last year.

Im sure I will get "That's overkill you moron."
or
"That must kill your power bill."

I host all of my VMs on just two boxes that I paid a whopping $200 for, so I can't complain.

A Sun Fire X4600 w/ 64GB RAM and 8 AMD 8220 dual-cores. It's one of those machines that seems like overkill, but if you can wrangle the power management it's not terrible.

Lots of old servers are still viable for certain applications if you wrestle with power-on scheduling and usage times.

a QNAP 8 bay NAS w/ 6 TB Split between my archive and VM's.

I've been trying to get my hands on some larger Cisco hardware, but for switching I use an old AT-8000s that has plenty of life left in it.

Hook up with your local PC recycler. They usually are the repository for your city's servers, most of which are old crappy G3-series HP and Pentium III Dell servers. Sometimes there is something worth getting.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Swink posted:

Can I use openfiler or similar to learn about storage? I know virtually nothing and dont know where to start.

I use ZFS, and FreeNAS 9.0.1 has HW acceleration; ZFS is amazing for a lab environment if you have 8GB of RAM and an SSD to dedicate to it.

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
Packet tracer talk: it's good for the CCENT/CCNA, terrible for labbing anything real. At least they now offer IOS 15 and 29xx series routers.

You definitely don't need to share it, plenty of others have. Stay safe.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Has anyone built a homebrew fiber channel or infiniband SAN?

I'm getting the itch to redo my current home/lab iSCSI SAN and experiment with Server 2012 and its storage capabilities and am thinking about building a dedicated storage box and a switched FC/IB environment.

eBay shows some aging Mellanox infiniband gear for a couple hundred bucks, but before I jump down the rabbit hole and start investigating component compatibility I want to see if someone else has a trip report.

I want to play with some new gear, not reinvent the wheel here.

Tekhne
Sep 11, 2001

Last night I purchased a Dell PowerEdge C6100 off eBay for my home lab. I was originally going to build two boxes with Core i5s and ~32GB of ram each until I stumbled across this gem. For about $770 I got a chassis with four independent nodes each with 2 Quadcore Xeon L5520s and 24GB of ram. Total that makes it 8 physical CPUs (32 cores) and 96GB between all four server nodes. Each node can be powered up independently from the rest, and they all share the same power supply. According to ITPro's review on this model, all four nodes at idle will draw only 348W (going up to 964W at full utilization).

What am I going to use this for? I'll pop a USB stick into each server and install VMWare ESXi on them. Also I'll throw in a spare hard drive in each server and install HyperV on two and XenServer on the other two. I'm planning on going through lots of different scenarios that I encounter in my job - SBS migrations, Exchange upgrades, Citrix XenApp deployments, VMware View, XenDesktop, etc etc.

In case anyone is interested this is the unit I purchased - http://www.ebay.com/itm/251283578250?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649
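A quick sanity check on those wattage figures. This assumes an electricity rate of $0.12/kWh — substitute your own utility rate and duty cycle:

```python
# Back-of-the-envelope yearly power cost for the quoted C6100 numbers
# (348 W idle / 964 W flat out, all four nodes).

def annual_power_cost(watts, rate_per_kwh=0.12, hours_per_day=24.0):
    """Yearly electricity cost in dollars for a constant draw in watts."""
    kwh_per_year = watts / 1000.0 * hours_per_day * 365
    return kwh_per_year * rate_per_kwh

print(f"idle 24/7:      ${annual_power_cost(348):.0f}/yr")
print(f"idle 4 h/day:   ${annual_power_cost(348, hours_per_day=4):.0f}/yr")
print(f"full tilt 24/7: ${annual_power_cost(964):.0f}/yr")
```

So leaving it idling around the clock is roughly a dollar a day before you've run a single VM hot — which is why people keep saying "power it down when you're done".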

Indecision1991
Sep 13, 2012

Tekhne posted:

Last night I purchased a Dell PowerEdge C6100 off eBay for my home lab. I was originally going to build two boxes with Core i5s and ~32GB of ram each until I stumbled across this gem. For about $770 I got a chassis with four independent nodes each with 2 Quadcore Xeon L5520s and 24GB of ram. Total that makes it 8 physical CPUs (32 cores) and 96GB between all four server nodes. Each node can be powered up independently from the rest, and they all share the same power supply. According to ITPro's review on this model, all four nodes at idle will draw only 348W (going up to 964W at full utilization).

What am I going to use this for? I'll pop a USB stick into each server and install VMWare ESXi on them. Also I'll throw in a spare hard drive in each server and install HyperV on two and XenServer on the other two. I'm planning on going through lots of different scenarios that I encounter in my job - SBS migrations, Exchange upgrades, Citrix XenApp deployments, VMware View, XenDesktop, etc etc.

In case anyone is interested this is the unit I purchased - http://www.ebay.com/itm/251283578250?ssPageName=STRK:MEWNX:IT&_trksid=p3984.m1439.l2649

Uh oh, people here don't like refurb servers, so don't expect praise or anything. I learned the hard way... either way, good find; let me know how loud it is.

Swink
Apr 18, 2006
Left Side <--- Many Whelps
People bitch a lot but the hardware is less important than just spinning up the software and learning/testing.

Whatever hardware you have, post about what you're learning.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I was about to go to bed but my sperg kicked in, OP here you go

SO YOU WANT TO BUY A DELL POWEREDGE: EASY GUIDE TO SAVING A BUTT-TON OF MONEY

So you want to buy a PowerEdge for your lab? Cool, here are some points to think about prior to buying that waste of money. First off, let me say that when I was first getting into VMware and such I thought getting a Dell PowerEdge/HP ProLiant/etc would be the poo poo and be MUCH more valuable for learning than a whitebox. Then I ran the facts and figures.

:siren:PROTIP: No one gives a poo poo you can install an OS/Hypervisor onto a hardware platform:siren:
Seriously;
installing ESXi is like, Enter, F11, Enter, F11, Enter, and Enter.
Hyper-V 2012 is similar, with even fewer clicks.
Citrix is similar to ESXi but feels a bit more linuxy; still incredibly straightforward.

Congratulations, you are now able to install ESXi/Hyper-V/Citrix on HP/Dell/IBM/UCS/other.

The important part of a lab is not how to install an OS on a HW platform; unless you are shooting for your A+ and an A+-level job, that is probably the only time an employer will care. The important part of setting up a hypervisor/server OS is not "can you install it" but "can you make it usable and understand what you did". Hardware platform familiarity is becoming less and less of a requirement as we move more and more into the virtualization realm. Today most of my installs are scripted, to the point where I boot off USB and let the .ks/unattend.xml finish it, come back in 5 minutes, and configure anything else. While you may need to understand the importance of auto-deployments of Windows/Linux/VMware, realize you can do this all in ESXi running on a cheap rear end 600 dollar build which will curb-stomp that Dell server you are getting that shipped with no HDDs. Hope you have some good network storage!
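For scale, a scripted ESXi install really is tiny — a minimal kickstart is on the order of this (the password and NIC name are placeholders):

```
# minimal ESXi ks.cfg (sketch)
vmaccepteula
rootpw MySecret1!
install --firstdisk --overwritevmfs
network --bootproto=dhcp --device=vmnic0
reboot
```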

Common misconceptions of LAB environments
  • Installing on Vendor Hardware will work better!
    Mostly not true. The used server hardware you'd be buying is probably 3-4 years old and won't really compare to what you can get on the desktop market in the way of parts.

  • I can just upgrade the RAM in my Dell server, which was 200 bucks and came with 16GB, with some off Newegg!
    Probably not true. Most server RAM isn't your run-of-the-mill desktop RAM; most servers require ECC and may be vendor-specific. Long story short, it will cost you much more than you estimate.

  • I need to know how to install it onto the vendor hardware
    No, you really just need to watch a few YT videos and blammo, done. Focus on configuring the software and services.

  • I can just throw my Western Digital or Seagate drives in to make up for the fact it shipped with no drives!
    Not always true. Most won't accept a drive unless it has a signature from Dell/HP/etc, and those drives are costly (hence why your server didn't come with any)!

  • There are some hardware pieces I can't mimic, like iDRAC, iLO, or Cisco's CMC!
    If you are going for a job where they let you deal with things like multiple VM servers and clusters, yet you can't be bothered to watch a 5-minute video or understand what it means when it asks for an IP address and password, you have bigger issues. Furthermore, Supermicro offers similar features on their boards which allow for similar configuration.

  • I can't get enough RAM/CPU in a whitebox server, I NEED a Dell/HP
    True, some things just won't fit your RAM/CPU needs in a whitebox, but you shouldn't be building a 1:1 copy of production unless you are going for something like a VCDX, and even then 32GB and an 8-core CPU will take you farther than you think. If you still need more, look at Supermicro; most of their boards take desktop RAM fairly well and run stuff without a hitch.


Remember, your lab environment is there to teach you the concepts and to familiarize you with the software and services you are configuring. It does not have to be better than your production environment.

TIPS OF A VIRTUAL ENVIRONMENT
:eng101: Only assign what the VM needs; this is also true in a production environment. If it's only running AD/DNS/DHCP, it could probably run happily on 512MB and 1 vCPU. You'll probably run out of RAM/disk IOPS BEFORE you congest your CPU, unless you are doing some really crazy poo poo or have a 2-3 year old server/PC.
:eng101: Invest in SSDs; SATA disks are SLOOOW for VMs that share storage.
:eng101: Don't overbuy. This is a really common mistake; buy what you need for what you are doing and upgrade as needed.
:eng101: Look into things like VirtualBox or VMware Workstation, and updating your gaming rig, PRIOR to spending $800 on some Dell HW. I have built many PoC labs for my VCP/VCP-DT in Workstation; it's a bit slower than ESXi whiteboxing but 100% DOABLE.
:eng101: ESXi can run nested ESXi, and Hyper-V and Citrix too. Often one beefy box can outweigh multiple lower-end boxes.
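A toy version of the "only assign what the VM needs" math above — how many identical small VMs fit on one host before you hit the RAM or vCPU ceiling. The overcommit ratios here are illustrative assumptions, not vendor-blessed numbers:

```python
# Rough lab capacity check: count identical VMs that fit on one host,
# bounded separately by a RAM budget and a vCPU budget.

def vms_that_fit(host_ram_gb, host_cores, vm_ram_gb=2.0, vm_vcpus=1,
                 ram_overcommit=1.25, cpu_overcommit=4.0):
    """Number of identical VMs that fit, limited by RAM and vCPU budgets."""
    by_ram = int(host_ram_gb * ram_overcommit // vm_ram_gb)
    by_cpu = int(host_cores * cpu_overcommit // vm_vcpus)
    return min(by_ram, by_cpu)

# a 32GB / 8-core whitebox with 2GB guests is RAM-bound, as the tip predicts
print(vms_that_fit(32, 8))  # RAM cap 20, vCPU cap 32 -> 20
```

Shrink the guests to 512MB and the same box becomes vCPU-bound instead, which is why the tip stresses right-sizing before buying more hardware.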

Dilbert As FUCK fucked around with this message at 04:46 on Aug 21, 2013

Tekhne
Sep 11, 2001

^^^ Good post, but I don't understand the hate against refurb server hardware for a lab. I'm assuming your post is at least in partial reaction to my previous one, since you mentioned a Dell without HDDs. While I did enjoy your sperg on installing the hypervisors, I too can do them in my sleep. That being said, there isn't much reason to put someone down over something like that. I'm sure no one gives a poo poo that you can tie your own shoes (I'm assuming here), but I bet you were pretty proud the first time you did it. One thing I didn't mention is I already have a FreeNAS setup with bonded NICs. I also have two PCs with i7 3370s and 16GB of RAM. These are my gaming machines, and like you suggested in your thread I've been using Workstation to build up my lab on top of them for quite a while. It's been working fine; however, an upgrade was in order, as I want to get into performance testing, DirectPath scenarios, automation, etc.

Due to these needs I wanted to move away from the inception build and have the hypervisor on the physical hardware. The obvious choice was to build two new whiteboxes and dedicate them to my lab. The cost would have been roughly $1100 or so. Once I found the C6100 for $770 and saw that it contains four independent servers within its chassis, I was sold. Sure, the L5520 line was released in 2009, but it's got plenty of power for what I am trying to do. The power consumption is low, and the noise/heat won't be an issue as I have a dry basement that could use some heating in the winter.

Docjowles
Apr 9, 2009

Different people have different needs and wants. But this is SH/SC and we have to turn everything into a rant :v: If you're an IT geek the thought of your own home server rack probably sounds cool and everyone gravitates toward that without always knowing the downsides. I think multiple posters in this thread have been burned by buying a sweet eBay server farm only to end up never using it because it's ear-piercingly loud and adds a small fortune to their power bill. So there's some backlash against that.

Of course there are some people who know what they're getting into, and don't care. Maybe you can rack everything out in a detached garage where you'll never hear it. Or you want to play with Fiber Channel or something and you just can't do that easily with a couple white boxes. Fine, great, go hog wild. You are the 1% that doesn't need to be saved from your inner sperg because you actually have a reason to buy that poo poo.

evol262
Nov 30, 2010
#!/usr/bin/perl

Tekhne posted:

^^^ Good post, but I don't understand the hate against refurb server hardware for a lab. I'm assuming your post is at least in partial reaction to my previous one, since you mentioned a Dell without HDDs. While I did enjoy your sperg on installing the hypervisors, I too can do them in my sleep. That being said, there isn't much reason to put someone down over something like that. I'm sure no one gives a poo poo that you can tie your own shoes (I'm assuming here), but I bet you were pretty proud the first time you did it. One thing I didn't mention is I already have a FreeNAS setup with bonded NICs. I also have two PCs with i7 3370s and 16GB of RAM. These are my gaming machines, and like you suggested in your thread I've been using Workstation to build up my lab on top of them for quite a while. It's been working fine; however, an upgrade was in order, as I want to get into performance testing, DirectPath scenarios, automation, etc.

Due to these needs I wanted to move away from the inception build and have the hypervisor on the physical hardware. The obvious choice was to build two new whiteboxes and dedicate them to my lab. The cost would have been roughly $1100 or so. Once I found the C6100 for $770 and saw that it contains four independent servers within its chassis, I was sold. Sure, the L5520 line was released in 2009, but it's got plenty of power for what I am trying to do. The power consumption is low, and the noise/heat won't be an issue as I have a dry basement that could use some heating in the winter.

I really don't even have the words. I work on RHEV/oVirt, from home. I have a lab. I have L5520s literally sitting on the floor because it's not worth the power bill and added runtime of the AC to have them on. I have a full-height rack in my office and it's not worth my time to have L5520s racked up because IPC is horrifyingly low, nested virtualization on them sucks, and performance is worse than my W530. Up to a point more cores buys you more vCPUs without hammering on interrupts, but I'm not sure why the next step from "modern hardware" is automatically "5-year-old decommissioned hardware". Hint: they're not using it any longer for a reason.

For the cost of your C6100, you could have 2 hex core Visheras with 32GB of memory each, which will support advancements in virtualization over the intervening 4 years (there are a lot), cost you 1/4 of the power bill, generate 1/4 of the heat, 10% of the noise, and generally run circles around those L5520s on anything other than distributed compiles and cluster databases (but realistically, you probably don't have the IOPS to make either of those relevant). How is it a "gem"?

evol262 fucked around with this message at 17:31 on Aug 21, 2013

Stealthgerbil
Dec 16, 2004


I think it would be cool to get one of those C6100s with 8x six-core L5639s, but only because of all those cores...

I bought a $300 C1100 with dual X5570s at 2.93GHz (a little faster than the L5520s) and it has been a great lab PC, but that's also $300 I could have put towards a Haswell-based server. Like, that's the cost of an E3-1270v3. Don't get me wrong, I like my dual X5570 setup, but it definitely is old. I kind of want to colocate it at one of those $50-a-month cheap places and use it to host something, but meh.

Also, good luck if you ever want to colocate a C1100; they use 1-2 amps of power alone, which will make it totally not worth it. I heard the Sandy Bridge boxes use like half an amp for a single-processor setup in a typical 1U.

Count Thrashula
Jun 1, 2003

Peak Performance.

Buglord
Is there any Juniper router or switch that doesn't cost a million dollars*?

*(a million figurative dollars)

Now that I have juniper routers set up in GNS3 and working, I might do a write up. I'm pretty impressed with JunOS so far.

FasterThanLight
Mar 26, 2003

QPZIL posted:

Is there any Juniper router or switch that doesn't cost a million dollars*?

*(a million figurative dollars)

Now that I have juniper routers set up in GNS3 and working, I might do a write up. I'm pretty impressed with JunOS so far.
I'm using a J2320 at home, got it for about $200 on ebay. Looks like the same guy still has a bunch available.

sanchez
Feb 26, 2003

Dilbert As gently caress posted:

I was about to go to bed but my sperg kicked in, OP here you go

SO YOU WANT TO BY A DELL POWEREDGE: EASY GUIDE TO SAVING A BUTT-TONE OF MONEY



I'd say the first thing to do is scout around where you work. Due to virtualization consolidation plenty of orgs have spare PE2950's full of 10k drives lying around doing nothing. Find one with enough ram and you're good to go, I can't imagine many managers would have a problem with one of their guys rebuilding an old server and running it in their rack if it's for legit self improvement and not a torrent box/irc server.

smokmnky
Jan 29, 2009
So I totally agree that being able to install ESXi on a box isn't that impressive, just like being able to install Windows 7 isn't either. But I would like to know: once you have it installed and a few VMs running, what would you consider an "accomplishment" in regards to actual VMware work? Is it getting them networked and talking to each other? I've been "deploying" VMs for a little while now, but I'd like to get some more knowledge and work into what makes a good VMware admin.


evol262
Nov 30, 2010
#!/usr/bin/perl

smokmnky posted:

So I totally agree that being able to install ESXi on a box isn't that impressive, just like being able to install Windows 7 isn't either. But I would like to know: once you have it installed and a few VMs running, what would you consider an "accomplishment" in regards to actual VMware work? Is it getting them networked and talking to each other? I've been "deploying" VMs for a little while now, but I'd like to get some more knowledge and work into what makes a good VMware admin.

What makes a good VMware admin is subject-matter knowledge of:

SANs (FC and/or iSCSI), including best practices for multipathing, how to handle LUN masking and replication, etc
Scripting -- PowerCLI is the standard, but you can use anything you want
Systems Administration -- you're almost certainly going to end up hands-on with some of your VMs, and you should be comfortable in any OS running on your VMware environment, especially sysprep if you deal with Windows
Networking -- Know when to use link aggregation and when not to. Understand VLANs and how they work, as well as how to segment your network and troubleshoot problems.
Disaster recovery -- enough said; large VMware environments almost always have a DR site somewhere, and you should be familiar with scoping the required resources and setting up processes to ensure that a hot (or cold, depending on your environment) environment is ready
Performance tuning -- know how the VMware scheduler works, and when 2 vCPUs are actually better than one. Know how dense you can make your environment. Get a handle on how many IOPS you need.
Resiliency -- keeping critical services up through failures. Nobody wants your virtualized AD controllers to die.
VDI -- plays into performance tuning/density/systems admin
Imaging -- fading, but "golden images", templates, linked clones, and other ready-to-go images are still important.

Nobody is going to hand you a configured environment and say "plug in your servers, assign these addresses, and collect a paycheck". Realistically, you'll help design the environment and administer it on a day-to-day basis, probably including the guests. A good virtualization admin has (or has had in the past) a hand in every pot.
