vLab for VCAP-DCA

With the Cisco DCUCI exam passed and out of the way, I have decided to go back to the VMware VCAP5-DCA, which I should have started last month.

Powering the lab back up, I am greeted with everyone's favorite message box: 'Your evaluation license has expired.' Time to rebuild the lab again :-(

Anyway, I thought this may be a good opportunity to blog about my lab.

After attending VMworld 2012 and watching Simon Gallagher's (@vinf_net, http://vinf.net/) #vBrownbag talk about vTARDIS, I decided to give it a go.

My vTARDIS hardware:

  • HP DL380 G7 with 1 x Xeon E5649 @ 2.53GHz (six cores)
  • 36GB RAM
  • 2 x 146GB SAS, 5 x 600GB SAS and 5 x 1TB SATA SFF drives
  • 4 x NICs (only using 1) and iLO3

The disks are configured as:

  • 2 x 146GB RAID 1 (host ESXi boot drives, ISO store)
  • 5 x 600GB RAID 5 (split into 2 x 1TB LUNs for ESXi datastores and vSAN LUNs)
  • 6 x 1TB RAID 10 (vSAN LUNs)

host_datastore

I have 4 vSwitches on the host: External, iSCSI01, iSCSI02 and vMotion.

host_network
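If you prefer scripting to clicking, the same vSwitch layout can be knocked up with PowerCLI. This is only a rough sketch; the host name and the idea of a matching port group per vSwitch are my assumptions, not necessarily how the screenshots were built:

    # Connect to the physical ESXi host (name and credentials are placeholders)
    Connect-VIServer -Server esx01.lab.local

    $vmhost = Get-VMHost esx01.lab.local

    # Create the extra vSwitches and a matching port group on each
    foreach ($name in "iSCSI01", "iSCSI02", "vMotion") {
        $vs = New-VirtualSwitch -VMHost $vmhost -Name $name
        New-VirtualPortGroup -VirtualSwitch $vs -Name $name
    }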

All the vSwitches on the host need their security policy changed to allow promiscuous mode, otherwise traffic for the nested VMs never reaches the virtual ESXi hosts.

vswitch_promis_mode
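The same change can be scripted; a minimal PowerCLI sketch against the same assumed host looks like this:

    # Allow promiscuous mode on every standard vSwitch on the physical host
    Get-VirtualSwitch -VMHost (Get-VMHost esx01.lab.local) |
        Get-SecurityPolicy |
        Set-SecurityPolicy -AllowPromiscuous $true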

For the VCAP lab I am running a Domain Controller, VMware vCenter, MS SQL 2008 R2, NexentaStor 3.1.3.5 CE and 4 x ESXi 5 servers.

host_vhosts

The NexentaStor (vSAN01) Virtual Machine is configured with 10 VMDKs.

  • 1 boot disk, 4 disks on the RAID 5 array and 4 disks on the RAID 10 array for vDatastores, and 1 disk as an NFS store.

vsan01_hardware
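Adding that many VMDKs by hand is tedious, so here is a rough PowerCLI sketch of the idea. The disk sizes and datastore names below are made-up placeholders, not the exact values in my lab:

    $vm = Get-VM vSAN01

    # 4 data disks on the RAID 5 backed datastore and 4 on the RAID 10 one
    1..4 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB 200 -Datastore "RAID5_DS1" }
    1..4 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB 200 -Datastore "RAID10_DS1" }

    # One more disk to back the NFS store
    New-HardDisk -VM $vm -CapacityGB 100 -Datastore "RAID5_DS2"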

In the Nexenta GUI the disks are configured as below.

vsan_disks

vsan_disks_1

vsan_disks_2

vsan_disks_3

The VM has 3 network cards – 1 NIC for management and 1 NIC to each iSCSI vSwitch.

vsan_nics
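Those NICs can also be added from PowerCLI; a hedged sketch (port group names as above, adapter type assumed to be E1000) would be:

    # One management NIC plus one NIC on each iSCSI vSwitch for vSAN01
    $vm = Get-VM vSAN01
    foreach ($pg in "External", "iSCSI01", "iSCSI02") {
        New-NetworkAdapter -VM $vm -NetworkName $pg -Type e1000 -StartConnected
    }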

The vSAN's iSCSI network cards are configured with jumbo frames.

This means that the host vSwitches, guest ESXi vSwitches and guest vNICs need to be set with an MTU of 9000 as well (there is a scripted version after the screenshots).

  • Host

host_mtu_9000

  • Guest

guest_mtu_9000
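If you would rather script it, something along these lines should work in PowerCLI, assuming you are connected to both the physical host and the lab vCenter. The physical host name and the nested host naming pattern are assumptions; adjust to taste:

    # Physical host: raise the MTU on the two iSCSI vSwitches
    $vmhost = Get-VMHost esx01.lab.local
    foreach ($name in "iSCSI01", "iSCSI02") {
        Get-VirtualSwitch -VMHost $vmhost -Name $name | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
    }

    # Nested ESXi hosts: raise the MTU on their vSwitches and iSCSI VMkernel ports
    foreach ($guest in Get-VMHost vesxi*) {
        Get-VirtualSwitch -VMHost $guest | Set-VirtualSwitch -Mtu 9000 -Confirm:$false
        Get-VMHostNetworkAdapter -VMHost $guest -VMKernel |
            Where-Object { $_.PortGroupName -like "iSCSI*" } |
            Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false
    }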

Lots of people have blogged about creating nested ESXi servers so I'm not going to :-) William Lam has a very good blog post here.

Each vESXi is configured as below:

guest_vmx

So 3 Windows 2008 R2 servers, 3 ESXi 5.0 servers, 1 Nexenta, 3 Debian installs and an OVF deployment later I have…

guest_datacentre

vESXi storage

guest_storage_1

Hardware Acceleration!!!!

guest_storage_2
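Presenting the Nexenta LUNs to the nested hosts is mostly point-and-click in this lab, but for completeness here is a rough PowerCLI sketch of the software iSCSI side. The target addresses are made-up placeholders for vSAN01's two iSCSI interfaces:

    foreach ($guest in Get-VMHost vesxi*) {
        # Enable the software iSCSI adapter on the nested host
        Get-VMHostStorage -VMHost $guest | Set-VMHostStorage -SoftwareIScsiEnabled $true

        # Point it at both Nexenta iSCSI interfaces and rescan
        $hba = Get-VMHostHba -VMHost $guest -Type iScsi | Where-Object { $_.Model -like "*Software*" }
        New-IScsiHbaTarget -IScsiHba $hba -Address "10.0.1.10"
        New-IScsiHbaTarget -IScsiHba $hba -Address "10.0.2.10"
        Get-VMHostStorage -VMHost $guest -RescanAllHba | Out-Null
    }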

And vESXi networking, before configuring VLANs, NIC teaming and migrating to DV Switches.

guest_networking

Time to get started…

get_started