This blog has moved to its own domain: www.vaspects.com
Please update your bookmark.
Virtual Aspects
Anything on Virtualization and Cloud Computing I come across...
Friday, May 31, 2013
IPv6 Hands-On Lab
http://www.vaspects.com/2013/05/31/ipv6-hands-on-lab/
Just to keep you updated (no, this blog is not a straw fire!):
A colleague and I have meanwhile set up an IPv6 test lab on the same hardware I use in my home lab. And this means a complete setup: DHCPv6, RA, static IPv6, tunnels, firewalling, a broad range of client OSes - the whole nine yards.
It's going to take some time to write a series of blog posts describing the setup, and I'm still tempted to use IPv6 for the vSphere infrastructure as well. Maybe even for iSCSI, although it's no longer officially supported...
So stay tuned, there's a lot of stuff coming soon! Just have to finish the setup, writing - and my holidays. :-)
Tuesday, May 7, 2013
Workaround for the vCenter Server appliance 5.1U1 update delay
http://www.vaspects.com/2013/05/07/workaround-for-vcenter-appliance-update-delay/
The update process from 5.1.x to 5.1 Update 1 contains a serious flaw. The update may take more than 45 minutes, some report more than one hour. VMware even mentions this in their release notes:
Update of vCenter Server Appliance 5.1.x to vCenter Server Appliance 5.1 Update 1 halts at the web UI while showing update status as "installing updates"
When you attempt to upgrade vCenter Server Appliance 5.1.x to vCenter Server Appliance 5.1 Update 1, the update process halts for nearly an hour and the update status at Web UI shows as installing updates. However, eventually, the update completes successfully after an hour.
Workaround: None.
(http://www.vmware.com/support/vsphere5/doc/vsphere-vcenter-server-51u1-release-notes.html)
The generic update documentation KB article 2031331 "Updating vCenter Server Appliance 5.x" mentions even longer durations:
The update process can take approximately 90 to 120 minutes. Do not reboot until the update is complete.
(http://kb.vmware.com/kb/2031331)
Well, there is a workaround, even a very simple one:
- log in to the appliance via SSH as root
- execute "rm /usr/lib64/.lib*.hmac"
- perform the update using the web UI
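For completeness, the same workaround as a tiny POSIX shell sketch (remove_stale_hmacs is just a name I made up, and the directory argument exists only so it can be tried out safely elsewhere - on the appliance it's simply /usr/lib64):

```shell
# remove_stale_hmacs DIR - delete leftover .lib*.hmac files in DIR
remove_stale_hmacs() {
    dir="${1:-/usr/lib64}"
    removed=0
    for hmac in "$dir"/.lib*.hmac; do
        [ -e "$hmac" ] || continue      # glob matched nothing
        rm -f "$hmac"
        removed=$((removed + 1))
    done
    echo "removed $removed stale .hmac file(s) from $dir"
}

# On the appliance you would simply run:
#   remove_stale_hmacs /usr/lib64
```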
The .hmac files contain hashes of /usr/lib64/libcrypto.so.0.9.8 and /usr/lib64/libssl.so.0.9.8 used for FIPS compliance. When the corresponding packages are updated, these files are not deleted immediately:
-r-xr-xr-x 1 root root 1685176 Jul 10 2012 /usr/lib64/libcrypto.so.0.9.8
-r-xr-xr-x 1 root root 343040 Jul 10 2012 /usr/lib64/libssl.so.0.9.8
-rw-r--r-- 1 root root 65 Jan 11 2012 /usr/lib64/.libcrypto.so.0.9.8.hmac
-rw-r--r-- 1 root root 65 Jan 11 2012 /usr/lib64/.libssl.so.0.9.8.hmac
The mismatch between libraries (binaries) and hashes causes all applications using OpenSSL to fail with messages like
fips.c(154): OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
Regarding the appliance update, the vami-sfcb service fails to start, thus delaying the whole update process until the maximum retry limit for this service is reached. If the appliance is rebooted before this timeout, the postinstall phase was not executed and the vCenter will not start anymore - either because of said OpenSSL error or because vpxd does not start with the error message
Database version id '510' is incompatible with this release of VirtualCenter.
I was able to revive the appliance in my lab, but this is of course neither supported nor recommended. It runs fine again, but the state is not consistent, and I would recommend booting it just one more time to perform a migration to a fresh installation and save the configuration & data. Depending on when the update was interrupted, your results may vary.
If the appliance itself does not start properly anymore, boot it from a Linux live CD (GParted or Parted Magic are sufficient), mount the filesystem and delete the .hmac files. Perform a normal boot afterwards.
If the web UI lets you perform a normal update, do so, and you should be fine.
Otherwise try it manually (the following steps assume you're familiar with Linux and you should check the prerequisites):
- Log in to the appliance via SSH as root
- cd /opt/vmware/var/lib/vami/update/data/job
- cd to the latest subdirectory, which should have the highest number
- Check if the update belongs to 5.1U1:
head manifest.xml
You should see build 5.1.0.10000.
- Attach the updaterepo ISO to the VM
- mount /dev/sr0 /media/cdrom (create the mount point if necessary)
- cd /opt/vmware/var/lib/vami/update/data/package-pool
- ln -s /media/cdrom/update/package-pool package-pool
- cd back to the job subdirectory
- ./pre_install '5.1.0.5300' '5.1.0.10000'
- ./test_command (may report "failed dependencies")
- cp -p run_command run_repair
- vi run_repair and change the first command from "rpm -Uv" to "rpm -Uv --nodeps --replacepkgs"
- ./run_repair (ignore "insserv: script jexec is broken" etc)
- Check if a duplicate vfabric-tc-server-standard package exists:
rpm -q vfabric-tc-server-standard
- If yes (more than one line of output), delete the older version, otherwise /usr/lib/vmware-vpx/rpmpatches.sh will fail:
rpm -e vfabric-tc-server-standard-2.6.4-1 (in my case)
- ./post_install '5.1.0.5300' '5.1.0.10000' 0
- ./manifest_update
- That's basically it, now just the cleanup:
cd /opt/vmware/var/lib/vami/update/data
rm -r job/*
rm cache/* package-pool/package-pool
umount /media/cdrom
- reboot
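The cp & vi edit from the repair step above can also be done non-interactively; a sketch (make_run_repair is a made-up name, jobdir is the job subdirectory you changed into):

```shell
# Create run_repair from run_command, switching the rpm invocation so it
# ignores dependency errors and reinstalls already-installed packages.
make_run_repair() {
    jobdir="$1"                       # the update job directory
    cp -p "$jobdir/run_command" "$jobdir/run_repair"
    sed -i 's/rpm -Uv/rpm -Uv --nodeps --replacepkgs/' "$jobdir/run_repair"
}
```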
Wednesday, April 24, 2013
Minimizing the vCenter memory footprint - Appliance
http://www.vaspects.com/2013/04/24/minimizing-vcenter-memory-appliance/
In my previous post I described how to reduce the vCenter memory requirements on Windows. Basically the same holds for the vCenter appliance, but the files are a bit harder to find. Apart from that, all disclaimers apply - this is in no way supported by VMware.
Single Sign On:
- /usr/lib/vmware-sso/bin/setenv.sh
- Change "JVM_OPTS" (default: "-XX:MaxPermSize=256M -Xms2048m -Xmx2048m") to "-XX:MaxPermSize=128M -Xms128m -Xmx256m"
Inventory Service:
- /usr/lib/vmware-vpx/inventoryservice/wrapper/conf/wrapper.conf
- Set wrapper.java.maxmemory (default: "3072") to "384" (MB)
Tomcat:
- /etc/vmware-vpx/tomcat-java-opts.cfg
- Change the default "-Xmx1024m -XX:MaxPermSize=256m" to "-Xmx512m -XX:MaxPermSize=256m" (or MaxPermSize to half of the Xmx value chosen before)
Web Client:
- /usr/lib/vmware-vsphere-client/server/bin/dmk.sh
- Change "JVM_OPTS" (default: "-Xmx1024m -Xms512m -XX:PermSize=128m -XX:MaxPermSize=256m") to "-Xmx384m -Xms256m -XX:PermSize=128m -XX:MaxPermSize=256m"
Log Browser:
- /etc/init.d/vmware-logbrowser
- Set "HEAP_PROP" (default: "-Xms128m -Xmx512m") to "-Xms128m -Xmx256m"
Profile Driven Storage:
- /usr/lib/vmware-vpx/sps/wrapper/conf/wrapper.conf
- Set wrapper.java.initmemory (default: "256") to "128" (MB)
- Set wrapper.java.maxmemory (default: "1024") to "384" (MB)
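Since several of these files follow the Java Service Wrapper format, the change can be scripted; a sketch with a hypothetical helper that keeps a backup copy (paths and values as listed above):

```shell
# set_wrapper_maxmemory FILE MB - set wrapper.java.maxmemory in a
# Java Service Wrapper config file, keeping a .bak copy of the original.
set_wrapper_maxmemory() {
    conf="$1" mb="$2"
    cp -p "$conf" "$conf.bak"
    sed -i "s/^wrapper\.java\.maxmemory=.*/wrapper.java.maxmemory=$mb/" "$conf"
}

# e.g. for the Inventory Service:
#   set_wrapper_maxmemory \
#       /usr/lib/vmware-vpx/inventoryservice/wrapper/conf/wrapper.conf 384
```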
After these adjustments the VM memory can safely be reduced to 4-5 GB. But beware that - sadly enough - the Tomcat JVM still tends to eat up memory over time. Therefore I prefer to stick with 5 GB RAM, and here's the result:
top - 11:58:23 up 5 days, 19:05, 1 user, load average: 0.34, 0.57, 0.71
Tasks: 142 total, 1 running, 141 sleeping, 0 stopped, 0 zombie
Cpu(s): 0.8%us, 1.0%sy, 0.0%ni, 97.8%id, 0.3%wa, 0.0%hi, 0.0%si, 0.0%st
Mem: 5091868k total, 4928832k used, 163036k free, 402632k buffers
Swap: 15735804k total, 23892k used, 15711912k free, 1445808k cached
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
4758 root 20 0 1244m 870m 10m S 0 17.5 55:40.59 java
5267 root 20 0 985m 620m 12m S 0 12.5 19:34.38 java
3490 root 20 0 989m 475m 12m S 1 9.6 46:57.03 java
3839 ssod 20 0 905m 333m 10m S 0 6.7 19:01.57 java
4980 root 20 0 702m 238m 11m S 0 4.8 17:36.23 java
4541 root 20 0 459m 227m 91m S 1 4.6 72:21.12 vpxd
3193 root 20 0 558m 125m 10m S 0 2.5 9:21.73 java
Minimizing the vCenter memory footprint - Windows
http://www.vaspects.com/2013/04/24/minimizing-vcenter-memory-windows/
With vSphere 5.1 the memory requirements of the vCenter server have dramatically increased. If all components reside on a single Windows server [VM], even the smallest inventory size will require 10 GB of memory, according to the VMware Installation and Setup guide. Although this document states a minimum of 4 GB memory for the vCenter Appliance, it is in fact configured for 8 GB RAM after deployment. This will most likely exceed or significantly reduce the resources of small home labs or all-in-one setups with VMware Workstation.
Is this necessary? Nope. But due to the default JVM memory settings, simply reducing the VM's RAM could lead to swapping and, obviously, have a negative impact on overall performance. The following adjustments to the application settings allow the VM memory to be reduced to 4-5 GB. This post covers a Windows-based vCenter server; the following post covers the appliance.
No need to mention that all of this is absolutely not supported by VMware, right?
Prerequisites:
The vCenter server is installed on a Windows 2008 R2 server VM with SQL Server 2008 R2 Express and no noteworthy additional software or roles. The SQL Server setting “Maximum server memory” has been configured for a low value – 256 MB should be fine.
After installation of the vCenter Server components edit the following files and change the settings:
Single Sign On:
- C:\Program Files\VMware\Infrastructure\SSOServer\conf\wrapper.conf
- Set wrapper.java.additional.9="-Xmx" (default: "1024M") to "256M"
- Set wrapper.java.additional.14="-XX:MaxPermSize=" (default: "512M") to "128M" (or half of the Xmx value chosen before)
Inventory Service:
- C:\Program Files\VMware\Infrastructure\Inventory Service\conf\wrapper.conf
- Set wrapper.java.maxmemory (default: "3072") to "384" (MB)
Tomcat:
- C:\Program Files\VMware\Infrastructure\tomcat\conf\wrapper.conf
- Set wrapper.java.additional.9="-Xmx" (default: "1024M") to "512M" - "768M"
- Set wrapper.java.additional.14="-XX:MaxPermSize" (default: "256M") to half of the Xmx value chosen before
Web Client:
- C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf
- Set wrapper.java.initmemory (default: "1024m") to "256m"
- Set wrapper.java.maxmemory (default: "1024m") to "384m"
Log Browser:
- C:\Program Files\VMware\Infrastructure\vSphereWebClient\logbrowser\conf\wrapper.conf
- Set wrapper.java.maxmemory (default: "512") to "256" (MB)
Profile Driven Storage:
- C:\Program Files\VMware\Infrastructure\Profile-Driven Storage\conf\wrapper.conf
- Set wrapper.java.initmemory (default: "256") to "128" (MB)
- Set wrapper.java.maxmemory (default: "1024") to "384" (MB)
Orchestrator:
- C:\Program Files\VMware\Infrastructure\Orchestrator\app-server\bin\wrapper.conf
- Set wrapper.java.additional.3=-Xmn (default: "768m") to "256m"
- Set wrapper.java.initmemory (default: "2048") to "384" (MB)
- Set wrapper.java.maxmemory (default: "2048") to "512" (MB)
The latter two values must be higher than the Xmn value chosen before.
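For illustration, the Web Client wrapper.conf should end up containing these two lines after the edit (values from the list above; everything else in the file stays untouched):

```
# C:\Program Files\VMware\Infrastructure\vSphereWebClient\server\bin\service\conf\wrapper.conf
wrapper.java.initmemory=256m
wrapper.java.maxmemory=384m
```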
These settings are the lowest values I have personally used without experiencing any problems, in an environment of two ESXi hosts and about three dozen VMs - half of them up & running, the others powered off or templates. To be perfectly honest, I did not try to find the absolute lowest possible settings - the results of the first shot were satisfying enough, cutting the RAM requirements in half and thus roughly back to pre-5.1 levels.
If you do run into problems, either regarding performance or even functionality, please post a comment with the parameter & value you changed to resolve it.
Monday, April 22, 2013
The home lab
http://www.vaspects.com/2013/04/22/the-home-lab/
I suppose most virtualization blogs include a description of the author's test & lab gear, so I'll start with that. :-)
I decided not to virtualize the lab itself, but to use real equipment. Yep, it's possible to build an all-in-one setup with a standard PC and VMware Workstation. But then you're not able to try out the pros and cons of different network setups and configurations, or to reproduce problems from customer environments. A high-performance PC with lots of RAM would even have been more expensive at that time - I built my home lab in early 2011, so please keep in mind that it is two-year-old stuff. So, here's the list.
Two ESXi hosts:
AMD Phenom II X6 1055T E0 (6 x 2.8 GHz) on an Asus M4A88T-M mainboard with 24 GB RAM DDR3-1333. One HP NC360T Intel-based dual-port NIC and one Intel Gigabit CT Desktop NIC; together with the onboard Realtek, a total of 4 NICs. I got the HP NICs from eBay, where you can still find them (or even genuine Intel dual-port NICs) for around 50 Euro.
Storage system:
Upgraded an existing Mini-ITX box with an Intel Core2Duo E6750 (2 x 2.6 GHz) on a Zotac G43-ITX mainboard with 4 GB RAM and 2 x 500 GB + 2 x 320 GB 2.5" HDDs. One HP NC360T Intel-based dual-port NIC. Currently I'm running Ubuntu 12.04 LTS with an iSCSI target, kernel-mode NFS, DNS, NTP, DHCP and a Kickstart server.
Network:
LevelOne GSW-1676 16-port Gigabit "smart" switch. Which basically means it's friggin' complicated to properly configure the VLANs, trunks and port settings using the web UI. I'd rather suggest looking for a Cisco SG200-series switch or the like.
The cost was around 1000 Euro for the whole lab, which is not that much considering that you have two physical boxes and a real network.
I chose AMD since in my opinion they (still!) offer the best ratio of cores to cost. The single-thread performance of Intel CPU cores is superior, but with AMD you'll get more cores, and that usually suits virtualization needs better. The ASUS mainboard officially supports only 4 GB DIMMs, and I started with 16 GB in each system. Last year, when RAM got amazingly cheap, I tried a set of four 8 GB DIMMs and found out that the board supports them without any problem, so the total memory went up to 48 GB. When the vCenter memory requirements dramatically increased with vSphere 5.1, I was quite glad to have found the right time to expand the resources. BTW: a guide on how to reduce the vCenter memory requirements down to a more home-lab-friendly 5 GB will follow soon.
The latest addition was a Juniper NetScreen-50 firewall. Used ones are around 40 Euro on eBay. They have only 4 Fast Ethernet ports, but they add another "real life" complexity (like the switches) you'll have to deal with when building real vSphere environments. If you have the chance to grab one of these fine devices, I recommend doing so.
Let me introduce myself…
Dear reader! :-)
This blog just came to life, and I will use it to post my thoughts, findings, hints, tips & tricks around all virtualization aspects I will come across (and some other stuff maybe), with the main focus on VMware products.
My journey to Virtualization and Cloud Computing started in late 2005 with Solaris 10 Zones / Containers. Later on I began to focus on x86 technologies and VMware products. In early 2008 I took my first certification and became VCP3 #25734. I kept my certification current as VCP4 and VCP5-DCV, and became VCAP4-DCD #483 in February 2011.
I’m working for a consulting company and usually available for challenging projects.