
Linux CPU Hotplug in VMware


  • CPU hotplug is activated: VMware: Edit Settings -> Options -> Memory/CPU Hotplug -> CPU Hotplug
  • Note: You can only add CPU sockets. If you have configured more than one core per socket, e.g. 4 cores per socket, you can only add cores in steps of 4.

Adding CPUs:

  • Raise the assigned sockets in the VM settings
  • Activate the added cores with the script below
cd /sys/devices/system/cpu
for i in cpu*/online; do
  if [ "`cat $i`" = "0" ]; then
    echo 1 > $i
  fi
done
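To check that the new cores are really available, you can query the kernel's own counters afterwards (a small sketch; `nproc` is part of GNU coreutils):

```shell
# Number of CPUs currently usable, should match the new socket/core count
nproc

# The kernel's list of online CPUs, e.g. "0-3"
cat /sys/devices/system/cpu/online
```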

Linux Memory Hotplug in VMware


  • Memory hotplug is activated for the VM: VMware: Edit Settings -> Options -> Memory/CPU Hotplug -> Memory Hot Add
  • The kernel module acpi_memhotplug is loaded

Memory expansion:

  • Raise the assigned memory in the VM settings
  • The new RAM has to be taken online inside the VM. For this, execute the script below
cd /sys/devices/system/memory
for i in memory*; do
  if [ "`cat $i/state`" = "offline" ]; then
    echo "online" > $i/state
  fi
done
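Whether the kernel has picked up the additional memory can be verified afterwards (a sketch; the exact MemTotal value of course depends on your VM):

```shell
# Total RAM the kernel currently manages; should grow after onlining
grep MemTotal /proc/meminfo

# Just the number in kB, handy for a before/after comparison
awk '/MemTotal/ {print $2}' /proc/meminfo
```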

I have tested this with SLES 11 SP1, SP2 and SP3.

vCenter SSO Upgrade Error 1603

Two days after the release of vSphere 5.5 I tried the upgrade of our vCenter, and it was not a “Next-Next-Finish” thing.

During the upgrade of the SSO service a rollback happened, and the only thing I found in the logfiles was a cryptic error 1603.

After a longer search I found a thread in the VMTN forums, “Error Upgrading vCenter Single Sign-on to 5.5”, and one post pointed me in the right direction.

One key in the Windows registry was empty: HKLM\SOFTWARE\VMware, Inc.\VMware Infrastructure\SSOServer\FqdnIp
After I filled this key with the FQDN of our server, the install worked flawlessly.
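Setting the value from an elevated command prompt could look like this (a sketch; `vcenter.example.com` is a placeholder for your server's FQDN):

```
reg add "HKLM\SOFTWARE\VMware, Inc.\VMware Infrastructure\SSOServer" /v FqdnIp /t REG_SZ /d vcenter.example.com
```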

Homeserver 2.0

I got the idea for this project from the German IT magazine c’t. You can find the article (in German) here.

So I thought: time to consolidate the mess behind my desk and put all components (HP MicroServer, external HDD enclosure, my router and all the power supplies) into one plain PC case. More CPU power, better expandability and perhaps lower power consumption are nice benefits.

The components I’ve bought are:

  • Case: Nanoxia DS1, with additional internal HDD Cages
  • CPU: Intel Core i5-4570
  • Mainboard: ASUS Gryphon Z87, mATX, 5 years warranty, enough PCIe slots for additional controllers and low power consumption
  • Power supply: Enermax Triathlor 300W, single-rail, which should be enough for my setup
  • SATA controller: the 4x SATA II controller Digitus 30104, based on the Marvell 88SX7042 chipset, which I already used in my MicroServer
  • One PCIe 1 Gbit NIC as the external interface for my router VM

Today my case arrived. I just unboxed it, added all internal HDD cages and took some pictures:


I think I have to remove one cage because the mainboard needs some space too.


The next parcel arrived. With mainboard, CPU and CPU cooler it looked like this:


I don’t use the internal fan controller for two reasons: first, my power supply does not have too many power connectors, and second, the mainboard has four fan connectors anyway.

In the next step the power supply found its way into the case, and as you can see, not all of my ordered HDD cages will fit.


The next day and one shopping trip later I had enough SATA power connectors, so I connected all HDDs and powered the system on for the first time:


The power consumption compared to my old system:

  • HP N36L with external Fantec 4-bay enclosure: 82 W
  • New system: 70 W

Finally finished. It took a while until I had time to do the last steps. Here are the pictures:


One thing to note: you can only mount 2,5″ HDDs in the upper two slots on the left side, because 3,5″ HDDs would touch the mainboard. But anyway, 12×3,5″ and 2×2,5″ is impressive. With a 5-in-3 HDD cage in the 3×5,25″ external bays you can mount 19 HDDs.