Windows 10 Boot Recovery or Not

I woke on the morning of Good Friday to find the PC at the BIOS screen with a CPU overheat error. Despite the CPU fan being listed as spinning at 1800 RPM, the CPU was cooking along at 75°C or so.

Time for a shutdown and a vacuum-out of the dust, something I’ve done routinely in the past, but this time it all went wrong. On starting the system again… it didn’t.

After an extremely frustrating sequence of reboots, BIOS setting reviews and recovery USB key creation, I think what happened was that I managed to clear the motherboard BIOS settings. Given that my boot disk was still a pair of Seagate Barracudas in Intel RAID1 (mirror), I think this was also the primary cause of my subsequent inability to boot. The fact that I had installed an SSD for the Windows 10 system when I did a clean install (after the original Win7 to Win10 upgrade) was a cause of additional confusion.

Additional complications included building a Windows system restore USB key (on another Win10 system) and trying to use Startup Repair (available under the system recovery menu). I think at some point I also ran the Linux install-mbr utility, as well as:

bootsect /nt60 ALL /force
bootrec /fixmbr
bootrec /fixboot
bootrec /scanos
bootrec /rebuildbcd

The RAID pair had an original MBR partition table, and the new SSD a GPT. From a hardware perspective, the Asus P8P67 Pro includes a 6Gbps SATA Marvell 9120 interface that I am fairly certain is addressable by the UEFI BIOS, but not in the early stages of the Win10 kernel boot sequence.

Finally, by the time all this had been tried, I ended up with an SSD that had a Microsoft Reserved Partition (16MB), an EFI system partition that was in fact a relabelled data partition with my Win10 system on it (200GB or so), and a Recovery partition (500MB or so). At this stage, bootrec /scanos was no longer finding the Windows install, although in some circumstances it was located at a path like \\?\Volume3\…

So how did I ultimately resolve it?

  1. Made the decision to get the SSD bootable and abandon the RAID pair for the time being.
  2. Rebuilt the partitions on the SSD, broadly following this sequence:
    1. boot from windows recovery and run command prompt
    2. diskpart
    3. list disk
    4. select disk X
    5. list partition
    6. select partition 2 (the "EFI" partition that wasn't)
    7. set id=EBD0A0A2-B9E5-4433-87C0-68B6B72699C7
    8. shrink desired=200
    9. create partition EFI size=200
    10. format quick fs=fat32 label="System"
    11. assign letter=b
    12. exit
    13. bootrec /fixboot
    14. bcdboot c:\windows /s b: /f ALL
      1. (Or maybe I ran bootrec /rebuildbcd later. Initially, the fact that I was still plugged into the Marvell SATA meant that the bcdboot step was failing.)


Posted in IT

Testing Openstack with Ansible and all-in-one install on Hyper-V

I installed Openstack recently in order to get my head around some aspects. I used my desktop (16GB RAM, a decent chunk of SSD and an i7-2600 @ 3.4GHz).

After looking at the 50 ways to install Openstack, I went with the developer Ansible automation (openstack-ansible). Installation proceeded as follows.

Install Deployment Host

This contains the ansible configuration and drives the process.

  1. Install Ubuntu as per requirements
    1. Installed 14.04 (yes, it’s old. They may fix that soon.)
    2. apt-get install aptitude build-essential git ntp ntpdate \
        openssh-server python-dev sudo
    3. git clone -b stable/mitaka …

Networking assignments

Network                        IP Range   VLAN
Host management                …          untagged
Container Management Network   …          10
Tunnel (VXLAN) Network         …          20
Storage Network                …          11


My numbering was

Host Host mgmt Container mgmt Tunnel Storage


Making Hyper-V connect up VLAN trunks that are run as bonded Ethernets

Hyper-V Manager can’t set trunk mode on adapters, so do this with PowerShell run as administrator:

Get-VM Deployment | Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList 1-100 -NativeVlanId 0
Get-VM Node1 | Set-VMNetworkAdapterVlan -Trunk -AllowedVlanIdList 1-100 -NativeVlanId 0

Note – the NativeVlanId 0 is required to bridge this into the untagged management domain for the external network.

You also need to enable MAC address spoofing (which the active/backup bonding will do) under Network Adapter | Advanced Features | MAC Address, or from PowerShell with Get-VM Deployment | Set-VMNetworkAdapter -MacAddressSpoofing On.


Install Target Host

As per the docs and the above. This could be improved by using MAAS, but I won’t go there yet.


Configure networking

Did this by hand, though it can probably be skipped now that the Ansible tooling generates the network configuration?


Deployment configuration questions – while configuring the yml and friends


Aged APT repo & keys… and figuring that out

Initially the AIO deployment failed due to untrusted packages; like a fool, I tried hand-installing the broken ones:

apt-get install libasan0 libatomic1 libgomp1 libitm1 libc-dev-bin \
  linux-libc-dev libc6-dev libexpat1-dev libpython2.7-dev \
  libquadmath0 libtsan0 python3-libapparmor python3-pkg-resources \
  python3-apparmor apparmor-utils binutils cpp-4.8 libgcc-4.8-dev \
  gcc-4.8 libstdc++-4.8-dev g++-4.8 libdpkg-perl dpkg-dev \
  python2.7-dev python-software-properties

Running openstack-ansible setup-hosts.yml didn’t get me much further; it failed again during security hardening (the postfix install), so I added a no-authenticate setting in the yml files. Not the right place, though.

Stopped, thought, learnt and instead tried

apt-key update
apt-get update

This (after agreeing to a key update, from recollection) resolved the issues.

Continuing on to manually run playbooks

openstack-ansible setup-hosts.yml
openstack-ansible setup-infrastructure.yml

Note the formatting of the output as Ansible runs the playbooks – there should be no errors.

Confirmed with

ansible galera_container -m shell -a \
   "mysql -h localhost -e 'show status like \"%wsrep_cluster_%\";'"

Then finally installing openstack

openstack-ansible setup-openstack.yml


Wait something like an hour

It’s up!

Log in with admin / 5f915721a645bf38735ff099

Everything appeared to be running, except that Cinder volume block storage was down, probably because I may just have skipped some necessary LVM prep steps.

Posted in IT

Testing glusterfs on centos

Following along with the CentOS howto, using CentOS 7.2.

Just a couple of things have changed since it was written:

  1. As per the CentOS storage special interest group, you can now get the glusterfs packages without using wget to retrieve additional repos:
    yum install centos-release-gluster
    yum install glusterfs-server samba
  2. Evidently XFS filesystems should be formatted with 512-byte inodes, not 256.
  3. If you want to delete a volume immediately after creating it, you’ll need some incantations to re-use the bricks without re-formatting them.
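For the record, the incantations in point 3 are the usual brick-reuse ones: Gluster stamps the brick root with trusted.glusterfs.volume-id and trusted.gfid extended attributes plus a .glusterfs metadata directory, and volume create refuses a brick that still carries them. A minimal sketch, assuming a made-up brick path and root privileges on the storage node:

```shell
# Clear the markers that stop "gluster volume create" from re-using a brick,
# without re-formatting the filesystem underneath it.
reset_brick() {
    brick="$1"
    # Remove the volume-id / gfid xattrs (trusted.* needs root; ignore if absent).
    setfattr -x trusted.glusterfs.volume-id "$brick" 2>/dev/null || true
    setfattr -x trusted.gfid "$brick" 2>/dev/null || true
    # Remove the per-brick metadata tree.
    rm -rf "${brick:?}/.glusterfs"
}

# Usage (path is a hypothetical example): reset_brick /export/brick1/gv0
```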

Along the way I asked myself the following questions

  1. Should each of the physical disks be surfaced as a separate volume group, with a single logical volume allocated across 100% of the VG? Allocating 100% prevents using LVM snapshots, but why would you ever want those (or are they a building block for GlusterFS snapshots?). Is there something that could be done with HSM and LVM to balance blocks across the nodes?
  2. What is the best filesystem for the bricks? XFS is the default. As above, it should be formatted with 512 byte inodes.
  3. What should the network segmentation and firewall zone configuration be?
  4. What brick layout, replica setting and stripe setting make sense? The number of bricks must be a multiple of the product of the replica and stripe counts.
  5. How to best utilise SSD
    1. Through glusterfs file-level tiering in the March Gluster / Red Hat storage tech preview? This version also gives access to erasure coding, lowering the cost of storage replication.
      1. This is immediately available in the CentOS 7.2 build. BUT… it works at the file level, so it’s not so helpful with VMs. However, if sharding is turned on, perhaps it works at the shard level?
    2. Through block level management with dm-cache or similar, preferably integrated to LVM?
    3. Through hardware RAID controller?
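On question 4, the constraint is easy to sanity-check: a distributed striped replicated volume needs a brick count of stripe × replica × the number of distribute subvolumes. A throwaway example (all numbers made up):

```shell
# Brick-count arithmetic for a distributed striped replicated Gluster volume:
# bricks = stripe * replica * distribute-subvolumes.
stripe=2       # stripe count
replica=3      # replica count
distribute=2   # number of distribute subvolumes wanted
bricks=$(( stripe * replica * distribute ))
echo "$bricks bricks needed"   # 12 bricks needed
```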
Posted in IT

Cropping multiple images with Gimp and script-fu

I recently had a series of video screenshots from a Gotomeeting screencapture. The presenter screen had resolution 1366×768; the meeting organiser’s, 1920×1080.

The result was that all the images were surrounded with a black border. In turning the screencapture into a set of stills to present as narrated video, I used the following in Gimp

  • Start Gimp and enter Filters | Script-fu | Console
  • In the console that comes up, paste the following to define the batch-resize function
(define (batch-resize pattern new-width new-height offx offy)
(let* ((filelist (cadr (file-glob pattern 1))))
 (while (not (null? filelist))
        (let* ((filename (car filelist))
        (image (car (gimp-file-load RUN-NONINTERACTIVE
                                    filename filename)))
        (drawable (car (gimp-image-get-active-layer image))))
        (gimp-layer-resize drawable new-width new-height offx offy)
        (gimp-image-resize-to-layers image)
        (gimp-file-save RUN-NONINTERACTIVE
                        image drawable filename filename)
        (gimp-image-delete image))
        (set! filelist (cdr filelist)))))

  • Then in the same console you can run the following
(batch-resize "/path/to/screenshots/*png" 1366 768 -272 -157)

Note that this was on a Windows system, and the filesystem path format still uses forward slashes. Note also that this will replace the existing files with modified versions.
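As a cross-check on the offsets: if the black border were perfectly symmetric, the layer offsets would simply be (new − old) / 2 per axis; my −272/−157 differ slightly, presumably because the border in these captures wasn’t quite centred. The arithmetic as a quick shell sketch:

```shell
# Centre a new_w x new_h crop window inside a src_w x src_h frame.
# The layer offsets are (new - src) / 2, negative because the layer shrinks.
src_w=1920; src_h=1080
new_w=1366; new_h=768
offx=$(( (new_w - src_w) / 2 ))
offy=$(( (new_h - src_h) / 2 ))
echo "$offx $offy"   # -277 -156
```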

If you want to perform some other form of processing on each file, the key function to replace is (gimp-layer-resize …)

For more information, see the Script-fu Tutorial or the Gimp scripting manual.

Posted in IT