Importing boot2root VM problems
In the last two weeks I took some time to do Beco do Exploit’s challenge: a journey to own 30 boot2root machines in a month in order to practice penetration testing skills. Now, reviewing my notes, I’ll post some of the things that caught my attention as interesting to share.
The first one is setting those VMs up. Most of them worked just fine, but for others I needed some hacks. Let’s talk about it.
My VM setup
First things first, I will explain how I set up my VMs: I use LibVirt + QEMU + KVM. Normally virtual machines are shared in VirtualBox or VMware format (e.g. .ova), so I first need to convert their disk to qcow2. An .ova is just a tar file, so it’s just a matter of extracting it and converting its VMDK disk:
$ file 29_pegasus.ova
29_pegasus.ova: POSIX tar archive (GNU)
$ tar xvf 29_pegasus.ova
$ qemu-img convert -p -O qcow2 pegasus-disk1.vmdk 29_pegasus.qcow2
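If you want to double-check the result, qemu-img can inspect the converted image (just a sanity check, using the same file as above):
$ qemu-img info 29_pegasus.qcow2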
- I have configured a domain in LibVirt called test. It is a domain with a bridge connection to a virtual network interface, 2 GB of memory and 2 vCPUs:
$ virt-install --network bridge=hd0 \
--memory 2048 --vcpus 2 --arch x86_64 \
--graphics vnc --noautoconsole \
--disk ~/.labs/vms_disks/test.qcow2 \
--name test --import --os-variant generic
- The disk ~/.labs/vms_disks/test.qcow2 is a simple symbolic link to the VM I want to test at the moment:
$ ln -fs $PWD/29_pegasus.qcow2 ~/.labs/vms_disks/test.qcow2
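With that in place, switching to another machine is just a matter of re-pointing the symlink and starting the domain again. A minimal sketch, assuming the test domain above and virt-viewer for the VNC console:
$ ln -fs $PWD/09_DC.qcow2 ~/.labs/vms_disks/test.qcow2
$ virsh start test
$ virt-viewer test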
Issues and Fixes
While importing these 30 machines I faced two kinds of issues:
- VM boots but networking does not work
- VM doesn’t even boot
The first problem was the most common. I managed to fix it in two different ways: (1) fixing the ifname and (2) setting the IP statically. The former is simple, just a matter of setting the kernel command line on boot, but the latter is a nice trick.
Change cmdline parameters while booting
There are cases where a VM’s devices change their names while booting. In that case the VM network configuration will fail because it uses a different name than the actual device name.
Such cases can be easily solved. It is just a matter of editing the kernel command line while the VM is booting. To do it, press ‘e’ on the GRUB menu and add net.ifnames=0 to the line starting with linux.
Depending on your hardware you may need to add biosdevname=0 as well. Read this link for more information about it.
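For illustration, the edited line in the GRUB editor could end up looking something like this (the kernel path and root device are placeholders, not taken from any of these VMs):
linux /boot/vmlinuz root=/dev/sda1 ro quiet net.ifnames=0 biosdevname=0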
Mounting QCOW2 disk
I used the previous tip for a while until I faced a VM where it did not do the trick, so I needed to debug it. In order to do that I needed access to the machine so I could take a look inside it.
The way I did it was by mounting the QCOW2 image and adding a user for myself. First, expose the QCOW2 image as a block device and mount it:
sudo modprobe nbd max_part=8                        # load the network block device module
sudo qemu-nbd --connect=/dev/nbd0 $PWD/09_DC.qcow2  # expose the image as /dev/nbd0
sudo fdisk /dev/nbd0 -l                             # list its partitions
sudo mount /dev/nbd0p1 /mnt/                        # mount the first partition
Then generate a hash from a simple password:
$ mkpasswd --method=SHA-512 --stdin
After that, just add an entry to /mnt/etc/passwd:
$ echo 'gildasio:$6$YU2R8SWMFaqNtmEa$Av6bMLtT5krldU9lbyZgG8xjGVZztflSB3CTSSYiI3ed.DCopGhWfUdl/47.cNaJvQ999EYsqZq3HUq57gX9m1:0:0::/:/bin/bash' | sudo tee -a /mnt/etc/passwd
Now unmount it and start the VM:
sudo umount /mnt/                     # unmount the partition
sudo qemu-nbd --disconnect /dev/nbd0  # detach the image from /dev/nbd0
sudo rmmod nbd                        # unload the module
All done, it’s just a matter of understanding the problem inside the VM. In all cases I just added an IP address statically and went back to rooting the VM.
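As a reference for that last step, a minimal static setup from inside the VM could look like the following (the interface name and addresses are assumptions and depend on your virtual network; mine is a 192.168.122.0/24 libvirt bridge):
ip link set eth0 up                     # bring the interface up
ip addr add 192.168.122.50/24 dev eth0  # assign a free address on the bridge network
ip route add default via 192.168.122.1  # optional: default route through the host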
Try not to get any extra information from the machine so you don’t spoil yourself. Keep the challenge interesting :)
Change OS type
There was another case of a virtual machine not starting correctly.
I’ve dealt with similar issues in the past, so my gut feeling led me to believe it was some hardware/module incompatibility. virt-install provides an --os-variant parameter that defines optimal configurations for a specific OS type. Note that my test domain is defined with --os-variant generic because I use a lot of OS types with it, but in this case that was a problem.
So I checked the OS type of this specific machine:
$ grep -i ostype covfefe.ovf
<OperatingSystemSection ovf:id="95" ovf:version="6" vmw:osType="debian6Guest">
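To translate that osType into a value virt-install accepts, the libosinfo database can be queried (this assumes the osinfo-query tool from libosinfo is installed):
$ osinfo-query os | grep -i debian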
It is defined as Debian 6, so I just created another VM also defined as Debian 6.
- Added --check path_in_use=off so I can use the same ~/.labs/vms_disks/test.qcow2 as disk without conflicts:
$ virt-install --network bridge=hd0 \
--memory 2048 --vcpus 2 --arch x86_64 \
--graphics vnc --noautoconsole \
--disk ~/.labs/vms_disks/test.qcow2 \
--name test_debian6 --import \
--os-variant debian6 --check path_in_use=off
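To see what actually changed between the two domains, the XML definitions can be dumped with virsh and diffed (a sketch, using the domain names created above):
$ virsh dumpxml test > test.xml
$ virsh dumpxml test_debian6 > test_debian6.xml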
The real change with this configuration is that the Debian 6 variant uses virtio devices:
$ diff -up test.xml test_debian6.xml
...
@@ -30,8 +35,8 @@
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/home/gildasio/.labs/vms_disks/test.qcow2'/>
- <target dev='hda' bus='ide'/>
- <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ <target dev='vda' bus='virtio'/>
+ <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
</disk>
<controller type='usb' index='0' model='ich9-ehci1'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x7'/>
...
Conclusion
Using these tips I was able to correctly boot all the VMs. Hope these tips can help you too in similar circumstances. :)