Error while trying to install OS on (KVM) virtual guest machine

Hi,
I’m using the Ampere Dev platform kit and have successfully installed Ubuntu 22.04 Server for ARM. I have also installed and configured KVM and created a host instance with it. However, when I try to install any ARM-based guest OS (like Ubuntu Desktop for ARM or Windows 11 for ARM), it does not recognize the virtual boot device or even the ISO file.
The error is: BdsDxe: No bootable option or device was found.
Any suggestion / solution is most welcome.
Thanks.


I ran into something similar with RHEL 9; my issue may be the same as what you are experiencing. It manifested as “no bootable device found” when starting a VM and connecting to either the virtual console or terminal; the instance would just drop to the EFI shell. Running “map -r” didn’t show any devices; I couldn’t even see the ISO I had mounted, which would have let me navigate to the UEFI boot file.
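For reference, the kind of poking around I was doing from the EFI shell looked roughly like this (fs0: and the path are illustrative; on aarch64 the removable-media boot file is BOOTAA64.EFI):

Shell> map -r
Shell> fs0:
FS0:\> cd EFI\BOOT
FS0:\EFI\BOOT> BOOTAA64.EFI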

My issue came down to missing packages for specific libvirt daemon drivers. I can confirm that KVM does work on the Altra Q64-22 CPU in my ASRock Rack kit. If you have a similar ARMv8 CPU, I’d check whether virtualization is supported, as I’m not familiar with the development kit.
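A couple of quick, generic checks for that (assuming libvirt is already installed, since virt-host-validate ships with it):

ls -l /dev/kvm # the KVM device node should exist if the CPU/firmware support it
virt-host-validate qemu # libvirt’s built-in host capability check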

I’m taking a stab at it based on the packages I have installed.

Here is a cleaned-up list of those packages. Again, this is from an RPM-based distro, but the Debian/Ubuntu packages should have similar names; I haven’t used Ubuntu in a while, nor any Debian flavor on my Ampere box as of yet. (A query to reproduce this list on your own host is sketched after the list.)

libvirt
libvirt-client
libvirt-client-qemu
libvirt-daemon
libvirt-daemon-common
libvirt-daemon-config-network
libvirt-daemon-config-nwfilter
libvirt-daemon-driver-interface
libvirt-daemon-driver-network
libvirt-daemon-driver-nodedev
libvirt-daemon-driver-nwfilter
libvirt-daemon-driver-qemu
libvirt-daemon-driver-secret
libvirt-daemon-driver-storage
libvirt-daemon-driver-storage-core
libvirt-daemon-driver-storage-disk
libvirt-daemon-driver-storage-iscsi
libvirt-daemon-driver-storage-logical
libvirt-daemon-driver-storage-mpath
libvirt-daemon-driver-storage-rbd
libvirt-daemon-driver-storage-scsi
libvirt-daemon-lock
libvirt-daemon-log
libvirt-daemon-plugin-lockd
libvirt-daemon-proxy
libvirt-dbus
libvirt-glib
libvirt-libs
python3-libvirt
qemu-guest-agent
qemu-img
qemu-kvm
qemu-kvm-audio-pa
qemu-kvm-block-blkio
qemu-kvm-block-rbd
qemu-kvm-common
qemu-kvm-core
qemu-kvm-device-display-virtio-gpu
qemu-kvm-device-display-virtio-gpu-pci
qemu-kvm-device-usb-host
qemu-kvm-device-usb-redirect
qemu-kvm-docs
qemu-kvm-tools
qemu-pr-helper
virt-install
virt-manager-common
virt-viewer ## Installed since I put X11 on this host for troubleshooting at the KVM console while I was having issues getting the BMC to work ##
virt-what
virtiofsd
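If you want to compare against your own system, a rough way to pull the same list on an RPM-based distro (the name patterns are approximate):

rpm -qa --qf '%{NAME}\n' | grep -Ei 'libvirt|qemu|virt' | sort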

Hi Milind, did you get the answer to your question from Dennis, or do you still need help?

Dave.

I’m rebuilding my host this evening after moving it into a Fractal Design Pop Silent mid-tower case and replacing my primary NVMe drive, swapping a Samsung 990 Pro for a Crucial T500, as I no longer trust the Samsungs after they seemed to randomly disconnect, but only on this board.

I tend to run the same configurations that I do for work, so I’ll be installing RHEL 9.5 and will capture the packages and configuration steps that allow KVM guests/domains to run correctly.

…I’m also removing the rear I/O shield for reasons known to the forum

I finished rebuilding the host and got KVM/QEMU functioning without issue.

I’ve captured what I installed so you can reproduce it on your end (or so anyone else who finds this later can follow along) under RHEL or a similar RPM-based distro.

I use RHEL as it’s what I run at work, and since this is a test platform for containers and running K8s through KVM, I keep this and an x86 host (i9-14900K, 192 GB) aligned with what I run in the office. (Just heading off any potential comments about running Debian/Arch; I do run Debian and FreeBSD on most personal projects.)

Repositories used

epel # Needed for the bridge-utils package
rhel-9-for-aarch64-appstream-rpms # You should also be able to use install media as a local repo
rhel-9-for-aarch64-baseos-rpms # You should also be able to use install media as a local repo

Package Installation

dnf install qemu-kvm libvirt virt-install tpm2-tools tpm2-abrmd bridge-utils
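Since the original question was on Ubuntu 22.04, a rough Debian/Ubuntu equivalent would be something like the following (package names from memory, so double-check them):

sudo apt install qemu-system-arm libvirt-daemon-system libvirt-clients virtinst bridge-utils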

Create a bridge

nmcli con add type bridge ifname br0

Add the interface to the bridge
I’m using a single interface on the host (LAN2); the device name is enP3p3s0f1

nmcli con add type bridge-slave ifname enP3p3s0f1 master br0

Restart the interface so that the bridge becomes active

ifconfig enP3p3s0f1 down; ifconfig enP3p3s0f1 up
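On RHEL 9 the net-tools package (which provides ifconfig) may not even be installed; an nmcli alternative, assuming the auto-generated connection names from the commands above, would be:

nmcli con up bridge-slave-enP3p3s0f1
nmcli con up bridge-br0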

Add your primary user account to the required groups for running the virt tools

usermod -aG libvirt,libstoragemgmt <user account>
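The new group membership only applies to fresh sessions, so log out and back in (or use newgrp), then verify with:

id <user account>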

Enable the daemons

for i in qemu network nodedev nwfilter secret storage interface; do
  sudo systemctl enable --now virt${i}d{,-ro,-admin}.socket
done
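You can confirm the sockets came up with:

systemctl list-units 'virt*.socket'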

Add a configuration file, either by echoing it or with a heredoc as in my example. This points your primary user at the system virt tools by default, somewhat like exporting KUBECONFIG

mkdir -p ~/.config/libvirt
cat >>~/.config/libvirt/libvirt.conf <<EOF
uri_default = "qemu:///system"
EOF
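To confirm it took effect, virsh should now report the system URI without needing -c:

virsh uri # should print qemu:///system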

At this point, I cloned my internal repo, which has scripts for standing up KVM instances using cloud-init.
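The repo itself is internal, but for anyone following along, a minimal virt-install sketch along those lines might look like this (the image path, user-data file, and osinfo value are placeholders, not from my actual scripts):

virt-install --name test-vm \
  --memory 4096 --vcpus 4 \
  --disk size=20,backing_store=/var/lib/libvirt/images/rhel9-cloud.qcow2 \
  --cloud-init user-data=./user-data.yaml \
  --network bridge=br0 \
  --osinfo rhel9.4 \
  --import --noautoconsole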

Let me know if you have any issues, and I’ll do what I can to help you get this up and running.

I connect to the console of a domain (a VM/guest in libvirt speak) so that I can get the serial output and see whether my cloud-init process, with its post-deployment “runcmd” section, is functioning.
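The command for that is just the following (exit the console with Ctrl+]):

virsh console <domain name>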

I’m playing with Cockpit, as I need to implement controls and session recording at work, and I’m testing and validating with this host before I go break production. The domain completes POST in the KVM environment and reaches the cloud-init stage as part of the deployment.
