Re: [Xen-devel] [GSOC] Xen on ARM: create multiple guests from device tree
CC'ing Edgar who is co-mentoring this project

On Sun, 4 Feb 2018, Denis Obrezkov wrote:
> Hello all,
>
> I would like to participate in GSoC 2018 with the Xen on ARM
> related project. I have some previous experience with GSoC:
> https://summerofcode.withgoogle.com/archive/2017/projects/4780624749527040/
>
> Could you give me more details on the project?
>
> I have RPi3 and BBB boards, or should this work be done in an emulator?

Hello Denis,

it is great to see interest in Xen on ARM and this project!

Unfortunately, the RPi3 can't run Xen as far as I know, because its
non-ARM interrupt controller lacks virtualization support. Otherwise it
would have been a good dev board. The BeagleBoard doesn't have a
processor with virtualization support either, so it cannot run Xen
(Xen needs a Cortex-A7 or A15).

But that's not a problem, because the latest QEMU (2.11) can run Xen
just fine. Build QEMU with --target-list=aarch64-softmmu, then you can
run it with:

  qemu-system-aarch64 -machine virt,gic_version=3 \
      -machine virtualization=true \
      -cpu cortex-a57 -machine type=virt \
      -smp 4 -m 2048 \
      -serial stdio -monitor none \
      -bios /path/QEMU_EFI.fd \
      -netdev user,id=hostnet0 -device virtio-net-device,netdev=hostnet0 \
      -drive if=none,file=$DISK1,id=hd0 -device virtio-blk-device,drive=hd0

where DISK1 is your EFI-ready disk image and QEMU_EFI.fd can be
downloaded from:

http://snapshots.linaro.org/components/kernel/leg-virt-tianocore-edk2-upstream/latest/QEMU-AARCH64/RELEASE_GCC5/QEMU_EFI.fd

See the following page for more detailed information:

https://wiki.xenproject.org/wiki/Xen_ARM_with_Virtualization_Extensions/qemu-system-aarch64

Give it a try and let me know if you have any issues.

Cheers,

Stefano
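For readers who want to reproduce the setup end to end, here is a
minimal, untested sketch of the steps described above: building QEMU
with the aarch64 target, fetching the Linaro UEFI firmware, and
creating a disk image to boot. The checkout tag, disk file name, and
8G size are illustrative assumptions; any EFI-ready disk image (for
example, one with an aarch64 distribution already installed) can be
substituted for $DISK1.

  # Build QEMU with aarch64 system emulation
  # (the v2.11.0 tag is an assumed example; any release >= 2.11 should do).
  git clone https://git.qemu.org/git/qemu.git
  cd qemu
  git checkout v2.11.0
  ./configure --target-list=aarch64-softmmu
  make -j"$(nproc)"

  # Fetch the UEFI firmware image linked in the message above.
  wget http://snapshots.linaro.org/components/kernel/leg-virt-tianocore-edk2-upstream/latest/QEMU-AARCH64/RELEASE_GCC5/QEMU_EFI.fd

  # Create an empty disk image (8G is arbitrary); install an
  # EFI-bootable aarch64 distro onto it, or substitute an existing
  # EFI-ready image.
  qemu-img create -f qcow2 disk1.qcow2 8G
  export DISK1=disk1.qcow2

  # Then boot with the qemu-system-aarch64 command shown in the
  # message above, pointing -bios at the downloaded QEMU_EFI.fd.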