
Re: Virtio on Xen with Rust





On Thu, Apr 14, 2022 at 12:15 PM Viresh Kumar <viresh.kumar@xxxxxxxxxx> wrote:
Hello,

Hello Viresh

[Cc Juergen and Julien]

[sorry for the possible format issues and for the late response]

 

We earlier verified our hypervisor-agnostic, Rust-based vhost-user backends with a
QEMU-based setup, and there was growing concern about whether they were truly
hypervisor-agnostic.

In order to prove that, we decided to give it a try with Xen, a type-1
bare-metal hypervisor.

We are happy to announce that we were able to make progress on that front and
have a working setup where we can test our existing Rust-based backends, like
I2C, GPIO, and RNG (though only I2C has been tested as of now), over Xen.

Great work!

 

Key components:
--------------

- Xen: https://github.com/vireshk/xen

  Xen requires MMIO and device-specific support in order to populate the
  required devices in the guest. This tree contains four patches on top of
  mainline Xen, two from Oleksandr (mmio/disk) and two from me (I2C).

I skimmed through your toolstack patches; awesome, you created a completely new virtual device, "I2C".
FYI, I have since updated "Virtio support for toolstack on Arm" [1] to make it more generic; V7 is now available and I plan to push V8 soon.

 

- libxen-sys: https://github.com/vireshk/libxen-sys

  We currently depend on the userspace tools/libraries provided by Xen, like
  xendevicemodel, xenevtchn, xenforeignmemory, etc. This crate provides Rust
  wrappers over those calls, generated automatically with the help of Rust's
  bindgen utility, that allow us to use the installed Xen libraries. Though we
  plan to replace this with the Rust-based "oxerun" (see below) in the longer run.
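
  For illustration, a minimal build.rs sketch of how such bindings are typically
  generated with bindgen (the "xen_wrapper.h" header name and the exact set of
  linked libraries are assumptions, not the actual contents of the libxen-sys
  crate):

    // build.rs: illustrative only, not the real libxen-sys build script.
    fn main() {
        // Link against the installed Xen userspace libraries.
        println!("cargo:rustc-link-lib=xendevicemodel");
        println!("cargo:rustc-link-lib=xenevtchn");
        println!("cargo:rustc-link-lib=xenforeignmemory");

        // "xen_wrapper.h" is a hypothetical header that simply includes the
        // Xen tool headers (xendevicemodel.h, xenevtchn.h, ...).
        let bindings = bindgen::Builder::default()
            .header("xen_wrapper.h")
            .generate()
            .expect("failed to generate Xen bindings");

        let out = std::path::PathBuf::from(std::env::var("OUT_DIR").unwrap());
        bindings
            .write_to_file(out.join("bindings.rs"))
            .expect("failed to write bindings.rs");
    }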

- oxerun (WIP): https://gitlab.com/mathieupoirier/oxerun/-/tree/xen-ioctls

  This is a Rust-based implementation of the ioctls and hypercalls to Xen. It is
  WIP and should eventually replace the "libxen-sys" crate entirely (which wraps
  the C-based implementation of the same).
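
  As a rough illustration of what a Rust-native replacement can look like, here
  is a hedged sketch (assuming the libc crate) of issuing a hypercall through
  the privcmd driver; the struct layout follows Linux's struct
  privcmd_hypercall, but the ioctl request value below is only a placeholder,
  not the real IOCTL_PRIVCMD_HYPERCALL encoding:

    use std::os::unix::io::RawFd;

    // Same layout as Linux's struct privcmd_hypercall.
    #[repr(C)]
    struct PrivcmdHypercall {
        op: u64,
        arg: [u64; 5],
    }

    // Placeholder: the real request number comes from the _IOC() macros.
    const IOCTL_PRIVCMD_HYPERCALL: libc::c_ulong = 0;

    fn hypercall(privcmd_fd: RawFd, op: u64, args: [u64; 5]) -> std::io::Result<libc::c_int> {
        let mut call = PrivcmdHypercall { op, arg: args };
        // Ask the privcmd driver to perform the hypercall on our behalf.
        let ret = unsafe {
            libc::ioctl(privcmd_fd, IOCTL_PRIVCMD_HYPERCALL, &mut call as *mut PrivcmdHypercall)
        };
        if ret < 0 {
            Err(std::io::Error::last_os_error())
        } else {
            Ok(ret)
        }
    }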
 

FYI, we are currently working on a feature to restrict memory access using Xen grant mappings, based on the xen-grant DMA-mapping layer for Linux [1].
There is a working PoC on Arm based on an updated virtio-disk. As for libraries, there is a new dependency on the "xengnttab" library. In comparison with the Xen foreign mappings model (xenforeignmemory),
the Xen grant mappings model is a good fit for the Xen security model: it is a safe mechanism for sharing pages between guests.

 

- vhost-device: https://github.com/vireshk/vhost-device

  These are Rust-based vhost-user backends, maintained inside the rust-vmm
  project. This already contains support for I2C and RNG, while GPIO is under
  review. These do not need to be modified per hypervisor and are truly
  hypervisor-agnostic.

  Ideally the backends are hypervisor-agnostic, as explained earlier, but
  because of the way Xen currently maps the guest memory, we need a minor update
  for the backends to work. Xen maps the memory via a kernel file,
  /dev/xen/privcmd, which needs a call to mmap() followed by an ioctl() to make
  it work. For this, a hack has been added to one of the rust-vmm crates,
  vm-memory, which is used by vhost-user.

  https://github.com/vireshk/vm-memory/commit/54b56c4dd7293428edbd7731c4dbe5739a288abd

  The update to vm-memory is responsible for doing the ioctl() after the already
  present mmap().

With Xen grant mappings, if I am not mistaken, it is going to be almost the same: mmap() then ioctl(). But the file will be "/dev/xen/gntdev".
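
To make the pattern concrete, here is a minimal sketch (assuming the libc crate) of the
mmap()-then-ioctl() sequence on /dev/xen/privcmd described above; the ioctl request
constant is only a placeholder for whichever privcmd mapping ioctl the vm-memory patch
actually uses, and a similar flow is expected for /dev/xen/gntdev:

    use std::fs::OpenOptions;
    use std::os::unix::io::AsRawFd;

    // Placeholder: stands in for the real privcmd mapping ioctl request.
    const PRIVCMD_MMAP_IOCTL: libc::c_ulong = 0;

    fn map_guest_region(len: usize) -> std::io::Result<*mut libc::c_void> {
        // A real implementation would keep this fd alive alongside the mapping.
        let privcmd = OpenOptions::new()
            .read(true)
            .write(true)
            .open("/dev/xen/privcmd")?;

        // Step 1: mmap() the privcmd device to reserve a VA range.
        let addr = unsafe {
            libc::mmap(
                std::ptr::null_mut(),
                len,
                libc::PROT_READ | libc::PROT_WRITE,
                libc::MAP_SHARED,
                privcmd.as_raw_fd(),
                0,
            )
        };
        if addr == libc::MAP_FAILED {
            return Err(std::io::Error::last_os_error());
        }

        // Step 2: ioctl() so the hypervisor backs that range with the guest's
        // pages (the real call takes a request-specific argument struct).
        let ret = unsafe { libc::ioctl(privcmd.as_raw_fd(), PRIVCMD_MMAP_IOCTL, addr) };
        if ret < 0 {
            return Err(std::io::Error::last_os_error());
        }

        Ok(addr)
    }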

 

- vhost-user-master (WIP): https://github.com/vireshk/vhost-user-master

  This implements the master-side interface of the vhost protocol, and is like
  the vhost-user-backend (https://github.com/rust-vmm/vhost-user-backend) crate
  maintained inside the rust-vmm project, which provides similar infrastructure
  for the backends to use. This shall be hypervisor-independent and provide APIs
  for the hypervisor-specific implementations. It will eventually be
  maintained inside the rust-vmm project and used by all Rust-based hypervisors.
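
  As a purely illustrative sketch of what "hypervisor-independent with hooks for
  the hypervisor-specific implementations" could look like (the trait and the
  names below are hypothetical, not the actual vhost-user-master API):

    /// Hypothetical hook trait: the pieces each hypervisor must supply.
    pub trait HypervisorOps {
        /// Map a guest-physical region into the backend's address space
        /// (e.g. foreign or grant mappings on Xen).
        fn map_guest_memory(&self, gpa: u64, size: usize) -> std::io::Result<*mut u8>;

        /// Signal the guest that a virtqueue has been serviced
        /// (e.g. an event channel on Xen).
        fn notify_guest(&self, queue_index: u16) -> std::io::Result<()>;
    }

    /// Hypervisor-independent master side of the vhost-user protocol.
    pub struct VhostUserMaster<H: HypervisorOps> {
        hypervisor: H,
        socket_path: String,
    }

    impl<H: HypervisorOps> VhostUserMaster<H> {
        pub fn new(hypervisor: H, socket_path: &str) -> Self {
            Self {
                hypervisor,
                socket_path: socket_path.to_owned(),
            }
        }

        // Feature negotiation, sharing memory regions and kicking queues over
        // the vhost-user socket would be built here on top of the
        // HypervisorOps hooks.
    }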

- xen-vhost-master (WIP): https://github.com/vireshk/xen-vhost-master

  This is the Xen-specific implementation and uses the APIs provided by the
  "vhost-user-master", "oxerun" and "libxen-sys" crates for its functioning.

  It is designed based on EPAM's "virtio-disk" repository
  (https://github.com/xen-troops/virtio-disk/) and is quite similar to it.

FYI, the new branch "virtio_grant", besides supporting Xen grant mappings, also supports the virtio-mmio modern transport.

 

  One can see the analogy as:

  Virtio-disk == "Xen-vhost-master" + "vhost-user-master" + "oxerun" + "libxen-sys" + "vhost-device".



Test setup:
----------

1. Build Xen:

  $ ./configure --libdir=/usr/lib --build=x86_64-unknown-linux-gnu --host=aarch64-linux-gnu --disable-docs --disable-golang --disable-ocamltools --with-system-qemu=/root/qemu/build/i386-softmmu/qemu-system-i386;
  $ make -j9 debball CROSS_COMPILE=aarch64-linux-gnu- XEN_TARGET_ARCH=arm64

2. Run Xen via QEMU on an x86 machine:

  $ qemu-system-aarch64 -machine virt,virtualization=on -cpu cortex-a57 -serial mon:stdio \
        -device virtio-net-pci,netdev=net0 -netdev user,id=net0,hostfwd=tcp::8022-:22 \
        -device virtio-scsi-pci -drive file=/home/vireshk/virtio/debian-bullseye-arm64.qcow2,index=0,id=hd0,if=none,format=qcow2 -device scsi-hd,drive=hd0 \
        -display none -m 8192 -smp 8 -kernel /home/vireshk/virtio/xen/xen \
        -append "dom0_mem=5G,max:5G dom0_max_vcpus=7 loglvl=all guest_loglvl=all" \
        -device guest-loader,addr=0x46000000,kernel=/home/vireshk/kernel/barm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
        -device ds1338,address=0x20     # This is required to create a virtual I2C based RTC device on Dom0.

  This should get Dom0 up and running.

3. Build the Rust crates:

  $ cd /root/
  $ git clone https://github.com/vireshk/xen-vhost-master
  $ cd xen-vhost-master
  $ cargo build

  $ cd ../
  $ git clone https://github.com/vireshk/vhost-device
  $ cd vhost-device
  $ cargo build

4. Set up the I2C-based RTC device

  $ echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device; echo 0-0020 > /sys/bus/i2c/devices/0-0020/driver/unbind

5. Let's run everything now

  # Start the I2C backend in one terminal (open a new terminal with "ssh
  # root@localhost -p8022"). This tells the I2C backend to hook up to the
  # "/root/vi2c.sock0" socket and wait for the master to start transacting.
  $ /root/vhost-device/target/debug/vhost-device-i2c -s /root/vi2c.sock -c 1 -l 0:32

  # Start the xen-vhost-master in another terminal. This provides the path of
  # the socket to the master side and the device to look for from Xen, which is
  # I2C here.
  $ /root/xen-vhost-master/target/debug/xen-vhost-master --socket-path /root/vi2c.sock0 --name i2c

  # Start the guest in another terminal; i2c_domu.conf is attached. The guest
  # kernel should have the Virtio-related config options enabled, along with the
  # i2c-virtio driver.
  $ xl create -c  i2c_domu.conf

  # The guest should boot fine now. Once the guest is up, you can create the I2C
  # RTC device and use it. The following will create /dev/rtc0 in the guest, which
  # you can configure with the "hwclock" utility.

  $ echo ds1338 0x20 > /sys/bus/i2c/devices/i2c-0/new_device

Thanks for the detailed instructions.

 


Hope this helps.

--
viresh



--
Regards,

Oleksandr Tyshchenko

 

