[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]

RE: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm


  • To: Oleksandr Tyshchenko <olekstysh@xxxxxxxxx>, Masami Hiramatsu <masami.hiramatsu@xxxxxxxxxx>, Alex Bennée <alex.bennee@xxxxxxxxxx>
  • From: Wei Chen <Wei.Chen@xxxxxxx>
  • Date: Mon, 2 Nov 2020 07:23:48 +0000
  • Accept-language: en-US
  • Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>, xen-devel <xen-devel@xxxxxxxxxxxxxxxxxxxx>, Oleksandr Tyshchenko <oleksandr_tyshchenko@xxxxxxxx>, Paul Durrant <paul@xxxxxxx>, Jan Beulich <jbeulich@xxxxxxxx>, Andrew Cooper <andrew.cooper3@xxxxxxxxxx>, Roger Pau Monné <roger.pau@xxxxxxxxxx>, Wei Liu <wl@xxxxxxx>, Julien Grall <Julien.Grall@xxxxxxx>, George Dunlap <george.dunlap@xxxxxxxxxx>, Ian Jackson <iwj@xxxxxxxxxxxxxx>, Julien Grall <julien@xxxxxxx>, Tim Deegan <tim@xxxxxxx>, Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>, Volodymyr Babchuk <Volodymyr_Babchuk@xxxxxxxx>, Jun Nakajima <jun.nakajima@xxxxxxxxx>, Kevin Tian <kevin.tian@xxxxxxxxx>, Anthony PERARD <anthony.perard@xxxxxxxxxx>, Bertrand Marquis <Bertrand.Marquis@xxxxxxx>
  • Delivery-date: Mon, 02 Nov 2020 07:24:27 +0000
  • List-id: Xen developer discussion <xen-devel.lists.xenproject.org>
  • Thread-topic: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm

Hi Oleksandr,

Thanks for sharing the virtio-disk backend. I have tested it on the Arm FVP_Base
platform.
We used Domain-0 to run the virtio-disk backend. The backend disk is a loop device:
    "virtio_disks": [
        {
            "backend_domname": "Domain-0",
            "devid": 0,
            "disks": [
                {
                    "filename": "/dev/loop0"
                }
            ]
        }
    ],
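For reference, a loop-backed disk like the one above can be prepared along these lines (the image path is illustrative; the 64 MiB size matches the capacity seen in the guest logs below):

```shell
# Create a 64 MiB zero-filled backing image (illustrative path).
dd if=/dev/zero of=/tmp/virtio-disk.img bs=1M count=64
# Attach it as /dev/loop0 for the backend to use (requires root), e.g.:
#   losetup /dev/loop0 /tmp/virtio-disk.img
```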

It works fine and I've pasted some logs:

-------------------------------------------
Domain-0 logs:
main: read backend domid 0
(XEN) gnttab_mark_dirty not implemented yet
(XEN) domain_direct_pl011_init for domain#2
main: read frontend domid 2
  Info: connected to dom2

demu_seq_next: >XENSTORE_ATTACHED
demu_seq_next: domid = 2
demu_seq_next: filename[0] = /dev/loop0
demu_seq_next: readonly[0] = 0
demu_seq_next: base[0]     = 0x2000000
demu_seq_next: irq[0]      = 33
demu_seq_next: >XENCTRL_OPEN
demu_seq_next: >XENEVTCHN_OPEN
demu_seq_next: >XENFOREIGNMEMORY_OPEN
demu_seq_next: >XENDEVICEMODEL_OPEN
demu_initialize: 2 vCPU(s)
demu_seq_next: >SERVER_REGISTERED
demu_seq_next: ioservid = 0
demu_seq_next: >RESOURCE_MAPPED
demu_seq_next: shared_iopage = 0xffffae6de000
demu_seq_next: buffered_iopage = 0xffffae6dd000
demu_seq_next: >SERVER_ENABLED
demu_seq_next: >PORT_ARRAY_ALLOCATED
demu_seq_next: >EVTCHN_PORTS_BOUND
demu_seq_next: VCPU0: 3 -> 7
demu_seq_next: VCPU1: 5 -> 8
demu_seq_next: >EVTCHN_BUF_PORT_BOUND
demu_seq_next: 0 -> 9
demu_register_memory_space: 2000000 - 20001ff
  Info: (virtio/mmio.c) virtio_mmio_init:290: virtio-mmio.devices=0x200@0x2000000:33
demu_seq_next: >DEVICE_INITIALIZED
demu_seq_next: >INITIALIZED
IO request not ready
IO request not ready

----------------
Dom-U logs:
[    0.491037] xen:xen_evtchn: Event-channel device installed
[    0.493600] Initialising Xen pvcalls frontend driver
[    0.516807] Serial: 8250/16550 driver, 4 ports, IRQ sharing disabled
[    0.525565] cacheinfo: Unable to detect cache hierarchy for CPU 0
[    0.562275] brd: module loaded
[    0.595300] loop: module loaded
[    0.683800] virtio_blk virtio0: [vda] 131072 512-byte logical blocks (67.1 MB/64.0 MiB)
[    0.684000] vda: detected capacity change from 0 to 67108864


/ # dd if=/dev/vda of=/dev/null bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (64.0MB) copied, 3.196242 seconds, 20.0MB/s
/ # dd if=/dev/zero of=/dev/vda bs=1M count=64
64+0 records in
64+0 records out
67108864 bytes (64.0MB) copied, 3.704594 seconds, 17.3MB/s
---------------------

The read/write works fine in Dom-U. The FVP platform is an emulator, so the
performance numbers are not representative.
We will test it on real hardware such as the N1SDP.
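As a quick sanity check, the rates dd reports above are consistent with the byte counts and elapsed times (BusyBox dd's "MB" here is really MiB):

```python
# Recompute the dd throughput figures from the raw numbers in the logs.
bytes_copied = 67108864  # 64 MiB

read_rate = bytes_copied / 3.196242 / (1 << 20)   # MiB/s for the read test
write_rate = bytes_copied / 3.704594 / (1 << 20)  # MiB/s for the write test

print(round(read_rate, 1), round(write_rate, 1))  # 20.0 17.3
```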

Thanks,
Wei Chen

----------------------------------------------------------------------------------
From: Xen-devel <xen-devel-bounces@xxxxxxxxxxxxxxxxxxxx> On Behalf Of Oleksandr 
Tyshchenko
Sent: November 1, 2020 5:11
To: Masami Hiramatsu <masami.hiramatsu@xxxxxxxxxx>; Alex Bennée 
<alex.bennee@xxxxxxxxxx>
Cc: Stefano Stabellini <sstabellini@xxxxxxxxxx>; xen-devel 
<xen-devel@xxxxxxxxxxxxxxxxxxxx>; Oleksandr Tyshchenko 
<oleksandr_tyshchenko@xxxxxxxx>; Paul Durrant <paul@xxxxxxx>; Jan Beulich 
<jbeulich@xxxxxxxx>; Andrew Cooper <andrew.cooper3@xxxxxxxxxx>; Roger Pau Monné 
<roger.pau@xxxxxxxxxx>; Wei Liu <wl@xxxxxxx>; Julien Grall 
<Julien.Grall@xxxxxxx>; George Dunlap <george.dunlap@xxxxxxxxxx>; Ian Jackson 
<iwj@xxxxxxxxxxxxxx>; Julien Grall <julien@xxxxxxx>; Tim Deegan <tim@xxxxxxx>; 
Daniel De Graaf <dgdegra@xxxxxxxxxxxxx>; Volodymyr Babchuk 
<Volodymyr_Babchuk@xxxxxxxx>; Jun Nakajima <jun.nakajima@xxxxxxxxx>; Kevin Tian 
<kevin.tian@xxxxxxxxx>; Anthony PERARD <anthony.perard@xxxxxxxxxx>; Bertrand 
Marquis <Bertrand.Marquis@xxxxxxx>
Subject: Re: [PATCH V2 00/23] IOREQ feature (+ virtio-mmio) on Arm



On Fri, Oct 30, 2020 at 1:34 PM Masami Hiramatsu 
<masami.hiramatsu@xxxxxxxxxx> wrote:
Hi Oleksandr,
 
Hi Masami, all

[sorry for the possible format issue]
 
>> >
>> >       Could you tell me how can I test it?
>> >
>> >
>> > I assume it is due to the lack of the virtio-disk backend (which I haven't 
>> > shared yet as I focused on the IOREQ/DM support on Arm in the
>> > first place).
>> > Could you wait a little bit, I am going to share it soon.
>>
>> Do you have a quick-and-dirty hack you can share in the meantime? Even
>> just on github as a special branch? It would be very useful to be able
>> to have a test-driver for the new feature.
>
> Well, I will provide a branch on github with our PoC virtio-disk backend by 
> the end of this week. It will be possible to test this series with it.

Great! OK I'll be waiting for the PoC backend.

Thank you!

You can find the virtio-disk backend PoC (shared as is) at [1]. 
Brief description...

The virtio-disk backend PoC is a completely standalone entity (an IOREQ server) 
which emulates a virtio-mmio disk device.
It is based on code from DEMU [2] (for the IOREQ server parts), some code from 
kvmtool [3] to implement the virtio protocol and the disk operations over the 
underlying H/W, and Xenbus code to read its configuration from Xenstore
(it is configured via the domain config file). The last patch in this series 
(marked as RFC) adds the required bits to the libxl code.

Some notes...

The backend can be used with the current V2 IOREQ series [4] without any 
modifications; all you need to do is enable
CONFIG_IOREQ_SERVER on Arm [5], since it is disabled by default within this 
series.

Please note that in our system we run the backend in DomD (a driver domain). I 
haven't tested it in Dom0,
since in our system Dom0 is thin (without any H/W) and is only used to launch 
VMs, so there is no underlying block H/W.
But, I hope, it is possible to run it in Dom0 as well (at least there is 
nothing specific to a particular domain in the backend itself, nothing 
hardcoded).
If you are going to run the backend in a domain other than Dom0, you need to 
write your own FLASK policy for the backend (running in that domain)
to be able to issue DM-related requests, etc. For test purposes only, you could 
use this patch [6], which tweaks the Xen dummy policy (not for upstream).
  
As I mentioned elsewhere, you don't need to modify the guest Linux (DomU); just 
enable the VirtIO-related configs.
If I remember correctly, the following would be enough:
CONFIG_BLK_MQ_VIRTIO=y
CONFIG_VIRTIO_BLK=y
CONFIG_VIRTIO=y
CONFIG_VIRTIO_BALLOON=y
CONFIG_VIRTIO_MMIO=y
If I remember correctly, if your host Linux (Dom0 or DomD) version is >= 4.17, 
you don't need to modify it either.
Otherwise, you need to cherry-pick "xen/privcmd: add 
IOCTL_PRIVCMD_MMAP_RESOURCE" from upstream to be able
to use the acquire interface for resource mapping.


We usually build the backend as part of the Yocto build process and run it 
as a systemd service,
but you can also build and run it manually (it must be launched before the DomU 
is created).

There are no command line options at all. Everything is configured via the domain 
configuration file:
# This option is mandatory; it indicates that VirtIO is going to be used by the guest
virtio=1
# Example disk configuration (two disks are assigned to the guest, the 
latter in readonly mode):
vdisk = [ 'backend=DomD, disks=rw:/dev/mmcblk0p3;ro:/dev/mmcblk1p3' ]

Hope that helps. Feel free to ask questions if any.

[1] https://github.com/xen-troops/virtio-disk/commits/ioreq_v3
[2] https://xenbits.xen.org/gitweb/?p=people/pauldu/demu.git;a=summary
[3] https://git.kernel.org/pub/scm/linux/kernel/git/will/kvmtool.git/
[4] https://github.com/otyshchenko1/xen/commits/ioreq_4.14_ml3
[5] 
https://github.com/otyshchenko1/xen/commit/ee221102193f0422a240832edc41d73f6f3da923
[6] 
https://github.com/otyshchenko1/xen/commit/be868a63014b7aa6c9731d5692200d7f2f57c611

-- 
Regards,

Oleksandr Tyshchenko

 

