
Re: [Xen-devel] Xen 4.3 PCI passthrough possible bug



Works like a charm.  I do not have physical access to the computer this weekend to verify that the cards are isolated, but the HVM starts and appears to be working well.

When do you think Xen 4.4 will be released?  The article I read mentioned it will be released in 2014 (hinting towards the end of February).  I also read 'When it is ready.'

Any timeline would be great.

Thanks again for your help!


On Sat, Feb 8, 2014 at 10:37 AM, Mike Neiderhauser <mikeneiderhauser@xxxxxxxxx> wrote:
I will give it a shot.  Thanks!


On Sat, Feb 8, 2014 at 10:36 AM, Konrad Wilk <konrad.wilk@xxxxxxxxxx> wrote:

----- mikeneiderhauser@xxxxxxxxx wrote:
>
Ah, so you are looking for the "xen_pt: Fix passthrough of device with ROM" commit,
which is not in Xen 4.4-rc3 but is in master.

One thing you can do is:

cd xen/tools/qemu-xen-dir
git fetch origin
git checkout origin/master
[you should see: "HEAD is now at 027c412... configure: Disable libtool if -fPIE does not work with it (bug #1257099)"]

Go back to the main Xen directory:
cd ../../../
./configure
make 
make install

and you should now be using a newer version of QEMU with the fix.
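
To double-check that the fix is actually in your checkout (assuming the
commit subject was not reworded when it was applied), you can grep the
QEMU log from xen/tools/qemu-xen-dir:

git log --oneline | grep -i "xen_pt: Fix passthrough"

If that prints a commit hash, you have it.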


>

git clone -b 4.4.0-rc3 git://xenbits.xen.org/xen.git
>
Had to take some additional steps here to get all of the libs:
# apt-get install build-essential 
# apt-get install bcc bin86 gawk bridge-utils iproute libcurl3 libcurl4-openssl-dev bzip2 module-init-tools transfig tgif 
# apt-get install texinfo texlive-latex-base texlive-latex-recommended texlive-fonts-extra texlive-fonts-recommended pciutils-dev mercurial
# apt-get install make gcc libc6-dev zlib1g-dev python python-dev python-twisted libncurses5-dev patch libvncserver-dev libsdl-dev libjpeg62-dev
# apt-get install iasl libbz2-dev e2fslibs-dev git-core uuid-dev ocaml ocaml-findlib libx11-dev bison flex xz-utils libyajl-dev
# apt-get install gettext
# apt-get install libaio-dev
# apt-get install libpixman-1-dev
./configure
make dist
make install
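
(After rebooting into the new build, something like

# xl info | grep -E 'xen_version|xen_extra'

confirms which hypervisor version is actually running.)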
>
>
>
> On Fri, Feb 7, 2014 at 4:49 PM, Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx> wrote:
>
> On Fri, Feb 07, 2014 at 04:29:18PM -0500, Mike Neiderhauser wrote:
> > I did not use the patch.  I was assuming it was already patched given
> > the previous email.  Is the patch for the QEMU source or the Xen source?
>
>
> It is for QEMU, but you are right - it should have been part of QEMU
> if you got the latest version of Xen-unstable.
>
> You didn't use a specific tag, just 'staging'?
>
>
>
> >
> >
> > On Fri, Feb 7, 2014 at 4:01 PM, Konrad Rzeszutek Wilk <
> > konrad.wilk@xxxxxxxxxx> wrote:
> >
> > > On Fri, Feb 07, 2014 at 03:45:19PM -0500, Mike Neiderhauser wrote:
> > > > Ok. I ran the initscripts and now xl works.
> > > >
> > > > However, I still see the same behavior as before:
> > > >
> > >
> > > Did you use the patch that was mentioned in the URL?
> > >
> > > > root@fiat:~# xl create /etc/xen/ubuntu-hvm-0.cfg
> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > libxl: error: libxl_qmp.c:448:qmp_next: Socket read error: Connection reset by peer
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > libxl: error: libxl_qmp.c:691:libxl__qmp_initialize: Connection error: Connection refused
> > > > root@fiat:~# xl list
> > > > Name                                        ID   Mem VCPUs State   Time(s)
> > > > Domain-0                                     0  1024     1     r-----      15.2
> > > > ubuntu-hvm-0                                 1  1025     1     ------       0.0
> > > >
> > > > (XEN)  Dom0 kernel: 64-bit, PAE, lsb, paddr 0x1000000 -> 0x23f3000
> > > > (XEN) PHYSICAL MEMORY ARRANGEMENT:
> > > > (XEN)  Dom0 alloc.:   0000000134000000->0000000138000000 (233690 pages to be allocated)
> > > > (XEN)  Init. ramdisk: 000000013d0da000->000000013ffffe00
> > > > (XEN) VIRTUAL MEMORY ARRANGEMENT:
> > > > (XEN)  Loaded kernel: ffffffff81000000->ffffffff823f3000
> > > > (XEN)  Init. ramdisk: ffffffff823f3000->ffffffff85318e00
> > > > (XEN)  Phys-Mach map: ffffffff85319000->ffffffff85519000
> > > > (XEN)  Start info:    ffffffff85519000->ffffffff855194b4
> > > > (XEN)  Page tables:   ffffffff8551a000->ffffffff85549000
> > > > (XEN)  Boot stack:    ffffffff85549000->ffffffff8554a000
> > > > (XEN)  TOTAL:         ffffffff80000000->ffffffff85800000
> > > > (XEN)  ENTRY ADDRESS: ffffffff81d261e0
> > > > (XEN) Dom0 has maximum 1 VCPUs
> > > > (XEN) elf_load_binary: phdr 0 at 0xffffffff81000000 -> 0xffffffff81b2f000
> > > > (XEN) elf_load_binary: phdr 1 at 0xffffffff81c00000 -> 0xffffffff81d0f0f0
> > > > (XEN) elf_load_binary: phdr 2 at 0xffffffff81d10000 -> 0xffffffff81d252c0
> > > > (XEN) elf_load_binary: phdr 3 at 0xffffffff81d26000 -> 0xffffffff81e6d000
> > > > (XEN) Scrubbing Free RAM: .............................done.
> > > > (XEN) Initial low memory virq threshold set at 0x4000 pages.
> > > > (XEN) Std. Loglevel: All
> > > > (XEN) Guest Loglevel: All
> > > > (XEN) Xen is relinquishing VGA console.
> > > > (XEN) *** Serial input -> DOM0 (type 'CTRL-a' three times to switch input to Xen)
> > > > (XEN) Freed 260kB init memory.
> > > > (XEN) PCI add device 0000:00:00.0
> > > > (XEN) PCI add device 0000:00:01.0
> > > > (XEN) PCI add device 0000:00:1a.0
> > > > (XEN) PCI add device 0000:00:1c.0
> > > > (XEN) PCI add device 0000:00:1d.0
> > > > (XEN) PCI add device 0000:00:1e.0
> > > > (XEN) PCI add device 0000:00:1f.0
> > > > (XEN) PCI add device 0000:00:1f.2
> > > > (XEN) PCI add device 0000:00:1f.3
> > > > (XEN) PCI add device 0000:01:00.0
> > > > (XEN) PCI add device 0000:02:02.0
> > > > (XEN) PCI add device 0000:02:04.0
> > > > (XEN) PCI add device 0000:03:00.0
> > > > (XEN) PCI add device 0000:03:00.1
> > > > (XEN) PCI add device 0000:04:00.0
> > > > (XEN) PCI add device 0000:04:00.1
> > > > (XEN) PCI add device 0000:05:00.0
> > > > (XEN) PCI add device 0000:05:00.1
> > > > (XEN) PCI add device 0000:06:03.0
> > > > (XEN) page_alloc.c:1460:d0 Over-allocation for domain 1: 262401 > 262400
> > > > (XEN) memory.c:158:d0 Could not allocate order=0 extent: id=1 memflags=0 (200 of 1024)
> > > > (d1) HVM Loader
> > > > (d1) Detected Xen v4.4-rc2
> > > > (d1) Xenbus rings @0xfeffc000, event channel 4
> > > > (d1) System requested SeaBIOS
> > > > (d1) CPU speed is 3093 MHz
> > > > (d1) Relocating guest memory for lowmem MMIO space disabled
> > > >
> > > >
> > > > Excerpt from /var/log/xen/*
> > > > qemu: hardware error: xen: failed to populate ram at 40050000
> > > >
> > > >
> > > > On Fri, Feb 7, 2014 at 3:39 PM, Konrad Rzeszutek Wilk <
> > > > konrad.wilk@xxxxxxxxxx> wrote:
> > > >
> > > > > On Fri, Feb 07, 2014 at 03:36:49PM -0500, Mike Neiderhauser wrote:
> > > > > > I was able to compile and install Xen 4.4 RC3 on my host; however,
> > > > > > I am getting the error:
> > > > > > root@fiat:~/git/xen# xl list
> > > > > > xc: error: Could not obtain handle on privileged command interface
> > > > > > (2 = No such file or directory): Internal error
> > > > > > libxl: error: libxl.c:92:libxl_ctx_alloc: cannot open libxc handle:
> > > > > > No such file or directory
> > > > > > cannot init xl context
> > > > > >
> > > > > > I've searched Google for this and an article comes up, but it is
> > > > > > not the same (as far as I can tell).  Running any xl command
> > > > > > generates a similar error.
> > > > > >
> > > > > > What can I do to fix this?
> > > > >
> > > > >
> > > > > You need to run the initscripts for Xen. I don't know what your
> > > > > distro is, but they are usually put in /etc/init.d/rc.d/xen*
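> > > > >
> > > > > For example, on many distros something like this (script names vary)
> > > > > starts xenstored and friends so that xl can talk to the hypervisor:
> > > > >
> > > > > /etc/init.d/xencommons start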
> > > > >
> > > > >
> > > > > >
> > > > > > Regards
> > > > > >
> > > > > >
> > > > > > On Fri, Feb 7, 2014 at 1:40 PM, Mike Neiderhauser <
> > > > > > mikeneiderhauser@xxxxxxxxx> wrote:
> > > > > >
> > > > > > > Much.  Do I need to install from src, or is there a package I
> > > > > > > can install?
> > > > > > >
> > > > > > > Regards
> > > > > > >
> > > > > > >
> > > > > > > On Fri, Feb 7, 2014 at 1:30 PM, Konrad Rzeszutek Wilk <
> > > > > > > konrad.wilk@xxxxxxxxxx> wrote:
> > > > > > >
> > > > > > >> On Fri, Feb 07, 2014 at 10:53:22AM -0500, Mike Neiderhauser wrote:
> > > > > > >> > I did not.  I do not have the toolchain installed.  I may
> > > > > > >> > have time later today to try the patch.  Are there any
> > > > > > >> > specific instructions on how to patch the src, compile and
> > > > > > >> > install?
> > > > > > >>
> > > > > > >> There actually should be a new version of Xen 4.4-rcX which
> > > > > > >> will have the fix. That might be easier for you?
> > > > > > >> >
> > > > > > >> > Regards
> > > > > > >> >
> > > > > > >> >
> > > > > > >> > On Fri, Feb 7, 2014 at 10:25 AM, Konrad Rzeszutek Wilk <
> > > > > > >> > konrad.wilk@xxxxxxxxxx> wrote:
> > > > > > >> >
> > > > > > >> > > On Thu, Feb 06, 2014 at 09:39:37AM -0500, Mike Neiderhauser
> > > wrote:
> > > > > > >> > > > Hi all,
> > > > > > >> > > >
> > > > > > >> > > > I am attempting to do a PCI passthrough of an Intel ET
> > > > > > >> > > > card (4x1G NIC) to an HVM.  I have been attempting to
> > > > > > >> > > > resolve this issue on the xen-users list, but it was
> > > > > > >> > > > advised to post it to this list.  (Initial message:
> > > > > > >> > > > http://lists.xenproject.org/archives/html/xen-users/2014-02/msg00036.html )
> > > > > > >> > > >
> > > > > > >> > > > The machine I am using as the host is a Dell PowerEdge
> > > > > > >> > > > server with a Xeon E3-1220 and 4GB of RAM.
> > > > > > >> > > >
> > > > > > >> > > > The possible bug is the following:
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at 40030000
> > > > > > >> > > > ....
> > > > > > >> > > >
> > > > > > >> > > > I believe it may be similar to this thread:
> > > > > > >> > > > http://markmail.org/message/3zuiojywempoorxj#query:+page:1+mid:gul34vbe4uyog2d4+state:results
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > Additional info that may be helpful is below.
> > > > > > >> > >
> > > > > > >> > > Did you try the patch?
> > > > > > >> > > >
> > > > > > >> > > > Please let me know if you need any additional information.
> > > > > > >> > > >
> > > > > > >> > > > Thanks in advance for any help provided!
> > > > > > >> > > > Regards
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# cat /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > # Configuration file for Xen HVM
> > > > > > >> > > >
> > > > > > >> > > > # HVM Name (as appears in 'xl list')
> > > > > > >> > > > name="ubuntu-hvm-0"
> > > > > > >> > > > # HVM Build settings (+ hardware)
> > > > > > >> > > > #kernel = "/usr/lib/xen-4.3/boot/hvmloader"
> > > > > > >> > > > builder='hvm'
> > > > > > >> > > > device_model='qemu-dm'
> > > > > > >> > > > memory=1024
> > > > > > >> > > > vcpus=2
> > > > > > >> > > >
> > > > > > >> > > > # Virtual Interface
> > > > > > >> > > > # Network bridge to USB NIC
> > > > > > >> > > > vif=['bridge=xenbr0']
> > > > > > >> > > >
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > > # PCI Permissive mode toggle
> > > > > > >> > > > #pci_permissive=1
> > > > > > >> > > >
> > > > > > >> > > > # All PCI Devices
> > > > > > >> > > > #pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1', '05:00.0',
> > > > > > >> '05:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # First two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['03:00.0','03:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Last two ports on Intel 4x1G NIC
> > > > > > >> > > > #pci=['04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # All ports on Intel 4x1G NIC
> > > > > > >> > > > pci=['03:00.0', '03:00.1', '04:00.0', '04:00.1']
> > > > > > >> > > >
> > > > > > >> > > > # Broadcom 2x1G NIC
> > > > > > >> > > > #pci=['05:00.0', '05:00.1']
> > > > > > >> > > > ################### PCI PASSTHROUGH ###################
> > > > > > >> > > >
> > > > > > >> > > > # HVM Disks
> > > > > > >> > > > # Hard disk only
> > > > > > >> > > > # Boot from HDD first ('c')
> > > > > > >> > > > boot="c"
> > > > > > >> > > > disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w']
> > > > > > >> > > >
> > > > > > >> > > > # Hard disk with ISO
> > > > > > >> > > > # Boot from ISO first ('d')
> > > > > > >> > > > #boot="d"
> > > > > > >> > > > #disk=['phy:/dev/ubuntu-vg/ubuntu-hvm-0,hda,w',
> > > > > > >> > > > 'file:/root/ubuntu-12.04.3-server-amd64.iso,hdc:cdrom,r']
> > > > > > >> > > >
> > > > > > >> > > > # ACPI Enable
> > > > > > >> > > > acpi=1
> > > > > > >> > > > # HVM Event Modes
> > > > > > >> > > >
> > > > > > >> > > > # Serial Console Configuration (Xen Console)
> > > > > > >> > > > sdl=0
> > > > > > >> > > > serial='pty'
> > > > > > >> > > >
> > > > > > >> > > > # VNC Configuration
> > > > > > >> > > > # Only reachable from localhost
> > > > > > >> > > > vnc=1
> > > > > > >> > > > vnclisten="0.0.0.0"
> > > > > > >> > > > vncpasswd=""
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > Copied from xen-users list
> > > > > > >> > > > ###########################################################
> > > > > > >> > > >
> > > > > > >> > > > It appears that it cannot obtain the RAM mapping for
> > > > > > >> > > > this PCI device.
> > > > > > >> > > >
> > > > > > >> > > >
> > > > > > >> > > > I rebooted the host and assigned the PCI devices to
> > > > > > >> > > > pciback.  The output looks like:
> > > > > > >> > > > root@fiat:~# ./dev_mgmt.sh
> > > > > > >> > > > Loading Kernel Module 'xen-pciback'
> > > > > > >> > > > Calling function pciback_dev for:
> > > > > > >> > > > PCI DEVICE 0000:03:00.0
> > > > > > >> > > > Unbinding 0000:03:00.0 from igb
> > > > > > >> > > > Binding 0000:03:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:03:00.1
> > > > > > >> > > > Unbinding 0000:03:00.1 from igb
> > > > > > >> > > > Binding 0000:03:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.0
> > > > > > >> > > > Unbinding 0000:04:00.0 from igb
> > > > > > >> > > > Binding 0000:04:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:04:00.1
> > > > > > >> > > > Unbinding 0000:04:00.1 from igb
> > > > > > >> > > > Binding 0000:04:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.0
> > > > > > >> > > > Unbinding 0000:05:00.0 from bnx2
> > > > > > >> > > > Binding 0000:05:00.0 to pciback
> > > > > > >> > > >
> > > > > > >> > > > PCI DEVICE 0000:05:00.1
> > > > > > >> > > > Unbinding 0000:05:00.1 from bnx2
> > > > > > >> > > > Binding 0000:05:00.1 to pciback
> > > > > > >> > > >
> > > > > > >> > > > Listing PCI Devices Available to Xen
> > > > > > >> > > > 0000:03:00.0
> > > > > > >> > > > 0000:03:00.1
> > > > > > >> > > > 0000:04:00.0
> > > > > > >> > > > 0000:04:00.1
> > > > > > >> > > > 0000:05:00.0
> > > > > > >> > > > 0000:05:00.1
> > > > > > >> > > >
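> > > > > > >> > > > (dev_mgmt.sh itself is not shown here; for each device it
> > > > > > >> > > > presumably does the usual sysfs dance, roughly:
> > > > > > >> > > >
> > > > > > >> > > > echo 0000:03:00.0 > /sys/bus/pci/drivers/igb/unbind
> > > > > > >> > > > echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/new_slot
> > > > > > >> > > > echo 0000:03:00.0 > /sys/bus/pci/drivers/pciback/bind
> > > > > > >> > > >
> > > > > > >> > > > and the listing above matches the output of
> > > > > > >> > > > 'xl pci-assignable-list'.)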
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:~# xl -vvv create /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > Parsing config from /etc/xen/ubuntu-hvm-0.cfg
> > > > > > >> > > > WARNING: ignoring device_model directive.
> > > > > > >> > > > WARNING: Use "device_model_override" instead if you
> > > > > > >> > > > really want a non-default device_model
> > > > > > >> > > > libxl: debug: libxl_create.c:1230:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> create:
> > > > > > >> > > > how=(nil) callback=(nil) poller=0x210c3c0
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=unknown
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:296:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda, using backend phy
> > > > > > >> > > > libxl: debug: libxl_create.c:675:initiate_domain_create:
> > > running
> > > > > > >> > > bootloader
> > > > > > >> > > > libxl: debug: libxl_bootloader.c:321:libxl__bootloader_run:
> > > not
> > > > > a PV
> > > > > > >> > > > domain, skipping bootloader
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c728: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_numa.c:475:libxl__get_numa_candidate:
> > > New
> > > > > best
> > > > > > >> NUMA
> > > > > > >> > > > placement candidate found: nr_nodes=1, nr_cpus=4,
> > > nr_vcpus=3,
> > > > > > >> > > > free_memkb=2980
> > > > > > >> > > > libxl: detail: libxl_dom.c:195:numa_place_domain: NUMA
> > > placement
> > > > > > >> > > candidate
> > > > > > >> > > > with 1 nodes, 4 cpus and 2980 KB free selected
> > > > > > >> > > > xc: detail: elf_parse_binary: phdr: paddr=0x100000
> > > memsz=0xa69a4
> > > > > > >> > > > xc: detail: elf_parse_binary: memory: 0x100000 -> 0x1a69a4
> > > > > > >> > > > xc: info: VIRTUAL MEMORY ARRANGEMENT:
> > > > > > >> > > >   Loader:        0000000000100000->00000000001a69a4
> > > > > > >> > > >   Modules:       0000000000000000->0000000000000000
> > > > > > >> > > >   TOTAL:         0000000000000000->000000003f800000
> > > > > > >> > > >   ENTRY ADDRESS: 0000000000100608
> > > > > > >> > > > xc: info: PHYSICAL MEMORY ALLOCATION:
> > > > > > >> > > >   4KB PAGES: 0x0000000000000200
> > > > > > >> > > >   2MB PAGES: 0x00000000000001fb
> > > > > > >> > > >   1GB PAGES: 0x0000000000000000
> > > > > > >> > > > xc: detail: elf_load_binary: phdr 0 at 0x7f022c779000 ->
> > > > > > >> 0x7f022c81682d
> > > > > > >> > > > libxl: debug:
> > > libxl_device.c:257:libxl__device_disk_set_backend:
> > > > > > >> Disk
> > > > > > >> > > > vdev=hda spec.backend=phy
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > register slotnum=3
> > > > > > >> > > > libxl: debug: libxl_create.c:1243:do_domain_create: ao
> > > > > 0x210c360:
> > > > > > >> > > > inprogress: poller=0x210c3c0, flags=i
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 still
> > > > > waiting
> > > > > > >> > > state 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x2112f48
> > > > > > >> > > > wpath=/local/domain/0/backend/vbd/2/768/state token=3/0:
> > > event
> > > > > > >> > > > epath=/local/domain/0/backend/vbd/2/768/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vbd/2/768/state wanted state 2 ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48 wpath=/local/domain/0/backend/vbd/2/768/state
> > > > > token=3/0:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x2112f48: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/block add
> > > > > > >> > > > libxl: debug: libxl_dm.c:1206:libxl__spawn_local_dm:
> > > Spawning
> > > > > > >> > > device-model
> > > > > > >> > > > /usr/bin/qemu-system-i386 with arguments:
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > /usr/bin/qemu-system-i386
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > -xen-domid
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -chardev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > socket,id=libxl-cmd,path=/var/run/xen/qmp-libxl-2,server,nowait
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -mon
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > chardev=libxl-cmd,mode=control
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -name
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > ubuntu-hvm-0
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vnc
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 0.0.0.0:0
> > > > > > >> ,to=99
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> isa-fdc.driveA=
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -serial
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   pty
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -vga
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > cirrus
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -global
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> vga.vram_size_mb=8
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -boot
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > order=c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -smp
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > 2,maxcpus=2
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -device
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > rtl8139,id=nic0,netdev=net0,mac=00:16:3e:23:44:2c
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -netdev
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > > type=tap,id=net0,ifname=vif2.0-emu,script=no,downscript=no
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -M
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   xenfv
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   -m
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:   1016
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > -drive
> > > > > > >> > > > libxl: debug: libxl_dm.c:1208:libxl__spawn_local_dm:
> > > > > > >> > > >
> > > > > > >> > >
> > > > > > >>
> > > > >
> > > file=/dev/ubuntu-vg/ubuntu-hvm-0,if=ide,index=0,media=disk,format=raw,cache=writeback
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > > token=3/1:
> > > > > > >> > > register
> > > > > > >> > > > slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210c960
> > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210c960
> > > > > > >> > > > wpath=/local/domain/0/device-model/2/state token=3/1: event
> > > > > > >> > > > epath=/local/domain/0/device-model/2/state
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c960 wpath=/local/domain/0/device-model/2/state
> > > > > token=3/1:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210c960: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > > connected
> > > > > to
> > > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > > > type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > > >> > > >     "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "query-chardev",
> > > > > > >> > > >     "id": 2
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "change",
> > > > > > >> > > >     "id": 3,
> > > > > > >> > > >     "arguments": {
> > > > > > >> > > >         "device": "vnc",
> > > > > > >> > > >         "target": "password",
> > > > > > >> > > >         "arg": ""
> > > > > > >> > > >     }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "query-vnc",
> > > > > > >> > > >     "id": 4
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_event.c:559:libxl__ev_xswatch_register:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > > token=3/2:
> > > > > > >> > > register
> > > > > > >> > > > slotnum=3
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210e8a8
> > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:647:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 still
> > > > > waiting
> > > > > > >> state
> > > > > > >> > > 1
> > > > > > >> > > > libxl: debug: libxl_event.c:503:watchfd_callback: watch
> > > > > w=0x210e8a8
> > > > > > >> > > > wpath=/local/domain/0/backend/vif/2/0/state token=3/2: event
> > > > > > >> > > > epath=/local/domain/0/backend/vif/2/0/state
> > > > > > >> > > > libxl: debug: libxl_event.c:643:devstate_watch_callback:
> > > backend
> > > > > > >> > > > /local/domain/0/backend/vif/2/0/state wanted state 2 ok
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:596:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8 wpath=/local/domain/0/backend/vif/2/0/state
> > > > > token=3/2:
> > > > > > >> > > > deregister slotnum=3
> > > > > > >> > > > libxl: debug:
> > > libxl_event.c:608:libxl__ev_xswatch_deregister:
> > > > > watch
> > > > > > >> > > > w=0x210e8a8: deregister unregistered
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/vif-bridge online
> > > > > > >> > > > libxl: debug: libxl_device.c:959:device_hotplug: calling
> > > hotplug
> > > > > > >> script:
> > > > > > >> > > > /etc/xen/scripts/vif-bridge add
> > > > > > >> > > > libxl: debug: libxl_qmp.c:707:libxl__qmp_initialize:
> > > connected
> > > > > to
> > > > > > >> > > > /var/run/xen/qmp-libxl-2
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > > > type: qmp
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "qmp_capabilities",
> > > > > > >> > > >     "id": 1
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: debug: libxl_qmp.c:299:qmp_handle_response: message
> > > type:
> > > > > > >> return
> > > > > > >> > > > libxl: debug: libxl_qmp.c:555:qmp_send_prepare: next qmp
> > > > > command: '{
> > > > > > >> > > >     "execute": "device_add",
> > > > > > >> > > >     "id": 2,
> > > > > > >> > > >     "arguments": {
> > > > > > >> > > >         "driver": "xen-pci-passthrough",
> > > > > > >> > > >         "id": "pci-pt-03_00.0",
> > > > > > >> > > >         "hostaddr": "0000:03:00.0"
> > > > > > >> > > >     }
> > > > > > >> > > > }
> > > > > > >> > > > '
> > > > > > >> > > > libxl: error: libxl_qmp.c:454:qmp_next: Socket read error:
> > > > > > >> Connection
> > > > > > >> > > reset
> > > > > > >> > > > by peer
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: error: libxl_qmp.c:702:libxl__qmp_initialize:
> > > Connection
> > > > > > >> error:
> > > > > > >> > > > Connection refused
> > > > > > >> > > > libxl: debug: libxl_pci.c:81:libxl__create_pci_backend:
> > > > > Creating pci
> > > > > > >> > > backend
> > > > > > >> > > > libxl: debug: libxl_event.c:1737:libxl__ao_progress_report:
> > > ao
> > > > > > >> 0x210c360:
> > > > > > >> > > > progress report: ignored
> > > > > > >> > > > libxl: debug: libxl_event.c:1569:libxl__ao_complete: ao
> > > > > 0x210c360:
> > > > > > >> > > > complete, rc=0
> > > > > > >> > > > libxl: debug: libxl_event.c:1541:libxl__ao__destroy: ao
> > > > > 0x210c360:
> > > > > > >> > > destroy
> > > > > > >> > > > Daemon running with PID 3214
> > > > > > >> > > > xc: debug: hypercall buffer: total allocations:793 total
> > > > > > >> releases:793
> > > > > > >> > > > xc: debug: hypercall buffer: current allocations:0 maximum
> > > > > > >> allocations:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache current size:4
> > > > > > >> > > > xc: debug: hypercall buffer: cache hits:785 misses:4
> > > toobig:4
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > root@fiat:/var/log/xen# cat qemu-dm-ubuntu-hvm-0.log
> > > > > > >> > > > char device redirected to /dev/pts/5 (label serial0)
> > > > > > >> > > > qemu: hardware error: xen: failed to populate ram at
> > > 40030000
> > > > > > >> > > > CPU #0:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> > > HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > > CPU #1:
> > > > > > >> > > > EAX=00000000 EBX=00000000 ECX=00000000 EDX=00000633
> > > > > > >> > > > ESI=00000000 EDI=00000000 EBP=00000000 ESP=00000000
> > > > > > >> > > > EIP=0000fff0 EFL=00000002 [-------] CPL=0 II=0 A20=1 SMM=0
> > > HLT=1
> > > > > > >> > > > ES =0000 00000000 0000ffff 00009300
> > > > > > >> > > > CS =f000 ffff0000 0000ffff 00009b00
> > > > > > >> > > > SS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > DS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > FS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > GS =0000 00000000 0000ffff 00009300
> > > > > > >> > > > LDT=0000 00000000 0000ffff 00008200
> > > > > > >> > > > TR =0000 00000000 0000ffff 00008b00
> > > > > > >> > > > GDT=     00000000 0000ffff
> > > > > > >> > > > IDT=     00000000 0000ffff
> > > > > > >> > > > CR0=60000010 CR2=00000000 CR3=00000000 CR4=00000000
> > > > > > >> > > > DR0=00000000 DR1=00000000 DR2=00000000 DR3=00000000
> > > > > > >> > > > DR6=ffff0ff0 DR7=00000400
> > > > > > >> > > > EFER=0000000000000000
> > > > > > >> > > > FCW=037f FSW=0000 [ST=0] FTW=00 MXCSR=00001f80
> > > > > > >> > > > FPR0=0000000000000000 0000 FPR1=0000000000000000 0000
> > > > > > >> > > > FPR2=0000000000000000 0000 FPR3=0000000000000000 0000
> > > > > > >> > > > FPR4=0000000000000000 0000 FPR5=0000000000000000 0000
> > > > > > >> > > > FPR6=0000000000000000 0000 FPR7=0000000000000000 0000
> > > > > > >> > > > XMM00=00000000000000000000000000000000
> > > > > > >> > > > XMM01=00000000000000000000000000000000
> > > > > > >> > > > XMM02=00000000000000000000000000000000
> > > > > > >> > > > XMM03=00000000000000000000000000000000
> > > > > > >> > > > XMM04=00000000000000000000000000000000
> > > > > > >> > > > XMM05=00000000000000000000000000000000
> > > > > > >> > > > XMM06=00000000000000000000000000000000
> > > > > > >> > > > XMM07=00000000000000000000000000000000
> > > > > > >> > > >
> > > > > > >> > > > ###########################################################
> > > > > > >> > > > /etc/default/grub
> > > > > > >> > > > GRUB_DEFAULT="Xen 4.3-amd64"
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT=0
> > > > > > >> > > > GRUB_HIDDEN_TIMEOUT_QUIET=true
> > > > > > >> > > > GRUB_TIMEOUT=10
> > > > > > >> > > > GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo
> > > Debian`
> > > > > > >> > > > GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
> > > > > > >> > > > GRUB_CMDLINE_LINUX=""
> > > > > > >> > > > # biosdevname=0
> > > > > > >> > > > GRUB_CMDLINE_XEN="dom0_mem=1024M dom0_max_vcpus=1"
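> > > > > > >> > > > (After editing /etc/default/grub, run update-grub and
> > > > > > >> > > > reboot for these Xen command-line options to take effect.)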
> > > > > > >> > >

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


