
Re: [Xen-devel] [Xen-users] xen_pt_region_update: Error: create new mem mapping failed! (err: 22)



On Wed, Jan 24, 2018 at 9:59 PM, Håkon Alstadheim
<hakon@xxxxxxxxxxxxxxxxxx> wrote:
> I'm trying, and failing, to launch a vm with bios = 'ovmf' under xen 4.10.
>
> The domain launches OK as long as I do not pass any pci devices through,
> but with pci devices passed through,

Anthony,

Does OVMF support PCI pass-through yet?

 -George

> I get the following in the
> device-model.log:
> -----
> qemu-system-i386: -serial pty: char device redirected to /dev/pts/17
> (label serial0)
> [00:06.0] xen_pt_region_update: Error: create new mem mapping failed!
> (err: 22)
> [00:06.0] xen_pt_region_update: Error: remove old mem mapping failed!
> (err: 22)
> [00:07.0] xen_pt_region_update: Error: create new mem mapping failed!
> (err: 22)
> [00:07.0] xen_pt_region_update: Error: remove old mem mapping failed!
> (err: 22)
> [00:08.0] xen_pt_region_update: Error: create new mem mapping failed!
> (err: 22)
> [00:08.0] xen_pt_region_update: Error: create new mem mapping failed!
> (err: 22)
> [00:08.0] xen_pt_region_update: Error: remove old mem mapping failed!
> (err: 22)
> [00:08.0] xen_pt_region_update: Error: remove old mem mapping failed!
> (err: 22)
> -------
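>
> For what it's worth, err 22 looks like EINVAL coming back from the
> memory-mapping operation qemu issues for the device's BARs (that part
> is my guess, not something the log says). If a hypervisor-side message
> would help, I can pull the log with something like this (loglvl=all and
> guest_loglvl=all are already on my command line):
> -----
> # dump the most recent hypervisor console messages around the failed mapping
> xl dmesg | tail -n 100
> -----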
>
> Launch of the domain just hangs before the OVMF setup screen, with
> nothing else happening in any of the logs as far as I can see.
>
> The domain runs fine without a 'bios=' line, but then it is not much use
> to me :-/.
>
> The devices in question are a display card and a USB 3.0 card. Passing
> either or both of the cards results in the same type of failure.
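>
> For reference, the relevant part of the domain config looks roughly
> like this (the BDFs below are placeholders, not my exact addresses):
> -----
> bios = 'ovmf'
> # example BDFs only -- the real list is the display card and the USB 3.0 card
> pci  = [ '0000:03:00.0', '0000:04:00.0' ]
> -----
>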
> --------------------------
> For completeness, here is output of xl info:
> xl info
> host                   : gentoo
> release                : 4.14.15-gentoo
> version                : #1 SMP Wed Jan 24 00:37:30 CET 2018
> machine                : x86_64
> nr_cpus                : 24
> max_cpu_id             : 23
> nr_nodes               : 2
> cores_per_socket       : 6
> threads_per_core       : 2
> cpu_mhz                : 2394
> hw_caps                :
> bfebfbff:77fef3ff:2c100800:00000021:00000001:000037ab:00000000:00000100
> virt_caps              : hvm hvm_directio
> total_memory           : 65379
> free_memory            : 17086
> sharing_freed_memory   : 0
> sharing_used_memory    : 0
> outstanding_claims     : 0
> free_cpus              : 0
> xen_major              : 4
> xen_minor              : 10
> xen_extra              : .0
> xen_version            : 4.10.0
> xen_caps               : xen-3.0-x86_64 xen-3.0-x86_32p hvm-3.0-x86_32
> hvm-3.0-x86_32p hvm-3.0-x86_64
> xen_scheduler          : credit
> xen_pagesize           : 4096
> platform_params        : virt_start=0xffff800000000000
> xen_changeset          :
> xen_commandline        : ssd-xen-dbg-noidle-marker-3
> console_timestamps=date iommu=1,intpost,verbose,debug
> iommu_inclusive_mapping=1 com1=57600,8n1 com2=57600,8n1 console=com2,vga
> dom0_max_vcpus=8 dom0_vcpus_pin=1 dom0_mem=7G,max:7G
> cpufreq=xen:performance,verbose sched_smt_power_savings=1
> core_parking=power nmi=dom0 gnttab_max_frames=256
> gnttab_max_maptrack_frames=1024 vcpu_migration_delay=2000
> tickle_one_idle_cpu=1 cpuidle=0 loglvl=all guest_loglvl=all sync_console
> apic_verbosity=debug e820-verbose=1 tmem=0
> cc_compiler            : gcc (Gentoo 6.4.0 p1.1) 6.4.0
> cc_compile_by          : hakon
> cc_compile_domain      : alstadheim.priv.no
> cc_compile_date        : Sat Jan  6 04:00:57 CET 2018
> build_id               : bd8a311cf81fe38a08e4f43b476409c2
> xend_config_format     : 4
>
>
>
> _______________________________________________
> Xen-users mailing list
> Xen-users@xxxxxxxxxxxxxxxxxxxx
> https://lists.xenproject.org/mailman/listinfo/xen-users

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxxx
https://lists.xenproject.org/mailman/listinfo/xen-devel

 

