Hi,

Some updates:
  
After 'rmmod'-ing the network driver in domU, I could save/migrate the domU.
lspci on the migrated/restored domU hangs for a while and then shows the correct PCI BDF that is assigned.
But dmesg in domU shows the following errors:
  
pcifront pci-0: pciback not responding!!!
pcifront pci-0: pciback not responding!!!
pcifront pci-0: pciback not responding!!!
pcifront pci-0: pciback not responding!!!
pcifront pci-0: pciback not responding!!!
BUG: soft lockup detected on CPU#0!

Call Trace:
 <IRQ>  [<ffffffff80257c0c>] softlockup_tick+0xd8/0xea
 [<ffffffff8020f0ee>] timer_interrupt+0x3a1/0x3ff
 [<ffffffff80257ef8>] handle_IRQ_event+0x4e/0x96
 [<ffffffff80257fe4>] __do_IRQ+0xa4/0x105
 [<ffffffff8020b0e0>] call_softirq+0x1c/0x28
 [<ffffffff8020ceb7>] do_IRQ+0x65/0x73
 [<ffffffff80377106>] evtchn_do_upcall+0xac/0x12d
 [<ffffffff8020ac16>] do_hypervisor_callback+0x1e/0x2c
 <EOI>  [<ffffffff8020622a>] hypercall_page+0x22a/0x1000
 [<ffffffff8020622a>] hypercall_page+0x22a/0x1000
 [<ffffffff80214297>] xen_send_IPI_mask+0x0/0xf5
 [<ffffffff8037664e>] force_evtchn_callback+0xa/0xb
 [<ffffffff8031b2b8>] pci_user_read_config_dword+0x86/0x9d
 [<ffffffff8031fd7c>] pci_read_config+0x114/0x1ae
 [<ffffffff802bef00>] read+0x84/0xc1
 [<ffffffff8027fff9>] vfs_read+0xcb/0x171
 [<ffffffff802804bf>] sys_pread64+0x50/0x70
 [<ffffffff8020ab6b>] error_exit+0x0/0x71
 [<ffffffff8020a42e>] system_call+0x86/0x8b
 [<ffffffff8020a3a8>] system_call+0x0/0x8b
  
Basically, I am checking whether a driver domain can be migrated between two identical machines.
For this, before starting domU, I unbound the PCI function from the network driver and bound it to pciback on both the source and destination systems.
Moreover, the assigned device has the same BDF on both systems.
So I am just curious to know whether what I am trying is a supported feature or not.
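
For reference, the rebinding I did on each host is essentially the standard pciback sysfs procedure. The following is only a rough Python sketch of it (the BDF 0000:00:19.0 is a placeholder, and it assumes the classic "pciback" driver of this dom0 kernel); the same can of course be done by echo'ing into the sysfs files by hand.

#!/usr/bin/env python
# Rough sketch: unbind a PCI function from its current driver in dom0 and
# hand it to pciback via sysfs.  "0000:00:19.0" is only a placeholder BDF.
import os

BDF = "0000:00:19.0"                        # placeholder, use the real BDF
DEV = "/sys/bus/pci/devices/" + BDF
PCIBACK = "/sys/bus/pci/drivers/pciback"

def write(path, value):
    f = open(path, "w")
    try:
        f.write(value)
    finally:
        f.close()

# 1. Unbind the function from whatever driver (e.g. the NIC driver) owns it.
if os.path.islink(os.path.join(DEV, "driver")):
    write(os.path.join(DEV, "driver", "unbind"), BDF)

# 2. Tell pciback about the slot, then bind the function to pciback.
write(os.path.join(PCIBACK, "new_slot"), BDF)
write(os.path.join(PCIBACK, "bind"), BDF)

The point is only that both hosts end up with the function owned by pciback before domU is started.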
  
Thanks for listening to me.
  
regards 
Masroor 
  
Hello, 
  
Is it possible to save a driver domain (with pass-through enabled)?

I got the following error when I tried to save it (Xen v3.1.0):
  
[2008-02-05 20:05:21 3900] INFO (XendCheckpoint:349) Saving memory pages: iter 1  69%ERROR Internal error: Fatal PT race (pfn d94, type 10000000)
[2008-02-05 20:05:21 3900] INFO (XendCheckpoint:349) Save exit rc=1
[2008-02-05 20:05:21 3900] ERROR (XendCheckpoint:140) Save failed on domain vm1 (3).
Traceback (most recent call last):
  File "//usr/lib64/python/xen/xend/XendCheckpoint.py", line 109, in save
    forkHelper(cmd, fd, saveInputHandler, False)
  File "//usr/lib64/python/xen/xend/XendCheckpoint.py", line 337, in forkHelper
    raise XendError("%s failed" % string.join(cmd))
XendError: /usr/lib64/xen/bin/xc_save 23 3 0 0 0 failed
[2008-02-05 20:05:21 3900] DEBUG (XendDomainInfo:1699) XendDomainInfo.resumeDomain(3)
  
I checked the free memory available during saving, and there was enough memory.

Without pass-through, migration works well in the same environment.
  
regards 
Masroor  
_______________________________________________
Xen-users mailing list
Xen-users@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-users 