
Re: [Xen-devel] [PATCH 3/5] hvmemul_do_io: If the send to the ioreq server failed do not retry.



On 01/30/15 05:23, Jan Beulich wrote:
>>>> On 30.01.15 at 01:52, <dslutz@xxxxxxxxxxx> wrote:
>> I.E. do just what no backing DM does.
> 
> _If_ this is correct, the if() modified here should be folded with the
> one a few lines up.

Ok, will fold it with the one a few lines up.  As expected, without this
change the guest hangs (more details below).


> But looking at the description of the commit that
> introduced this (bac0999325 "x86 hvm: Do not incorrectly retire an
> instruction emulation...", almost immediately modified by f20f3c8ece
> "x86 hvm: On failed hvm_send_assist_req(), io emulation...") I doubt
> this is really what we want, or at the very least your change
> description should explain what was wrong with the original commit.
> 

Looking at these two commits, it looks to me like I need to handle those
cases as well.  So having hvm_send_assist_req() return an int (0, 1, or
2) looks like the simpler way.  V2 on the way soon.

Which is the better coding style:

1) Add #defines for the return values?
2) Just use numbers?
3) Invert the sense: 0 means it worked, 1 means shutdown deferral or
   domain_crash, 2 means hvm_complete_assist_req()?
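
For option 1, something like the following (the names are made up for
illustration; nothing in the tree defines them today):

/* Possible return values for hvm_send_assist_req(). */
#define HVM_SEND_FAILED     0  /* shutdown deferral or domain_crash */
#define HVM_SEND_INFLIGHT   1  /* request handed to an ioreq server */
#define HVM_SEND_COMPLETED  2  /* handled via hvm_complete_assist_req() */

The switch in the diff below would then read "case HVM_SEND_COMPLETED:"
and so on, which keeps the call site self-documenting.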


I.e. (using plain numbers, per option 2):


diff --git a/xen/arch/x86/hvm/emulate.c b/xen/arch/x86/hvm/emulate.c
index 2ed4344..c565151 100644
--- a/xen/arch/x86/hvm/emulate.c
+++ b/xen/arch/x86/hvm/emulate.c
@@ -227,10 +227,19 @@ static int hvmemul_do_io(
         else
         {
             rc = X86EMUL_RETRY;
-            if ( !hvm_send_assist_req(&p) )
-                vio->io_state = HVMIO_none;
-            else if ( p_data == NULL )
+            switch ( hvm_send_assist_req(&p) )
+            {
+            case 2:
                 rc = X86EMUL_OKAY;
+                /* fall through */
+            case 0:
+                vio->io_state = HVMIO_none;
+                break;
+            case 1:
+                if ( p_data == NULL )
+                    rc = X86EMUL_OKAY;
+                break;
+            }
         }


???



I would think it would be good for the code using hvm_has_dm() to also
call hvm_complete_assist_req().  The checks hvm_has_dm() makes are a
subset of the ones hvm_select_ioreq_server() makes.

Based on this, I think it would be better to remove the call to
hvm_has_dm() rather than add a call to hvm_complete_assist_req().
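
Roughly, as I read the code (a simplified sketch, not the literal
source):

/* hvm_has_dm() only asks whether any ioreq server is registered: */
bool_t hvm_has_dm(struct domain *d)
{
    return !list_empty(&d->arch.hvm_domain.ioreq_server.list);
}

/*
 * hvm_select_ioreq_server() additionally matches the request's type
 * and address range against each registered server, so it can return
 * NULL even when hvm_has_dm() returns 1.  Dropping the hvm_has_dm()
 * check therefore loses nothing, as long as the no-server case falls
 * through to hvm_complete_assist_req().
 */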


   -Don Slutz
P.S. Details of the hang:

Using:

 static bool_t hvm_complete_assist_req(ioreq_t *p)
 {
+    HVMTRACE_1D(COMPLETE_ASSIST, p->type);
 ...
 }

I get the trace:

CPU1  745455716325 (+    1362)  VMEXIT      [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU1  0 (+       0)  COMPLETE_ASSIST [ type = 0 ]
CPU1  0 (+       0)  HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 0 ]
CPU1  745455717846 (+    1521)  vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU1  745455718209 (+     363)  VMENTRY
CPU1  745455719568 (+    1359)  VMEXIT      [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU1  0 (+       0)  COMPLETE_ASSIST [ type = 0 ]
CPU1  0 (+       0)  HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 0 ]
CPU1  745455721083 (+    1515)  vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU1  745455721422 (+     339)  VMENTRY
CPU1  745455722781 (+    1359)  VMEXIT      [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU1  0 (+       0)  COMPLETE_ASSIST [ type = 0 ]
CPU1  0 (+       0)  HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 0 ]
CPU1  745455724299 (+    1518)  vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU1  745455724656 (+     357)  VMENTRY
CPU1  745455726009 (+    1353)  VMEXIT      [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU1  0 (+       0)  COMPLETE_ASSIST [ type = 0 ]
CPU1  0 (+       0)  HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 0 ]
CPU1  745455727539 (+    1530)  vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU1  745455727899 (+     360)  VMENTRY
CPU1  745455729276 (+    1377)  VMEXIT      [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU1  0 (+       0)  COMPLETE_ASSIST [ type = 0 ]
CPU1  0 (+       0)  HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 0 ]
CPU1  745455730803 (+    1527)  vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU1  745455731163 (+     360)  VMENTRY
CPU1  745455732525 (+    1362)  VMEXIT      [ exitcode = 0x0000001e, rIP = 0x00101581 ]
CPU1  0 (+       0)  COMPLETE_ASSIST [ type = 0 ]
CPU1  0 (+       0)  HANDLE_PIO [ port = 0x0cfe size = 2 dir = 1 io_state = 0 ret = 0 ]
CPU1  745455734049 (+    1524)  vlapic_accept_pic_intr [ i8259_target = 1, accept_pic_int = 1 ]
CPU1  745455734385 (+     336)  VMENTRY
...

(The same five-record sequence repeats indefinitely: every VMEXIT is at
the same rIP, 0x00101581, re-issuing the same 2-byte read of port
0x0cfe, so the guest never makes progress past that instruction; this
is the hang.)


> Jan
> 
>> --- a/xen/arch/x86/hvm/emulate.c
>> +++ b/xen/arch/x86/hvm/emulate.c
>> @@ -228,7 +228,11 @@ static int hvmemul_do_io(
>>          {
>>              rc = X86EMUL_RETRY;
>>              if ( !hvm_send_assist_req(&p) )
>> +            {
>> +                /* Since the send failed, do not retry */
>> +                rc = X86EMUL_OKAY;
>>                  vio->io_state = HVMIO_none;
>> +            }
>>              else if ( p_data == NULL )
>>                  rc = X86EMUL_OKAY;
>>          }
>> -- 
>> 1.8.4
> 
> 
> 

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel


 

