
Re: [Xen-devel] [PATCH v3 06/11] hvm/hpet: In hpet_save, call hpet_get_comparator.



>>> On 17.04.14 at 19:43, <dslutz@xxxxxxxxxxx> wrote:
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -495,6 +495,7 @@ static int hpet_save(struct domain *d, 
> hvm_domain_context_t *h)
>  {
>      HPETState *hp = domain_vhpet(d);
>      int rc;
> +    uint64_t guest_time;
>  
>      spin_lock(&hp->lock);
>  
> @@ -524,6 +525,12 @@ static int hpet_save(struct domain *d, 
> hvm_domain_context_t *h)
>          C(period[1]);
>          C(period[2]);
>  #undef C
> +        /* read the comparator to get it updated so hpet_save will
> +         * return the expected value. */
> +        guest_time = hp->hpet.mc64 - hp->mc_offset;

Wouldn't you be better off loading guest_time via guest_time_hpet()
right after taking the lock, using the variable also in the assignment
to mc64? I'm particularly suspecting an inconsistency in the
!hpet_enabled() case (i.e. when mc64 doesn't get updated
anymore). Iirc a later patch (in the earlier version of the series)
makes the adjustment in hpet_get_comparator() also conditional
upon the HPET being enabled, in which case the end effect would
be identical (due to hpet_get_comparator() then becoming a nop for
that case).

In any event, please fix your comment style; the immediately following
comment is malformed too, so feel free to adjust that one at the same
time.

> +        hpet_get_comparator(hp, 0, guest_time);
> +        hpet_get_comparator(hp, 1, guest_time);
> +        hpet_get_comparator(hp, 2, guest_time);
>          /* save the 64 bit comparator in the 64 bit timer[n].cmp field
>           * regardless of whether or not the timer is in 32 bit mode. */
>          rec->timers[0].cmp = hp->hpet.comparator64[0];
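For reference, the comment style being asked for is Xen's multi-line form, i.e. something like the below. This is a rewording for illustration, not the exact text the patch needs to use.

```c
/*
 * Read the comparators so that hpet_save() returns the values the
 * guest would see.
 */

/*
 * Save the 64 bit comparator in the 64 bit timer[n].cmp field
 * regardless of whether or not the timer is in 32 bit mode.
 */
```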

Jan


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel

