Re: [Xen-devel] [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend driver
> -----Original Message-----
> From: Stefano Stabellini [mailto:stefano.stabellini@xxxxxxxxxxxxx]
> Sent: Tuesday, January 20, 2015 1:19 AM
> To: Xu, Quan
> Cc: qemu-devel@xxxxxxxxxx; xen-devel@xxxxxxxxxxxxx; stefano.stabellini@xxxxxxxxxxxxx
> Subject: Re: [v3 3/5] Qemu-Xen-vTPM: Register Xen stubdom vTPM frontend driver
>
> On Tue, 30 Dec 2014, Quan Xu wrote:
> > +int vtpm_recv(struct XenDevice *xendev, uint8_t* buf, size_t *count)
> > +{
> > +    struct xen_vtpm_dev *vtpmdev = container_of(xendev, struct xen_vtpm_dev,
> > +                                                xendev);
> > +    struct tpmif_shared_page *shr = vtpmdev->shr;
> > +    unsigned int offset;
> > +
> > +    if (shr->state == TPMIF_STATE_IDLE) {
> > +        return -ECANCELED;
> > +    }
> > +
> > +    while (vtpm_status(vtpmdev) != VTPM_STATUS_IDLE) {
> > +        vtpm_aio_wait(vtpm_aio_ctx);
> > +    }
>
> Is it really necessary to write this as a busy loop?
> I think you should write it as a proper aio callback for efficiency:
> QEMU is going to burn 100% of the cpu polling and not doing anything else!

I agree. I will improve it in v4.

-Quan

> > +    offset = sizeof(*shr) + 4*shr->nr_extra_pages;
> > +    memcpy(buf, offset + (uint8_t *)shr, shr->length);
> > +    *count = shr->length;
> > +
> > +    return 0;
> > +}
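For reference, the callback-driven shape Stefano is asking for could look roughly like the sketch below. This is a minimal, self-contained illustration, not the actual v4 patch: the names (vtpm_dev, vtpm_complete_fn, vtpm_recv_async, vtpm_evtchn_handler) are hypothetical stand-ins, and a real conversion would hook into QEMU's xen_backend event-channel plumbing rather than these placeholder types.

/*
 * Hypothetical sketch: instead of spinning in vtpm_recv() until the
 * backend goes idle, record a completion callback and return.  The
 * event-channel handler, which the main loop invokes when the backend
 * notifies us, checks the shared-page state and fires the callback,
 * so QEMU never polls.
 */
#include <stddef.h>
#include <stdint.h>

enum { VTPM_STATUS_IDLE, VTPM_STATUS_BUSY };

/* Completion callback type: invoked once the TPM response is ready. */
typedef void (*vtpm_complete_fn)(void *opaque, const uint8_t *buf,
                                 size_t len);

struct vtpm_dev {
    volatile int status;          /* stand-in for vtpm_status(vtpmdev) */
    uint8_t *shr_data;            /* stand-in for the shared-page payload */
    size_t shr_len;
    vtpm_complete_fn complete;    /* pending completion, if any */
    void *complete_opaque;
};

/*
 * Non-blocking receive: store the caller's callback and return
 * immediately.  The request stays pending until the event channel fires.
 */
static void vtpm_recv_async(struct vtpm_dev *dev, vtpm_complete_fn cb,
                            void *opaque)
{
    dev->complete = cb;
    dev->complete_opaque = opaque;
}

/*
 * Event-channel handler: called by the main loop when the backend
 * signals (e.g. via an fd handler registered on the event channel).
 * Only now do we touch the shared page and complete the request.
 */
static void vtpm_evtchn_handler(struct vtpm_dev *dev)
{
    if (dev->status != VTPM_STATUS_IDLE || !dev->complete) {
        return;                   /* spurious wakeup or nothing pending */
    }
    vtpm_complete_fn cb = dev->complete;
    dev->complete = NULL;         /* one-shot: clear before calling */
    cb(dev->complete_opaque, dev->shr_data, dev->shr_len);
}

With that shape, the while (vtpm_status(...) != VTPM_STATUS_IDLE) loop disappears entirely: the main loop only touches the device when the event channel actually fires, which removes the 100% CPU polling Stefano points out.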