[Xen-devel] RE: vram_dirty vs. shadow paging dirty tracking
> When thinking about multithreading the device model, it occurred to me
> that it's a little odd that we're doing a memcmp to determine which
> portions of the VRAM have changed. Couldn't we just use dirty page
> tracking in the shadow paging code? That should significantly lower the
> overhead, plus I believe the infrastructure is already mostly there in
> the shadow2 code.
Yep, it's been in the roadmap doc for quite a while. However, the
log-dirty code isn't ideal for this: we'd need to extend it so it can be
turned on for just a subset of the GFN range (we could use a Xen
rangeset for this).
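
[To illustrate the subset idea: the point of the rangeset is that only faults on frames inside the tracked range pay the log-dirty cost. A hypothetical sketch, standing in for Xen's actual rangeset API and log-dirty plumbing (`struct gfn_range`, `mark_dirty` etc. are made up for illustration):

```c
#include <stdbool.h>
#include <stdint.h>

/* Sketch: gate log-dirty marking on a single tracked GFN range,
 * standing in for a Xen rangeset covering the VRAM. */
struct gfn_range { uint64_t start, end; };   /* inclusive bounds */

static bool gfn_tracked(const struct gfn_range *r, uint64_t gfn)
{
    return gfn >= r->start && gfn <= r->end;
}

static void mark_dirty(const struct gfn_range *r, uint64_t gfn,
                       uint8_t *bitmap)
{
    if (!gfn_tracked(r, gfn))
        return;                    /* not VRAM: skip the bookkeeping */
    uint64_t idx = gfn - r->start;
    bitmap[idx / 8] |= 1u << (idx % 8);
}
```

-ed.]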
Even so, I'm not super keen on the idea of tearing down and rebuilding
1024 PTEs up to 50 times a second.
A lower-overhead solution would be to scan and reset the dirty bits on
the PTEs (followed by a global TLB flush). In the general case this is
tricky, as the framebuffer could be mapped by multiple PTEs; in
practice, I believe this doesn't happen for either Linux or Windows.
There's always a good fallback of just returning 'all dirty' if the
heuristic is violated. It would be good to knock this up.
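
[A sketch of that scan, under the stated assumptions: the framebuffer is mapped by exactly one run of PTEs, the hardware dirty bit is x86's bit 6, and `scan_fb_ptes`/`nr_mappings` are illustrative names rather than Xen's. The all-dirty fallback covers the aliased case:

```c
#include <stddef.h>
#include <stdint.h>

#define _PAGE_DIRTY (1ull << 6)   /* x86 PTE dirty bit */

/* Sketch: walk the single run of PTEs mapping the framebuffer,
 * test-and-clear the hardware dirty bit, and record changed pages.
 * If the framebuffer is aliased by multiple mappings, the heuristic
 * is violated and we conservatively report everything dirty. */
static int scan_fb_ptes(uint64_t *ptes, size_t nr_ptes,
                        int nr_mappings, uint8_t *dirty_bitmap)
{
    if (nr_mappings != 1) {
        /* Fallback: better to over-report than miss an update. */
        for (size_t i = 0; i < (nr_ptes + 7) / 8; i++)
            dirty_bitmap[i] = 0xff;
        return (int)nr_ptes;
    }
    int ndirty = 0;
    for (size_t i = 0; i < nr_ptes; i++) {
        if (ptes[i] & _PAGE_DIRTY) {
            ptes[i] &= ~_PAGE_DIRTY;              /* reset for next scan */
            dirty_bitmap[i / 8] |= 1u << (i % 8);
            ndirty++;
        }
    }
    /* A real implementation must flush the TLB here, or cached
     * translations will keep writing without re-setting dirty bits. */
    return ndirty;
}
```

-ed.]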
Best,
Ian
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel