
Re: [Xen-devel] [PATCH] vmx/monitor: CPUID events



On Fri, Jul 8, 2016 at 10:49 AM, Andrew Cooper
<andrew.cooper3@xxxxxxxxxx> wrote:
> On 08/07/16 16:44, Tamas K Lengyel wrote:
>> On Fri, Jul 8, 2016 at 3:33 AM, Andrew Cooper <andrew.cooper3@xxxxxxxxxx> 
>> wrote:
>>> On 08/07/16 03:31, Tamas K Lengyel wrote:
>>>> This patch implements sending notification to a monitor subscriber when an
>>>> x86/vmx guest executes the CPUID instruction.
>>>>
>>>> Signed-off-by: Tamas K Lengyel <tamas.lengyel@xxxxxxxxxxxx>
>>> Is it wise having an on/off control without any further filtering?  (I
>>> suppose that it is at least a fine first start).
>> What type of extra filtering do you have in mind?
>
> Not sure.  What are you intending to use this facility for?

Primarily to detect malware that is fingerprinting its environment by
looking for hypervisor leaves and/or doing timing-based detection by
benchmarking cpuid with rdtsc.
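
For reference, the detection technique in question is roughly the
following (a minimal sketch; the 1000-cycle threshold is just an
illustrative heuristic, and real samples are usually subtler):

#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

/* cpuid forces a VMExit, so the measured cycle count is much higher
 * inside a VM than on bare metal. */
static inline uint64_t rdtsc(void)
{
    uint32_t lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((uint64_t)hi << 32) | lo;
}

int main(void)
{
    uint64_t start, total = 0;
    unsigned int eax, ebx, ecx, edx;
    int i;

    for ( i = 0; i < 1000; i++ )
    {
        start = rdtsc();
        __asm__ __volatile__("cpuid"
                             : "=a"(eax), "=b"(ebx),
                               "=c"(ecx), "=d"(edx)
                             : "a"(0));
        total += rdtsc() - start;
    }

    /* Bare metal: roughly 100-200 cycles per iteration; a VMExit
     * typically pushes this well past 1000. */
    printf("avg cycles: %" PRIu64 " -> %s\n", total / 1000,
           (total / 1000 > 1000) ? "likely VM" : "likely bare metal");
    return 0;
}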

>
> Given that the hypervisor is already in complete control of what a guest
> gets to see via cpuid, mutating the results via the monitor framework
> doesn't seem like a useful thing to do.

Indeed, the hypervisor is in control, and to a certain extent so is
the user, by overriding some leaves in the domain config. However,
there are CPUID leaves Xen adds that the user is unable to override
via the domain config. For example, in malware analysis it may be
very useful to hide all hypervisor leaves from the guest, which
currently requires recompiling Xen entirely. By putting the monitor
system inline with cpuid, it can decide which process gets to see
which leaves, and when. It's very handy.
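
To illustrate, a subscriber could blank the hypervisor leaves along
these lines (a hypothetical sketch of the event loop in a monitor
application modeled on tools/tests/xen-access; the cpuid request
fields and the rip adjustment are illustrative, not necessarily the
final interface of this patch):

/* Inside the vm_event ring-processing loop, after pulling a
 * request off the ring into req and preparing rsp. */
switch ( req.reason )
{
case VM_EVENT_REASON_CPUID:
    /* Echo the guest registers back, then override them. */
    rsp.flags |= VM_EVENT_FLAG_SET_REGISTERS;
    rsp.data.regs.x86 = req.data.regs.x86;

    if ( req.u.cpuid.leaf >= 0x40000000 &&
         req.u.cpuid.leaf <  0x40010000 )
    {
        /* Hide all hypervisor leaves: return zeroes instead of
         * the values Xen would have supplied. */
        rsp.data.regs.x86.rax = 0;
        rsp.data.regs.x86.rbx = 0;
        rsp.data.regs.x86.rcx = 0;
        rsp.data.regs.x86.rdx = 0;
    }

    /* Step over the instruction so it isn't re-executed. */
    rsp.data.regs.x86.rip += req.u.cpuid.insn_length;
    break;
}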

>
>>
>>> cpuid is usually the serialising instruction used with rdtsc for timing
>>> loops.  This is bad enough in VMs because of the VMExit, but becomes
>>> even worse if there is a monitor delay as well.
>> Yes, going the extra route of sending a monitor event out will add to
>> that delay (how much delay will depend on the subscriber and what it
>> decides to do with the event). Wouldn't we be able to mask some of
>> that with tsc offsetting though?
>
> I am going to go out on a limb and say that that is a very large can of
> worms which you don't want to open.

Yeah, I'm well aware. However, we might have to go down that rabbit
hole eventually...
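
Conceptually it would mean winding the guest's TSC offset back by the
cycles spent handling the event, something like (pseudo-code, not
actual Xen code; TSC_OFFSET is the VMCS field, so the guest sees
guest_tsc = host_tsc + tsc_offset):

uint64_t exit_tsc, reentry_tsc;

exit_tsc = rdtsc();       /* taken at VMExit */
/* ... forward the event to the monitor, wait for the reply ... */
reentry_tsc = rdtsc();    /* taken just before VMRESUME */

/* Wind the guest clock back so the stall is (mostly) invisible
 * to rdtsc-based loops. */
tsc_offset -= reentry_tsc - exit_tsc;
vmwrite(TSC_OFFSET, tsc_offset);

/* The can of worms: every other time source (wallclock, other
 * vcpus, network RTTs) still sees the real elapsed time, which is
 * itself a detectable skew. */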

>
> The problem is not that time skews from the point of view of the guest,
> but that the timing loop with a fixed number of iterations takes
> proportionally longer.
>

Yes, some overhead is inevitable. For our use-case the goal would be
to make detecting this overhead as hard as possible; as long as it
stays reasonable (i.e. we don't make network connections drop and
such), we can live with it.

Cheers,
Tamas


