
[Xen-devel] Re: [patch 10/21] Xen-paravirt: add hooks to intercept mm creation and destruction



Andrew Morton wrote:
> On Thu, 15 Feb 2007 18:24:59 -0800 Jeremy Fitzhardinge <jeremy@xxxxxxxx> 
> wrote:
>
>   
>> Add hooks to allow a paravirt implementation to track the lifetime of
>> an mm.
>>
>> --- a/arch/i386/kernel/paravirt.c
>> +++ b/arch/i386/kernel/paravirt.c
>> @@ -706,6 +706,10 @@ struct paravirt_ops paravirt_ops = {
>>      .irq_enable_sysexit = native_irq_enable_sysexit,
>>      .iret = native_iret,
>>  
>> +    .dup_mmap = (void *)native_nop,
>> +    .exit_mmap = (void *)native_nop,
>> +    .activate_mm = (void *)native_nop,
>> +
>>      .startup_ipi_hook = (void *)native_nop,
>>  };
>>     
>
> eww.  I suppose there's a good reason for the casting.
>   

Yeah, it's a bit ugly.  The alternative is to have a separate
correctly-typed nop function for each operation.  But that's even more
typing.
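
For the record, the correctly-typed version would look something like
this (sketch only; the nop names are made up):

static void native_dup_mmap_nop(struct mm_struct *oldmm,
                                struct mm_struct *mm)
{
        /* nothing to do on native hardware */
}

static void native_exit_mmap_nop(struct mm_struct *mm)
{
        /* nothing to do on native hardware */
}

static void native_activate_mm_nop(struct mm_struct *prev,
                                   struct mm_struct *next)
{
        /* nothing to do on native hardware */
}

which would let the initializer lose the casts:

        .dup_mmap = native_dup_mmap_nop,
        .exit_mmap = native_exit_mmap_nop,
        .activate_mm = native_activate_mm_nop,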

> It seems strange to call out to arch_foo() from within an arch header file.
> I mean, we implicity know we're i386.
>
> Maybe it's just poorly named.
>   

The other two (arch_dup_mmap and arch_exit_mmap) are arch_* and are
called from common code.  arch_activate_mm() is either empty or a call
to paravirt_ops.activate_mm.  I could name it paravirt_activate_mm (as
it was in earlier versions of this patch), but then it would be
inconsistent with the other two functions.  I thought the consistency
was more important, because these calls need to be properly matched.
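
Concretely, the i386 side ends up looking something like this (sketch,
assuming the CONFIG_PARAVIRT split described above):

#ifdef CONFIG_PARAVIRT
static inline void arch_activate_mm(struct mm_struct *prev,
                                    struct mm_struct *next)
{
        /* forward to whichever backend registered itself */
        paravirt_ops.activate_mm(prev, next);
}
#else
static inline void arch_activate_mm(struct mm_struct *prev,
                                    struct mm_struct *next)
{
        /* native: nothing to do */
}
#endif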

>> +static inline void paravirt_activate_mm(struct mm_struct *prev,
>> +                                    struct mm_struct *next)
>> +{
>> +}
>> +
>> +static inline void paravirt_dup_mmap(struct mm_struct *oldmm,
>> +                                 struct mm_struct *mm)
>> +{
>> +}
>> +
>> +static inline void paravirt_exit_mmap(struct mm_struct *mm)
>> +{
>> +}
>>     
>
> These functions are unreferenced in this patchset.
>   

OK, I'll drop them.

>>  #endif /* CONFIG_PARAVIRT */
>>  #endif      /* __ASM_PARAVIRT_H */
>> ===================================================================
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -374,6 +374,12 @@ struct mm_struct {
>>      rwlock_t                ioctx_list_lock;
>>      struct kioctx           *ioctx_list;
>>  };
>> +
>> +#ifndef __HAVE_ARCH_MM_LIFETIME
>> +#define arch_activate_mm(prev, next)        do {} while(0)
>> +#define arch_dup_mmap(oldmm, mm)    do {} while(0)
>> +#define arch_exit_mmap(mm)          do {} while(0)
>> +#endif
>>     
>
> Can we lose __HAVE_ARCH_MM_LIFETIME?  Just define these (preferably in C,
> not in cpp) in the appropriate include/asm-foo/ files?
>   

I guess, if you want.  For everything except i386 (and x86_64 in the
not-too-distant future) they'll be no-ops.  But for consistency, we'd
have to put the appropriate arch_activate_mm() call into each arch's
activate_mm(); I seem to remember some were not as straightforward as
i386's.
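
That is, each asm-*/mmu_context.h would grow something along these
lines (illustrative; the ordering relative to switch_mm() would need
checking per arch):

static inline void arch_activate_mm(struct mm_struct *prev,
                                    struct mm_struct *next)
{
        /* no-op everywhere except i386 (and soon x86_64) */
}

static inline void activate_mm(struct mm_struct *prev,
                               struct mm_struct *next)
{
        arch_activate_mm(prev, next);
        switch_mm(prev, next, current);
}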

>>  struct sighand_struct {
>>      atomic_t                count;
>> ===================================================================
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -286,6 +286,7 @@ static inline int dup_mmap(struct mm_str
>>              if (retval)
>>                      goto out;
>>      }
>> +    arch_dup_mmap(oldmm, mm);
>>      retval = 0;
>>  out:
>>      up_write(&mm->mmap_sem);
>> ===================================================================
>> --- a/mm/mmap.c
>> +++ b/mm/mmap.c
>> @@ -1976,6 +1976,8 @@ void exit_mmap(struct mm_struct *mm)
>>      struct vm_area_struct *vma = mm->mmap;
>>      unsigned long nr_accounted = 0;
>>      unsigned long end;
>> +
>> +    arch_exit_mmap(mm);
>>  
>>      lru_add_drain();
>>      flush_cache_mm(mm);
>>     
>
> Perhaps some commentary telling the arch maintainer what these hooks he's
> being offered are for?
>   

OK.
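
Something like this above the defaults, say (wording to be polished):

/*
 * Hooks for architectures that need to track the lifetime of an
 * address space (currently only paravirtualized i386):
 *
 *   arch_dup_mmap    - the new mm has just been populated by
 *                      dup_mmap() during fork()
 *   arch_exit_mmap   - exit_mmap() is about to tear down the
 *                      address space
 *   arch_activate_mm - "next" is replacing "prev" as the task's
 *                      active address space
 */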

    J

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel
