
Re: [Xen-devel] Re: [Xense-devel] [XSM:ACM][PATCH] null dereference bug fix



Hi,

I tested your patch and confirmed that XSM/ACM runs normally.
Thank you for your help.

The test results are as follows.

#xm create vm1.conf <---- create domain1
Using config file "./vm1.conf".

#xm list --label
Name    ID Mem VCPUs  State   Time(s) Label
vm1      3 128  1 r-----  1.0 ACM:example.client_v1:dom_HomeBanking
Domain-0 0 743  2 r-----  0.0 ACM:example.client_v1:dom_SystemManagement

#xm create vm2.conf <---- create domain2
Using config file "./vm2.conf".
Internal error: Domain in conflict set with running domain?.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#xm list --label
Name    ID Mem VCPUs  State   Time(s) Label
vm1      3 128  1 r-----  1.0 ACM:example.client_v1:dom_HomeBanking
Domain-0 0 743  2 r-----  0.0 ACM:example.client_v1:dom_SystemManagement


Chinese Wall looks good, and the "system boot" problem did not happen.


Syunsuke HAYASHI.

> On Thu, 2007-09-27 at 16:12 -0400, Stefan Berger wrote:
>> "Coker, George" <gscoker@xxxxxxxxxxxxxx> wrote on 09/27/2007 03:35:14
>> PM:
>>
>>> This patch is correct for XSM.  The patch creates clean
>>> acm_domain_create and acm_domain_destroy operations.
>>>
>>> In 15661 the logic under which acm_domain_destroy is called is
>>> slightly different than under XSM.  In 15661, acm_domain_destroy is
>>> called only if the mask INIT_acm is set.  INIT_acm is set only on
>>> successful return from acm_domain_create.  When acm_domain_create
>>> fails, the mask is not set and acm_domain_destroy is not called.  I
>>> do not know if this resulted in a leak in 15661 due to incomplete
>>> cleanup.
>> So the roll-back call that was necessary before is not necessary
>> anymore? 
>>
> 
> Yes, because on failure acm_domain_create will always return
> ACM_ACCESS_DENIED, which causes domain_create to goto fail on return
> from xsm_domain_create.  At fail, free_domain is called.  free_domain
> calls xsm_free_security_domain, which calls acm_domain_destroy.
> 
> acm_domain_destroy calls the acm_primary_ops->domain_destroy followed by
> the acm_secondary_ops->domain_destroy.  acm_domain_destroy ends with
> acm_free_domain_ssid.  acm_domain_destroy already contains all of the
> rollback code that is replicated in acm_domain_create.
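
(For anyone tracing the calls, here is a minimal standalone sketch of the
fail path described above.  This is toy code for illustration, not the
actual Xen sources: the names mirror the functions mentioned, the bodies
are simplified stand-ins, and only the ordering of the calls matters.)

/*
 * Toy model of the fail path: the XSM/ACM create hook denies the domain,
 * domain_create jumps to its fail label, and the destroy hook runs once
 * via free_domain -> xsm_free_security_domain -> acm_domain_destroy.
 */
#include <stdio.h>
#include <stdlib.h>

#define ACM_OK            0
#define ACM_ACCESS_DENIED 1

struct domain { void *ssid; };

static int acm_domain_create(struct domain *d, int ssidref)
{
    (void)d; (void)ssidref;
    return ACM_ACCESS_DENIED;     /* pretend the policy denies the domain */
}

static void acm_domain_destroy(struct domain *d)
{
    /* The single cleanup path: roll back both policies, free the ssid. */
    printf("acm_domain_destroy: rollback + acm_free_domain_ssid\n");
    free(d->ssid);
    d->ssid = NULL;
}

static void xsm_free_security_domain(struct domain *d)
{
    acm_domain_destroy(d);
}

static void free_domain(struct domain *d)
{
    xsm_free_security_domain(d);
    free(d);
}

static struct domain *domain_create(int ssidref)
{
    struct domain *d = calloc(1, sizeof(*d));

    if (d == NULL)
        return NULL;
    if (acm_domain_create(d, ssidref) != ACM_OK)
        goto fail;                /* xsm_domain_create reported the denial */
    return d;

 fail:
    free_domain(d);               /* cleanup happens exactly once, here */
    return NULL;
}

int main(void)
{
    if (domain_create(1) == NULL)
        printf("domain_create failed; cleanup was done in free_domain\n");
    return 0;
}
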
> 
>> static inline int acm_domain_create(struct domain *d, ssidref_t
>> ssidref)
>> {
>>    void *subject_ssid = current->domain->ssid;
>>    domid_t domid = d->domain_id;
>>    int rc;
>>
>>    read_lock(&acm_bin_pol_rwlock);
>>    /*
>>       To be called when a domain is created; returns '0' if the
>>       domain is allowed to be created, != '0' if not.
>>     */
>>    rc = acm_init_domain_ssid(d, ssidref);
>>    if (rc != ACM_OK)
>>        goto error_out;
>>
>>    if ((acm_primary_ops->domain_create != NULL) &&
>>        acm_primary_ops->domain_create(subject_ssid, ssidref, domid)) {
>>        rc = ACM_ACCESS_DENIED;
>>    } else if ((acm_secondary_ops->domain_create != NULL) &&
>>                acm_secondary_ops->domain_create(subject_ssid, ssidref,
>>                                                 domid)) {
>>        /* roll-back primary */
>>        if (acm_primary_ops->domain_destroy != NULL)
>>            acm_primary_ops->domain_destroy(d->ssid, d);
>>        rc = ACM_ACCESS_DENIED;
>>    }
>>
>>    if ( rc == ACM_OK )
>>    {
>>        acm_domain_ssid_onto_list(d->ssid);
>>    } else {
>>        acm_free_domain_ssid(d->ssid);
>>    }
>>
>> error_out:
>>    read_unlock(&acm_bin_pol_rwlock);
>>    return rc;
>> } 
>>
>>
>> The acm_primary_ops->domain_create() establishes state (see
>> chwall_domain_create() in acm_chinesewall_hooks.c ) that if the
>> secondary operation fails needs to be undone. That's what the
>> acm_primary_ops->domain_destroy() did, but you intend to remove it?! I
>> have my doubts that this is correct. 
>>
> Yes, I am removing the rollback from acm_domain_create because it is
> already duplicated in acm_domain_destroy.  Under XSM, acm_domain_destroy
> is now always called on failure.  In 15661, acm_domain_destroy was not
> always called.
> 
>> Which NULL pointer is the code running into and where? 
> 
> The NULL pointer was created in acm_free_domain_ssid and dereferenced in
> the fail code path in the call to xsm_free_security_domain in
> free_domain.
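
(A second toy program, again not the Xen sources, illustrating the
double cleanup described here: the old error path in acm_domain_create
released the ssid itself, and the generic fail path then ran the destroy
hook against the stale pointer.  Exactly how the pointer ends up NULL in
the real code may differ; only the pattern is shown.)

/*
 * Toy illustration of the double cleanup; the guard in the destroy hook
 * exists only so this demo reports the problem instead of crashing.
 */
#include <stdio.h>
#include <stdlib.h>

struct ssid { int ref; };
struct dom  { struct ssid *ssid; };

static void acm_free_domain_ssid(struct dom *d)
{
    free(d->ssid);
    d->ssid = NULL;               /* this is where the NULL comes from */
}

static void acm_domain_destroy(struct dom *d)
{
    if (d->ssid == NULL) {
        printf("destroy hook: d->ssid is NULL, would dereference NULL\n");
        return;
    }
    printf("destroy hook: ref=%d\n", d->ssid->ref);
}

int main(void)
{
    struct dom d = { calloc(1, sizeof(struct ssid)) };

    acm_free_domain_ssid(&d);     /* old rollback inside acm_domain_create */
    acm_domain_destroy(&d);       /* fail-path cleanup then runs again */
    return 0;
}

With the rollback removed from acm_domain_create, the destroy hook becomes
the first and only place the ssid is released, so the stale pointer never
exists.
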
> 
>>    Stefan 
>>
>>
>>> George
>>>
>>>
>>> On Thu, 2007-09-27 at 14:35 -0400, Stefan Berger wrote:
>>>> xense-devel-bounces@xxxxxxxxxxxxxxxxxxx wrote on 09/27/2007
>>>> 12:43:35 PM:
>>>>
>>>>> The attached patch fixes a null dereference bug in XSM:ACM. 
>>>> As I read this in response to recent error reports - I wonder why
>>>> CS 15661 does not expose this error whereas afterwards this error
>>>> occurs.  Are you sure this is the right solution?  Was something
>>>> changed in this area of the code between 'before XSM' and
>>>> afterwards?
>>>>
>>>>    Stefan 
>>>>
>>>>
>>>>
>>>>> Signed-off-by: George Coker <gscoker@xxxxxxxxxxxxxx>
>>>>> [attachment "acm-xsm-null_bug-092707-xen-unstable-15880.diff" 
>>>>> deleted by Stefan Berger/Watson/IBM] 
>>> -- 
>>> George S. Coker, II <gscoker@xxxxxxxxxxxxxx> 443-479-6944


-- 
/////////////////////////////////////////////
FUJITSU LIMITED, Shin-Yokohama TECH Bldg. A, 6F
Server Systems Business Unit, Linux Technology Development Div.
Name   : Syunsuke HAYASHI (林 俊介)
TEL    : 7124-3846 (ext.) / 045-473-9478 (outside line)
E-mail : syunsuke@xxxxxxxxxxxxxx
////////////////////////////////////////////


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

