
Re: [Xen-devel] Security discussion: Summary of proposals and criteria (was Re: Security vulnerability process, and CVE-2012-0217)



I said before that I was going to give an analysis, and I had a very
detailed one written out.  The result of that analysis looked very
clear-cut.  But a couple of new arguments have come to light which
make the whole thing much less clear to me.  So what I'm going to do
instead is describe the arguments that I think are pertinent, and then
give my own recommendation.

Next week I plan to send out a poll.  The poll won't be structured
like a vote; rather, its purpose is to help move the discussion
forward by identifying where the sentiment lies.  The poll will ask
you to rate each option with one of the following selections:
* This is an excellent idea, and I will argue for it.
* I am happy with this idea, but I will not argue for it.
* I am not happy with this idea, but I will not argue against it.
* This is a terrible idea, and I will argue against it.

If we have some option which has at least one "argue for" and no
"argue against"s, then we can simply take a formal vote and move on to
the smaller points in the discussion.  Otherwise, we can eliminate
options for which there are no "argue for"s, and focus on the points
where there are both "argue for"s and "argue against"s.  (A rough
sketch of this winnowing rule is below.)
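
To make that winnowing rule concrete, here is a minimal sketch in
Python (the rating strings and data layout are made up for
illustration; this isn't part of any actual polling tool):

    from collections import Counter

    # responses maps each option name to the list of ratings it received
    def classify(responses):
        vote_ready, contested = [], []
        for option, ratings in responses.items():
            tally = Counter(ratings)
            if tally["argue for"] and not tally["argue against"]:
                vote_ready.append(option)   # candidate for a formal vote
            elif tally["argue for"]:
                contested.append(option)    # has both; needs discussion
            # options with no "argue for" at all are simply eliminated
        return vote_ready, contested

    # e.g. classify({"option 4": ["argue for", "happy"],
    #                "option 1": ["unhappy", "argue against"]})
    # returns (["option 4"], []): option 1 drops out entirely.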

Back to the discussion.  There are two additional points I want to bring out.

First, as Joanna and others have pointed out, fixing a vulnerability
will not remove a back-door that was installed while the user was
vulnerable.  So it may well be worth an attacker's time to develop an
exploit based on a bug report.

Secondly, my original discussion had assumed that the risk during
"public vulnerability" was the same for all users.  Unfortunately, I
don't think that's true.  Some targets may be more valuable than
others.  In particular, the value of attacking a hosting provider may
be correlated with the aggregate value, to an attacker, of all of
their customers: compromising the provider is a shot at every customer
it hosts.  Thus a large provider is simply more likely to be the
target of an attack than a small one.

Thus public vulnerability may be very risky indeed for a large
provider; and I tend to agree with the idea that large providers, and
other large potential users (such as large governments, &c) should be
given some pre-disclosure to minimize this risk.

However, as has been previously mentioned, being on the pre-disclosure
list is a very large advantage, and is unfair towards the majority of
users, who are also at significant risk during their own public
vulnerability period.

So right now I think the best option is to have a pre-disclosure list
that is fairly easy to join: if the security team has reason to
believe you are a hosting company, they can put you on the list.

Although I am unhappy with the idea of only large providers being on
the list, I still think it's a better option than giving them no
pre-disclosure.

 -George

On Fri, Jul 6, 2012 at 5:46 PM, George Dunlap
<George.Dunlap@xxxxxxxxxxxxx> wrote:
> We've had a number of viewpoints expressed, and now we need to figure
> out how to move forward in the discussion.
>
> One thing we all seem to agree on is that with regard to the public
> disclosure and the wishes of the discloser:
> * In general, we should default to following the wishes of the discloser
> * We should have a framework available to advise the discloser of a
> reasonable embargo period if they don't have strong opinions of their
> own (many have listed the oCERT guidelines)
> * Disclosing early against the wishes of the discloser is possible if
> the discloser's request is unreasonable, but should only be considered
> in extreme situations.
>
> The next thing to decide, it seems to me, concerns pre-disclosure:
> are we going to have a pre-disclosure list (to whom we send details
> before the public disclosure), and if so, who is going to be on it?
> Then we can start filling in the details.
>
> What I propose is this.  I'll try to summarize the different options
> and angles discussed.  I will also try to synthesize the different
> arguments people have made and make my own recommendation.  Assuming
> that no creative new solutions are introduced in response, I think we
> should take an anonymous "straw poll", just to see what people think
> about the various options.  If that shows a strong consensus, then we
> should have a formal vote.  If it does not show consensus, then we'll
> at least be able to discuss the issue more constructively (by avoiding
> solutions no one is championing).
>
> So below is my summary of the options and the criteria that have been
> brought up so far.  It's fairly long, so I will give my own analysis
> and recommendation in a separate mail, perhaps in a day or two.
> Sometime over the next day or two I will also be working with Lars to
> put together a straw poll where members of the list can informally
> express their preferences, so we can see where we stand in terms of
> agreement.
>
> = Proposed options =
>
> At a high level, I think we basically have six options to consider.
>
> In all cases, I think that we can make a public announcement that
> there *is* a security vulnerability, along with the date we expect to
> publicly disclose the fix, so that anyone who has not received a
> private disclosure can be prepared to apply it as soon as possible.
>
> 1. No pre-disclosure list.  People are brought in only to help produce
> a fix.  The fix is released to everyone publicly when it's ready (or,
> if the discloser has asked for a longer embargo period, when that
> embargo period is up).
>
> 2. Pre-disclosure list consists only of software vendors -- people who
> compile and ship binaries to others.  No updates may be given to any
> user until the embargo period is up.
>
> 3. Pre-disclosure list consists of software vendors and some subset of
> privileged users (e.g., service providers above a certain size).
> Privileged users will be provided with patches at the same time as
> software vendors.  However, they will not be permitted to update their
> systems until the embargo period is up.
>
> 4. Pre-disclosure list consists of software vendors and privileged
> users.  Privileged users will be provided with patches at the same time
> as software vendors.  They will be permitted to update their systems
> at any time.  Software vendors will be permitted to send code updates
> to service providers who are on the pre-disclosure list.  (This is the
> status quo.)
>
> 5. Pre-disclosure list is open to any organization (perhaps with some
> minimal entrance criteria, like having some form of incorporation, or
> having registered a domain name).  Members of the list may update
> their systems at any time; software vendors will be permitted to send
> code updates to anyone on the pre-disclosure list.
>
> 6. Pre-disclosure list open to any organization, but no one permitted
> to roll out fixes until the embargo period is up.
>
> = Criteria =
>
> I think there are several criteria we need to consider.
>
> * _Risk of being exploited_.  The ultimate goal of any pre-disclosure
> process is to try to minimize the total risk for users of being
> exploited.  That said, any policy decision must take into account both
> the benefits in terms of risk reduction as well as the other costs of
> implementing the policy.
>
> To simplify things a bit, I think there are two kinds of risk.
> Between the time a vulnerability has been publicly announced and the
> time a user patches their system, that user is "publicly vulnerable"
> -- running software that contains a public vulnerability.  However,
> the user was vulnerable before that; they were vulnerable from the
> time they deployed the system with the vulnerability.  I will call
> this "privately vulnerable" -- running software that contains a
> non-public vulnerability.
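>
> As a rough illustration, consider these (entirely hypothetical) dates
> for a single system:
>
>     from datetime import date
>
>     deployed  = date(2012, 1, 10)  # vulnerable system goes live
>     announced = date(2012, 7, 6)   # vulnerability publicly disclosed
>     patched   = date(2012, 7, 9)   # administrator applies the fix
>
>     privately_vulnerable = (announced - deployed).days   # 178 days
>     publicly_vulnerable  = (patched - announced).days    # 3 days
>
> In this example the private window dwarfs the public one, which is
> part of why the risk of private vulnerability should not be dismissed
> out of hand.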
>
> Now at first glance, it would seem obvious that being publicly
> vulnerable carries a much higher risk than being privately vulnerable.
> After all, to exploit a vulnerability you need to have malicious
> intent, the skills to leverage the vulnerability into an exploit, and
> knowledge that the vulnerability exists.  By announcing it publicly, a
> much greater number of people with malicious intent and the requisite
> skills will now know about the vulnerability; surely this increases
> the chances of someone actually being exploited.
>
> However, one should not under-estimate the risk of private
> vulnerability.  Black hats prize and actively look for vulnerabilities
> which have not yet been made public.  There is, in fact, a black
> market for such "0-day" exploits.  If your infrastructure is at all
> valuable, black hats have already been looking for the bug which makes
> you vulnerable; you have no way of knowing if they have found it yet
> or not.
>
> In fact, one could make the argument that publicly announcing a
> vulnerability along with a fix makes the vulnerability _less_ valuable
> to black-hats.  Developing an exploit from a vulnerability requires a
> significant amount of effort; and you know that security-conscious
> service providers will be working as fast as possible to close the
> hole.  Why would you spend your time and energy on an exploit that's
> only going to be useful for a day or two at most?
>
> Ultimately the only way to say for sure would be to talk to people who
> know the black hat community well.  But we can conclude this: private
> vulnerability is a definite risk which needs to be considered when
> minimizing total risk.
>
> Another thing to consider is how the nature of the pre-disclosure and
> public disclosure affect the risk.  For pre-disclosure, the more
> individuals have access to pre-disclosure information, the higher the
> risk that the information will end up in the hands of a black-hat.
> Having a list anyone can sign up to, for instance, may be scarcely
> more secure than a quiet public disclosure.
>
> For public disclosure, the nature of the disclosure may affect the
> risk, or the perception of risk, materially.  If the fix is simply
> checked into a public repository without fanfare or comment, it may
> not raise the risk of public vulnerability significantly; while if the
> fix is announced in press releases and on blogs, the _perception_ of
> the risk will undoubtedly increase.
>
> * _Fairness_.  Xen is a community project and relies on the good-will
> of the community to continue.  Giving one sub-group of our users an
> advantage over another sub-group will be costly in terms of community
> good will.  Furthermore, depending on what kind of sub-group we have
> and how it's run, it may well be considered anti-competitive and
> illegal in some jurisdictions.  Some might say we should never
> consider such a thing.  At the very least, doing so should be very
> carefully considered to make sure the risk is worth the benefit.
>
> The majority of this document will focus on the impact of the policy
> on actual users.  However, I think it is also legitimate to consider
> the impact of the policies on software vendors as well.  Regardless of
> the actual risk to users, the _perception_ of risk may have a
> significant impact on the success of some vendors over others.
>
> It is in fact very difficult to achieve perfect fairness between all
> kinds of parties.  However, as much as possible, any unfairness
> should stem from decisions over which the parties themselves have a
> reasonable choice.  For instance, a slight advantage for those who
> compile their own hypervisor directly from xen.org, rather than using
> a software vendor, might be tolerable because 1) those receiving from
> software vendors
> may have other advantages not available to those consuming directly,
> and 2) anyone can switch to pulling directly from xen.org if they
> wish.
>
> * _Administrative overhead_.  This comprises a number of different
> aspects: for example, how hard is it to come up with a precise and
> "fair" policy?  How much effort will it be for xen.org to determine
> whether or not someone should be on the list?
>
> Another question has to do with robustness of enforcement.  If there
> is a strong incentive for people on the list to break the rules
> ("moral hazard"), then we need to import a whole legal framework: how
> do we detect breaking the rules?  Who decides that the rules have
> indeed been broken, and decides the consequences?  Is there an appeals
> process?  At what point is someone who has broken the rules in the
> past allowed back on the list?  What are the legal, project, and
> community implications of having to do this, and so on?  All of this
> will impose a much heavier burden on not only this discussion, but
> also on the xen.org security team.
>
> (Disclaimer: I am not a lawyer.) It should be noted that because of
> the nature of the GPL, we cannot impose additional contractual
> limitations on the re-distribution of a GPL'ed patch, and thus we
> cannot seek legal redress against those who re-distribute such a
> patch (or resulting binaries) in a way that violates the
> pre-disclosure policy.  For the purposes of this discussion, however,
> I am going to assume that we can still choose to remove them from the
> pre-disclosure list as a result.
>
> I think those cover the main points that have been brought up in the
> discussion.  Please feel free to give feedback.  Next week I will
> probably attempt to give an analysis, applying these criteria to the
> different options.  I haven't yet come up with what I think is a
> satisfactory conclusion.
>
>  -George
>

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel