[Xen-devel] [PATCH 1/2] xenstore: sanity check incoming message body lengths [and 2 more messages]
Matthew Daley writes ("[PATCH 1/2] xenstore: sanity check incoming message body
lengths"):
> This is for the client-side receiving messages from xenstored, so there
> is no security impact, unlike XSA-72.
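[The check being added amounts to a bounds test on the header's len field before
the body is allocated and read. A minimal sketch of the idea, assuming the
struct xsd_sockmsg and XENSTORE_PAYLOAD_MAX definitions from xen/io/xs_wire.h;
the actual patch may differ in detail:

    #include <errno.h>
    #include <stdbool.h>
    #include <xen/io/xs_wire.h>   /* struct xsd_sockmsg, XENSTORE_PAYLOAD_MAX */

    /* Reject a header whose body length exceeds what the protocol allows.
     * A well-behaved xenstored never sends more than XENSTORE_PAYLOAD_MAX
     * bytes of payload, so a larger value means the byte stream cannot be
     * trusted and should not be parsed further. */
    static bool header_len_ok(const struct xsd_sockmsg *hdr)
    {
        if (hdr->len > XENSTORE_PAYLOAD_MAX) {
            errno = E2BIG;
            return false;
        }
        return true;
    }
]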
Andrew Cooper writes ("Re: [Xen-devel] [PATCH 1/2] xenstore: sanity check
incoming message body lengths"):
> Reviewed-by: Andrew Cooper <andrew.cooper3@xxxxxxxxxx>
Matthew Daley writes ("Re: [PATCH 1/2] xenstore: sanity check incoming message
body lengths"):
> On Tue, Dec 3, 2013 at 12:33 AM, Ian Jackson <Ian.Jackson@xxxxxxxxxxxxx>
> wrote:
> > If this situation should arise, your proposal would discard the
> > headers of the bogus message and read the start of what would be the
> > over-long payload as the next header.
...
> > It would be best to either kill the connection dead, or perhaps to
> > skip the overlong data.
>
> The only callers of read_message I can see are read_reply,
> read_watch_internal and read_thread. read_reply's only caller is
> xs_talkv, which closes the connection on the failure being passed
> down. read_watch_internal doesn't, and neither do its callers.
> read_thread does close the connection.
>
> So, with that said, where should the handling of the failure go? Would
> it be good to consolidate the handling in one spot, ie. directly in
> read_message? Since read_message is the root function for all these
> other functions, and since read_thread already does handle the failure
> and the other functions use that implicitly if threaded background
> reading is enabled, it seems like a good idea to do so.
Yes.
Arguably, based on what you've said, this patch should be applied
as-is, on the grounds of it being an improvement. Ian C, do you have
an opinion?
Ian.
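[For reference, a rough sketch of the consolidation discussed above: doing the
length check and the failure handling directly in read_message, so that
read_reply, read_watch_internal and read_thread all inherit it rather than
each handling it separately. The struct xs_handle layout and the read_all()
helper below are simplifying assumptions for illustration, not the real xs.c
internals:

    #include <stdbool.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>
    #include <xen/io/xs_wire.h>   /* struct xsd_sockmsg, XENSTORE_PAYLOAD_MAX */

    /* Hypothetical, simplified stand-in for the library's connection state. */
    struct xs_handle {
        int fd;
    };

    /* Hypothetical helper: read exactly 'len' bytes from 'fd'. */
    bool read_all(int fd, void *buf, size_t len);

    /* Read one message (header + body). On any failure, including a bogus
     * body length, close the connection here so every caller sees a dead
     * handle instead of a desynchronised byte stream. */
    static struct xsd_sockmsg *read_message(struct xs_handle *h)
    {
        struct xsd_sockmsg hdr, *msg = NULL;

        if (!read_all(h->fd, &hdr, sizeof(hdr)))
            goto bad;

        if (hdr.len > XENSTORE_PAYLOAD_MAX)
            goto bad;                  /* do not try to skip or resynchronise */

        msg = malloc(sizeof(hdr) + hdr.len);
        if (!msg)
            goto bad;

        memcpy(msg, &hdr, sizeof(hdr));
        if (!read_all(h->fd, msg + 1, hdr.len))
            goto bad;

        return msg;

    bad:
        free(msg);
        close(h->fd);                  /* kill the connection dead */
        h->fd = -1;
        return NULL;
    }
]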