
Re: [Xen-devel] 2.6.28 and balloon driver



On Tue, 2009-01-06 at 23:34 +0000, Dan Magenheimer wrote:
> I am playing with 2.6.28 on xen.  Nice!  Thanks Jeremy!
> 
> But I have a minor gripe...
> 
> The balloon driver now is accessed via a sysfs file, for example:
> 
> # SIZE=`cat /sys/devices/system/xen_memory/xen_memory0/target_kb`
> # echo $SIZE
> # echo $SIZE > /sys/devices/system/xen_memory/xen_memory0/target_kb
> 
> SIZE does indeed get the current memory size in kbytes, but
> if one tries to write SIZE (or a slightly smaller value) back,
> all hell breaks loose because:
> 
> 1) Despite the name, the value written must be in bytes, not kbytes

Looks like the write function uses memparse(), which understands the M,
K, etc. suffixes and defaults to bytes. The ability to say "balloon to
<n>M" is quite nice, but for consistency with the name it's probably
preferable to just treat the value as a number of kilobytes.
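
Untested sketch (not a real patch) of what a kilobyte-interpreting
store handler could look like, assuming the driver's existing
balloon_set_new_target() call and with the sysfs boilerplate elided;
the exact signature may differ:

/* Hypothetical: interpret writes to target_kb as kilobytes instead of
 * handing the string to memparse() (which defaults to bytes). */
static ssize_t store_target_kb(struct sys_device *dev,
                               struct sysdev_attribute *attr,
                               const char *buf, size_t count)
{
        char *endchar;
        unsigned long long target_bytes;

        if (!capable(CAP_SYS_ADMIN))
                return -EPERM;

        /* Plain number of kilobytes, so "524288" means 512 MiB. */
        target_bytes = simple_strtoull(buf, &endchar, 0) * 1024;

        /* balloon_set_new_target() takes the target in pages. */
        balloon_set_new_target(target_bytes >> PAGE_SHIFT);

        return count;
}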

> 2) There is no "safety minimum", so writing the same value back
>    actually reduces memory by a factor of 1024!

I guess the safety minimum just needs porting forward from the upstream
xen balloon driver.
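
For reference, the upstream driver clamps the requested target to a
floor derived from the machine's total memory, so "echo 0" (or a bytes
vs kbytes mixup) only deflates the balloon to that floor instead of
starving the domain. A rough, hypothetical sketch; the real heuristic
is a piecewise linear function of total RAM, and the numbers below are
placeholders only:

/* Hypothetical clamp applied before ballooning to a new target (in pages). */
static unsigned long constrain_target(unsigned long target)
{
        unsigned long min_pages;

        /* Placeholder floor: 8 MiB plus 1/32 of total RAM. */
        min_pages = (8UL << (20 - PAGE_SHIFT)) + (num_physpages >> 5);

        return target < min_pages ? min_pages : target;
}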

Can you provide patches for both these issues?

Ian.

> 
> I realize behavior (1) is backwards-compatible with the existing
> /proc/xen/balloon behavior, but at least that filename doesn't
> imply a unit size.
> 
> For (2), as sysadmins grow comfortable with the "safety minimum"
> that's been implemented in upstream xen now for nearly a year,
> (users can do:
> 
> # echo 0 > /proc/xen/balloon
> 
> and it still works), some people upgrading to a 2.6.28 kernel
> are in for a nasty surprise.
> 
> <gripe off>
> 


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxxxxxxxx
http://lists.xensource.com/xen-devel


 

