Re: [Xen-devel] [PATCH 1/3] xen/mce: Add mcelog support for Xen platform (RFC)
On Wed, May 30, 2012 at 03:09:12PM +0000, Liu, Jinsong wrote:
> >
> > Still no go, this is current linus with your patch applied. I'll look
> > into it later when there's time.
>
> The root cause is:
> 1). at cpu/mcheck/mce.c, device_initcall_sync(mcheck_init_device) runs *after*
>     all device_initcall()s;
> 2). at cpu/mcheck/mce_amd.c, device_initcall(threshold_init_device) calls
>         threshold_init_device
>         --> threshold_create_device
>             --> threshold_create_bank
>                 --> kobject_create_and_add(name, &dev->kobj);
>     // at this point, struct device *dev = per_cpu(mce_device, cpu)
>     // is still a NULL pointer; mce_device is only initialized at
>     // mcheck_init_device --> mce_device_create
> 3). so the kernel panics.
>
> So our RFC patch would break the AMD mce logic.
>
> ===========================
>
> I have a thought about a symlink approach, but it seems it would bring more
> issues, e.g.
> 1). it needs more changes to the native mce code, like removing the /dev/mcelog
>     created by native mce (under the Xen platform), or
> 2). it still needs to change device_initcall(mcheck_init_device) to
>     device_initcall_sync(mcheck_init_device) if it wants to implicitly block
>     the native /dev/mcelog --> but that would panic the AMD mce logic.
>
> IMO there are currently 2 options:
> 1). use the original approach (implicitly redirect /dev/mcelog to
>     xen_mce_chrdev_device) --> which point of this approach do you find
>     unreasonable? It just removes a 'static' from the native mce code!
> 2). use another /dev/xen-mcelog interface, with another misc minor '226'

Option 2) is no good.

3) What about moving the corresponding other users (so threshold_init_device)
   to late_initcall, and the mce one to late_initcall_sync?

4) Or make the driver you are writing start at fs_initcall?
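To make the ordering problem concrete, here is a rough sketch of the two code
paths involved. It is paraphrased from memory of the 3.4-era tree, with the
function bodies abbreviated, so the exact lines will differ from the real
files:

/* arch/x86/kernel/cpu/mcheck/mce.c, with the RFC patch applied.
 * mce_device_create() is what populates per_cpu(mce_device, cpu). */
static __init int mcheck_init_device(void)
{
	/* ... mce_device_create() for each CPU, misc_register() of the
	 * /dev/mcelog character device, ... */
	return 0;
}
device_initcall_sync(mcheck_init_device);	/* was device_initcall() */

/* arch/x86/kernel/cpu/mcheck/mce_amd.c, reached from
 * device_initcall(threshold_init_device), i.e. before any
 * device_initcall_sync() of the same level has run. */
static int threshold_create_bank(unsigned int cpu, unsigned int bank)
{
	struct device *dev = per_cpu(mce_device, cpu);	/* still NULL here */
	/* ... */
	b->kobj = kobject_create_and_add(name, &dev->kobj);	/* oops on dev->kobj */
	/* ... */
}
device_initcall(threshold_init_device);

For reference, initcalls run in level order (fs_initcall is level 5,
device_initcall level 6, late_initcall level 7, and the *_sync variant of a
level runs after its plain calls), so suggestion 4) would, if I read it right,
let the Xen driver register its character device before the native mce and
AMD threshold code run at level 6.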
_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel