
Re: [Xen-devel] [PATCH v2 3/5] golang/xenlight: Add host-related functionality



On 02/03/17 16:07, Ronald Rojas wrote:
> Add calls for the following host-related functionality:
> - libxl_get_max_cpus
> - libxl_get_online_cpus
> - libxl_get_max_nodes
> - libxl_get_free_memory
> - libxl_get_physinfo
> - libxl_get_version_info
> 
> Include Golang versions of the following structs:
> - libxl_physinfo as Physinfo
> - libxl_version_info as VersionInfo
> - libxl_hwcap as Hwcap
> 
> Signed-off-by: Ronald Rojas <ronladred@xxxxxxxxx>

Looks good -- just two minor comments...

> ---
> Changes since last version
> 
> - Refactored creating Physinfo and VersionInfo types into
> separate toGo() functions.
> 
> - Used libxl_<type>_init() and libxl_<type>_dispose() on IDL
> type physinfo
> 
> - Whitespace fixes
> 
> CC: xen-devel@xxxxxxxxxxxxx
> CC: george.dunlap@xxxxxxxxxx
> CC: ian.jackson@xxxxxxxxxxxxx
> CC: wei.liu2@xxxxxxxxxx
> ---
> ---
>  tools/golang/xenlight/xenlight.go | 200 ++++++++++++++++++++++++++++++++++++++
>  1 file changed, 200 insertions(+)
> 
> diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
> index cbd3527..63cc805 100644
> --- a/tools/golang/xenlight/xenlight.go
> +++ b/tools/golang/xenlight/xenlight.go
> @@ -106,6 +106,103 @@ type Context struct {
>       ctx *C.libxl_ctx
>  }
>  
> +type Hwcap []C.uint32_t
> +
> +func (chwcap C.libxl_hwcap) CToGo() (ghwcap Hwcap) {

Was there a reason you left this as CToGo() rather than just toGo()?

> +     // Alloc a Go slice for the bytes
> +     size := 8
> +     ghwcap = make([]C.uint32_t, size)
> +
> +     // Make a slice pointing to the C array
> +     mapslice := (*[1 << 30]C.uint32_t)(unsafe.Pointer(&chwcap[0]))[:size:size]
> +
> +     // And copy the C array into the Go array
> +     copy(ghwcap, mapslice)
> +
> +     return
> +}
> +
> +/*
> + * Types: IDL
> + *
> + * FIXME: Generate these automatically from the IDL
> + */
> +
> +type Physinfo struct {
> +     ThreadsPerCore    uint32
> +     CoresPerSocket    uint32
> +     MaxCpuId          uint32
> +     NrCpus            uint32
> +     CpuKhz            uint32
> +     TotalPages        uint64
> +     FreePages         uint64
> +     ScrubPages        uint64
> +     OutstandingPages  uint64
> +     SharingFreedPages uint64
> +     SharingUsedFrames uint64
> +     NrNodes           uint32
> +     HwCap             Hwcap
> +     CapHvm            bool
> +     CapHvmDirectio    bool
> +}
> +
> +func (cphys *C.libxl_physinfo) toGo() (physinfo *Physinfo) {
> +
> +     physinfo = &Physinfo{}
> +     physinfo.ThreadsPerCore = uint32(cphys.threads_per_core)
> +     physinfo.CoresPerSocket = uint32(cphys.cores_per_socket)
> +     physinfo.MaxCpuId = uint32(cphys.max_cpu_id)
> +     physinfo.NrCpus = uint32(cphys.nr_cpus)
> +     physinfo.CpuKhz = uint32(cphys.cpu_khz)
> +     physinfo.TotalPages = uint64(cphys.total_pages)
> +     physinfo.FreePages = uint64(cphys.free_pages)
> +     physinfo.ScrubPages = uint64(cphys.scrub_pages)
> +     physinfo.OutstandingPages = uint64(cphys.outstanding_pages)
> +     physinfo.SharingFreedPages = uint64(cphys.sharing_freed_pages)
> +     physinfo.SharingUsedFrames = uint64(cphys.sharing_used_frames)
> +     physinfo.NrNodes = uint32(cphys.nr_nodes)
> +     physinfo.HwCap = cphys.hw_cap.CToGo()
> +     physinfo.CapHvm = bool(cphys.cap_hvm)
> +     physinfo.CapHvmDirectio = bool(cphys.cap_hvm_directio)
> +
> +     return
> +}
> +
> +type VersionInfo struct {
> +     XenVersionMajor int
> +     XenVersionMinor int
> +     XenVersionExtra string
> +     Compiler        string
> +     CompileBy       string
> +     CompileDomain   string
> +     CompileDate     string
> +     Capabilities    string
> +     Changeset       string
> +     VirtStart       uint64
> +     Pagesize        int
> +     Commandline     string
> +     BuildId         string
> +}
> +
> +func (cinfo *C.libxl_version_info) toGo() (info *VersionInfo) {
> +     info = &VersionInfo{}
> +     info.XenVersionMajor = int(cinfo.xen_version_major)
> +     info.XenVersionMinor = int(cinfo.xen_version_minor)
> +     info.XenVersionExtra = C.GoString(cinfo.xen_version_extra)
> +     info.Compiler = C.GoString(cinfo.compiler)
> +     info.CompileBy = C.GoString(cinfo.compile_by)
> +     info.CompileDomain = C.GoString(cinfo.compile_domain)
> +     info.CompileDate = C.GoString(cinfo.compile_date)
> +     info.Capabilities = C.GoString(cinfo.capabilities)
> +     info.Changeset = C.GoString(cinfo.changeset)
> +     info.VirtStart = uint64(cinfo.virt_start)
> +     info.Pagesize = int(cinfo.pagesize)
> +     info.Commandline = C.GoString(cinfo.commandline)
> +     info.BuildId = C.GoString(cinfo.build_id)
> +
> +     return
> +}
> +
>  /*
>   * Context
>   */
> @@ -156,3 +253,106 @@ func (Ctx *Context) CheckOpen() (err error) {
>       }
>       return
>  }
> +
> +//int libxl_get_max_cpus(libxl_ctx *ctx);
> +func (Ctx *Context) GetMaxCpus() (maxCpus int, err error) {
> +     err = Ctx.CheckOpen()
> +     if err != nil {
> +             return
> +     }
> +
> +     ret := C.libxl_get_max_cpus(Ctx.ctx)
> +     if ret < 0 {
> +             err = Error(-ret)
> +             return
> +     }
> +     maxCpus = int(ret)
> +     return
> +}
> +
> +//int libxl_get_online_cpus(libxl_ctx *ctx);
> +func (Ctx *Context) GetOnlineCpus() (onCpus int, err error) {
> +     err = Ctx.CheckOpen()
> +     if err != nil {
> +             return
> +     }
> +
> +     ret := C.libxl_get_online_cpus(Ctx.ctx)
> +     if ret < 0 {
> +             err = Error(-ret)
> +             return
> +     }
> +     onCpus = int(ret)
> +     return
> +}
> +
> +//int libxl_get_max_nodes(libxl_ctx *ctx);
> +func (Ctx *Context) GetMaxNodes() (maxNodes int, err error) {
> +     err = Ctx.CheckOpen()
> +     if err != nil {
> +             return
> +     }
> +     ret := C.libxl_get_max_nodes(Ctx.ctx)
> +     if ret < 0 {
> +             err = Error(-ret)
> +             return
> +     }
> +     maxNodes = int(ret)
> +     return
> +}
> +
> +//int libxl_get_free_memory(libxl_ctx *ctx, uint64_t *memkb);
> +func (Ctx *Context) GetFreeMemory() (memkb uint64, err error) {
> +     err = Ctx.CheckOpen()
> +     if err != nil {
> +             return
> +     }
> +     var cmem C.uint64_t
> +     ret := C.libxl_get_free_memory(Ctx.ctx, &cmem)
> +
> +     if ret < 0 {
> +             err = Error(-ret)
> +             return
> +     }
> +
> +     memkb = uint64(cmem)
> +     return
> +
> +}
> +
> +//int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
> +func (Ctx *Context) GetPhysinfo() (physinfo *Physinfo, err error) {
> +     err = Ctx.CheckOpen()
> +     if err != nil {
> +             return
> +     }
> +     var cphys C.libxl_physinfo
> +     C.libxl_physinfo_init(&cphys)
> +
> +     ret := C.libxl_get_physinfo(Ctx.ctx, &cphys)
> +
> +     if ret < 0 {
> +             err = Error(-ret)
> +             return
> +     }
> +     physinfo = cphys.toGo()
> +     C.libxl_physinfo_dispose(&cphys)

I think it would probably be more idiomatic to write "defer
C.libxl_physinfo_dispose(&cphys)" just after the libxl_physinfo_init() call.

If the init() actually allocated any memory, the current code would
return without disposing of it if there was an error.  `defer` avoids
that by ensuring that *all* return paths call the clean-up function.
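
I.e., something like this (untested, just to show the shape -- it's your
existing code with the dispose moved up and deferred):

    var cphys C.libxl_physinfo
    C.libxl_physinfo_init(&cphys)
    // Deferred, so it runs on both the error and the success return paths.
    defer C.libxl_physinfo_dispose(&cphys)

    ret := C.libxl_get_physinfo(Ctx.ctx, &cphys)
    if ret < 0 {
            err = Error(-ret)
            return
    }
    physinfo = cphys.toGo()
    return

Since toGo() copies everything out of cphys before the function returns,
disposing afterwards via defer is safe.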

Other than that, looks great, thanks!

 -George
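
P.S. For anyone who wants to kick the tyres, a rough usage sketch (untested;
it assumes the Open()/Close() helpers from earlier in the series and an
illustrative import path -- adjust to taste):

    package main

    import (
            "fmt"
            "log"

            "xenproject.org/xenlight" // import path is illustrative only
    )

    func main() {
            ctx := xenlight.Context{}
            // Open()/Close() are assumed from the earlier patch in the series.
            if err := ctx.Open(); err != nil {
                    log.Fatal(err)
            }
            defer ctx.Close()

            physinfo, err := ctx.GetPhysinfo()
            if err != nil {
                    log.Fatal(err)
            }
            fmt.Printf("cpus: %d, cpu khz: %d, free pages: %d\n",
                    physinfo.NrCpus, physinfo.CpuKhz, physinfo.FreePages)
    }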


_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
https://lists.xen.org/xen-devel

 

