
Re: [Xen-devel] [Patch][2/2][BIOS] Support BCV table



Hi,

Akio Takebe wrote:
>>> On 27/03/2009 00:48, "Akio Takebe" <takebe_akio@xxxxxxxxxxxxxx> wrote:
>>>
>>>>> You want qemu mucking about with the hvm_info_table? I don't think so.
>>>>> You'll have to consider an approach which doesn't touch qemu - you have
>>>>> some time anyway since this is not going in for 3.4.
>>>> I didn't want to modify qemu, but the virtual slot is decided in qemu.
>>>> Most of my patch modifies hw/pass-through.c.
>>>> Because hw/pass-through.c is used only by xen,
>>>> I thought it was acceptable to modify it.
>>>> So I modified qemu reluctantly. I'm sorry.
>>>> If we don't modify qemu, we need to read xenstore and so on
>>>> from hvmloader. Do you have any idea?
>>> I may be missing some of the motivation and higher-level design, which you
>>> may have to describe. I'm not really sure what the whole patchset was
>>> actually for and why we'd want it.
>>>
>> I have two problems.
>> 1. We cannot load many option ROMs.
>>  In the case of a native PnP BIOS, the BIOS loads an option ROM,
>>  lets it try to initialize its device, and can then free the memory
>>  of any unneeded ROM. So a native BIOS can load many option ROMs.
>>  But in the case of xen, option ROMs are loaded by hvmloader,
>>  so we cannot free the unneeded memory.
>>  The current hvmloader tries to load every option ROM, but if shadow
>>  memory doesn't have enough space, it stops loading option ROMs.
>>  So I wanted to load only the option ROMs needed for booting.
>>  As a side effect, the patch makes booting faster if you don't want
>>  to boot from a pass-through device.
>>  I think this is not an important problem;
>>  we can work around it by giving bootable devices low vslot numbers.
>>  
>> 2. We cannot retry with the next drive of HDD type.
>>  rombios tries to boot only from drive 0x80,
>>  so rombios cannot retry the boot with other drives.
>>  I want to be able to boot from another drive;
>>  that is useful when the first drive (0x80) is broken.
>>  Also, if acceptable, I want to implement an interactive boot key
>>  for pass-through devices.
>>
>>> But, for example, why not specify the vendor:dev identifier via
>>> hvm_info_table, rather than specifying the vslot?
>> Oh, I hadn't thought of that idea. I'll try it.
>>
> I haven't tried the vendor:dev id idea yet, but I made a patch (bcv.v2.patch)
> adding the feature of retrying the boot with the next drives (see the
> sketch below). What do you think about this patch?
> It doesn't add any new syntax; it just adds the retry feature.
> 
> Also I made another patch (support_interactive_boot_for_bcv.patch).
> It allows the user to select a bootable pass-through device with F12.
> support_interactive_boot_for_bcv.patch depends on bcv.v2.patch.
> 
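For clarity, the retry behaviour bcv.v2.patch adds amounts to something
like the loop below. This is only a sketch, not the actual rombios code;
boot_from_drive() is a hypothetical stand-in for rombios' INT 19h boot
attempt, stubbed out so the sketch is self-contained:

    #include <stdint.h>
    #include <stdio.h>

    /* Hypothetical stand-in for the INT 19h boot attempt; returns 0 on
     * success. Stubbed here so the sketch compiles and runs. */
    static int boot_from_drive(uint8_t drive)
    {
        printf("trying drive %#x\n", (unsigned)drive);
        return -1; /* pretend the drive is not bootable */
    }

    /* Try each HDD-type drive in turn instead of only 0x80. */
    static void try_hdd_boot(int num_drives)
    {
        uint8_t drive;

        for ( drive = 0x80; drive < 0x80 + num_drives; drive++ )
            if ( boot_from_drive(drive) == 0 )
                return; /* booted successfully */
        /* All drives failed; fall through to the next boot entry. */
    }

    int main(void)
    {
        try_hdd_boot(2);
        return 0;
    }
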
Just an RFC: I made a patch which lets us select bootable devices by
vendor_id:device_id.
It depends on the previous two patches (bcv.v2.patch and
support_interactive_boot_for_bcv.patch).
If acceptable, I will rework, clean up, and post them after xen-3.5.
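
For reference, with the series applied a bootable pass-through device
would be marked in the domain config roughly like this (the BDF value is
a made-up example; boot= is the new option proposed below):

    pci = [ '07:00.0,boot=1' ]

xend then looks up the device's vendor and device IDs and hands them to
hvmloader through the pci_vd[] array added to hvm_info_table, so
hvmloader loads option ROMs only for the devices marked bootable.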

Best Regards,

Akio Takebe
diff -r b6cf416223e3 tools/firmware/hvmloader/hvmloader.c
--- a/tools/firmware/hvmloader/hvmloader.c      Fri Mar 27 19:44:05 2009 +0900
+++ b/tools/firmware/hvmloader/hvmloader.c      Mon Mar 30 15:42:08 2009 +0900
@@ -478,6 +478,8 @@
     uint32_t option_rom_addr, rom_phys_addr = rom_base_addr;
     uint16_t vendor_id, device_id;
     uint8_t devfn, class;
+    uint32_t i, found;
+    uint32_t vendev;
 
     for ( devfn = 0; devfn < 128; devfn++ )
     {
@@ -487,6 +489,22 @@
 
         if ( (vendor_id == 0xffff) && (device_id == 0xffff) )
             continue;
+
+        found = 0;
+        for ( i = 0; i < 4; i++ ) {
+            vendev = hvm_info->pci_vd[i];
+            if ( vendev == 0 )
+                continue;
+            if ( (vendor_id == ((vendev >> 16) & 0xffff)) && (device_id == (vendev & 0xffff)) ) {
+                found = 1;
+                break;
+            }
+        }
+
+        if ( found == 0 )
+            continue;
+        else
+            printf("vendor=%x device=%x\n", (vendev >> 16) & 0xffff, vendev & 0xffff);
 
         /*
          * Currently only scan options from mass storage devices and serial
diff -r b6cf416223e3 tools/libxc/xc_hvm_build.c
--- a/tools/libxc/xc_hvm_build.c        Fri Mar 27 19:44:05 2009 +0900
+++ b/tools/libxc/xc_hvm_build.c        Mon Mar 30 15:42:08 2009 +0900
@@ -30,12 +30,37 @@
 #define NR_SPECIAL_PAGES     5
 #define special_pfn(x) (0xff000u - NR_SPECIAL_PAGES + (x))
 
-static void build_hvm_info(void *hvm_info_page, uint64_t mem_size)
+static int token_value(char *token)
+{
+    token = strchr(token, 'x') + 1;
+    return strtol(token, NULL, 16);
+}
+
+static int next_vd(char **str, int *vendor, int *device)
+{
+    char *token;
+
+    if ( !(*str) || !strchr(*str, ',') )
+        return 0;
+
+    token = *str;
+    *vendor  = token_value(token);
+    token = strchr(token, ',') + 1;
+    *device  = token_value(token);
+    token = strchr(token, ',');
+    *str = token ? token + 1 : NULL;
+
+    return 1;
+}
+
+static void build_hvm_info(void *hvm_info_page, uint64_t mem_size, char *pci_str)
 {
     struct hvm_info_table *hvm_info = (struct hvm_info_table *)
         (((unsigned char *)hvm_info_page) + HVM_INFO_OFFSET);
     uint64_t lowmem_end = mem_size, highmem_end = 0;
     uint8_t sum;
+    uint32_t vd = 0;
+    int vendor, device;
     int i;
 
     if ( lowmem_end > HVM_BELOW_4G_RAM_END )
@@ -60,10 +85,24 @@
     hvm_info->high_mem_pgend = highmem_end >> PAGE_SHIFT;
     hvm_info->reserved_mem_pgstart = special_pfn(0);
 
+    /* bootable pass-through devices */
+    i = 0;
+    while ( next_vd(&pci_str, &vendor, &device) )
+    {
+        vd |= (vendor & 0xffff) << 16;
+        vd |= (device & 0xffff);
+        hvm_info->pci_vd[i] = vd;
+        vd = 0;
+        i++;
+        if ( i == 4 )
+            break;
+    }
+
     /* Finish with the checksum. */
     for ( i = 0, sum = 0; i < hvm_info->length; i++ )
         sum += ((uint8_t *)hvm_info)[i];
     hvm_info->checksum = -sum;
+
 }
 
 static int loadelfimage(
@@ -102,7 +141,8 @@
 
 static int setup_guest(int xc_handle,
                        uint32_t dom, int memsize, int target,
-                       char *image, unsigned long image_size)
+                       char *image, unsigned long image_size,
+                       char *pci)
 {
     xen_pfn_t *page_array = NULL;
     unsigned long i, nr_pages = (unsigned long)memsize << (20 - PAGE_SHIFT);
@@ -132,6 +172,8 @@
     elf_parse_binary(&elf);
     v_start = 0;
     v_end = (unsigned long long)memsize << 20;
+
+    PERROR("%s: BBB pci=%s\n",__func__, pci);
 
     if ( xc_version(xc_handle, XENVER_capabilities, &caps) != 0 )
     {
@@ -248,7 +290,7 @@
               xc_handle, dom, PAGE_SIZE, PROT_READ | PROT_WRITE,
               HVM_INFO_PFN)) == NULL )
         goto error_out;
-    build_hvm_info(hvm_info_page, v_end);
+    build_hvm_info(hvm_info_page, v_end, pci);
     munmap(hvm_info_page, PAGE_SIZE);
 
     /* Map and initialise shared_info page. */
@@ -325,21 +367,25 @@
     free(page_array);
     return -1;
 }
+extern FILE *xc_dom_logfile;
+extern void xc_dom_loginit(void);
 
 static int xc_hvm_build_internal(int xc_handle,
                                  uint32_t domid,
                                  int memsize,
                                  int target,
                                  char *image,
-                                 unsigned long image_size)
+                                 unsigned long image_size,
+                                 char *pci)
 {
+    xc_dom_loginit();
     if ( (image == NULL) || (image_size == 0) )
     {
         ERROR("Image required");
         return -1;
     }
 
-    return setup_guest(xc_handle, domid, memsize, target, image, image_size);
+    return setup_guest(xc_handle, domid, memsize, target, image, image_size, pci);
 }
 
 static inline int is_loadable_phdr(Elf32_Phdr *phdr)
@@ -364,7 +410,7 @@
          ((image = xc_read_image(image_name, &image_size)) == NULL) )
         return -1;
 
-    sts = xc_hvm_build_internal(xc_handle, domid, memsize, memsize, image, image_size);
+    sts = xc_hvm_build_internal(xc_handle, domid, memsize, memsize, image, image_size, NULL);
 
     free(image);
 
@@ -381,7 +427,8 @@
                            uint32_t domid,
                            int memsize,
                            int target,
-                           const char *image_name)
+                           const char *image_name,
+                           char *pci)
 {
     char *image;
     int  sts;
@@ -391,7 +438,7 @@
          ((image = xc_read_image(image_name, &image_size)) == NULL) )
         return -1;
 
-    sts = xc_hvm_build_internal(xc_handle, domid, memsize, target, image, image_size);
+    sts = xc_hvm_build_internal(xc_handle, domid, memsize, target, image, image_size, pci);
 
     free(image);
 
@@ -427,7 +474,7 @@
     }
 
     sts = xc_hvm_build_internal(xc_handle, domid, memsize, memsize,
-                                img, img_len);
+                                img, img_len, NULL);
 
     /* xc_inflate_buffer may return the original buffer pointer (for
        for already inflated buffers), so exercise some care in freeing */
diff -r b6cf416223e3 tools/libxc/xenguest.h
--- a/tools/libxc/xenguest.h    Fri Mar 27 19:44:05 2009 +0900
+++ b/tools/libxc/xenguest.h    Mon Mar 30 15:42:08 2009 +0900
@@ -134,7 +134,8 @@
                             uint32_t domid,
                             int memsize,
                             int target,
-                            const char *image_name);
+                            const char *image_name,
+                            char *pci);
 
 int xc_hvm_build_mem(int xc_handle,
                      uint32_t domid,
diff -r b6cf416223e3 tools/python/xen/lowlevel/xc/xc.c
--- a/tools/python/xen/lowlevel/xc/xc.c Fri Mar 27 19:44:05 2009 +0900
+++ b/tools/python/xen/lowlevel/xc/xc.c Mon Mar 30 15:42:08 2009 +0900
@@ -891,20 +891,21 @@
 #endif
     char *image;
     int memsize, target=-1, vcpus = 1, acpi = 0, apic = 1;
+    char *pci = NULL;
 
     static char *kwd_list[] = { "domid",
                                 "memsize", "image", "target", "vcpus", "acpi",
-                                "apic", NULL };
-    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iis|iiii", kwd_list,
+                                "apic", "pci", NULL };
+    if ( !PyArg_ParseTupleAndKeywords(args, kwds, "iis|iiiis", kwd_list,
                                       &dom, &memsize, &image, &target, &vcpus,
-                                      &acpi, &apic) )
+                                      &acpi, &apic, &pci) )
         return NULL;
 
     if ( target == -1 )
         target = memsize;
 
     if ( xc_hvm_build_target_mem(self->xc_handle, dom, memsize,
-                                 target, image) != 0 )
+                                 target, image, pci) != 0 )
         return pyxc_error_to_exception();
 
 #if !defined(__ia64__)
diff -r b6cf416223e3 tools/python/xen/xend/image.py
--- a/tools/python/xen/xend/image.py    Fri Mar 27 19:44:05 2009 +0900
+++ b/tools/python/xen/xend/image.py    Mon Mar 30 15:42:08 2009 +0900
@@ -41,6 +41,7 @@
 from xen.xend import XendOptions
 from xen.util import oshelp
 from xen.util import utils
+from xen.util import pci as PciUtil
 from xen.xend import osdep
 
 xc = xen.lowlevel.xc.xc()
@@ -49,6 +50,34 @@
 
 sentinel_path_prefix = '/var/run/xend/dm-'
 sentinel_fifos_inuse = { }
+
+def comma_sep_kv_to_dict(c):
+    """Convert comma-separated, equals-separated key-value pairs into a
+    dictionary.
+    """
+    d = {}
+    c = c.strip()
+    if len(c) > 0:
+        a = c.split(',')
+        for b in a:
+            if b.find('=') == -1:
+                err("%s should be a pair, separated by an equals sign." % b)
+            (k, v) = b.split('=', 1)
+            k = k.strip()
+            v = v.strip()
+            d[k] = v
+    log.debug("AAA: d = %s " % d.keys())
+    return d
+
+def parse_hex(val):
+    try:
+        if isinstance(val, types.StringTypes):
+            return int(val, 16)
+        else:
+            return val
+    except ValueError:
+        return None
+
 
 def cleanup_stale_sentinel_fifos():
     for path in glob.glob(sentinel_path_prefix + '*.fifo'):
@@ -741,6 +770,7 @@
         self.apic = int(vmConfig['platform'].get('apic', 0))
         self.acpi = int(vmConfig['platform'].get('acpi', 0))
         self.guest_os_type = vmConfig['platform'].get('guest_os_type')
+        self.pci = vmConfig['platform'].get('pci')
 
 
     # Return a list of cmd line args to the device models based on the
@@ -839,6 +869,8 @@
 
         memmax_mb = self.getRequiredMaximumReservation() / 1024
         mem_mb = self.getRequiredInitialReservation() / 1024
+        pci_str = ""
+        PciUtil.create_lspci_info()
 
         log.debug("domid          = %d", self.vm.getDomid())
         log.debug("image          = %s", self.loader)
@@ -848,6 +880,15 @@
         log.debug("vcpus          = %d", self.vm.getVCpuCount())
         log.debug("acpi           = %d", self.acpi)
         log.debug("apic           = %d", self.apic)
+        log.debug(self.pci)
+        for (d, b, s, f, vslot, opts) in self.pci:
+            dic = comma_sep_kv_to_dict(opts)
+            if 'boot' in dic.keys():
+                if int(dic['boot'],10) > 0:
+                    pci_dev = PciUtil.PciDevice(int(d, 16), int(b, 16), int(s, 16), int(f, 16))
+                    pci_str += "%s,%s" % (hex(pci_dev.vendor), hex(pci_dev.device))
+
+        log.debug("pci_str        = %s", pci_str)
 
         rc = xc.hvm_build(domid          = self.vm.getDomid(),
                           image          = self.loader,
@@ -855,7 +896,9 @@
                           target         = mem_mb,
                           vcpus          = self.vm.getVCpuCount(),
                           acpi           = self.acpi,
-                          apic           = self.apic)
+                          apic           = self.apic,
+                          pci            = pci_str)
+
         rc['notes'] = { 'SUSPEND_CANCEL': 1 }
 
         rc['store_mfn'] = xc.hvm_get_param(self.vm.getDomid(),
diff -r b6cf416223e3 tools/python/xen/xm/create.py
--- a/tools/python/xen/xm/create.py     Fri Mar 27 19:44:05 2009 +0900
+++ b/tools/python/xen/xm/create.py     Mon Mar 30 15:42:08 2009 +0900
@@ -323,7 +323,7 @@
           backend driver domain to use for the disk.
           The option may be repeated to add more than one disk.""")
 
-gopts.var('pci', val='BUS:DEV.FUNC[@VSLOT][,msitranslate=0|1][,power_mgmt=0|1]',
+gopts.var('pci', val='BUS:DEV.FUNC[@VSLOT][,msitranslate=0|1][,power_mgmt=0|1][,boot=0|1]',
           fn=append_value, default=[],
           use="""Add a PCI device to a domain, using given params (in hex).
           For example 'pci=c0:02.1'.
@@ -334,7 +334,10 @@
           translated from physical MSI, HVM only. Default is 1.
           The option may be repeated to add more than one pci device.
           If power_mgmt is set, the guest OS will be able to program the power
-          states D0-D3hot of the device, HVM only. Default=0.""")
+          states D0-D3hot of the device, HVM only. Default=0.
+          The boot option can be used for only one pci device.
+          If boot is set, the guest BIOS boots the OS from the pass-through
+          device. This option is intended for SAN/SAS boot.""")
 
 gopts.var('vscsi', val='PDEV,VDEV[,DOM]',
           fn=append_value, default=[],
@@ -704,7 +707,7 @@
         d = comma_sep_kv_to_dict(opts)
 
         def f(k):
-            if k not in ['msitranslate', 'power_mgmt']:
+            if k not in ['msitranslate', 'power_mgmt', 'boot']:
                 err('Invalid pci option: ' + k)
 
             config_pci_opts.append([k, d[k]])
diff -r b6cf416223e3 xen/include/public/hvm/hvm_info_table.h
--- a/xen/include/public/hvm/hvm_info_table.h   Fri Mar 27 19:44:05 2009 +0900
+++ b/xen/include/public/hvm/hvm_info_table.h   Mon Mar 30 15:42:08 2009 +0900
@@ -64,6 +64,11 @@
      *    RAM above 4GB
      */
     uint32_t    high_mem_pgend;
+    /*
+     * Vendor/device IDs ((vendor_id << 16) | device_id) of bootable
+     * pass-through devices; used by hvmloader for option ROM loading.
+     */
+    uint32_t    pci_vd[4];
 };
 
 #endif /* __XEN_PUBLIC_HVM_HVM_INFO_TABLE_H__ */
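
To make the data flow easier to follow, here is a standalone sketch
(not part of the patch) of how the pci_vd[] entries are packed from the
string xend builds and then matched in hvmloader. The string format and
the (vendor << 16) | device layout come from the patch above; the
example IDs are made up:

    #include <stdio.h>
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    #define NR_BOOT_VD 4    /* matches pci_vd[4] in hvm_info_table */

    /* Parse one "0xVVVV,0xDDDD" pair, following next_vd() above. */
    static int next_vd(char **str, int *vendor, int *device)
    {
        char *p = *str;

        if ( !p || !strchr(p, ',') )
            return 0;
        *vendor = (int)strtol(strchr(p, 'x') + 1, NULL, 16);
        p = strchr(p, ',') + 1;
        *device = (int)strtol(strchr(p, 'x') + 1, NULL, 16);
        p = strchr(p, ',');
        *str = p ? p + 1 : NULL;
        return 1;
    }

    int main(void)
    {
        /* A string as xend's image.py would build it (IDs made up). */
        char buf[] = "0x8086,0x10d3";
        char *s = buf;
        uint32_t pci_vd[NR_BOOT_VD] = { 0 };
        int vendor, device, i = 0;

        /* Pack as build_hvm_info() does: vendor in the high 16 bits. */
        while ( i < NR_BOOT_VD && next_vd(&s, &vendor, &device) )
            pci_vd[i++] = ((uint32_t)(vendor & 0xffff) << 16) |
                          (device & 0xffff);

        /* Match a scanned device the way the hvmloader hunk does. */
        uint16_t scan_vendor = 0x8086, scan_device = 0x10d3;
        for ( i = 0; i < NR_BOOT_VD; i++ )
        {
            uint32_t vd = pci_vd[i];
            if ( vd != 0 && scan_vendor == ((vd >> 16) & 0xffff) &&
                 scan_device == (vd & 0xffff) )
                printf("bootable: vendor=%04x device=%04x\n",
                       (unsigned)(vd >> 16), (unsigned)(vd & 0xffff));
        }
        return 0;
    }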