Re: [Xen-devel] [PATCH] asm, x86: Set max CPUs to 512 instead of 256.
On Thu, Jan 22, 2015 at 05:04:12PM +0000, Andrew Cooper wrote:
> On 22/01/15 16:52, Konrad Rzeszutek Wilk wrote:
> > Contemporary servers now sport 480 CPUs or so. We should crank
> > up the default number of CPUs to a higher level to take advantage
> > of this without requiring the distro to use the 'max_phys_cpus'
> > override.
> >
> > Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@xxxxxxxxxx>
>
> /me would really like to try something that big out, but I have not had
> the opportunity yet to hit the 256 limit.

Here is what bloat-o-meter says (256 vs 512):

add/remove: 5/0 grow/shrink: 118/25 up/down: 230953/-711 (230242)
_csched2_priv 46176 141472 +95296
cpu_data 65536 131072 +65536
irq_stat 32768 65536 +32768
cpu_msrs 4096 8192 +4096
cpu_bit_bitmap 2080 4160 +2080
x86_acpiid_to_apicid 2048 4096 +2048
stack_base 2048 4096 +2048
saved_lvtpc 2048 4096 +2048
region 4096 6144 +2048
processor_powers 2048 4096 +2048
processor_pminfo 2048 4096 +2048
node_to_cpumask 2048 4096 +2048
idt_tables 2048 4096 +2048
idle_vcpu 2048 4096 +2048
cpufreq_drv_data 2048 4096 +2048
__per_cpu_offset 2048 4096 +2048
x86_cpu_to_apicid 1024 2048 +1024
prev_nmi_count 1024 2048 +1024
core_parking_cpunum 1024 2048 +1024
apicid_to_node 1024 2048 +1024
apic_version 1024 2048 +1024
cpu_to_node 256 512 +256
sched_move_domain 940 1105 +165
sched_init_vcpu 614 774 +160
phys_id_present_map 128 256 +128
phys_cpu_present_map 128 256 +128
apic_id_map 128 256 +128
cpu_disable_scheduler 596 711 +115
rcu_start_batch.clone - 106 +106
setup_IO_APIC 5553 5657 +104
init_one_irq_desc 205 307 +102
destroy_irq 347 435 +88
init_trace_bufs 160 240 +80
cpumask_clear - 80 +80
scrub_heap_pages 1843 1910 +67
init_IRQ 310 376 +66
set_nr_cpu_ids 101 160 +59
csched2_schedule 3006 3063 +57
__get_page_type 5663 5720 +57
do_domctl 6753 6808 +55
__cpu_disable 577 628 +51
domain_update_node_affinity 498 547 +49
alloc_heap_pages 1746 1794 +48
runq_tickle 1302 1349 +47
check_wakeup_from_wait 251 290 +39
cpumask_copy - 38 +38
cpumask_and - 38 +38
waiting_to_crash 32 64 +32
tsc_sync_cpu_mask 32 64 +32
tsc_check_cpumask 32 64 +32
tb_cpu_mask 32 64 +32
read_clocks_cpumask 32 64 +32
pit_broadcast_mask 32 64 +32
per_cpu__batch_mask 32 64 +32
mce_fatal_cpus 32 64 +32
init_mask 32 64 +32
frozen_cpus 32 64 +32
flush_cpumask 32 64 +32
dump_execstate_mask 32 64 +32
crash_saved_cpus 32 64 +32
cpupool_locked_cpus 32 64 +32
cpupool_free_cpus 32 64 +32
cpuidle_mwait_flags 32 64 +32
cpu_sibling_setup_map 32 64 +32
cpu_present_map 32 64 +32
cpu_online_map 32 64 +32
cpu_initialized 32 64 +32
call_data 56 88 +32
alloc_vcpu 685 717 +32
_rt_priv 88 120 +32
context_switch 4030 4056 +26
update_clusterinfo 298 322 +24
powernow_cpufreq_target 526 550 +24
arch_init_one_irq_desc 124 142 +18
smp_prepare_cpus 485 501 +16
send_IPI_mask_x2apic_cluster 445 461 +16
nmi_mce_softirq 178 194 +16
irq_move_cleanup_interrupt 632 648 +16
handle_hpet_broadcast 460 476 +16
csched_init 433 449 +16
csched_balance_cpumask 159 175 +16
cpu_smpboot_callback 621 637 +16
acpi_cpufreq_target 799 815 +16
_csched_cpu_pick 1358 1374 +16
__runq_pick 312 328 +16
__do_update_va_mapping 987 1003 +16
cpufreq_add_cpu 1238 1250 +12
xenctl_bitmap_to_cpumask 119 129 +10
csched_alloc_pdata 434 443 +9
shadow_alloc 794 802 +8
p2m_init_one 337 345 +8
msi_cpu_callback 121 129 +8
move_masked_irq 122 130 +8
invalidate_shadow_ldt 345 353 +8
init_irq_data 278 286 +8
hpet_broadcast_init 1072 1080 +8
find_non_smt 355 363 +8
desc_guest_eoi 243 251 +8
csched2_dump 401 409 +8
irq_guest_eoi_timer_fn 390 397 +7
core_parking_power 628 635 +7
core_parking_performance 628 635 +7
ept_p2m_init 160 166 +6
cpu_raise_softirq_batch_finish 205 211 +6
vcpu_reset 232 237 +5
__assign_irq_vector 1061 1066 +5
vcpu_set_affinity 225 229 +4
smp_scrub_heap_pages 435 439 +4
set_desc_affinity 216 220 +4
nr_cpumask_bits - 4 +4
mod_l4_entry 1235 1239 +4
irq_set_affinity 53 57 +4
csched2_vcpu_wake 337 341 +4
csched2_vcpu_insert 280 284 +4
timer_interrupt 338 341 +3
free_domain_pirqs 138 140 +2
vcpu_set_hard_affinity 138 139 +1
smp_call_function 144 145 +1
sedf_pick_cpu 163 164 +1
new_tlbflush_clock_period 102 103 +1
cpuidle_wakeup_mwait 165 166 +1
call_rcu 220 221 +1
alloc_cpu_id 84 85 +1
rt_init 164 163 -1
prepare_to_wait 493 492 -1
time_calibration 89 87 -2
enable_nonboot_cpus 183 180 -3
arch_memory_op 2632 2629 -3
vcpumask_to_pcpumask 495 491 -4
irq_complete_move 160 155 -5
smp_intr_init 250 244 -6
csched_vcpu_wake 1159 1153 -6
on_selected_cpus 226 218 -8
msi_compose_msg 343 335 -8
fixup_irqs 693 685 -8
dump_registers 253 245 -8
clear_irq_vector 560 552 -8
numa_initmem_init 374 365 -9
bind_irq_vector 469 457 -12
stop_machine_run 642 627 -15
map_ldt_shadow_page 719 703 -16
__pirq_guest_unbind 658 642 -16
cpupool_create 425 407 -18
shadow_write_p2m_entry 1015 988 -27
rcu_process_callbacks 493 438 -55
cpu_quiet.clone 151 62 -89
do_mmuext_op 7023 6848 -175
io_apic_get_unique_id 794 586 -208

_______________________________________________
Xen-devel mailing list
Xen-devel@xxxxxxxxxxxxx
http://lists.xen.org/xen-devel