2017 Sep 26
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...e of your current solution to be terrible.
e.g. a normal perf PMI does at least 1 MSR read and 4+ MSR writes
for a single counter. With multiple counters it gets worse.
For each of those you'll need to exit. Adding something
to the entry/exit list is similar to the cost of doing
explicit RD/WRMSRs.
On Skylake we have 32*3=96 MSRs for the LBRs.
So with the 5 exits and entries, you're essentially doing
5*2*96=960 extra MSR accesses for each PMI.
An MSR access costs at least 100 cycles, and writes are far more
expensive.
-Andi
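
For scale, a quick back-of-the-envelope restatement of the arithmetic above. All constants are taken from the mail itself (32 LBR entries, 3 MSRs each, 5 exit/entry round trips, ~100 cycles per access as a lower bound); the program only multiplies them out:

/*
 * Rough cost model for the numbers quoted above. These are the mail's
 * own figures, not measurements; the ~100 cycles/access is a lower
 * bound and ignores the extra penalty for MSR writes.
 */
#include <stdio.h>

int main(void)
{
	const int lbr_entries = 32;	/* Skylake LBR depth */
	const int msrs_per_entry = 3;	/* FROM, TO, INFO */
	const int round_trips = 5;	/* exits + entries per PMI */
	const int cycles_per_msr = 100;	/* lower bound per access */

	int lbr_msrs = lbr_entries * msrs_per_entry;		/* 96 */
	int extra_accesses = round_trips * 2 * lbr_msrs;	/* save + restore: 960 */

	printf("LBR MSRs: %d\n", lbr_msrs);
	printf("extra MSR accesses per PMI: %d\n", extra_accesses);
	printf("lower-bound cycle cost per PMI: %d\n",
	       extra_accesses * cycles_per_msr);
	return 0;
}

At roughly 100 cycles per access that already puts each PMI near 96,000 extra cycles, before counting the higher cost of the writes.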
2017 Sep 25
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
> +static void auto_switch_lbr_msrs(struct vcpu_vmx *vmx)
> +{
> +	int i;
> +	struct perf_lbr_stack lbr_stack;
> +
> +	perf_get_lbr_stack(&lbr_stack);
> +
> +	add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
> +	add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
> +
> +	for (i = 0; i < lbr_stack.lbr_nr; i++) {
> +		add_atomic_switch_msr(vmx,
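
The archived snippet is cut off in the middle of the loop. Presumably each iteration registers the per-entry LBR MSRs with the VMX atomic switch list; the sketch below is a reconstruction under that assumption, and the lbr_from/lbr_to/lbr_info field names are guesses, not taken from the visible excerpt:

	/*
	 * Hedged reconstruction of the truncated loop body: add each
	 * per-entry LBR MSR to the atomic switch list so it is
	 * saved/restored on every VM entry and exit. Field names beyond
	 * lbr_nr and lbr_tos are assumed, not quoted from the patch.
	 */
	for (i = 0; i < lbr_stack.lbr_nr; i++) {
		add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
		add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
		if (lbr_stack.lbr_info)
			add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i, 0, 0);
	}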