2017 Sep 25
10
[PATCH v1 0/4] Enable LBR for the guest
This patch series enables the Last Branch Recording feature for the
guest. Instead of trapping each LBR stack MSR access, the MSRs are
passed through to the guest. Those MSRs are switched (i.e. loaded and
saved) on VMExit and VMEntry.
Test:
Try "perf record -b ./test_program" on guest.
Wei Wang (4):
KVM/vmx: re-write the msr auto switch feature
KVM/vmx: auto switch
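"Passed through" here means the read/write intercept bits for the LBR MSRs are cleared in the VMX MSR bitmap, so guest RDMSR/WRMSR on them no longer cause a VMExit; only the register values are swapped on the transitions. A minimal sketch of that half, assuming the vmx_disable_intercept_for_msr() helper that vmx.c carried at the time; the wrapper below and its loop are illustrative, not code from the series:

/* Illustrative only: stop intercepting guest accesses to the LBR MSRs. */
static void lbr_disable_intercepts(struct perf_lbr_stack *s)
{
	int i;

	vmx_disable_intercept_for_msr(MSR_LBR_SELECT, false);
	vmx_disable_intercept_for_msr(s->lbr_tos, false);
	for (i = 0; i < s->lbr_nr; i++) {
		vmx_disable_intercept_for_msr(s->lbr_from + i, false);
		vmx_disable_intercept_for_msr(s->lbr_to + i, false);
		if (s->lbr_info)
			vmx_disable_intercept_for_msr(s->lbr_info + i, false);
	}
}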
2017 Sep 25
0
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...vcpu_vmx *vmx)
+{
+	int i;
+	struct perf_lbr_stack lbr_stack;
+
+	perf_get_lbr_stack(&lbr_stack);
+
+	add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
+	add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
+
+	for (i = 0; i < lbr_stack.lbr_nr; i++) {
+		add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
+		add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
+		if (lbr_stack.lbr_info)
+			add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i, 0,
+					      0);
+	}
+}
+
#define VMX_XSS_EXIT_BITMAP 0
/*
* Sets up the vmcs for emulated real mode.
@@ -5508,6 +5530,9 @@ static int vmx_v...
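struct perf_lbr_stack and perf_get_lbr_stack() come from an earlier patch in the series that is not among these results; from the usage above, the structure must look roughly like the sketch below (the field names are taken from the code, the types are a guess):

/* Sketch inferred from usage, not the series' actual definition. */
struct perf_lbr_stack {
	int		lbr_nr;		/* number of LBR stack entries */
	unsigned long	lbr_tos;	/* address of the top-of-stack MSR */
	unsigned long	lbr_from;	/* address of the first FROM_IP MSR */
	unsigned long	lbr_to;		/* address of the first TO_IP MSR */
	unsigned long	lbr_info;	/* first LBR_INFO MSR, 0 if absent */
};

The FROM, TO and INFO MSRs occupy consecutive addresses, one per LBR entry, which is why lbr_from + i, lbr_to + i and lbr_info + i address the i-th entry, and why the INFO MSRs are added only when the CPU has them (lbr_info != 0).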
2017 Sep 25
1
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...f_lbr_stack lbr_stack;
> +
> +	perf_get_lbr_stack(&lbr_stack);
> +
> +	add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
> +	add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
> +
> +	for (i = 0; i < lbr_stack.lbr_nr; i++) {
> +		add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
> +		add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
> +		if (lbr_stack.lbr_info)
> +			add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i, 0,
> +					      0);
> +	}
> +}
> +
> #define VMX_XSS_EXIT_BITMAP 0
> /*
>  * Sets up the vmcs for emulated...
2017 Sep 25
2
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...f_lbr_stack lbr_stack;
> +
> +	perf_get_lbr_stack(&lbr_stack);
> +
> +	add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
> +	add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
> +
> +	for (i = 0; i < lbr_stack.lbr_nr; i++) {
> +		add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
> +		add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
> +		if (lbr_stack.lbr_info)
> +			add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i, 0,
> +					      0);
> +	}
That will be really expensive and add a lot of overhead to every entry/exit.
perf can already con...
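To put a number on "really expensive" (the arithmetic is mine, not the reviewer's, and assumes a Skylake-class CPU with a 32-entry LBR stack):

/* FROM + TO + INFO per LBR entry, plus MSR_LBR_SELECT and the TOS MSR. */
enum {
	LBR_ENTRIES	= 32,
	MSRS_PER_ENTRY	= 3,
	LBR_MSR_TOTAL	= LBR_ENTRIES * MSRS_PER_ENTRY + 2,	/* = 98 */
};

Each of those 98 MSRs is loaded on every VMEntry and saved on every VMExit through the atomic switch lists, whether or not the guest is using the LBRs, which is the overhead being objected to.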
2017 Sep 26
0
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...+
>> +	perf_get_lbr_stack(&lbr_stack);
>> +
>> +	add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
>> +	add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
>> +
>> +	for (i = 0; i < lbr_stack.lbr_nr; i++) {
>> +		add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
>> +		add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
>> +		if (lbr_stack.lbr_info)
>> +			add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i, 0,
>> +					      0);
>> +	}
> That will be really expensive and add a lot of overhead to every entry/ex...