Displaying 8 results from an estimated 8 matches for "lbr_nr".
2017 Sep 25
10
[PATCH v1 0/4] Enable LBR for the guest
This patch series enables the Last Branch Recording feature for the
guest. Instead of trapping each LBR stack MSR access, the MSRs are
passed through to the guest. Those MSRs are switched (i.e., loaded and
saved) on VMExit and VMEntry.
Test:
Try "perf record -b ./test_program" on guest.
Wei Wang (4):
KVM/vmx: re-write the msr auto switch feature
KVM/vmx: auto switch
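The approach in the cover letter rests on two VMX mechanisms: the MSR
bitmap, which decides whether a guest RDMSR/WRMSR traps, and the atomic
MSR switch lists processed at VM entry/exit. As a rough sketch of the
first half (the bitmap layout follows the Intel SDM; the helper names
below are illustrative, not the code from this series), making an MSR
passthrough might look like:

/*
 * Hedged sketch: disable read/write intercepts for one MSR in a VMX
 * MSR bitmap, following the layout in the Intel SDM (one 4 KiB page:
 * read-low at 0x000, read-high at 0x400, write-low at 0x800,
 * write-high at 0xC00). Names here are illustrative, not KVM's.
 */
#include <stdint.h>

static void msr_bitmap_clear(uint8_t *bitmap, uint32_t off, uint32_t idx)
{
        bitmap[off + idx / 8] &= ~(uint8_t)(1u << (idx % 8));
}

/* Stop intercepting guest reads and writes of @msr. */
static void passthrough_msr(uint8_t *msr_bitmap, uint32_t msr)
{
        if (msr <= 0x1fff) {                    /* low MSR range */
                msr_bitmap_clear(msr_bitmap, 0x000, msr);  /* read  */
                msr_bitmap_clear(msr_bitmap, 0x800, msr);  /* write */
        } else if (msr >= 0xc0000000 && msr <= 0xc0001fff) {
                uint32_t idx = msr - 0xc0000000; /* high MSR range */
                msr_bitmap_clear(msr_bitmap, 0x400, idx);  /* read  */
                msr_bitmap_clear(msr_bitmap, 0xc00, idx);  /* write */
        }
}

Once the intercept bits are clear, the guest reads and writes the LBR
MSRs at native speed; the flip side is that the host no longer sees
those accesses, which is why the series has to save and restore the
MSRs itself at the VM transitions, as patch 4/4 below does.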
2017 Sep 25
10
[PATCH v1 0/4] Enable LBR for the guest
This patch series enables the Last Branch Recording feature for the
guest. Instead of trapping each LBR stack MSR access, the MSRs are
passthroughed to the guest. Those MSRs are switched (i.e. load and
saved) on VMExit and VMEntry.
Test:
Try "perf record -b ./test_program" on guest.
Wei Wang (4):
KVM/vmx: re-write the msr auto switch feature
KVM/vmx: auto switch
2017 Sep 25
0
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...X_VALUE);
}
+static void auto_switch_lbr_msrs(struct vcpu_vmx *vmx)
+{
+        int i;
+        struct perf_lbr_stack lbr_stack;
+
+        perf_get_lbr_stack(&lbr_stack);
+
+        add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
+        add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
+
+        for (i = 0; i < lbr_stack.lbr_nr; i++) {
+                add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
+                add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
+                if (lbr_stack.lbr_info)
+                        add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i,
+                                              0, 0);
+        }
+}
+
#define VMX_XSS_EXIT_BITMAP 0
/*
* Sets up the vmcs for e...
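For context, add_atomic_switch_msr() above queues MSRs on the VMX
auto-switch lists (the VM-entry MSR-load and VM-exit MSR-load/store
areas), whose 16-byte entry format is defined by the Intel SDM. A
minimal sketch of appending to such a list, assuming an illustrative
wrapper and capacity rather than KVM's actual bookkeeping:

#include <stdint.h>

/* 16-byte entry format of the VMX MSR-load/store areas (Intel SDM). */
struct vmx_msr_entry {
        uint32_t index;     /* MSR number, e.g. MSR_LBR_SELECT */
        uint32_t reserved;  /* must be zero */
        uint64_t value;     /* value to load (or slot to store into) */
};

/* Illustrative wrapper; the SDM recommends at most 512 entries. */
struct msr_autoload_list {
        unsigned int nr;
        struct vmx_msr_entry entries[512];
};

/* Append @msr with initial @value, reusing an existing slot if present. */
static int autoload_add(struct msr_autoload_list *list,
                        uint32_t msr, uint64_t value)
{
        unsigned int i;

        for (i = 0; i < list->nr; i++)
                if (list->entries[i].index == msr)
                        break;
        if (i == list->nr) {
                if (list->nr >= 512)
                        return -1;      /* list full */
                list->nr++;
        }
        list->entries[i] = (struct vmx_msr_entry){ .index = msr,
                                                   .value = value };
        return 0;
}

The real helper takes both a guest and a host value (the (0, 0)
arguments in the patch) and updates both lists, so the CPU swaps every
queued MSR automatically on each VM entry and VM exit, with no
per-access trapping.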
2017 Sep 25
1
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...truct vcpu_vmx *vmx)
> +{
> +        int i;
> +        struct perf_lbr_stack lbr_stack;
> +
> +        perf_get_lbr_stack(&lbr_stack);
> +
> +        add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
> +        add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
> +
> +        for (i = 0; i < lbr_stack.lbr_nr; i++) {
> +                add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
> +                add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
> +                if (lbr_stack.lbr_info)
> +                        add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i,
> +                                              0, 0);
> +        }
> +}
> +
> #define VMX_XSS...
2017 Sep 25
2
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...truct vcpu_vmx *vmx)
> +{
> +        int i;
> +        struct perf_lbr_stack lbr_stack;
> +
> +        perf_get_lbr_stack(&lbr_stack);
> +
> +        add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
> +        add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
> +
> +        for (i = 0; i < lbr_stack.lbr_nr; i++) {
> +                add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
> +                add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
> +                if (lbr_stack.lbr_info)
> +                        add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i,
> +                                              0, 0);
> +        }
That will be really expensive and a...
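The cost concern is easy to quantify: every LBR entry contributes a
FROM and a TO MSR, plus an INFO MSR where one exists, on top of
LBR_SELECT and the TOS pointer, and all of them are loaded and saved on
every VM entry and exit. A back-of-the-envelope count (the 32-entry
depth is an assumption about a Skylake-class CPU, not something stated
in the thread):

#include <stdio.h>

int main(void)
{
        /* Assumed Skylake-like LBR stack: 32 entries, LBR_INFO present. */
        unsigned int lbr_nr = 32, have_info = 1;

        /* LBR_SELECT + LBR_TOS + per-entry FROM/TO(/INFO) MSRs. */
        unsigned int msrs = 2 + lbr_nr * (2 + have_info);

        printf("MSRs switched per VM entry/exit: %u\n", msrs); /* 98 */
        return 0;
}

Close to a hundred MSR loads and stores on each transition is what
prompts the objection quoted above.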
2017 Sep 26
0
[PATCH v1 4/4] KVM/vmx: enable lbr for the guest
...
>> +        int i;
>> +        struct perf_lbr_stack lbr_stack;
>> +
>> +        perf_get_lbr_stack(&lbr_stack);
>> +
>> +        add_atomic_switch_msr(vmx, MSR_LBR_SELECT, 0, 0);
>> +        add_atomic_switch_msr(vmx, lbr_stack.lbr_tos, 0, 0);
>> +
>> +        for (i = 0; i < lbr_stack.lbr_nr; i++) {
>> +                add_atomic_switch_msr(vmx, lbr_stack.lbr_from + i, 0, 0);
>> +                add_atomic_switch_msr(vmx, lbr_stack.lbr_to + i, 0, 0);
>> +                if (lbr_stack.lbr_info)
>> +                        add_atomic_switch_msr(vmx, lbr_stack.lbr_info + i,
>> +                                              0, 0);
>> +        }
> That wi...