search for: cr0

Displaying 20 results from an estimated 1085 matches for "cr0".

2007 Apr 18
2
[PATCH] Use correct macros in raid code, not raw asm
...Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> diff -r 4ff048622391 drivers/md/raid6x86.h --- a/drivers/md/raid6x86.h Thu Dec 28 16:52:54 2006 +1100 +++ b/drivers/md/raid6x86.h Fri Dec 29 10:09:38 2006 +1100 @@ -75,13 +75,14 @@ static inline unsigned long raid6_get_fp unsigned long cr0; preempt_disable(); - asm volatile("mov %%cr0,%0 ; clts" : "=r" (cr0)); + cr0 = read_cr0(); + clts(); return cr0; } static inline void raid6_put_fpu(unsigned long cr0) { - asm volatile("mov %0,%%cr0" : : "r" (cr0)); + write_cr0(cr0); preempt_enab...
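For reference, the two helpers after this change look roughly as follows; this is a sketch reconstructed from the diff fragment above (the header names are assumptions about the 2006-era tree), not a verbatim copy of raid6x86.h:

#include <linux/preempt.h>
#include <asm/system.h>         /* read_cr0(), write_cr0(), clts() on kernels of that era */

static inline unsigned long raid6_get_fpu(void)
{
        unsigned long cr0;

        preempt_disable();
        cr0 = read_cr0();       /* remember CR0, in particular the TS bit */
        clts();                 /* clear CR0.TS so FPU/SSE use does not fault */
        return cr0;
}

static inline void raid6_put_fpu(unsigned long cr0)
{
        write_cr0(cr0);         /* restore CR0, possibly re-setting TS */
        preempt_enable();
}

Using the accessors instead of raw "mov %cr0" asm lets the same source work both natively and under paravirt_ops, which is the point of the patch.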
2007 Apr 18
2
[PATCH] Use correct macros in raid code, not raw asm
...Signed-off-by: Rusty Russell <rusty@rustcorp.com.au> diff -r 4ff048622391 drivers/md/raid6x86.h --- a/drivers/md/raid6x86.h Thu Dec 28 16:52:54 2006 +1100 +++ b/drivers/md/raid6x86.h Fri Dec 29 10:09:38 2006 +1100 @@ -75,13 +75,14 @@ static inline unsigned long raid6_get_fp unsigned long cr0; preempt_disable(); - asm volatile("mov %%cr0,%0 ; clts" : "=r" (cr0)); + cr0 = read_cr0(); + clts(); return cr0; } static inline void raid6_put_fpu(unsigned long cr0) { - asm volatile("mov %0,%%cr0" : : "r" (cr0)); + write_cr0(cr0); preempt_enab...
2013 Jan 07
9
[PATCH v2 0/3] nested vmx bug fixes
Changes from v1 to v2: - Use a macro to replace the hardcode in patch 1/3. This patchset fixes issues about IA32_VMX_MISC MSR emulation, VMCS guest area synchronization about PAGE_FAULT_ERROR_CODE_MASK/PAGE_FAULT_ERROR_CODE_MATCH, and CR0/CR4 emulation. Please help to review and pull. Thanks, Dongxiao Dongxiao Xu (3): nested vmx: emulate IA32_VMX_MISC MSR nested vmx: synchronize page fault error code match and mask nested vmx: fix CR0/CR4 emulation xen/arch/x86/hvm/vmx/vvmx.c | 136 +++++++++++++++++++++++++++++---...
2006 Mar 17
3
[LLVMdev] Stupid '-load-vn -licm' question (LLVM 1.6)
...ubyte 97, label %ret_true ubyte 98, label %ret_true ] Unfortunately, this generates really weird code on the LLVM 1.6 PowerPC backend: LBB_matches_1: ; regex6 lbz r4, 0(r3) LBB_matches_2: ; NodeBlock rlwinm r5, r4, 0, 24, 31 cmplwi cr0, r5, 98 blt cr0, LBB_matches_4 ; LeafBlock LBB_matches_3: ; LeafBlock1 rlwinm r4, r4, 0, 24, 31 cmpwi cr0, r4, 98 beq cr0, LBB_matches_8 ; ret_true b LBB_matches_5 ; NewDefault LBB_matches_4: ; LeafBlock rlwinm r4, r4, 0, 24, 31 cmp...
2012 Jun 29
0
[PATCH] linux-2.6.18/x86: improve CR0 read/write handling
With the only bit in CR0 permitted to be changed by PV guests being TS, optimize the handling towards that: Keep a cached value in a per-CPU variable, and issue HYPERVISOR_fpu_taskswitch hypercalls for updates in all but the unusual case should something in the system still try to modify another bit (the attempt of which w...
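A minimal sketch of the approach described above (illustrative only: the per-CPU variable and the slow-path helper are made-up names, and the per-CPU accessors shown are the modern ones rather than what a 2.6.18 tree would use):

static DEFINE_PER_CPU(unsigned long, cached_cr0);

static unsigned long xen_read_cr0(void)
{
        return this_cpu_read(cached_cr0);       /* no trap, no hypercall */
}

static void xen_write_cr0(unsigned long val)
{
        unsigned long old = this_cpu_read(cached_cr0);

        this_cpu_write(cached_cr0, val);
        if (likely(((old ^ val) & ~X86_CR0_TS) == 0)) {
                /* Common case: only TS changed; a PV guest may toggle it
                 * cheaply via the fpu_taskswitch hypercall. */
                HYPERVISOR_fpu_taskswitch(!!(val & X86_CR0_TS));
        } else {
                /* Unusual case: something tried to change another CR0 bit. */
                xen_write_cr0_fallback(val);    /* hypothetical slow path */
        }
}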
2006 Jun 14
8
WP flag in CR0, setting
Hello! I have a slight problem in my guest port with the WP bit in CR0. The original kernel maps certain kernel pages to user-mode read-only and relies on the kernel being able to modify these despite the read-only bit being set in the pages. This in turn requires that the WP bit is unset in CR0. Unfortunately, Xen doesn't allow the WP bit to be zeroed beca...
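For context, this is the native-hardware pattern such a kernel relies on: with CR0.WP clear, ring-0 writes ignore the read-only bit in the PTEs. The helper names are illustrative (assuming the usual x86 accessor headers), and under a Xen PV guest the write_cr0() below is exactly the operation being refused:

static inline unsigned long kernel_disable_wp(void)
{
        unsigned long cr0 = read_cr0();

        write_cr0(cr0 & ~X86_CR0_WP);   /* kernel may now write read-only pages */
        return cr0;
}

static inline void kernel_restore_wp(unsigned long cr0)
{
        write_cr0(cr0);                 /* put WP back the way it was */
}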
2013 Apr 10
3
[LLVMdev] If Conversion and predicated returns
Evan, et al., I've come across a small issue when using the if conversion pass in PPC to generate conditional returns. Here's a small example: ** Before if conversion ** BB#0: derived from LLVM BB %entry %R3<def> = LI 0 %CR0<def> = CMPLWI %R3, 0 BCC 68, %CR0, <BB#3> Successors according to CFG: BB#3(16) BB#1(16) BB#1: derived from LLVM BB %while.body.lr.ph Live Ins: %R3 Predecessors according to CFG: BB#0 %CR0<def> = CMPLWI %R3<kill>, 0 BCC 68, %CR0, <BB#3...
2013 Apr 12
2
[LLVMdev] TableGen list merging
Hi, In the PPC backend, there is a "helper" class used to define instructions that implicitly define a condition register: class isDOT { list<Register> Defs = [CR0]; bit RC = 1; } and this gets used on instructions such as: def ADDICo : DForm_2<13, (outs GPRC:$rD), (ins GPRC:$rA, s16imm:$imm), "addic. $rD, $rA, $imm", IntGeneral, []>, isDOT; but there is a small problem. If these instructions are...
2004 Oct 06
3
flac-1.1.1 completely broken on linux/ppc and on macosx if built with the standard toolchain (not xcode)
Sadly the latest optimization completely broke everything. The asm code isn't gas compliant, the libFLAC linker script has a typo, and disabling the asm optimization and/or altivec won't allow a correct build anyway. Instant fixes for the asm stuff: sed -i -e "s:;:\#:" on lpc_asm.s; to load an address, instead of addis+ori you could use lis and la; and PLEASE use the @l(register)
2020 Feb 07
0
[RFC PATCH v7 11/78] KVM: x86: add .control_cr3_intercept() to struct kvm_x86_ops
...inline gfn_t gfn_to_index(gfn_t gfn, gfn_t base_gfn, int level) #define KVM_NR_FIXED_MTRR_REGION 88 #define KVM_NR_VAR_MTRR 8 +#define CR_TYPE_R 1 +#define CR_TYPE_W 2 +#define CR_TYPE_RW 3 + #define ASYNC_PF_PER_VCPU 64 enum kvm_reg { @@ -1064,6 +1068,8 @@ struct kvm_x86_ops { void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0); void (*set_cr3)(struct kvm_vcpu *vcpu, unsigned long cr3); int (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4); + void (*control_cr3_intercept)(struct kvm_vcpu *vcpu, int type, + bool enable); void (*set_efer)(struct kvm_vcpu *vcpu, u6...
2020 Jul 21
0
[PATCH v9 10/84] KVM: x86: add .control_cr3_intercept() to struct kvm_x86_ops
...fine KVM_NR_FIXED_MTRR_REGION 88 #define KVM_NR_VAR_MTRR 8 +#define CR_TYPE_R 1 +#define CR_TYPE_W 2 +#define CR_TYPE_RW 3 + #define ASYNC_PF_PER_VCPU 64 enum kvm_reg { @@ -1111,6 +1115,8 @@ struct kvm_x86_ops { void (*get_cs_db_l_bits)(struct kvm_vcpu *vcpu, int *db, int *l); void (*set_cr0)(struct kvm_vcpu *vcpu, unsigned long cr0); int (*set_cr4)(struct kvm_vcpu *vcpu, unsigned long cr4); + void (*control_cr3_intercept)(struct kvm_vcpu *vcpu, int type, + bool enable); void (*set_efer)(struct kvm_vcpu *vcpu, u64 efer); void (*get_idt)(struct kvm_vcpu *vcpu, struct desc...
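A hedged sketch of how a caller might use the new hook once it exists; the call site below is illustrative and not taken from the patch series:

#define CR_TYPE_R       1
#define CR_TYPE_W       2
#define CR_TYPE_RW      3

static void enable_cr3_write_interception(struct kvm_vcpu *vcpu)
{
        /* Ask the vendor module (VMX or SVM) to start intercepting guest
         * writes to CR3, e.g. so an introspection agent is notified of
         * every address-space switch. */
        kvm_x86_ops.control_cr3_intercept(vcpu, CR_TYPE_W, true);
}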
2008 Jan 11
4
GP exception on vmxon
...ception happens. Could anybody help me on this? The following is the context: 1. After booting up to the program, I disable A20M. 2. Allocate a 4KB-aligned vmxon region and calculate its physical address. 3. Set up an identity page table and enter protected page mode. In this step I also set X86_CR0_NE (CR0 bit 5). 4. Call start_vmx. This start_vmx function is similar to the one in Xen 3.1.0: a. Test cpuid with eax = 1; ecx.vmxe (bit 5) is 1. b. Test IA32_FEATURE_CONTROL_MSR; the result is 0x05, so bit 0 and bit 2 are both 1. c. Set cr4.vmxe (bit 13) to 1. d. Call vmx_init_vmcs_config(). This functi...
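For comparison, a sketch of the pre-vmxon checks being described; this is freestanding-style code, and cpuid(), rdmsr64() and the read/write_cr0/cr4 wrappers are assumed to exist in the poster's environment:

#include <stdint.h>

#define CPUID_1_ECX_VMX          (1u << 5)
#define MSR_IA32_FEATURE_CONTROL 0x3a
#define FEAT_CTL_LOCKED          (1ull << 0)
#define FEAT_CTL_VMX_OUTSIDE_SMX (1ull << 2)
#define CR0_NE                   (1ul << 5)
#define CR4_VMXE                 (1ul << 13)

int vmx_can_enter(void)
{
        uint32_t eax, ebx, ecx, edx;
        uint64_t feat;

        cpuid(1, &eax, &ebx, &ecx, &edx);
        if (!(ecx & CPUID_1_ECX_VMX))
                return 0;                       /* no VMX on this CPU */

        feat = rdmsr64(MSR_IA32_FEATURE_CONTROL);
        if ((feat & (FEAT_CTL_LOCKED | FEAT_CTL_VMX_OUTSIDE_SMX)) !=
            (FEAT_CTL_LOCKED | FEAT_CTL_VMX_OUTSIDE_SMX))
                return 0;                       /* BIOS locked VMX off: vmxon would #GP */

        write_cr0(read_cr0() | CR0_NE);         /* CR0.NE must be 1 in VMX operation */
        write_cr4(read_cr4() | CR4_VMXE);       /* CR4.VMXE must be set before vmxon */
        return 1;
}

Beyond these, vmxon also raises #GP(0) when CR0/CR4 do not satisfy the IA32_VMX_CR0_FIXED*/IA32_VMX_CR4_FIXED* constraints, so checking those MSRs is a reasonable next step for the fault described above.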
2006 Mar 17
0
[LLVMdev] Stupid '-load-vn -licm' question (LLVM 1.6)
On Mar 17, 2006, at 7:54 AM, Eric Kidd wrote: > Unfortunately, this generates really weird code on the LLVM 1.6 > PowerPC backend: > > LBB_matches_1: ; regex6 > lbz r4, 0(r3) > LBB_matches_2: ; NodeBlock > rlwinm r5, r4, 0, 24, 31 > cmplwi cr0, r5, 98 > blt cr0, LBB_matches_4 ; LeafBlock > LBB_matches_3: ; LeafBlock1 > rlwinm r4, r4, 0, 24, 31 > cmpwi cr0, r4, 98 > beq cr0, LBB_matches_8 ; ret_true > b LBB_matches_5 ; NewDefault > LBB_matches_4: ; LeafBlock > rlw...
2006 Jul 09
2
[LLVMdev] Critical edges
...LBB1_4, and now it is falling on LBB1_9. LBB1_3: ;no_exit lis r4, 21845 ori r4, r4, 21846 mulhw r4, r2, r4 addi r5, r2, -1 li r6, -1 srwi r6, r4, 31 add r4, r4, r6 mulli r4, r4, 3 li r6, 1 subf r2, r4, r2 cmpwi cr0, r2, 0 beq cr0, LBB1_9 ;no_exit LBB1_7: ;no_exit mr r2, r6 LBB1_8: ;no_exit cmpwi cr0, r5, 0 add r2, r2, r3 bgt cr0, LBB1_5 ;no_exit.no_exit_llvm_crit_edge LBB1_9: ;no_exit mr r2, r6 b LBB1_8 ;no_exit LBB1_4: ;no_exit.loopexit_llvm_crit...
2020 Aug 24
0
[PATCH v6 01/76] KVM: SVM: nested: Don't allocate VMCB structures on stack
...*vcpu, struct vmcb *hsave = svm->nested.hsave; struct vmcb __user *user_vmcb = (struct vmcb __user *) &user_kvm_nested_state->data.svm[0]; - struct vmcb_control_area ctl; - struct vmcb_save_area save; + struct vmcb_control_area *ctl; + struct vmcb_save_area *save; + int ret; u32 cr0; + BUILD_BUG_ON(sizeof(struct vmcb_control_area) + sizeof(struct vmcb_save_area) > + KVM_STATE_NESTED_SVM_VMCB_SIZE); + if (kvm_state->format != KVM_STATE_NESTED_FORMAT_SVM) return -EINVAL; @@ -1095,13 +1099,22 @@ static int svm_set_nested_state(struct kvm_vcpu *vcpu, return...
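A simplified sketch of the pattern this patch applies: the VMCB control and save areas are too large to live on the kernel stack, so they are copied from user space into kzalloc()ed buffers instead of on-stack structs. The function name here is illustrative, and the validation the real svm_set_nested_state() performs is elided:

static int copy_nested_state(struct vmcb_control_area __user *user_ctl,
                             struct vmcb_save_area __user *user_save)
{
        struct vmcb_control_area *ctl;
        struct vmcb_save_area *save;
        int ret = -ENOMEM;

        ctl = kzalloc(sizeof(*ctl), GFP_KERNEL);
        save = kzalloc(sizeof(*save), GFP_KERNEL);
        if (!ctl || !save)
                goto out_free;

        ret = -EFAULT;
        if (copy_from_user(ctl, user_ctl, sizeof(*ctl)) ||
            copy_from_user(save, user_save, sizeof(*save)))
                goto out_free;

        /* ... consistency checks and the actual state load happen here ... */
        ret = 0;

out_free:
        kfree(save);
        kfree(ctl);
        return ret;
}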
2007 Apr 18
0
[RFC/PATCH PV_OPS X86_64 03/17] paravirt_ops - system routines
...clean-start/include/asm-x86_64/system.h @@ -65,46 +65,84 @@ extern void load_gs_index(unsigned); ".previous" \ : :"r" (value), "r" (0)) +static inline void native_clts(void) +{ + asm volatile ("clts"); +} + +static inline unsigned long native_read_cr0(void) +{ + unsigned long val; + asm volatile("movq %%cr0,%0\n\t" :"=r" (val)); + return val; +} + +static inline void native_write_cr0(unsigned long val) +{ + asm volatile("movq %0,%%cr0": :"r" (val)); +} + +static inline unsigned long native_read_cr2(void) +...
2007 Apr 18
0
[RFC/PATCH PV_OPS X86_64 03/17] paravirt_ops - system routines
...clean-start/include/asm-x86_64/system.h @@ -65,46 +65,84 @@ extern void load_gs_index(unsigned); ".previous" \ : :"r" (value), "r" (0)) +static inline void native_clts(void) +{ + asm volatile ("clts"); +} + +static inline unsigned long native_read_cr0(void) +{ + unsigned long val; + asm volatile("movq %%cr0,%0\n\t" :"=r" (val)); + return val; +} + +static inline void native_write_cr0(unsigned long val) +{ + asm volatile("movq %0,%%cr0": :"r" (val)); +} + +static inline unsigned long native_read_cr2(void) +...
2013 Apr 12
0
[LLVMdev] TableGen list merging
On Apr 12, 2013, at 2:06 AM, Hal Finkel <hfinkel at anl.gov> wrote: > In the PPC backend, there is a "helper" class used to define instructions that implicitly define a condition register: > > class isDOT { > list<Register> Defs = [CR0]; > bit RC = 1; > } > > and this gets used on instructions such as: > > def ADDICo : DForm_2<13, (outs GPRC:$rD), (ins GPRC:$rA, s16imm:$imm), > "addic. $rD, $rA, $imm", IntGeneral, > []>, isDOT; > > but ther...
2004 Sep 10
1
altivec lpc_restore_signal
...r9,r1,-28 li r31,0xf andc r9,r9,r31 ; for quadword-aligned stack data slwi r6,r6,2 ; adjust for word size slwi r4,r4,2 add r4,r4,r8 ; r4 = data+data_len mfspr r0,256 ; cache old vrsave addis r31,0,hi16(0xfffffc00) ori r31,r31,lo16(0xfffffc00) mtspr 256,r31 ; declare VRs in vrsave cmplw cr0,r8,r4 ; i<data_len bc 4,0,L1400 ; load coefficients into v0-v7 and initial history into v8-v15 li r31,0xf and r31,r8,r31 ; r31: data%4 li r11,16 subf r31,r31,r11 ; r31: 4-(data%4) slwi r31,r31,3 ; convert to bits for vsro li r10,-4 stw r31,-4(r9) lvewx v0,r10,r9 vspltisb v18,-1 vsro...
2006 Mar 17
0
[LLVMdev] Stupid '-load-vn -licm' question (LLVM 1.6)
On Thu, 16 Mar 2006, Eric Kidd wrote: > Hello! I'm compiling code which uses pointers as iterators. For some > reason--probably a silly misunderstanding of the docs--I can't eliminate > duplicate pointer loads. I'll probably figure this out eventually, but if > somebody else sees the answer instantly, I certainly won't complain. :-) There are no stupid questions.