search for: set_xcr0

Displaying 4 results from an estimated 4 matches for "set_xcr0".

2010 Aug 31
2
[PATCH 2/3 v2] XSAVE/XRSTOR: fix frozen states
If a guest sets a state component and dirties it, but later temporarily clears it, and the vcpu is scheduled out at that point, other vcpus may corrupt the state before the vcpu is scheduled in again, so the state cannot be restored correctly. To solve this issue, this patch saves/restores all states unconditionally on vcpu context switch. Signed-off-by: Weidong Han
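The race described above can be reduced to a toy model (names and structure are hypothetical, not Xen's actual code): one shared physical extended-register file, a per-vcpu save area, and a context switch that saves either only the components currently enabled in the guest's XCR0 (the buggy lazy behaviour) or all components unconditionally (the behaviour the patch introduces).

```c
#include <stdint.h>

#define NCOMP 2                 /* two state components for the model */

static uint64_t phys[NCOMP];    /* shared "physical" register state */

struct toy_vcpu {
    int enabled[NCOMP];         /* 1 if component enabled in guest XCR0 */
    uint64_t saved[NCOMP];      /* per-vcpu save area */
};

/* Schedule out: copy physical state into the save area. */
static void ctxt_save(struct toy_vcpu *v, int unconditional)
{
    for ( int i = 0; i < NCOMP; i++ )
        if ( unconditional || v->enabled[i] )
            v->saved[i] = phys[i];
}

/* Schedule in: restore the save area into physical state. */
static void ctxt_restore(struct toy_vcpu *v, int unconditional)
{
    for ( int i = 0; i < NCOMP; i++ )
        if ( unconditional || v->enabled[i] )
            phys[i] = v->saved[i];
}
```

With lazy (conditional) save, a component that was dirtied and then disabled is neither saved on the way out nor restored on the way in, so another vcpu's writes to the physical registers are silently kept; unconditional save/restore preserves the dirtied value across the switch.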
2013 Jun 04
12
[PATCH 0/4] XSA-52..54 follow-up
The first patch really isn't so much a follow-up as what triggered the security issues to be noticed in the first place.
1: x86: preserve FPU selectors for 32-bit guest code
2: x86: fix XCR0 handling
3: x86/xsave: adjust state management
4: x86/fxsave: bring in line with recent xsave adjustments
The first two I would see as candidates for 4.3 (as well as subsequent backporting,
2013 Nov 19
6
[PATCH 2/5] X86 architecture instruction set extension definition
...return -EOPNOTSUPP;
-    if ( (new_bv & ~xfeature_mask) || !(new_bv & XSTATE_FP) )
-        return -EINVAL;
-
-    if ( (new_bv & XSTATE_YMM) && !(new_bv & XSTATE_SSE) )
+    if ( (new_bv & ~xfeature_mask) || !valid_xcr0(new_bv) )
         return -EINVAL;
     if ( !set_xcr0(new_bv) )
diff --git a/xen/include/asm-x86/xstate.h b/xen/include/asm-x86/xstate.h
index 5617963..de5711e 100644
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -20,18 +20,23 @@
 #define XCR_XFEATURE_ENABLED_MASK 0x00000000 /* index of XCR0 */
 #define XSTATE_YMM_SIZE...
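The hunk replaces two inline checks with a valid_xcr0() helper. A minimal sketch of what such a helper enforces, based only on the two conditions visible in the removed lines (FP must always be set; YMM requires SSE). The real Xen helper covers further component dependencies, so this is illustrative, not Xen's actual implementation:

```c
#include <stdbool.h>
#include <stdint.h>

/* Bit positions follow the architectural XCR0 layout. */
#define XSTATE_FP   (1ULL << 0)  /* x87 state: architecturally always 1 */
#define XSTATE_SSE  (1ULL << 1)  /* SSE (XMM) state */
#define XSTATE_YMM  (1ULL << 2)  /* AVX (YMM) state: layered on SSE */

/* Illustrative stand-in for valid_xcr0(): encodes only the two
 * dependencies that appear in the removed lines of the diff. */
static bool valid_xcr0(uint64_t xcr0)
{
    if ( !(xcr0 & XSTATE_FP) )                       /* XCR0[0] must be 1 */
        return false;
    if ( (xcr0 & XSTATE_YMM) && !(xcr0 & XSTATE_SSE) )
        return false;                                /* YMM needs XMM */
    return true;
}
```

Centralising the checks this way means new component dependencies can be added in one place rather than at every XSETBV validation site.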
2013 Nov 25
0
[PATCH 2/4 V2] X86: enable support for new ISA extensions
...return -EOPNOTSUPP;
-    if ( (new_bv & ~xfeature_mask) || !(new_bv & XSTATE_FP) )
-        return -EINVAL;
-
-    if ( (new_bv & XSTATE_YMM) && !(new_bv & XSTATE_SSE) )
+    if ( (new_bv & ~xfeature_mask) || !valid_xcr0(new_bv) )
         return -EINVAL;
     if ( !set_xcr0(new_bv) )
@@ -364,6 +379,10 @@ int handle_xsetbv(u32 index, u64 new_bv)
     curr->arch.xcr0 = new_bv;
     curr->arch.xcr0_accum |= new_bv;
+    /* LWP sets nonlazy_xstate_used independently. */
+    if ( new_bv & (XSTATE_NONLAZY & ~XSTATE_LWP) )
+        curr->arch.nonlazy_xsta...
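This hunk also shows the bookkeeping handle_xsetbv() performs after a successful set_xcr0(): xcr0 is overwritten with the latest value, while xcr0_accum only ever grows, recording every state component the guest has ever enabled (and which therefore may hold live state needing save/restore). A toy model of that invariant, with field names mirroring the diff but otherwise hypothetical:

```c
#include <stdint.h>

/* Minimal stand-in for the per-vcpu fields touched in the hunk. */
struct vcpu_arch {
    uint64_t xcr0;        /* current guest XCR0 value */
    uint64_t xcr0_accum;  /* union of all XCR0 values ever set */
};

/* Models the two assignments in the diff: xcr0 tracks the latest
 * value, xcr0_accum accumulates monotonically. */
static void record_xsetbv(struct vcpu_arch *arch, uint64_t new_bv)
{
    arch->xcr0 = new_bv;
    arch->xcr0_accum |= new_bv;
}
```

The accumulated mask is exactly what makes the unconditional save/restore from the 2010 frozen-states fix safe: even a component the guest has since disabled is still known to carry state.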