search for: xchgq

Displaying 20 results from an estimated 35 matches for "xchgq".

2015 Dec 04
0
[PATCH 1/6] x86: Add VMWare Host Communication Macros
...: [OUT] e.g. channel id
+ * @si:  [INOUT] set to 0 if not used
+ * @di:  [INOUT] set to 0 if not used
+ * @bp:  [INOUT] set to 0 if not used
+ */
+#define VMW_PORT_HB_OUT(cmd, in1, port_num, magic,	\
+			eax, ebx, ecx, edx, si, di, bp)	\
+({						\
+	asm volatile ("push %%rbp;"		\
+		"xchgq %6, %%rbp;"		\
+		"rep outsb;"			\
+		"xchgq %%rbp, %6;"		\
+		"pop %%rbp;" :			\
+		"=a"(eax),			\
+		"=b"(ebx),			\
+		"=c"(ecx),			\
+		"=d"(edx),			\
+		"+S"(si),			\
+		"+D"(di),			\
+		"...
2016 Jan 19
0
[PATCH 1/6] x86: Add VMWare Host Communication Macros
...us command
+ * @edx: [OUT] e.g. channel id
+ * @si:  [OUT]
+ * @di:  [OUT]
+ * @bp:  [INOUT] set to 0 if not used
+ */
+#define VMW_PORT_HB_OUT(cmd, in_ecx, in_si, in_di,	\
+			port_num, magic,		\
+			eax, ebx, ecx, edx, si, di, bp)	\
+({						\
+	asm volatile ("push %%rbp;"		\
+		"xchgq %6, %%rbp;"		\
+		"rep outsb;"			\
+		"xchgq %%rbp, %6;"		\
+		"pop %%rbp;" :			\
+		"=a"(eax),			\
+		"=b"(ebx),			\
+		"=c"(ecx),			\
+		"=d"(edx),			\
+		"=S"(si),			\
+		"=D"(di),			\
+		"...
2015 Dec 01
0
[PATCH 1/6] x86: Add VMWare Host Communication Macros
...lso don't save/restore %rbp here, but you do below? Seems very odd. It might be better to do something like:

+#define VMW_PORT_HB_OUT(in1, in2, port_num, magic,	\
+			eax, ebx, ecx, edx, si, di, bp)	\
+({						\
+	__asm__ __volatile__ ("xchgq %6, %%rbp;"	\
+		"cld; rep outsb; "		\
+		"xchgq %%rbp, %6" :		\
+		"=a"(eax),			\
+		"=b"(ebx),			\
+		"=c"(ecx),...
2017 Oct 11
1
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
...p), %rdx
+	pushq %rdx			/* Put stack back */
 	addq $(6*8), %rsp
@@ -1479,7 +1485,9 @@ first_nmi:
 	addq $8, (%rsp)			/* Fix up RSP */
 	pushfq				/* RFLAGS */
 	pushq $__KERNEL_CS		/* CS */
-	pushq $1f			/* RIP */
+	pushq %rax			/* Support Position Independent Code */
+	leaq 1f(%rip), %rax		/* RIP */
+	xchgq %rax, (%rsp)		/* Restore RAX, put 1f */
 	INTERRUPT_RETURN		/* continues at repeat_nmi below */
 	UNWIND_HINT_IRET_REGS
1:
--
2.15.0.rc0.271.g36b669edcc-goog
2015 Dec 01
11
[PATCH 1/6] x86: Add VMWare Host Communication Macros
These macros will be used by multiple VMWare modules for handling host communication.

v2:
 * Keeping only the minimal common platform defines
 * Added vmware_platform() check function
v3:
 * Added new field to handle different hypervisor magic values

Signed-off-by: Sinclair Yeh <syeh at vmware.com>
Reviewed-by: Thomas Hellstrom <thellstrom at vmware.com>
Reviewed-by: Alok N Kataria
2019 Jul 08
3
[PATCH v8 00/11] x86: PIE support to extend KASLR randomization
...nel can be located.
- Streamlined the testing done on each patch proposal. Always testing hibernation, suspend, ftrace and kprobe to ensure no regressions.
- patch v3:
  - Update on message to describe longer term PIE goal.
  - Minor change on ftrace if condition.
  - Changed code using xchgq.
- patch v2:
  - Adapt patch to work post KPTI and compiler changes
  - Redo all performance testing with latest configs and compilers
  - Simplify mov macro on PIE (MOVABS now)
  - Reduce GOT footprint
- patch v1:
  - Simplify ftrace implementation.
  - Use gcc mstack-protector-guard-reg=%...
2017 Oct 20
0
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
.../
> 	addq $(6*8), %rsp
> @@ -1479,7 +1485,9 @@ first_nmi:
> 	addq $8, (%rsp)			/* Fix up RSP */
> 	pushfq				/* RFLAGS */
> 	pushq $__KERNEL_CS		/* CS */
> -	pushq $1f			/* RIP */
> +	pushq %rax			/* Support Position Independent Code */
> +	leaq 1f(%rip), %rax		/* RIP */
> +	xchgq %rax, (%rsp)		/* Restore RAX, put 1f */
> 	INTERRUPT_RETURN		/* continues at repeat_nmi below */
> 	UNWIND_HINT_IRET_REGS

This patch seems to add extra overhead to the syscall fast-path even when PIE is disabled, right?

Thanks,

	Ingo
2018 Mar 13
0
[PATCH v2 06/27] x86/entry/64: Adapt assembly for PIE support
...p), %rdx
+	pushq %rdx			/* Put stack back */
 	addq $(6*8), %rsp
@@ -1576,7 +1578,9 @@ first_nmi:
 	addq $8, (%rsp)			/* Fix up RSP */
 	pushfq				/* RFLAGS */
 	pushq $__KERNEL_CS		/* CS */
-	pushq $1f			/* RIP */
+	pushq %rax			/* Support Position Independent Code */
+	leaq 1f(%rip), %rax		/* RIP */
+	xchgq %rax, (%rsp)		/* Restore RAX, put 1f */
 	iretq				/* continues at repeat_nmi below */
 	UNWIND_HINT_IRET_REGS
1:
diff --git a/arch/x86/kernel/relocate_kernel_64.S b/arch/x86/kernel/relocate_kernel_64.S
index a7227dfe1a2b..0c0fc259a4e2 100644
--- a/arch/x86/kernel/relocate_kernel_64.S
+++ b/arch/x86...
2018 Mar 14
0
[PATCH v2 06/27] x86/entry/64: Adapt assembly for PIE support
...	pushfq				/* RFLAGS */
> > > 	pushq $__KERNEL_CS		/* CS */
> > > -	pushq $1f			/* RIP */
> > > +	pushq %rax			/* Support Position Independent Code */
> > > +	leaq 1f(%rip), %rax		/* RIP */
> > > +	xchgq %rax, (%rsp)		/* Restore RAX, put 1f */
> > > 	iretq				/* continues at repeat_nmi below */
> > > 	UNWIND_HINT_IRET_REGS
> > > 1:
> >
> > Urgh, xchg with a memop has an implicit LOCK prefix.
>
> this_cpu_xchg uses no lock cmpxchg as...
2018 Mar 15
0
[PATCH v2 06/27] x86/entry/64: Adapt assembly for PIE support
On 14/03/2018 16:54, Christopher Lameter wrote:
>>> +	pushq %rax			/* Support Position Independent Code */
>>> +	leaq 1f(%rip), %rax		/* RIP */
>>> +	xchgq %rax, (%rsp)		/* Restore RAX, put 1f */
>>> 	iretq				/* continues at repeat_nmi below */
>>> 	UNWIND_HINT_IRET_REGS
>>> 1:
>>
>> Urgh, xchg with a memop has an implicit LOCK prefix.
>
> this_cpu_xchg uses no lock cmpxchg as a replacement to reduce latency.

That req...
2019 Jul 30
0
[PATCH v8 00/11] x86: PIE support to extend KASLR randomization
...Streamlined the testing done on each patch proposal. Always testing
> hibernation, suspend, ftrace and kprobe to ensure no regressions.
> - patch v3:
>   - Update on message to describe longer term PIE goal.
>   - Minor change on ftrace if condition.
>   - Changed code using xchgq.
> - patch v2:
>   - Adapt patch to work post KPTI and compiler changes
>   - Redo all performance testing with latest configs and compilers
>   - Simplify mov macro on PIE (MOVABS now)
>   - Reduce GOT footprint
> - patch v1:
>   - Simplify ftrace implementation.
>...
2017 Oct 20
3
[PATCH v1 06/27] x86/entry/64: Adapt assembly for PIE support
...RSP */
>> 	pushfq				/* RFLAGS */
>> 	pushq $__KERNEL_CS		/* CS */
>> -	pushq $1f			/* RIP */
>> +	pushq %rax			/* Support Position Independent Code */
>> +	leaq 1f(%rip), %rax		/* RIP */
>> +	xchgq %rax, (%rsp)		/* Restore RAX, put 1f */
>> 	INTERRUPT_RETURN		/* continues at repeat_nmi below */
>> 	UNWIND_HINT_IRET_REGS
>
> This patch seems to add extra overhead to the syscall fast-path even when PIE is
> disabled, right?

It does add extra instruction...
2013 Nov 23
0
[LLVMdev] [PATCH] Detect Haswell subarchitecture (i.e. using -march=native)
...X, unsigned *rECX, unsigned *rEDX) {
+#if defined(__x86_64__) || defined(_M_AMD64) || defined (_M_X64)
+  #if defined(__GNUC__)
+    // gcc doesn't know cpuid would clobber ebx/rbx. Preserve it manually.
+    asm ("movq\t%%rbx, %%rsi\n\t"
+         "cpuid\n\t"
+         "xchgq\t%%rbx, %%rsi\n\t"
+         : "=a" (*rEAX),
+           "=S" (*rEBX),
+           "=c" (*rECX),
+           "=d" (*rEDX)
+         : "a" (value),
+           "c" (subleaf));
+    return false;
+  #elif defined(_MSC_VER)
+    // __c...
2019 May 20
3
[PATCH v7 00/12] x86: PIE support to extend KASLR randomization
...nel can be located. - Streamlined the testing done on each patch proposal. Always testing hibernation, suspend, ftrace and kprobe to ensure no regressions. - patch v3: - Update on message to describe longer term PIE goal. - Minor change on ftrace if condition. - Changed code using xchgq. - patch v2: - Adapt patch to work post KPTI and compiler changes - Redo all performance testing with latest configs and compilers - Simplify mov macro on PIE (MOVABS now) - Reduce GOT footprint - patch v1: - Simplify ftrace implementation. - Use gcc mstack-protector-guard-reg=%...
2013 Nov 23
2
[LLVMdev] [PATCH] Detect Haswell subarchitecture (i.e. using -march=native)
I agree with Tim, you need to implement a GetCpuIDAndInfoEx function in Host.cpp and pass the correct value to ecx. Also, you need to verify that 7 is a valid leaf, because an invalid leaf is defined to return the data of the highest supported leaf on that processor. So if a processor supports, say, leaf 6 but not leaf 7, then accessing leaf 7 will return the data from leaf 6, causing unrelated bits to be...