search for: movsq

Displaying 12 results from an estimated 12 matches for "movsq".

2016 Oct 31
1
COMPILER-RT build break
...fter patch:

    [100%] Generating ASAN_NOINST_TEST_OBJECTS.asan_noinst_test.cc.x86_64-with-calls.o
    /home/mzuckerm/llvm23/llvm/projects/compiler-rt/lib/asan/tests/asan_asm_test.cc:70:1: error: asm-specifier for input or output variable conflicts with asm clobber list
    DECLARE_ASM_REP_MOVS(U8, "movsq");
    ^
    /home/mzuckerm/llvm23/llvm/projects/compiler-rt/lib/asan/tests/asan_asm_test.cc:65:15: note: expanded from macro 'DECLARE_ASM_REP_MOVS'
        : "rsi", "rdi", "rcx", "memory"); \
                  ^
    /h...
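For context, the diagnostic fires because the macro constrains its inline-asm operands to specific registers and then names those same registers in the clobber list. A minimal sketch of the pattern in C (my own reduction, not the actual asan_asm_test.cc macro), together with the usual fix:

    #include <stddef.h>

    /* The conflicting pattern: "D"/"S"/"c" pin the operands to
     * %rdi/%rsi/%rcx, and naming those same registers as clobbers is
     * what newer clang rejects (GCC historically accepted it). */
    static void rep_movs_conflicting(void *dst, const void *src, size_t qwords) {
      __asm__ __volatile__("rep movsq"
                           : /* no outputs */
                           : "D"(dst), "S"(src), "c"(qwords)
                           : "rsi", "rdi", "rcx", "memory"); /* <- the conflict */
    }

    /* The usual fix: turn the pinned inputs into read-write operands
     * and drop those registers from the clobber list. */
    static void rep_movs_fixed(void *dst, const void *src, size_t qwords) {
      __asm__ __volatile__("rep movsq"
                           : "+D"(dst), "+S"(src), "+c"(qwords)
                           : /* no other inputs */
                           : "memory");
    }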
2017 Oct 04
0
[PATCH 09/13] x86/asm: Convert ALTERNATIVE*() assembler macros to preprocessor macros
...1dee51760 100644
--- a/arch/x86/lib/copy_page_64.S
+++ b/arch/x86/lib/copy_page_64.S
@@ -13,7 +13,7 @@
  */
 	ALIGN
 ENTRY(copy_page)
-	ALTERNATIVE "jmp copy_page_regs", "", X86_FEATURE_REP_GOOD
+	ALTERNATIVE(jmp copy_page_regs, , X86_FEATURE_REP_GOOD)
 	movl	$4096/8, %ecx
 	rep	movsq
 	ret
diff --git a/arch/x86/lib/memcpy_64.S b/arch/x86/lib/memcpy_64.S
index 9a53a06e5a3e..7ada0513864b 100644
--- a/arch/x86/lib/memcpy_64.S
+++ b/arch/x86/lib/memcpy_64.S
@@ -28,8 +28,8 @@
  */
 ENTRY(__memcpy)
 ENTRY(memcpy)
-	ALTERNATIVE_2 "jmp memcpy_orig", "", X86_FEATURE_...
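The rep movsq fast path this hunk preserves is simple enough to sketch in C with inline asm. A hedged, userspace approximation of copy_page's REP_GOOD variant (the function name and standalone framing are mine; the real code is the assembly in arch/x86/lib/copy_page_64.S):

    #include <stdint.h>

    #define PAGE_SIZE 4096

    /* Copy one 4096-byte page as 512 quadwords with rep movsq, i.e. the
     * path taken when X86_FEATURE_REP_GOOD is set and the ALTERNATIVE
     * jump is NOPed out. */
    static void copy_page_rep(void *to, const void *from) {
      uint64_t count = PAGE_SIZE / 8;            /* $4096/8 = 512 moves */
      __asm__ __volatile__("rep movsq"
                           : "+D"(to), "+S"(from), "+c"(count)
                           :
                           : "memory");
    }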
2014 Dec 02
7
[LLVMdev] Memset/memcpy: user control of loop-idiom recognizer
Hi, In feedback from game studios a common issue is the replacement of loops with calls to memcpy/memset. These loops are often hand-optimised and highly efficient, and the developers strongly want a way to control the compiler (i.e., "leave my loop alone"). The culprit is, of course, the loop-idiom recognizer, which replaces any loop that looks like a memset/memcpy with calls. This affects loops
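For readers unfamiliar with the pass: the recognizer fires on loops like the one below (my example, not from the thread). Compiling with clang -O2 typically turns the loop into a memset call; the blunt workarounds at the time were -fno-builtin or the finer-grained -fno-builtin-memset, which forbid the compiler from materializing calls to those library functions:

    #include <stddef.h>

    /* At -O2 the loop-idiom pass recognizes this as memset(buf, 0, n)
     * and emits a library call in place of the loop. */
    void clear_buffer(char *buf, size_t n) {
      for (size_t i = 0; i < n; ++i)
        buf[i] = 0;
    }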
2012 Oct 02
18
[PATCH 0/3] x86: adjust entry frame generation
This set of patches converts the way frames get created from using PUSHes/POPs to using MOVs, thus allowing (in certain cases) the saving/restoring of part of the register set to be avoided. While the place the (small) win comes from varies between CPUs, the net effect is a 1 to 2% reduction on a combined interrupt entry and exit when the full state save can be avoided. 1: use MOV
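To illustrate the difference the series is about: PUSH both writes a register to the stack and moves %rsp, while the MOV approach reserves the frame once and stores each register independently, so individual saves can be skipped. A toy, purely illustrative sketch (the sequences are mine, not the actual entry code; inline asm that moves %rsp like this must also stay clear of the red zone, so treat it as an illustration only):

    /* PUSH-based: every push both stores a register and adjusts %rsp. */
    static void save_with_push(void) {
      __asm__ volatile("push %%rax\n\t"
                       "push %%rbx\n\t"
                       "pop  %%rbx\n\t"
                       "pop  %%rax" ::: "memory");
    }

    /* MOV-based: adjust %rsp once, then store registers independently;
     * a store can simply be omitted when that register need not be saved. */
    static void save_with_mov(void) {
      __asm__ volatile("sub  $16, %%rsp\n\t"
                       "movq %%rax, 8(%%rsp)\n\t"
                       "movq %%rbx, 0(%%rsp)\n\t"   /* skippable */
                       "movq 8(%%rsp), %%rax\n\t"
                       "movq 0(%%rsp), %%rbx\n\t"
                       "add  $16, %%rsp" ::: "memory");
    }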
2017 Oct 04
31
[PATCH 00/13] x86/paravirt: Make pv ops code generation more closely match reality
This changes the pv ops code generation to more closely match reality. For example, instead of:

    callq *0xffffffff81e3a400 (pv_irq_ops.save_fl)

vmlinux will now show:

    pushfq
    pop %rax
    nop
    nop
    nop
    nop
    nop

which is what the runtime version of the code will show in most cases. This idea was suggested by Andy Lutomirski. The benefits are:
- For the most common runtime cases
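The mechanism being described is boot-time patching of pv-ops call sites: the indirect call is overwritten with the short native sequence plus NOP padding. A very rough sketch of the idea (all names here are mine; the real machinery lives in the kernel's alternative/paravirt patching code, and real text pages must be made writable before patching):

    #include <stdint.h>
    #include <string.h>

    /* Hypothetical descriptor for one patchable call site. */
    struct patch_site {
      uint8_t *addr;   /* start of the original call instruction */
      uint8_t  len;    /* bytes available at the site */
    };

    /* Replace the call with the native save_fl sequence, pad with NOPs. */
    static void patch_save_fl(struct patch_site *site) {
      static const uint8_t native[] = { 0x9c, 0x58 };  /* pushfq; pop %rax */
      memcpy(site->addr, native, sizeof(native));
      /* pad with 1-byte NOPs; the real code picks optimal multi-byte NOPs */
      memset(site->addr + sizeof(native), 0x90, site->len - sizeof(native));
    }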
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 03/13] lguest64 core
...ong as we don't overwrite
+ * the saved material.
+ */
+	testq	$1, LGUEST_VCPU_nmi_sw(%rdi)
+	jnz	1f
+
+	/* Copy the saved regs */
+	cld
+	movq	%rdi, %rbx	/* save off vcpu struct */
+	leaq	LGUEST_VCPU_nmi_regs(%rdi), %rdi
+	leaq	0(%rsp), %rsi
+	movq	$(LGUEST_REGS_size/8), %rcx
+	rep movsq
+
+	movq	%rbx, %rdi	/* put back vcpu struct */
+
+	/* save the gs base and shadow */
+	movl	$MSR_GS_BASE, %ecx
+	rdmsr
+	movq	%rax, LGUEST_VCPU_nmi_gs_a(%rdi)
+	movq	%rdx, LGUEST_VCPU_nmi_gs_d(%rdi)
+
+	movl	$MSR_KERNEL_GS_BASE, %ecx
+	rdmsr
+	movq	%rax, LGUEST_VCPU_nmi_gs_shadow_a(%rdi)
+	movq	%r...
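One detail worth noting in this hunk: rdmsr returns the 64-bit MSR value split across %edx:%eax, which is why both the %rax and %rdx halves get stored. For reference, a hedged C helper expressing the same read (the name is mine; rdmsr is privileged, so this only works in kernel mode):

    #include <stdint.h>

    /* Read an MSR: rdmsr takes the MSR index in %ecx and returns the
     * value split across %edx (high half) and %eax (low half). */
    static inline uint64_t read_msr(uint32_t msr) {
      uint32_t lo, hi;
      __asm__ __volatile__("rdmsr" : "=a"(lo), "=d"(hi) : "c"(msr));
      return ((uint64_t)hi << 32) | lo;
    }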
2012 Nov 20
12
[PATCH v2 00/11] xen: Initial kexec/kdump implementation
Hi, This set of patches contains the initial kexec/kdump implementation for Xen v2 (a previous version was posted to a few people by mistake; sorry for that). Currently only dom0 is supported; however, almost all infrastructure required for domU support is ready. Jan Beulich suggested merging the Xen x86 assembler code with the bare-metal x86 code. This could simplify the code and slightly reduce the kernel's size.
2010 Aug 12
59
[PATCH 00/15] RFC xen device model support
Hi all, this is the long-awaited patch series to add xen device model support in qemu; the main author is Anthony Perard. Developing this series we tried to come up with the cleanest possible solution from the qemu point of view, limiting the number of changes to common code as much as possible. The end result still requires a couple of hooks in piix_pci, but overall the impact should be very