search for: rept

Displaying 20 results from an estimated 43 matches for "rept".

2012 Mar 19
1
[LLVMdev] [patch] Enhance of asm macros
Hi llvm users & developers! Attached patches: 1) rewrote the previous patch; for the Darwin platform the old mechanism is applied 2) added processing of the .rept directive 3) added processing of the .irp directive 4) added processing of the .irpc directive ...
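For reference, a minimal sketch of the three directives the patches add handling for; the repeat counts, registers and values below are illustrative only, not taken from the patches:

        # .rept N ... .endr emits its body N times verbatim
        .rept 4
                nop
        .endr                      # expands to four nop instructions

        # .irp substitutes each listed operand for \reg in turn
        .irp reg, eax, ebx, ecx
                pushl %\reg
        .endr                      # pushl %eax; pushl %ebx; pushl %ecx

        # .irpc iterates over the individual characters of its argument
        .irpc n, 123
                .long \n
        .endr                      # .long 1; .long 2; .long 3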
2012 Mar 05
0
[LLVMdev] [patch] Enhance of asm macros
> For compatibility these problems require some compiler switch flag. Can you > give me a description/example of how that can be done? grep for DwarfRequiresRelocationForSectionOffset. Something like that might do what you want. Cheers, Rafael
2006 Jun 21
2
Theora MMX and Mac OS X Intel
Hi, I was trying to enable the MMX code on Mac OS X. To get to that point one has to replace some inline assembler code: .balign 16 -> .p2align 4 and replace .rept .. .endr with #defines. But to make things more complicated, Apple's GAS does not support movsx instructions and thus the following line does not work: " movsx %%di, %%edi \n\t" [ more details at https://trac.xiph.org/browser/trunk/theora/lib/x86_32/dsp_mmx.c#L443 ] if...
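As a rough illustration of the two rewrites being described (the instruction used is hypothetical, not taken from dsp_mmx.c): .p2align 4 aligns to 2^4 = 16 bytes, which is exactly what .balign 16 requests, and a .rept block can simply be unrolled by hand (or via a preprocessor macro) for assemblers that do not accept the directive:

        .balign 16                 # GNU as: align to 16 bytes
        .p2align 4                 # Apple as: align to 2^4 = 16 bytes (equivalent)

        # original form, repeated by the assembler:
        .rept 2
                movq (%esi), %mm0
        .endr

        # unrolled form for assemblers without .rept:
        movq (%esi), %mm0
        movq (%esi), %mm0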
2012 Feb 15
2
[LLVMdev] [patch] Enhance of asm macros
Hello Kevin, I have been thinking about this, but there are some problems: 1) Different interpretation of $[:number:] in macro body: source: .macro test par1 movl $0, %eax .endm test %ebx translated to: original llvm => movl %ebx, %eax with patch => movl $0, %ebx 2) Different parsing of spaces in macro parameters: source: test2 a + b,c parsed as: original llvm => macro test2 with two
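A sketch of the ambiguity in point 1, under the assumption that the Darwin dialect treats $0, $1, ... as positional macro arguments while plain GNU as reads $0 as the immediate value zero (the macro and registers are the ones quoted above):

        .macro test par1
                movl $0, %eax
        .endm
                test %ebx

        # Darwin-style reading: $0 names the first macro argument, so the
        # body is expanded with %ebx substituted for it.
        # GNU-style reading: $0 in an operand is the literal immediate zero,
        # and the named argument would have to be written as \par1 instead.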
2013 Aug 26
5
[RFC PATCH 0/2] GLOBAL() macro for asm code.
Hello, This series has been split into two patches, one for arm and one for x86. I figured that this was easier than doing it as a single combined patch, especially as the changes are functionally independent. x86 has been boot tested, but arm has not even been compile tested as I lack a suitable cross compiler. However, the changes are just text replacement, so I don't expect any issues. The
2005 Sep 05
2
[PATCH][1/6] add a hypercall number for virtual device in unmodified guest
...0:36:49 2005 +++ b/xen/arch/x86/x86_32/entry.S Fri Sep 2 22:46:13 2005 @@ -812,6 +812,7 @@ .long do_ni_hypercall /* 25 */ .long do_mmuext_op .long do_acm_op /* 27 */ + .long do_virtual_device_op /* virtual device op for VMX */ .rept NR_hypercalls-((.-hypercall_table)/4) .long do_ni_hypercall .endr
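The .rept NR_hypercalls-((.-hypercall_table)/4) line pads the hypercall table out to a fixed number of 4-byte entries, pointing every unused slot at do_ni_hypercall. A minimal sketch of the same padding idiom, with made-up table, handler and count names:

        .equ NR_ENTRIES, 8
        call_table:
                .long handler_a
                .long handler_b
                # (. - call_table)/4 is the number of slots emitted so far;
                # fill the remaining slots with a default handler
                .rept NR_ENTRIES - ((. - call_table)/4)
                .long default_handler
                .endr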
2007 Apr 18
3
[RFC, PATCH 4/24] i386 Vmi inline implementation
...entry.S + * + * To work around gas bugs, we must emit the native sequence here into a + * separate section first to measure the length. Some versions of gas have + * difficulty resolving vmi_native_end - vmi_native_begin during evaluation + * of an assembler conditional, which we use during the .rept directive + * below to generate the nop padding -- Zach + */ + +/* First, measure the native instruction sequence length */ +#define vmi_native_start \ + .pushsection .vmi.native,"ax"; \ + 771:; +#define vmi_native_finish \ + 772:; \ + .popsection; +#define vmi_native_begin 771b...
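The technique the comment describes, measuring the native sequence between two local labels in a scratch section and then using the difference as a .rept count for nop padding, looks roughly like the sketch below (a simplified stand-in with a hypothetical instruction, not the actual VMI macros; as the comment warns, some gas versions have trouble evaluating the label difference at this point):

        .pushsection .vmi.native, "ax"
        771:    mov %cr3, %eax             # the native sequence being measured
        772:
        .popsection

                # pad the replacement site with nops to the measured length
                .rept 772b - 771b
                nop
                .endr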
2007 Apr 18
2
[RFC PATCH 23/35] Increase x86 interrupt vector range
...ernel/irq.c | 4 ++-- arch/x86_64/kernel/smp.c | 4 ++-- include/asm-x86_64/hw_irq.h | 2 +- 6 files changed, 10 insertions(+), 10 deletions(-) --- linus-2.6.orig/arch/i386/kernel/entry.S +++ linus-2.6/arch/i386/kernel/entry.S @@ -464,7 +464,7 @@ vector=0 ENTRY(irq_entries_start) .rept NR_IRQS ALIGN -1: pushl $vector-256 +1: pushl $~(vector) jmp common_interrupt .data .long 1b @@ -481,7 +481,7 @@ common_interrupt: #define BUILD_INTERRUPT(name, nr) \ ENTRY(name) \ - pushl $nr-256; \ +...
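For context, the hunk sits inside a .rept loop that stamps out one small entry stub per IRQ, each pushing an encoding of its vector number before jumping to common_interrupt; the change is only to that encoding ($~(vector) instead of $vector-256). Reconstructed as a sketch (the vector=vector+1 step and the return to .text at the end of the loop body are inferred from the loop structure, since the excerpt is cut off):

        vector=0
        ENTRY(irq_entries_start)
        .rept NR_IRQS
                ALIGN
        1:      pushl $~(vector)           # patched encoding of the vector number
                jmp common_interrupt
        .data
                .long 1b                   # record this stub's address
        .text
        vector=vector+1
        .endr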
2020 Apr 28
0
[PATCH v3 13/75] x86/boot/compressed/64: Add IDT Infrastructure
...rnel.. */ @@ -681,10 +693,21 @@ SYM_DATA_START_LOCAL(gdt) .quad 0x0000000000000000 /* TS continued */ SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end) +SYM_DATA_START(boot_idt_desc) + .word boot_idt_end - boot_idt + .quad 0 +SYM_DATA_END(boot_idt_desc) + .balign 8 +SYM_DATA_START(boot_idt) + .rept BOOT_IDT_ENTRIES + .quad 0 + .quad 0 + .endr +SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end) + #ifdef CONFIG_EFI_STUB SYM_DATA(image_offset, .long 0) #endif - #ifdef CONFIG_EFI_MIXED SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0, 0) SYM_DATA(efi_is64, .byte 1) diff --git a/arch/x86/boo...
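The added data reserves BOOT_IDT_ENTRIES zero-filled 16-byte IDT entries plus a descriptor whose size field is computed from the two bracketing labels. Stripped of the SYM_DATA_* annotations, the shape is roughly:

        boot_idt_desc:
                .word boot_idt_end - boot_idt   # IDT size in bytes
                .quad 0                         # IDT base, filled in at runtime
                                                # before the IDT is loaded

                .balign 8
        boot_idt:
                .rept BOOT_IDT_ENTRIES
                .quad 0                         # each 16-byte IDT entry,
                .quad 0                         # zero-initialized
                .endr
        boot_idt_end: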
2020 Feb 11
0
[PATCH 08/62] x86/boot/compressed/64: Add IDT Infrastructure
...ernel.. */ @@ -628,6 +650,18 @@ SYM_DATA_START_LOCAL(gdt) .quad 0x0000000000000000 /* TS continued */ SYM_DATA_END_LABEL(gdt, SYM_L_LOCAL, gdt_end) +SYM_DATA_START(boot_idt_desc) + .word boot_idt_end - boot_idt + .quad 0 +SYM_DATA_END(boot_idt_desc) + .balign 8 +SYM_DATA_START(boot_idt) + .rept BOOT_IDT_ENTRIES + .quad 0 + .quad 0 + .endr +SYM_DATA_END_LABEL(boot_idt, SYM_L_GLOBAL, boot_idt_end) + #ifdef CONFIG_EFI_MIXED SYM_DATA_LOCAL(efi32_boot_args, .long 0, 0) SYM_DATA(efi_is64, .byte 1) diff --git a/arch/x86/boot/compressed/idt_64.c b/arch/x86/boot/compressed/idt_64.c new file mod...
2004 Aug 24
5
MMX/mmxext optimisations
Quite some speed improvement indeed. Attached is the updated patch to apply to svn/trunk. j [attachment: theora-mmx.patch.gz, application/x-gzip, 8648 bytes: http://lists.xiph.org/pipermail/theora-dev/attachments/20040824/5a5f2731/theora-mmx.patch-0001.bin]
2006 Aug 31
5
Tables with Graphical Representations
Hi useRs - I was wondering if anyone out there can tell me where to find R-code to do mixes of tables and graphics. I am thinking of something similar to this: http://yost.com/information-design/powerpoint-corrupts/ or like the excel routines people are demonstrating: http://infosthetics.com/archives/2006/08/excel_in_cell_graphing.html My aim is to provide small graphics to illustrate
2007 May 09
1
[patch 3/9] lguest: the host code
...data; .long 1f; .text; 1: + /* Make an error number for most traps, which don't have one. */ + .if (\N <> 8) && (\N < 10 || \N > 14) && (\N <> 17) + pushl $0 + .endif + pushl $\N + jmp \TARGET + ALIGN +.endm + +.macro IRQ_STUBS FIRST LAST TARGET + irq=\FIRST + .rept \LAST-\FIRST+1 + IRQ_STUB irq \TARGET + irq=irq+1 + .endr +.endm + +/* We intercept every interrupt, because we may need to switch back to + * host. Unfortunately we can't tell them apart except by entry + * point, so we need 256 entry points. + */ +.data +.global default_idt_entries +default...
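IRQ_STUBS uses .rept plus an assembler symbol as a loop counter: each pass expands IRQ_STUB with the current value of irq and then increments it. A condensed sketch of how it builds all 256 entry points, with a hypothetical handler name and the error-code and .data bookkeeping from the quoted macro elided:

        .macro IRQ_STUB N TARGET
                pushl $\N                  # vector number (error-code handling elided)
                jmp \TARGET
        .endm

        .macro IRQ_STUBS FIRST LAST TARGET
                irq=\FIRST
                .rept \LAST-\FIRST+1
                IRQ_STUB irq \TARGET
                irq=irq+1
                .endr
        .endm

                # one stub per possible vector
                IRQ_STUBS 0 255 default_handler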
2007 Apr 18
1
[RFC/PATCH LGUEST X86_64 03/13] lguest64 core
.../* .if (\N <> 2) && (\N <> 8) && (\N < 10 || \N > 14) && (\N <> 17) */ + .if (\N < 10 || \N > 14) && (\N <> 17) + pushq $0 + .endif + pushq $\N + jmp \TARGET + .align 8 +.endm + +.macro IRQ_STUBS FIRST LAST TARGET + irq=\FIRST + .rept \LAST-\FIRST+1 + IRQ_STUB irq \TARGET + irq=irq+1 + .endr +.endm + +/* We intercept every interrupt, because we may need to switch back to + * host. Unfortunately we can't tell them apart except by entry + * point, so we need 256 entry points. + */ +irq_stubs: +.data +.global _lguest_default_...
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes: - patch v2: - Adapt patch to work post KPTI and compiler changes - Redo all performance testing with latest configs and compilers - Simplify mov macro on PIE (MOVABS now) - Reduce GOT footprint - patch v1: - Simplify ftrace implementation. - Use gcc mstack-protector-guard-reg=%gs with PIE when possible. - rfc v3: - Use --emit-relocs instead of -pie to reduce