Displaying 20 results from an estimated 22 matches for "sh_flags".
2010 Jul 17
0
mksh on klibc
...at anoncvs.mirbsd.org:/cvs \
co -PA mksh
% cd mksh
% env CC=klcc CPPFLAGS=-DMKSH_NO_LIMITS sh Build.sh -r
Then, ‘./test.sh -v’ fails, as does this:
tg at frozenfish:~/mksh $ ./mksh -c 'echo foo; ls; echo bar'
foo
Build.sh check.t eval.c expr.o jobs.c lex.o mksh sh_flags.h syn.c var.c
CVS dot.mkshrc eval.o funcs.c jobs.o main.c mksh.1 shf.c syn.o var.o
Makefile edit.c exec.c funcs.o lalloc.c main.o setmode.c shf.o test.sh var_spec.h
Rebuild.sh edit.o exec.o histrap.c lalloc.o misc.c setmode.o...
2008 Oct 21
5
Why could I get function names from a stripped exec?
Hello, all experts.
When I use the pid provider, my D script with the -F option outputs the code path with flow indentation, as you know,
e.g.
-> main
-> f1
-> f2
However, I realized that the executable I used at the time was stripped.
Does anyone know why I could still see the function names?
Thanks in advance.
--
This message posted from opensolaris.org
2015 Oct 10
3
[PATCH] Extend Multiboot1 with support for ELF64 file format
...r = addr;
+ mbinfo.syms.e.num = eh64->e_shnum;
+ mbinfo.syms.e.size = eh64->e_shentsize;
+ mbinfo.syms.e.shndx = eh64->e_shstrndx;
+
+ for (i = 0; i < eh64->e_shnum; i++) {
+ addr_t align;
+
+ if (!sh64[i].sh_size)
+ continue; /* Empty section */
+ if (sh64[i].sh_flags & SHF_ALLOC)
+ continue; /* SHF_ALLOC sections should have PHDRs */
+
+ align = sh64[i].sh_addralign ? sh64[i].sh_addralign : 0;
+ addr = map_data((char *)ptr + sh64[i].sh_offset,
+ sh64[i].sh_size, align, MAP_HIGH);
+ if (!addr) {
+ error("Failed to map symbol section\n&q...
2007 Apr 18
2
[RFC, PATCH] Fixup COMPAT_VDSO to work with CONFIG_PARAVIRT
...>e_type != ET_DYN)
+ panic("Bogus ELF in vsyscall DSO\n");
+
+ hdr->e_entry += VDSO_HIGH_BASE;
+ sechdrs = (void *)hdr + hdr->e_shoff;
+ secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ for (i = 1; i < hdr->e_shnum; i++) {
+ if (!(sechdrs[i].sh_flags & SHF_ALLOC))
+ continue;
+
+ sechdrs[i].sh_addr += VDSO_HIGH_BASE;
+ if (strcmp(secstrings+sechdrs[i].sh_name, ".dynsym") == 0) {
+ Elf32_Sym *sym = (void *)hdr + sechdrs[i].sh_offset;
+ n = sechdrs[i].sh_size / sizeof(*sym);
+ for (j = 1; j < n; j++) {
+...
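For context, the pattern used above can be sketched in isolation: the section header string table is found through e_shstrndx, each SHF_ALLOC section is rebased, and .dynsym is located by name before its symbols are touched. find_dynsym below is a hypothetical helper that only performs the lookup half; it is not kernel code.

/* Hedged sketch of the .dynsym lookup above; find_dynsym and nsyms
 * are illustrative names. */
#include <elf.h>
#include <stddef.h>
#include <string.h>

static Elf32_Sym *find_dynsym(Elf32_Ehdr *hdr, unsigned int *nsyms)
{
    Elf32_Shdr *sechdrs = (Elf32_Shdr *)((char *)hdr + hdr->e_shoff);
    const char *secstrings =
        (char *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;

    for (int i = 1; i < hdr->e_shnum; i++) {
        if (!(sechdrs[i].sh_flags & SHF_ALLOC))
            continue;   /* the loop above only rebases loaded sections */
        if (strcmp(secstrings + sechdrs[i].sh_name, ".dynsym") == 0) {
            *nsyms = sechdrs[i].sh_size / sizeof(Elf32_Sym);
            return (Elf32_Sym *)((char *)hdr + sechdrs[i].sh_offset);
        }
    }
    *nsyms = 0;
    return NULL;
}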
2018 May 23
0
[PATCH v3 23/27] x86/modules: Adapt module loading for PIE support
...ela);
+
+ if (sechdrs[i].sh_type != SHT_RELA)
+ continue;
+
+ /* sort by type, symbol index and addend */
+ sort(rels, numrels, sizeof(Elf64_Rela), cmp_rela, NULL);
+
+ gots += count_gots(syms, rels, numrels);
+ }
+
+ mod->arch.core.got->sh_type = SHT_NOBITS;
+ mod->arch.core.got->sh_flags = SHF_ALLOC;
+ mod->arch.core.got->sh_addralign = L1_CACHE_BYTES;
+ mod->arch.core.got->sh_size = (gots + 1) * sizeof(u64);
+ mod->arch.core.got_num_entries = 0;
+ mod->arch.core.got_max_entries = gots;
+
+ /*
+ * If a _GLOBAL_OFFSET_TABLE_ symbol exists, make it absolute for
+...
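The GOT sizing step above can be illustrated with a stand-alone sketch: sort the RELA entries so identical targets become adjacent, then count how many distinct GOT-referencing relocations need a slot. cmp_rela and count_gots mirror the names in the patch, but the bodies below (and the assumption that R_X86_64_GOTPCREL is the relocation type of interest) are simplifications, not the kernel implementation.

/* Assumption-laden sketch of GOT sizing; not the kernel's helpers. */
#include <elf.h>
#include <stdlib.h>

static int cmp_rela(const void *a, const void *b)
{
    const Elf64_Rela *x = a, *y = b;

    /* sort by type, then symbol index, then addend */
    if (ELF64_R_TYPE(x->r_info) != ELF64_R_TYPE(y->r_info))
        return ELF64_R_TYPE(x->r_info) < ELF64_R_TYPE(y->r_info) ? -1 : 1;
    if (ELF64_R_SYM(x->r_info) != ELF64_R_SYM(y->r_info))
        return ELF64_R_SYM(x->r_info) < ELF64_R_SYM(y->r_info) ? -1 : 1;
    if (x->r_addend != y->r_addend)
        return x->r_addend < y->r_addend ? -1 : 1;
    return 0;
}

static unsigned int count_gots(Elf64_Rela *rels, unsigned int numrels)
{
    unsigned int gots = 0;

    qsort(rels, numrels, sizeof(*rels), cmp_rela);
    for (unsigned int i = 0; i < numrels; i++) {
        if (ELF64_R_TYPE(rels[i].r_info) != R_X86_64_GOTPCREL)
            continue;
        /* duplicates are adjacent after sorting; count each target once */
        if (i == 0 || cmp_rela(&rels[i - 1], &rels[i]) != 0)
            gots++;
    }
    return gots;
}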
2007 Mar 05
7
[PATCH 2/10] linux 2.6.18: COMPAT_VDSO
...ARN_ON(1);
+ break;
+ }
+ }
+ BUG_ON(ehdr->e_shentsize < sizeof(Elf32_Shdr));
+ BUG_ON(ehdr->e_shnum >= SHN_LORESERVE);
+ for (i = 1; i < ehdr->e_shnum; ++i) {
+ Elf32_Shdr *shdr = (void *)((unsigned long)ehdr + ehdr->e_shoff + i * ehdr->e_shentsize);
+
+ if (!(shdr->sh_flags & SHF_ALLOC))
+ continue;
+ shdr->sh_addr += new_base - old_base;
+ switch(shdr->sh_type) {
+ case SHT_DYNAMIC:
+ case SHT_HASH:
+ case SHT_NOBITS:
+ case SHT_NOTE:
+ case SHT_PROGBITS:
+ case SHT_STRTAB:
+ case 0x6ffffffd: /* SHT_GNU_verdef */
+ case 0x6fffffff: /* SHT_GNU_ve...
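Since every result in this search hinges on testing sh_flags, a small companion sketch may help: decoding the common flag bits in the spirit of readelf's "Flg" column. print_sh_flags is a hypothetical helper, not part of the patch above.

/* Decode the usual sh_flags bits for one Elf32 section header. */
#include <elf.h>
#include <stdio.h>

static void print_sh_flags(const Elf32_Shdr *shdr)
{
    printf("%c%c%c\n",
           (shdr->sh_flags & SHF_WRITE)     ? 'W' : '-',  /* writable at run time */
           (shdr->sh_flags & SHF_ALLOC)     ? 'A' : '-',  /* occupies memory when loaded */
           (shdr->sh_flags & SHF_EXECINSTR) ? 'X' : '-'); /* contains executable code */
}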
2007 Apr 18
1
[RFC, PATCH 7/24] i386 Vmi memory hole
Create a configurable hole in the linear address space at the top
of memory. A more advanced interface is needed to negotiate how
much space the hypervisor is allowed to steal, but in the end, it
seems most likely that a fixed constant size will be chosen for
the compiled kernel, potentially propagated to an information
page used by paravirtual initialization to determine interface
compatibility.
2007 Apr 18
1
[PATCH, experimental] i386 Allow the fixmap to be relocated at boot time
...ype != ET_DYN)
+ panic("Bogus ELF in vsyscall DSO\n");
+
+ hdr->e_entry += VSYSCALL_RELOCATION;
+
+ sechdrs = (void *)hdr + hdr->e_shoff;
+ secstrings = (void *)hdr + sechdrs[hdr->e_shstrndx].sh_offset;
+
+ for (i = 1; i < hdr->e_shnum; i++) {
+ if (!(sechdrs[i].sh_flags & SHF_ALLOC))
+ continue;
+
+ sechdrs[i].sh_addr += VSYSCALL_RELOCATION;
+ if (strcmp(secstrings+sechdrs[i].sh_name, ".dynsym") == 0) {
+ Elf32_Sym *sym = (void *)hdr + sechdrs[i].sh_offset;
+ n = sechdrs[i].sh_size / sizeof(*sym);
+ for (j = 1; j < n; j++)...
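The excerpt cuts off inside the symbol loop; what follows is a generic illustration of the step that typically comes next when a DSO is rebased: each defined .dynsym entry is shifted by the same delta as the sections. The rebase_dynsym name and the skip of undefined/absolute symbols are assumptions for illustration, not a quote of the elided body.

/* Hypothetical illustration, not the patch's elided code. */
#include <elf.h>

static void rebase_dynsym(Elf32_Sym *sym, unsigned int n, Elf32_Addr delta)
{
    for (unsigned int j = 1; j < n; j++) {
        if (sym[j].st_shndx == SHN_UNDEF || sym[j].st_shndx == SHN_ABS)
            continue;           /* nothing to relocate */
        sym[j].st_value += delta;
    }
}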
2007 Apr 18
4
[patch 0/4] Clean up asm/bugs.h, identify_cpu() and update COMPAT_VDSO
Hi Andi,
Four patches:
- clean up asm/bugs.h, by moving all the C code into its own C file
- split identify_cpu() into boot and secondary variants, so that
boot-time setup functions can be marked __init
- repost of the COMPAT_VDSO patches with a bit more robustness from
unknown DT_tags, and functions marked __init, since all this is
boot-time only setup.
Thanks,
J
--
2018 Mar 13
32
[PATCH v2 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v2:
- Adapt patch to work post KPTI and compiler changes
- Redo all performance testing with latest configs and compilers
- Simplify mov macro on PIE (MOVABS now)
- Reduce GOT footprint
- patch v1:
- Simplify ftrace implementation.
- Use gcc mstack-protector-guard-reg=%gs with PIE when possible.
- rfc v3:
- Use --emit-relocs instead of -pie to reduce
2017 Oct 04
28
x86: PIE support and option to extend KASLR randomization
These patches make the changes necessary to build the kernel as Position
Independent Executable (PIE) on x86_64. A PIE kernel can be relocated below
the top 2G of the virtual address space. It makes it possible to optionally extend the
KASLR randomization range from 1G to 3G.
Thanks a lot to Ard Biesheuvel & Kees Cook on their feedback on compiler
changes, PIE support and KASLR in general. Thanks to
2018 May 23
33
[PATCH v3 00/27] x86: PIE support and option to extend KASLR randomization
Changes:
- patch v3:
- Update on message to describe longer term PIE goal.
- Minor change on ftrace if condition.
- Changed code using xchgq.
- patch v2:
- Adapt patch to work post KPTI and compiler changes
- Redo all performance testing with latest configs and compilers
- Simplify mov macro on PIE (MOVABS now)
- Reduce GOT footprint
- patch v1:
- Simplify ftrace