similar to: [PATCH 1 of 1] x86_64: Put .note.* sections into a PT_NOTE segment in vmlinux

Displaying 20 results from an estimated 600 matches similar to: "[PATCH 1 of 1] x86_64: Put .note.* sections into a PT_NOTE segment in vmlinux"

2012 Jul 05
10
[PATCH] kexec-tools: Read always one vmcoreinfo file
The vmcoreinfo file may exist under /sys/kernel (valid on bare metal only) and/or under /sys/hypervisor (valid when Xen dom0 is running). Read only one of them; this means that only one PT_NOTE will ever be created. Remove the extra code for the second PT_NOTE creation. Signed-off-by: Daniel Kiper <daniel.kiper@oracle.com> --- kexec/crashdump-elf.c | 33 +++++++-------------------------- 1 files
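
As a rough illustration of the read-only-one-file behaviour described above, here is a minimal C sketch (not the patch itself; the lookup order and the hex "address size" file format are assumptions based on common kexec-tools conventions):

    /*
     * Sketch: prefer the Xen dom0 location, fall back to the bare-metal
     * one, and stop after the first file that parses, so that at most
     * one vmcoreinfo source (and hence one PT_NOTE) is ever used.
     */
    #include <inttypes.h>
    #include <stdio.h>

    static int read_vmcoreinfo(uint64_t *addr, uint64_t *len)
    {
        const char *paths[] = {
            "/sys/hypervisor/vmcoreinfo",   /* present under Xen dom0 */
            "/sys/kernel/vmcoreinfo",       /* present on bare metal */
        };
        for (int i = 0; i < 2; i++) {
            FILE *fp = fopen(paths[i], "r");
            if (!fp)
                continue;
            int n = fscanf(fp, "%" SCNx64 " %" SCNx64, addr, len);
            fclose(fp);
            return n == 2 ? 0 : -1;         /* use only the first file found */
        }
        return -1;                          /* neither file exists */
    }

    int main(void)
    {
        uint64_t addr, len;
        if (read_vmcoreinfo(&addr, &len) == 0)
            printf("vmcoreinfo at 0x%" PRIx64 ", %" PRIu64 " bytes\n", addr, len);
        else
            printf("no usable vmcoreinfo file\n");
        return 0;
    }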
2006 Aug 28
2
update for x86_64-put-note-sections-into-a-pt_note-segment-in-vmlinux.patch
Older binutils (prior to 2.16) have a problem with the linker script resulting from the change introducing explicit segment maps: the respective linker does not properly handle @nobits sections (i.e. .bss) sitting between @progbits ones (i.e. .data.*). The .bss section must therefore be moved past all initialized sections (as is already the case on i386). Replacement patch attached. Jan
2007 Apr 18
8
[patch 0/8] Basic infrastructure patches for a paravirtualized kernel
Hi Andrew, This series of patches lays the basic ground work for the paravirtualized kernel patches coming later on. I think this lot is ready for the rough-and-tumble world of the -mm tree. The main change from the last posting is that all the page-table related patches have been moved out, and will be posted separately. Also, the off-by-one in reserving the top of address space has been
2008 Mar 31
3
[PATCH 3/4] extract vmcoreinfo from /proc/vmcore for Xen
This patch is for kexec-tools-testing-20080324. --- kexec/crashdump.c.org 2008-03-25 11:51:51.000000000 +0900 +++ kexec/crashdump.c 2008-03-26 09:29:20.000000000 +0900 @@ -110,10 +110,8 @@ return 0; } -/* Returns the physical address of start of crash notes buffer for a kernel. */ -int get_kernel_vmcoreinfo(uint64_t *addr, uint64_t *len) +static int get_vmcoreinfo(char *kdump_info, uint64_t
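
For context on what extracting vmcoreinfo from /proc/vmcore involves, here is a hedged sketch (not the kexec-tools code) that walks the PT_NOTE segments of an ELF core and looks for a note named "VMCOREINFO"; a 64-bit image and the usual 4-byte note padding are assumed:

    #include <elf.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define NOTE_ALIGN(x) (((x) + 3UL) & ~3UL)   /* ELF notes are 4-byte padded */

    /* Scan every PT_NOTE segment of an ELF core for a "VMCOREINFO" note. */
    static int find_vmcoreinfo_note(FILE *fp)
    {
        Elf64_Ehdr eh;
        if (fread(&eh, sizeof(eh), 1, fp) != 1)
            return -1;
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(fp, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
            if (fread(&ph, sizeof(ph), 1, fp) != 1 || ph.p_type != PT_NOTE || !ph.p_filesz)
                continue;
            char *buf = malloc(ph.p_filesz);
            fseek(fp, ph.p_offset, SEEK_SET);
            if (!buf || fread(buf, 1, ph.p_filesz, fp) != ph.p_filesz) {
                free(buf);
                return -1;
            }
            for (size_t off = 0; off + sizeof(Elf64_Nhdr) <= ph.p_filesz; ) {
                Elf64_Nhdr *nh = (Elf64_Nhdr *)(buf + off);
                const char *name = buf + off + sizeof(*nh);
                if (nh->n_namesz == sizeof("VMCOREINFO") && !strcmp(name, "VMCOREINFO")) {
                    printf("found VMCOREINFO note, %u bytes of data\n",
                           (unsigned)nh->n_descsz);
                    free(buf);
                    return 0;
                }
                off += sizeof(*nh) + NOTE_ALIGN(nh->n_namesz) + NOTE_ALIGN(nh->n_descsz);
            }
            free(buf);
        }
        return -1;   /* no VMCOREINFO note in any PT_NOTE segment */
    }

    int main(int argc, char **argv)
    {
        FILE *fp = fopen(argc > 1 ? argv[1] : "/proc/vmcore", "rb");
        if (!fp)
            return 2;
        int rc = find_vmcoreinfo_note(fp);
        fclose(fp);
        return rc ? 1 : 0;
    }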
2007 Apr 18
15
[PATCH 0 of 13] Basic infrastructure patches for a paravirtualized kernel
[ REPOST: Apologies to anyone who has seen this before. It didn't make it onto any of the lists it should have. -J ] Hi Andrew, This series of patches lays the basic ground work for the paravirtualized kernel patches coming later on. I think this lot is ready for the rough-and-tumble world of the -mm tree. For the most part, these patches do nothing or very little. The patches should
2007 May 31
1
[patch rfc wip] first cut of ELF bzImage
I started with Vivek's ELF bzImage patch from Oct last year, mashed it to apply to hpa's new setup/boot code. This patch does a couple of things, which would probably be better split into multiple patches: 1. Glue an ELF header onto the front of bzImage. This is a real ELF header at the front of the file. Breaks akpm's laptop, apparently, but it works for me. 2.
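
To make "a real ELF header at the front of the file" concrete, a small stand-alone checker (not part of the patch; the 64-bit header type is an assumption, the real image may carry a 32-bit one) could look like this:

    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s bzImage\n", argv[0]);
            return 2;
        }
        FILE *fp = fopen(argv[1], "rb");
        Elf64_Ehdr eh;
        if (!fp || fread(&eh, sizeof(eh), 1, fp) != 1) {
            perror(argv[1]);
            return 2;
        }
        /* An ELF-fronted bzImage starts with the ELF magic at offset 0. */
        if (memcmp(eh.e_ident, ELFMAG, SELFMAG) == 0)
            printf("ELF header found: type %u, %u program headers\n",
                   (unsigned)eh.e_type, (unsigned)eh.e_phnum);
        else
            printf("no ELF magic at offset 0\n");
        fclose(fp);
        return 0;
    }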
2007 Apr 18
2
Single PV startup vs multiple PV startup
Hi Rusty, I had a look over your 011-paravirt-head.S.patch. I'm struggling to come up with a list of any benefits over having separate entrypoints for each hypervisor. Multiple entry pros: * allows maximum startup flexibility for any given hypervisor * for Xen at least, it's pretty simple * the "what hypervisor am I under" question is answered trivially cons:
2007 Mar 05
7
[PATCH 2/10] linux 2.6.18: COMPAT_VDSO
This adds support for CONFIG_COMPAT_VDSO. As this will certainly raise questions, I left in the code needed for an alternative approach (which requires more C code, but fewer build script changes). Signed-off-by: Jan Beulich <jbeulich@novell.com> Index: head-2007-02-27/arch/i386/Kconfig =================================================================== ---
2019 Nov 27
3
RFC: Loadable segments watermark for lld
The ELF file header isn't always covered by a segment but still affects loading. I think everything else that affects loading/dynamic linking is always covered by a PT_LOAD segment. As evidence, this is basically what --strip-sections in llvm-strip and eu-strip do, and they produce perfectly runnable binaries. Having a hash of the actual memory map is interesting IMO. Build IDs can't really
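
The claim above is easy to check directly; the sketch below (not lld or llvm-strip code; a 64-bit ELF is assumed) walks a binary's program headers and reports whether file offset 0, i.e. the ELF header itself, lies inside any PT_LOAD segment's file range:

    #include <elf.h>
    #include <stdio.h>
    #include <string.h>

    int main(int argc, char **argv)
    {
        if (argc != 2) {
            fprintf(stderr, "usage: %s elf-file\n", argv[0]);
            return 2;
        }
        FILE *fp = fopen(argv[1], "rb");
        Elf64_Ehdr eh;
        if (!fp || fread(&eh, sizeof(eh), 1, fp) != 1 ||
            memcmp(eh.e_ident, ELFMAG, SELFMAG) != 0)
            return 2;
        int covered = 0;
        for (int i = 0; i < eh.e_phnum; i++) {
            Elf64_Phdr ph;
            fseek(fp, eh.e_phoff + (long)i * eh.e_phentsize, SEEK_SET);
            if (fread(&ph, sizeof(ph), 1, fp) != 1)
                break;
            /* A PT_LOAD starting at file offset 0 maps the ELF header too. */
            if (ph.p_type == PT_LOAD && ph.p_offset == 0 && ph.p_filesz >= sizeof(eh))
                covered = 1;
        }
        printf("ELF header %s covered by a PT_LOAD segment\n",
               covered ? "is" : "is NOT");
        fclose(fp);
        return 0;
    }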
2011 Jul 27
9
[PATCH 0/5] Collected vdso/vsyscall fixes for 3.1
This fixes various problems that cropped up with the vdso patches. - Patch 1 fixes an information leak to userspace. - Patches 2 and 3 fix the kernel build on gold. - Patches 4 and 5 fix Xen (I hope). Konrad, could you test these on Xen and run 'test_vsyscall test' [1]? I don't have a usable Xen setup. Also, I'd appreciate a review of patches 4 and 5 from some
2011 Aug 03
10
[PATCH v2 0/6] Collected vdso/vsyscall fixes for 3.1
This fixes various problems that cropped up with the vdso patches. - Patch 1 fixes an information leak to userspace. - Patches 2 and 3 fix the kernel build on gold. - Patches 4 and 5 fix Xen (I hope). - Patch 6 (optional) adds a trace event to vsyscall emulation. It will make it easier to handle performance regression reports :) [1]