search for: ftraced

Displaying results from an estimated 318 matches for "ftraced".

2018 May 24
2
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
On Wed 2018-05-23 12:54:15, Thomas Garnier wrote: > When using -fPIE/PIC with function tracing, the compiler generates a > call through the GOT (call *__fentry__ at GOTPCREL). This instruction > takes 6 bytes instead of 5 on the usual relative call. > > If PIE is enabled, replace the 6th byte of the GOT call by a 1-byte nop > so ftrace can handle the previous 5 bytes as before.
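
For reference, a minimal sketch in C of the two encodings under discussion. The byte values are the standard x86-64 opcodes; the array names and the patched layout shown here are illustrative, not the kernel's actual tables:

    /* 5-byte relative call that ftrace normally expects at a trace site:
     *   e8 <rel32>    call __fentry__                                  */
    static const unsigned char rel_call[5]  = { 0xe8, 0x00, 0x00, 0x00, 0x00 };

    /* 6-byte indirect call through the GOT emitted under -fPIE/PIC:
     *   ff 15 <rel32> call *__fentry__@GOTPCREL(%rip)                  */
    static const unsigned char got_call[6]  = { 0xff, 0x15, 0x00, 0x00, 0x00, 0x00 };

    /* After the rewrite described above, the site is 5 bytes that ftrace
     * can patch as usual, plus a trailing 1-byte nop:
     *   0f 1f 44 00 00   nopl 0x0(%rax,%rax,1)  (standard 5-byte nop)
     *   90               nop                    (fills the 6th byte)   */
    static const unsigned char patched[6]   = { 0x0f, 0x1f, 0x44, 0x00, 0x00, 0x90 };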
2018 May 24
1
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
On Thu, May 24, 2018 at 1:16 PM Steven Rostedt <rostedt at goodmis.org> wrote: > On Thu, 24 May 2018 13:40:24 +0200 > Petr Mladek <pmladek at suse.com> wrote: > > On Wed 2018-05-23 12:54:15, Thomas Garnier wrote: > > > When using -fPIE/PIC with function tracing, the compiler generates a > > > call through the GOT (call *__fentry__ at GOTPCREL). This
2018 May 23
0
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
When using -fPIE/PIC with function tracing, the compiler generates a call through the GOT (call *__fentry__ at GOTPCREL). This instruction takes 6 bytes instead of 5 on the usual relative call. If PIE is enabled, replace the 6th byte of the GOT call by a 1-byte nop so ftrace can handle the previous 5 bytes as before. Position Independent Executable (PIE) support will allow extending the KASLR
2018 Mar 13
0
[PATCH v2 21/27] x86/ftrace: Adapt function tracing for PIE support
When using -fPIE/PIC with function tracing, the compiler generates a call through the GOT (call *__fentry__ at GOTPCREL). This instruction takes 6 bytes instead of 5 on the usual relative call. If PIE is enabled, replace the 6th byte of the GOT call by a 1-byte nop so ftrace can handle the previous 5 bytes as before. Position Independent Executable (PIE) support will allow extending the KASLR
2018 May 24
0
[PATCH v3 21/27] x86/ftrace: Adapt function tracing for PIE support
On Thu, 24 May 2018 13:40:24 +0200 Petr Mladek <pmladek at suse.com> wrote: > On Wed 2018-05-23 12:54:15, Thomas Garnier wrote: > > When using -fPIE/PIC with function tracing, the compiler generates a > > call through the GOT (call *__fentry__ at GOTPCREL). This instruction > > takes 6 bytes instead of 5 on the usual relative call. > > > > If PIE is
2023 Mar 10
0
[PATCH v2 0/6] use canonical ftrace path whenever possible
On Wed, Feb 15, 2023 at 03:33:44PM -0700, Ross Zwisler wrote: > Changes in v2: > * Dropped patches which were pulled into maintainer trees. > * Split BPF patches out into another series targeting bpf-next. > * trace-agent now falls back to debugfs if tracefs isn't present. > * Added Acked-by from mst at redhat.com to series. > * Added a typo fixup for the virtio-trace
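
A minimal sketch of the fallback the changelog describes, assuming only the two standard mount points (this is not the actual trace-agent code):

    #include <unistd.h>

    /* Prefer the canonical tracefs mount point; fall back to the legacy
     * debugfs location when tracefs isn't present. */
    static const char *ftrace_dir(void)
    {
        if (access("/sys/kernel/tracing", F_OK) == 0)
            return "/sys/kernel/tracing";      /* canonical tracefs path */
        return "/sys/kernel/debug/tracing";    /* legacy debugfs path */
    }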
2017 Oct 05
2
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
On Thu, Oct 5, 2017 at 6:06 AM, Steven Rostedt <rostedt at goodmis.org> wrote: > On Wed, 4 Oct 2017 14:19:56 -0700 > Thomas Garnier <thgarnie at google.com> wrote: > >> When using -fPIE/PIC with function tracing, the compiler generates a >> call through the GOT (call *__fentry__ at GOTPCREL). This instruction >> takes 6 bytes instead of 5 on the usual
2017 Oct 04
1
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
When using -fPIE/PIC with function tracing, the compiler generates a call through the GOT (call *__fentry__ at GOTPCREL). This instruction takes 6 bytes instead of 5 on the usual relative call. With this change, function tracing supports 6 bytes on traceable functions and can still replace relative calls on the ftrace assembly functions. Position Independent Executable (PIE) support will allow to
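
A hypothetical sketch of what accepting both encodings at a trace site could look like; the helper names here are made up, not the RFC's actual functions:

    /* e8 <rel32> is the usual 5-byte relative call;
     * ff 15 <rel32> is call *off(%rip), the 6-byte GOT form. */
    #define REL_CALL_SIZE 5
    #define GOT_CALL_SIZE 6

    static int is_got_call(const unsigned char *ip)
    {
        return ip[0] == 0xff && ip[1] == 0x15;
    }

    static int trace_site_size(const unsigned char *ip)
    {
        return is_got_call(ip) ? GOT_CALL_SIZE : REL_CALL_SIZE;
    }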
2017 Oct 05
0
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
On Wed, 4 Oct 2017 14:19:56 -0700 Thomas Garnier <thgarnie at google.com> wrote: > When using -fPIE/PIC with function tracing, the compiler generates a > call through the GOT (call *__fentry__ at GOTPCREL). This instruction > takes 6 bytes instead of 5 on the usual relative call. > > With this change, function tracing supports 6 bytes on traceable > function and can
2017 Oct 05
0
[RFC v3 20/27] x86/ftrace: Adapt function tracing for PIE support
On Thu, 5 Oct 2017 09:01:14 -0700 Thomas Garnier <thgarnie at google.com> wrote: > On Thu, Oct 5, 2017 at 6:06 AM, Steven Rostedt <rostedt at goodmis.org> wrote: > > On Wed, 4 Oct 2017 14:19:56 -0700 > > Thomas Garnier <thgarnie at google.com> wrote: > > > >> When using -fPIE/PIC with function tracing, the compiler generates a > >> call
2008 Jul 07
10
[PATCH RFC 0/4] Paravirtual spinlocks
At the most recent Xen Summit, Thomas Friebel presented a paper ("Preventing Guests from Spinning Around", http://xen.org/files/xensummitboston08/LHP.pdf) investigating the interactions between spinlocks and virtual machines. Specifically, he looked at what happens when a lock-holding VCPU gets involuntarily preempted. The obvious first order effect is that while the VCPU is not
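
To illustrate the direction such a series takes, a sketch of a spin-then-block lock: spin briefly, then block in the hypervisor instead of burning cycles while the holder is preempted. hv_halt() and hv_kick() are hypothetical stand-ins for hypervisor hooks; the real patches work through the pv-ops layer.

    #include <stdatomic.h>

    #define SPIN_THRESHOLD 1024

    extern void hv_halt(void);  /* hypothetical: block this VCPU in the hypervisor */
    extern void hv_kick(void);  /* hypothetical: wake a blocked waiter */

    void pv_spin_lock(atomic_flag *lock)
    {
        for (;;) {
            for (int i = 0; i < SPIN_THRESHOLD; i++)
                if (!atomic_flag_test_and_set_explicit(lock, memory_order_acquire))
                    return;     /* fast path: got the lock while spinning */
            hv_halt();          /* slow path: sleep until the holder kicks us */
        }
    }

    void pv_spin_unlock(atomic_flag *lock)
    {
        atomic_flag_clear_explicit(lock, memory_order_release);
        hv_kick();              /* wake any waiter blocked in hv_halt() */
    }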
2019 Jul 08
3
[PATCH v8 00/11] x86: PIE support to extend KASLR randomization
Splitting the previous series in two. This part contains assembly code changes required for PIE but without any direct dependencies on the rest of the patchset. Changes: - patch v8 (assembly): - Fix issues in crypto changes (thanks to Eric Biggers). - Remove unnecessary jump table change. - Change author and signoff to chromium email address. - patch v7 (assembly): - Split patchset
2014 Jan 03
0
[libvirt] [RFC] Implementing ftrace support for libvirt
On Fri, Jan 3, 2014 at 6:46 AM, yuxh <yuxinghai at cn.fujitsu.com> wrote: > Hi all, > > Happy new year! > > The existing trace mechanism in libvirt is dtrace. Although dtrace > can work, it doesn't work well enough. Every time we want to get information > from a trace point we must write a systemtap script and run it > together with libvirt. > > That's
2015 Dec 27
5
[PATCH 1/2] virtio_balloon: fix race by fill and leak
During my compaction-related work, I encountered a bug with ballooning. With repeated inflate and deflate cycles, guest memory (i.e., cat /proc/meminfo | grep MemTotal) decreases and can't be recovered. The reason is that balloon_lock doesn't cover release_pages_balloon, so struct virtio_balloon fields could be overwritten by a race with fill_balloon (e.g., vb->*pfns could be critical).
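
A simplified sketch of the fix this implies (not the driver's exact code; gather_pages() is a made-up helper): keep balloon_lock held until release_pages_balloon() completes, so a concurrent fill_balloon() cannot reuse vb->pfns in the meantime.

    static unsigned int leak_balloon(struct virtio_balloon *vb, size_t num)
    {
        unsigned int freed;

        mutex_lock(&vb->balloon_lock);
        freed = gather_pages(vb, num);  /* fills vb->pfns (made-up helper) */
        tell_host(vb, vb->deflate_vq);  /* host reads vb->pfns */
        release_pages_balloon(vb);      /* must stay inside the lock */
        mutex_unlock(&vb->balloon_lock);

        return freed;
    }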