Displaying 5 results from an estimated 5 matches for "uregs_rip".
2012 Jul 26
2
[PATCH] x86-64: drop updating of UREGS_rip when converting sysenter to #GP
This was set to zero immediately before the #GP injection code, since
SYSENTER doesn't really have a return address.
Furthermore, UREGS_cs and UREGS_rip don't need to be written a second
time, as the PUSHes above already can/do take care of putting in place
the intended values.
Reported-by: Ian Campbell <Ian.Campbell@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
--- a/xen/arch/x86/x86_64/entry.S
+++ b/xen/arch/x86/x86_64/entry.S
@@ -275,15 +275,13 @@ ENTRY(sysenter_entr...
2007 Aug 08
2
[PATCH] x86-64: syscall/sysenter support for 32-bit apps
...# Bit 9 (EFLAGS.IF)
+ addb %ch,%ch # Bit 9 (EFLAGS.IF)
+ orl VCPU_eflags_mask(%rbx),%eax
+ movl $0,VCPU_eflags_mask(%rbx)
orb %ch,%ah # Fold EFLAGS.IF into %eax
.Lft6: movl %eax,%fs:2*4(%rsi) # EFLAGS
movl UREGS_rip+8(%rsp),%eax
Index: 2007-08-08/xen/arch/x86/x86_64/compat/traps.c
===================================================================
--- 2007-08-08.orig/xen/arch/x86/x86_64/compat/traps.c 2007-07-04 12:13:16.000000000 +0200
+++ 2007-08-08/xen/arch/x86/x86_64/compat/traps.c 2007-08-08 11:37:08.0000...
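The excerpt above folds the guest's interrupt-enable state into bit 9 (EFLAGS.IF) of the EFLAGS image written to the guest frame. A minimal C sketch of that folding step, with illustrative names (`fold_vif`, `virq_enabled`) that are assumptions rather than Xen's actual identifiers:

```c
#include <assert.h>
#include <stdint.h>

#define X86_EFLAGS_IF 0x200u  /* bit 9 of EFLAGS */

/* Replace whatever IF state the hardware saved with the guest's
 * virtual interrupt-enable flag, as the assembly's shift-and-OR
 * sequence does. Names here are illustrative, not Xen's. */
static uint32_t fold_vif(uint32_t eflags, int virq_enabled)
{
    eflags &= ~X86_EFLAGS_IF;
    if (virq_enabled)
        eflags |= X86_EFLAGS_IF;
    return eflags;
}
```

In the assembly this is done without a branch: the flag bit is shifted into position (the `addb %ch,%ch` doubling) and ORed into the high byte of %eax.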
2006 Aug 23
0
[PATCH 2/7] x86_64: syscall argument clobber check
(noticed during preparation of initial PAE-guest-on-64bit patches)
The offset used to compare the return IP was apparently wrong; I
didn't actually test that things didn't work as intended (or that they
do now with this change); it just caught my eye that the UREGS_rip
access a little further up in the same function was done with
adjustment for the extra item on the stack, but this one wasn't.
Signed-off-by: Jan Beulich <jbeulich@novell.com>
_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xensource.com
http...
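The bug described above is a classic stack-offset mismatch: once an extra item has been pushed, every UREGS_* offset relative to the stack pointer shifts by the item's size. A small C model of the adjustment, assuming a hypothetical frame layout (the real `cpu_user_regs` has many more fields):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical, heavily trimmed stand-in for the saved-register
 * frame; only the relative-offset idea matters here. */
struct cpu_user_regs { uint64_t rax, rip, rflags; };

/* Read the saved RIP from a raw stack pointer. 'extra' is the size
 * of anything pushed after the frame was built -- forgetting to add
 * it reads the wrong slot, which is exactly the bug being fixed. */
static uint64_t saved_rip(const uint8_t *rsp, size_t extra)
{
    uint64_t v;
    memcpy(&v, rsp + extra + offsetof(struct cpu_user_regs, rip),
           sizeof(v));
    return v;
}
```

With `extra == 8` this mirrors the `UREGS_rip+8(%rsp)` addressing seen in the surrounding assembly.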
2012 Oct 02
18
[PATCH 0/3] x86: adjust entry frame generation
This set of patches converts the way frames get created from
using PUSHes/POPs to using MOVes, thus allowing (in certain
cases) part of the register set to go unsaved/unrestored.
While the place the (small) win comes from varies between
CPUs, the net effect is a 1 to 2% reduction in combined
interrupt entry and exit cost when the full state save can
be avoided.
1: use MOV
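The PUSH-versus-MOV distinction can be sketched in C: sequential PUSHes must fill every slot of the frame in order, while MOV-style stores at fixed offsets can skip slots the exit path will never restore. The frame layout and function names below are illustrative assumptions, not the actual Xen entry code:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical register frame; field names are illustrative. */
struct frame { uint64_t r15, r14, rbx, rbp; };

/* PUSH-style saving: every slot gets written unconditionally. */
static void save_full(struct frame *f, const uint64_t regs[4])
{
    f->r15 = regs[0]; f->r14 = regs[1];
    f->rbx = regs[2]; f->rbp = regs[3];
}

/* MOV-style saving: only the slots the exit path will actually
 * restore are written; the rest of the frame stays untouched,
 * which is where the entry/exit savings come from. */
static void save_partial(struct frame *f, const uint64_t regs[4])
{
    f->rbx = regs[2]; f->rbp = regs[3];
}
```

In assembly the saving is larger than this sketch suggests, since skipping a PUSH also skips the matching POP on the exit path.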
2007 Mar 27
0
[PATCH] make all performance counters per-cpu
...6_64/entry.S
===================================================================
--- 2007-03-19.orig/xen/arch/x86/x86_64/entry.S 2007-02-28 12:10:32.000000000 +0100
+++ 2007-03-19/xen/arch/x86/x86_64/entry.S 2007-03-27 12:11:33.000000000 +0200
@@ -147,7 +147,7 @@ ENTRY(syscall_enter)
pushq UREGS_rip+8(%rsp)
#endif
leaq hypercall_table(%rip),%r10
- PERFC_INCR(PERFC_hypercalls, %rax)
+ PERFC_INCR(PERFC_hypercalls, %rax, %rbx)
callq *(%r10,%rax,8)
#ifndef NDEBUG
/* Deliberately corrupt parameter regs used by this hypercall. */
@@ -396,7 +396,7 @@ ENTRY(...
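The PERFC_INCR change in the hunk above gains a second scratch register (%rbx) because selecting a per-CPU counter row needs an extra address computation that a single global array did not. A C model of the difference, with all names (`NR_CPUS`, `perfc_incr`, the event index) being assumptions for illustration:

```c
#include <assert.h>

#define NR_CPUS          4
#define NR_PERFC         8
#define PERFC_hypercalls 0

/* Per-CPU counter array: one row per CPU. With a single global
 * array the increment is one memory operand; per-CPU, the CPU
 * index must first select a row, which in the assembly macro
 * costs an additional scratch register. Illustrative only. */
static unsigned long perfc[NR_CPUS][NR_PERFC];

static void perfc_incr(unsigned int cpu, unsigned int event)
{
    perfc[cpu][event]++;
}
```

Making the counters per-CPU avoids cross-CPU cache-line contention on hot paths such as the hypercall entry shown above.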