Xu Zhang
2013-Apr-11 04:46 UTC
[PATCH V2 0/6] mini-os: check and fix up against nested events in x86-64 kernel entry
mini-os almost always uses a direct iret to return from interrupts.
But this operation is not atomic, because Xen uses an event mask to
enable/disable event delivery. So there is a window in which nested
events can happen, after event delivery is re-enabled and before the
direct iret.

The issues that come with such non-atomicity have been discussed in:
http://lists.xen.org/archives/html/xen-devel/2007-06/msg00142.html

And also on xen-devel:
http://markmail.org/message/jkzhzy6fyes6igcf

This series checks and fixes up against nested events in a similar
fashion to Linux's 32-bit pvops. It checks for re-entry of the critical
section in the event handling callback, and tries to fix up by
coalescing the two stack frames into one when a nested event arrives.
It then resumes execution as if the second event never happened.

It also refactors mini-os's x86-64 kernel entry assembly code.

Xu Zhang (6):
  mini-os/x86-64 entry: code clean-ups; no functional changes
  mini-os/x86-64 entry: define macros for registers partial save and
    restore; no functional changes
  mini-os/x86-64 entry: code refactoring; no functional changes
  mini-os/x86-64 entry: remove unnecessary event blocking
  mini-os/x86-64 entry: defer RESTORE_REST until return
  mini-os/x86-64 entry: check against nested events and try to fix up

 extras/mini-os/arch/x86/x86_64.S | 245 ++++++++++++++++++++++++--------------
 1 files changed, 156 insertions(+), 89 deletions(-)

---
Changed since v1:
 * Drop the chunky lookup table; use Linux x86-32's fixup strategy instead,
   as suggested by Jeremy Fitzhardinge;
 * Reflect Samuel Thibault's comments.

-- 
1.7.7.6
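The race the series closes can be sketched in a few lines of Python. This is an illustrative model only, not mini-os code: the `VCPU` class and its fields merely mimic `evtchn_upcall_mask`/`evtchn_upcall_pending` from the Xen shared info page, and `return_from_handler` stands in for the "unmask, then direct iret" exit path.

```python
# Toy model of the "unmask events, then direct iret" window
# (illustrative only; not mini-os code).

class VCPU:
    def __init__(self):
        self.mask = True         # events blocked while in the handler
        self.pending = False
        self.nested_upcalls = 0

    def event_arrives(self):
        self.pending = True
        if not self.mask:        # Xen delivers an upcall only when unmasked
            self.mask = True     # delivery masks further events
            self.pending = False
            self.nested_upcalls += 1

def return_from_handler(vcpu, event_in_window):
    vcpu.mask = False            # XEN_UNBLOCK_EVENTS
    if event_in_window:          # the window between unmask and iret
        vcpu.event_arrives()     # nested upcall lands on top of our frame
    # the direct iret would happen here

v = VCPU()
return_from_handler(v, event_in_window=True)
print(v.nested_upcalls)          # 1: the re-entry the fixup must coalesce
```

The nested upcall re-enters `hypervisor_callback` with the first handler's unfinished frame still on the stack, which is exactly the situation patch 6 detects and repairs.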
Xu Zhang
2013-Apr-11 04:46 UTC
[PATCH V2 1/6] mini-os/x86-64 entry: code clean-ups; no functional changes
Make the use of tabs and spaces consistent in arch/x86/x86_64.S.

Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>
---
 extras/mini-os/arch/x86/x86_64.S | 61 +++++++++++++++++++------------------
 1 files changed, 31 insertions(+), 30 deletions(-)

diff --git a/extras/mini-os/arch/x86/x86_64.S b/extras/mini-os/arch/x86/x86_64.S
index a65e5d5..79099f1 100644
--- a/extras/mini-os/arch/x86/x86_64.S
+++ b/extras/mini-os/arch/x86/x86_64.S
@@ -78,6 +78,7 @@ NMI_MASK = 0x80000000
         jmp hypercall_page + (__HYPERVISOR_iret * 32)
 .endm
 
+
 /*
  * Exception entry point. This expects an error code/orig_rax on the stack
  * and the exception handler in %rax.
@@ -103,7 +104,7 @@ ENTRY(error_entry)
         movq %r15,(%rsp)
 
 error_call_handler:
-	movq %rdi, RDI(%rsp)
+        movq %rdi, RDI(%rsp)
         movq %rsp,%rdi
         movq ORIG_RAX(%rsp),%rsi        # get error code
         movq $-1,ORIG_RAX(%rsp)
@@ -111,9 +112,9 @@ error_call_handler:
         jmp error_exit
 
 .macro zeroentry sym
-	movq (%rsp),%rcx
-	movq 8(%rsp),%r11
-	addq $0x10,%rsp         /* skip rcx and r11 */
+        movq (%rsp),%rcx
+        movq 8(%rsp),%r11
+        addq $0x10,%rsp         /* skip rcx and r11 */
         pushq $0                /* push error code/oldrax */
         pushq %rax              /* push real oldrax to the rdi slot */
         leaq \sym(%rip),%rax
@@ -121,9 +122,9 @@ error_call_handler:
 .endm
 
 .macro errorentry sym
-	movq (%rsp),%rcx
-	movq 8(%rsp),%r11
-	addq $0x10,%rsp         /* rsp points to the error code */
+        movq (%rsp),%rcx
+        movq 8(%rsp),%r11
+        addq $0x10,%rsp         /* rsp points to the error code */
         pushq %rax
         leaq \sym(%rip),%rax
         jmp error_entry
@@ -147,39 +148,39 @@ error_call_handler:
 
 
 ENTRY(hypervisor_callback)
-	zeroentry hypervisor_callback2
+        zeroentry hypervisor_callback2
 
 ENTRY(hypervisor_callback2)
-	movq %rdi, %rsp
-11:	movq %gs:8,%rax
-	incl %gs:0
-	cmovzq %rax,%rsp
-	pushq %rdi
-	call do_hypervisor_callback
-	popq %rsp
-	decl %gs:0
-	jmp error_exit
-
-restore_all_enable_events:
+        movq %rdi, %rsp
+11:     movq %gs:8,%rax
+        incl %gs:0
+        cmovzq %rax,%rsp
+        pushq %rdi
+        call do_hypervisor_callback
+        popq %rsp
+        decl %gs:0
+        jmp error_exit
+
+restore_all_enable_events:
         XEN_UNBLOCK_EVENTS(%rsi)        # %rsi is already set up...
 
 scrit:  /**** START OF CRITICAL REGION ****/
         XEN_TEST_PENDING(%rsi)
         jnz  14f                        # process more events if necessary...
         XEN_PUT_VCPU_INFO(%rsi)
-	RESTORE_ALL
-	HYPERVISOR_IRET 0
-
+        RESTORE_ALL
+        HYPERVISOR_IRET 0
+
 14:     XEN_LOCKED_BLOCK_EVENTS(%rsi)
         XEN_PUT_VCPU_INFO(%rsi)
         subq $6*8,%rsp
-	movq %rbx,5*8(%rsp)
-	movq %rbp,4*8(%rsp)
-	movq %r12,3*8(%rsp)
-	movq %r13,2*8(%rsp)
-	movq %r14,1*8(%rsp)
-	movq %r15,(%rsp)
-	movq %rsp,%rdi                  # set the argument again
+        movq %rbx,5*8(%rsp)
+        movq %rbp,4*8(%rsp)
+        movq %r12,3*8(%rsp)
+        movq %r13,2*8(%rsp)
+        movq %r14,1*8(%rsp)
+        movq %r15,(%rsp)
+        movq %rsp,%rdi                  # set the argument again
         jmp  11b
 ecrit:  /**** END OF CRITICAL REGION ****/
@@ -193,7 +194,7 @@ retint_restore_args:
         andb $1,%al                     # EAX[0] == IRET_EFLAGS.IF & event_mask
         jnz restore_all_enable_events   # != 0 => enable event delivery
         XEN_PUT_VCPU_INFO(%rsi)
-	
+
         RESTORE_ALL
         HYPERVISOR_IRET 0

-- 
1.7.7.6
Xu Zhang
2013-Apr-11 04:46 UTC
[PATCH V2 2/6] mini-os/x86-64 entry: define macros for registers partial save and restore; no functional changes
For saving and restoring registers rbx, rbp, and r12-r15, define and
use macros SAVE_REST and RESTORE_REST respectively.

Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>
---
 extras/mini-os/arch/x86/x86_64.S | 37 ++++++++++++++++++++++---------------
 1 files changed, 22 insertions(+), 15 deletions(-)

diff --git a/extras/mini-os/arch/x86/x86_64.S b/extras/mini-os/arch/x86/x86_64.S
index 79099f1..24f35cd 100644
--- a/extras/mini-os/arch/x86/x86_64.S
+++ b/extras/mini-os/arch/x86/x86_64.S
@@ -59,6 +59,25 @@ NMI_MASK = 0x80000000
         addq $9*8+8,%rsp
 .endm
 
+.macro RESTORE_REST
+        movq (%rsp),%r15
+        movq 1*8(%rsp),%r14
+        movq 2*8(%rsp),%r13
+        movq 3*8(%rsp),%r12
+        movq 4*8(%rsp),%rbp
+        movq 5*8(%rsp),%rbx
+        addq $6*8,%rsp
+.endm
+
+.macro SAVE_REST
+        subq $6*8,%rsp
+        movq %rbx,5*8(%rsp)
+        movq %rbp,4*8(%rsp)
+        movq %r12,3*8(%rsp)
+        movq %r13,2*8(%rsp)
+        movq %r14,1*8(%rsp)
+        movq %r15,(%rsp)
+.endm
 
 .macro HYPERVISOR_IRET flag
         testl $NMI_MASK,2*8(%rsp)
@@ -173,13 +192,7 @@ scrit:  /**** START OF CRITICAL REGION ****/
 
 14:     XEN_LOCKED_BLOCK_EVENTS(%rsi)
         XEN_PUT_VCPU_INFO(%rsi)
-        subq $6*8,%rsp
-        movq %rbx,5*8(%rsp)
-        movq %rbp,4*8(%rsp)
-        movq %r12,3*8(%rsp)
-        movq %r13,2*8(%rsp)
-        movq %r14,1*8(%rsp)
-        movq %r15,(%rsp)
+        SAVE_REST
         movq %rsp,%rdi                  # set the argument again
         jmp  11b
 ecrit:  /**** END OF CRITICAL REGION ****/
@@ -199,14 +212,8 @@ retint_restore_args:
         HYPERVISOR_IRET 0
 
 
-error_exit:
-        movq (%rsp),%r15
-        movq 1*8(%rsp),%r14
-        movq 2*8(%rsp),%r13
-        movq 3*8(%rsp),%r12
-        movq 4*8(%rsp),%rbp
-        movq 5*8(%rsp),%rbx
-        addq $6*8,%rsp
+error_exit:
+        RESTORE_REST
         XEN_BLOCK_EVENTS(%rsi)
         jmp retint_kernel

-- 
1.7.7.6
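The stack layout these macros maintain is easy to model. A sketch, not mini-os code: a Python list stands in for the stack (appending corresponds to storing toward `(%rsp)`, so the last item appended is the one at the stack top).

```python
# Illustrative model of SAVE_REST/RESTORE_REST (not mini-os code).
# SAVE_REST stores rbx deepest (at 5*8(%rsp)) and r15 at (%rsp);
# RESTORE_REST reads them back in the opposite order.

CALLEE_SAVED = ["rbx", "rbp", "r12", "r13", "r14", "r15"]

def save_rest(stack, regs):
    # subq $6*8,%rsp then six movq stores; r15 ends up on top
    for name in CALLEE_SAVED:
        stack.append(regs[name])

def restore_rest(stack, regs):
    # six movq loads, r15 first and rbx last, then addq $6*8,%rsp
    for name in reversed(CALLEE_SAVED):
        regs[name] = stack.pop()

regs = {name: value for value, name in enumerate(CALLEE_SAVED)}
stack = []
save_rest(stack, regs)
restored = {name: None for name in CALLEE_SAVED}
restore_rest(stack, restored)
assert restored == regs and stack == []   # round-trip preserves all six
```

The round-trip property is what lets later patches move `RESTORE_REST` around freely: as long as every `SAVE_REST` is eventually paired with one `RESTORE_REST`, the callee-saved registers survive.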
Xu Zhang
2013-Apr-11 04:46 UTC
[PATCH V2 3/6] mini-os/x86-64 entry: code refactoring; no functional changes
Re-arrange assembly code blocks so that they are in called
order instead of jumping around, enhancing readability.
Macros are grouped together as well.

Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>
---
 extras/mini-os/arch/x86/x86_64.S | 113 +++++++++++++++++++-------------------
 1 files changed, 57 insertions(+), 56 deletions(-)

diff --git a/extras/mini-os/arch/x86/x86_64.S b/extras/mini-os/arch/x86/x86_64.S
index 24f35cd..d9b34a7 100644
--- a/extras/mini-os/arch/x86/x86_64.S
+++ b/extras/mini-os/arch/x86/x86_64.S
@@ -36,6 +36,22 @@ hypercall_page:
         .org 0x3000
 
+#define XEN_GET_VCPU_INFO(reg)          movq HYPERVISOR_shared_info,reg
+#define XEN_PUT_VCPU_INFO(reg)
+#define XEN_PUT_VCPU_INFO_fixup
+#define XEN_LOCKED_BLOCK_EVENTS(reg)    movb $1,evtchn_upcall_mask(reg)
+#define XEN_LOCKED_UNBLOCK_EVENTS(reg)  movb $0,evtchn_upcall_mask(reg)
+#define XEN_TEST_PENDING(reg)           testb $0xFF,evtchn_upcall_pending(reg)
+
+#define XEN_BLOCK_EVENTS(reg)           XEN_GET_VCPU_INFO(reg) ; \
+                                        XEN_LOCKED_BLOCK_EVENTS(reg) ; \
+                                        XEN_PUT_VCPU_INFO(reg)
+
+#define XEN_UNBLOCK_EVENTS(reg)         XEN_GET_VCPU_INFO(reg) ; \
+                                        XEN_LOCKED_UNBLOCK_EVENTS(reg) ; \
+                                        XEN_PUT_VCPU_INFO(reg)
+
+
 /* Offsets into shared_info_t.
  */
 #define evtchn_upcall_pending           /* 0 */
 #define evtchn_upcall_mask              1
@@ -46,6 +62,27 @@ NMI_MASK = 0x80000000
 #define ORIG_RAX 120       /* + error_code */
 #define EFLAGS 144
 
+
+/* Macros */
+.macro zeroentry sym
+        movq (%rsp),%rcx
+        movq 8(%rsp),%r11
+        addq $0x10,%rsp         /* skip rcx and r11 */
+        pushq $0                /* push error code/oldrax */
+        pushq %rax              /* push real oldrax to the rdi slot */
+        leaq \sym(%rip),%rax
+        jmp error_entry
+.endm
+
+.macro errorentry sym
+        movq (%rsp),%rcx
+        movq 8(%rsp),%r11
+        addq $0x10,%rsp         /* rsp points to the error code */
+        pushq %rax
+        leaq \sym(%rip),%rax
+        jmp error_entry
+.endm
+
 .macro RESTORE_ALL
         movq (%rsp),%r11
         movq 1*8(%rsp),%r10
@@ -130,42 +167,10 @@ error_call_handler:
         call *%rax
         jmp error_exit
 
-.macro zeroentry sym
-        movq (%rsp),%rcx
-        movq 8(%rsp),%r11
-        addq $0x10,%rsp         /* skip rcx and r11 */
-        pushq $0                /* push error code/oldrax */
-        pushq %rax              /* push real oldrax to the rdi slot */
-        leaq \sym(%rip),%rax
-        jmp error_entry
-.endm
-
-.macro errorentry sym
-        movq (%rsp),%rcx
-        movq 8(%rsp),%r11
-        addq $0x10,%rsp         /* rsp points to the error code */
-        pushq %rax
-        leaq \sym(%rip),%rax
-        jmp error_entry
-.endm
-
-#define XEN_GET_VCPU_INFO(reg)          movq HYPERVISOR_shared_info,reg
-#define XEN_PUT_VCPU_INFO(reg)
-#define XEN_PUT_VCPU_INFO_fixup
-#define XEN_LOCKED_BLOCK_EVENTS(reg)    movb $1,evtchn_upcall_mask(reg)
-#define XEN_LOCKED_UNBLOCK_EVENTS(reg)  movb $0,evtchn_upcall_mask(reg)
-#define XEN_TEST_PENDING(reg)           testb $0xFF,evtchn_upcall_pending(reg)
-
-#define XEN_BLOCK_EVENTS(reg)           XEN_GET_VCPU_INFO(reg) ; \
-                                        XEN_LOCKED_BLOCK_EVENTS(reg) ; \
-                                        XEN_PUT_VCPU_INFO(reg)
-
-#define XEN_UNBLOCK_EVENTS(reg)         XEN_GET_VCPU_INFO(reg) ; \
-                                        XEN_LOCKED_UNBLOCK_EVENTS(reg) ; \
-                                        XEN_PUT_VCPU_INFO(reg)
-
-
+/*
+ * Xen event (virtual interrupt) entry point.
+ */
 ENTRY(hypervisor_callback)
         zeroentry hypervisor_callback2
 
@@ -178,7 +183,23 @@ ENTRY(hypervisor_callback2)
         call do_hypervisor_callback
         popq %rsp
         decl %gs:0
-        jmp error_exit
+
+error_exit:
+        RESTORE_REST
+        XEN_BLOCK_EVENTS(%rsi)
+
+retint_kernel:
+retint_restore_args:
+        movl EFLAGS-6*8(%rsp), %eax
+        shr $9, %eax                    # EAX[0] == IRET_EFLAGS.IF
+        XEN_GET_VCPU_INFO(%rsi)
+        andb evtchn_upcall_mask(%rsi),%al
+        andb $1,%al                     # EAX[0] == IRET_EFLAGS.IF & event_mask
+        jnz restore_all_enable_events   # != 0 => enable event delivery
+        XEN_PUT_VCPU_INFO(%rsi)
+
+        RESTORE_ALL
+        HYPERVISOR_IRET 0
 
 restore_all_enable_events:
         XEN_UNBLOCK_EVENTS(%rsi)        # %rsi is already set up...
@@ -198,26 +219,6 @@ scrit:  /**** START OF CRITICAL REGION ****/
 ecrit:  /**** END OF CRITICAL REGION ****/
 
 
-retint_kernel:
-retint_restore_args:
-        movl EFLAGS-6*8(%rsp), %eax
-        shr $9, %eax                    # EAX[0] == IRET_EFLAGS.IF
-        XEN_GET_VCPU_INFO(%rsi)
-        andb evtchn_upcall_mask(%rsi),%al
-        andb $1,%al                     # EAX[0] == IRET_EFLAGS.IF & event_mask
-        jnz restore_all_enable_events   # != 0 => enable event delivery
-        XEN_PUT_VCPU_INFO(%rsi)
-
-        RESTORE_ALL
-        HYPERVISOR_IRET 0
-
-
-error_exit:
-        RESTORE_REST
-        XEN_BLOCK_EVENTS(%rsi)
-        jmp retint_kernel
-
-
 ENTRY(failsafe_callback)
         popq %rcx

-- 
1.7.7.6
Xu Zhang
2013-Apr-11 04:46 UTC
[PATCH V2 4/6] mini-os/x86-64 entry: remove unnecessary event blocking
We don't need to block events here, because:
 - if we came from "hypervisor_callback", events are disabled at this
   point, so there is no need to block again;
 - if we came from "error_entry", we shouldn't touch the event mask,
   since exception handling is meant to be interruptible by Xen events
   (virtual irqs).

Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 extras/mini-os/arch/x86/x86_64.S | 1 -
 1 files changed, 0 insertions(+), 1 deletions(-)

diff --git a/extras/mini-os/arch/x86/x86_64.S b/extras/mini-os/arch/x86/x86_64.S
index d9b34a7..909024d 100644
--- a/extras/mini-os/arch/x86/x86_64.S
+++ b/extras/mini-os/arch/x86/x86_64.S
@@ -186,7 +186,6 @@ ENTRY(hypervisor_callback2)
 
 error_exit:
         RESTORE_REST
-        XEN_BLOCK_EVENTS(%rsi)
 
 retint_kernel:
 retint_restore_args:

-- 
1.7.7.6
Xu Zhang
2013-Apr-11 04:46 UTC
[PATCH V2 5/6] mini-os/x86-64 entry: defer RESTORE_REST until return
No need to do a RESTORE_REST at this point, because if we see pending
events after enabling event delivery, we would have to do a SAVE_REST
again. Instead, we do a "lazy" RESTORE_REST, deferring it until the
actual return.

The offset of the saved-on-stack rflags register is changed as well.

Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>
Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 extras/mini-os/arch/x86/x86_64.S | 16 ++++++++--------
 1 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/extras/mini-os/arch/x86/x86_64.S b/extras/mini-os/arch/x86/x86_64.S
index 909024d..20ab477 100644
--- a/extras/mini-os/arch/x86/x86_64.S
+++ b/extras/mini-os/arch/x86/x86_64.S
@@ -60,7 +60,7 @@ NMI_MASK = 0x80000000
 
 #define RDI 112
 #define ORIG_RAX 120       /* + error_code */
-#define EFLAGS 144
+#define RFLAGS 144
 
 
 /* Macros */
@@ -185,18 +185,17 @@ ENTRY(hypervisor_callback2)
         decl %gs:0
 
 error_exit:
-        RESTORE_REST
-
 retint_kernel:
-retint_restore_args:
-        movl EFLAGS-6*8(%rsp), %eax
-        shr $9, %eax                    # EAX[0] == IRET_EFLAGS.IF
+        movl RFLAGS(%rsp), %eax
+        shr $9, %eax                    # EAX[0] == IRET_RFLAGS.IF
         XEN_GET_VCPU_INFO(%rsi)
         andb evtchn_upcall_mask(%rsi),%al
-        andb $1,%al                     # EAX[0] == IRET_EFLAGS.IF & event_mask
+        andb $1,%al                     # EAX[0] == IRET_RFLAGS.IF & event_mask
         jnz restore_all_enable_events   # != 0 => enable event delivery
         XEN_PUT_VCPU_INFO(%rsi)
 
+retint_restore_args:
+        RESTORE_REST
         RESTORE_ALL
         HYPERVISOR_IRET 0
 
@@ -207,12 +206,13 @@ scrit:  /**** START OF CRITICAL REGION ****/
         XEN_TEST_PENDING(%rsi)
         jnz  14f                        # process more events if necessary...
         XEN_PUT_VCPU_INFO(%rsi)
+
+        RESTORE_REST
         RESTORE_ALL
         HYPERVISOR_IRET 0
 
 14:     XEN_LOCKED_BLOCK_EVENTS(%rsi)
         XEN_PUT_VCPU_INFO(%rsi)
-        SAVE_REST
         movq %rsp,%rdi                  # set the argument again
         jmp  11b
 ecrit:  /**** END OF CRITICAL REGION ****/

-- 
1.7.7.6
Xu Zhang
2013-Apr-11 04:47 UTC
[PATCH V2 6/6] mini-os/x86-64 entry: check against nested events and try to fix up
In hypervisor_callback, check for event re-entrance. If we came from
the critical region in interrupt context, try to fix up by coalescing
the two stack frames. Execution is resumed as if the second event
never happened.

Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>
---
 extras/mini-os/arch/x86/x86_64.S | 87 +++++++++++++++++++++++++++++++------
 1 files changed, 73 insertions(+), 14 deletions(-)

diff --git a/extras/mini-os/arch/x86/x86_64.S b/extras/mini-os/arch/x86/x86_64.S
index 20ab477..f022eb3 100644
--- a/extras/mini-os/arch/x86/x86_64.S
+++ b/extras/mini-os/arch/x86/x86_64.S
@@ -57,10 +57,15 @@ hypercall_page:
 #define evtchn_upcall_mask 1
 
 NMI_MASK = 0x80000000
+KERNEL_CS_MASK = 0xfc
 
-#define RDI 112
-#define ORIG_RAX 120       /* + error_code */
-#define RFLAGS 144
+#define RAX 80
+#define RDI 112
+#define ORIG_RAX 120       /* + error_code */
+#define RIP 128
+#define CS 136
+#define RFLAGS 144
+#define RSP 152
 
 
 /* Macros */
@@ -176,6 +181,14 @@ ENTRY(hypervisor_callback)
 
 ENTRY(hypervisor_callback2)
         movq %rdi, %rsp
+
+        /* check against event re-entrant */
+        movq RIP(%rsp),%rax
+        cmpq $scrit,%rax
+        jb 11f
+        cmpq $ecrit,%rax
+        jb  critical_region_fixup
+
 11:     movq %gs:8,%rax
         incl %gs:0
         cmovzq %rax,%rsp
@@ -200,22 +213,68 @@ retint_restore_args:
         HYPERVISOR_IRET 0
 
 restore_all_enable_events:
-        XEN_UNBLOCK_EVENTS(%rsi)        # %rsi is already set up...
-
-scrit:  /**** START OF CRITICAL REGION ****/
-        XEN_TEST_PENDING(%rsi)
-        jnz  14f                        # process more events if necessary...
-        XEN_PUT_VCPU_INFO(%rsi)
-
         RESTORE_REST
         RESTORE_ALL
+        pushq %rax                      # save rax for it will be clobbered later
+        RSP_OFFSET=8                    # record the stack frame layout changes
+        XEN_GET_VCPU_INFO(%rax)         # safe to use rax since it is saved
+        XEN_UNBLOCK_EVENTS(%rax)
+
+scrit:  /**** START OF CRITICAL REGION ****/
+        XEN_TEST_PENDING(%rax)
+        jz 12f
+        XEN_LOCKED_BLOCK_EVENTS(%rax)   # if pending, mask events and handle
+                                        # by jumping to hypervisor_prologue
+12:     popq %rax                       # all registers restored from this point
+
+restore_end:
+        jnz hypervisor_prologue         # safe to jump out of critical region
+                                        # because events are masked if ZF = 0
         HYPERVISOR_IRET 0
+ecrit:  /**** END OF CRITICAL REGION ****/
 
-14:     XEN_LOCKED_BLOCK_EVENTS(%rsi)
-        XEN_PUT_VCPU_INFO(%rsi)
-        movq %rsp,%rdi                  # set the argument again
+# Set up the stack as Xen does before calling event callback
+hypervisor_prologue:
+        pushq %r11
+        pushq %rcx
+        jmp hypervisor_callback
+
+# [How we do the fixup]. We want to merge the current stack frame with the
+# just-interrupted frame. How we do this depends on where in the critical
+# region the interrupted handler was executing, and so if rax has been
+# restored. We determine by comparing interrupted rip with "restore_end".
+# We always copy all registers below RIP from the current stack frame
+# to the end of the previous activation frame so that we can continue
+# as if we've never even reached 11 running in the old activation frame.
+
+critical_region_fixup:
+        # Set up source and destination region pointers
+        leaq RIP(%rsp),%rsi             # esi points at end of src region
+        # Acquire interrupted rsp which was saved-on-stack. This points to
+        # the end of dst region. Note that it is not necessarily current rsp
+        # plus 0xb0, because the second interrupt might align the stack frame.
+        movq RSP(%rsp),%rdi             # edi points at end of dst region
+
+        cmpq $restore_end,%rax
+        jae  13f
+
+        # If interrupted rip is before restore_end
+        # then rax hasn't been restored yet
+        movq (%rdi),%rax
+        movq %rax, RAX(%rsp)            # save rax
+        addq $RSP_OFFSET,%rdi
+
+        # Set up the copy
+13:     movq $RIP,%rcx
+        shr  $3,%rcx                    # convert bytes into count of 64-bit entities
+15:     subq $8,%rsi                    # pre-decrementing copy loop
+        subq $8,%rdi
+        movq (%rsi),%rax
+        movq %rax,(%rdi)
+        loop 15b
+16:     movq %rdi,%rsp                  # final rdi is top of merged stack
+        andb $KERNEL_CS_MASK,CS(%rsp)   # CS might have changed
         jmp  11b
-ecrit:  /**** END OF CRITICAL REGION ****/

-- 
1.7.7.6
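The frame-coalescing copy at the heart of this patch can be mocked up in Python to see what the pre-decrementing loop does. This is a sketch only, not mini-os code: a dict stands in for memory, the offsets follow the patch's defines (RIP at 128, RSP at 152, so $RIP/8 = 16 quadwords are moved), and the extra rax step for frames interrupted before `restore_end` is omitted.

```python
# Toy model of critical_region_fixup's copy loop (illustrative only).
# mem maps byte addresses to saved quadwords. The nested event's frame
# holds 16 register slots below its RIP slot, plus the interrupted rsp
# at offset RSP_OFF; the loop moves those 16 slots to the end of the
# interrupted frame.

RIP_OFF, RSP_OFF = 128, 152

def coalesce(mem, cur_rsp):
    src = cur_rsp + RIP_OFF        # leaq RIP(%rsp),%rsi: end of src region
    dst = mem[cur_rsp + RSP_OFF]   # movq RSP(%rsp),%rdi: interrupted rsp
    for _ in range(RIP_OFF // 8):  # $RIP / 8 = 16 quadwords to copy
        src -= 8                   # pre-decrementing copy, like the asm loop
        dst -= 8
        mem[dst] = mem[src]
    return dst                     # movq %rdi,%rsp: top of the merged stack

# A nested frame at address 0 whose interrupted rsp was 1000:
mem = {i * 8: 100 + i for i in range(16)}   # register slots below RIP
mem[RSP_OFF] = 1000                         # saved interrupted rsp
new_rsp = coalesce(mem, 0)
assert new_rsp == 1000 - RIP_OFF
assert [mem[new_rsp + i * 8] for i in range(16)] == [100 + i for i in range(16)]
```

After the copy, resuming at label 11 with the merged stack makes the first handler process the second event's work in the old activation frame, which is why the nested event "never happened".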
Ian Campbell
2013-Apr-17 15:14 UTC
Re: [PATCH V2 0/6] mini-os: check and fix up against nested events in x86-64 kernel entry
On Thu, 2013-04-11 at 05:46 +0100, Xu Zhang wrote:
> mini-os almost always use direct iret to return from interrupt.
> But this operation is not atomic because Xen uses event mask to
> enable/disable event delivery. So there is a window for nested
> events to happen after re-enabling event delivery and before
> a direct iret.
> 
> The issues come with such non-atomicity have been discussed in:
> http://lists.xen.org/archives/html/xen-devel/2007-06/msg00142.html
> 
> And also on Xen-devel:
> http://markmail.org/message/jkzhzy6fyes6igcf
> 
> This patch checks and fixes up against nested events in a similar
> fashion of Linux 32bit pvops.
> It checks against re-entrant of critical section in event handling
> callback. Try to fix up by coalescing the two stack frames into
> one when a nested event came.
> It then resumes execution as if the second event never happened.
> 
> It also refactors mini-os's x86-64 kernel entry assembly code.

Samuel, are you now happy with this? The un-acked changes look to be
mostly cleanups and refactoring rather than substantive changes. The
very last patch has meat in it and is unacked.

WRT making a case for a freeze exception, it seems like this is fixing a
pretty obvious bug in x86_64 mini-os event handling?

Ian.

> 
> Xu Zhang (6):
>   mini-os/x86-64 entry: code clean-ups; no functional changes
>   mini-os/x86-64 entry: define macros for registers partial save and
>     restore; no functional changes
>   mini-os/x86-64 entry: code refactoring; no functional changes
>   mini-os/x86-64 entry: remove unnecessary event blocking
>   mini-os/x86-64 entry: defer RESTORE_REST until return
>   mini-os/x86-64 entry: check against nested events and try to fix up
> 
>  extras/mini-os/arch/x86/x86_64.S | 245 ++++++++++++++++++++++++--------------
>  1 files changed, 156 insertions(+), 89 deletions(-)
> 
> ---
> Changed since v1:
>  * Drop the chunky lookup table; use Linux x86-32's fixup strategy instead,
>    as suggested by Jeremy Fitzhardinge;
>  * Reflect Samuel Thibault's comments.
> 
Samuel Thibault
2013-Apr-21 00:14 UTC
Re: [PATCH V2 3/6] mini-os/x86-64 entry: code refactoring; no functional changes
Xu Zhang, le Wed 10 Apr 2013 23:46:57 -0500, a écrit :
> Re-arrange assembly code blocks so that they are in called
> order instead of jumping around, enhancing readability.
> Macros are grouped together as well.
> 
> Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Samuel Thibault
2013-Apr-21 00:15 UTC
Re: [PATCH V2 1/6] mini-os/x86-64 entry: code clean-ups; no functional changes
Xu Zhang, le Wed 10 Apr 2013 23:46:55 -0500, a écrit :
> Make use of tabs and spaces consistent in arch/x86/x86_64.S
> 
> Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Samuel Thibault
2013-Apr-21 00:16 UTC
Re: [PATCH V2 2/6] mini-os/x86-64 entry: define macros for registers partial save and restore; no functional changes
Xu Zhang, le Wed 10 Apr 2013 23:46:56 -0500, a écrit :
> For saving and restoring registers rbx, rbp, and r12-r15,
> define and use macros SAVE_REST and RESTORE_REST respectively.
> 
> Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Samuel Thibault
2013-Apr-21 00:22 UTC
Re: [PATCH V2 6/6] mini-os/x86-64 entry: check against nested events and try to fix up
Xu Zhang, le Wed 10 Apr 2013 23:47:00 -0500, a écrit :
> In hypervisor_callback, check against event re-entrant.
> If we came from the critical region in interrupt context,
> try to fix up by coalescing the two stack frames.
> The execution is resumed as if the second event never happened.
> 
> Signed-off-by: Xu Zhang <xzhang@cs.uic.edu>

Acked-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Samuel Thibault
2013-Apr-21 00:22 UTC
Re: [PATCH V2 0/6] mini-os: check and fix up against nested events in x86-64 kernel entry
Ian Campbell, le Wed 17 Apr 2013 16:14:51 +0100, a écrit :
> On Thu, 2013-04-11 at 05:46 +0100, Xu Zhang wrote:
> > mini-os almost always use direct iret to return from interrupt.
> > But this operation is not atomic because Xen uses event mask to
> > enable/disable event delivery. So there is a window for nested
> > events to happen after re-enabling event delivery and before
> > a direct iret.
> > 
> > The issues come with such non-atomicity have been discussed in:
> > http://lists.xen.org/archives/html/xen-devel/2007-06/msg00142.html
> > 
> > And also on Xen-devel:
> > http://markmail.org/message/jkzhzy6fyes6igcf
> > 
> > This patch checks and fixes up against nested events in a similar
> > fashion of Linux 32bit pvops.
> > It checks against re-entrant of critical section in event handling
> > callback. Try to fix up by coalescing the two stack frames into
> > one when a nested event came.
> > It then resumes execution as if the second event never happened.
> > 
> > It also refactors mini-os's x86-64 kernel entry assembly code.
> 
> Samuel are you now happy with this?

Yes.

> WRT making a case for a freeze exception it seems like this is fixing a
> pretty obvious bug in x86_64 mini-os event handling?

Yes.

Samuel
Ian Campbell
2013-Apr-22 12:00 UTC
Re: [PATCH V2 0/6] mini-os: check and fix up against nested events in x86-64 kernel entry
On Sun, 2013-04-21 at 01:22 +0100, Samuel Thibault wrote:
> Ian Campbell, le Wed 17 Apr 2013 16:14:51 +0100, a écrit :
> > On Thu, 2013-04-11 at 05:46 +0100, Xu Zhang wrote:
> > > mini-os almost always use direct iret to return from interrupt.
> > > But this operation is not atomic because Xen uses event mask to
> > > enable/disable event delivery. So there is a window for nested
> > > events to happen after re-enabling event delivery and before
> > > a direct iret.
> > > 
> > > The issues come with such non-atomicity have been discussed in:
> > > http://lists.xen.org/archives/html/xen-devel/2007-06/msg00142.html
> > > 
> > > And also on Xen-devel:
> > > http://markmail.org/message/jkzhzy6fyes6igcf
> > > 
> > > This patch checks and fixes up against nested events in a similar
> > > fashion of Linux 32bit pvops.
> > > It checks against re-entrant of critical section in event handling
> > > callback. Try to fix up by coalescing the two stack frames into
> > > one when a nested event came.
> > > It then resumes execution as if the second event never happened.
> > > 
> > > It also refactors mini-os's x86-64 kernel entry assembly code.
> > 
> > Samuel are you now happy with this?
> 
> Yes.

Thanks. George also Acked on IRC wrt the freeze, so applied.

Thanks Xu.

Ian.

_______________________________________________
Xen-devel mailing list
Xen-devel@lists.xen.org
http://lists.xen.org/xen-devel
Xu Zhang
2013-Apr-22 15:00 UTC
Re: [PATCH V2 0/6] mini-os: check and fix up against nested events in x86-64 kernel entry
On 04/22/2013 07:00 AM, Ian Campbell wrote:
> On Sun, 2013-04-21 at 01:22 +0100, Samuel Thibault wrote:
>> Ian Campbell, le Wed 17 Apr 2013 16:14:51 +0100, a écrit :
>>> On Thu, 2013-04-11 at 05:46 +0100, Xu Zhang wrote:
>>>> mini-os almost always use direct iret to return from interrupt.
>>>> But this operation is not atomic because Xen uses event mask to
>>>> enable/disable event delivery. So there is a window for nested
>>>> events to happen after re-enabling event delivery and before
>>>> a direct iret.
>>>>
>>>> The issues come with such non-atomicity have been discussed in:
>>>> http://lists.xen.org/archives/html/xen-devel/2007-06/msg00142.html
>>>>
>>>> And also on Xen-devel:
>>>> http://markmail.org/message/jkzhzy6fyes6igcf
>>>>
>>>> This patch checks and fixes up against nested events in a similar
>>>> fashion of Linux 32bit pvops.
>>>> It checks against re-entrant of critical section in event handling
>>>> callback. Try to fix up by coalescing the two stack frames into
>>>> one when a nested event came.
>>>> It then resumes execution as if the second event never happened.
>>>>
>>>> It also refactors mini-os's x86-64 kernel entry assembly code.
>>> Samuel are you now happy with this?
>> Yes.
> Thanks. George also Acked on IRC wrt the freeze so applied.
> 
> Thanks Xu.
> 
> Ian.

Thanks, Ian.

Another thing: mini-os x86-32bit fixes up against event re-entrance
using a lookup table, in a fashion similar to the initial version of
these patches. There is no problem correctness-wise, but such a table
can be hard to maintain, as Jeremy pointed out. So I could apply the
same refinement to x86_32.S, too. It would also make the mini-os 32-bit
kernel entry consistent with the 64-bit one. Let me know if it is worth
doing.

Thanks again.

-- 
xu
:q!
Ian Campbell
2013-Apr-22 15:06 UTC
Re: [PATCH V2 0/6] mini-os: check and fix up against nested events in x86-64 kernel entry
On Mon, 2013-04-22 at 16:00 +0100, Xu Zhang wrote:
> Another thing: mini-os x86-32bit fixes up against event re-entrant using
> a look up table in the similar fashion of the initial version of these
> patches. There is no problem correctness wise. But such table could be
> hard to maintain, as Jeremy pointed out. So I can apply the similar
> refinement on x86_32.S, too. It also makes mini-os 32bit kernel entry
> consistent with 64bit's. Let me know if it is worth doing.

IMHO it is worth doing. However, probably not for 4.3, since it is more
of a cleanup than a bug fix.

Ian.