search for: loadsegment

Displaying 20 results from an estimated 44 matches for "loadsegment".

2007 Apr 18
1
[PATCH 7/21] i386 Losing fs gs to bios
...save_area->saved_gs); + save_area->save_desc_40 = gdt[GDT_ENTRY_BAD_BIOS]; + gdt[GDT_ENTRY_BAD_BIOS] = gdt[GDT_ENTRY_BAD_BIOS_CACHE]; +} + +static inline void restore_bios_segments(struct bios_segment_save *save_area) +{ + save_area->gdt[GDT_ENTRY_BAD_BIOS] = save_area->save_desc_40; + loadsegment(fs, save_area->saved_fs); + loadsegment(gs, save_area->saved_gs); + put_cpu(); +} + #endif /* !__ASSEMBLY__ */ #endif Index: linux-2.6.14-zach-work/drivers/pnp/pnpbios/bioscalls.c =================================================================== --- linux-2.6.14-zach-work.orig/drivers/p...
2007 Feb 14
4
[PATCH 3/12] Provide basic Xen PM infrastructure
...("sidt %0":"=m" (*dtr)) +#define store_tr(tr) __asm__ ("str %0":"=mr" (tr)) +#define store_ldt(ldt) __asm__ ("sldt %0":"=mr" (ldt)) + +/* + * Load a segment. Fall back on loading the zero + * segment if something goes wrong.. + */ +#define loadsegment(seg,value) \ + asm volatile("\n" \ + "1:\t" \ + "mov %0,%%" #seg "\n" \ + "2:\n" \ + ".section .fixup,\"ax\"\n" \ + "3:\t"...
2008 Jan 07
2
bcmxcp patch
Hi Michael, I've forwarded your patch to Kjell, who's maintaining the bcmxcp driver, and to the nut development list. Kjell and others have more knowledge of the xcp protocol and will be able to analyze your patch. 2008/1/4, michalwd1979 <michalwd1979 at o2.pl>: > Hello Arnaud, > I am sending you a small patch to bcmxcp.c and bcmxcp.h files from nut-2.2.0. I wrote this
2007 Apr 18
1
[RFC, PATCH 19/24] i386 Vmi mmu changes
...006-03-10 15:57:34.000000000 -0800 @@ -662,18 +662,6 @@ struct task_struct fastcall * __switch_t load_TLS(next, cpu); /* - * Restore %fs and %gs if needed. - * - * Glibc normally makes %fs be zero, and %gs is one of - * the TLS segments. - */ - if (unlikely(prev->fs | next->fs)) - loadsegment(fs, next->fs); - - if (prev->gs | next->gs) - loadsegment(gs, next->gs); - - /* * Restore IOPL if needed. */ if (unlikely(prev->iopl != next->iopl)) @@ -696,6 +684,19 @@ struct task_struct fastcall * __switch_t handle_io_bitmap(next, tss); disable_tsc(prev_p, next_...
2007 Apr 18
0
[PATCH 2/6] Paravirt CPU hypercall batching mode
...ediately to avoid the trap; the + * chances of needing FPU soon are obviously high now + */ + if (next_p->fpu_counter > 5) + math_state_restore(); + + /* * Restore %fs if needed. * * Glibc normally makes %fs be zero. @@ -673,22 +698,6 @@ struct task_struct fastcall * __switch_t loadsegment(fs, next->fs); write_pda(pcurrent, next_p); - - /* - * Now maybe handle debug registers and/or IO bitmaps - */ - if (unlikely((task_thread_info(next_p)->flags & _TIF_WORK_CTXSW) - || test_tsk_thread_flag(prev_p, TIF_IO_BITMAP))) - __switch_to_xtra(next_p, tss); - - disable_tsc(...
2007 Apr 18
2
[PATCH 2/5] Paravirt cpu batching.patch
...ediately to avoid the trap; the + * chances of needing FPU soon are obviously high now + */ + if (next_p->fpu_counter > 5) + math_state_restore(); + + /* * Restore %fs if needed. * * Glibc normally makes %fs be zero. @@ -673,22 +704,6 @@ struct task_struct fastcall * __switch_t loadsegment(fs, next->fs); write_pda(pcurrent, next_p); - - /* - * Now maybe handle debug registers and/or IO bitmaps - */ - if (unlikely((task_thread_info(next_p)->flags & _TIF_WORK_CTXSW) - || test_tsk_thread_flag(prev_p, TIF_IO_BITMAP))) - __switch_to_xtra(next_p, tss); - - disable_tsc(...
2007 Oct 09
0
[PATCH RFC REPOST 2/2] paravirt: clean up lazy mode handling
...d) @@ -357,7 +336,7 @@ static void xen_load_tls(struct thread_s * loaded properly. This will go away as soon as Xen has been * modified to not save/restore %gs for normal hypercalls. */ - if (xen_get_lazy_mode() == PARAVIRT_LAZY_CPU) + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) loadsegment(gs, 0); } @@ -961,6 +940,11 @@ static const struct pv_cpu_ops xen_cpu_o .set_iopl_mask = xen_set_iopl_mask, .io_delay = xen_io_delay, + + .lazy_mode = { + .enter = paravirt_enter_lazy_cpu, + .leave = xen_leave_lazy, + }, }; static const struct pv_irq_ops xen_irq_ops __initdata = { @@...
2007 Aug 07
1
[PATCH] Fix Malicious Guest GDT Host Crash
...est_load_tls(struct thread_struct *t, unsigned int cpu) { + /* There's one problem which normal hardware doesn't have: the Host + * can't handle us removing entries we're currently using. So we clear + * the GS register here: if it's needed it'll be reloaded anyway. */ + loadsegment(gs, 0); lazy_hcall(LHCALL_LOAD_TLS, __pa(&t->tls_array), cpu, 0); } -/*:*/ /*G:038 That's enough excitement for now, back to ploughing through each of * the paravirt_ops (we're about 1/3 of the way through). diff -r 55fdd7fa62b7 drivers/lguest/segments.c --- a/drivers/lguest/...
2007 Oct 01
2
[PATCH RFC] paravirt: cleanup lazy mode handling
...d) @@ -357,7 +335,7 @@ static void xen_load_tls(struct thread_s * loaded properly. This will go away as soon as Xen has been * modified to not save/restore %gs for normal hypercalls. */ - if (xen_get_lazy_mode() == PARAVIRT_LAZY_CPU) + if (paravirt_get_lazy_mode() == PARAVIRT_LAZY_CPU) loadsegment(gs, 0); } @@ -961,6 +939,11 @@ static const struct pv_cpu_ops xen_cpu_o .set_iopl_mask = xen_set_iopl_mask, .io_delay = xen_io_delay, + + .lazy_mode = { + .enter = paravirt_nop, + .leave = xen_leave_lazy, + }, }; static const struct pv_irq_ops xen_irq_ops __initdata = { @@ -1036,10 +...
2007 Apr 18
5
[patch 0/5] i386-gdt-pda i386 gdt and pda updates
Hi Andrew, This patch series adds to the end of the existing i386-gdt-cleanups patches: allow-per-cpu-variables-to-be-page-aligned.patch i386-gdt-cleanups-use-per-cpu-variables-for-gdt-pda.patch i386-gdt-cleanups-use-per-cpu-gdt-immediately-upon-boot.patch i386-gdt-cleanups-use-per-cpu-gdt-immediately-upon-boot-fix.patch i386-gdt-cleanups-clean-up-cpu_init.patch
2007 Apr 18
3
Per-cpu patches on top of PDA stuff...
Hi Jeremy, all, Sorry this took so long, spent last week in Japan at OSDL conf then netconf. After several false starts, I ended up with a very simple implementation, which clashes significantly with your work since then 8(. I've pushed the patches anyway, but it's going to be significant work for me to re-merge them, so I wanted your feedback first. The first patch simply changes