x86-32: make sure clts is batched during context switch
If we're preloading the fpu state during context switch, make sure the clts
happens while we're batching the cpu context update, then do the actual
__math_state_restore once the updates are flushed.

This allows more efficient context switches when running paravirtualized,
as all the hypercalls can be folded together into one.
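
To illustrate what "folded together into one" means, here is a minimal,
self-contained C sketch of the lazy-mode batching idea. It is hypothetical:
struct hypercall, queue_hypercall(), flush_hypercalls() and the pv_*()
helpers are invented stand-ins, not the kernel's real paravirt_ops
interface (on Xen the actual mechanism is the multicall batch that
arch_end_context_switch() flushes).

        /*
         * Hypothetical sketch of paravirt hypercall batching -- all names
         * below are invented for illustration, NOT the kernel's real API.
         */
        #include <stdbool.h>
        #include <stdio.h>

        struct hypercall { const char *op; unsigned long arg; };

        #define MAX_PENDING 8

        static bool lazy_cpu_mode;
        static struct hypercall pending[MAX_PENDING];
        static unsigned int npending;

        /* Issue everything queued so far as one batched hypercall. */
        static void flush_hypercalls(void)
        {
                unsigned int i;

                if (!npending)
                        return;
                printf("one hypercall carrying %u ops:", npending);
                for (i = 0; i < npending; i++)
                        printf(" %s", pending[i].op);
                printf("\n");
                npending = 0;
        }

        static void queue_hypercall(struct hypercall hc)
        {
                if (npending == MAX_PENDING)
                        flush_hypercalls();     /* batch full: flush early */
                pending[npending++] = hc;
                if (!lazy_cpu_mode)
                        flush_hypercalls();     /* not batching: issue now */
        }

        /* What a paravirtualized clts() can do: queue "clear CR0.TS". */
        static void pv_clts(void)
        {
                queue_hypercall((struct hypercall){ "fpu_taskswitch", 0 });
        }

        static void pv_start_context_switch(void)
        {
                lazy_cpu_mode = true;
        }

        /* Stands in for arch_end_context_switch(): the single flush point. */
        static void pv_end_context_switch(void)
        {
                lazy_cpu_mode = false;
                flush_hypercalls();
        }

        int main(void)
        {
                pv_start_context_switch();
                queue_hypercall((struct hypercall){ "stack_switch", 0 }); /* load_sp0 */
                queue_hypercall((struct hypercall){ "set_tls", 0 });      /* load_TLS */
                pv_clts();                /* batched, per this commit's reordering */
                pv_end_context_switch();  /* everything above folds into one flush */
                /* __math_state_restore() would run here, after TS is clear. */
                return 0;
        }

Before this change, clts() ran via math_state_restore() after the flush
point, so clearing TS cost a hypercall of its own; moving it ahead of
arch_end_context_switch() lets it ride along in the same batch.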

[ Impact: optimise paravirtual FPU context switch ]

Signed-off-by: Jeremy Fitzhardinge <jeremy.fitzhardinge@citrix.com>
Cc: Alok Kataria <akataria@vmware.com>
Cc: Rusty Russell <rusty@rustcorp.com.au>
Jeremy Fitzhardinge committed Jun 17, 2009
commit 2fcddce, parent e6e9cac
 arch/x86/kernel/process_32.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)
diff --git a/arch/x86/kernel/process_32.c b/arch/x86/kernel/process_32.c
--- a/arch/x86/kernel/process_32.c
+++ b/arch/x86/kernel/process_32.c
@@ -350,14 +350,21 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 				 *next = &next_p->thread;
 	int cpu = smp_processor_id();
 	struct tss_struct *tss = &per_cpu(init_tss, cpu);
+	bool preload_fpu;
 
 	/* never put a printk in __switch_to... printk() calls wake_up*() indirectly */
 
-	__unlazy_fpu(prev_p);
+	/*
+	 * If the task has used fpu the last 5 timeslices, just do a full
+	 * restore of the math state immediately to avoid the trap; the
+	 * chances of needing FPU soon are obviously high now
+	 */
+	preload_fpu = tsk_used_math(next_p) && next_p->fpu_counter > 5;
+
+	__unlazy_fpu(prev_p);
 
 	/* we're going to use this soon, after a few expensive things */
-	if (next_p->fpu_counter > 5)
+	if (preload_fpu)
 		prefetch(next->xstate);
 
 	/*
 	 * Reload esp0.
@@ -398,6 +405,11 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	    task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
 		__switch_to_xtra(prev_p, next_p, tss);
 
+	/* If we're going to preload the fpu context, make sure clts
+	   is run while we're batching the cpu state updates. */
+	if (preload_fpu)
+		clts();
+
 	/*
 	 * Leave lazy mode, flushing any hypercalls made here.
 	 * This must be done before restoring TLS segments so
@@ -407,15 +419,8 @@ __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
 	 */
 	arch_end_context_switch(next_p);
 
-	/* If the task has used fpu the last 5 timeslices, just do a full
-	 * restore of the math state immediately to avoid the trap; the
-	 * chances of needing FPU soon are obviously high now
-	 *
-	 * tsk_used_math() checks prevent calling math_state_restore(),
-	 * which can sleep in the case of !tsk_used_math()
-	 */
-	if (tsk_used_math(next_p) && next_p->fpu_counter > 5)
-		math_state_restore();
+	if (preload_fpu)
+		__math_state_restore();
 
 	/*
 	 * Restore %gs if needed (which is common)
