cpuidle: coupled: add parallel barrier function
Adds cpuidle_coupled_parallel_barrier, which can be used by coupled
cpuidle state enter functions to handle resynchronization after
determining if any cpu needs to abort.  The normal use case will
be:

static bool abort_flag;
static atomic_t abort_barrier;

int arch_cpuidle_enter(struct cpuidle_device *dev, ...)
{
	if (arch_turn_off_irq_controller()) {
		/* returns an error if an irq is pending and would be lost
		   if idle continued and turned off power */
		abort_flag = true;
	}

	cpuidle_coupled_parallel_barrier(dev, &abort_barrier);

	if (abort_flag) {
		/* One of the cpus didn't turn off its irq controller */
		arch_turn_on_irq_controller();
		return -EINTR;
	}

	/* continue with idle */
	...
}

This will cause all cpus to abort idle together if one of them needs
to abort.

Reviewed-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Tested-by: Santosh Shilimkar <santosh.shilimkar@ti.com>
Reviewed-by: Kevin Hilman <khilman@ti.com>
Tested-by: Kevin Hilman <khilman@ti.com>
Signed-off-by: Colin Cross <ccross@android.com>
Signed-off-by: Len Brown <len.brown@intel.com>
Colin Cross authored and Len Brown committed Jun 2, 2012
1 parent 4126c01 commit 20ff51a
Showing 2 changed files with 41 additions and 0 deletions.
37 changes: 37 additions & 0 deletions drivers/cpuidle/coupled.c
@@ -129,6 +129,43 @@ static DEFINE_PER_CPU(struct call_single_data, cpuidle_coupled_poke_cb);
*/
static cpumask_t cpuidle_coupled_poked_mask;

/**
* cpuidle_coupled_parallel_barrier - synchronize all online coupled cpus
* @dev: cpuidle_device of the calling cpu
* @a: atomic variable to hold the barrier
*
* No caller will return from this function until all online cpus in the
* same coupled group have called it.  Once any caller has returned, the
* barrier is immediately available for reuse.
*
* The atomic variable a must be initialized to 0 before any cpu calls
* this function, and will be reset to 0 before any cpu returns from it.
*
* Must only be called from within a coupled idle state handler
* (state.enter when state.flags has CPUIDLE_FLAG_COUPLED set).
*
* Provides full smp barrier semantics before and after calling.
*/
void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a)
{
int n = dev->coupled->online_count;

smp_mb__before_atomic_inc();
atomic_inc(a);

while (atomic_read(a) < n)
cpu_relax();

if (atomic_inc_return(a) == n * 2) {
atomic_set(a, 0);
return;
}

while (atomic_read(a) > n)
cpu_relax();
}

/**
* cpuidle_state_is_coupled - check if a state is part of a coupled set
* @dev: struct cpuidle_device for the current cpu
4 changes: 4 additions & 0 deletions include/linux/cpuidle.h
@@ -183,6 +183,10 @@ static inline int cpuidle_play_dead(void) {return -ENODEV; }

#endif

#ifdef CONFIG_ARCH_NEEDS_CPU_IDLE_COUPLED
void cpuidle_coupled_parallel_barrier(struct cpuidle_device *dev, atomic_t *a);
#endif

/******************************
* CPUIDLE GOVERNOR INTERFACE *
******************************/
