Commit 33c7045
Linus Torvalds committed Jun 24, 2011
1 parent 508d4a9 commit 33c7045
Showing 61 changed files with 535 additions and 470 deletions.
2 changes: 1 addition & 1 deletion [refs]
@@ -1,2 +1,2 @@
---
refs/heads/master: 95efa2863996b643083957079b9304fb3c01130f
refs/heads/master: 7a1f7b3d68174f99d01d52b585088b2eaee758a6
67 changes: 14 additions & 53 deletions trunk/Documentation/power/devices.txt
@@ -520,59 +520,20 @@ Support for power domains is provided through the pwr_domain field of struct
device. This field is a pointer to an object of type struct dev_power_domain,
defined in include/linux/pm.h, providing a set of power management callbacks
analogous to the subsystem-level and device driver callbacks that are executed
for the given device during all power transitions, in addition to the respective
subsystem-level callbacks. Specifically, the power domain "suspend" callbacks
(i.e. ->runtime_suspend(), ->suspend(), ->freeze(), ->poweroff(), etc.) are
executed after the analogous subsystem-level callbacks, while the power domain
"resume" callbacks (i.e. ->runtime_resume(), ->resume(), ->thaw(), ->restore,
etc.) are executed before the analogous subsystem-level callbacks. Error codes
returned by the "suspend" and "resume" power domain callbacks are ignored.

The power domain ->runtime_idle() callback is executed before the
subsystem-level ->runtime_idle() callback, and the result returned by it is
not ignored. Namely, if it returns an error code, the subsystem-level
->runtime_idle() callback will not be called and the helper function
rpm_idle() executing it will return that error code. This mechanism is
intended to help platforms where saving device state is a time-consuming
operation and should only be carried out if all devices in the power domain
are idle, before turning off the shared power resource(s). That is, the power
domain ->runtime_idle() callback may return an error code until the
pm_runtime_idle() helper (or its asynchronous version) has been called for
all devices in the power domain (it is recommended that the returned error
code be -EBUSY in those cases), preventing the subsystem-level
->runtime_idle() callback from being run prematurely.
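
For illustration, here is a minimal sketch (hypothetical platform code, not
part of this commit) of a power domain ->runtime_idle() callback following
the -EBUSY convention described above; foo_domain_all_idle() is an assumed
helper, not a real kernel function:

	static bool foo_domain_all_idle(struct device *dev)
	{
		/* Hypothetical bookkeeping, e.g. a per-domain counter
		 * updated as each device in the domain goes idle. */
		return false;
	}

	static int foo_domain_runtime_idle(struct device *dev)
	{
		/* Keep the shared power resource(s) on until every
		 * device in the domain has gone idle. */
		if (!foo_domain_all_idle(dev))
			return -EBUSY;
		/* Let the subsystem-level ->runtime_idle() run. */
		return 0;
	}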

The support for device power domains is only relevant to platforms needing to
use the same subsystem-level (e.g. platform bus type) and device driver power
management callbacks in many different power domain configurations and wanting
to avoid incorporating the support for power domains into the subsystem-level
callbacks. Other platforms need not implement it or take it into account in
any way.


System Devices
--------------
System devices (sysdevs) follow a slightly different API, which can be found in

include/linux/sysdev.h
drivers/base/sys.c

System devices will be suspended with interrupts disabled, and after all other
devices have been suspended. On resume, they will be resumed before any other
devices, and also with interrupts disabled. These things occur in special
"sysdev_driver" phases, which affect only system devices.

Thus, after the suspend_noirq (or freeze_noirq or poweroff_noirq) phase, when
the non-boot CPUs are all offline and IRQs are disabled on the remaining online
CPU, then a sysdev_driver.suspend phase is carried out, and the system enters a
sleep state (or a system image is created). During resume (or after the image
has been created or loaded) a sysdev_driver.resume phase is carried out, IRQs
are enabled on the only online CPU, the non-boot CPUs are enabled, and the
resume_noirq (or thaw_noirq or restore_noirq) phase begins.

Code to actually enter and exit the system-wide low power state sometimes
involves hardware details that are only known to the boot firmware, and
may leave a CPU running software (from SRAM or flash memory) that monitors
the system and manages its wakeup sequence.
for the given device during all power transitions, instead of the respective
subsystem-level callbacks. Specifically, if a device's pwr_domain pointer is
not NULL, the ->suspend() callback from the object pointed to by it will be
executed instead of its subsystem's (e.g. bus type's) ->suspend() callback, and
analogously for all of the remaining callbacks. In other words, power management
domain callbacks, if defined for the given device, always take precedence over
the callbacks provided by the device's subsystem (e.g. bus type).

The support for device power management domains is only relevant to platforms
needing to use the same device driver power management callbacks in many
different power domain configurations and wanting to avoid incorporating the
support for power domains into subsystem-level callbacks, for example by
modifying the platform bus type. Other platforms need not implement it or take
it into account in any way.
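
To illustrate the precedence rule, here is a minimal sketch (hypothetical
platform code, not part of this commit) of attaching a power domain to a
device; it assumes this kernel version's struct dev_power_domain, which
simply wraps a struct dev_pm_ops, and the foo_* names are invented:

	#include <linux/device.h>
	#include <linux/pm.h>

	static int foo_domain_suspend(struct device *dev)
	{
		/* Runs instead of the subsystem's ->suspend(). */
		return 0;
	}

	static int foo_domain_resume(struct device *dev)
	{
		/* Runs instead of the subsystem's ->resume(). */
		return 0;
	}

	static struct dev_power_domain foo_pwr_domain = {
		.ops = {
			.suspend = foo_domain_suspend,
			.resume = foo_domain_resume,
		},
	};

	/* Platform setup code attaches the domain to each device: */
	static void foo_attach_domain(struct device *dev)
	{
		dev->pwr_domain = &foo_pwr_domain;
	}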


Device Low Power (suspend) States
5 changes: 0 additions & 5 deletions trunk/Documentation/power/runtime_pm.txt
@@ -566,11 +566,6 @@ to do this is:
pm_runtime_set_active(dev);
pm_runtime_enable(dev);
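
As a concrete illustration of this pattern, a minimal sketch (hypothetical
driver code, not part of this commit) of a system resume callback for
hardware that is known to be fully operational when it returns:

	static int foo_resume(struct device *dev)
	{
		/* ... bring the hardware back up ... */

		/* Make the runtime PM status match the hardware state
		 * before runtime PM is relied upon again. */
		pm_runtime_disable(dev);
		pm_runtime_set_active(dev);
		pm_runtime_enable(dev);
		return 0;
	}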

The PM core always increments the run-time usage counter before calling the
->prepare() callback and decrements it after calling the ->complete() callback.
Hence disabling run-time PM temporarily like this will not cause any run-time
suspend callbacks to be lost.

7. Generic subsystem callbacks

Subsystems may wish to conserve code space by using the set of generic power
1 change: 1 addition & 0 deletions trunk/arch/mn10300/include/asm/uaccess.h
@@ -15,6 +15,7 @@
* User space memory access functions
*/
#include <linux/thread_info.h>
#include <linux/kernel.h>
#include <asm/page.h>
#include <asm/errno.h>

2 changes: 1 addition & 1 deletion trunk/arch/x86/pci/acpi.c
@@ -188,7 +188,7 @@ static bool resource_contains(struct resource *res, resource_size_t point)
return false;
}

static void coalesce_windows(struct pci_root_info *info, int type)
static void coalesce_windows(struct pci_root_info *info, unsigned long type)
{
int i, j;
struct resource *res1, *res2;
4 changes: 2 additions & 2 deletions trunk/drivers/base/power/clock_ops.c
@@ -387,15 +387,15 @@ static int pm_runtime_clk_notify(struct notifier_block *nb,
clknb = container_of(nb, struct pm_clk_notifier_block, nb);

switch (action) {
case BUS_NOTIFY_ADD_DEVICE:
case BUS_NOTIFY_BIND_DRIVER:
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
enable_clock(dev, *con_id);
} else {
enable_clock(dev, NULL);
}
break;
case BUS_NOTIFY_DEL_DEVICE:
case BUS_NOTIFY_UNBOUND_DRIVER:
if (clknb->con_ids[0]) {
for (con_id = clknb->con_ids; *con_id; con_id++)
disable_clock(dev, *con_id);
28 changes: 21 additions & 7 deletions trunk/drivers/base/power/main.c
@@ -57,7 +57,8 @@ static int async_error;
*/
void device_pm_init(struct device *dev)
{
dev->power.in_suspend = false;
dev->power.is_prepared = false;
dev->power.is_suspended = false;
init_completion(&dev->power.completion);
complete_all(&dev->power.completion);
dev->power.wakeup = NULL;
@@ -91,7 +92,7 @@ void device_pm_add(struct device *dev)
pr_debug("PM: Adding info for %s:%s\n",
dev->bus ? dev->bus->name : "No Bus", dev_name(dev));
mutex_lock(&dpm_list_mtx);
if (dev->parent && dev->parent->power.in_suspend)
if (dev->parent && dev->parent->power.is_prepared)
dev_warn(dev, "parent %s should not be sleeping\n",
dev_name(dev->parent));
list_add_tail(&dev->power.entry, &dpm_list);
@@ -511,7 +512,14 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
dpm_wait(dev->parent, async);
device_lock(dev);

dev->power.in_suspend = false;
/*
* This is a fib. But we'll allow new children to be added below
* a resumed device, even if the device hasn't been completed yet.
*/
dev->power.is_prepared = false;

if (!dev->power.is_suspended)
goto Unlock;

if (dev->pwr_domain) {
pm_dev_dbg(dev, state, "power domain ");
@@ -548,6 +556,9 @@ static int device_resume(struct device *dev, pm_message_t state, bool async)
}

End:
dev->power.is_suspended = false;

Unlock:
device_unlock(dev);
complete_all(&dev->power.completion);

@@ -670,7 +681,7 @@ void dpm_complete(pm_message_t state)
struct device *dev = to_device(dpm_prepared_list.prev);

get_device(dev);
dev->power.in_suspend = false;
dev->power.is_prepared = false;
list_move(&dev->power.entry, &list);
mutex_unlock(&dpm_list_mtx);

@@ -835,11 +846,11 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
device_lock(dev);

if (async_error)
goto End;
goto Unlock;

if (pm_wakeup_pending()) {
async_error = -EBUSY;
goto End;
goto Unlock;
}

if (dev->pwr_domain) {
Expand Down Expand Up @@ -877,6 +888,9 @@ static int __device_suspend(struct device *dev, pm_message_t state, bool async)
}

End:
dev->power.is_suspended = !error;

Unlock:
device_unlock(dev);
complete_all(&dev->power.completion);

@@ -1042,7 +1056,7 @@ int dpm_prepare(pm_message_t state)
put_device(dev);
break;
}
dev->power.in_suspend = true;
dev->power.is_prepared = true;
if (!list_empty(&dev->power.entry))
list_move_tail(&dev->power.entry, &dpm_prepared_list);
put_device(dev);
46 changes: 37 additions & 9 deletions trunk/drivers/infiniband/hw/cxgb4/cm.c
@@ -1463,9 +1463,9 @@ static int peer_close(struct c4iw_dev *dev, struct sk_buff *skb)
struct c4iw_qp_attributes attrs;
int disconnect = 1;
int release = 0;
int abort = 0;
struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(hdr);
int ret;

ep = lookup_tid(t, tid);
PDBG("%s ep %p tid %u\n", __func__, ep, ep->hwtid);
@@ -1501,10 +1501,12 @@ static int peer_close(struct c4iw_dev *dev, struct sk_buff *skb)
start_ep_timer(ep);
__state_set(&ep->com, CLOSING);
attrs.next_state = C4IW_QP_STATE_CLOSING;
abort = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,
ret = c4iw_modify_qp(ep->com.qp->rhp, ep->com.qp,
C4IW_QP_ATTR_NEXT_STATE, &attrs, 1);
peer_close_upcall(ep);
disconnect = 1;
if (ret != -ECONNRESET) {
peer_close_upcall(ep);
disconnect = 1;
}
break;
case ABORTING:
disconnect = 0;
@@ -2109,15 +2111,16 @@ int c4iw_ep_disconnect(struct c4iw_ep *ep, int abrupt, gfp_t gfp)
break;
}

mutex_unlock(&ep->com.mutex);
if (close) {
if (abrupt)
ret = abort_connection(ep, NULL, gfp);
else
if (abrupt) {
close_complete_upcall(ep);
ret = send_abort(ep, NULL, gfp);
} else
ret = send_halfclose(ep, gfp);
if (ret)
fatal = 1;
}
mutex_unlock(&ep->com.mutex);
if (fatal)
release_ep_resources(ep);
return ret;
@@ -2301,6 +2304,31 @@ static int fw6_msg(struct c4iw_dev *dev, struct sk_buff *skb)
return 0;
}

static int peer_abort_intr(struct c4iw_dev *dev, struct sk_buff *skb)
{
struct cpl_abort_req_rss *req = cplhdr(skb);
struct c4iw_ep *ep;
struct tid_info *t = dev->rdev.lldi.tids;
unsigned int tid = GET_TID(req);

ep = lookup_tid(t, tid);
if (is_neg_adv_abort(req->status)) {
PDBG("%s neg_adv_abort ep %p tid %u\n", __func__, ep,
ep->hwtid);
kfree_skb(skb);
return 0;
}
PDBG("%s ep %p tid %u state %u\n", __func__, ep, ep->hwtid,
ep->com.state);

/*
* Wake up any threads in rdma_init() or rdma_fini().
*/
c4iw_wake_up(&ep->com.wr_wait, -ECONNRESET);
sched(dev, skb);
return 0;
}

/*
* Most upcalls from the T4 Core go to sched() to
* schedule the processing on a work queue.
@@ -2317,7 +2345,7 @@ c4iw_handler_func c4iw_handlers[NUM_CPL_CMDS] = {
[CPL_PASS_ESTABLISH] = sched,
[CPL_PEER_CLOSE] = sched,
[CPL_CLOSE_CON_RPL] = sched,
[CPL_ABORT_REQ_RSS] = sched,
[CPL_ABORT_REQ_RSS] = peer_abort_intr,
[CPL_RDMA_TERMINATE] = sched,
[CPL_FW4_ACK] = sched,
[CPL_SET_TCB_RPL] = set_tcb_rpl,
4 changes: 4 additions & 0 deletions trunk/drivers/infiniband/hw/cxgb4/cq.c
@@ -801,6 +801,10 @@ struct ib_cq *c4iw_create_cq(struct ib_device *ibdev, int entries,
if (ucontext) {
memsize = roundup(memsize, PAGE_SIZE);
hwentries = memsize / sizeof *chp->cq.queue;
while (hwentries > T4_MAX_IQ_SIZE) {
memsize -= PAGE_SIZE;
hwentries = memsize / sizeof *chp->cq.queue;
}
}
chp->cq.size = hwentries;
chp->cq.memsize = memsize;
2 changes: 1 addition & 1 deletion trunk/drivers/infiniband/hw/cxgb4/mem.c
@@ -625,7 +625,7 @@ struct ib_mr *c4iw_reg_user_mr(struct ib_pd *pd, u64 start, u64 length,
mhp->attr.perms = c4iw_ib_to_tpt_access(acc);
mhp->attr.va_fbo = virt;
mhp->attr.page_size = shift - 12;
mhp->attr.len = (u32) length;
mhp->attr.len = length;

err = register_mem(rhp, php, mhp, shift);
if (err)
5 changes: 1 addition & 4 deletions trunk/drivers/infiniband/hw/cxgb4/qp.c
@@ -1207,11 +1207,8 @@ int c4iw_modify_qp(struct c4iw_dev *rhp, struct c4iw_qp *qhp,
c4iw_get_ep(&qhp->ep->com);
}
ret = rdma_fini(rhp, qhp, ep);
if (ret) {
if (internal)
c4iw_get_ep(&qhp->ep->com);
if (ret)
goto err;
}
break;
case C4IW_QP_STATE_TERMINATE:
set_state(qhp, C4IW_QP_STATE_TERMINATE);
25 changes: 18 additions & 7 deletions trunk/drivers/infiniband/hw/qib/qib_iba7322.c
@@ -469,6 +469,8 @@ static u8 ib_rate_to_delay[IB_RATE_120_GBPS + 1] = {
#define IB_7322_LT_STATE_RECOVERIDLE 0x0f
#define IB_7322_LT_STATE_CFGENH 0x10
#define IB_7322_LT_STATE_CFGTEST 0x11
#define IB_7322_LT_STATE_CFGWAITRMTTEST 0x12
#define IB_7322_LT_STATE_CFGWAITENH 0x13

/* link state machine states from IBC */
#define IB_7322_L_STATE_DOWN 0x0
@@ -498,8 +500,10 @@ static const u8 qib_7322_physportstate[0x20] = {
IB_PHYSPORTSTATE_LINK_ERR_RECOVER,
[IB_7322_LT_STATE_CFGENH] = IB_PHYSPORTSTATE_CFG_ENH,
[IB_7322_LT_STATE_CFGTEST] = IB_PHYSPORTSTATE_CFG_TRAIN,
[0x12] = IB_PHYSPORTSTATE_CFG_TRAIN,
[0x13] = IB_PHYSPORTSTATE_CFG_WAIT_ENH,
[IB_7322_LT_STATE_CFGWAITRMTTEST] =
IB_PHYSPORTSTATE_CFG_TRAIN,
[IB_7322_LT_STATE_CFGWAITENH] =
IB_PHYSPORTSTATE_CFG_WAIT_ENH,
[0x14] = IB_PHYSPORTSTATE_CFG_TRAIN,
[0x15] = IB_PHYSPORTSTATE_CFG_TRAIN,
[0x16] = IB_PHYSPORTSTATE_CFG_TRAIN,
@@ -1692,7 +1696,9 @@ static void handle_serdes_issues(struct qib_pportdata *ppd, u64 ibcst)
break;
}

if (ibclt == IB_7322_LT_STATE_CFGTEST &&
if (((ibclt >= IB_7322_LT_STATE_CFGTEST &&
ibclt <= IB_7322_LT_STATE_CFGWAITENH) ||
ibclt == IB_7322_LT_STATE_LINKUP) &&
(ibcst & SYM_MASK(IBCStatusA_0, LinkSpeedQDR))) {
force_h1(ppd);
ppd->cpspec->qdr_reforce = 1;
@@ -7301,12 +7307,17 @@ static void ibsd_wr_allchans(struct qib_pportdata *ppd, int addr, unsigned data,
static void serdes_7322_los_enable(struct qib_pportdata *ppd, int enable)
{
u64 data = qib_read_kreg_port(ppd, krp_serdesctrl);
printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS %s\n",
ppd->dd->unit, ppd->port, (enable ? "on" : "off"));
if (enable)
u8 state = SYM_FIELD(data, IBSerdesCtrl_0, RXLOSEN);

if (enable && !state) {
printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS on\n",
ppd->dd->unit, ppd->port);
data |= SYM_MASK(IBSerdesCtrl_0, RXLOSEN);
else
} else if (!enable && state) {
printk(KERN_INFO QIB_DRV_NAME " IB%u:%u Turning LOS off\n",
ppd->dd->unit, ppd->port);
data &= ~SYM_MASK(IBSerdesCtrl_0, RXLOSEN);
}
qib_write_kreg_port(ppd, krp_serdesctrl, data);
}
