diff --git a/[refs] b/[refs]
index 6192b3d1549b..0becc691f0cf 100644
--- a/[refs]
+++ b/[refs]
@@ -1,2 +1,2 @@
 ---
-refs/heads/master: d098840e37468fdd0143d8bcfe86bc53627bf96e
+refs/heads/master: e3b7df65e089f143b9228472b80fb96c495fb634
diff --git a/trunk/Documentation/aoe/todo.txt b/trunk/Documentation/aoe/todo.txt
new file mode 100644
index 000000000000..7fee1e1165bc
--- /dev/null
+++ b/trunk/Documentation/aoe/todo.txt
@@ -0,0 +1,14 @@
+There is a potential for deadlock when allocating a struct sk_buff for
+data that needs to be written out to aoe storage.  If the data is
+being written from a dirty page in order to free that page, and if
+there are no other pages available, then deadlock may occur when a
+free page is needed for the sk_buff allocation.  This situation has
+not been observed, but it would be nice to eliminate any potential for
+deadlock under memory pressure.
+
+Because ATA over Ethernet is not fragmented by the kernel's IP code,
+the destructor member of the struct sk_buff is available to the aoe
+driver.  By using a mempool for allocating all but the first few
+sk_buffs, and by registering a destructor, we should be able to
+efficiently allocate sk_buffs without introducing any potential for
+deadlock.
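
As a rough illustration of the approach the note describes, the sketch below
(not the actual aoe driver code) shows how a mempool holding a small reserve
of pre-allocated sk_buffs might be created, and how the otherwise-unused skb
destructor could be hooked.  Names such as aoe_skb_pool, AOE_SKB_RESERVE,
AOE_SKB_SIZE, and aoe_alloc_skb are hypothetical, and only the allocation
side is shown; returning completed sk_buffs to the pool is exactly the part
the todo leaves open.

	/*
	 * Hypothetical sketch only: a mempool keeps AOE_SKB_RESERVE
	 * pre-allocated sk_buffs so that writeout can make progress
	 * even when a fresh page cannot be allocated.
	 */
	#include <linux/errno.h>
	#include <linux/init.h>
	#include <linux/mempool.h>
	#include <linux/skbuff.h>

	#define AOE_SKB_RESERVE	16		/* emergency reserve (illustrative) */
	#define AOE_SKB_SIZE	(ETH_HLEN + 1024)	/* frame size (illustrative) */

	static mempool_t *aoe_skb_pool;

	/* mempool_alloc_t: called by mempool_alloc() to get a new element */
	static void *aoe_skb_pool_alloc(gfp_t gfp_mask, void *pool_data)
	{
		return alloc_skb(AOE_SKB_SIZE, gfp_mask);
	}

	/* mempool_free_t: called when the pool already holds its reserve */
	static void aoe_skb_pool_free(void *element, void *pool_data)
	{
		kfree_skb((struct sk_buff *)element);
	}

	static int __init aoe_skb_pool_init(void)
	{
		aoe_skb_pool = mempool_create(AOE_SKB_RESERVE,
					      aoe_skb_pool_alloc,
					      aoe_skb_pool_free, NULL);
		return aoe_skb_pool ? 0 : -ENOMEM;
	}

	/*
	 * Invoked when the stack releases the skb after transmit; since
	 * aoe frames never pass through the IP code, this field is free
	 * for the driver, e.g. for in-flight accounting or to wake a
	 * thread waiting on the reserve.
	 */
	static void aoe_skb_destructor(struct sk_buff *skb)
	{
		/* accounting / wakeup would go here */
	}

	static struct sk_buff *aoe_alloc_skb(gfp_t gfp_mask)
	{
		struct sk_buff *skb = mempool_alloc(aoe_skb_pool, gfp_mask);

		if (skb)
			skb->destructor = aoe_skb_destructor;
		return skb;
	}

The design point is that mempool_alloc() falls back to the pre-allocated
reserve when alloc_skb() fails, so the driver is never blocked waiting for a
free page that only its own writeout could produce.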