android_kernel_xiaomi_sm8350/include/rdma
commit 5e80ba8ff0 by Vladimir Sokolovsky
IB/core: Add support for masked atomic operations

- Add new IB_WR_MASKED_ATOMIC_CMP_AND_SWP and IB_WR_MASKED_ATOMIC_FETCH_AND_ADD
  send opcodes that can be used to post "masked atomic compare and swap"
  and "masked atomic fetch and add" work requests, respectively (a posting
  sketch follows this list).
- Add the masked_atomic_cap capability.
- Add mask fields to the atomic struct of ib_send_wr.
- Add the new opcodes to ib_wc_opcode.
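
For illustration, a minimal sketch of posting one of the new work requests is
given below.  It assumes the ib_send_wr layout described in this commit (the
mask fields living in the wr.atomic union member) and that the QP, a local
8-byte SGE for the response, the remote address, and the rkey already exist;
the helper name and the example mask values are hypothetical.

| #include <linux/string.h>
| #include <rdma/ib_verbs.h>
|
| /* Hypothetical helper: masked compare-and-swap on the low byte only. */
| static int post_masked_cmp_swap(struct ib_qp *qp, struct ib_sge *sge,
|                                 u64 remote_addr, u32 rkey)
| {
|         struct ib_send_wr wr, *bad_wr;
|
|         memset(&wr, 0, sizeof(wr));
|         wr.opcode     = IB_WR_MASKED_ATOMIC_CMP_AND_SWP;
|         wr.send_flags = IB_SEND_SIGNALED;
|         wr.sg_list    = sge;
|         wr.num_sge    = 1;
|
|         wr.wr.atomic.remote_addr      = remote_addr;
|         wr.wr.atomic.rkey             = rkey;
|         wr.wr.atomic.compare_add      = 0x01;    /* expected value     */
|         wr.wr.atomic.compare_add_mask = 0xff;    /* compare low byte   */
|         wr.wr.atomic.swap             = 0x02;    /* replacement value  */
|         wr.wr.atomic.swap_mask        = 0xff;    /* swap low byte only */
|
|         return ib_post_send(qp, &wr, &bad_wr);
| }

A consumer would first check the masked_atomic_cap capability added by this
patch before posting either of the masked opcodes.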

The new operations are described more precisely below:

* Masked Compare and Swap (MskCmpSwap)

The MskCmpSwap atomic operation is an extension to the CmpSwap
operation defined in the IB spec.  MskCmpSwap allows the user to
select a portion of the 64-bit target data for the "compare" check as
well as to restrict the swap to a (possibly different) portion.  The
pseudocode below describes the operation:

| atomic_response = *va
| if (!((compare_add ^ *va) & compare_add_mask)) then
|     *va = (*va & ~(swap_mask)) | (swap & swap_mask)
|
| return atomic_response

The additional operands are carried in the Extended Transport Header.
Atomic response generation and the packet format for MskCmpSwap are
the same as for standard IB atomic operations.
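
Put differently, the pseudocode corresponds to the plain C model below.  The
function name is made up for illustration; on the wire, the responder's HCA
performs this read-modify-write atomically on the target memory.

| #include <stdint.h>
|
| /* Host-side model of MskCmpSwap; not atomic, illustration only. */
| static uint64_t masked_cmp_swap(uint64_t *va, uint64_t compare_add,
|                                 uint64_t compare_add_mask,
|                                 uint64_t swap, uint64_t swap_mask)
| {
|         uint64_t atomic_response = *va;
|
|         /* Compare only the bits selected by compare_add_mask... */
|         if (!((compare_add ^ *va) & compare_add_mask))
|                 /* ...and swap only the bits selected by swap_mask. */
|                 *va = (*va & ~swap_mask) | (swap & swap_mask);
|
|         return atomic_response;
| }

Note that the compared portion and the swapped portion may be different or
overlapping bit ranges of the same 64-bit word.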

* Masked Fetch and Add (MFetchAdd)

The MFetchAdd atomic operation extends the functionality of the
standard IB FetchAdd by allowing the user to split the target into
multiple fields of selectable length.  The atomic add is done
independently on each of these fields.  A set bit in the
field_boundary parameter marks a field boundary.  The pseudocode
below describes the operation:

| bit_adder(ci, b1, b2, *co)
| {
|	value = ci + b1 + b2
|	*co = !!(value & 2)
|
|	return value & 1
| }
|
| #define MASK_IS_SET(mask, attr)      (!!((mask)&(attr)))
| bit_position = 1
| carry = 0
| atomic_response = 0
|
| for i = 0 to 63
| {
|         if (i != 0)
|                 bit_position = bit_position << 1
|
|         bit_add_res = bit_adder(carry, MASK_IS_SET(*va, bit_position),
|                                 MASK_IS_SET(compare_add, bit_position), &new_carry)
|         if (bit_add_res)
|                 atomic_response |= bit_position
|
|         carry = ((new_carry) && (!MASK_IS_SET(compare_add_mask, bit_position)))
| }
|
| return atomic_response
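
For illustration only, the bit-serial pseudocode can be collapsed into the
plain C model below; here compare_add_mask plays the role of the
field_boundary parameter, since a set bit in it stops carry propagation into
the next field.  The function name is hypothetical and, following the
pseudocode verbatim, the model returns the computed per-field sum.

| #include <stdint.h>
|
| /* Host-side model of MFetchAdd; not atomic, illustration only. */
| static uint64_t masked_fetch_add(uint64_t *va, uint64_t compare_add,
|                                  uint64_t compare_add_mask)
| {
|         uint64_t atomic_response = 0;
|         unsigned int carry = 0;
|         int i;
|
|         for (i = 0; i < 64; i++) {
|                 uint64_t bit = UINT64_C(1) << i;
|                 unsigned int sum = carry + !!(*va & bit) + !!(compare_add & bit);
|
|                 if (sum & 1)
|                         atomic_response |= bit;
|
|                 /* A set mask bit marks a field boundary: drop the carry. */
|                 carry = (sum >> 1) && !(compare_add_mask & bit);
|         }
|
|         return atomic_response;
| }

With compare_add_mask equal to zero this reduces to an ordinary 64-bit add;
setting, for example, bits 15, 31, and 47 splits the target into four
independent 16-bit counters.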

Signed-off-by: Vladimir Sokolovsky <vlad@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
Date: 2010-04-21 16:37:48 -07:00
File              Last commit                                           Date
ib_addr.h         RDMA/cm: fix loopback address support                 2009-11-19 13:26:06 -08:00
ib_cache.h
ib_cm.h           trivial: fix typo "to to" in multiple files           2009-09-21 15:14:55 +02:00
ib_fmr_pool.h
ib_mad.h
ib_marshall.h
ib_pack.h         IB/core: Fix and clean up ib_ud_header_init()         2010-02-24 14:54:10 -08:00
ib_sa.h           RDMA/ucma: Add option to manually set IB path         2009-11-16 09:30:33 -08:00
ib_smi.h
ib_umem.h
ib_user_cm.h
ib_user_mad.h
ib_user_sa.h      RDMA/ucma: Add option to manually set IB path         2009-11-16 09:30:33 -08:00
ib_user_verbs.h
ib_verbs.h        IB/core: Add support for masked atomic operations     2010-04-21 16:37:48 -07:00
iw_cm.h
Kbuild
rdma_cm_ib.h
rdma_cm.h         RDMA/cm: Remove unused definition of RDMA_PS_SCTP     2010-02-11 15:40:25 -08:00
rdma_user_cm.h    RDMA/ucma: Add option to manually set IB path         2009-11-16 09:30:33 -08:00