Commit Graph

32 Commits

Author SHA1 Message Date
Roland Dreier
d2ae16d576 IB/mlx4: Endianness annotations
Trivial fixes to stamp_send_wqe().

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-04-16 21:01:07 -07:00
Jack Morgenstein
ea54b10c77 IB/mlx4: Use multiple WQ blocks to post smaller send WQEs
The ConnectX HCA supports shrinking WQEs, so that a single work request
can be made of multiple units of wqe_shift.  This way, WRs can differ
in size, and do not have to be a power of 2 in size, saving memory and
speeding up send WR posting.  Unfortunately, if we do this then the
wqe_index field in CQEs can't be used to look up the WR ID anymore, so
our implementation does this only if selective signaling is off.

Further, on 32-bit platforms, we can't use vmap() to make the QP
buffer virtually contiguous. Thus we have to use constant-sized WRs to
make sure a WR is always fully within a single page-sized chunk.

Finally, we use WRs with the NOP opcode to avoid wrapping around the
queue buffer in the middle of posting a WR, and we set the
NoErrorCompletion (NEC) bit to avoid getting completions with error for
NOP WRs.  However, NEC is only supported starting with firmware 2.2.232,
so we use constant-sized WRs for older firmware.  And, since MLX QPs
only support SEND, we use constant-sized WRs for them as well.

When stamping during NOP posting, do the stamping only after setting
the NOP WQE's valid bit.

Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-02-08 13:30:02 -08:00
Roland Dreier
1c69fc2a90 IB/mlx4: Consolidate code to get an entry from a struct mlx4_buf
We use struct mlx4_buf for kernel QP, CQ and SRQ buffers, and the code
to look up an entry is duplicated in get_cqe_from_buf() and the QP and
SRQ versions of get_wqe().  Factor this out into mlx4_buf_offset().

This will also make it easier to switch over to using vmap() for buffers.
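
The consolidated helper ends up roughly like this (a sketch; the exact
struct mlx4_buf field names may differ slightly from the real code):

static inline void *mlx4_buf_offset(struct mlx4_buf *buf, int offset)
{
    if (BITS_PER_LONG == 64 || buf->nbufs == 1)
        return buf->direct.buf + offset;
    else
        return buf->page_list[offset >> PAGE_SHIFT].buf +
               (offset & (PAGE_SIZE - 1));
}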

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2008-02-06 21:07:54 -08:00
Roland Dreier
96db0e0335 IB/mlx4: Lock SQ lock in mlx4_ib_post_send()
Because of a typo, mlx4_ib_post_send() takes rq.lock -- the same lock
that mlx4_ib_post_recv() takes.  Correct the code so that the intended
sq.lock is taken when posting a send.
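
The fix is a one-character change in mlx4_ib_post_send(), roughly:

    /* before (typo): serializes sends on the receive queue's lock */
    spin_lock_irqsave(&qp->rq.lock, flags);

    /* after: sends serialize on the send queue's own lock */
    spin_lock_irqsave(&qp->sq.lock, flags);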

Noticed by Yossi Leybovitch and pointed out by Jack Morgenstein from
Mellanox.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-30 10:53:54 -07:00
Jack Morgenstein
839041329f IB/mlx4: Sanity check userspace send queue sizes
Add sanity checks to send queue sizes passed in from userspace. The
minimum sq stride value below is taken from the MT25408 PRM (section
11.10, Table 306, log_sq_stride definition).

Without this check, userspace can submit arbitrarily large/small
values for the number of WQEs and the stride, which can crash the
kernel.
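
The added check is along these lines (a sketch; MLX4_IB_MIN_SQ_STRIDE
holds the PRM minimum mentioned above):

    /* Sanity check SQ size before proceeding */
    if (ucmd.log_sq_bb_count > ilog2(dev->dev->caps.max_wqes) ||
        ucmd.log_sq_stride >
            ilog2(roundup_pow_of_two(dev->dev->caps.max_sq_desc_sz)) ||
        ucmd.log_sq_stride < MLX4_IB_MIN_SQ_STRIDE)
        return -EINVAL;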

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-18 09:27:26 -07:00
Roland Dreier
2242fa4f04 IB/mlx4: Use __set_data_seg() in mlx4_ib_post_recv()
Use a __set_data_seg() helper in mlx4_ib_post_recv() too; in addition
to making the code easier to read, this also allows gcc to generate
better code -- on x86_64:

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-8 (-8)
function                                     old     new   delta
mlx4_ib_post_recv                            359     351      -8

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-10-09 19:59:05 -07:00
Jack Morgenstein
6e694ea33e IB/mlx4: Fix data corruption triggered by wrong headroom marking order
This is an addendum to commit 0e6e7416 ("IB/mlx4: Handle new FW
requirement for send request prefetching").  We also need to handle
prefetch marking properly for S/G segments, or else the HCA may end up
processing S/G segments that are not fully written and end up sending
the wrong data.  This can actually cause data corruption in practice,
especially on systems with relatively slow CPUs (where the HCA is more
likely to prefetch while the CPU is in the middle of writing a work
request into memory).

We write S/G segments in reverse order into the WQE, in order to
guarantee that the first dword of all cachelines containing S/G
segments is written last (overwriting the headroom invalidation
pattern).  The entire cacheline will thus contain valid data when the
invalidation pattern is overwritten.
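
Concretely, the data segments in mlx4_ib_post_send() are now filled in
backwards, roughly:

    dseg = wqe;
    dseg += wr->num_sge - 1;

    /* Write segments in reverse order so the first dword of each
     * cacheline (holding the invalidation pattern) is written last. */
    for (i = wr->num_sge - 1; i >= 0; --i, --dseg)
        set_data_seg(dseg, wr->sg_list + i);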

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-09-23 13:03:22 -07:00
Roland Dreier
e0f5d99e8d IB/mlx4: Whitespace fix
Remove extra dumb-looking blank line that snuck in somehow.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-28 20:52:44 -07:00
Roland Dreier
23f1b38481 IB/mlx4: Fix error path in create_qp_common()
The error handling code at err_wrid in create_qp_common() does not
handle a userspace QP attached to an SRQ correctly, since it ends up
in the else clause of the if statement.  This means it tries to
kfree() the uninitialized qp->sq.wrid and qp->rq.wrid pointers.  Fix
this so we only free the wrid arrays for kernel QPs.

Pointed out by Michael S. Tsirkin <mst@dev.mellanox.co.il>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-20 21:19:43 -07:00
Florin Malita
f5b404317b IB/mlx4: Fix leaks in __mlx4_ib_modify_qp
The temporarily allocated struct mlx4_qp_context *context is leaked on
several error paths.  The patch takes advantage of the return value
'err' being preinitialized to -EINVAL.
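
The error paths now jump to the common cleanup label instead of
returning directly, roughly as follows (a sketch; the exact validation
checks differ):

    int err = -EINVAL;

    context = kzalloc(sizeof *context, GFP_KERNEL);
    if (!context)
        return -ENOMEM;

    if (attr_mask & IB_QP_ALT_PATH &&
        attr->alt_port_num > dev->dev->caps.num_ports)
        goto out;        /* was "return -EINVAL", which leaked context */

    /* ... */

out:
    kfree(context);
    return err;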

Spotted by Coverity (CID 1768).

Signed-off-by: Florin Malita <fmalita@gmail.com>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-20 21:19:43 -07:00
Roland Dreier
0fbfa6a906 IB/mlx4: Factor out setting other WQE segments
Factor code to set remote address, atomic and datagram segments out of
mlx4_ib_post_send() into small helper functions.  This doesn't change
the generated code in any significant way, and makes the source easier
on the eyes.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-18 11:47:55 -07:00
Roland Dreier
d420d9e32f IB/mlx4: Factor out setting WQE data segment entries
Factor code to set data segment entries out of mlx4_ib_post_send()
into set_data_seg().  This cleans up the code and lets the compiler do
a better job -- on x86_64:

add/remove: 0/0 grow/shrink: 0/1 up/down: 0/-16 (-16)
function                                     old     new   delta
mlx4_ib_post_send                           1598    1582     -16
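
The helper itself is tiny (a sketch of its shape):

static void set_data_seg(struct mlx4_wqe_data_seg *dseg, struct ib_sge *sg)
{
    dseg->byte_count = cpu_to_be32(sg->length);
    dseg->lkey       = cpu_to_be32(sg->lkey);
    dseg->addr       = cpu_to_be64(sg->addr);
}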

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-18 11:46:27 -07:00
Roland Dreier
7f5eb9bb8c IB/mlx4: Return receive queue sizes for userspace QPs from query QP
Return the receive queue sizes for both userspace QPs and kernel QPs
(not just kernel QPs) from mlx4_ib_query_qp().  Also zero the send
queue sizes for userspace QPs to avoid a possible information leak,
and set the max_inline_data for kernel QPs to 0 since inline sends are
not supported for kernel QPs.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-17 20:59:02 -07:00
Dotan Barak
8fcea95a2a IB/mlx4: Take sizeof the correct pointer in call to memset()
When clearing the ib_ah_attr parameter in to_ib_ah_attr(), use sizeof
*ib_ah_attr instead of sizeof *path.  This is the same bug as was
fixed for mthca in 99d4f22e ("IB/mthca: Use correct structure size in
call to memset()"), but the code was cut and pasted into mlx4 before the
fix was merged.
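
The change in to_ib_ah_attr() is just the sizeof operand:

    memset(ib_ah_attr, 0, sizeof *path);         /* before: wrong struct's size */
    memset(ib_ah_attr, 0, sizeof *ib_ah_attr);   /* after */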

Signed-off-by: Dotan Barak <dotanb@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-17 18:37:38 -07:00
Jack Morgenstein
1c27cb71aa IB/mlx4: Fix port returned from query QP for QPs in INIT state
When a QP is in the INIT state, the sched_queue field hasn't been given 
to the firmware yet, so the firmware cannot return the value when the QP 
is queried.  To handle this, use the port number that is saved in the 
driver's QP data structure.

Found by Dotan Barak and Yaron Gepstein of Mellanox.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-17 18:37:38 -07:00
Jack Morgenstein
586bb586ae IB/mlx4: Fix flow label returned from query QP
Correct the mask used to get the flow label, since the field is 20 bits, 
not 24 bits.
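
The flow label shares a dword with the traffic class, so only the low
20 bits belong to it, roughly (a sketch; the path field name is
approximate):

    ib_ah_attr->grh.flow_label =
        be32_to_cpu(path->tclass_flowlabel) & 0xfffff;   /* was & 0xffffff */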

Found by Dotan Barak and Yaron Gepstein of Mellanox. 

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-17 18:37:38 -07:00
Jack Morgenstein
6a775e2ba4 IB/mlx4: Implement query QP
Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-07-12 15:41:00 -07:00
Roland Dreier
e61ef2416b IB/mlx4: Make sure inline data segments don't cross a 64 byte boundary
Inline data segments in send WQEs are not allowed to cross a 64 byte
boundary.  We use inline data segments to hold the UD headers for MLX
QPs (QP0 and QP1).  A send with GRH on QP1 will have a UD header that
is too big to fit in a single inline data segment without crossing a
64 byte boundary, so split the header into two inline data segments.
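
Roughly, build_mlx_header() computes the space left before the next
64-byte boundary and splits the header when it doesn't fit (a sketch;
MLX4_INLINE_ALIGN is assumed to be 64):

    spc = MLX4_INLINE_ALIGN -
          ((unsigned long) (inl + 1) & (MLX4_INLINE_ALIGN - 1));
    if (header_size <= spc) {
        inl->byte_count = cpu_to_be32(1 << 31 | header_size);
        memcpy(inl + 1, sqp->header_buf, header_size);
    } else {
        inl->byte_count = cpu_to_be32(1 << 31 | spc);
        memcpy(inl + 1, sqp->header_buf, spc);

        inl = (void *) (inl + 1) + spc;
        memcpy(inl + 1, sqp->header_buf + spc, header_size - spc);
        /* Barrier so the HCA doesn't see the second segment's byte count
         * before its payload is written. */
        wmb();
        inl->byte_count = cpu_to_be32(1 << 31 | (header_size - spc));
    }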

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-18 09:23:47 -07:00
Roland Dreier
5ae2a7a836 IB/mlx4: Handle FW command interface rev 3
Upcoming firmware introduces command interface revision 3, which
changes the way port capabilities are queried and set.  Update the
driver to handle both the new and old command interfaces by adding a
new MLX4_FLAG_OLD_PORT_CMDS flag that is set after querying the firmware
interface revision and then using the correct interface based on the
setting of the flag.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-18 08:15:02 -07:00
Roland Dreier
54e95f8dcb IB/mlx4: Get rid of max_inline_data calculation
The calculation of max_inline_data in set_kernel_sq_size() is bogus,
since it doesn't take into account the fact that inline segments may
not cross a 64-byte boundary, and hence multiple inline segments will
probably need to be used to post large inline sends.

We don't support inline sends for kernel QPs anyway, and the field is
just zeroed out a little later, so there's no point in doing this
calculation.  Just delete it.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-18 08:13:53 -07:00
Roland Dreier
0e6e741621 IB/mlx4: Handle new FW requirement for send request prefetching
New ConnectX firmware introduces FW command interface revision 2,
which requires that for each QP, a chunk of send queue entries (the
"headroom") is kept marked as invalid, so that the HCA doesn't get
confused if it prefetches entries that haven't been posted yet.  Add
code to the driver to do this, and also update the user ABI so that
userspace can request that the prefetcher be turned off for userspace
QPs (we just leave the prefetcher on for all kernel QPs).

Unfortunately, marking send queue entries this way confuses older
firmware, so we change the driver to allow only FW command interface
revision 2.  This means that users will have to update their firmware
to work with the new driver, but the firmware is changing quickly and
the old firmware has lots of other bugs anyway, so this shouldn't be too
big a deal.

Based on a patch from Jack Morgenstein <jackm@dev.mellanox.co.il>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-18 08:13:48 -07:00
Roland Dreier
42c059ea2b IB/mlx4: Fix warning in rounding up queue sizes
Doing max(1, foo) where foo is u32 generates a warning, because 1 is a
signed constant.  Fix this by using 1U instead.
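
For example:

    u32 nwqe = cap->max_recv_wr;

    /* max() type-checks its arguments: mixing a signed 1 with a u32 gives
     * "comparison of distinct pointer types lacks a cast" */
    nwqe = roundup_pow_of_two(max(1, nwqe));     /* warns */
    nwqe = roundup_pow_of_two(max(1U, nwqe));    /* clean */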

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-12 10:52:02 -07:00
Roland Dreier
a4cd7ed86f IB/mlx4: Make sure RQ allocation is always valid
QPs attached to an SRQ must never have their own RQ, and QPs not
attached to SRQs must have an RQ with at least 1 entry.  Enforce all
of this in set_rq_size().
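
set_rq_size() ends up looking roughly like this (a sketch; field names
approximate):

    if (has_srq) {
        /* QPs attached to an SRQ must not have their own RQ */
        if (cap->max_recv_wr)
            return -EINVAL;

        qp->rq.max = qp->rq.max_gs = 0;
    } else {
        /* HW needs at least one RQ entry with at least one gather entry */
        if (is_user && (!cap->max_recv_wr || !cap->max_recv_sge))
            return -EINVAL;

        qp->rq.max    = roundup_pow_of_two(max(1U, cap->max_recv_wr));
        qp->rq.max_gs = roundup_pow_of_two(max(1U, cap->max_recv_sge));
    }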

Based on a patch by Eli Cohen <eli@mellanox.co.il>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-07 23:24:39 -07:00
Jack Morgenstein
57f01b5339 IB/mlx4: Fix zeroing of rnr_retry value in ib_modify_qp()
The code in __mlx4_ib_modify_qp() overwrites context->params1 after
the RNR retry parameter is ORed in, which results in the RNR retry
parameter always being set to 0.  Fix this by moving where we OR in
the value to later in the function, after the initial assignment of
context->params1.

Found by the Mellanox firmware group.

Signed-off-by: Jack Morgenstein <jackm@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-06-07 23:24:38 -07:00
Eli Cohen
c0be5fb5f8 IB/mlx4: Initialize send queue entry ownership bits
We need to initialize the owner bit of send queue WQEs to hardware 
ownership whenever the QP is modified from reset to init, not just 
when the QP is first allocated.  This avoids having the hardware 
process stale WQEs when the QP is moved to reset but not destroyed and 
then modified to init again. 
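
Conceptually, the RESET to INIT transition now walks the whole send
queue and initializes every WQE's owner bit (a sketch; field names
approximate):

    if (cur_state == IB_QPS_RESET && new_state == IB_QPS_INIT) {
        for (i = 0; i < qp->sq.max; ++i) {
            ctrl = get_send_wqe(qp, i);
            ctrl->owner_opcode = cpu_to_be32(1 << 31);
        }
    }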

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-24 14:02:38 -07:00
Roland Dreier
02d89b8708 IB/mlx4: Don't allocate RQ doorbell if using SRQ
If a QP is attached to a shared receive queue (SRQ), then it doesn't
have a receive queue (RQ).  So don't allocate an RQ doorbell (or map a
doorbell from userspace for userspace QPs) for that QP.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-23 15:16:08 -07:00
Eli Cohen
2446304dd6 IB/mlx4: Pass send queue sizes from userspace to kernel
Pass the number of WQEs for the send queue and their size from userspace
to the kernel to avoid having to keep the QP size calculations in sync 
between the kernel driver and libmlx4.  This fixes a bug seen with the 
current mlx4_ib driver and current libmlx4 caused by a difference in the 
calculated sizes for SQ WQEs.  Also, this gives more flexibility for 
userspace to experiment with using multiple WQE BBs for a single SQ WQE.

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-20 10:18:04 -07:00
Roland Dreier
59b0ed1212 IB/mlx4: Fix check of opcode in mlx4_ib_post_send()
wr->opcode is invalid if it's >= ARRAY_SIZE(mlx4_ib_opcode), not just
strictly >.
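
The check in mlx4_ib_post_send() becomes, roughly:

    if (wr->opcode >= ARRAY_SIZE(mlx4_ib_opcode)) {   /* was: "> ARRAY_SIZE(...)" */
        err = -EINVAL;
        goto out;
    }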

This was spotted by the Coverity checker (CID 1643).

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-19 08:51:58 -07:00
Michael S. Tsirkin
65adfa911a IB/mlx4: Fix RESET to RESET and RESET to ERROR transitions
According to the IB spec, a QP can be moved from RESET back to RESET
or to the ERROR state, but mlx4 firmware does not support this and
returns an error if we try.  Fix the RESET to RESET transition by
just returning 0 without doing anything, and fix RESET to ERROR by
moving the QP from RESET to INIT with dummy parameters and then
transitioning from INIT to ERROR.
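
In mlx4_ib_modify_qp() this works out to roughly the following (a
sketch; the real dummy attributes depend on the QP type):

    if (cur_state == IB_QPS_RESET && new_state == IB_QPS_RESET) {
        err = 0;                    /* nothing for the firmware to do */
        goto out;
    }

    if (cur_state == IB_QPS_RESET && new_state == IB_QPS_ERR) {
        /* Firmware rejects RESET->ERR, so bounce through INIT with
         * dummy parameters and then do the usual INIT->ERR move. */
        struct ib_qp_attr dummy = { .port_num = 1, .pkey_index = 0 };

        err = __mlx4_ib_modify_qp(ibqp, &dummy,
                                  IB_QP_PORT | IB_QP_PKEY_INDEX,
                                  IB_QPS_RESET, IB_QPS_INIT);
        if (err)
            goto out;
        cur_state = IB_QPS_INIT;
    }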

Signed-off-by: Michael S. Tsirkin <mst@dev.mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-19 08:51:57 -07:00
Roland Dreier
1526130351 IB/mlx4: Set GRH:HopLimit when sending globally routed MADs
This is the same issue discovered in mthca by Rolf Manderscheid
<rvm@obsidianresearch.com>.

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-19 08:51:57 -07:00
Eli Cohen
1f8f7b7a7b IB/mlx4: Fix check of max_qp_dest_rdma in modify QP
max_qp_dest_rdma is already in natural units - no need to shift.  This
was discovered by a test that deliberately requests more outstanding
atomic operations than the device supports.

Found by Sagi Rotem at Mellanox.

Signed-off-by: Eli Cohen <eli@mellanox.co.il>
Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-19 08:51:56 -07:00
Roland Dreier
225c7b1fee IB/mlx4: Add a driver for Mellanox ConnectX InfiniBand adapters
Add an InfiniBand driver for Mellanox ConnectX adapters.  Because
these adapters can also be used as ethernet NICs and Fibre Channel 
HBAs, the driver is split into two modules: 
 
  mlx4_core: Handles low-level things like device initialization and 
    processing firmware commands.  Also controls resource allocation 
    so that the InfiniBand, ethernet and FC functions can share a 
    device without stepping on each other. 
 
  mlx4_ib: Handles InfiniBand-specific things; plugs into the 
    InfiniBand midlayer. 

Signed-off-by: Roland Dreier <rolandd@cisco.com>
2007-05-08 18:00:38 -07:00