Avoid compiling ol_rx_reorder.c and ol_rx_reorder_timeout.c
for the low latency data path; compile them only for the
high latency data path.
Change-Id: I1f3819fa093766abba87e5dc6dc44e6d2188740b
CRs-Fixed: 2506005
Clean up ol_txrx_get_tx_resource to be peer MAC address based
instead of local peer ID based.
Change-Id: Id7ac4b5152c782d3475d9fad59f8f835102483cc
CRs-Fixed: 2508132
The current WLAN host driver dump stats does not separately
count packets dropped because the firmware has run out of
TX descriptors. Add such a counter, which helps with KPI
tuning.
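A minimal sketch of such a counter, assuming a per-pdev stats field for
this drop reason; the helper and field names below are illustrative, not
the actual implementation:

    #include "qdf_nbuf.h"

    /* Illustrative only: record frames dropped because the target ran
     * out of TX descriptors, so dump-stats can report them separately. */
    static inline void ol_tx_count_fw_no_desc_drop(struct ol_txrx_pdev_t *pdev,
                                                   qdf_nbuf_t msdu)
    {
        pdev->stats.pub.tx.dropped.no_desc_pkts++;   /* assumed stats field */
        qdf_nbuf_free(msdu);                          /* drop the frame */
    }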
Change-Id: I1a72acbc4f1f861c2013a1ef1a95b73acccd6b53
CRs-Fixed: 2507410
Rename API ol_txrx_get_vdev_by_sta_id to ol_txrx_get_vdev_by_peer_addr
and clean up ol_txrx_get_vdev_by_peer_addr to be peer MAC address based
instead of local peer ID based.
Change-Id: Ie3b8a1d97b5196e7306e5641cb894f31b8abe154
CRs-Fixed: 2504565
Currently DP RX threads use the same wait_q for all operations.
The problem is that when there is traffic for only one thread,
the other threads are also woken up momentarily.
This wastes power and is inefficient.
Use different wait queues for different threads.
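A minimal sketch of the per-thread wait queue arrangement, assuming a
simplified thread context; names are illustrative, not the actual
dp_rx_thread code:

    #include <linux/wait.h>
    #include <linux/skbuff.h>
    #include <linux/bitops.h>

    /* Each RX thread owns its wait queue, so posting work to one thread
     * no longer momentarily wakes the others. */
    struct rx_thread_ctx {
        struct sk_buff_head nbuf_queue;
        wait_queue_head_t wait_q;        /* per-thread, previously shared */
        unsigned long event_flag;
    };

    static void rx_thread_post(struct rx_thread_ctx *ctx, struct sk_buff *skb)
    {
        skb_queue_tail(&ctx->nbuf_queue, skb);
        set_bit(0, &ctx->event_flag);            /* RX post event bit */
        wake_up_interruptible(&ctx->wait_q);     /* wakes only this thread */
    }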
CRs-Fixed: 2495719
Change-Id: I689659b7aa0ab93b7e2f009d2dc7fe741b66ee78
When CONFIG_FEATURE_HL_GROUP_CREDIT_FLOW_CONTROL is enabled, each group
has its own credit limit.
It may happen that when the High Latency TX Scheduler selects a category,
the txq at the head belongs to a group whose credits are below that
group's "credit_reserve". In this case the scheduler returns without
downloading any frames, even though another group may have both credits
and frames to be downloaded.
The scheduler is called again only if there is a credit update from FW or
a packet arrives from the network stack, at which point the next txq,
belonging to a group with sufficient credits, is picked up.
It is seen that sometimes there is no credit update from FW (since the
host has sufficient credits) and the network stack also does not
transmit packets since it has already queued packets in the driver's
queue. In such a case the scheduler is not called and throughput drops
to zero even though there are enough credits on the host.
To avoid this situation, when the scheduler is unable to download
packets from a txq because its group does not have enough credits,
iterate over to the next txq in the chosen category and download its
packets. Exit from the scheduler once it is either able to download
from some txq OR unable to download from any txq.
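A hedged sketch of the iteration described above; the structures and
helper function here are simplified stand-ins, not the actual
ol_tx_sched code:

    #include <linux/list.h>

    /* Hypothetical, simplified structures */
    struct sched_group    { int credit; int credit_reserve; };
    struct sched_txq      { struct list_head node; struct sched_group *group; };
    struct sched_category { struct list_head txq_list; };

    int download_frames_from_txq(struct sched_txq *txq);  /* assumed helper */

    static int sched_category_download_sketch(struct sched_category *cat)
    {
        struct sched_txq *txq;

        list_for_each_entry(txq, &cat->txq_list, node) {
            /* skip txqs whose group cannot spend beyond its reserve */
            if (txq->group->credit <= txq->group->credit_reserve)
                continue;
            /* service the first txq that has usable credits, then exit */
            return download_frames_from_txq(txq);
        }
        return 0;   /* no txq in this category could be serviced */
    }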
Change-Id: I6143d5c3aa40761d1997846896e5e77435252b26
CRs-Fixed: 2485819
Clean up ol_txrx_clear_peer to be peer MAC address based
instead of local peer ID based.
Change-Id: I63154508e6a08f973a4c602de58217e6bf23d683
CRs-Fixed: 2503737
In case of the high latency data path, addba/delba messages
need to be handled to support partial reorder.
This reverts commit b8919e14c5
and fixes the following security issues in ol_tx_addba_handler:
1. Handle the memory allocation failure scenario.
2. Free the array memory before assigning new memory, to avoid
a memory leak.
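A hedged illustration of the two fixes, assuming the reorder state keeps
a dynamically allocated element array; the structure and field names are
simplified stand-ins, not the actual handler code:

    #include "qdf_mem.h"

    struct rx_reorder_tid { void *array; };   /* simplified stand-in */

    static void rx_reorder_setup_sketch(struct rx_reorder_tid *rx_tid,
                                        unsigned int array_size)
    {
        void *new_array = qdf_mem_malloc(array_size);

        if (!new_array)
            return;                        /* handle allocation failure */

        if (rx_tid->array)
            qdf_mem_free(rx_tid->array);   /* free old array, avoid leak */
        rx_tid->array = new_array;
    }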
Change-Id: I4f577c9e8bcb40f70ffb1b305659a059eac68d8d
CRs-Fixed: 2488966
Clean up ol_txrx_register_peer to be peer MAC address based
instead of local peer ID based.
Change-Id: I59fd19c0185d8fe89563ac78bc9e8a8117c12ed1
CRs-Fixed: 2503182
On some platforms, the compiler reports the error "old_credit is used
uninitialized". Initialize old_credit before using it.
Change-Id: I06351bba0abdfc5efb32406d1d245f8d8c658684
CRs-Fixed: 2495209
This is a revert of Change-Id: I9a9554ef0aa9288bf5abe22cd2513d8cc41c29d4
When peer_unmap_timer_handler runs for multiple peers, the same work
is INITed for multiple peers, as this work is not per-peer.
Each INIT updates the work param, i.e. the peer pointer, which
leads to deletion of the wrong peer.
Remove the work INIT and the scheduling of
"peer_unmap_timer_work_function", as cds_trigger_recover() already
takes care of the atomic context.
Change-Id: Ida0a50f27cfe4c08763b359dab51c82e757ec100
CRs-Fixed: 2498498
Rome currently supports offload for only two IPA interfaces;
using the default value of 3 causes IPA pipe setup failures.
So export the maximum number of IPA interfaces, and let the
build file set the appropriate value according to the
requirements of each chipset and platform.
Also, the sa415 platform supports SMMU, so enable SMMU for it.
Change-Id: I2de31bcb4d38f5e7964d2cbdc2fc6f143eef510d
CRs-Fixed: 2480627
Post RX MIC error information to HDD via a new HDD MIC
error callback (hdd_rx_mic_error_ind) registered to the
.rx_mic_error member in dp_ol_if_ops.
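A minimal sketch of the registration described above; only the member
name .rx_mic_error and the callback hdd_rx_mic_error_ind come from this
change, and treating dp_ol_if_ops as the ops structure type here is an
assumption:

    /* HDD supplies the MIC error handler; DP invokes it via the ops */
    static struct dp_ol_if_ops dp_ol_if_ops_sketch = {
        .rx_mic_error = hdd_rx_mic_error_ind,
    };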
Change-Id: Ia1e2b78a94dddba48937995ecf62fb5a7ae4139d
CRs-Fixed: 2488452
In htt_ipa_uc_get_resource, rx2_rdy_ring comes from
pdev->ipa_uc_rx_rsc.rx2_ind_ring and rx2_proc_done_idx comes
from pdev->ipa_uc_rx_rsc.rx2_ipa_prc_done_idx, but
rx2_ind_ring and rx2_ipa_prc_done_idx are not used for
WDI 1.0; they are only used for WDI 2.0 and are initialized in
htt_rx_ipa_uc_alloc_wdi2_rsc. For Rome, which uses WDI 1.0,
these two variables are never initialized and are therefore
NULL pointers. This change fixes the NULL pointer dereference
for WDI 1.0 when CONFIG_IPA_WDI_UNIFIED_API is defined, using
QCA_WIFI_3_0 to distinguish the WDI 2.0 and WDI 1.0 IPA RX
params.
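A hedged sketch of the guard, using QCA_WIFI_3_0 as the commit describes;
the exact field accesses below are assumptions:

    /* rx2_ind_ring and rx2_ipa_prc_done_idx are allocated only by
     * htt_rx_ipa_uc_alloc_wdi2_rsc() (WDI 2.0), so touch them only when
     * QCA_WIFI_3_0 is defined; on WDI 1.0 targets such as Rome they are
     * NULL. */
    #ifdef QCA_WIFI_3_0
        *rx2_rdy_ring = pdev->ipa_uc_rx_rsc.rx2_ind_ring->mem_info;
        *rx2_proc_done_idx =
            pdev->ipa_uc_rx_rsc.rx2_ipa_prc_done_idx->mem_info;
    #endif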
Change-Id: I0378753dcedde4f398885d930a4cbbb2c854c110
CRs-Fixed: 2483879
In case of TSO, the same buffer results in multiple tx_desc after
segmentation. To avoid multiple frees of the skb, ref_cnt is used.
Currently ref_cnt is incremented twice for the 1st segment and
not incremented for the last segment, and in case of a failure from
ce_send_fast, ref_cnt is decremented twice.
This logic does not work when a TSO packet has a segment count of 1
and a ce_send_fast failure is observed.
So, change the logic to increment ref_cnt only once for the 1st segment
and avoid decrementing ref_cnt twice in case of a ce_send_fast failure.
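A hedged sketch of the corrected reference handling; the flags and
surrounding unwind are illustrative, not the actual ol_tx/HIF code:

    /* Take exactly one extra skb reference for the first TSO segment,
     * and drop exactly one reference if ce_send_fast() fails, so a
     * single-segment TSO packet no longer underflows the refcount. */
    if (is_tso && is_first_segment)
        qdf_nbuf_ref(msdu);                  /* one extra ref, taken once */

    if (ce_send_fast(ce_tx_hdl, msdu, transfer_id, download_len) == 0) {
        if (is_tso)
            qdf_nbuf_free(msdu);             /* release only that one ref */
        /* then unwind the tx descriptor as usual */
    }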
Change-Id: Ia85a6a8f905310b210d6f480a004feb2528a31d7
CRs-Fixed: 2469773
Affine DP RX Threads to perf cluster in high throughput scenarios.
High throughput is detected using existing logic from the bandwidth
timer.
Change-Id: Ieb98c6930807ba42be7f5b4d0b8a78dfb197ba27
CRs-Fixed: 2474322
qcacld-2.0 to qcacld-3.0 propagation
This change adds support for driver-based TCP
delayed ack to increase TCP RX performance on
third-party platforms which don't support the kernel
TCP delayed ack feature.
TCP delayed ack depends on count and timer
values; whichever is reached first triggers
sending the TCP ack.
This feature can be controlled through ini values.
gDriverDelAckTimerValue - timer value in ms
gDriverDelAckPktCount - delayed ack count
gDriverDelAckEnable - enable/disable feature
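An illustrative ini fragment using the items listed above; the values
shown are examples for this sketch, not recommended defaults:

    gDriverDelAckEnable=1
    gDriverDelAckPktCount=20
    gDriverDelAckTimerValue=20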
Change-Id: I8105bbb90965295b5a4aefeb00d344a90155974d
CRs-fixed: 2414224
Add QCA_LL_PDEV_TX_FLOW_CONTROL for platforms where both
QCA_LL_LEGACY_TX_FLOW_CONTROL and QCA_LL_TX_FLOW_CONTROL_V2
are disabled, to avoid frame drops in the driver which lead to bad TCP
TX throughput. Change NUM_TX_QUEUES to 5 for this case to avoid invalid
memory access in wlan_hdd_netif_queue_control().
Change-Id: Ifa649e31a41d1bf89eadc8cc7e9520f0e27b9fe4
CRs-Fixed: 2466996
In the TSO-enabled case, update the HTC header payload length
after adjusting the download length for TSO. Also initialize the
download length for every segment, to avoid sending a wrong
payload length.
Change-Id: Ie63d11e5543429d00e40864191f5e7d6a11a689f
CRs-Fixed: 2454727
In ol_tx_download_done_hl_free(), when 'status != A_OK',
tx_desc and netbuf are freed in ol_tx_download_done_base() irrespective
of the reference count on tx_desc.
Hence return after ol_tx_download_done_base() if 'status != A_OK'.
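A minimal sketch of the control-flow fix (simplified; only the early
return and the reason for it are the point here):

    /* ol_tx_download_done_base() already frees tx_desc and netbuf when
     * status != A_OK, so do not touch them again afterwards. */
    ol_tx_download_done_base(txrx_pdev, status, msdu, msdu_id);
    if (status != A_OK)
        return;          /* desc and netbuf were freed above */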
Change-Id: I2e55d178abc0c2cf30d0f474962f4c06e5c8e327
CRs-Fixed: 2442568
The host needs to fill the netbuf with qtime instead of tsf, so the
host needs to add tsf64 enable/disable related functions
and definitions to sync with FW.
tsf64_time is newly added to the FW/host structure, so the host
needs to add parse functions to get tsf64_time from tx_desc.
Change-Id: Ieea0d8f905eb57629d279f8da0e811857b760b1f
CRs-Fixed: 2444456
The host needs to fill the netbuf with qtime instead of tsf, so the host
first needs to set enable_ppdu_end to 1, so that FW will pass the
ppdu_end contents to the host and the host can translate tsf64_time
into qtime.
tsf64_time is newly added to the FW structure, so the host needs to
add it accordingly to struct hdd_adapter and keep it updated
in the time synchronization function hdd_update_timestamp.
Change-Id: Ib19ac1411c4e17624c012f188297c9f2122642d2
CRs-Fixed: 2444456
Move the enum WDI_EVENT from wdi_event.h to
cdp_txrx_stats_struct.h as it is used in
common datapath.
Change-Id: If3a2dcc481d59e6615e4a50ffbb721bf61fb75c2
CRs-Fixed: 2449966
Add stop_th and start_th for QCA_LL_TX_FLOW_CONTROL_V2 disabled
platforms, which use a pdev-based tx_desc pool. Change the pdev tx_desc
pool size from 1056 to 900; the default stop_th is 15% and start_th is
25%, exactly the same settings as QCA_LL_TX_FLOW_CONTROL_V2. Pause netif
TX queues for all vdevs when stop_th is reached, instead of dropping
frames. Reducing the pdev pool size can significantly reduce firmware
WMM drops; both host and firmware frame drops lead to bad TCP throughput.
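A small worked example of what those percentages translate to with the
new 900-entry pool; the macro names here are illustrative:

    #define PDEV_TX_DESC_POOL_SIZE 900
    #define STOP_TH_PERCENT        15     /* pause netif queues */
    #define START_TH_PERCENT       25     /* resume netif queues */

    /* pause when free descriptors fall to ~135, resume at ~225 */
    int stop_threshold  = PDEV_TX_DESC_POOL_SIZE * STOP_TH_PERCENT / 100;
    int start_threshold = PDEV_TX_DESC_POOL_SIZE * START_TH_PERCENT / 100;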
Change-Id: I77daf8c9fdef624f8ec479885b7705deb1fef142
CRs-Fixed: 2436772
Modify the get tx success ack count API to support the
lithium datapath by passing a pdev reference.
Change-Id: Ibf4396bba941fd4f7e1dc55ca24534fecf54e01e
CRs-Fixed: 2438716
In the TSO case, if the EIT header is less than 64 bytes in length,
it will result in unauthorized access to memory that has not
been DMA mapped.
For the TSO path, adjust the packet download length before the call
to ce_send_fast(), so that the excess delta is taken into
account and handled.
Change-Id: I049f40afb87c66ad5544da583db27d066fe12453
CRs-Fixed: 2439186
Memory optimization for the QCS403 platform, a 1x1 chip. Reduce the CE1
HTT data dest ring buffer from 512 to 256, reduce the CE2 WMI dest ring
buffer from 128 to 64, reduce the CE9 & CE10 dest ring buffers from 512
to 64, and disable CE11 pktlog. Note: this change only affects a specific
WLAN build config for extreme memory saving; for debug purposes, there is
another build selecting the default WLAN config for reference HW.
Change-Id: I868e74b09cdb11df3dccaa3f9e051da55724983d
CRs-Fixed: 2432631
Do not change the bus/target delta for QCN7605. All the credits
received from FW will be released to the scheduler.
Change-Id: I17dbd1a4545d8b577ea521773c17506a0fc818cf
CRs-Fixed: 2423138
On association completion, RX handles are registered with the DP layer
as a part of cdp_vdev_register. It is possible that immediately after
association, the host receives RX packets before the RX handles have
been registered with the DP layer for the vdev.
Drop packets in DP RX Thread if OS RX handles are not found for the
vdev.
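A hedged sketch of the drop path, using a simplified vdev context; the
structure and callback names are illustrative, not the actual
dp_rx_thread / cdp structures:

    #include <linux/skbuff.h>

    struct vdev_ctx_sketch {
        void (*osif_rx)(void *osif_dev, struct sk_buff *skb);
        void *osif_dev;
    };

    static void dp_rx_deliver_sketch(struct vdev_ctx_sketch *vdev,
                                     struct sk_buff *skb)
    {
        /* RX handles are registered in cdp_vdev_register(); until then,
         * drop the frame rather than calling through a NULL handle. */
        if (!vdev || !vdev->osif_rx) {
            dev_kfree_skb_any(skb);
            return;
        }
        vdev->osif_rx(vdev->osif_dev, skb);
    }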
Change-Id: I3bbd489ec9c5e6f6267521818663b123a85bb3f9
CRs-Fixed: 2419376
Per the Linux Kernel coding style, as enforced by the kernel
checkpatch script, pointers should not be explicitly compared to
NULL. Therefore within dp replace any such comparisons with logical
operations performed on the pointer itself.
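For example (the variables and helper below are illustrative):

    /* checkpatch-preferred: test the pointer itself */
    if (!peer)                  /* instead of: if (peer == NULL) */
        return;

    if (vdev)                   /* instead of: if (vdev != NULL) */
        process_vdev(vdev);     /* hypothetical helper */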
Change-Id: I6c5589e430bdd8687122337fe88fb84ba72bab60
CRs-Fixed: 2418391
To avoid using qcacld code in cmn when getting the wlan op mode,
use the op mode info from the vdev rather than the adapter.
Change-Id: If8432aae12800884e3a4567d99319afcdfa9d1f5
CRs-Fixed: 2412315
Currently the rx_arp_resp_count stat is not updated for ARP packets
during RX in htt_rx_amsdu_rx_in_order_pop_ll, as the packet type is
not marked as QDF_NBUF_CB_PACKET_TYPE_ARP.
Also, the track_arp_ip member of the adapter is updated during NUD SET,
but track_arp_ip of hdd_ctx is used inside hdd_hard_start_xmit
and hdd_rx_packet_cbk to compare the ARP src IP.
Update the packet type to QDF_NBUF_CB_PACKET_TYPE_ARP for IPv4 ARP
packets in htt_rx_amsdu_rx_in_order_pop_ll.
Use the track_arp_ip member of the adapter to compare the ARP src IP.
Change-Id: I58a678caa8ce4b54b583f76cfcbbb4f46443f448
CRs-Fixed: 2405335
Remove the flag CONFIG_PER_VDEV_TX_DESC_POOL.
The legacy per-vdev flow control implementation under
CONFIG_PER_VDEV_TX_DESC_POOL was never used in the cld3.0 driver.
For Genoa, a new per-vdev flow control implementation for HL systems
was added under the flag QCA_HL_NETDEV_FLOW_CONTROL.
Hence remove CONFIG_PER_VDEV_TX_DESC_POOL and replace it with
QCA_HL_NETDEV_FLOW_CONTROL or QCA_LL_LEGACY_TX_FLOW_CONTROL wherever
applicable.
Change-Id: Ibdf88e60cff7d3be46924ce7605f468781b5b856
CRs-Fixed: 2373790