a8a0447e0d
https://source.android.com/docs/security/bulletin/2023-06-01

* tag 'ASB-2023-06-05_11-5.4' of https://android.googlesource.com/kernel/common:
  UPSTREAM: io_uring: have io_kill_timeout() honor the request references
  UPSTREAM: io_uring: don't drop completion lock before timer is fully initialized
  UPSTREAM: io_uring: always grab lock in io_cancel_async_work()
  UPSTREAM: net: cdc_ncm: Deal with too low values of dwNtbOutMaxSize
  UPSTREAM: cdc_ncm: Fix the build warning
  UPSTREAM: cdc_ncm: Implement the 32-bit version of NCM Transfer Block
  UPSTREAM: ext4: avoid a potential slab-out-of-bounds in ext4_group_desc_csum
  UPSTREAM: ext4: fix invalid free tracking in ext4_xattr_move_to_block()
  Revert "Revert "mm/rmap: Fix anon_vma->degree ambiguity leading to double-reuse""
  FROMLIST: binder: fix UAF caused by faulty buffer cleanup
  Linux 5.4.242
  ASN.1: Fix check for strdup() success
  iio: adc: at91-sama5d2_adc: fix an error code in at91_adc_allocate_trigger()
  pwm: meson: Explicitly set .polarity in .get_state()
  xfs: fix forkoff miscalculation related to XFS_LITINO(mp)
  sctp: Call inet6_destroy_sock() via sk->sk_destruct().
  dccp: Call inet6_destroy_sock() via sk->sk_destruct().
  inet6: Remove inet6_destroy_sock() in sk->sk_prot->destroy().
  tcp/udp: Call inet6_destroy_sock() in IPv6 sk->sk_destruct().
  udp: Call inet6_destroy_sock() in setsockopt(IPV6_ADDRFORM).
  ext4: fix use-after-free in ext4_xattr_set_entry
  ext4: remove duplicate definition of ext4_xattr_ibody_inline_set()
  Revert "ext4: fix use-after-free in ext4_xattr_set_entry"
  x86/purgatory: Don't generate debug info for purgatory.ro
  MIPS: Define RUNTIME_DISCARD_EXIT in LD script
  mmc: sdhci_am654: Set HIGH_SPEED_ENA for SDR12 and SDR25
  memstick: fix memory leak if card device is never registered
  nilfs2: initialize unused bytes in segment summary blocks
  iio: light: tsl2772: fix reading proximity-diodes from device tree
  xen/netback: use same error messages for same errors
  nvme-tcp: fix a possible UAF when failing to allocate an io queue
  s390/ptrace: fix PTRACE_GET_LAST_BREAK error handling
  net: dsa: b53: mmap: add phy ops
  scsi: core: Improve scsi_vpd_inquiry() checks
  scsi: megaraid_sas: Fix fw_crash_buffer_show()
  selftests: sigaltstack: fix -Wuninitialized
  Input: i8042 - add quirk for Fujitsu Lifebook A574/H
  f2fs: Fix f2fs_truncate_partial_nodes ftrace event
  e1000e: Disable TSO on i219-LM card to increase speed
  bpf: Fix incorrect verifier pruning due to missing register precision taints
  mlxfw: fix null-ptr-deref in mlxfw_mfa2_tlv_next()
  i40e: fix i40e_setup_misc_vector() error handling
  i40e: fix accessing vsi->active_filters without holding lock
  netfilter: nf_tables: fix ifdef to also consider nf_tables=m
  virtio_net: bugfix overflow inside xdp_linearize_page()
  net: sched: sch_qfq: prevent slab-out-of-bounds in qfq_activate_agg
  regulator: fan53555: Explicitly include bits header
  netfilter: br_netfilter: fix recent physdev match breakage
  arm64: dts: meson-g12-common: specify full DMC range
  ARM: dts: rockchip: fix a typo error for rk3288 spdif node
  Linux 5.4.241
  xfs: force log and push AIL to clear pinned inodes when aborting mount
  xfs: don't reuse busy extents on extent trim
  xfs: consider shutdown in bmapbt cursor delete assert
  xfs: shut down the filesystem if we screw up quota reservation
  xfs: report corruption only as a regular error
  xfs: set inode size after creating symlink
  xfs: fix up non-directory creation in SGID directories
  xfs: remove the di_version field from struct icdinode
  xfs: simplify a check in xfs_ioctl_setattr_check_cowextsize
  xfs: simplify di_flags2 inheritance in xfs_ialloc
  xfs: only check the superblock version for dinode size calculation
  xfs: add a new xfs_sb_version_has_v3inode helper
  xfs: remove the kuid/kgid conversion wrappers
  xfs: remove the icdinode di_uid/di_gid members
  xfs: ensure that the inode uid/gid match values match the icdinode ones
  xfs: merge the projid fields in struct xfs_icdinode
  xfs: show the proper user quota options
  coresight-etm4: Fix for() loop drvdata->nr_addr_cmp range bug
  watchdog: sbsa_wdog: Make sure the timeout programming is within the limits
  i2c: ocores: generate stop condition after timeout in polling mode
  ubi: Fix deadlock caused by recursively holding work_sem
  mtd: ubi: wl: Fix a couple of kernel-doc issues
  ubi: Fix failure attaching when vid_hdr offset equals to (sub)page size
  asymmetric_keys: log on fatal failures in PE/pkcs7
  verify_pefile: relax wrapper length check
  drm: panel-orientation-quirks: Add quirk for Lenovo Yoga Book X90F
  efi: sysfb_efi: Add quirk for Lenovo Yoga Book X91F/L
  i2c: imx-lpi2c: clean rx/tx buffers upon new message
  power: supply: cros_usbpd: reclassify "default case!" as debug
  net: macb: fix a memory corruption in extended buffer descriptor mode
  udp6: fix potential access to stale information
  RDMA/core: Fix GID entry ref leak when create_ah fails
  sctp: fix a potential overflow in sctp_ifwdtsn_skip
  qlcnic: check pci_reset_function result
  niu: Fix missing unwind goto in niu_alloc_channels()
  9p/xen : Fix use after free bug in xen_9pfs_front_remove due to race condition
  mtd: rawnand: stm32_fmc2: remove unsupported EDO mode
  mtd: rawnand: meson: fix bitmask for length in command word
  mtdblock: tolerate corrected bit-flips
  btrfs: fix fast csum implementation detection
  btrfs: print checksum type and implementation at mount time
  Bluetooth: Fix race condition in hidp_session_thread
  Bluetooth: L2CAP: Fix use-after-free in l2cap_disconnect_{req,rsp}
  ALSA: hda/sigmatel: fix S/PDIF out on Intel D*45* motherboards
  ALSA: firewire-tascam: add missing unwind goto in snd_tscm_stream_start_duplex()
  ALSA: i2c/cs8427: fix iec958 mixer control deactivation
  ALSA: hda/sigmatel: add pin overrides for Intel DP45SG motherboard
  ALSA: emu10k1: fix capture interrupt handler unlinking
  Revert "pinctrl: amd: Disable and mask interrupts on resume"
  irqdomain: Fix mapping-creation race
  irqdomain: Refactor __irq_domain_alloc_irqs()
  irqdomain: Look for existing mapping only once
  mm/swap: fix swap_info_struct race between swapoff and get_swap_pages()
  ring-buffer: Fix race while reader and writer are on the same page
  drm/panfrost: Fix the panfrost_mmu_map_fault_addr() error path
  net_sched: prevent NULL dereference if default qdisc setup failed
  tracing: Free error logs of tracing instances
  can: j1939: j1939_tp_tx_dat_new(): fix out-of-bounds memory access
  ftrace: Mark get_lock_parent_ip() __always_inline
  perf/core: Fix the same task check in perf_event_set_output
  ALSA: hda/realtek: Add quirk for Clevo X370SNW
  nilfs2: fix sysfs interface lifetime
  nilfs2: fix potential UAF of struct nilfs_sc_info in nilfs_segctor_thread()
  tty: serial: fsl_lpuart: avoid checking for transfer complete when UARTCTRL_SBK is asserted in lpuart32_tx_empty
  tty: serial: sh-sci: Fix Rx on RZ/G2L SCI
  tty: serial: sh-sci: Fix transmit end interrupt handler
  iio: dac: cio-dac: Fix max DAC write value check for 12-bit
  iio: adc: ti-ads7950: Set `can_sleep` flag for GPIO chip
  USB: serial: option: add Quectel RM500U-CN modem
  USB: serial: option: add Telit FE990 compositions
  usb: typec: altmodes/displayport: Fix configure initial pin assignment
  USB: serial: cp210x: add Silicon Labs IFS-USB-DATACABLE IDs
  xhci: also avoid the XHCI_ZERO_64B_REGS quirk with a passthrough iommu
  NFSD: callback request does not use correct credential for AUTH_SYS
  sunrpc: only free unix grouplist after RCU settles
  gpio: davinci: Add irq chip flag to skip set wake
  ipv6: Fix an uninit variable access bug in __ip6_make_skb()
  sctp: check send stream number after wait_for_sndbuf
  net: don't let netpoll invoke NAPI if in xmit context
  icmp: guard against too small mtu
  wifi: mac80211: fix invalid drv_sta_pre_rcu_remove calls for non-uploaded sta
  pwm: sprd: Explicitly set .polarity in .get_state()
  pwm: cros-ec: Explicitly set .polarity in .get_state()
  pinctrl: amd: Disable and mask interrupts on resume
  pinctrl: amd: disable and mask interrupts on probe
  pinctrl: amd: Use irqchip template
  smb3: fix problem with null cifs super block with previous patch
  treewide: Replace DECLARE_TASKLET() with DECLARE_TASKLET_OLD()
  Revert "treewide: Replace DECLARE_TASKLET() with DECLARE_TASKLET_OLD()"
  cgroup/cpuset: Wake up cpuset_attach_wq tasks in cpuset_cancel_attach()
  x86/PCI: Add quirk for AMD XHCI controller that loses MSI-X state in D3hot
  scsi: ses: Handle enclosure with just a primary component gracefully
  Linux 5.4.240
  gfs2: Always check inode size of inline inodes
  firmware: arm_scmi: Fix device node validation for mailbox transport
  net: sched: fix race condition in qdisc_graft()
  net_sched: add __rcu annotation to netdev->qdisc
  ext4: fix kernel BUG in 'ext4_write_inline_data_end()'
  btrfs: scan device in non-exclusive mode
  s390/uaccess: add missing earlyclobber annotations to __clear_user()
  drm/etnaviv: fix reference leak when mmaping imported buffer
  ALSA: usb-audio: Fix regression on detection of Roland VS-100
  ALSA: hda/conexant: Partial revert of a quirk for Lenovo
  NFSv4: Fix hangs when recovering open state after a server reboot
  pinctrl: at91-pio4: fix domain name assignment
  xen/netback: don't do grant copy across page boundary
  Input: goodix - add Lenovo Yoga Book X90F to nine_bytes_report DMI table
  cifs: fix DFS traversal oops without CONFIG_CIFS_DFS_UPCALL
  cifs: prevent infinite recursion in CIFSGetDFSRefer()
  Input: focaltech - use explicitly signed char type
  Input: alps - fix compatibility with -funsigned-char
  pinctrl: ocelot: Fix alt mode for ocelot
  net: mvneta: make tx buffer array agnostic
  net: dsa: mv88e6xxx: Enable IGMP snooping on user ports only
  bnxt_en: Fix typo in PCI id to device description string mapping
  i40e: fix registers dump after run ethtool adapter self test
  s390/vfio-ap: fix memory leak in vfio_ap device driver
  can: bcm: bcm_tx_setup(): fix KMSAN uninit-value in vfs_write
  net/net_failover: fix txq exceeding warning
  regulator: Handle deferred clk
  regulator: fix spelling mistake "Cant" -> "Can't"
  ptp_qoriq: fix memory leak in probe()
  scsi: megaraid_sas: Fix crash after a double completion
  mtd: rawnand: meson: invalidate cache on polling ECC bit
  mips: bmips: BCM6358: disable RAC flush for TP1
  dma-mapping: drop the dev argument to arch_sync_dma_for_*
  ca8210: Fix unsigned mac_len comparison with zero in ca8210_skb_tx()
  fbdev: au1200fb: Fix potential divide by zero
  fbdev: lxfb: Fix potential divide by zero
  fbdev: intelfb: Fix potential divide by zero
  fbdev: nvidia: Fix potential divide by zero
  sched_getaffinity: don't assume 'cpumask_size()' is fully initialized
  fbdev: tgafb: Fix potential divide by zero
  ALSA: hda/ca0132: fixup buffer overrun at tuning_ctl_set()
  ALSA: asihpi: check pao in control_message()
  md: avoid signed overflow in slot_store()
  bus: imx-weim: fix branch condition evaluates to a garbage value
  fsverity: don't drop pagecache at end of FS_IOC_ENABLE_VERITY
  ocfs2: fix data corruption after failed write
  tun: avoid double free in tun_free_netdev
  sched/fair: Sanitize vruntime of entity being migrated
  sched/fair: sanitize vruntime of entity being placed
  dm crypt: add cond_resched() to dmcrypt_write()
  dm stats: check for and propagate alloc_percpu failure
  i2c: xgene-slimpro: Fix out-of-bounds bug in xgene_slimpro_i2c_xfer()
  nilfs2: fix kernel-infoleak in nilfs_ioctl_wrap_copy()
  wifi: mac80211: fix qos on mesh interfaces
  usb: chipidea: core: fix possible concurrent when switch role
  usb: chipdea: core: fix return -EINVAL if request role is the same with current role
  usb: cdns3: Fix issue with using incorrect PCI device function
  dm thin: fix deadlock when swapping to thin device
  igb: revert rtnl_lock() that causes deadlock
  fsverity: Remove WQ_UNBOUND from fsverity read workqueue
  usb: gadget: u_audio: don't let userspace block driver unbind
  scsi: core: Add BLIST_SKIP_VPD_PAGES for SKhynix H28U74301AMR
  cifs: empty interface list when server doesn't support query interfaces
  sh: sanitize the flags on sigreturn
  net: usb: qmi_wwan: add Telit 0x1080 composition
  net: usb: cdc_mbim: avoid altsetting toggling for Telit FE990
  scsi: lpfc: Avoid usage of list iterator variable after loop
  scsi: ufs: core: Add soft dependency on governor_simpleondemand
  scsi: target: iscsi: Fix an error message in iscsi_check_key()
  selftests/bpf: check that modifier resolves after pointer
  m68k: Only force 030 bus error if PC not in exception table
  ca8210: fix mac_len negative array access
  riscv: Bump COMMAND_LINE_SIZE value to 1024
  thunderbolt: Use const qualifier for `ring_interrupt_index`
  uas: Add US_FL_NO_REPORT_OPCODES for JMicron JMS583Gen 2
  scsi: qla2xxx: Perform lockless command completion in abort path
  hwmon (it87): Fix voltage scaling for chips with 10.9mV ADCs
  platform/chrome: cros_ec_chardev: fix kernel data leak from ioctl
  Bluetooth: btsdio: fix use after free bug in btsdio_remove due to unfinished work
  Bluetooth: btqcomsmd: Fix command timeout after setting BD address
  net: mdio: thunder: Add missing fwnode_handle_put()
  hvc/xen: prevent concurrent accesses to the shared ring
  nvme-tcp: fix nvme_tcp_term_pdu to match spec
  net/sonic: use dma_mapping_error() for error check
  erspan: do not use skb_mac_header() in ndo_start_xmit()
  atm: idt77252: fix kmemleak when rmmod idt77252
  net/mlx5: Read the TC mapping of all priorities on ETS query
  bpf: Adjust insufficient default bpf_jit_limit
  keys: Do not cache key in task struct if key is requested from kernel thread
  net/ps3_gelic_net: Use dma_mapping_error
  net/ps3_gelic_net: Fix RX sk_buff length
  net: qcom/emac: Fix use after free bug in emac_remove due to race condition
  xirc2ps_cs: Fix use after free bug in xirc2ps_detach
  qed/qed_sriov: guard against NULL derefs from qed_iov_get_vf_info
  net: usb: smsc95xx: Limit packet length to skb->len
  scsi: scsi_dh_alua: Fix memleak for 'qdata' in alua_activate()
  i2c: imx-lpi2c: check only for enabled interrupt flags
  igbvf: Regard vf reset nack as success
  intel/igbvf: free irq on the error path in igbvf_request_msix()
  iavf: fix non-tunneled IPv6 UDP packet type and hashing
  iavf: fix inverted Rx hash condition leading to disabled hash
  power: supply: da9150: Fix use after free bug in da9150_charger_remove due to race condition
  net: tls: fix possible race condition between do_tls_getsockopt_conf() and do_tls_setsockopt_conf()
  Linux 5.4.239
  selftests: Fix the executable permissions for fib_tests.sh
  BACKPORT: mac80211_hwsim: notify wmediumd of used MAC addresses
  FROMGIT: mac80211_hwsim: add concurrent channels scanning support over virtio
  Revert "HID: core: Provide new max_buffer_size attribute to over-ride the default"
  Revert "HID: uhid: Over-ride the default maximum data buffer value with our own"
  Linux 5.4.238
  HID: uhid: Over-ride the default maximum data buffer value with our own
  HID: core: Provide new max_buffer_size attribute to over-ride the default
  PCI: Unify delay handling for reset and resume
  s390/ipl: add missing intersection check to ipl_report handling
  serial: 8250_em: Fix UART port type
  drm/i915: Don't use stolen memory for ring buffers with LLC
  x86/mm: Fix use of uninitialized buffer in sme_enable()
  fbdev: stifb: Provide valid pixelclock and add fb_check_var() checks
  ftrace: Fix invalid address access in lookup_rec() when index is 0
  KVM: nVMX: add missing consistency checks for CR0 and CR4
  tracing: Make tracepoint lockdep check actually test something
  tracing: Check field value in hist_field_name()
  interconnect: fix mem leak when freeing nodes
  tty: serial: fsl_lpuart: skip waiting for transmission complete when UARTCTRL_SBK is asserted
  ext4: fix possible double unlock when moving a directory
  sh: intc: Avoid spurious sizeof-pointer-div warning
  drm/amdkfd: Fix an illegal memory access
  ext4: fix task hung in ext4_xattr_delete_inode
  ext4: fail ext4_iget if special inode unallocated
  jffs2: correct logic when creating a hole in jffs2_write_begin
  mmc: atmel-mci: fix race between stop command and start of next command
  media: m5mols: fix off-by-one loop termination error
  hwmon: (ina3221) return prober error code
  hwmon: (xgene) Fix use after free bug in xgene_hwmon_remove due to race condition
  hwmon: (adt7475) Fix masking of hysteresis registers
  hwmon: (adt7475) Display smoothing attributes in correct order
  ethernet: sun: add check for the mdesc_grab()
  net/iucv: Fix size of interrupt data
  net: usb: smsc75xx: Move packet length check to prevent kernel panic in skb_pull
  ipv4: Fix incorrect table ID in IOCTL path
  block: sunvdc: add check for mdesc_grab() returning NULL
  nvmet: avoid potential UAF in nvmet_req_complete()
  net: usb: smsc75xx: Limit packet length to skb->len
  nfc: st-nci: Fix use after free bug in ndlc_remove due to race condition
  net: phy: smsc: bail out in lan87xx_read_status if genphy_read_status fails
  net: tunnels: annotate lockless accesses to dev->needed_headroom
  qed/qed_dev: guard against a possible division by zero
  i40e: Fix kernel crash during reboot when adapter is in recovery mode
  ipvlan: Make skb->skb_iif track skb->dev for l3s mode
  nfc: pn533: initialize struct pn533_out_arg properly
  tcp: tcp_make_synack() can be called from process context
  scsi: core: Fix a procfs host directory removal regression
  scsi: core: Fix a comment in function scsi_host_dev_release()
  netfilter: nft_redir: correct value of inet type `.maxattrs`
  ALSA: hda: Match only Intel devices with CONTROLLER_IN_GPU()
  ALSA: hda: Add Intel DG2 PCI ID and HDMI codec vid
  ALSA: hda: Add Alderlake-S PCI ID and HDMI codec vid
  ALSA: hda - controller is in GPU on the DG1
  ALSA: hda - add Intel DG1 PCI and HDMI ids
  scsi: mpt3sas: Fix NULL pointer access in mpt3sas_transport_port_add()
  docs: Correct missing "d_" prefix for dentry_operations member d_weak_revalidate
  clk: HI655X: select REGMAP instead of depending on it
  drm/meson: fix 1px pink line on GXM when scaling video overlay
  cifs: Move the in_send statistic to __smb_send_rqst()
  drm/panfrost: Don't sync rpm suspension after mmu flushing
  xfrm: Allow transport-mode states with AF_UNSPEC selector
  ext4: fix cgroup writeback accounting with fs-layer encryption
  ANDROID: preserve CRC for __irq_domain_add()
  Revert "drm/exynos: Don't reset bridge->next"
  Revert "drm/bridge: Rename bridge helpers targeting a bridge chain"
  Revert "drm/bridge: Introduce drm_bridge_get_next_bridge()"
  Revert "drm: Initialize struct drm_crtc_state.no_vblank from device settings"
  Revert "drm/msm/mdp5: Add check for kzalloc"
  Linux 5.4.237
  s390/dasd: add missing discipline function
  UML: define RUNTIME_DISCARD_EXIT
  sh: define RUNTIME_DISCARD_EXIT
  s390: define RUNTIME_DISCARD_EXIT to fix link error with GNU ld < 2.36
  powerpc/vmlinux.lds: Don't discard .rela* for relocatable builds
  powerpc/vmlinux.lds: Define RUNTIME_DISCARD_EXIT
  arch: fix broken BuildID for arm64 and riscv
  x86, vmlinux.lds: Add RUNTIME_DISCARD_EXIT to generic DISCARDS
  drm/i915: Don't use BAR mappings for ring buffers with LLC
  ipmi:watchdog: Set panic count to proper value on a panic
  ipmi/watchdog: replace atomic_add() and atomic_sub()
  media: ov5640: Fix analogue gain control
  PCI: Add SolidRun vendor ID
  macintosh: windfarm: Use unsigned type for 1-bit bitfields
  alpha: fix R_ALPHA_LITERAL reloc for large modules
  MIPS: Fix a compilation issue
  ext4: Fix deadlock during directory rename
  riscv: Use READ_ONCE_NOCHECK in imprecise unwinding stack mode
  net/smc: fix fallback failed while sendmsg with fastopen
  scsi: megaraid_sas: Update max supported LD IDs to 240
  btf: fix resolving BTF_KIND_VAR after ARRAY, STRUCT, UNION, PTR
  netfilter: tproxy: fix deadlock due to missing BH disable
  bnxt_en: Avoid order-5 memory allocation for TPA data
  net: caif: Fix use-after-free in cfusbl_device_notify()
  net: lan78xx: fix accessing the LAN7800's internal phy specific registers from the MAC driver
  net: usb: lan78xx: Remove lots of set but unused 'ret' variables
  selftests: nft_nat: ensuring the listening side is up before starting the client
  ila: do not generate empty messages in ila_xlat_nl_cmd_get_mapping()
  nfc: fdp: add null check of devm_kmalloc_array in fdp_nci_i2c_read_device_properties
  drm/msm/a5xx: fix setting of the CP_PREEMPT_ENABLE_LOCAL register
  ext4: Fix possible corruption when moving a directory
  scsi: core: Remove the /proc/scsi/${proc_name} directory earlier
  cifs: Fix uninitialized memory read in smb3_qfs_tcon()
  SMB3: Backup intent flag missing from some more ops
  iommu/vt-d: Fix PASID directory pointer coherency
  irqdomain: Fix domain registration race
  irqdomain: Change the type of 'size' in __irq_domain_add() to be consistent
  ipmi:ssif: Add a timer between request retries
  ipmi:ssif: Increase the message retry time
  ipmi:ssif: Remove rtc_us_timer
  ipmi:ssif: resend_msg() cannot fail
  ipmi:ssif: make ssif_i2c_send() void
  iommu/amd: Add a length limitation for the ivrs_acpihid command-line parameter
  iommu/amd: Fix ill-formed ivrs_ioapic, ivrs_hpet and ivrs_acpihid options
  iommu/amd: Add PCI segment support for ivrs_[ioapic/hpet/acpihid] commands
  nfc: change order inside nfc_se_io error path
  ext4: zero i_disksize when initializing the bootloader inode
  ext4: fix WARNING in ext4_update_inline_data
  ext4: move where set the MAY_INLINE_DATA flag is set
  ext4: fix another off-by-one fsmap error on 1k block filesystems
  ext4: fix RENAME_WHITEOUT handling for inline directories
  drm/connector: print max_requested_bpc in state debugfs
  x86/CPU/AMD: Disable XSAVES on AMD family 0x17
  fs: prevent out-of-bounds array speculation when closing a file descriptor
  Linux 5.4.236
  staging: rtl8192e: Remove call_usermodehelper starting RadioPower.sh
  staging: rtl8192e: Remove function ..dm_check_ac_dc_power calling a script
  wifi: cfg80211: Partial revert "wifi: cfg80211: Fix use after free for wext"
  Linux 5.4.235
  dt-bindings: rtc: sun6i-a31-rtc: Loosen the requirements on the clocks
  media: uvcvideo: Fix race condition with usb_kill_urb
  media: uvcvideo: Provide sync and async uvc_ctrl_status_event
  tcp: Fix listen() regression in 5.4.229.
  Bluetooth: hci_sock: purge socket queues in the destruct() callback
  x86/resctl: fix scheduler confusion with 'current'
  x86/resctrl: Apply READ_ONCE/WRITE_ONCE to task_struct.{rmid,closid}
  net: tls: avoid hanging tasks on the tx_lock
  phy: rockchip-typec: Fix unsigned comparison with less than zero
  PCI: Add ACS quirk for Wangxun NICs
  kernel/fail_function: fix memory leak with using debugfs_lookup()
  usb: uvc: Enumerate valid values for color matching
  USB: ene_usb6250: Allocate enough memory for full object
  usb: host: xhci: mvebu: Iterate over array indexes instead of using pointer math
  iio: accel: mma9551_core: Prevent uninitialized variable in mma9551_read_config_word()
  iio: accel: mma9551_core: Prevent uninitialized variable in mma9551_read_status_word()
  tools/iio/iio_utils:fix memory leak
  mei: bus-fixup:upon error print return values of send and receive
  tty: serial: fsl_lpuart: disable the CTS when send break signal
  tty: fix out-of-bounds access in tty_driver_lookup_tty()
  staging: emxx_udc: Add checks for dma_alloc_coherent()
  media: uvcvideo: Silence memcpy() run-time false positive warnings
  media: uvcvideo: Quirk for autosuspend in Logitech B910 and C910
  media: uvcvideo: Handle errors from calls to usb_string
  media: uvcvideo: Handle cameras with invalid descriptors
  mfd: arizona: Use pm_runtime_resume_and_get() to prevent refcnt leak
  firmware/efi sysfb_efi: Add quirk for Lenovo IdeaPad Duet 3
  tracing: Add NULL checks for buffer in ring_buffer_free_read_page()
  thermal: intel: BXT_PMIC: select REGMAP instead of depending on it
  thermal: intel: quark_dts: fix error pointer dereference
  scsi: ipr: Work around fortify-string warning
  rtc: sun6i: Always export the internal oscillator
  rtc: sun6i: Make external 32k oscillator optional
  vc_screen: modify vcs_size() handling in vcs_read()
  tcp: tcp_check_req() can be called from process context
  ARM: dts: spear320-hmi: correct STMPE GPIO compatible
  net/sched: act_sample: fix action bind logic
  nfc: fix memory leak of se_io context in nfc_genl_se_io
  net/mlx5: Geneve, Fix handling of Geneve object id as error code
  9p/rdma: unmap receive dma buffer in rdma_request()/post_recv()
  9p/xen: fix connection sequence
  9p/xen: fix version parsing
  net: fix __dev_kfree_skb_any() vs drop monitor
  sctp: add a refcnt in sctp_stream_priorities to avoid a nested loop
  ipv6: Add lwtunnel encap size of all siblings in nexthop calculation
  netfilter: ctnetlink: fix possible refcount leak in ctnetlink_create_conntrack()
  watchdog: pcwd_usb: Fix attempting to access uninitialized memory
  watchdog: Fix kmemleak in watchdog_cdev_register
  watchdog: at91sam9_wdt: use devm_request_irq to avoid missing free_irq() in error path
  x86: um: vdso: Add '%rcx' and '%r11' to the syscall clobber list
  ubi: ubi_wl_put_peb: Fix infinite loop when wear-leveling work failed
  ubi: Fix UAF wear-leveling entry in eraseblk_count_seq_show()
  ubifs: ubifs_writepage: Mark page dirty after writing inode failed
  ubifs: dirty_cow_znode: Fix memleak in error handling path
  ubifs: Re-statistic cleaned znode count if commit failed
  ubi: Fix possible null-ptr-deref in ubi_free_volume()
  ubifs: Fix memory leak in alloc_wbufs()
  ubi: Fix unreferenced object reported by kmemleak in ubi_resize_volume()
  ubi: Fix use-after-free when volume resizing failed
  ubifs: Reserve one leb for each journal head while doing budget
  ubifs: do_rename: Fix wrong space budget when target inode's nlink > 1
  ubifs: Fix wrong dirty space budget for dirty inode
  ubifs: Rectify space budget for ubifs_xrename()
  ubifs: Rectify space budget for ubifs_symlink() if symlink is encrypted
  ubifs: Fix build errors as symbol undefined
  ubi: ensure that VID header offset + VID header size <= alloc, size
  um: vector: Fix memory leak in vector_config
  fs: f2fs: initialize fsdata in pagecache_write()
  f2fs: use memcpy_{to,from}_page() where possible
  pwm: stm32-lp: fix the check on arr and cmp registers update
  pwm: sifive: Always let the first pwm_apply_state succeed
  pwm: sifive: Reduce time the controller lock is held
  fs/jfs: fix shift exponent db_agl2size negative
  net/sched: Retire tcindex classifier
  kbuild: Port silent mode detection to future gnu make.
  wifi: ath9k: use proper statements in conditionals
  drm/radeon: Fix eDP for single-display iMac11,2
  drm/i915/quirks: Add inverted backlight quirk for HP 14-r206nv
  PCI: Avoid FLR for AMD FCH AHCI adapters
  PCI: hotplug: Allow marking devices as disconnected during bind/unbind
  PCI/PM: Observe reset delay irrespective of bridge_d3
  scsi: ses: Fix slab-out-of-bounds in ses_intf_remove()
  scsi: ses: Fix possible desc_ptr out-of-bounds accesses
  scsi: ses: Fix possible addl_desc_ptr out-of-bounds accesses
  scsi: ses: Fix slab-out-of-bounds in ses_enclosure_data_process()
  scsi: ses: Don't attach if enclosure has no components
  scsi: qla2xxx: Fix erroneous link down
  scsi: qla2xxx: Fix DMA-API call trace on NVMe LS requests
  scsi: qla2xxx: Fix link failure in NPIV environment
  ktest.pl: Add RUN_TIMEOUT option with default unlimited
  ktest.pl: Fix missing "end_monitor" when machine check fails
  ktest.pl: Give back console on Ctrt^C on monitor
  mm/thp: check and bail out if page in deferred queue already
  mm: memcontrol: deprecate charge moving
  media: ipu3-cio2: Fix PM runtime usage_count in driver unbind
  mips: fix syscall_get_nr
  alpha: fix FEN fault handling
  rbd: avoid use-after-free in do_rbd_add() when rbd_dev_create() fails
  ARM: dts: exynos: correct TMU phandle in Odroid XU
  ARM: dts: exynos: correct TMU phandle in Exynos4
  dm flakey: don't corrupt the zero page
  dm flakey: fix logic when corrupting a bio
  thermal: intel: powerclamp: Fix cur_state for multi package system
  wifi: cfg80211: Fix use after free for wext
  wifi: rtl8xxxu: Use a longer retry limit of 48
  ext4: refuse to create ea block when umounted
  ext4: optimize ea_inode block expansion
  ALSA: hda/realtek: Add quirk for HP EliteDesk 800 G6 Tower PC
  ALSA: ice1712: Do not left ice->gpio_mutex locked in aureon_add_controls()
  irqdomain: Drop bogus fwspec-mapping error handling
  irqdomain: Fix disassociation race
  irqdomain: Fix association race
  ima: Align ima_file_mmap() parameters with mmap_file LSM hook
  Documentation/hw-vuln: Document the interaction between IBRS and STIBP
  x86/speculation: Allow enabling STIBP with legacy IBRS
  x86/microcode/AMD: Fix mixed steppings support
  x86/microcode/AMD: Add a @cpu parameter to the reloading functions
  x86/microcode/amd: Remove load_microcode_amd()'s bsp parameter
  x86/kprobes: Fix arch_check_optimized_kprobe check within optimized_kprobe range
  x86/kprobes: Fix __recover_optprobed_insn check optimizing logic
  x86/reboot: Disable SVM, not just VMX, when stopping CPUs
  x86/reboot: Disable virtualization in an emergency if SVM is supported
  x86/crash: Disable virt in core NMI crash handler to avoid double shootdown
  x86/virt: Force GIF=1 prior to disabling SVM (for reboot flows)
  KVM: s390: disable migration mode when dirty tracking is disabled
  KVM: Destroy target device if coalesced MMIO unregistration fails
  udf: Fix file corruption when appending just after end of preallocated extent
  udf: Detect system inodes linked into directory hierarchy
  udf: Preserve link count of system files
  udf: Do not update file length for failed writes to inline files
  udf: Do not bother merging very long extents
  udf: Truncate added extents on failed expansion
  ocfs2: fix non-auto defrag path not working issue
  ocfs2: fix defrag path triggering jbd2 ASSERT
  f2fs: fix cgroup writeback accounting with fs-layer encryption
  f2fs: fix information leak in f2fs_move_inline_dirents()
  fs: hfsplus: fix UAF issue in hfsplus_put_super
  hfs: fix missing hfs_bnode_get() in __hfs_bnode_create
  ARM: dts: exynos: correct HDMI phy compatible in Exynos4
  s390/kprobes: fix current_kprobe never cleared after kprobes reenter
  s390/kprobes: fix irq mask clobbering on kprobe reenter from post_handler
  s390: discard .interp section
  ipmi_ssif: Rename idle state and check
  rtc: pm8xxx: fix set-alarm race
  firmware: coreboot: framebuffer: Ignore reserved pixel color bits
  wifi: rtl8xxxu: fixing transmisison failure for rtl8192eu
  nfsd: zero out pointers after putting nfsd_files on COPY setup error
  dm cache: add cond_resched() to various workqueue loops
  dm thin: add cond_resched() to various workqueue loops
  drm: panel-orientation-quirks: Add quirk for Lenovo IdeaPad Duet 3 10IGL5
  pinctrl: at91: use devm_kasprintf() to avoid potential leaks
  hwmon: (coretemp) Simplify platform device handling
  regulator: s5m8767: Bounds check id indexing into arrays
  regulator: max77802: Bounds check regulator id against opmode
  ASoC: kirkwood: Iterate over array indexes instead of using pointer math
  docs/scripts/gdb: add necessary make scripts_gdb step
  drm/msm/dsi: Add missing check for alloc_ordered_workqueue
  drm/radeon: free iio for atombios when driver shutdown
  HID: Add Mapping for System Microphone Mute
  drm/omap: dsi: Fix excessive stack usage
  drm/amd/display: Fix potential null-deref in dm_resume
  uaccess: Add minimum bounds check on kernel buffer size
  coda: Avoid partial allocation of sig_inputArgs
  net/mlx5: fw_tracer: Fix debug print
  ACPI: video: Fix Lenovo Ideapad Z570 DMI match
  wifi: mt76: dma: free rx_head in mt76_dma_rx_cleanup
  m68k: Check syscall_trace_enter() return code
  net: bcmgenet: Add a check for oversized packets
  ACPI: Don't build ACPICA with '-Os'
  ice: add missing checks for PF vsi type
  inet: fix fast path in __inet_hash_connect()
  wifi: mt7601u: fix an integer underflow
  wifi: brcmfmac: ensure CLM version is null-terminated to prevent stack-out-of-bounds
  x86/bugs: Reset speculation control settings on init
  timers: Prevent union confusion from unexpected restart_syscall()
  thermal: intel: Fix unsigned comparison with less than zero
  rcu: Suppress smp_processor_id() complaint in synchronize_rcu_expedited_wait()
  wifi: brcmfmac: Fix potential stack-out-of-bounds in brcmf_c_preinit_dcmds()
  blk-iocost: fix divide by 0 error in calc_lcoefs()
  ARM: dts: exynos: Use Exynos5420 compatible for the MIPI video phy
  udf: Define EFSCORRUPTED error code
  rpmsg: glink: Avoid infinite loop on intent for missing channel
  media: usb: siano: Fix use after free bugs caused by do_submit_urb
  media: i2c: ov7670: 0 instead of -EINVAL was returned
  media: rc: Fix use-after-free bugs caused by ene_tx_irqsim()
  media: i2c: ov772x: Fix memleak in ov772x_probe()
  media: ov5675: Fix memleak in ov5675_init_controls()
  powerpc: Remove linker flag from KBUILD_AFLAGS
  media: platform: ti: Add missing check for devm_regulator_get
  remoteproc: qcom_q6v5_mss: Use a carveout to authenticate modem headers
  MIPS: vpe-mt: drop physical_memsize
  MIPS: SMP-CPS: fix build error when HOTPLUG_CPU not set
  powerpc/eeh: Set channel state after notifying the drivers
  powerpc/eeh: Small refactor of eeh_handle_normal_event()
  powerpc/rtas: ensure 4KB alignment for rtas_data_buf
  powerpc/rtas: make all exports GPL
  powerpc/pseries/lparcfg: add missing RTAS retry status handling
  powerpc/pseries/lpar: add missing RTAS retry status handling
  clk: Honor CLK_OPS_PARENT_ENABLE in clk_core_is_enabled()
  powerpc/powernv/ioda: Skip unallocated resources when mapping to PE
  clk: qcom: gpucc-sdm845: fix clk_dis_wait being programmed for CX GDSC
  Input: ads7846 - don't check penirq immediately for 7845
  Input: ads7846 - don't report pressure for ads7845
  clk: renesas: cpg-mssr: Remove superfluous check in resume code
  clk: renesas: cpg-mssr: Use enum clk_reg_layout instead of a boolean flag
  clk: renesas: cpg-mssr: Fix use after free if cpg_mssr_common_init() failed
  mtd: rawnand: sunxi: Fix the size of the last OOB region
  clk: qcom: gcc-qcs404: fix names of the DSI clocks used as parents
  clk: qcom: gcc-qcs404: disable gpll[04]_out_aux parents
  mfd: pcf50633-adc: Fix potential memleak in pcf50633_adc_async_read()
  selftests/ftrace: Fix bash specific "==" operator
  sparc: allow PM configs for sparc32 COMPILE_TEST
  perf tools: Fix auto-complete on aarch64
  perf llvm: Fix inadvertent file creation
  gfs2: jdata writepage fix
  cifs: Fix warning and UAF when destroy the MR list
  cifs: Fix lost destroy smbd connection when MR allocate failed
  nfsd: fix race to check ls_layouts
  hid: bigben_probe(): validate report count
  HID: asus: Fix mute and touchpad-toggle keys on Medion Akoya E1239T
  HID: asus: Add support for multi-touch touchpad on Medion Akoya E1239T
  HID: asus: Add report_size to struct asus_touchpad_info
  HID: asus: Only set EV_REP if we are adding a mapping
  HID: bigben: use spinlock to safely schedule workers
  HID: bigben_worker() remove unneeded check on report_field
  HID: bigben: use spinlock to protect concurrent accesses
  ASoC: soc-dapm.h: fixup warning struct snd_pcm_substream not declared
  ASoC: dapm: declare missing structure prototypes
  spi: synquacer: Fix timeout handling in synquacer_spi_transfer_one()
  dm: remove flush_scheduled_work() during local_exit()
  hwmon: (mlxreg-fan) Return zero speed for broken fan
  spi: bcm63xx-hsspi: Fix multi-bit mode setting
  spi: bcm63xx-hsspi: fix pm_runtime
  scsi: aic94xx: Add missing check for dma_map_single()
  hwmon: (ltc2945) Handle error case in ltc2945_value_store
  gpio: vf610: connect GPIO label to dev name
  ASoC: soc-compress.c: fixup private_data on snd_soc_new_compress()
  drm/mediatek: Clean dangling pointer on bind error path
  drm/mediatek: Drop unbalanced obj unref
  drm/mediatek: Use NULL instead of 0 for NULL pointer
  drm/mediatek: remove cast to pointers passed to kfree
  gpu: host1x: Don't skip assigning syncpoints to channels
  drm/msm/mdp5: Add check for kzalloc
  drm: Initialize struct drm_crtc_state.no_vblank from device settings
  drm/bridge: Introduce drm_bridge_get_next_bridge()
  drm/bridge: Rename bridge helpers targeting a bridge chain
  drm/exynos: Don't reset bridge->next
  drm/msm/dpu: Add check for pstates
  drm/msm/dpu: Add check for cstate
  drm/msm: use strscpy instead of strncpy
  drm/mipi-dsi: Fix byte order of 16-bit DCS set/get brightness
  ALSA: hda/ca0132: minor fix for allocation size
  ASoC: fsl_sai: initialize is_dsp_mode flag
  pinctrl: stm32: Fix refcount leak in stm32_pctrl_get_irq_domain
  drm/msm/hdmi: Add missing check for alloc_ordered_workqueue
  gpu: ipu-v3: common: Add of_node_put() for reference returned by of_graph_get_port_by_id()
  drm/vc4: dpi: Fix format mapping for RGB565
  drm/vc4: dpi: Add option for inverting pixel clock and output enable
  drm/bridge: megachips: Fix error handling in i2c_register_driver()
  drm: mxsfb: DRM_MXSFB should depend on ARCH_MXS || ARCH_MXC
  drm/fourcc: Add missing big-endian XRGB1555 and RGB565 formats
  selftest: fib_tests: Always cleanup before exit
  selftests/net: Interpret UDP_GRO cmsg data as an int value
  irqchip/irq-bcm7120-l2: Set IRQ_LEVEL for level triggered interrupts
  irqchip/irq-brcmstb-l2: Set IRQ_LEVEL for level triggered interrupts
  can: esd_usb: Move mislocated storage of SJA1000_ECC_SEG bits in case of a bus error
  thermal/drivers/hisi: Drop second sensor hi3660
  wifi: mac80211: make rate u32 in sta_set_rate_info_rx()
  crypto: crypto4xx - Call dma_unmap_page when done
  wifi: mwifiex: fix loop iterator in mwifiex_update_ampdu_txwinsize()
  wifi: iwl4965: Add missing check for create_singlethread_workqueue()
  wifi: iwl3945: Add missing check for create_singlethread_workqueue
  treewide: Replace DECLARE_TASKLET() with DECLARE_TASKLET_OLD()
  usb: gadget: udc: Avoid tasklet passing a global
  RISC-V: time: initialize hrtimer based broadcast clock event device
  m68k: /proc/hardware should depend on PROC_FS
  crypto: rsa-pkcs1pad - Use akcipher_request_complete
  rds: rds_rm_zerocopy_callback() correct order for list_add_tail()
  libbpf: Fix alen calculation in libbpf_nla_dump_errormsg()
  Bluetooth: L2CAP: Fix potential user-after-free
  OPP: fix error checking in opp_migrate_dentry()
  tap: tap_open(): correctly initialize socket uid
  tun: tun_chr_open(): correctly initialize socket uid
  net: add sock_init_data_uid()
  mptcp: add sk_stop_timer_sync helper
  irqchip/ti-sci: Fix refcount leak in ti_sci_intr_irq_domain_probe
  irqchip/irq-mvebu-gicp: Fix refcount leak in mvebu_gicp_probe
  irqchip/alpine-msi: Fix refcount leak in alpine_msix_init_domains
  net/mlx5: Enhance
debug print in page allocation failure powercap: fix possible name leak in powercap_register_zone() crypto: seqiv - Handle EBUSY correctly crypto: essiv - Handle EBUSY correctly crypto: essiv - remove redundant null pointer check before kfree crypto: ccp - Failure on re-initialization due to duplicate sysfs filename ACPI: battery: Fix missing NUL-termination with large strings wifi: ath9k: Fix potential stack-out-of-bounds write in ath9k_wmi_rsp_callback() wifi: ath9k: hif_usb: clean up skbs if ath9k_hif_usb_rx_stream() fails ath9k: htc: clean up statistics macros ath9k: hif_usb: simplify if-if to if-else wifi: ath9k: htc_hst: free skb in ath9k_htc_rx_msg() if there is no callback function wifi: orinoco: check return value of hermes_write_wordrec() ACPICA: nsrepair: handle cases without a return value correctly lib/mpi: Fix buffer overrun when SG is too long genirq: Fix the return type of kstat_cpu_irqs_sum() ACPICA: Drop port I/O validation for some regions crypto: x86/ghash - fix unaligned access in ghash_setkey() wifi: wl3501_cs: don't call kfree_skb() under spin_lock_irqsave() wifi: libertas: cmdresp: don't call kfree_skb() under spin_lock_irqsave() wifi: libertas: main: don't call kfree_skb() under spin_lock_irqsave() wifi: libertas: if_usb: don't call kfree_skb() under spin_lock_irqsave() wifi: libertas_tf: don't call kfree_skb() under spin_lock_irqsave() wifi: brcmfmac: unmap dma buffer in brcmf_msgbuf_alloc_pktid() wifi: brcmfmac: fix potential memory leak in brcmf_netdev_start_xmit() wifi: wilc1000: fix potential memory leak in wilc_mac_xmit() wilc1000: let wilc_mac_xmit() return NETDEV_TX_OK wifi: ipw2200: fix memory leak in ipw_wdev_init() wifi: ipw2x00: don't call dev_kfree_skb() under spin_lock_irqsave() ipw2x00: switch from 'pci_' to 'dma_' API wifi: rtlwifi: Fix global-out-of-bounds bug in _rtl8812ae_phy_set_txpower_limit() rtlwifi: fix -Wpointer-sign warning wifi: rtl8xxxu: don't call dev_kfree_skb() under spin_lock_irqsave() wifi: libertas: fix 
memory leak in lbs_init_adapter() wifi: iwlegacy: common: don't call dev_kfree_skb() under spin_lock_irqsave() net/wireless: Delete unnecessary checks before the macro call “dev_kfree_skb” wifi: rsi: Fix memory leak in rsi_coex_attach() block: bio-integrity: Copy flags when bio_integrity_payload is cloned sched/rt: pick_next_rt_entity(): check list_entry sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity() s390/dasd: Fix potential memleak in dasd_eckd_init() s390/dasd: Prepare for additional path event handling blk-mq: correct stale comment of .get_budget blk-mq: wait on correct sbitmap_queue in blk_mq_mark_tag_wait blk-mq: remove stale comment for blk_mq_sched_mark_restart_hctx block: Limit number of items taken from the I/O scheduler in one go Revert "scsi: core: run queue if SCSI device queue isn't ready and queue is idle" arm64: dts: mediatek: mt7622: Add missing pwm-cells to pwm node ARM: dts: imx7s: correct iomuxc gpr mux controller cells arm64: dts: amlogic: meson-gxl-s905d-phicomm-n1: fix led node name arm64: dts: amlogic: meson-gxl: add missing unit address to eth-phy-mux node name arm64: dts: amlogic: meson-gx: add missing unit address to rng node name arm64: dts: amlogic: meson-gx: add missing SCPI sensors compatible arm64: dts: amlogic: meson-axg: fix SCPI clock dvfs node name arm64: dts: amlogic: meson-gx: fix SCPI clock dvfs node name ARM: imx: Call ida_simple_remove() for ida_simple_get ARM: dts: exynos: correct wr-active property in Exynos3250 Rinato ARM: OMAP1: call platform_device_put() in error case in omap1_dm_timer_init() arm64: dts: meson: remove CPU opps below 1GHz for G12A boards arm64: dts: meson-gx: Fix the SCPI DVFS node name and unit address arm64: dts: meson-g12a: Fix internal Ethernet PHY unit name arm64: dts: meson-gx: Fix Ethernet MAC address unit name ARM: zynq: Fix refcount leak in zynq_early_slcr_init arm64: dts: qcom: qcs404: use symbol names for PCIe resets ARM: OMAP2+: Fix memory leak in 
realtime_counter_init() HID: asus: use spinlock to safely schedule workers HID: asus: use spinlock to protect concurrent accesses HID: asus: Remove check for same LED brightness on set Linux 5.4.234 USB: core: Don't hold device lock while reading the "descriptors" sysfs file USB: serial: option: add support for VW/Skoda "Carstick LTE" dmaengine: sh: rcar-dmac: Check for error num after dma_set_max_seg_size vc_screen: don't clobber return value in vcs_read net: Remove WARN_ON_ONCE(sk->sk_forward_alloc) from sk_stream_kill_queues(). bpf: bpf_fib_lookup should not return neigh in NUD_FAILED state HID: core: Fix deadloop in hid_apply_multiplier. neigh: make sure used and confirmed times are valid IB/hfi1: Assign npages earlier btrfs: send: limit number of clones and allocated memory size ACPI: NFIT: fix a potential deadlock during NFIT teardown ARM: dts: rockchip: add power-domains property to dp node on rk3288 arm64: dts: rockchip: drop unused LED mode property from rk3328-roc-cc Conflicts: Documentation/devicetree/bindings/rtc/allwinner,sun6i-a31-rtc.yaml Documentation/devicetree/bindings~HEAD arch/arm/mm/dma-mapping.c drivers/clk/qcom/gcc-qcs404.c drivers/iommu/dma-iommu.c drivers/mtd/ubi/wl.c kernel/dma/direct.c Change-Id: I804ccb5552f305c49ec17b323c6c933cc99e6d39
// SPDX-License-Identifier: GPL-2.0-only
/*
 *  linux/arch/arm/mm/dma-mapping.c
 *
 *  Copyright (C) 2000-2004 Russell King
 *
 *  DMA uncached mapping support.
 */
#include <linux/module.h>
#include <linux/mm.h>
#include <linux/genalloc.h>
#include <linux/gfp.h>
#include <linux/errno.h>
#include <linux/list.h>
#include <linux/init.h>
#include <linux/device.h>
#include <linux/dma-direct.h>
#include <linux/dma-mapping.h>
#include <linux/dma-noncoherent.h>
#include <linux/dma-iommu.h>
#include <linux/dma-contiguous.h>
#include <linux/highmem.h>
#include <linux/memblock.h>
#include <linux/slab.h>
#include <linux/iommu.h>
#include <linux/io.h>
#include <linux/vmalloc.h>
#include <linux/sizes.h>
#include <linux/cma.h>

#include <asm/memory.h>
#include <asm/highmem.h>
#include <asm/cacheflush.h>
#include <asm/tlbflush.h>
#include <asm/mach/arch.h>
#include <asm/dma-iommu.h>
#include <asm/mach/map.h>
#include <asm/system_info.h>
#include <asm/dma-contiguous.h>
#include <xen/swiotlb-xen.h>

#include "dma.h"
#include "mm.h"

struct arm_dma_alloc_args {
	struct device *dev;
	size_t size;
	gfp_t gfp;
	pgprot_t prot;
	const void *caller;
	bool want_vaddr;
	int coherent_flag;
};

struct arm_dma_free_args {
	struct device *dev;
	size_t size;
	void *cpu_addr;
	struct page *page;
	bool want_vaddr;
};

#define NORMAL	    0
#define COHERENT    1

struct arm_dma_allocator {
	void *(*alloc)(struct arm_dma_alloc_args *args,
		       struct page **ret_page);
	void (*free)(struct arm_dma_free_args *args);
};

struct arm_dma_buffer {
	struct list_head list;
	void *virt;
	struct arm_dma_allocator *allocator;
};

static LIST_HEAD(arm_dma_bufs);
static DEFINE_SPINLOCK(arm_dma_bufs_lock);

static struct arm_dma_buffer *arm_dma_buffer_find(void *virt)
{
	struct arm_dma_buffer *buf, *found = NULL;
	unsigned long flags;

	spin_lock_irqsave(&arm_dma_bufs_lock, flags);
	list_for_each_entry(buf, &arm_dma_bufs, list) {
		if (buf->virt == virt) {
			list_del(&buf->list);
			found = buf;
			break;
		}
	}
	spin_unlock_irqrestore(&arm_dma_bufs_lock, flags);
	return found;
}

/*
 * The DMA API is built upon the notion of "buffer ownership".  A buffer
 * is either exclusively owned by the CPU (and therefore may be accessed
 * by it) or exclusively owned by the DMA device.  These helper functions
 * represent the transitions between these two ownership states.
 *
 * Note, however, that on later ARMs, this notion does not work due to
 * speculative prefetches.  We model our approach on the assumption that
 * the CPU does do speculative prefetches, which means we clean caches
 * before transfers and delay cache invalidation until transfer completion.
 */
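
/*
 * Illustrative sketch (not part of this file): the ownership transitions
 * described above as seen from a driver using the streaming DMA API.
 * "dev", "buf" and "len" are placeholder names:
 *
 *	dma_addr_t handle = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
 *	if (dma_mapping_error(dev, handle))
 *		return -ENOMEM;
 *	// The device owns the buffer: caches were cleaned on map, and the
 *	// CPU must not touch it until it is unmapped.
 *	dma_unmap_single(dev, handle, len, DMA_FROM_DEVICE);
 *	// The CPU owns the buffer again: caches were invalidated on unmap,
 *	// so CPU reads see what the device wrote.
 */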

static void __dma_page_cpu_to_dev(struct page *, unsigned long,
		size_t, enum dma_data_direction);
static void __dma_page_dev_to_cpu(struct page *, unsigned long,
		size_t, enum dma_data_direction);

static inline pgprot_t __get_dma_pgprot(unsigned long attrs, pgprot_t prot,
		bool coherent);

static pgprot_t __get_dma_pgprot(unsigned long attrs, pgprot_t prot,
		bool coherent)
{
	if (!coherent || (attrs & DMA_ATTR_WRITE_COMBINE))
		return pgprot_writecombine(prot);
	return prot;
}

/**
 * arm_dma_map_page - map a portion of a page for streaming DMA
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @page: page that buffer resides in
 * @offset: offset into page for start of buffer
 * @size: size of buffer to map
 * @dir: DMA transfer direction
 *
 * Ensure that any data held in the cache is appropriately discarded
 * or written back.
 *
 * The device owns this memory once this call has completed.  The CPU
 * can regain ownership by calling dma_unmap_page().
 */
static dma_addr_t arm_dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size, enum dma_data_direction dir,
	     unsigned long attrs)
{
	if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
		__dma_page_cpu_to_dev(page, offset, size, dir);
	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
}

static dma_addr_t arm_coherent_dma_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size, enum dma_data_direction dir,
	     unsigned long attrs)
{
	return pfn_to_dma(dev, page_to_pfn(page)) + offset;
}

/**
 * arm_dma_unmap_page - unmap a buffer previously mapped through dma_map_page()
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @handle: DMA address of buffer
 * @size: size of buffer (same as passed to dma_map_page)
 * @dir: DMA transfer direction (same as passed to dma_map_page)
 *
 * Unmap a page streaming mode DMA translation.  The handle and size
 * must match what was provided in the previous dma_map_page() call.
 * All other usages are undefined.
 *
 * After this call, reads by the CPU to the buffer are guaranteed to see
 * whatever the device wrote there.
 */
static void arm_dma_unmap_page(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	if ((attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
		__dma_page_dev_to_cpu(pfn_to_page(dma_to_pfn(dev, handle)),
				      handle & ~PAGE_MASK, size, dir);
}

static void arm_dma_sync_single_for_cpu(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
	unsigned int offset = handle & (PAGE_SIZE - 1);
	struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
	__dma_page_dev_to_cpu(page, offset, size, dir);
}

static void arm_dma_sync_single_for_device(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
	unsigned int offset = handle & (PAGE_SIZE - 1);
	struct page *page = pfn_to_page(dma_to_pfn(dev, handle-offset));
	__dma_page_cpu_to_dev(page, offset, size, dir);
}

const struct dma_map_ops arm_dma_ops = {
	.alloc			= arm_dma_alloc,
	.free			= arm_dma_free,
	.mmap			= arm_dma_mmap,
	.get_sgtable		= arm_dma_get_sgtable,
	.map_page		= arm_dma_map_page,
	.unmap_page		= arm_dma_unmap_page,
	.map_sg			= arm_dma_map_sg,
	.unmap_sg		= arm_dma_unmap_sg,
	.map_resource		= dma_direct_map_resource,
	.sync_single_for_cpu	= arm_dma_sync_single_for_cpu,
	.sync_single_for_device	= arm_dma_sync_single_for_device,
	.sync_sg_for_cpu	= arm_dma_sync_sg_for_cpu,
	.sync_sg_for_device	= arm_dma_sync_sg_for_device,
	.dma_supported		= arm_dma_supported,
	.get_required_mask	= dma_direct_get_required_mask,
};
EXPORT_SYMBOL(arm_dma_ops);

static void *arm_coherent_dma_alloc(struct device *dev, size_t size,
	dma_addr_t *handle, gfp_t gfp, unsigned long attrs);
static void arm_coherent_dma_free(struct device *dev, size_t size, void *cpu_addr,
				  dma_addr_t handle, unsigned long attrs);
static int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma,
		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
		 unsigned long attrs);

const struct dma_map_ops arm_coherent_dma_ops = {
	.alloc			= arm_coherent_dma_alloc,
	.free			= arm_coherent_dma_free,
	.mmap			= arm_coherent_dma_mmap,
	.get_sgtable		= arm_dma_get_sgtable,
	.map_page		= arm_coherent_dma_map_page,
	.map_sg			= arm_dma_map_sg,
	.map_resource		= dma_direct_map_resource,
	.dma_supported		= arm_dma_supported,
	.get_required_mask	= dma_direct_get_required_mask,
};
EXPORT_SYMBOL(arm_coherent_dma_ops);

static int __dma_supported(struct device *dev, u64 mask, bool warn)
{
	unsigned long max_dma_pfn = min(max_pfn - 1, arm_dma_pfn_limit);

	/*
	 * Translate the device's DMA mask to a PFN limit.  This
	 * PFN number includes the page which we can DMA to.
	 */
	if (dma_to_pfn(dev, mask) < max_dma_pfn) {
		if (warn)
			dev_warn(dev, "Coherent DMA mask %#llx (pfn %#lx-%#lx) covers a smaller range of system memory than the DMA zone pfn 0x0-%#lx\n",
				 mask,
				 dma_to_pfn(dev, 0), dma_to_pfn(dev, mask) + 1,
				 max_dma_pfn + 1);
		return 0;
	}

	return 1;
}

static u64 get_coherent_dma_mask(struct device *dev)
{
	u64 mask = (u64)DMA_BIT_MASK(32);

	if (dev) {
		mask = dev->coherent_dma_mask;

		/*
		 * Sanity check the DMA mask - it must be non-zero, and
		 * must be able to be satisfied by a DMA allocation.
		 */
		if (mask == 0) {
			dev_warn(dev, "coherent DMA mask is unset\n");
			return 0;
		}

		if (!__dma_supported(dev, mask, true))
			return 0;
	}

	return mask;
}

static void __dma_clear_buffer(struct page *page, size_t size, int coherent_flag)
{
	/*
	 * Ensure that the allocated pages are zeroed, and that any data
	 * lurking in the kernel direct-mapped region is invalidated.
	 */
	if (PageHighMem(page)) {
		phys_addr_t base = __pfn_to_phys(page_to_pfn(page));
		phys_addr_t end = base + size;
		while (size > 0) {
			void *ptr = kmap_atomic(page);
			memset(ptr, 0, PAGE_SIZE);
			if (coherent_flag != COHERENT)
				dmac_flush_range(ptr, ptr + PAGE_SIZE);
			kunmap_atomic(ptr);
			page++;
			size -= PAGE_SIZE;
		}
		if (coherent_flag != COHERENT)
			outer_flush_range(base, end);
	} else {
		void *ptr = page_address(page);
		memset(ptr, 0, size);
		if (coherent_flag != COHERENT) {
			dmac_flush_range(ptr, ptr + size);
			outer_flush_range(__pa(ptr), __pa(ptr) + size);
		}
	}
}

/*
 * Allocate a DMA buffer for 'dev' of size 'size' using the
 * specified gfp mask.  Note that 'size' must be page aligned.
 */
static struct page *__dma_alloc_buffer(struct device *dev, size_t size,
				       gfp_t gfp, int coherent_flag)
{
	unsigned long order = get_order(size);
	struct page *page, *p, *e;

	page = alloc_pages(gfp, order);
	if (!page)
		return NULL;

	/*
	 * Now split the huge page and free the excess pages
	 */
	split_page(page, order);
	for (p = page + (size >> PAGE_SHIFT), e = page + (1 << order); p < e; p++)
		__free_page(p);

	__dma_clear_buffer(page, size, coherent_flag);

	return page;
}

/*
 * Free a DMA buffer.  'size' must be page aligned.
 */
static void __dma_free_buffer(struct page *page, size_t size)
{
	struct page *e = page + (size >> PAGE_SHIFT);

	while (page < e) {
		__free_page(page);
		page++;
	}
}

static void *__alloc_from_contiguous(struct device *dev, size_t size,
				     pgprot_t prot, struct page **ret_page,
				     const void *caller, bool want_vaddr,
				     int coherent_flag, gfp_t gfp);

static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
				 pgprot_t prot, struct page **ret_page,
				 const void *caller, bool want_vaddr);

#define DEFAULT_DMA_COHERENT_POOL_SIZE	SZ_256K
static struct gen_pool *atomic_pool __ro_after_init;

static size_t atomic_pool_size __initdata = DEFAULT_DMA_COHERENT_POOL_SIZE;

static int __init early_coherent_pool(char *p)
{
	atomic_pool_size = memparse(p, &p);
	return 0;
}
early_param("coherent_pool", early_coherent_pool);

/*
 * Initialise the coherent pool for atomic allocations.
 */
static int __init atomic_pool_init(void)
{
	pgprot_t prot = pgprot_dmacoherent(PAGE_KERNEL);
	gfp_t gfp = GFP_KERNEL | GFP_DMA;
	struct page *page;
	void *ptr;

	atomic_pool = gen_pool_create(PAGE_SHIFT, -1);
	if (!atomic_pool)
		goto out;
	/*
	 * The atomic pool is only used for non-coherent allocations
	 * so we must pass NORMAL for coherent_flag.
	 */
	if (dev_get_cma_area(NULL))
		ptr = __alloc_from_contiguous(NULL, atomic_pool_size, prot,
				      &page, atomic_pool_init, true, NORMAL,
				      GFP_KERNEL);
	else
		ptr = __alloc_remap_buffer(NULL, atomic_pool_size, gfp, prot,
					   &page, atomic_pool_init, true);
	if (ptr) {
		int ret;

		ret = gen_pool_add_virt(atomic_pool, (unsigned long)ptr,
					page_to_phys(page),
					atomic_pool_size, -1);
		if (ret)
			goto destroy_genpool;

		gen_pool_set_algo(atomic_pool,
				gen_pool_first_fit_order_align,
				NULL);
		pr_info("DMA: preallocated %zu KiB pool for atomic coherent allocations\n",
		       atomic_pool_size / 1024);
		return 0;
	}

destroy_genpool:
	gen_pool_destroy(atomic_pool);
	atomic_pool = NULL;
out:
	pr_err("DMA: failed to allocate %zu KiB pool for atomic coherent allocation\n",
	       atomic_pool_size / 1024);
	return -ENOMEM;
}
/*
 * CMA is activated by core_initcall, so we must be called after it.
 */
postcore_initcall(atomic_pool_init);

struct dma_contig_early_reserve {
	phys_addr_t base;
	unsigned long size;
};

static struct dma_contig_early_reserve dma_mmu_remap[MAX_CMA_AREAS] __initdata;

static int dma_mmu_remap_num __initdata;

void __init dma_contiguous_early_fixup(phys_addr_t base, unsigned long size)
{
	dma_mmu_remap[dma_mmu_remap_num].base = base;
	dma_mmu_remap[dma_mmu_remap_num].size = size;
	dma_mmu_remap_num++;
}

void __init dma_contiguous_remap(void)
{
	int i;
	for (i = 0; i < dma_mmu_remap_num; i++) {
		phys_addr_t start = dma_mmu_remap[i].base;
		phys_addr_t end = start + dma_mmu_remap[i].size;
		struct map_desc map;
		unsigned long addr;

		/*
		 * Make start and end PMD_SIZE aligned, observing memory
		 * boundaries
		 */
		if (memblock_is_memory(start & PMD_MASK))
			start = start & PMD_MASK;
		if (memblock_is_memory(ALIGN(end, PMD_SIZE)))
			end = ALIGN(end, PMD_SIZE);

		if (end > arm_lowmem_limit)
			end = arm_lowmem_limit;
		if (start >= end)
			continue;

		map.pfn = __phys_to_pfn(start);
		map.virtual = __phys_to_virt(start);
		map.length = end - start;
		map.type = MT_MEMORY_DMA_READY;

		/*
		 * Clear previous low-memory mapping to ensure that the
		 * TLB does not see any conflicting entries, then flush
		 * the TLB of the old entries before creating new mappings.
		 *
		 * This ensures that any speculatively loaded TLB entries
		 * (even though they may be rare) can not cause any problems,
		 * and ensures that this code is architecturally compliant.
		 */
		for (addr = __phys_to_virt(start); addr < __phys_to_virt(end);
		     addr += PMD_SIZE) {
			pmd_t *pmd;

			pmd = pmd_off_k(addr);
			if (pmd_bad(*pmd))
				pmd_clear(pmd);
		}

		flush_tlb_kernel_range(__phys_to_virt(start),
				       __phys_to_virt(end));

		iotable_init(&map, 1);
	}
}

static int __dma_update_pte(pte_t *pte, unsigned long addr, void *data)
{
	struct page *page = virt_to_page(addr);
	pgprot_t prot = *(pgprot_t *)data;

	set_pte_ext(pte, mk_pte(page, prot), 0);
	return 0;
}

static void __dma_remap(struct page *page, size_t size, pgprot_t prot)
{
	unsigned long start = (unsigned long) page_address(page);
	unsigned end = start + size;

	apply_to_page_range(&init_mm, start, size, __dma_update_pte, &prot);
	flush_tlb_kernel_range(start, end);
}

static void *__alloc_remap_buffer(struct device *dev, size_t size, gfp_t gfp,
				 pgprot_t prot, struct page **ret_page,
				 const void *caller, bool want_vaddr)
{
	struct page *page;
	void *ptr = NULL;
	/*
	 * __alloc_remap_buffer is only called when the device is
	 * non-coherent
	 */
	page = __dma_alloc_buffer(dev, size, gfp, NORMAL);
	if (!page)
		return NULL;
	if (!want_vaddr)
		goto out;

	ptr = dma_common_contiguous_remap(page, size, prot, caller);
	if (!ptr) {
		__dma_free_buffer(page, size);
		return NULL;
	}

 out:
	*ret_page = page;
	return ptr;
}

static void *__alloc_from_pool(size_t size, struct page **ret_page)
{
	unsigned long val;
	void *ptr = NULL;

	if (!atomic_pool) {
		WARN(1, "coherent pool not initialised!\n");
		return NULL;
	}

	val = gen_pool_alloc(atomic_pool, size);
	if (val) {
		phys_addr_t phys = gen_pool_virt_to_phys(atomic_pool, val);

		*ret_page = phys_to_page(phys);
		ptr = (void *)val;
	}

	return ptr;
}

static bool __in_atomic_pool(void *start, size_t size)
{
	return addr_in_gen_pool(atomic_pool, (unsigned long)start, size);
}

static int __free_from_pool(void *start, size_t size)
{
	if (!__in_atomic_pool(start, size))
		return 0;

	gen_pool_free(atomic_pool, (unsigned long)start, size);

	return 1;
}

static void *__alloc_from_contiguous(struct device *dev, size_t size,
				     pgprot_t prot, struct page **ret_page,
				     const void *caller, bool want_vaddr,
				     int coherent_flag, gfp_t gfp)
{
	unsigned long order = get_order(size);
	size_t count = size >> PAGE_SHIFT;
	struct page *page;
	void *ptr = NULL;

	page = dma_alloc_from_contiguous(dev, count, order, gfp & __GFP_NOWARN);
	if (!page)
		return NULL;

	__dma_clear_buffer(page, size, coherent_flag);

	if (!want_vaddr)
		goto out;

	if (PageHighMem(page)) {
		ptr = dma_common_contiguous_remap(page, size, prot, caller);
		if (!ptr) {
			dma_release_from_contiguous(dev, page, count);
			return NULL;
		}
	} else {
		__dma_remap(page, size, prot);
		ptr = page_address(page);
	}

 out:
	*ret_page = page;
	return ptr;
}

static void __free_from_contiguous(struct device *dev, struct page *page,
				   void *cpu_addr, size_t size, bool want_vaddr)
{
	if (want_vaddr) {
		if (PageHighMem(page))
			dma_common_free_remap(cpu_addr, size);
		else
			__dma_remap(page, size, PAGE_KERNEL);
	}
	dma_release_from_contiguous(dev, page, size >> PAGE_SHIFT);
}

static void *__alloc_simple_buffer(struct device *dev, size_t size, gfp_t gfp,
				   struct page **ret_page)
{
	struct page *page;
	/* __alloc_simple_buffer is only called when the device is coherent */
	page = __dma_alloc_buffer(dev, size, gfp, COHERENT);
	if (!page)
		return NULL;

	*ret_page = page;
	return page_address(page);
}

static void *simple_allocator_alloc(struct arm_dma_alloc_args *args,
				    struct page **ret_page)
{
	return __alloc_simple_buffer(args->dev, args->size, args->gfp,
				     ret_page);
}

static void simple_allocator_free(struct arm_dma_free_args *args)
{
	__dma_free_buffer(args->page, args->size);
}

static struct arm_dma_allocator simple_allocator = {
	.alloc = simple_allocator_alloc,
	.free = simple_allocator_free,
};

static void *cma_allocator_alloc(struct arm_dma_alloc_args *args,
				 struct page **ret_page)
{
	return __alloc_from_contiguous(args->dev, args->size, args->prot,
				       ret_page, args->caller,
				       args->want_vaddr, args->coherent_flag,
				       args->gfp);
}

static void cma_allocator_free(struct arm_dma_free_args *args)
{
	__free_from_contiguous(args->dev, args->page, args->cpu_addr,
			       args->size, args->want_vaddr);
}

static struct arm_dma_allocator cma_allocator = {
	.alloc = cma_allocator_alloc,
	.free = cma_allocator_free,
};

static void *pool_allocator_alloc(struct arm_dma_alloc_args *args,
				  struct page **ret_page)
{
	return __alloc_from_pool(args->size, ret_page);
}

static void pool_allocator_free(struct arm_dma_free_args *args)
{
	__free_from_pool(args->cpu_addr, args->size);
}

static struct arm_dma_allocator pool_allocator = {
	.alloc = pool_allocator_alloc,
	.free = pool_allocator_free,
};

static void *remap_allocator_alloc(struct arm_dma_alloc_args *args,
				   struct page **ret_page)
{
	return __alloc_remap_buffer(args->dev, args->size, args->gfp,
				    args->prot, ret_page, args->caller,
				    args->want_vaddr);
}

static void remap_allocator_free(struct arm_dma_free_args *args)
{
	if (args->want_vaddr)
		dma_common_free_remap(args->cpu_addr, args->size);

	__dma_free_buffer(args->page, args->size);
}

static struct arm_dma_allocator remap_allocator = {
	.alloc = remap_allocator_alloc,
	.free = remap_allocator_free,
};
|
|
|
|
static void *__dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
			 gfp_t gfp, pgprot_t prot, bool is_coherent,
			 unsigned long attrs, const void *caller)
{
	u64 mask = get_coherent_dma_mask(dev);
	struct page *page = NULL;
	void *addr;
	bool allowblock, cma;
	struct arm_dma_buffer *buf;
	struct arm_dma_alloc_args args = {
		.dev = dev,
		.size = PAGE_ALIGN(size),
		.gfp = gfp,
		.prot = prot,
		.caller = caller,
		.want_vaddr = ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) == 0),
		.coherent_flag = is_coherent ? COHERENT : NORMAL,
	};

#ifdef CONFIG_DMA_API_DEBUG
	u64 limit = (mask + 1) & ~mask;
	if (limit && size >= limit) {
		dev_warn(dev, "coherent allocation too big (requested %#x mask %#llx)\n",
			size, mask);
		return NULL;
	}
#endif

	if (!mask)
		return NULL;

	buf = kzalloc(sizeof(*buf),
		      gfp & ~(__GFP_DMA | __GFP_DMA32 | __GFP_HIGHMEM));
	if (!buf)
		return NULL;

	if (mask < 0xffffffffULL)
		gfp |= GFP_DMA;

	/*
	 * Following is a work-around (a.k.a. hack) to prevent pages
	 * with __GFP_COMP being passed to split_page() which cannot
	 * handle them. The real problem is that this flag probably
	 * should be 0 on ARM as it is not supported on this
	 * platform; see CONFIG_HUGETLBFS.
	 */
	gfp &= ~(__GFP_COMP);
	args.gfp = gfp;

	*handle = DMA_MAPPING_ERROR;
	allowblock = gfpflags_allow_blocking(gfp);
	cma = allowblock ? dev_get_cma_area(dev) : false;

	if (cma)
		buf->allocator = &cma_allocator;
	else if (is_coherent)
		buf->allocator = &simple_allocator;
	else if (allowblock)
		buf->allocator = &remap_allocator;
	else
		buf->allocator = &pool_allocator;

	addr = buf->allocator->alloc(&args, &page);

	if (page) {
		unsigned long flags;

		*handle = pfn_to_dma(dev, page_to_pfn(page));
		buf->virt = args.want_vaddr ? addr : page;

		spin_lock_irqsave(&arm_dma_bufs_lock, flags);
		list_add(&buf->list, &arm_dma_bufs);
		spin_unlock_irqrestore(&arm_dma_bufs_lock, flags);
	} else {
		kfree(buf);
	}

	return args.want_vaddr ? addr : page;
}
/*
 * Allocate DMA-coherent memory space and return both the kernel remapped
 * virtual and bus address for that space.
 */
void *arm_dma_alloc(struct device *dev, size_t size, dma_addr_t *handle,
		    gfp_t gfp, unsigned long attrs)
{
	pgprot_t prot = __get_dma_pgprot(attrs, PAGE_KERNEL, false);

	return __dma_alloc(dev, size, handle, gfp, prot, false,
			   attrs, __builtin_return_address(0));
}

static void *arm_coherent_dma_alloc(struct device *dev, size_t size,
	dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
	return __dma_alloc(dev, size, handle, gfp, PAGE_KERNEL, true,
			   attrs, __builtin_return_address(0));
}

static int __arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
		 unsigned long attrs)
{
	int ret = -ENXIO;
	unsigned long nr_vma_pages = vma_pages(vma);
	unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	unsigned long pfn = dma_to_pfn(dev, dma_addr);
	unsigned long off = vma->vm_pgoff;

	if (dma_mmap_from_dev_coherent(dev, vma, cpu_addr, size, &ret))
		return ret;

	if (off < nr_pages && nr_vma_pages <= (nr_pages - off)) {
		ret = remap_pfn_range(vma, vma->vm_start,
				      pfn + off,
				      vma->vm_end - vma->vm_start,
				      vma->vm_page_prot);
	}

	return ret;
}

/*
 * Create userspace mapping for the DMA-coherent memory.
 */
static int arm_coherent_dma_mmap(struct device *dev, struct vm_area_struct *vma,
		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
		 unsigned long attrs)
{
	return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
}

int arm_dma_mmap(struct device *dev, struct vm_area_struct *vma,
		 void *cpu_addr, dma_addr_t dma_addr, size_t size,
		 unsigned long attrs)
{
	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot,
					false);
	return __arm_dma_mmap(dev, vma, cpu_addr, dma_addr, size, attrs);
}
/*
 * Free a buffer as defined by the above mapping.
 */
static void __arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
			   dma_addr_t handle, unsigned long attrs,
			   bool is_coherent)
{
	struct page *page = pfn_to_page(dma_to_pfn(dev, handle));
	struct arm_dma_buffer *buf;
	struct arm_dma_free_args args = {
		.dev = dev,
		.size = PAGE_ALIGN(size),
		.cpu_addr = cpu_addr,
		.page = page,
		.want_vaddr = ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) == 0),
	};
	void *addr = (args.want_vaddr) ? cpu_addr : page;

	buf = arm_dma_buffer_find(addr);
	if (WARN(!buf, "Freeing invalid buffer %pK\n", addr))
		return;

	buf->allocator->free(&args);
	kfree(buf);
}

void arm_dma_free(struct device *dev, size_t size, void *cpu_addr,
		  dma_addr_t handle, unsigned long attrs)
{
	__arm_dma_free(dev, size, cpu_addr, handle, attrs, false);
}

static void arm_coherent_dma_free(struct device *dev, size_t size, void *cpu_addr,
				  dma_addr_t handle, unsigned long attrs)
{
	__arm_dma_free(dev, size, cpu_addr, handle, attrs, true);
}

int arm_dma_get_sgtable(struct device *dev, struct sg_table *sgt,
		 void *cpu_addr, dma_addr_t handle, size_t size,
		 unsigned long attrs)
{
	unsigned long pfn = dma_to_pfn(dev, handle);
	struct page *page;
	int ret;

	/* If the PFN is not valid, we do not have a struct page */
	if (!pfn_valid(pfn))
		return -ENXIO;

	page = pfn_to_page(pfn);

	ret = sg_alloc_table(sgt, 1, GFP_KERNEL);
	if (unlikely(ret))
		return ret;

	sg_set_page(sgt->sgl, page, PAGE_ALIGN(size), 0);
	return 0;
}
static void dma_cache_maint_page(struct page *page, unsigned long offset,
	size_t size, enum dma_data_direction dir,
	void (*op)(const void *, size_t, int))
{
	unsigned long pfn;
	size_t left = size;

	pfn = page_to_pfn(page) + offset / PAGE_SIZE;
	offset %= PAGE_SIZE;

	/*
	 * A single sg entry may refer to multiple physically contiguous
	 * pages. But we still need to process highmem pages individually.
	 * If highmem is not configured then the bulk of this loop gets
	 * optimized out.
	 */
	do {
		size_t len = left;
		void *vaddr;

		page = pfn_to_page(pfn);

		if (PageHighMem(page)) {
			if (len + offset > PAGE_SIZE)
				len = PAGE_SIZE - offset;

			if (cache_is_vipt_nonaliasing()) {
				vaddr = kmap_atomic(page);
				op(vaddr + offset, len, dir);
				kunmap_atomic(vaddr);
			} else {
				vaddr = kmap_high_get(page);
				if (vaddr) {
					op(vaddr + offset, len, dir);
					kunmap_high(page);
				}
			}
		} else {
			vaddr = page_address(page) + offset;
			op(vaddr, len, dir);
		}
		offset = 0;
		pfn++;
		left -= len;
	} while (left);
}

/*
 * Make an area consistent for devices.
 * Note: Drivers should NOT use this function directly, as it will break
 * platforms with CONFIG_DMABOUNCE.
 * Use the driver DMA support - see dma-mapping.h (dma_sync_*)
 */
static void __dma_page_cpu_to_dev(struct page *page, unsigned long off,
	size_t size, enum dma_data_direction dir)
{
	phys_addr_t paddr;

	dma_cache_maint_page(page, off, size, dir, dmac_map_area);

	paddr = page_to_phys(page) + off;
	if (dir == DMA_FROM_DEVICE) {
		outer_inv_range(paddr, paddr + size);
	} else {
		outer_clean_range(paddr, paddr + size);
	}
	/* FIXME: non-speculating: flush on bidirectional mappings? */
}

static void __dma_page_dev_to_cpu(struct page *page, unsigned long off,
	size_t size, enum dma_data_direction dir)
{
	phys_addr_t paddr = page_to_phys(page) + off;

	/* FIXME: non-speculating: not required */
	/* in any case, don't bother invalidating if DMA to device */
	if (dir != DMA_TO_DEVICE) {
		outer_inv_range(paddr, paddr + size);

		dma_cache_maint_page(page, off, size, dir, dmac_unmap_area);
	}

	/*
	 * Mark the D-cache clean for these pages to avoid extra flushing.
	 */
	if (dir != DMA_TO_DEVICE && size >= PAGE_SIZE) {
		unsigned long pfn;
		size_t left = size;

		pfn = page_to_pfn(page) + off / PAGE_SIZE;
		off %= PAGE_SIZE;
		if (off) {
			pfn++;
			left -= PAGE_SIZE - off;
		}
		while (left >= PAGE_SIZE) {
			page = pfn_to_page(pfn++);
			set_bit(PG_dcache_clean, &page->flags);
			left -= PAGE_SIZE;
		}
	}
}
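/*
 * Illustrative note (not part of the original file): the two helpers above
 * implement the streaming-DMA ownership handoff. A driver never calls them
 * directly; it goes through the generic DMA API, which lands here, e.g.:
 *
 *	dma_addr_t d = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
 *	// ...device writes into the buffer via 'd'...
 *	dma_unmap_single(dev, d, len, DMA_FROM_DEVICE);
 *
 * Mapping performs the cpu-to-dev maintenance (clean or invalidate,
 * depending on direction); unmapping performs the dev-to-cpu maintenance
 * so the CPU observes the device's writes.
 */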
/**
 * arm_dma_map_sg - map a set of SG buffers for streaming mode DMA
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @sg: list of buffers
 * @nents: number of buffers to map
 * @dir: DMA transfer direction
 *
 * Map a set of buffers described by scatterlist in streaming mode for DMA.
 * This is the scatter-gather version of the dma_map_single interface.
 * Here the scatter gather list elements are each tagged with the
 * appropriate dma address and length. They are obtained via
 * sg_dma_{address,length}.
 *
 * Device ownership issues as mentioned for dma_map_single are the same
 * here.
 */
int arm_dma_map_sg(struct device *dev, struct scatterlist *sg, int nents,
		enum dma_data_direction dir, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	struct scatterlist *s;
	int i, j;

	for_each_sg(sg, s, nents, i) {
#ifdef CONFIG_NEED_SG_DMA_LENGTH
		s->dma_length = s->length;
#endif
		s->dma_address = ops->map_page(dev, sg_page(s), s->offset,
						s->length, dir, attrs);
		if (dma_mapping_error(dev, s->dma_address))
			goto bad_mapping;
	}
	return nents;

 bad_mapping:
	for_each_sg(sg, s, i, j)
		ops->unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir, attrs);
	return 0;
}

/**
 * arm_dma_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @sg: list of buffers
 * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 *
 * Unmap a set of streaming mode DMA translations. Again, CPU access
 * rules concerning calls here are the same as for dma_unmap_single().
 */
void arm_dma_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
		enum dma_data_direction dir, unsigned long attrs)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i)
		ops->unmap_page(dev, sg_dma_address(s), sg_dma_len(s), dir, attrs);
}
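/*
 * Illustrative usage (not part of the original file): drivers reach
 * arm_dma_map_sg()/arm_dma_unmap_sg() through the generic DMA API, e.g.:
 *
 *	int n = dma_map_sg(dev, sgl, nents, DMA_TO_DEVICE);
 *	struct scatterlist *s;
 *	int i;
 *
 *	for_each_sg(sgl, s, n, i)
 *		program_hw(sg_dma_address(s), sg_dma_len(s));
 *	dma_unmap_sg(dev, sgl, nents, DMA_TO_DEVICE);
 *
 * program_hw() is a hypothetical device-specific helper; note that the
 * mapped-entry loop iterates over the returned count 'n', not 'nents'.
 */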
/**
 * arm_dma_sync_sg_for_cpu
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @sg: list of buffers
 * @nents: number of buffers to map (returned from dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 */
void arm_dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			int nents, enum dma_data_direction dir)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i)
		ops->sync_single_for_cpu(dev, sg_dma_address(s), s->length,
					 dir);
}

/**
 * arm_dma_sync_sg_for_device
 * @dev: valid struct device pointer, or NULL for ISA and EISA-like devices
 * @sg: list of buffers
 * @nents: number of buffers to map (returned from dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 */
void arm_dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
			int nents, enum dma_data_direction dir)
{
	const struct dma_map_ops *ops = get_dma_ops(dev);
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i)
		ops->sync_single_for_device(dev, sg_dma_address(s), s->length,
					    dir);
}

/*
 * Return whether the given device DMA address mask can be supported
 * properly. For example, if your device can only drive the low 24-bits
 * during bus mastering, then you would pass 0x00ffffff as the mask
 * to this function.
 */
int arm_dma_supported(struct device *dev, u64 mask)
{
	return __dma_supported(dev, mask, false);
}
static const struct dma_map_ops *arm_get_dma_map_ops(bool coherent)
{
	/*
	 * When CONFIG_ARM_LPAE is set, physical address can extend above
	 * 32-bits, which then can't be addressed by devices that only
	 * support 32-bit DMA.
	 * Use the generic dma-direct / swiotlb ops code in that case, as that
	 * handles bounce buffering for us.
	 */
	if (IS_ENABLED(CONFIG_ARM_LPAE))
		return NULL;
	return coherent ? &arm_coherent_dma_ops : &arm_dma_ops;
}

#ifdef CONFIG_ARM_DMA_USE_IOMMU

static bool is_dma_coherent(struct device *dev, unsigned long attrs,
			    bool is_coherent)
{
	if (attrs & DMA_ATTR_FORCE_COHERENT)
		is_coherent = true;
	else if (attrs & DMA_ATTR_FORCE_NON_COHERENT)
		is_coherent = false;
	else if (dev->archdata.dma_coherent)
		is_coherent = true;

	return is_coherent;
}

static int __dma_info_to_prot(enum dma_data_direction dir, unsigned long attrs)
{
	int prot = 0;

	if (attrs & DMA_ATTR_PRIVILEGED)
		prot |= IOMMU_PRIV;

	switch (dir) {
	case DMA_BIDIRECTIONAL:
		return prot | IOMMU_READ | IOMMU_WRITE;
	case DMA_TO_DEVICE:
		return prot | IOMMU_READ;
	case DMA_FROM_DEVICE:
		return prot | IOMMU_WRITE;
	default:
		return prot;
	}
}
/* IOMMU */

static int extend_iommu_mapping(struct dma_iommu_mapping *mapping);

static inline dma_addr_t __alloc_iova(struct dma_iommu_mapping *mapping,
				      size_t size)
{
	unsigned int order = get_order(size);
	unsigned int align = 0;
	unsigned int count, start;
	size_t mapping_size = mapping->bits << PAGE_SHIFT;
	unsigned long flags;
	dma_addr_t iova;
	int i;

	if (order > CONFIG_ARM_DMA_IOMMU_ALIGNMENT)
		order = CONFIG_ARM_DMA_IOMMU_ALIGNMENT;

	count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	align = (1 << order) - 1;

	spin_lock_irqsave(&mapping->lock, flags);
	for (i = 0; i < mapping->nr_bitmaps; i++) {
		start = bitmap_find_next_zero_area(mapping->bitmaps[i],
				mapping->bits, 0, count, align);

		if (start > mapping->bits)
			continue;

		bitmap_set(mapping->bitmaps[i], start, count);
		break;
	}

	/*
	 * No unused range found. Try to extend the existing mapping
	 * and perform a second attempt to reserve an IO virtual
	 * address range of size bytes.
	 */
	if (i == mapping->nr_bitmaps) {
		if (extend_iommu_mapping(mapping)) {
			spin_unlock_irqrestore(&mapping->lock, flags);
			return DMA_MAPPING_ERROR;
		}

		start = bitmap_find_next_zero_area(mapping->bitmaps[i],
				mapping->bits, 0, count, align);

		if (start > mapping->bits) {
			spin_unlock_irqrestore(&mapping->lock, flags);
			return DMA_MAPPING_ERROR;
		}

		bitmap_set(mapping->bitmaps[i], start, count);
	}
	spin_unlock_irqrestore(&mapping->lock, flags);

	iova = mapping->base + (mapping_size * i);
	iova += start << PAGE_SHIFT;

	return iova;
}

static inline void __free_iova(struct dma_iommu_mapping *mapping,
			       dma_addr_t addr, size_t size)
{
	unsigned int start, count;
	size_t mapping_size = mapping->bits << PAGE_SHIFT;
	unsigned long flags;
	dma_addr_t bitmap_base;
	u32 bitmap_index;

	if (!size)
		return;

	bitmap_index = (u32) (addr - mapping->base) / (u32) mapping_size;
	BUG_ON(addr < mapping->base || bitmap_index > mapping->extensions);

	bitmap_base = mapping->base + mapping_size * bitmap_index;

	start = (addr - bitmap_base) >> PAGE_SHIFT;

	if (addr + size > bitmap_base + mapping_size) {
		/*
		 * The address range to be freed reaches into the iova
		 * range of the next bitmap. This should not happen as
		 * we don't allow this in __alloc_iova (at the
		 * moment).
		 */
		BUG();
	} else
		count = size >> PAGE_SHIFT;

	spin_lock_irqsave(&mapping->lock, flags);
	bitmap_clear(mapping->bitmaps[bitmap_index], start, count);
	spin_unlock_irqrestore(&mapping->lock, flags);
}
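/*
 * Worked example (not part of the original file): with 4 KiB pages, a
 * request for size = 0x3000 gives count = 3 pages and get_order() = 2,
 * so align = (1 << 2) - 1 = 3 and the IOVA is aligned to 4 pages.
 * If a free bit 'start' is found in bitmap i, the address handed back is
 * mapping->base + i * (mapping->bits << PAGE_SHIFT) + (start << PAGE_SHIFT),
 * and __free_iova() inverts exactly this arithmetic to locate the bitmap.
 */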
/* We'll try 2M, 1M, 64K, and finally 4K; array must end with 0! */
static const int iommu_order_array[] = { 9, 8, 4, 0 };

static struct page **__iommu_alloc_buffer(struct device *dev, size_t size,
					  gfp_t gfp, unsigned long attrs,
					  int coherent_flag)
{
	struct page **pages;
	size_t count = size >> PAGE_SHIFT;
	size_t array_size = count * sizeof(struct page *);
	int i = 0;
	int order_idx = 0;

	if (array_size <= PAGE_SIZE)
		pages = kzalloc(array_size, GFP_KERNEL);
	else
		pages = vzalloc(array_size);
	if (!pages)
		return NULL;

	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
		unsigned long order = get_order(size);
		struct page *page;

		page = dma_alloc_from_contiguous(dev, count, order,
						 gfp & __GFP_NOWARN);
		if (!page)
			goto error;

		__dma_clear_buffer(page, size, coherent_flag);

		for (i = 0; i < count; i++)
			pages[i] = page + i;

		return pages;
	}

	/* Go straight to 4K chunks if caller says it's OK. */
	if (attrs & DMA_ATTR_ALLOC_SINGLE_PAGES)
		order_idx = ARRAY_SIZE(iommu_order_array) - 1;

	/*
	 * IOMMU can map any pages, so highmem can also be used here.
	 */
	gfp |= __GFP_NOWARN | __GFP_HIGHMEM;

	while (count) {
		int j, order;

		order = iommu_order_array[order_idx];

		/* Drop down when we get small */
		if (__fls(count) < order) {
			order_idx++;
			continue;
		}

		if (order) {
			/* See if it's easy to allocate a high-order chunk */
			pages[i] = alloc_pages(gfp | __GFP_NORETRY, order);

			/* Go down a notch at first sign of pressure */
			if (!pages[i]) {
				order_idx++;
				continue;
			}
		} else {
			pages[i] = alloc_pages(gfp, 0);
			if (!pages[i])
				goto error;
		}

		if (order) {
			split_page(pages[i], order);
			j = 1 << order;
			while (--j)
				pages[i + j] = pages[i] + j;
		}

		__dma_clear_buffer(pages[i], PAGE_SIZE << order, coherent_flag);
		i += 1 << order;
		count -= 1 << order;
	}

	return pages;
error:
	while (i--)
		if (pages[i])
			__free_pages(pages[i], 0);
	kvfree(pages);
	return NULL;
}

static int __iommu_free_buffer(struct device *dev, struct page **pages,
			       size_t size, unsigned long attrs)
{
	int count = size >> PAGE_SHIFT;
	int i;

	if (attrs & DMA_ATTR_FORCE_CONTIGUOUS) {
		dma_release_from_contiguous(dev, pages[0], count);
	} else {
		for (i = 0; i < count; i++)
			if (pages[i])
				__free_pages(pages[i], 0);
	}

	kvfree(pages);
	return 0;
}
/*
 * Create a mapping in device IO address space for specified pages
 */
static dma_addr_t
__iommu_create_mapping(struct device *dev, struct page **pages, size_t size,
		       int coherent_flag)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	dma_addr_t dma_addr, iova;
	int i;
	int prot = IOMMU_READ | IOMMU_WRITE;

	dma_addr = __alloc_iova(mapping, size);
	if (dma_addr == DMA_MAPPING_ERROR)
		return dma_addr;
	prot |= coherent_flag ? IOMMU_CACHE : 0;

	iova = dma_addr;
	for (i = 0; i < count; ) {
		int ret;

		unsigned int next_pfn = page_to_pfn(pages[i]) + 1;
		phys_addr_t phys = page_to_phys(pages[i]);
		unsigned int len, j;

		for (j = i + 1; j < count; j++, next_pfn++)
			if (page_to_pfn(pages[j]) != next_pfn)
				break;

		len = (j - i) << PAGE_SHIFT;
		ret = iommu_map(mapping->domain, iova, phys, len, prot);
		if (ret < 0)
			goto fail;
		iova += len;
		i = j;
	}
	return dma_addr;
fail:
	iommu_unmap(mapping->domain, dma_addr, iova - dma_addr);
	__free_iova(mapping, dma_addr, size);
	return DMA_MAPPING_ERROR;
}

static int __iommu_remove_mapping(struct device *dev, dma_addr_t iova, size_t size)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);

	/*
	 * add optional in-page offset from iova to size and align
	 * result to page size
	 */
	size = PAGE_ALIGN((iova & ~PAGE_MASK) + size);
	iova &= PAGE_MASK;

	iommu_unmap(mapping->domain, iova, size);
	__free_iova(mapping, iova, size);
	return 0;
}

static struct page **__atomic_get_pages(void *addr)
{
	struct page *page;
	phys_addr_t phys;

	phys = gen_pool_virt_to_phys(atomic_pool, (unsigned long)addr);
	page = phys_to_page(phys);

	return (struct page **)page;
}

static struct page **__iommu_get_pages(void *cpu_addr, unsigned long attrs)
{
	if (__in_atomic_pool(cpu_addr, PAGE_SIZE))
		return __atomic_get_pages(cpu_addr);

	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
		return cpu_addr;

	return dma_common_find_pages(cpu_addr);
}
static void *__iommu_alloc_simple(struct device *dev, size_t size, gfp_t gfp,
				  dma_addr_t *handle, int coherent_flag,
				  unsigned long attrs)
{
	struct page *page;
	void *addr;

	if (coherent_flag == COHERENT)
		addr = __alloc_simple_buffer(dev, size, gfp, &page);
	else
		addr = __alloc_from_pool(size, &page);
	if (!addr)
		return NULL;

	*handle = __iommu_create_mapping(dev, &page, size, coherent_flag);
	if (*handle == DMA_MAPPING_ERROR)
		goto err_mapping;

	return addr;

err_mapping:
	__free_from_pool(addr, size);
	return NULL;
}

static void __iommu_free_atomic(struct device *dev, void *cpu_addr,
			dma_addr_t handle, size_t size, int coherent_flag)
{
	__iommu_remove_mapping(dev, handle, size);
	if (coherent_flag == COHERENT)
		__dma_free_buffer(virt_to_page(cpu_addr), size);
	else
		__free_from_pool(cpu_addr, size);
}
static void *__arm_iommu_alloc_attrs(struct device *dev, size_t size,
	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs,
	    int coherent_flag)
{
	struct page **pages;
	void *addr = NULL;
	pgprot_t prot;

	*handle = DMA_MAPPING_ERROR;
	size = PAGE_ALIGN(size);

	if (coherent_flag == COHERENT || !gfpflags_allow_blocking(gfp))
		return __iommu_alloc_simple(dev, size, gfp, handle,
					    coherent_flag, attrs);

	coherent_flag = is_dma_coherent(dev, attrs, coherent_flag);
	prot = __get_dma_pgprot(attrs, PAGE_KERNEL, coherent_flag);
	/*
	 * Following is a work-around (a.k.a. hack) to prevent pages
	 * with __GFP_COMP being passed to split_page() which cannot
	 * handle them. The real problem is that this flag probably
	 * should be 0 on ARM as it is not supported on this
	 * platform; see CONFIG_HUGETLBFS.
	 */
	gfp &= ~(__GFP_COMP);

	pages = __iommu_alloc_buffer(dev, size, gfp, attrs, coherent_flag);
	if (!pages)
		return NULL;

	*handle = __iommu_create_mapping(dev, pages, size, coherent_flag);
	if (*handle == DMA_MAPPING_ERROR)
		goto err_buffer;

	if (attrs & DMA_ATTR_NO_KERNEL_MAPPING)
		return pages;

	addr = dma_common_pages_remap(pages, size, prot,
				      __builtin_return_address(0));
	if (!addr)
		goto err_mapping;

	return addr;

err_mapping:
	__iommu_remove_mapping(dev, *handle, size);
err_buffer:
	__iommu_free_buffer(dev, pages, size, attrs);
	return NULL;
}

static void *arm_iommu_alloc_attrs(struct device *dev, size_t size,
	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
	return __arm_iommu_alloc_attrs(dev, size, handle, gfp, attrs, NORMAL);
}

static void *arm_coherent_iommu_alloc_attrs(struct device *dev, size_t size,
	    dma_addr_t *handle, gfp_t gfp, unsigned long attrs)
{
	return __arm_iommu_alloc_attrs(dev, size, handle, gfp, attrs, COHERENT);
}
static int __arm_iommu_mmap_attrs(struct device *dev, struct vm_area_struct *vma,
		    void *cpu_addr, dma_addr_t dma_addr, size_t size,
		    unsigned long attrs)
{
	struct page **pages = __iommu_get_pages(cpu_addr, attrs);
	unsigned long nr_pages = PAGE_ALIGN(size) >> PAGE_SHIFT;
	int err;

	if (!pages)
		return -ENXIO;

	if (vma->vm_pgoff >= nr_pages)
		return -ENXIO;

	err = vm_map_pages(vma, pages, nr_pages);
	if (err)
		pr_err("Remapping memory failed: %d\n", err);

	return err;
}

static int arm_iommu_mmap_attrs(struct device *dev,
		struct vm_area_struct *vma, void *cpu_addr,
		dma_addr_t dma_addr, size_t size, unsigned long attrs)
{
	vma->vm_page_prot = __get_dma_pgprot(attrs, vma->vm_page_prot,
				is_dma_coherent(dev, attrs, NORMAL));

	return __arm_iommu_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, attrs);
}

static int arm_coherent_iommu_mmap_attrs(struct device *dev,
		struct vm_area_struct *vma, void *cpu_addr,
		dma_addr_t dma_addr, size_t size, unsigned long attrs)
{
	return __arm_iommu_mmap_attrs(dev, vma, cpu_addr, dma_addr, size, attrs);
}

/*
 * free a page as defined by the above mapping.
 * Must not be called with IRQs disabled.
 */
void __arm_iommu_free_attrs(struct device *dev, size_t size, void *cpu_addr,
	dma_addr_t handle, unsigned long attrs, int coherent_flag)
{
	struct page **pages;
	size = PAGE_ALIGN(size);

	if (coherent_flag == COHERENT || __in_atomic_pool(cpu_addr, size)) {
		__iommu_free_atomic(dev, cpu_addr, handle, size, coherent_flag);
		return;
	}

	pages = __iommu_get_pages(cpu_addr, attrs);
	if (!pages) {
		WARN(1, "trying to free invalid coherent area: %p\n", cpu_addr);
		return;
	}

	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) == 0)
		dma_common_free_remap(cpu_addr, size);

	__iommu_remove_mapping(dev, handle, size);
	__iommu_free_buffer(dev, pages, size, attrs);
}

void arm_iommu_free_attrs(struct device *dev, size_t size,
		    void *cpu_addr, dma_addr_t handle, unsigned long attrs)
{
	__arm_iommu_free_attrs(dev, size, cpu_addr, handle, attrs,
				is_dma_coherent(dev, attrs, NORMAL));
}

void arm_coherent_iommu_free_attrs(struct device *dev, size_t size,
		    void *cpu_addr, dma_addr_t handle, unsigned long attrs)
{
	__arm_iommu_free_attrs(dev, size, cpu_addr, handle, attrs, COHERENT);
}

static int arm_iommu_get_sgtable(struct device *dev, struct sg_table *sgt,
				 void *cpu_addr, dma_addr_t dma_addr,
				 size_t size, unsigned long attrs)
{
	unsigned int count = PAGE_ALIGN(size) >> PAGE_SHIFT;
	struct page **pages = __iommu_get_pages(cpu_addr, attrs);

	if (!pages)
		return -ENXIO;

	return sg_alloc_table_from_pages(sgt, pages, count, 0, size,
					 GFP_KERNEL);
}
/*
 * Map a part of the scatter-gather list into contiguous io address space
 */
static int __map_sg_chunk(struct device *dev, struct scatterlist *sg,
			  size_t size, dma_addr_t *handle,
			  enum dma_data_direction dir, unsigned long attrs,
			  bool is_coherent)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t iova, iova_base;
	int ret = 0;
	unsigned int count;
	struct scatterlist *s;
	int prot;

	size = PAGE_ALIGN(size);
	*handle = DMA_MAPPING_ERROR;

	iova_base = iova = __alloc_iova(mapping, size);
	if (iova == DMA_MAPPING_ERROR)
		return -ENOMEM;

	for (count = 0, s = sg; count < (size >> PAGE_SHIFT); s = sg_next(s)) {
		phys_addr_t phys = page_to_phys(sg_page(s));
		unsigned int len = PAGE_ALIGN(s->offset + s->length);

		if (!is_coherent && (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
			__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);

		prot = __dma_info_to_prot(dir, attrs);

		ret = iommu_map(mapping->domain, iova, phys, len, prot);
		if (ret < 0)
			goto fail;
		count += len >> PAGE_SHIFT;
		iova += len;
	}
	*handle = iova_base;

	return 0;
fail:
	iommu_unmap(mapping->domain, iova_base, count * PAGE_SIZE);
	__free_iova(mapping, iova_base, size);
	return ret;
}

static int __iommu_map_sg(struct device *dev, struct scatterlist *sg, int nents,
		     enum dma_data_direction dir, unsigned long attrs,
		     bool is_coherent)
{
	struct scatterlist *s = sg, *dma = sg, *start = sg;
	int i, count = 0;
	unsigned int offset = s->offset;
	unsigned int size = s->offset + s->length;
	unsigned int max = dma_get_max_seg_size(dev);

	for (i = 1; i < nents; i++) {
		s = sg_next(s);

		s->dma_address = DMA_MAPPING_ERROR;
		s->dma_length = 0;

		if (s->offset || (size & ~PAGE_MASK) || size + s->length > max) {
			if (__map_sg_chunk(dev, start, size, &dma->dma_address,
			    dir, attrs, is_coherent) < 0)
				goto bad_mapping;

			dma->dma_address += offset;
			dma->dma_length = size - offset;

			size = offset = s->offset;
			start = s;
			dma = sg_next(dma);
			count += 1;
		}
		size += s->length;
	}
	if (__map_sg_chunk(dev, start, size, &dma->dma_address, dir, attrs,
		is_coherent) < 0)
		goto bad_mapping;

	dma->dma_address += offset;
	dma->dma_length = size - offset;

	return count+1;

bad_mapping:
	for_each_sg(sg, s, count, i)
		__iommu_remove_mapping(dev, sg_dma_address(s), sg_dma_len(s));
	return 0;
}
/**
 * arm_coherent_iommu_map_sg - map a set of SG buffers for streaming mode DMA
 * @dev: valid struct device pointer
 * @sg: list of buffers
 * @nents: number of buffers to map
 * @dir: DMA transfer direction
 *
 * Map a set of i/o coherent buffers described by scatterlist in streaming
 * mode for DMA. The scatter gather list elements are merged together (if
 * possible) and tagged with the appropriate dma address and length. They are
 * obtained via sg_dma_{address,length}.
 */
int arm_coherent_iommu_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction dir, unsigned long attrs)
{
	return __iommu_map_sg(dev, sg, nents, dir, attrs, true);
}

/**
 * arm_iommu_map_sg - map a set of SG buffers for streaming mode DMA
 * @dev: valid struct device pointer
 * @sg: list of buffers
 * @nents: number of buffers to map
 * @dir: DMA transfer direction
 *
 * Map a set of buffers described by scatterlist in streaming mode for DMA.
 * The scatter gather list elements are merged together (if possible) and
 * tagged with the appropriate dma address and length. They are obtained via
 * sg_dma_{address,length}.
 */
int arm_iommu_map_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction dir, unsigned long attrs)
{
	struct scatterlist *s;
	int i;
	size_t ret;
	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
	unsigned int total_length = 0, current_offset = 0;
	dma_addr_t iova;
	int prot = __dma_info_to_prot(dir, attrs);
	bool coherent;

	for_each_sg(sg, s, nents, i)
		total_length += s->length;

	iova = __alloc_iova(mapping, total_length);
	if (iova == DMA_MAPPING_ERROR)
		return 0;

	coherent = of_dma_is_coherent(dev->of_node);
	prot |= is_dma_coherent(dev, attrs, coherent) ? IOMMU_CACHE : 0;

	ret = iommu_map_sg(mapping->domain, iova, sg, nents, prot);
	if (ret != total_length) {
		__free_iova(mapping, iova, total_length);
		return 0;
	}

	for_each_sg(sg, s, nents, i) {
		s->dma_address = iova + current_offset;
		if (i == 0)
			s->dma_length = total_length;
		else
			s->dma_length = 0;
		current_offset += s->length;
	}

	return nents;
}
static void __iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
			     int nents, enum dma_data_direction dir,
			     unsigned long attrs, bool is_coherent)
{
	struct scatterlist *s;
	int i;

	for_each_sg(sg, s, nents, i) {
		if (sg_dma_len(s))
			__iommu_remove_mapping(dev, sg_dma_address(s),
					       sg_dma_len(s));
		if (!is_coherent && (attrs & DMA_ATTR_SKIP_CPU_SYNC) == 0)
			__dma_page_dev_to_cpu(sg_page(s), s->offset,
					      s->length, dir);
	}
}

/**
 * arm_coherent_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
 * @dev: valid struct device pointer
 * @sg: list of buffers
 * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 *
 * Unmap a set of streaming mode DMA translations. Again, CPU access
 * rules concerning calls here are the same as for dma_unmap_single().
 */
void arm_coherent_iommu_unmap_sg(struct device *dev, struct scatterlist *sg,
		int nents, enum dma_data_direction dir,
		unsigned long attrs)
{
	__iommu_unmap_sg(dev, sg, nents, dir, attrs, true);
}

/**
 * arm_iommu_unmap_sg - unmap a set of SG buffers mapped by dma_map_sg
 * @dev: valid struct device pointer
 * @sg: list of buffers
 * @nents: number of buffers to unmap (same as was passed to dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 *
 * Unmap a set of streaming mode DMA translations. Again, CPU access
 * rules concerning calls here are the same as for dma_unmap_single().
 */
void arm_iommu_unmap_sg(struct device *dev, struct scatterlist *sg, int nents,
			enum dma_data_direction dir,
			unsigned long attrs)
{
	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
	unsigned int total_length = sg_dma_len(sg);
	dma_addr_t iova = sg_dma_address(sg);

	total_length = PAGE_ALIGN((iova & ~PAGE_MASK) + total_length);
	iova &= PAGE_MASK;

	iommu_unmap(mapping->domain, iova, total_length);
	__free_iova(mapping, iova, total_length);
}
/**
 * arm_iommu_sync_sg_for_cpu
 * @dev: valid struct device pointer
 * @sg: list of buffers
 * @nents: number of buffers to map (returned from dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 */
void arm_iommu_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg,
			       int nents, enum dma_data_direction dir)
{
	struct scatterlist *s;
	int i;
	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
	dma_addr_t iova = sg_dma_address(sg);
	bool iova_coherent = iommu_is_iova_coherent(mapping->domain, iova);

	if (iova_coherent)
		return;

	for_each_sg(sg, s, nents, i)
		__dma_page_dev_to_cpu(sg_page(s), s->offset, s->length, dir);
}

/**
 * arm_iommu_sync_sg_for_device
 * @dev: valid struct device pointer
 * @sg: list of buffers
 * @nents: number of buffers to map (returned from dma_map_sg)
 * @dir: DMA transfer direction (same as was passed to dma_map_sg)
 */
void arm_iommu_sync_sg_for_device(struct device *dev, struct scatterlist *sg,
				  int nents, enum dma_data_direction dir)
{
	struct scatterlist *s;
	int i;
	struct dma_iommu_mapping *mapping = dev->archdata.mapping;
	dma_addr_t iova = sg_dma_address(sg);
	bool iova_coherent = iommu_is_iova_coherent(mapping->domain, iova);

	if (iova_coherent)
		return;

	for_each_sg(sg, s, nents, i)
		__dma_page_cpu_to_dev(sg_page(s), s->offset, s->length, dir);
}

/**
 * arm_coherent_iommu_map_page
 * @dev: valid struct device pointer
 * @page: page that buffer resides in
 * @offset: offset into page for start of buffer
 * @size: size of buffer to map
 * @dir: DMA transfer direction
 *
 * Coherent IOMMU aware version of arm_dma_map_page()
 */
static dma_addr_t arm_coherent_iommu_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size, enum dma_data_direction dir,
	     unsigned long attrs)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t dma_addr;
	int ret, prot, len, start_offset, map_offset;

	map_offset = offset & ~PAGE_MASK;
	start_offset = offset & PAGE_MASK;
	len = PAGE_ALIGN(map_offset + size);

	dma_addr = __alloc_iova(mapping, len);
	if (dma_addr == DMA_MAPPING_ERROR)
		return dma_addr;

	prot = __dma_info_to_prot(dir, attrs);

	ret = iommu_map(mapping->domain, dma_addr, page_to_phys(page) +
			start_offset, len, prot);
	if (ret < 0)
		goto fail;

	return dma_addr + map_offset;
fail:
	__free_iova(mapping, dma_addr, len);
	return DMA_MAPPING_ERROR;
}

/**
 * arm_iommu_map_page
 * @dev: valid struct device pointer
 * @page: page that buffer resides in
 * @offset: offset into page for start of buffer
 * @size: size of buffer to map
 * @dir: DMA transfer direction
 *
 * IOMMU aware version of arm_dma_map_page()
 */
static dma_addr_t arm_iommu_map_page(struct device *dev, struct page *page,
	     unsigned long offset, size_t size, enum dma_data_direction dir,
	     unsigned long attrs)
{
	if (!is_dma_coherent(dev, attrs, false) &&
	    !(attrs & DMA_ATTR_SKIP_CPU_SYNC))
		__dma_page_cpu_to_dev(page, offset, size, dir);

	return arm_coherent_iommu_map_page(dev, page, offset, size, dir, attrs);
}

/**
 * arm_coherent_iommu_unmap_page
 * @dev: valid struct device pointer
 * @handle: DMA address of buffer
 * @size: size of buffer (same as passed to dma_map_page)
 * @dir: DMA transfer direction (same as passed to dma_map_page)
 *
 * Coherent IOMMU aware version of arm_dma_unmap_page()
 */
static void arm_coherent_iommu_unmap_page(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t iova = handle & PAGE_MASK;
	int offset = handle & ~PAGE_MASK;
	int len = PAGE_ALIGN(size + offset);

	iommu_unmap(mapping->domain, iova, len);
	__free_iova(mapping, iova, len);
}

/**
 * arm_iommu_unmap_page
 * @dev: valid struct device pointer
 * @handle: DMA address of buffer
 * @size: size of buffer (same as passed to dma_map_page)
 * @dir: DMA transfer direction (same as passed to dma_map_page)
 *
 * IOMMU aware version of arm_dma_unmap_page()
 */
static void arm_iommu_unmap_page(struct device *dev, dma_addr_t handle,
		size_t size, enum dma_data_direction dir, unsigned long attrs)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t iova = handle & PAGE_MASK;
	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
	int offset = handle & ~PAGE_MASK;
	int len = PAGE_ALIGN(size + offset);

	if (!iova)
		return;

	if (!(is_dma_coherent(dev, attrs, false) ||
	      (attrs & DMA_ATTR_SKIP_CPU_SYNC)))
		__dma_page_dev_to_cpu(page, offset, size, dir);

	iommu_unmap(mapping->domain, iova, len);
	__free_iova(mapping, iova, len);
}

/**
 * arm_iommu_map_resource - map a device resource for DMA
 * @dev: valid struct device pointer
 * @phys_addr: physical address of resource
 * @size: size of resource to map
 * @dir: DMA transfer direction
 */
static dma_addr_t arm_iommu_map_resource(struct device *dev,
		phys_addr_t phys_addr, size_t size,
		enum dma_data_direction dir, unsigned long attrs)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t dma_addr;
	int ret, prot;
	phys_addr_t addr = phys_addr & PAGE_MASK;
	unsigned int offset = phys_addr & ~PAGE_MASK;
	size_t len = PAGE_ALIGN(size + offset);

	dma_addr = __alloc_iova(mapping, len);
	if (dma_addr == DMA_MAPPING_ERROR)
		return dma_addr;

	prot = __dma_info_to_prot(dir, attrs) | IOMMU_MMIO;

	ret = iommu_map(mapping->domain, dma_addr, addr, len, prot);
	if (ret < 0)
		goto fail;

	return dma_addr + offset;
fail:
	__free_iova(mapping, dma_addr, len);
	return DMA_MAPPING_ERROR;
}

/**
 * arm_iommu_unmap_resource - unmap a device DMA resource
 * @dev: valid struct device pointer
 * @dma_handle: DMA address to resource
 * @size: size of resource to unmap
 * @dir: DMA transfer direction
 */
static void arm_iommu_unmap_resource(struct device *dev, dma_addr_t dma_handle,
		size_t size, enum dma_data_direction dir,
		unsigned long attrs)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t iova = dma_handle & PAGE_MASK;
	unsigned int offset = dma_handle & ~PAGE_MASK;
	size_t len = PAGE_ALIGN(size + offset);

	if (!iova)
		return;

	iommu_unmap(mapping->domain, iova, len);
	__free_iova(mapping, iova, len);
}

static void arm_iommu_sync_single_for_cpu(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t iova = handle & PAGE_MASK;
	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
	unsigned int offset = handle & ~PAGE_MASK;
	bool iova_coherent = iommu_is_iova_coherent(mapping->domain, handle);

	if (!iova_coherent)
		__dma_page_dev_to_cpu(page, offset, size, dir);
}

static void arm_iommu_sync_single_for_device(struct device *dev,
		dma_addr_t handle, size_t size, enum dma_data_direction dir)
{
	struct dma_iommu_mapping *mapping = to_dma_iommu_mapping(dev);
	dma_addr_t iova = handle & PAGE_MASK;
	struct page *page = phys_to_page(iommu_iova_to_phys(mapping->domain, iova));
	unsigned int offset = handle & ~PAGE_MASK;
	bool iova_coherent = iommu_is_iova_coherent(mapping->domain, handle);

	if (!iova_coherent)
		__dma_page_cpu_to_dev(page, offset, size, dir);
}

const struct dma_map_ops iommu_ops = {
	.alloc			= arm_iommu_alloc_attrs,
	.free			= arm_iommu_free_attrs,
	.mmap			= arm_iommu_mmap_attrs,
	.get_sgtable		= arm_iommu_get_sgtable,

	.map_page		= arm_iommu_map_page,
	.unmap_page		= arm_iommu_unmap_page,
	.sync_single_for_cpu	= arm_iommu_sync_single_for_cpu,
	.sync_single_for_device	= arm_iommu_sync_single_for_device,

	.map_sg			= arm_iommu_map_sg,
	.unmap_sg		= arm_iommu_unmap_sg,
	.sync_sg_for_cpu	= arm_iommu_sync_sg_for_cpu,
	.sync_sg_for_device	= arm_iommu_sync_sg_for_device,

	.map_resource		= arm_iommu_map_resource,
	.unmap_resource		= arm_iommu_unmap_resource,

	.dma_supported		= arm_dma_supported,
};

const struct dma_map_ops iommu_coherent_ops = {
	.alloc			= arm_coherent_iommu_alloc_attrs,
	.free			= arm_coherent_iommu_free_attrs,
	.mmap			= arm_coherent_iommu_mmap_attrs,
	.get_sgtable		= arm_iommu_get_sgtable,

	.map_page		= arm_coherent_iommu_map_page,
	.unmap_page		= arm_coherent_iommu_unmap_page,

	.map_sg			= arm_coherent_iommu_map_sg,
	.unmap_sg		= arm_coherent_iommu_unmap_sg,

	.map_resource		= arm_iommu_map_resource,
	.unmap_resource		= arm_iommu_unmap_resource,

	.dma_supported		= arm_dma_supported,
};

/**
 * DEPRECATED
 * arm_iommu_create_mapping
 * @bus: pointer to the bus holding the client device (for IOMMU calls)
 * @base: start address of the valid IO address space
 * @size: maximum size of the valid IO address space
 *
 * Creates a mapping structure which holds information about used/unused
 * IO address ranges, which is required to perform memory allocation and
 * mapping with IOMMU aware functions.
 *
 * The client device needs to be attached to the mapping with the
 * arm_iommu_attach_device function.
 */
struct dma_iommu_mapping *
arm_iommu_create_mapping(struct bus_type *bus, dma_addr_t base, u64 size)
{
	unsigned int bits = size >> PAGE_SHIFT;
	unsigned int bitmap_size = BITS_TO_LONGS(bits) * sizeof(long);
	struct dma_iommu_mapping *mapping;
	int extensions = 1;
	int err = -ENOMEM;

	/* currently only 32-bit DMA address space is supported */
	if (size > DMA_BIT_MASK(32) + 1)
		return ERR_PTR(-ERANGE);

	if (!bitmap_size)
		return ERR_PTR(-EINVAL);

	if (bitmap_size > PAGE_SIZE) {
		extensions = bitmap_size / PAGE_SIZE;
		bitmap_size = PAGE_SIZE;
	}

	mapping = kzalloc(sizeof(struct dma_iommu_mapping), GFP_KERNEL);
	if (!mapping)
		goto err;

	mapping->bitmap_size = bitmap_size;
	mapping->bitmaps = kcalloc(extensions, sizeof(unsigned long *),
				   GFP_KERNEL);
	if (!mapping->bitmaps)
		goto err2;

	mapping->bitmaps[0] = kzalloc(bitmap_size, GFP_KERNEL);
	if (!mapping->bitmaps[0])
		goto err3;

	mapping->nr_bitmaps = 1;
	mapping->extensions = extensions;
	mapping->base = base;
	mapping->bits = BITS_PER_BYTE * bitmap_size;

	spin_lock_init(&mapping->lock);

	mapping->domain = iommu_domain_alloc(bus);
	if (!mapping->domain)
		goto err4;

	kref_init(&mapping->kref);
	return mapping;
err4:
	kfree(mapping->bitmaps[0]);
err3:
	kfree(mapping->bitmaps);
err2:
	kfree(mapping);
err:
	return ERR_PTR(err);
}
EXPORT_SYMBOL_GPL(arm_iommu_create_mapping);

static void release_iommu_mapping(struct kref *kref)
{
	int i;
	struct dma_iommu_mapping *mapping =
		container_of(kref, struct dma_iommu_mapping, kref);

	iommu_domain_free(mapping->domain);
	for (i = 0; i < mapping->nr_bitmaps; i++)
		kfree(mapping->bitmaps[i]);
	kfree(mapping->bitmaps);
	kfree(mapping);
}

static int extend_iommu_mapping(struct dma_iommu_mapping *mapping)
{
	int next_bitmap;

	if (mapping->nr_bitmaps >= mapping->extensions)
		return -EINVAL;

	next_bitmap = mapping->nr_bitmaps;
	mapping->bitmaps[next_bitmap] = kzalloc(mapping->bitmap_size,
						GFP_ATOMIC);
	if (!mapping->bitmaps[next_bitmap])
		return -ENOMEM;

	mapping->nr_bitmaps++;

	return 0;
}

/**
 * DEPRECATED
 */
void arm_iommu_release_mapping(struct dma_iommu_mapping *mapping)
{
	if (mapping)
		kref_put(&mapping->kref, release_iommu_mapping);
}
EXPORT_SYMBOL_GPL(arm_iommu_release_mapping);

static int __arm_iommu_attach_device(struct device *dev,
				     struct dma_iommu_mapping *mapping)
{
	int err;

	err = iommu_attach_device(mapping->domain, dev);
	if (err)
		return err;

	kref_get(&mapping->kref);
	to_dma_iommu_mapping(dev) = mapping;

	pr_debug("Attached IOMMU controller to %s device.\n", dev_name(dev));
	return 0;
}

/**
 * DEPRECATED
 * arm_iommu_attach_device
 * @dev: valid struct device pointer
 * @mapping: io address space mapping structure (returned from
 *	arm_iommu_create_mapping)
 *
 * Attaches the specified io address space mapping to the provided device.
 * This replaces the dma operations (dma_map_ops pointer) with the
 * IOMMU aware version.
 *
 * More than one client might be attached to the same io address space
 * mapping.
 */
int arm_iommu_attach_device(struct device *dev,
			    struct dma_iommu_mapping *mapping)
{
	int err;

	err = __arm_iommu_attach_device(dev, mapping);
	if (err)
		return err;

	set_dma_ops(dev, &iommu_ops);
	return 0;
}
EXPORT_SYMBOL_GPL(arm_iommu_attach_device);

/**
 * DEPRECATED
 * arm_iommu_detach_device
 * @dev: valid struct device pointer
 *
 * Detaches the provided device from a previously attached map.
 * This overwrites the dma_ops pointer with appropriate non-IOMMU ops.
 */
void arm_iommu_detach_device(struct device *dev)
{
	struct dma_iommu_mapping *mapping;

	mapping = to_dma_iommu_mapping(dev);
	if (!mapping) {
		dev_warn(dev, "Not attached\n");
		return;
	}

	iommu_detach_device(mapping->domain, dev);
	kref_put(&mapping->kref, release_iommu_mapping);
	to_dma_iommu_mapping(dev) = NULL;
	set_dma_ops(dev, arm_get_dma_map_ops(dev->archdata.dma_coherent));

	pr_debug("Detached IOMMU controller from %s device.\n", dev_name(dev));
}
EXPORT_SYMBOL_GPL(arm_iommu_detach_device);

/*
static const struct dma_map_ops *arm_get_iommu_dma_map_ops(bool coherent)
{
	return coherent ? &iommu_coherent_ops : &iommu_ops;
}
*/

static void arm_iommu_dma_release_mapping(struct kref *kref)
{
	int i;
	int is_fast = 0;
	int s1_bypass = 0;
	struct dma_iommu_mapping *mapping =
		container_of(kref, struct dma_iommu_mapping, kref);

	if (!mapping)
		return;

	iommu_domain_get_attr(mapping->domain, DOMAIN_ATTR_FAST, &is_fast);
	iommu_domain_get_attr(mapping->domain, DOMAIN_ATTR_S1_BYPASS,
			      &s1_bypass);

	if (is_fast) {
		fast_smmu_put_dma_cookie(mapping->domain);
	} else if (!s1_bypass) {
		for (i = 0; i < mapping->nr_bitmaps; i++)
			kfree(mapping->bitmaps[i]);
		kfree(mapping->bitmaps);
	}

	kfree(mapping);
}

static int
iommu_init_mapping(struct device *dev, struct dma_iommu_mapping *mapping)
{
	unsigned int bitmap_size = BITS_TO_LONGS(mapping->bits) * sizeof(long);
	int extensions = 1;
	int err = -ENOMEM;

	if (!bitmap_size)
		return -EINVAL;

	if (bitmap_size > PAGE_SIZE) {
		extensions = bitmap_size / PAGE_SIZE;
		bitmap_size = PAGE_SIZE;
	}

	mapping->bitmap_size = bitmap_size;
	mapping->bitmaps = kcalloc(extensions, sizeof(unsigned long *),
				   GFP_KERNEL);
	if (!mapping->bitmaps)
		goto err;

	mapping->bitmaps[0] = kzalloc(bitmap_size, GFP_KERNEL);
	if (!mapping->bitmaps[0])
		goto err2;

	mapping->nr_bitmaps = 1;
	mapping->extensions = extensions;
	mapping->bits = BITS_PER_BYTE * bitmap_size;
	mapping->ops = &iommu_ops;

	spin_lock_init(&mapping->lock);
	return 0;
err2:
	kfree(mapping->bitmaps);
err:
	return err;
}

struct dma_iommu_mapping *
arm_iommu_dma_init_mapping(struct device *dev, dma_addr_t base, u64 size,
			   struct iommu_domain *domain)
{
	unsigned int bits = size >> PAGE_SHIFT;
	struct dma_iommu_mapping *mapping;
	int err = 0;
	int is_fast = 0;
	int s1_bypass = 0;

	if (!bits)
		return ERR_PTR(-EINVAL);

	/* currently only 32-bit DMA address space is supported */
	if (size > DMA_BIT_MASK(32) + 1)
		return ERR_PTR(-ERANGE);

	mapping = kzalloc(sizeof(struct dma_iommu_mapping), GFP_KERNEL);
	if (!mapping)
		return ERR_PTR(-ENOMEM);

	mapping->base = base;
	mapping->bits = bits;
	mapping->domain = domain;

	iommu_domain_get_attr(domain, DOMAIN_ATTR_FAST, &is_fast);
	iommu_domain_get_attr(domain, DOMAIN_ATTR_S1_BYPASS, &s1_bypass);

	if (is_fast)
		err = fast_smmu_init_mapping(dev, mapping);
	else if (s1_bypass)
		mapping->ops = arm_get_dma_map_ops(dev->archdata.dma_coherent);
	else
		err = iommu_init_mapping(dev, mapping);

	if (err) {
		kfree(mapping);
		return ERR_PTR(err);
	}

	kref_init(&mapping->kref);
	return mapping;
}

/*
 * Checks for the "qcom,iommu-dma-addr-pool" property.
 * If not present, leaves dma_addr and dma_size unmodified.
 */
static void arm_iommu_get_dma_window(struct device *dev, u64 *dma_addr,
				     u64 *dma_size)
{
	struct device_node *np;
	int naddr, nsize, len;
	const __be32 *ranges;

	if (!dev->of_node)
		return;

	np = of_parse_phandle(dev->of_node, "qcom,iommu-group", 0);
	if (!np)
		np = dev->of_node;

	ranges = of_get_property(np, "qcom,iommu-dma-addr-pool", &len);
	if (!ranges)
		return;

	len /= sizeof(u32);
	naddr = of_n_addr_cells(np);
	nsize = of_n_size_cells(np);
	if (len < naddr + nsize) {
		dev_err(dev, "Invalid length for qcom,iommu-dma-addr-pool, expected %d cells\n",
			naddr + nsize);
		return;
	}
	if (naddr == 0 || nsize == 0) {
		dev_err(dev, "Invalid #address-cells %d or #size-cells %d\n",
			naddr, nsize);
		return;
	}

	*dma_addr = of_read_number(ranges, naddr);
	*dma_size = of_read_number(ranges + naddr, nsize);
}

static bool __maybe_unused
arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
			const struct iommu_ops *iommu)
{
	struct iommu_group *group;
	struct iommu_domain *domain;
	struct dma_iommu_mapping *mapping;

	if (!iommu)
		return false;

	group = dev->iommu_group;
	if (!group)
		return false;

	domain = iommu_get_domain_for_dev(dev);
	if (!domain)
		return false;

	arm_iommu_get_dma_window(dev, &dma_base, &size);

	/* Allow iommu-debug to call arch_setup_dma_ops to reconfigure itself */
	if (domain->type != IOMMU_DOMAIN_DMA &&
	    !of_device_is_compatible(dev->of_node, "iommu-debug-test")) {
		dev_err(dev, "Invalid iommu domain type!\n");
		return false;
	}

	mapping = arm_iommu_dma_init_mapping(dev, dma_base, size, domain);
	if (IS_ERR(mapping)) {
		pr_warn("Failed to initialize %llu-byte IOMMU mapping for device %s\n",
			size, dev_name(dev));
		return false;
	}

	to_dma_iommu_mapping(dev) = mapping;

	return true;
}

static void arm_teardown_iommu_dma_ops(struct device *dev)
{
	struct dma_iommu_mapping *mapping;
	int s1_bypass = 0;

	mapping = to_dma_iommu_mapping(dev);
	if (!mapping)
		return;

	iommu_domain_get_attr(mapping->domain, DOMAIN_ATTR_S1_BYPASS,
			      &s1_bypass);

	kref_put(&mapping->kref, arm_iommu_dma_release_mapping);
	to_dma_iommu_mapping(dev) = NULL;

	/* Let arch_setup_dma_ops() start again from scratch upon re-probe */
	if (!s1_bypass)
		set_dma_ops(dev, NULL);
}

#else

static bool __maybe_unused
arm_setup_iommu_dma_ops(struct device *dev, u64 dma_base, u64 size,
			const struct iommu_ops *iommu)
{
	return false;
}

static void arm_teardown_iommu_dma_ops(struct device *dev) { }

#define arm_get_iommu_dma_map_ops arm_get_dma_map_ops

#endif /* CONFIG_ARM_DMA_USE_IOMMU */

void arch_setup_dma_ops(struct device *dev, u64 dma_base, u64 size,
			const struct iommu_ops *iommu, bool coherent)
{
	const struct dma_map_ops *dma_ops;

	dev->archdata.dma_coherent = coherent;
#if defined(CONFIG_SWIOTLB) || defined(CONFIG_IOMMU_DMA)
	dev->dma_coherent = coherent;
#endif

	/*
	 * Don't override the dma_ops if they have already been set. Ideally
	 * this should be the only location where dma_ops are set; remove this
	 * check once all other callers of set_dma_ops have disappeared.
	 */
	if (dev->dma_ops)
		return;

	if (iommu)
		iommu_setup_dma_ops(dev, dma_base, size);

	if (!dev->dma_ops) {
		dma_ops = arm_get_dma_map_ops(coherent);
		set_dma_ops(dev, dma_ops);
	}

#ifdef CONFIG_XEN
	if (xen_initial_domain())
		dev->dma_ops = &xen_swiotlb_dma_ops;
#endif
	dev->archdata.dma_ops_setup = true;
}
EXPORT_SYMBOL_GPL(arch_setup_dma_ops);

void arch_teardown_dma_ops(struct device *dev)
{
	if (!dev->archdata.dma_ops_setup)
		return;

	arm_teardown_iommu_dma_ops(dev);
}

#if defined(CONFIG_SWIOTLB) || defined(CONFIG_IOMMU_DMA)
void arch_sync_dma_for_device(phys_addr_t paddr, size_t size,
			      enum dma_data_direction dir)
{
	__dma_page_cpu_to_dev(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
			      size, dir);
}

void arch_sync_dma_for_cpu(phys_addr_t paddr, size_t size,
			   enum dma_data_direction dir)
{
	__dma_page_dev_to_cpu(phys_to_page(paddr), paddr & (PAGE_SIZE - 1),
			      size, dir);
}
#endif

#ifdef CONFIG_SWIOTLB
long arch_dma_coherent_to_pfn(struct device *dev, void *cpu_addr,
			      dma_addr_t dma_addr)
{
	return dma_to_pfn(dev, dma_addr);
}

void *arch_dma_alloc(struct device *dev, size_t size, dma_addr_t *dma_handle,
		     gfp_t gfp, unsigned long attrs)
{
	return __dma_alloc(dev, size, dma_handle, gfp,
			   __get_dma_pgprot(attrs, PAGE_KERNEL, false), false,
			   attrs, __builtin_return_address(0));
}

void arch_dma_free(struct device *dev, size_t size, void *cpu_addr,
		   dma_addr_t dma_handle, unsigned long attrs)
{
	__arm_dma_free(dev, size, cpu_addr, dma_handle, attrs, false);
}
#endif /* CONFIG_SWIOTLB */

#ifdef CONFIG_IOMMU_DMA
void arch_dma_prep_coherent(struct page *page, size_t size)
{
	__dma_page_cpu_to_dev(page, 0, size, DMA_BIDIRECTIONAL);
}
#endif /* CONFIG_IOMMU_DMA */