This is the 5.4.106 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAmBSKIcACgkQONu9yGCS
aT6nww//RYwO4quTQO9h/SnVtYta3C0bkgSjLCuLjM6LY20L5sHiPxMXKn3LTb67
SSFtW7vyR4gOmIduQ783yoDxzSGuKZvQ48zh5OZYXD4GlhP9JZ5y4IkEf5r0SGIA
k4pYYX8rPLNaeOu8TprjdGdaDFC4XplFfZEN19sympvv2q20qD+JzvcjjhyCFmvk
4A9NibAStU4jUK8AvY4STJb9XmaYo337Btv3Y2j+qUBVj6fMsNCfUif1SdGHA4de
TPzaPVOIm5p4USOy/m+hsc0e/q+nzz+VYYk+T7X9NDU+kAiEOjdyMqwNOtfAUl9A
k7aca4oQMjO+MNVGrvER7xF0Se+wlTomTINzLYf0YTfkCMh9+Me+pFr8Fivdvhv9
/mBFOJ0qqYXpezUETh7F5tgzMUHkzEcOiOpEG/sINxnsZXJaa09VJrS2GYIjILFN
Epe83Z4ekbZtIzfUY+RWYVEP44fvV1lmLqKIs7z4xoz/IgF2NR++ABwyScCY1E2X
GstK4fJ7wHA/usbmQofyfLMEF9hvawOu/GwWP2IVQRbK3E5Miux+tTkLXvVhqlr+
CrLXHb8OZSb4+bzZb3fFLg/B6mR+MiNKXYp2WW1/7pqhTfJHHg8P7Ui72nAcM5Jw
+W0Gezv/DtPqbhK6rGGTUxOTYOvWqJEuh6QAI4mDx1kIeevw13o=
=MKFy
-----END PGP SIGNATURE-----

Merge 5.4.106 into android11-5.4-lts

Changes in 5.4.106
    uapi: nfnetlink_cthelper.h: fix userspace compilation error
    powerpc/pseries: Don't enforce MSI affinity with kdump
    ethernet: alx: fix order of calls on resume
    ath9k: fix transmitting to stations in dynamic SMPS mode
    net: Fix gro aggregation for udp encaps with zero csum
    net: check if protocol extracted by virtio_net_hdr_set_proto is correct
    net: avoid infinite loop in mpls_gso_segment when mpls_hlen == 0
    sh_eth: fix TRSCER mask for SH771x
    can: skb: can_skb_set_owner(): fix ref counting if socket was closed before setting skb ownership
    can: flexcan: assert FRZ bit in flexcan_chip_freeze()
    can: flexcan: enable RX FIFO after FRZ/HALT valid
    can: flexcan: invoke flexcan_chip_freeze() to enter freeze mode
    can: tcan4x5x: tcan4x5x_init(): fix initialization - clear MRAM before entering Normal Mode
    tcp: add sanity tests to TCP_QUEUE_SEQ
    netfilter: nf_nat: undo erroneous tcp edemux lookup
    netfilter: x_tables: gpf inside xt_find_revision()
    selftests/bpf: No need to drop the packet when there is no geneve opt
    selftests/bpf: Mask bpf_csum_diff() return value to 16 bits in test_verifier
    samples, bpf: Add missing munmap in xdpsock
    ibmvnic: always store valid MAC address
    mt76: dma: do not report truncated frames to mac80211
    powerpc/603: Fix protection of user pages mapped with PROT_NONE
    mount: fix mounting of detached mounts onto targets that reside on shared mounts
    cifs: return proper error code in statfs(2)
    Revert "mm, slub: consider rest of partial list if acquire_slab() fails"
    net: enetc: don't overwrite the RSS indirection table when initializing
    net/mlx4_en: update moderation when config reset
    net: stmmac: fix incorrect DMA channel intr enable setting of EQoS v4.10
    nexthop: Do not flush blackhole nexthops when loopback goes down
    net: sched: avoid duplicates in classes dump
    net: usb: qmi_wwan: allow qmimux add/del with master up
    netdevsim: init u64 stats for 32bit hardware
    cipso,calipso: resolve a number of problems with the DOI refcounts
    net: lapbether: Remove netif_start_queue / netif_stop_queue
    net: davicom: Fix regulator not turned off on failed probe
    net: davicom: Fix regulator not turned off on driver removal
    net: qrtr: fix error return code of qrtr_sendmsg()
    ixgbe: fail to create xfrm offload of IPsec tunnel mode SA
    net: stmmac: stop each tx channel independently
    net: stmmac: fix watchdog timeout during suspend/resume stress test
    selftests: forwarding: Fix race condition in mirror installation
    perf traceevent: Ensure read cmdlines are null terminated.
    net: hns3: fix query vlan mask value error for flow director
    net: hns3: fix bug when calculating the TCAM table info
    s390/cio: return -EFAULT if copy_to_user() fails again
    bnxt_en: reliably allocate IRQ table on reset to avoid crash
    drm/compat: Clear bounce structures
    drm/shmem-helper: Check for purged buffers in fault handler
    drm/shmem-helper: Don't remove the offset in vm_area_struct pgoff
    drm: meson_drv add shutdown function
    s390/cio: return -EFAULT if copy_to_user() fails
    s390/crypto: return -EFAULT if copy_to_user() fails
    qxl: Fix uninitialised struct field head.surface_id
    sh_eth: fix TRSCER mask for R7S9210
    media: usbtv: Fix deadlock on suspend
    media: v4l: vsp1: Fix uif null pointer access
    media: v4l: vsp1: Fix bru null pointer access
    media: rc: compile rc-cec.c into rc-core
    net: hns3: fix error mask definition of flow director
    net: enetc: initialize RFS/RSS memories for unused ports too
    net: phy: fix save wrong speed and duplex problem if autoneg is on
    i2c: rcar: faster irq code to minimize HW race condition
    i2c: rcar: optimize cacheline to minimize HW race condition
    udf: fix silent AED tagLocation corruption
    mmc: mxs-mmc: Fix a resource leak in an error handling path in 'mxs_mmc_probe()'
    mmc: mediatek: fix race condition between msdc_request_timeout and irq
    Platform: OLPC: Fix probe error handling
    powerpc/pci: Add ppc_md.discover_phbs()
    spi: stm32: make spurious and overrun interrupts visible
    powerpc: improve handling of unrecoverable system reset
    powerpc/perf: Record counter overflow always if SAMPLE_IP is unset
    HID: logitech-dj: add support for the new lightspeed connection iteration
    powerpc/64: Fix stack trace not displaying final frame
    iommu/amd: Fix performance counter initialization
    sparc32: Limit memblock allocation to low memory
    sparc64: Use arch_validate_flags() to validate ADI flag
    Input: applespi - don't wait for responses to commands indefinitely.
    PCI: xgene-msi: Fix race in installing chained irq handler
    PCI: mediatek: Add missing of_node_put() to fix reference leak
    kbuild: clamp SUBLEVEL to 255
    PCI: Fix pci_register_io_range() memory leak
    i40e: Fix memory leak in i40e_probe
    s390/smp: __smp_rescan_cpus() - move cpumask away from stack
    sysctl.c: fix underflow value setting risk in vm_table
    scsi: libiscsi: Fix iscsi_prep_scsi_cmd_pdu() error handling
    scsi: target: core: Add cmd length set before cmd complete
    scsi: target: core: Prevent underflow for service actions
    ALSA: usb: Add Plantronics C320-M USB ctrl msg delay quirk
    ALSA: hda/hdmi: Cancel pending works before suspend
    ALSA: hda/ca0132: Add Sound BlasterX AE-5 Plus support
    ALSA: hda: Drop the BATCH workaround for AMD controllers
    ALSA: hda: Flush pending unsolicited events before suspend
    ALSA: hda: Avoid spurious unsol event handling during S3/S4
    ALSA: usb-audio: Fix "cannot get freq eq" errors on Dell AE515 sound bar
    ALSA: usb-audio: Apply the control quirk to Plantronics headsets
    Revert 95ebabde382c ("capabilities: Don't allow writing ambiguous v3 file capabilities")
    arm64: kasan: fix page_alloc tagging with DEBUG_VIRTUAL
    s390/dasd: fix hanging DASD driver unbind
    s390/dasd: fix hanging IO request during DASD driver unbind
    software node: Fix node registration
    mmc: core: Fix partition switch time for eMMC
    mmc: cqhci: Fix random crash when remove mmc module/card
    Goodix Fingerprint device is not a modem
    USB: gadget: u_ether: Fix a configfs return code
    usb: gadget: f_uac2: always increase endpoint max_packet_size by one audio slot
    usb: gadget: f_uac1: stop playback on function disable
    usb: dwc3: qcom: Add missing DWC3 OF node refcount decrement
    usb: dwc3: qcom: Honor wakeup enabled/disabled state
    USB: usblp: fix a hang in poll() if disconnected
    usb: renesas_usbhs: Clear PIPECFG for re-enabling pipe with other EPNUM
    usb: xhci: do not perform Soft Retry for some xHCI hosts
    xhci: Improve detection of device initiated wake signal.
    usb: xhci: Fix ASMedia ASM1042A and ASM3242 DMA addressing
    xhci: Fix repeated xhci wake after suspend due to uncleared internal wake state
    USB: serial: io_edgeport: fix memory leak in edge_startup
    USB: serial: ch341: add new Product ID
    USB: serial: cp210x: add ID for Acuity Brands nLight Air Adapter
    USB: serial: cp210x: add some more GE USB IDs
    usbip: fix stub_dev to check for stream socket
    usbip: fix vhci_hcd to check for stream socket
    usbip: fix vudc to check for stream socket
    usbip: fix stub_dev usbip_sockfd_store() races leading to gpf
    usbip: fix vhci_hcd attach_store() races leading to gpf
    usbip: fix vudc usbip_sockfd_store races leading to gpf
    misc/pvpanic: Export module FDT device table
    misc: fastrpc: restrict user apps from sending kernel RPC messages
    staging: rtl8192u: fix ->ssid overflow in r8192_wx_set_scan()
    staging: rtl8188eu: prevent ->ssid overflow in rtw_wx_set_scan()
    staging: rtl8712: unterminated string leads to read overflow
    staging: rtl8188eu: fix potential memory corruption in rtw_check_beacon_data()
    staging: ks7010: prevent buffer overflow in ks_wlan_set_scan()
    staging: rtl8712: Fix possible buffer overflow in r8712_sitesurvey_cmd
    staging: rtl8192e: Fix possible buffer overflow in _rtl92e_wx_set_scan
    staging: comedi: addi_apci_1032: Fix endian problem for COS sample
    staging: comedi: addi_apci_1500: Fix endian problem for command sample
    staging: comedi: adv_pci1710: Fix endian problem for AI command data
    staging: comedi: das6402: Fix endian problem for AI command data
    staging: comedi: das800: Fix endian problem for AI command data
    staging: comedi: dmm32at: Fix endian problem for AI command data
    staging: comedi: me4000: Fix endian problem for AI command data
    staging: comedi: pcl711: Fix endian problem for AI command data
    staging: comedi: pcl818: Fix endian problem for AI command data
    sh_eth: fix TRSCER mask for R7S72100
    arm64/mm: Fix pfn_valid() for ZONE_DEVICE based memory
    SUNRPC: Set memalloc_nofs_save() for sync tasks
    NFS: Don't revalidate the directory permissions on a lookup failure
    NFS: Don't gratuitously clear the inode cache when lookup failed
    NFSv4.2: fix return value of _nfs4_get_security_label()
    block: rsxx: fix error return code of rsxx_pci_probe()
    configfs: fix a use-after-free in __configfs_open_file
    arm64: mm: use a 48-bit ID map when possible on 52-bit VA builds
    hrtimer: Update softirq_expires_next correctly after __hrtimer_get_next_event()
    stop_machine: mark helpers __always_inline
    include/linux/sched/mm.h: use rcu_dereference in in_vfork()
    zram: fix return value on writeback_store
    sched/membarrier: fix missing local execution of ipi_sync_rq_state()
    powerpc/64s: Fix instruction encoding for lis in ppc_function_entry()
    binfmt_misc: fix possible deadlock in bm_register_write
    x86/unwind/orc: Disable KASAN checking in the ORC unwinder, part 2
    KVM: arm64: Fix exclusive limit for IPA size
    nvme: unlink head after removing last namespace
    nvme: release namespace head reference on error
    KVM: arm64: Ensure I-cache isolation between vcpus of a same VM
    KVM: arm64: Reject VM creation when the default IPA size is unsupported
    xen/events: reset affinity of 2-level event when tearing it down
    xen/events: don't unmask an event channel when an eoi is pending
    xen/events: avoid handling the same event on two cpus at the same time
    Linux 5.4.106

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: I14a7c69a857d6b64e7cf72003120c99610279bae
commit 25491b4ff3
@@ -172,6 +172,9 @@ is dependent on the CPU capability and the kernel configuration. The limit can
 be retrieved using KVM_CAP_ARM_VM_IPA_SIZE of the KVM_CHECK_EXTENSION
 ioctl() at run-time.
 
+Creation of the VM will fail if the requested IPA size (whether it is
+implicit or explicit) is unsupported on the host.
+
 Please note that configuring the IPA size does not affect the capability
 exposed by the guest CPUs in ID_AA64MMFR0_EL1[PARange]. It only affects
 size of the address translated by the stage2 level (guest physical to
 Makefile | 14
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 105
+SUBLEVEL = 106
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
@@ -1257,9 +1257,15 @@ define filechk_utsrelease.h
 endef
 
 define filechk_version.h
-        echo \#define LINUX_VERSION_CODE $(shell \
-        expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 0$(SUBLEVEL)); \
-        echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + (c))'
+        if [ $(SUBLEVEL) -gt 255 ]; then \
+                echo \#define LINUX_VERSION_CODE $(shell \
+                expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + 255); \
+        else \
+                echo \#define LINUX_VERSION_CODE $(shell \
+                expr $(VERSION) \* 65536 + 0$(PATCHLEVEL) \* 256 + $(SUBLEVEL)); \
+        fi; \
+        echo '#define KERNEL_VERSION(a,b,c) (((a) << 16) + ((b) << 8) + \
+        ((c) > 255 ? 255 : (c)))'
 endef
 
 $(version_h): FORCE
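The Makefile hunk above ("kbuild: clamp SUBLEVEL to 255") exists because KERNEL_VERSION packs each component into 8-bit fields, so a SUBLEVEL past 255 would overflow into the PATCHLEVEL byte. A minimal Python sketch of the clamped computation (function name is ours, not the kernel's):

```python
def linux_version_code(version: int, patchlevel: int, sublevel: int) -> int:
    # Mirrors the patched filechk_version.h: the sublevel byte is clamped
    # to 255 so long-running stable series cannot overflow into the
    # patchlevel byte of LINUX_VERSION_CODE.
    return (version << 16) + (patchlevel << 8) + min(sublevel, 255)

print(hex(linux_version_code(5, 4, 106)))   # 0x5046a
print(hex(linux_version_code(4, 9, 300)))   # clamped to 4.9.255 -> 0x409ff
```

Note the clamp makes every sublevel above 255 indistinguishable from 255, which is the intended trade-off.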
@@ -56,7 +56,7 @@ extern char __kvm_hyp_init_end[];
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
+extern void __kvm_flush_cpu_context(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
@@ -45,7 +45,7 @@ void __hyp_text __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa)
        __kvm_tlb_flush_vmid(kvm);
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+void __hyp_text __kvm_flush_cpu_context(struct kvm_vcpu *vcpu)
 {
        struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
 
@@ -54,6 +54,7 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
        isb();
 
        write_sysreg(0, TLBIALL);
+       write_sysreg(0, ICIALLU);
        dsb(nsh);
        isb();
 
@@ -60,7 +60,7 @@ extern char __kvm_hyp_vector[];
 extern void __kvm_flush_vm_context(void);
 extern void __kvm_tlb_flush_vmid_ipa(struct kvm *kvm, phys_addr_t ipa);
 extern void __kvm_tlb_flush_vmid(struct kvm *kvm);
-extern void __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu);
+extern void __kvm_flush_cpu_context(struct kvm_vcpu *vcpu);
 
 extern void __kvm_timer_set_cntvoff(u32 cntvoff_low, u32 cntvoff_high);
 
@@ -334,6 +334,11 @@ static inline void *phys_to_virt(phys_addr_t x)
 #define ARCH_PFN_OFFSET                ((unsigned long)PHYS_PFN_OFFSET)
 
 #if !defined(CONFIG_SPARSEMEM_VMEMMAP) || defined(CONFIG_DEBUG_VIRTUAL)
+#define page_to_virt(x)        ({                                              \
+       __typeof__(x) __page = x;                                       \
+       void *__addr = __va(page_to_phys(__page));                      \
+       (void *)__tag_set((const void *)__addr, page_kasan_tag(__page));\
+})
 #define virt_to_page(x)        pfn_to_page(virt_to_pfn(x))
 #else
 #define page_to_virt(x)        ({                                              \
@@ -63,10 +63,7 @@ extern u64 idmap_ptrs_per_pgd;
 
 static inline bool __cpu_uses_extended_idmap(void)
 {
-       if (IS_ENABLED(CONFIG_ARM64_VA_BITS_52))
-               return false;
-
-       return unlikely(idmap_t0sz != TCR_T0SZ(VA_BITS));
+       return unlikely(idmap_t0sz != TCR_T0SZ(vabits_actual));
 }
 
 /*
@@ -338,7 +338,7 @@ __create_page_tables:
         */
        adrp    x5, __idmap_text_end
        clz     x5, x5
-       cmp     x5, TCR_T0SZ(VA_BITS)   // default T0SZ small enough?
+       cmp     x5, TCR_T0SZ(VA_BITS_MIN)       // default T0SZ small enough?
        b.ge    1f                      // .. then skip VA range extension
 
        adr_l   x6, idmap_t0sz
@@ -182,7 +182,7 @@ void __hyp_text __kvm_tlb_flush_vmid(struct kvm *kvm)
        __tlb_switch_to_host(kvm, &cxt);
 }
 
-void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
+void __hyp_text __kvm_flush_cpu_context(struct kvm_vcpu *vcpu)
 {
        struct kvm *kvm = kern_hyp_va(kern_hyp_va(vcpu)->kvm);
        struct tlb_inv_context cxt;
@@ -191,6 +191,7 @@ void __hyp_text __kvm_tlb_flush_local_vmid(struct kvm_vcpu *vcpu)
        __tlb_switch_to_guest(kvm, &cxt);
 
        __tlbi(vmalle1);
+       asm volatile("ic iallu");
        dsb(nsh);
        isb();
 
@@ -378,10 +378,10 @@ void kvm_set_ipa_limit(void)
                pr_info("kvm: Limiting the IPA size due to kernel %s Address limit\n",
                        (va_max < pa_max) ? "Virtual" : "Physical");
 
-       WARN(ipa_max < KVM_PHYS_SHIFT,
-            "KVM IPA limit (%d bit) is smaller than default size\n", ipa_max);
        kvm_ipa_limit = ipa_max;
-       kvm_info("IPA Size Limit: %dbits\n", kvm_ipa_limit);
+       kvm_info("IPA Size Limit: %d bits%s\n", kvm_ipa_limit,
+                ((kvm_ipa_limit < KVM_PHYS_SHIFT) ?
+                 " (Reduced IPA size, limited VM/VMM compatibility)" : ""));
 }
 
 /*
@@ -408,6 +408,11 @@ int kvm_arm_setup_stage2(struct kvm *kvm, unsigned long type)
                        return -EINVAL;
        } else {
                phys_shift = KVM_PHYS_SHIFT;
+               if (phys_shift > kvm_ipa_limit) {
+                       pr_warn_once("%s using unsupported default IPA limit, upgrade your VMM\n",
+                                    current->comm);
+                       return -EINVAL;
+               }
        }
 
        parange = read_sanitised_ftr_reg(SYS_ID_AA64MMFR0_EL1) & 7;
@@ -251,6 +251,18 @@ int pfn_valid(unsigned long pfn)
 
        if (!valid_section(__nr_to_section(pfn_to_section_nr(pfn))))
                return 0;
+
+       /*
+        * ZONE_DEVICE memory does not have the memblock entries.
+        * memblock_is_map_memory() check for ZONE_DEVICE based
+        * addresses will always fail. Even the normal hotplugged
+        * memory will never have MEMBLOCK_NOMAP flag set in their
+        * memblock entries. Skip memblock search for all non early
+        * memory sections covering all of hotplug memory including
+        * both normal and ZONE_DEVICE based.
+        */
+       if (!early_section(__pfn_to_section(pfn)))
+               return pfn_section_valid(__pfn_to_section(pfn), pfn);
 #endif
        return memblock_is_map_memory(addr);
 }
@@ -38,7 +38,7 @@
 #define NO_BLOCK_MAPPINGS      BIT(0)
 #define NO_CONT_MAPPINGS       BIT(1)
 
-u64 idmap_t0sz = TCR_T0SZ(VA_BITS);
+u64 idmap_t0sz = TCR_T0SZ(VA_BITS_MIN);
 u64 idmap_ptrs_per_pgd = PTRS_PER_PGD;
 
 u64 __section(".mmuoff.data.write") vabits_actual;
@@ -72,7 +72,7 @@ void __patch_exception(int exc, unsigned long addr);
 #endif
 
 #define OP_RT_RA_MASK  0xffff0000UL
-#define LIS_R2         0x3c020000UL
+#define LIS_R2         0x3c400000UL
 #define ADDIS_R2_R12   0x3c4c0000UL
 #define ADDI_R2_R2     0x38420000UL
 
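The LIS_R2 hunk above ("powerpc/64s: Fix instruction encoding for lis in ppc_function_entry()") is easy to verify by hand: `lis rD, imm` is the mnemonic for `addis rD, r0, imm`, whose encoding puts primary opcode 15 in the top six bits, RT in bits 6-10 and RA in bits 11-15. The old constant 0x3c020000 had the register in the RA field instead of RT. A small Python sketch (helper name is ours):

```python
def ppc_addis(rt: int, ra: int, imm: int = 0) -> int:
    # addis rt, ra, imm: primary opcode 15 (bits 0-5), RT (bits 6-10),
    # RA (bits 11-15), 16-bit signed immediate (bits 16-31).
    return (15 << 26) | (rt << 21) | (ra << 16) | (imm & 0xFFFF)

# "lis r2, 0" == "addis r2, r0, 0" -> the corrected LIS_R2 constant
print(hex(ppc_addis(2, 0)))   # 0x3c400000
# The old (wrong) constant encoded addis r0, r2, 0 - fields swapped
print(hex(ppc_addis(0, 2)))   # 0x3c020000
```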
@@ -59,6 +59,9 @@ struct machdep_calls {
        int             (*pcibios_root_bridge_prepare)(struct pci_host_bridge
                                *bridge);
 
+       /* finds all the pci_controllers present at boot */
+       void            (*discover_phbs)(void);
+
        /* To setup PHBs when using automatic OF platform driver for PCI */
        int             (*pci_setup_phb)(struct pci_controller *host);
 
@@ -62,6 +62,9 @@ struct pt_regs
 };
 #endif
 
+
+#define STACK_FRAME_WITH_PT_REGS (STACK_FRAME_OVERHEAD + sizeof(struct pt_regs))
+
 #ifdef __powerpc64__
 
 /*
@@ -285,7 +285,7 @@ int main(void)
 
        /* Interrupt register frame */
        DEFINE(INT_FRAME_SIZE, STACK_INT_FRAME_SIZE);
-       DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_OVERHEAD + sizeof(struct pt_regs));
+       DEFINE(SWITCH_FRAME_SIZE, STACK_FRAME_WITH_PT_REGS);
        STACK_PT_REGS_OFFSET(GPR0, gpr[0]);
        STACK_PT_REGS_OFFSET(GPR1, gpr[1]);
        STACK_PT_REGS_OFFSET(GPR2, gpr[2]);
@@ -418,10 +418,11 @@ InstructionTLBMiss:
        cmplw   0,r1,r3
 #endif
        mfspr   r2, SPRN_SPRG_PGDIR
-       li      r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
+       li      r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC | _PAGE_USER
 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
        bge-    112f
        lis     r2, (swapper_pg_dir - PAGE_OFFSET)@ha   /* if kernel address, use */
+       li      r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
        addi    r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l        /* kernel page table */
 #endif
 112:   rlwimi  r2,r3,12,20,29          /* insert top 10 bits of address */
@@ -480,9 +481,10 @@ DataLoadTLBMiss:
        lis     r1,PAGE_OFFSET@h                /* check if kernel address */
        cmplw   0,r1,r3
        mfspr   r2, SPRN_SPRG_PGDIR
-       li      r1, _PAGE_PRESENT | _PAGE_ACCESSED
+       li      r1, _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
        bge-    112f
        lis     r2, (swapper_pg_dir - PAGE_OFFSET)@ha   /* if kernel address, use */
+       li      r1, _PAGE_PRESENT | _PAGE_ACCESSED
        addi    r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l        /* kernel page table */
 112:   rlwimi  r2,r3,12,20,29          /* insert top 10 bits of address */
        lwz     r2,0(r2)                /* get pmd entry */
@@ -556,9 +558,10 @@ DataStoreTLBMiss:
        lis     r1,PAGE_OFFSET@h                /* check if kernel address */
        cmplw   0,r1,r3
        mfspr   r2, SPRN_SPRG_PGDIR
-       li      r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
+       li      r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_USER
        bge-    112f
        lis     r2, (swapper_pg_dir - PAGE_OFFSET)@ha   /* if kernel address, use */
+       li      r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
        addi    r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l        /* kernel page table */
 112:   rlwimi  r2,r3,12,20,29          /* insert top 10 bits of address */
        lwz     r2,0(r2)                /* get pmd entry */
@@ -1669,3 +1669,13 @@ static void fixup_hide_host_resource_fsl(struct pci_dev *dev)
 }
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_MOTOROLA, PCI_ANY_ID, fixup_hide_host_resource_fsl);
 DECLARE_PCI_FIXUP_HEADER(PCI_VENDOR_ID_FREESCALE, PCI_ANY_ID, fixup_hide_host_resource_fsl);
+
+
+static int __init discover_phbs(void)
+{
+       if (ppc_md.discover_phbs)
+               ppc_md.discover_phbs();
+
+       return 0;
+}
+core_initcall(discover_phbs);
@@ -2081,7 +2081,7 @@ void show_stack(struct task_struct *tsk, unsigned long *stack)
                 * See if this is an exception frame.
                 * We look for the "regshere" marker in the current frame.
                 */
-               if (validate_sp(sp, tsk, STACK_INT_FRAME_SIZE)
+               if (validate_sp(sp, tsk, STACK_FRAME_WITH_PT_REGS)
                    && stack[STACK_FRAME_MARKER] == STACK_FRAME_REGS_MARKER) {
                        struct pt_regs *regs = (struct pt_regs *)
                                (sp + STACK_FRAME_OVERHEAD);
@@ -513,8 +513,11 @@ out:
                die("Unrecoverable nested System Reset", regs, SIGABRT);
 #endif
        /* Must die if the interrupt is not recoverable */
-       if (!(regs->msr & MSR_RI))
+       if (!(regs->msr & MSR_RI)) {
+               /* For the reason explained in die_mce, nmi_exit before die */
+               nmi_exit();
                die("Unrecoverable System Reset", regs, SIGABRT);
+       }
 
        if (saved_hsrrs) {
                mtspr(SPRN_HSRR0, hsrr0);
@@ -2075,7 +2075,17 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
                        left += period;
                        if (left <= 0)
                                left = period;
-                       record = siar_valid(regs);
+
+                       /*
+                        * If address is not requested in the sample via
+                        * PERF_SAMPLE_IP, just record that sample irrespective
+                        * of SIAR valid check.
+                        */
+                       if (event->attr.sample_type & PERF_SAMPLE_IP)
+                               record = siar_valid(regs);
+                       else
+                               record = 1;
+
                        event->hw.last_period = event->hw.sample_period;
                }
                if (left < 0x80000000LL)
@@ -2093,9 +2103,10 @@ static void record_and_restart(struct perf_event *event, unsigned long val,
         * MMCR2. Check attr.exclude_kernel and address to drop the sample in
         * these cases.
         */
-       if (event->attr.exclude_kernel && record)
-               if (is_kernel_addr(mfspr(SPRN_SIAR)))
-                       record = 0;
+       if (event->attr.exclude_kernel &&
+           (event->attr.sample_type & PERF_SAMPLE_IP) &&
+           is_kernel_addr(mfspr(SPRN_SIAR)))
+               record = 0;
 
        /*
         * Finally record data if requested.
@@ -4,6 +4,7 @@
  * Copyright 2006-2007 Michael Ellerman, IBM Corp.
  */
 
+#include <linux/crash_dump.h>
 #include <linux/device.h>
 #include <linux/irq.h>
 #include <linux/msi.h>
@@ -458,8 +459,28 @@ again:
                        return hwirq;
                }
 
-               virq = irq_create_mapping_affinity(NULL, hwirq,
-                                                  entry->affinity);
+               /*
+                * Depending on the number of online CPUs in the original
+                * kernel, it is likely for CPU #0 to be offline in a kdump
+                * kernel. The associated IRQs in the affinity mappings
+                * provided by irq_create_affinity_masks() are thus not
+                * started by irq_startup(), as per-design for managed IRQs.
+                * This can be a problem with multi-queue block devices driven
+                * by blk-mq : such a non-started IRQ is very likely paired
+                * with the single queue enforced by blk-mq during kdump (see
+                * blk_mq_alloc_tag_set()). This causes the device to remain
+                * silent and likely hangs the guest at some point.
+                *
+                * We don't really care for fine-grained affinity when doing
+                * kdump actually : simply ignore the pre-computed affinity
+                * masks in this case and let the default mask with all CPUs
+                * be used when creating the IRQ mappings.
+                */
+               if (is_kdump_kernel())
+                       virq = irq_create_mapping(NULL, hwirq);
+               else
+                       virq = irq_create_mapping_affinity(NULL, hwirq,
+                                                          entry->affinity);
 
                if (!virq) {
                        pr_debug("rtas_msi: Failed mapping hwirq %d\n", hwirq);
@@ -765,7 +765,7 @@ static int smp_add_core(struct sclp_core_entry *core, cpumask_t *avail,
 static int __smp_rescan_cpus(struct sclp_core_info *info, bool early)
 {
        struct sclp_core_entry *core;
-       cpumask_t avail;
+       static cpumask_t avail;
        bool configured;
        u16 core_id;
        int nr, i;
@@ -57,36 +57,40 @@ static inline int sparc_validate_prot(unsigned long prot, unsigned long addr)
 {
        if (prot & ~(PROT_READ | PROT_WRITE | PROT_EXEC | PROT_SEM | PROT_ADI))
                return 0;
        if (prot & PROT_ADI) {
                if (!adi_capable())
                        return 0;
-
-               if (addr) {
-                       struct vm_area_struct *vma;
-
-                       vma = find_vma(current->mm, addr);
-                       if (vma) {
-                               /* ADI can not be enabled on PFN
-                                * mapped pages
-                                */
-                               if (vma->vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
-                                       return 0;
-
-                               /* Mergeable pages can become unmergeable
-                                * if ADI is enabled on them even if they
-                                * have identical data on them. This can be
-                                * because ADI enabled pages with identical
-                                * data may still not have identical ADI
-                                * tags on them. Disallow ADI on mergeable
-                                * pages.
-                                */
-                               if (vma->vm_flags & VM_MERGEABLE)
-                                       return 0;
-                       }
-               }
        }
        return 1;
 }
+
+#define arch_validate_flags(vm_flags) arch_validate_flags(vm_flags)
+/* arch_validate_flags() - Ensure combination of flags is valid for a
+ *     VMA.
+ */
+static inline bool arch_validate_flags(unsigned long vm_flags)
+{
+       /* If ADI is being enabled on this VMA, check for ADI
+        * capability on the platform and ensure VMA is suitable
+        * for ADI
+        */
+       if (vm_flags & VM_SPARC_ADI) {
+               if (!adi_capable())
+                       return false;
+
+               /* ADI can not be enabled on PFN mapped pages */
+               if (vm_flags & (VM_PFNMAP | VM_MIXEDMAP))
+                       return false;
+
+               /* Mergeable pages can become unmergeable
+                * if ADI is enabled on them even if they
+                * have identical data on them. This can be
+                * because ADI enabled pages with identical
+                * data may still not have identical ADI
+                * tags on them. Disallow ADI on mergeable
+                * pages.
+                */
+               if (vm_flags & VM_MERGEABLE)
+                       return false;
+       }
+       return true;
+}
 #endif /* CONFIG_SPARC64 */
 
 #endif /* __ASSEMBLY__ */
@@ -197,6 +197,9 @@ unsigned long __init bootmem_init(unsigned long *pages_avail)
        size = memblock_phys_mem_size() - memblock_reserved_size();
        *pages_avail = (size >> PAGE_SHIFT) - high_pages;
 
+       /* Only allow low memory to be allocated via memblock allocation */
+       memblock_set_current_limit(max_low_pfn << PAGE_SHIFT);
+
        return max_pfn;
 }
 
@@ -357,8 +357,8 @@ static bool deref_stack_regs(struct unwind_state *state, unsigned long addr,
        if (!stack_access_ok(state, addr, sizeof(struct pt_regs)))
                return false;
 
-       *ip = regs->ip;
-       *sp = regs->sp;
+       *ip = READ_ONCE_NOCHECK(regs->ip);
+       *sp = READ_ONCE_NOCHECK(regs->sp);
        return true;
 }
 
@@ -370,8 +370,8 @@ static bool deref_stack_iret_regs(struct unwind_state *state, unsigned long addr
        if (!stack_access_ok(state, addr, IRET_FRAME_SIZE))
                return false;
 
-       *ip = regs->ip;
-       *sp = regs->sp;
+       *ip = READ_ONCE_NOCHECK(regs->ip);
+       *sp = READ_ONCE_NOCHECK(regs->sp);
        return true;
 }
 
@@ -392,12 +392,12 @@ static bool get_reg(struct unwind_state *state, unsigned int reg_off,
                return false;
 
        if (state->full_regs) {
-               *val = ((unsigned long *)state->regs)[reg];
+               *val = READ_ONCE_NOCHECK(((unsigned long *)state->regs)[reg]);
                return true;
        }
 
        if (state->prev_regs) {
-               *val = ((unsigned long *)state->prev_regs)[reg];
+               *val = READ_ONCE_NOCHECK(((unsigned long *)state->prev_regs)[reg]);
                return true;
        }
 
@@ -812,6 +812,9 @@ int software_node_register(const struct software_node *node)
        if (software_node_to_swnode(node))
                return -EEXIST;
 
+       if (node->parent && !parent)
+               return -EINVAL;
+
        return PTR_ERR_OR_ZERO(swnode_register(node, parent, 0));
 }
 EXPORT_SYMBOL_GPL(software_node_register);
@@ -869,6 +869,7 @@ static int rsxx_pci_probe(struct pci_dev *dev,
        card->event_wq = create_singlethread_workqueue(DRIVER_NAME"_event");
        if (!card->event_wq) {
                dev_err(CARD_TO_DEV(card), "Failed card event setup.\n");
+               st = -ENOMEM;
                goto failed_event_handler;
        }
 
@@ -627,7 +627,7 @@ static ssize_t writeback_store(struct device *dev,
        struct bio_vec bio_vec;
        struct page *page;
        ssize_t ret = len;
-       int mode;
+       int mode, err;
        unsigned long blk_idx = 0;
 
        if (sysfs_streq(buf, "idle"))
@@ -719,12 +719,17 @@ static ssize_t writeback_store(struct device *dev,
                 * XXX: A single page IO would be inefficient for write
                 * but it would be not bad as starter.
                 */
-               ret = submit_bio_wait(&bio);
-               if (ret) {
+               err = submit_bio_wait(&bio);
+               if (err) {
                        zram_slot_lock(zram, index);
                        zram_clear_flag(zram, index, ZRAM_UNDER_WB);
                        zram_clear_flag(zram, index, ZRAM_IDLE);
                        zram_slot_unlock(zram, index);
+                       /*
+                        * Return last IO error unless every IO were
+                        * not suceeded.
+                        */
+                       ret = err;
                        continue;
                }
 
@@ -474,14 +474,28 @@ static vm_fault_t drm_gem_shmem_fault(struct vm_fault *vmf)
        struct drm_gem_object *obj = vma->vm_private_data;
        struct drm_gem_shmem_object *shmem = to_drm_gem_shmem_obj(obj);
        loff_t num_pages = obj->size >> PAGE_SHIFT;
+       vm_fault_t ret;
        struct page *page;
+       pgoff_t page_offset;
 
-       if (vmf->pgoff >= num_pages || WARN_ON_ONCE(!shmem->pages))
-               return VM_FAULT_SIGBUS;
+       /* We don't use vmf->pgoff since that has the fake offset */
+       page_offset = (vmf->address - vma->vm_start) >> PAGE_SHIFT;
 
-       page = shmem->pages[vmf->pgoff];
+       mutex_lock(&shmem->pages_lock);
 
-       return vmf_insert_page(vma, vmf->address, page);
+       if (page_offset >= num_pages ||
+           WARN_ON_ONCE(!shmem->pages) ||
+           shmem->madv < 0) {
+               ret = VM_FAULT_SIGBUS;
+       } else {
+               page = shmem->pages[page_offset];
+
+               ret = vmf_insert_page(vma, vmf->address, page);
+       }
+
+       mutex_unlock(&shmem->pages_lock);
+
+       return ret;
 }
 
 static void drm_gem_shmem_vm_open(struct vm_area_struct *vma)
@@ -549,9 +563,6 @@ int drm_gem_shmem_mmap(struct file *filp, struct vm_area_struct *vma)
        vma->vm_flags &= ~VM_PFNMAP;
        vma->vm_flags |= VM_MIXEDMAP;
 
-       /* Remove the fake offset */
-       vma->vm_pgoff -= drm_vma_node_start(&shmem->base.vma_node);
-
        return 0;
 }
 EXPORT_SYMBOL_GPL(drm_gem_shmem_mmap);
@@ -99,6 +99,8 @@ static int compat_drm_version(struct file *file, unsigned int cmd,
 	if (copy_from_user(&v32, (void __user *)arg, sizeof(v32)))
 		return -EFAULT;
 
+	memset(&v, 0, sizeof(v));
+
 	v = (struct drm_version) {
 		.name_len = v32.name_len,
 		.name = compat_ptr(v32.name),
@@ -137,6 +139,9 @@ static int compat_drm_getunique(struct file *file, unsigned int cmd,
 
 	if (copy_from_user(&uq32, (void __user *)arg, sizeof(uq32)))
 		return -EFAULT;
+
+	memset(&uq, 0, sizeof(uq));
+
 	uq = (struct drm_unique){
 		.unique_len = uq32.unique_len,
 		.unique = compat_ptr(uq32.unique),
@@ -265,6 +270,8 @@ static int compat_drm_getclient(struct file *file, unsigned int cmd,
 	if (copy_from_user(&c32, argp, sizeof(c32)))
 		return -EFAULT;
 
+	memset(&client, 0, sizeof(client));
+
 	client.idx = c32.idx;
 
 	err = drm_ioctl_kernel(file, drm_getclient, &client, 0);
@@ -850,6 +857,8 @@ static int compat_drm_wait_vblank(struct file *file, unsigned int cmd,
 	if (copy_from_user(&req32, argp, sizeof(req32)))
 		return -EFAULT;
 
+	memset(&req, 0, sizeof(req));
+
 	req.request.type = req32.request.type;
 	req.request.sequence = req32.request.sequence;
 	req.request.signal = req32.request.signal;
@@ -887,6 +896,8 @@ static int compat_drm_mode_addfb2(struct file *file, unsigned int cmd,
 	struct drm_mode_fb_cmd2 req64;
 	int err;
 
+	memset(&req64, 0, sizeof(req64));
+
 	if (copy_from_user(&req64, argp,
 			   offsetof(drm_mode_fb_cmd232_t, modifier)))
 		return -EFAULT;
 
@@ -420,6 +420,16 @@ static int meson_probe_remote(struct platform_device *pdev,
 	return count;
 }
 
+static void meson_drv_shutdown(struct platform_device *pdev)
+{
+	struct meson_drm *priv = dev_get_drvdata(&pdev->dev);
+	struct drm_device *drm = priv->drm;
+
+	DRM_DEBUG_DRIVER("\n");
+	drm_kms_helper_poll_fini(drm);
+	drm_atomic_helper_shutdown(drm);
+}
+
 static int meson_drv_probe(struct platform_device *pdev)
 {
 	struct component_match *match = NULL;
@@ -469,6 +479,7 @@ MODULE_DEVICE_TABLE(of, dt_match);
 
 static struct platform_driver meson_drm_platform_driver = {
 	.probe      = meson_drv_probe,
+	.shutdown   = meson_drv_shutdown,
 	.driver     = {
 		.name	= "meson-drm",
 		.of_match_table = dt_match,
@@ -325,6 +325,7 @@ static void qxl_crtc_update_monitors_config(struct drm_crtc *crtc,
 
 	head.id = i;
 	head.flags = 0;
+	head.surface_id = 0;
 	oldcount = qdev->monitors_config->count;
 	if (crtc->state->active) {
 		struct drm_display_mode *mode = &crtc->mode;
@@ -995,7 +995,12 @@ static void logi_hidpp_recv_queue_notif(struct hid_device *hdev,
 		workitem.reports_supported |= STD_KEYBOARD;
 		break;
 	case 0x0d:
-		device_type = "eQUAD Lightspeed 1_1";
+		device_type = "eQUAD Lightspeed 1.1";
+		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
+		workitem.reports_supported |= STD_KEYBOARD;
+		break;
+	case 0x0f:
+		device_type = "eQUAD Lightspeed 1.2";
 		logi_hidpp_dev_conn_notif_equad(hdev, hidpp_report, &workitem);
 		workitem.reports_supported |= STD_KEYBOARD;
 		break;
 
@@ -89,7 +89,6 @@
 
 #define RCAR_BUS_PHASE_START	(MDBS | MIE | ESG)
 #define RCAR_BUS_PHASE_DATA	(MDBS | MIE)
-#define RCAR_BUS_MASK_DATA	(~(ESG | FSB) & 0xFF)
 #define RCAR_BUS_PHASE_STOP	(MDBS | MIE | FSB)
 
 #define RCAR_IRQ_SEND	(MNR | MAL | MST | MAT | MDE)
@@ -117,6 +116,7 @@ enum rcar_i2c_type {
 };
 
 struct rcar_i2c_priv {
+	u32 flags;
 	void __iomem *io;
 	struct i2c_adapter adap;
 	struct i2c_msg *msg;
@@ -127,7 +127,6 @@ struct rcar_i2c_priv {
 
 	int pos;
 	u32 icccr;
-	u32 flags;
 	u8 recovery_icmcr;	/* protected by adapter lock */
 	enum rcar_i2c_type devtype;
 	struct i2c_client *slave;
@@ -616,7 +615,7 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
 /*
  * This driver has a lock-free design because there are IP cores (at least
  * R-Car Gen2) which have an inherent race condition in their hardware design.
- * There, we need to clear RCAR_BUS_MASK_DATA bits as soon as possible after
+ * There, we need to switch to RCAR_BUS_PHASE_DATA as soon as possible after
 * the interrupt was generated, otherwise an unwanted repeated message gets
 * generated. It turned out that taking a spinlock at the beginning of the ISR
 * was already causing repeated messages. Thus, this driver was converted to
@@ -625,13 +624,11 @@ static bool rcar_i2c_slave_irq(struct rcar_i2c_priv *priv)
 static irqreturn_t rcar_i2c_irq(int irq, void *ptr)
 {
 	struct rcar_i2c_priv *priv = ptr;
-	u32 msr, val;
+	u32 msr;
 
 	/* Clear START or STOP immediately, except for REPSTART after read */
-	if (likely(!(priv->flags & ID_P_REP_AFTER_RD))) {
-		val = rcar_i2c_read(priv, ICMCR);
-		rcar_i2c_write(priv, ICMCR, val & RCAR_BUS_MASK_DATA);
-	}
+	if (likely(!(priv->flags & ID_P_REP_AFTER_RD)))
+		rcar_i2c_write(priv, ICMCR, RCAR_BUS_PHASE_DATA);
 
 	msr = rcar_i2c_read(priv, ICMSR);
 
@@ -48,6 +48,7 @@
 #include <linux/efi.h>
 #include <linux/input.h>
 #include <linux/input/mt.h>
+#include <linux/ktime.h>
 #include <linux/leds.h>
 #include <linux/module.h>
 #include <linux/spinlock.h>
@@ -400,7 +401,7 @@ struct applespi_data {
 	unsigned int			cmd_msg_cntr;
 	/* lock to protect the above parameters and flags below */
 	spinlock_t			cmd_msg_lock;
-	bool				cmd_msg_queued;
+	ktime_t				cmd_msg_queued;
 	enum applespi_evt_type		cmd_evt_type;
 
 	struct led_classdev		backlight_info;
@@ -716,7 +717,7 @@ static void applespi_msg_complete(struct applespi_data *applespi,
 		wake_up_all(&applespi->drain_complete);
 
 	if (is_write_msg) {
-		applespi->cmd_msg_queued = false;
+		applespi->cmd_msg_queued = 0;
 		applespi_send_cmd_msg(applespi);
 	}
 
@@ -758,8 +759,16 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
 		return 0;
 
 	/* check whether send is in progress */
-	if (applespi->cmd_msg_queued)
-		return 0;
+	if (applespi->cmd_msg_queued) {
+		if (ktime_ms_delta(ktime_get(), applespi->cmd_msg_queued) < 1000)
+			return 0;
+
+		dev_warn(&applespi->spi->dev, "Command %d timed out\n",
+			 applespi->cmd_evt_type);
+
+		applespi->cmd_msg_queued = 0;
+		applespi->write_active = false;
+	}
 
 	/* set up packet */
 	memset(packet, 0, APPLESPI_PACKET_SIZE);
@@ -856,7 +865,7 @@ static int applespi_send_cmd_msg(struct applespi_data *applespi)
 		return sts;
 	}
 
-	applespi->cmd_msg_queued = true;
+	applespi->cmd_msg_queued = ktime_get_coarse();
 	applespi->write_active = true;
 
 	return 0;
@@ -1908,7 +1917,7 @@ static int __maybe_unused applespi_resume(struct device *dev)
 	applespi->drain = false;
 	applespi->have_cl_led_on = false;
 	applespi->have_bl_level = 0;
-	applespi->cmd_msg_queued = false;
+	applespi->cmd_msg_queued = 0;
 	applespi->read_active = false;
 	applespi->write_active = false;
 
@@ -12,6 +12,7 @@
 #include <linux/acpi.h>
 #include <linux/list.h>
 #include <linux/bitmap.h>
+#include <linux/delay.h>
 #include <linux/slab.h>
 #include <linux/syscore_ops.h>
 #include <linux/interrupt.h>
@@ -253,6 +254,8 @@ static enum iommu_init_state init_state = IOMMU_START_STATE;
 static int amd_iommu_enable_interrupts(void);
 static int __init iommu_go_to_state(enum iommu_init_state state);
 static void init_device_table_dma(void);
+static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
+				u8 fxn, u64 *value, bool is_write);
 
 static bool amd_iommu_pre_enabled = true;
 
@@ -1672,13 +1675,11 @@ static int __init init_iommu_all(struct acpi_table_header *table)
 	return 0;
 }
 
-static int iommu_pc_get_set_reg(struct amd_iommu *iommu, u8 bank, u8 cntr,
-				u8 fxn, u64 *value, bool is_write);
-
-static void init_iommu_perf_ctr(struct amd_iommu *iommu)
+static void __init init_iommu_perf_ctr(struct amd_iommu *iommu)
 {
+	int retry;
 	struct pci_dev *pdev = iommu->dev;
-	u64 val = 0xabcd, val2 = 0, save_reg = 0;
+	u64 val = 0xabcd, val2 = 0, save_reg, save_src;
 
 	if (!iommu_feature(iommu, FEATURE_PC))
 		return;
@@ -1686,17 +1687,39 @@ static void init_iommu_perf_ctr(struct amd_iommu *iommu)
 	amd_iommu_pc_present = true;
 
 	/* save the value to restore, if writable */
-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false))
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, false) ||
+	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, false))
 		goto pc_false;
 
+	/*
+	 * Disable power gating by programing the performance counter
+	 * source to 20 (i.e. counts the reads and writes from/to IOMMU
+	 * Reserved Register [MMIO Offset 1FF8h] that are ignored.),
+	 * which never get incremented during this init phase.
+	 * (Note: The event is also deprecated.)
+	 */
+	val = 20;
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 8, &val, true))
+		goto pc_false;
+
 	/* Check if the performance counters can be written to */
-	if ((iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true)) ||
-	    (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false)) ||
-	    (val != val2))
-		goto pc_false;
+	val = 0xabcd;
+	for (retry = 5; retry; retry--) {
+		if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &val, true) ||
+		    iommu_pc_get_set_reg(iommu, 0, 0, 0, &val2, false) ||
+		    val2)
+			break;
+
+		/* Wait about 20 msec for power gating to disable and retry. */
+		msleep(20);
+	}
 
 	/* restore */
-	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true))
+	if (iommu_pc_get_set_reg(iommu, 0, 0, 0, &save_reg, true) ||
+	    iommu_pc_get_set_reg(iommu, 0, 0, 8, &save_src, true))
 		goto pc_false;
 
+	if (val != val2)
+		goto pc_false;
+
 	pci_info(pdev, "IOMMU performance counters supported\n");
 
@@ -245,7 +245,7 @@ static int vsp1_du_pipeline_setup_brx(struct vsp1_device *vsp1,
 		brx = &vsp1->bru->entity;
 	else if (pipe->brx && !drm_pipe->force_brx_release)
 		brx = pipe->brx;
-	else if (!vsp1->bru->entity.pipe)
+	else if (vsp1_feature(vsp1, VSP1_HAS_BRU) && !vsp1->bru->entity.pipe)
 		brx = &vsp1->bru->entity;
 	else
 		brx = &vsp1->brs->entity;
@@ -462,9 +462,9 @@ static int vsp1_du_pipeline_setup_inputs(struct vsp1_device *vsp1,
 	 * make sure it is present in the pipeline's list of entities if it
 	 * wasn't already.
 	 */
-	if (!use_uif) {
+	if (drm_pipe->uif && !use_uif) {
 		drm_pipe->uif->pipe = NULL;
-	} else if (!drm_pipe->uif->pipe) {
+	} else if (drm_pipe->uif && !drm_pipe->uif->pipe) {
 		drm_pipe->uif->pipe = pipe;
 		list_add_tail(&drm_pipe->uif->list_pipe, &pipe->entities);
 	}
 
@@ -5,6 +5,7 @@ obj-y += keymaps/
 obj-$(CONFIG_RC_CORE) += rc-core.o
 rc-core-y := rc-main.o rc-ir-raw.o
 rc-core-$(CONFIG_LIRC) += lirc_dev.o
+rc-core-$(CONFIG_MEDIA_CEC_RC) += keymaps/rc-cec.o
 obj-$(CONFIG_IR_NEC_DECODER) += ir-nec-decoder.o
 obj-$(CONFIG_IR_RC5_DECODER) += ir-rc5-decoder.o
@@ -20,7 +20,6 @@ obj-$(CONFIG_RC_MAP) += rc-adstech-dvb-t-pci.o \
 			rc-behold.o \
 			rc-behold-columbus.o \
 			rc-budget-ci-old.o \
-			rc-cec.o \
 			rc-cinergy-1400.o \
 			rc-cinergy.o \
 			rc-d680-dmb.o \
@@ -1,5 +1,15 @@
 // SPDX-License-Identifier: GPL-2.0-or-later
 /* Keytable for the CEC remote control
  *
+ * This keymap is unusual in that it can't be built as a module,
+ * instead it is registered directly in rc-main.c if CONFIG_MEDIA_CEC_RC
+ * is set. This is because it can be called from drm_dp_cec_set_edid() via
+ * cec_register_adapter() in an asynchronous context, and it is not
+ * allowed to use request_module() to load rc-cec.ko in that case.
+ *
+ * Since this keymap is only used if CONFIG_MEDIA_CEC_RC is set, we
+ * just compile this keymap into the rc-core module and never as a
+ * separate module.
+ *
  * Copyright (c) 2015 by Kamil Debski
  */
@@ -152,7 +162,7 @@ static struct rc_map_table cec[] = {
 	/* 0x77-0xff: Reserved */
 };
 
-static struct rc_map_list cec_map = {
+struct rc_map_list cec_map = {
 	.map = {
 		.scan = cec,
 		.size = ARRAY_SIZE(cec),
@@ -160,19 +170,3 @@ static struct rc_map_list cec_map = {
 		.name = RC_MAP_CEC,
 	}
 };
-
-static int __init init_rc_map_cec(void)
-{
-	return rc_map_register(&cec_map);
-}
-
-static void __exit exit_rc_map_cec(void)
-{
-	rc_map_unregister(&cec_map);
-}
-
-module_init(init_rc_map_cec);
-module_exit(exit_rc_map_cec);
-
-MODULE_LICENSE("GPL");
-MODULE_AUTHOR("Kamil Debski");
@@ -2033,6 +2033,9 @@ static int __init rc_core_init(void)
 
 	led_trigger_register_simple("rc-feedback", &led_feedback);
 	rc_map_register(&empty_map);
+#ifdef CONFIG_MEDIA_CEC_RC
+	rc_map_register(&cec_map);
+#endif
 
 	return 0;
 }
@@ -2042,6 +2045,9 @@ static void __exit rc_core_exit(void)
 	lirc_dev_exit();
 	class_unregister(&rc_class);
 	led_trigger_unregister_simple(led_feedback);
+#ifdef CONFIG_MEDIA_CEC_RC
+	rc_map_unregister(&cec_map);
+#endif
 	rc_map_unregister(&empty_map);
 }
 
@@ -399,7 +399,7 @@ void usbtv_audio_free(struct usbtv *usbtv)
 	cancel_work_sync(&usbtv->snd_trigger);
 
 	if (usbtv->snd && usbtv->udev) {
-		snd_card_free(usbtv->snd);
+		snd_card_free_when_closed(usbtv->snd);
 		usbtv->snd = NULL;
 	}
 }
 
@@ -924,6 +924,11 @@ static int fastrpc_internal_invoke(struct fastrpc_user *fl, u32 kernel,
 	if (!fl->cctx->rpdev)
 		return -EPIPE;
 
+	if (handle == FASTRPC_INIT_HANDLE && !kernel) {
+		dev_warn_ratelimited(fl->sctx->dev, "user app trying to send a kernel RPC message (%d)\n", handle);
+		return -EPERM;
+	}
+
 	ctx = fastrpc_context_alloc(fl, kernel, sc, args);
 	if (IS_ERR(ctx))
 		return PTR_ERR(ctx);
 
@@ -166,6 +166,7 @@ static const struct of_device_id pvpanic_mmio_match[] = {
 	{ .compatible = "qemu,pvpanic-mmio", },
 	{}
 };
+MODULE_DEVICE_TABLE(of, pvpanic_mmio_match);
 
 static struct platform_driver pvpanic_mmio_driver = {
 	.driver = {
@@ -373,11 +373,6 @@ void mmc_remove_card(struct mmc_card *card)
 	mmc_remove_card_debugfs(card);
 #endif
 
-	if (host->cqe_enabled) {
-		host->cqe_ops->cqe_disable(host);
-		host->cqe_enabled = false;
-	}
-
 	if (mmc_card_present(card)) {
 		if (mmc_host_is_spi(card->host)) {
 			pr_info("%s: SPI card removed\n",
@@ -390,6 +385,10 @@ void mmc_remove_card(struct mmc_card *card)
 		of_node_put(card->dev.of_node);
 	}
 
+	if (host->cqe_enabled) {
+		host->cqe_ops->cqe_disable(host);
+		host->cqe_enabled = false;
+	}
+
 	put_device(&card->dev);
 }
 
@@ -424,10 +424,6 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
 
 		/* EXT_CSD value is in units of 10ms, but we store in ms */
 		card->ext_csd.part_time = 10 * ext_csd[EXT_CSD_PART_SWITCH_TIME];
-		/* Some eMMC set the value too low so set a minimum */
-		if (card->ext_csd.part_time &&
-		    card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
-			card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
 
 		/* Sleep / awake timeout in 100ns units */
 		if (sa_shift > 0 && sa_shift <= 0x17)
@@ -617,6 +613,17 @@ static int mmc_decode_ext_csd(struct mmc_card *card, u8 *ext_csd)
 		card->ext_csd.data_sector_size = 512;
 	}
 
+	/*
+	 * GENERIC_CMD6_TIME is to be used "unless a specific timeout is defined
+	 * when accessing a specific field", so use it here if there is no
+	 * PARTITION_SWITCH_TIME.
+	 */
+	if (!card->ext_csd.part_time)
+		card->ext_csd.part_time = card->ext_csd.generic_cmd6_time;
+	/* Some eMMC set the value too low so set a minimum */
+	if (card->ext_csd.part_time < MMC_MIN_PART_SWITCH_TIME)
+		card->ext_csd.part_time = MMC_MIN_PART_SWITCH_TIME;
+
 	/* eMMC v5 or later */
 	if (card->ext_csd.rev >= 7) {
 		memcpy(card->ext_csd.fwrev, &ext_csd[EXT_CSD_FIRMWARE_VERSION],
 
@@ -1020,13 +1020,13 @@ static void msdc_track_cmd_data(struct msdc_host *host,
 static void msdc_request_done(struct msdc_host *host, struct mmc_request *mrq)
 {
 	unsigned long flags;
-	bool ret;
 
-	ret = cancel_delayed_work(&host->req_timeout);
-	if (!ret) {
-		/* delay work already running */
-		return;
-	}
+	/*
+	 * No need check the return value of cancel_delayed_work, as only ONE
+	 * path will go here!
+	 */
+	cancel_delayed_work(&host->req_timeout);
+
 	spin_lock_irqsave(&host->lock, flags);
 	host->mrq = NULL;
 	spin_unlock_irqrestore(&host->lock, flags);
@@ -1046,7 +1046,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
 	bool done = false;
 	bool sbc_error;
 	unsigned long flags;
-	u32 *rsp = cmd->resp;
+	u32 *rsp;
 
 	if (mrq->sbc && cmd == mrq->cmd &&
 	    (events & (MSDC_INT_ACMDRDY | MSDC_INT_ACMDCRCERR
@@ -1067,6 +1067,7 @@ static bool msdc_cmd_done(struct msdc_host *host, int events,
 
 	if (done)
 		return true;
+	rsp = cmd->resp;
 
 	sdr_clr_bits(host->base + MSDC_INTEN, cmd_ints_mask);
 
@@ -1254,7 +1255,7 @@ static void msdc_data_xfer_next(struct msdc_host *host,
 static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
 				struct mmc_request *mrq, struct mmc_data *data)
 {
-	struct mmc_command *stop = data->stop;
+	struct mmc_command *stop;
 	unsigned long flags;
 	bool done;
 	unsigned int check_data = events &
@@ -1270,6 +1271,7 @@ static bool msdc_data_xfer_done(struct msdc_host *host, u32 events,
 
 	if (done)
 		return true;
+	stop = data->stop;
 
 	if (check_data || (stop && stop->error)) {
 		dev_dbg(host->dev, "DMA status: 0x%8X\n",
 
@@ -644,7 +644,7 @@ static int mxs_mmc_probe(struct platform_device *pdev)
 
 	ret = mmc_of_parse(mmc);
 	if (ret)
-		goto out_clk_disable;
+		goto out_free_dma;
 
 	mmc->ocr_avail = MMC_VDD_32_33 | MMC_VDD_33_34;
 
@@ -548,7 +548,7 @@ static int flexcan_chip_freeze(struct flexcan_priv *priv)
 	u32 reg;
 
 	reg = priv->read(&regs->mcr);
-	reg |= FLEXCAN_MCR_HALT;
+	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT;
 	priv->write(reg, &regs->mcr);
 
 	while (timeout-- && !(priv->read(&regs->mcr) & FLEXCAN_MCR_FRZ_ACK))
@@ -1057,10 +1057,13 @@ static int flexcan_chip_start(struct net_device *dev)
 
 	flexcan_set_bittiming(dev);
 
+	/* set freeze, halt */
+	err = flexcan_chip_freeze(priv);
+	if (err)
+		goto out_chip_disable;
+
 	/* MCR
 	 *
-	 *	enable freeze
-	 *	halt now
 	 *	only supervisor access
 	 *	enable warning int
 	 *	enable individual RX masking
@@ -1069,9 +1072,8 @@ static int flexcan_chip_start(struct net_device *dev)
 	 */
 	reg_mcr = priv->read(&regs->mcr);
 	reg_mcr &= ~FLEXCAN_MCR_MAXMB(0xff);
-	reg_mcr |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT | FLEXCAN_MCR_SUPV |
-		FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ | FLEXCAN_MCR_IDAM_C |
-		FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
+	reg_mcr |= FLEXCAN_MCR_SUPV | FLEXCAN_MCR_WRN_EN | FLEXCAN_MCR_IRMQ |
+		FLEXCAN_MCR_IDAM_C | FLEXCAN_MCR_MAXMB(priv->tx_mb_idx);
 
 	/* MCR
 	 *
@@ -1432,10 +1434,14 @@ static int register_flexcandev(struct net_device *dev)
 	if (err)
 		goto out_chip_disable;
 
-	/* set freeze, halt and activate FIFO, restrict register access */
+	/* set freeze, halt */
+	err = flexcan_chip_freeze(priv);
+	if (err)
+		goto out_chip_disable;
+
+	/* activate FIFO, restrict register access */
 	reg = priv->read(&regs->mcr);
-	reg |= FLEXCAN_MCR_FRZ | FLEXCAN_MCR_HALT |
-		FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
+	reg |= FLEXCAN_MCR_FEN | FLEXCAN_MCR_SUPV;
 	priv->write(reg, &regs->mcr);
 
 	/* Currently we only support newer versions of this core
 
@@ -325,14 +325,14 @@ static int tcan4x5x_init(struct m_can_classdev *cdev)
 	if (ret)
 		return ret;
 
+	/* Zero out the MCAN buffers */
+	m_can_init_ram(cdev);
+
 	ret = regmap_update_bits(tcan4x5x->regmap, TCAN4X5X_CONFIG,
 				 TCAN4X5X_MODE_SEL_MASK, TCAN4X5X_MODE_NORMAL);
 	if (ret)
 		return ret;
 
-	/* Zero out the MCAN buffers */
-	m_can_init_ram(cdev);
-
 	return ret;
 }
 
@@ -1897,13 +1897,16 @@ static int alx_resume(struct device *dev)
 
 	if (!netif_running(alx->dev))
 		return 0;
-	netif_device_attach(alx->dev);
 
 	rtnl_lock();
 	err = __alx_open(alx, true);
 	rtnl_unlock();
+	if (err)
+		return err;
 
-	return err;
+	netif_device_attach(alx->dev);
+
+	return 0;
 }
 
 static SIMPLE_DEV_PM_OPS(alx_pm_ops, alx_suspend, alx_resume);
 
@@ -7925,10 +7925,18 @@ static void bnxt_setup_inta(struct bnxt *bp)
 	bp->irq_tbl[0].handler = bnxt_inta;
 }
 
+static int bnxt_init_int_mode(struct bnxt *bp);
+
 static int bnxt_setup_int_mode(struct bnxt *bp)
 {
 	int rc;
 
+	if (!bp->irq_tbl) {
+		rc = bnxt_init_int_mode(bp);
+		if (rc || !bp->irq_tbl)
+			return rc ?: -ENODEV;
+	}
+
 	if (bp->flags & BNXT_FLAG_USING_MSIX)
 		bnxt_setup_msix(bp);
 	else
@@ -8113,7 +8121,7 @@ static int bnxt_init_inta(struct bnxt *bp)
 
 static int bnxt_init_int_mode(struct bnxt *bp)
 {
-	int rc = 0;
+	int rc = -ENODEV;
 
 	if (bp->flags & BNXT_FLAG_MSIX_CAP)
 		rc = bnxt_init_msix(bp);
@@ -8748,7 +8756,8 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 {
 	struct hwrm_func_drv_if_change_output *resp = bp->hwrm_cmd_resp_addr;
 	struct hwrm_func_drv_if_change_input req = {0};
-	bool resc_reinit = false, fw_reset = false;
+	bool fw_reset = !bp->irq_tbl;
+	bool resc_reinit = false;
 	u32 flags = 0;
 	int rc;
 
@@ -8776,6 +8785,7 @@ static int bnxt_hwrm_if_change(struct bnxt *bp, bool up)
 
 	if (test_bit(BNXT_STATE_IN_FW_RESET, &bp->state) && !fw_reset) {
 		netdev_err(bp->dev, "RESET_DONE not set during FW reset.\n");
+		set_bit(BNXT_STATE_ABORT_ERR, &bp->state);
 		return -ENODEV;
 	}
 	if (resc_reinit || fw_reset) {
 
@@ -134,6 +134,8 @@ struct board_info {
 	u32		wake_state;
 
 	int		ip_summed;
+
+	struct regulator *power_supply;
 };
 
 /* debug code */
@@ -1454,7 +1456,7 @@ dm9000_probe(struct platform_device *pdev)
 		if (ret) {
 			dev_err(dev, "failed to request reset gpio %d: %d\n",
 				reset_gpios, ret);
-			return -ENODEV;
+			goto out_regulator_disable;
 		}
 
 		/* According to manual PWRST# Low Period Min 1ms */
@@ -1466,8 +1468,10 @@ dm9000_probe(struct platform_device *pdev)
 
 	if (!pdata) {
 		pdata = dm9000_parse_dt(&pdev->dev);
-		if (IS_ERR(pdata))
-			return PTR_ERR(pdata);
+		if (IS_ERR(pdata)) {
+			ret = PTR_ERR(pdata);
+			goto out_regulator_disable;
+		}
 	}
 
 	/* Init network device */
@@ -1484,6 +1488,8 @@ dm9000_probe(struct platform_device *pdev)
 
 	db->dev = &pdev->dev;
 	db->ndev = ndev;
+	if (!IS_ERR(power))
+		db->power_supply = power;
 
 	spin_lock_init(&db->lock);
 	mutex_init(&db->addr_lock);
@@ -1708,6 +1714,10 @@ out:
 	dm9000_release_board(pdev, db);
 	free_netdev(ndev);
 
+out_regulator_disable:
+	if (!IS_ERR(power))
+		regulator_disable(power);
+
 	return ret;
 }
 
@@ -1765,10 +1775,13 @@ static int
 dm9000_drv_remove(struct platform_device *pdev)
 {
 	struct net_device *ndev = platform_get_drvdata(pdev);
+	struct board_info *dm = to_dm9000_board(ndev);
 
 	unregister_netdev(ndev);
-	dm9000_release_board(pdev, netdev_priv(ndev));
+	dm9000_release_board(pdev, dm);
 	free_netdev(ndev);		/* free device structure */
+	if (dm->power_supply)
+		regulator_disable(dm->power_supply);
 
 	dev_dbg(&pdev->dev, "released and freed device\n");
 	return 0;
 
@ -942,7 +942,7 @@ static void enetc_free_rxtx_rings(struct enetc_ndev_priv *priv)
|
||||
enetc_free_tx_ring(priv->tx_ring[i]);
|
||||
}
|
||||
|
||||
static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
{
|
||||
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
||||
|
||||
@ -963,7 +963,7 @@ static int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
{
|
||||
int size = cbdr->bd_count * sizeof(struct enetc_cbd);
|
||||
|
||||
@ -971,7 +971,7 @@ static void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr)
|
||||
cbdr->bd_base = NULL;
|
||||
}
|
||||
|
||||
static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
||||
void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
||||
{
|
||||
/* set CBDR cache attributes */
|
||||
enetc_wr(hw, ENETC_SICAR2,
|
||||
@ -991,7 +991,7 @@ static void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr)
|
||||
cbdr->cir = hw->reg + ENETC_SICBDRCIR;
|
||||
}
|
||||
|
||||
static void enetc_clear_cbdr(struct enetc_hw *hw)
|
||||
void enetc_clear_cbdr(struct enetc_hw *hw)
|
||||
{
|
||||
enetc_wr(hw, ENETC_SICBDRMR, 0);
|
||||
}
|
||||
@ -1016,13 +1016,12 @@ static int enetc_setup_default_rss_table(struct enetc_si *si, int num_groups)
|
||||
return 0;
|
||||
}
|
||||
|
||||
static int enetc_configure_si(struct enetc_ndev_priv *priv)
|
||||
int enetc_configure_si(struct enetc_ndev_priv *priv)
|
||||
{
|
||||
struct enetc_si *si = priv->si;
|
||||
struct enetc_hw *hw = &si->hw;
|
||||
int err;
|
||||
|
||||
enetc_setup_cbdr(hw, &si->cbd_ring);
|
||||
/* set SI cache attributes */
|
||||
enetc_wr(hw, ENETC_SICAR0,
|
||||
ENETC_SICAR_RD_COHERENT | ENETC_SICAR_WR_COHERENT);
|
||||
@ -1068,6 +1067,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
||||
if (err)
|
||||
return err;
|
||||
|
||||
enetc_setup_cbdr(&si->hw, &si->cbd_ring);
|
||||
|
||||
priv->cls_rules = kcalloc(si->num_fs_entries, sizeof(*priv->cls_rules),
|
||||
GFP_KERNEL);
|
||||
if (!priv->cls_rules) {
|
||||
@ -1075,14 +1076,8 @@ int enetc_alloc_si_resources(struct enetc_ndev_priv *priv)
|
||||
goto err_alloc_cls;
|
||||
}
|
||||
|
||||
err = enetc_configure_si(priv);
|
||||
if (err)
|
||||
goto err_config_si;
|
||||
|
||||
return 0;
|
||||
|
||||
err_config_si:
|
||||
kfree(priv->cls_rules);
|
||||
err_alloc_cls:
|
||||
enetc_clear_cbdr(&si->hw);
|
||||
enetc_free_cbdr(priv->dev, &si->cbd_ring);
|
||||
|
@ -221,6 +221,7 @@ void enetc_get_si_caps(struct enetc_si *si);
|
||||
void enetc_init_si_rings_params(struct enetc_ndev_priv *priv);
|
||||
int enetc_alloc_si_resources(struct enetc_ndev_priv *priv);
|
||||
void enetc_free_si_resources(struct enetc_ndev_priv *priv);
|
||||
int enetc_configure_si(struct enetc_ndev_priv *priv);
|
||||
|
||||
int enetc_open(struct net_device *ndev);
|
||||
int enetc_close(struct net_device *ndev);
|
||||
@ -236,6 +237,10 @@ int enetc_setup_tc(struct net_device *ndev, enum tc_setup_type type,
|
||||
void enetc_set_ethtool_ops(struct net_device *ndev);
|
||||
|
||||
/* control buffer descriptor ring (CBDR) */
|
||||
int enetc_alloc_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
|
||||
void enetc_free_cbdr(struct device *dev, struct enetc_cbdr *cbdr);
|
||||
void enetc_setup_cbdr(struct enetc_hw *hw, struct enetc_cbdr *cbdr);
|
||||
void enetc_clear_cbdr(struct enetc_hw *hw);
|
||||
int enetc_set_mac_flt_entry(struct enetc_si *si, int index,
|
||||
char *mac_addr, int si_map);
|
||||
int enetc_clear_mac_flt_entry(struct enetc_si *si, int index);
|
||||
|
@ -854,6 +854,26 @@ static int enetc_init_port_rss_memory(struct enetc_si *si)
|
||||
return err;
|
||||
}
|
||||
|
||||
static void enetc_init_unused_port(struct enetc_si *si)
|
||||
{
|
||||
struct device *dev = &si->pdev->dev;
|
||||
struct enetc_hw *hw = &si->hw;
|
||||
int err;
|
||||
|
||||
si->cbd_ring.bd_count = ENETC_CBDR_DEFAULT_SIZE;
|
||||
err = enetc_alloc_cbdr(dev, &si->cbd_ring);
|
||||
if (err)
|
||||
return;
|
||||
|
||||
enetc_setup_cbdr(hw, &si->cbd_ring);
|
||||
|
||||
enetc_init_port_rfs_memory(si);
|
||||
enetc_init_port_rss_memory(si);
|
||||
|
||||
enetc_clear_cbdr(hw);
|
||||
enetc_free_cbdr(dev, &si->cbd_ring);
|
||||
}
|
||||
|
||||
static int enetc_pf_probe(struct pci_dev *pdev,
|
||||
const struct pci_device_id *ent)
|
||||
{
|
||||
@ -863,11 +883,6 @@ static int enetc_pf_probe(struct pci_dev *pdev,
|
||||
struct enetc_pf *pf;
|
||||
int err;
|
||||
|
||||
if (pdev->dev.of_node && !of_device_is_available(pdev->dev.of_node)) {
|
||||
dev_info(&pdev->dev, "device is disabled, skipping\n");
|
||||
return -ENODEV;
|
||||
}
|
||||
|
||||
err = enetc_pci_probe(pdev, KBUILD_MODNAME, sizeof(*pf));
|
||||
if (err) {
|
||||
dev_err(&pdev->dev, "PCI probing failed\n");
|
||||
@ -881,6 +896,13 @@ static int enetc_pf_probe(struct pci_dev *pdev,
|
||||
goto err_map_pf_space;
|
||||
}
|
||||
|
||||
if (pdev->dev.of_node && !of_device_is_available(pdev->dev.of_node)) {
|
||||
enetc_init_unused_port(si);
|
||||
dev_info(&pdev->dev, "device is disabled, skipping\n");
|
||||
err = -ENODEV;
|
||||
goto err_device_disabled;
|
||||
}
|
||||
|
||||
pf = enetc_si_priv(si);
|
||||
pf->si = si;
|
||||
pf->total_vfs = pci_sriov_get_totalvfs(pdev);
|
||||
@@ -920,6 +942,12 @@ static int enetc_pf_probe(struct pci_dev *pdev,
 		goto err_init_port_rss;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -945,6 +973,7 @@ err_reg_netdev:
 	enetc_mdio_remove(pf);
 	enetc_of_put_phy(priv);
 	enetc_free_msix(priv);
+err_config_si:
 err_init_port_rss:
 err_init_port_rfs:
 err_alloc_msix:
@@ -953,6 +982,7 @@ err_alloc_si_res:
 	si->ndev = NULL;
 	free_netdev(ndev);
 err_alloc_netdev:
+err_device_disabled:
 err_map_pf_space:
 	enetc_pci_remove(pdev);

@@ -189,6 +189,12 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 		goto err_alloc_si_res;
 	}
 
+	err = enetc_configure_si(priv);
+	if (err) {
+		dev_err(&pdev->dev, "Failed to configure SI\n");
+		goto err_config_si;
+	}
+
 	err = enetc_alloc_msix(priv);
 	if (err) {
 		dev_err(&pdev->dev, "MSIX alloc failed\n");
@@ -208,6 +214,7 @@ static int enetc_vf_probe(struct pci_dev *pdev,
 
 err_reg_netdev:
 	enetc_free_msix(priv);
+err_config_si:
 err_alloc_msix:
 	enetc_free_si_resources(priv);
 err_alloc_si_res:

@@ -1018,16 +1018,16 @@ struct hclge_fd_tcam_config_3_cmd {
 #define HCLGE_FD_AD_DROP_B 0
 #define HCLGE_FD_AD_DIRECT_QID_B 1
 #define HCLGE_FD_AD_QID_S 2
-#define HCLGE_FD_AD_QID_M GENMASK(12, 2)
+#define HCLGE_FD_AD_QID_M GENMASK(11, 2)
 #define HCLGE_FD_AD_USE_COUNTER_B 12
 #define HCLGE_FD_AD_COUNTER_NUM_S 13
 #define HCLGE_FD_AD_COUNTER_NUM_M GENMASK(20, 13)
 #define HCLGE_FD_AD_NXT_STEP_B 20
 #define HCLGE_FD_AD_NXT_KEY_S 21
-#define HCLGE_FD_AD_NXT_KEY_M GENMASK(26, 21)
+#define HCLGE_FD_AD_NXT_KEY_M GENMASK(25, 21)
 #define HCLGE_FD_AD_WR_RULE_ID_B 0
 #define HCLGE_FD_AD_RULE_ID_S 1
-#define HCLGE_FD_AD_RULE_ID_M GENMASK(13, 1)
+#define HCLGE_FD_AD_RULE_ID_M GENMASK(12, 1)
 
 struct hclge_fd_ad_config_cmd {
 	u8 stage;

@@ -4908,9 +4908,9 @@ static bool hclge_fd_convert_tuple(u32 tuple_bit, u8 *key_x, u8 *key_y,
 	case BIT(INNER_SRC_MAC):
 		for (i = 0; i < ETH_ALEN; i++) {
 			calc_x(key_x[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 			calc_y(key_y[ETH_ALEN - 1 - i], rule->tuples.src_mac[i],
-			       rule->tuples.src_mac[i]);
+			       rule->tuples_mask.src_mac[i]);
 		}
 
 		return true;
@@ -5939,8 +5939,7 @@ static int hclge_get_fd_rule_info(struct hnae3_handle *handle,
 		fs->h_ext.vlan_tci = cpu_to_be16(rule->tuples.vlan_tag1);
 		fs->m_ext.vlan_tci =
			rule->unused_tuple & BIT(INNER_VLAN_TAG_FST) ?
-				cpu_to_be16(VLAN_VID_MASK) :
-				cpu_to_be16(rule->tuples_mask.vlan_tag1);
+			0 : cpu_to_be16(rule->tuples_mask.vlan_tag1);
 	}
 
 	if (fs->flow_type & FLOW_MAC_EXT) {
@@ -1753,10 +1753,9 @@ static int ibmvnic_set_mac(struct net_device *netdev, void *p)
 	if (!is_valid_ether_addr(addr->sa_data))
 		return -EADDRNOTAVAIL;
 
-	if (adapter->state != VNIC_PROBED) {
-		ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	ether_addr_copy(adapter->mac_addr, addr->sa_data);
+	if (adapter->state != VNIC_PROBED)
 		rc = __ibmvnic_set_mac(netdev, addr->sa_data);
-	}
 
 	return rc;
 }

@@ -15142,6 +15142,8 @@ static int i40e_probe(struct pci_dev *pdev, const struct pci_device_id *ent)
 		if (err) {
 			dev_info(&pdev->dev,
				 "setup of misc vector failed: %d\n", err);
+			i40e_cloud_filter_exit(pf);
+			i40e_fdir_teardown(pf);
 			goto err_vsis;
 		}
 	}

@@ -575,6 +575,11 @@ static int ixgbe_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
+
 	if (ixgbe_ipsec_check_mgmt_ip(xs)) {
 		netdev_err(dev, "IPsec IP addr clash with mgmt filters\n");
 		return -EINVAL;
@@ -272,6 +272,11 @@ static int ixgbevf_ipsec_add_sa(struct xfrm_state *xs)
 		return -EINVAL;
 	}
 
+	if (xs->props.mode != XFRM_MODE_TRANSPORT) {
+		netdev_err(dev, "Unsupported mode for ipsec offload\n");
+		return -EINVAL;
+	}
+
 	if (xs->xso.flags & XFRM_OFFLOAD_INBOUND) {
 		struct rx_sa rsa;
 
@@ -47,7 +47,7 @@
 #define EN_ETHTOOL_SHORT_MASK cpu_to_be16(0xffff)
 #define EN_ETHTOOL_WORD_MASK cpu_to_be32(0xffffffff)
 
-static int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv)
 {
 	int i, t;
 	int err = 0;
@@ -3657,6 +3657,8 @@ int mlx4_en_reset_config(struct net_device *dev,
			en_err(priv, "Failed starting port\n");
 	}
 
+	if (!err)
+		err = mlx4_en_moderation_update(priv);
 out:
 	mutex_unlock(&mdev->state_lock);
 	kfree(tmp);
@@ -797,6 +797,7 @@ void mlx4_en_ptp_overflow_check(struct mlx4_en_dev *mdev);
 #define DEV_FEATURE_CHANGED(dev, new_features, feature) \
	((dev->features & feature) ^ (new_features & feature))
 
+int mlx4_en_moderation_update(struct mlx4_en_priv *priv);
 int mlx4_en_reset_config(struct net_device *dev,
			 struct hwtstamp_config ts_config,
			 netdev_features_t new_features);

@@ -610,6 +610,8 @@ static struct sh_eth_cpu_data r7s72100_data = {
			  EESR_TDE,
 	.fdr_value = 0x0000070f,
 
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
+
 	.no_psr = 1,
 	.apr = 1,
 	.mpr = 1,
@@ -828,6 +830,8 @@ static struct sh_eth_cpu_data r7s9210_data = {
 
 	.fdr_value = 0x0000070f,
 
+	.trscer_err_mask = DESC_I_RINT8 | DESC_I_RINT5,
+
 	.apr = 1,
 	.mpr = 1,
 	.tpauser = 1,
@@ -1131,6 +1135,9 @@ static struct sh_eth_cpu_data sh771x_data = {
			  EESIPR_CEEFIP | EESIPR_CELFIP |
			  EESIPR_RRFIP | EESIPR_RTLFIP | EESIPR_RTSFIP |
			  EESIPR_PREIP | EESIPR_CERFIP,
 
+	.trscer_err_mask = DESC_I_RINT8,
+
 	.tsu = 1,
 	.dual_port = 1,
 };

@@ -116,6 +116,23 @@ static void dwmac4_dma_init_channel(void __iomem *ioaddr,
	       ioaddr + DMA_CHAN_INTR_ENA(chan));
 }
 
+static void dwmac410_dma_init_channel(void __iomem *ioaddr,
+				      struct stmmac_dma_cfg *dma_cfg, u32 chan)
+{
+	u32 value;
+
+	/* common channel control register config */
+	value = readl(ioaddr + DMA_CHAN_CONTROL(chan));
+	if (dma_cfg->pblx8)
+		value = value | DMA_BUS_MODE_PBL;
+
+	writel(value, ioaddr + DMA_CHAN_CONTROL(chan));
+
+	/* Mask interrupts by writing to CSR7 */
+	writel(DMA_CHAN_INTR_DEFAULT_MASK_4_10,
+	       ioaddr + DMA_CHAN_INTR_ENA(chan));
+}
+
 static void dwmac4_dma_init(void __iomem *ioaddr,
			    struct stmmac_dma_cfg *dma_cfg, int atds)
 {
@@ -462,7 +479,7 @@ const struct stmmac_dma_ops dwmac4_dma_ops = {
 const struct stmmac_dma_ops dwmac410_dma_ops = {
 	.reset = dwmac4_dma_reset,
 	.init = dwmac4_dma_init,
-	.init_chan = dwmac4_dma_init_channel,
+	.init_chan = dwmac410_dma_init_channel,
 	.init_rx_chan = dwmac4_dma_init_rx_chan,
 	.init_tx_chan = dwmac4_dma_init_tx_chan,
 	.axi = dwmac4_dma_axi,
@@ -60,10 +60,6 @@ void dwmac4_dma_stop_tx(void __iomem *ioaddr, u32 chan)
 
 	value &= ~DMA_CONTROL_ST;
 	writel(value, ioaddr + DMA_CHAN_TX_CONTROL(chan));
-
-	value = readl(ioaddr + GMAC_CONFIG);
-	value &= ~GMAC_CONFIG_TE;
-	writel(value, ioaddr + GMAC_CONFIG);
 }
 
 void dwmac4_dma_start_rx(void __iomem *ioaddr, u32 chan)

@@ -4821,6 +4821,8 @@ static void stmmac_reset_queues_param(struct stmmac_priv *priv)
 		tx_q->cur_tx = 0;
 		tx_q->dirty_tx = 0;
 		tx_q->mss = 0;
+
+		netdev_tx_reset_queue(netdev_get_tx_queue(priv->dev, queue));
 	}
 }

@@ -292,6 +292,7 @@ nsim_create(struct nsim_dev *nsim_dev, struct nsim_dev_port *nsim_dev_port)
 
 	ns = netdev_priv(dev);
 	ns->netdev = dev;
+	u64_stats_init(&ns->syncp);
 	ns->nsim_dev = nsim_dev;
 	ns->nsim_dev_port = nsim_dev_port;
 	ns->nsim_bus_dev = nsim_dev->nsim_bus_dev;

@@ -345,15 +345,16 @@ int phy_ethtool_ksettings_set(struct phy_device *phydev,
 
 	phydev->autoneg = autoneg;
 
-	phydev->speed = speed;
+	if (autoneg == AUTONEG_DISABLE) {
+		phydev->speed = speed;
+		phydev->duplex = duplex;
+	}
 
 	linkmode_copy(phydev->advertising, advertising);
 
 	linkmode_mod_bit(ETHTOOL_LINK_MODE_Autoneg_BIT,
			 phydev->advertising, autoneg == AUTONEG_ENABLE);
 
-	phydev->duplex = duplex;
-
 	phydev->mdix_ctrl = cmd->base.eth_tp_mdix_ctrl;
 
 	/* Restart the PHY */

@@ -441,13 +441,6 @@ static ssize_t add_mux_store(struct device *d, struct device_attribute *attr, c
 		goto err;
 	}
 
-	/* we don't want to modify a running netdev */
-	if (netif_running(dev->net)) {
-		netdev_err(dev->net, "Cannot change a running device\n");
-		ret = -EBUSY;
-		goto err;
-	}
-
 	ret = qmimux_register_device(dev->net, mux_id);
 	if (!ret) {
 		info->flags |= QMI_WWAN_FLAG_MUX;
@@ -477,13 +470,6 @@ static ssize_t del_mux_store(struct device *d, struct device_attribute *attr, c
 	if (!rtnl_trylock())
 		return restart_syscall();
 
-	/* we don't want to modify a running netdev */
-	if (netif_running(dev->net)) {
-		netdev_err(dev->net, "Cannot change a running device\n");
-		ret = -EBUSY;
-		goto err;
-	}
-
 	del_dev = qmimux_find_dev(dev, mux_id);
 	if (!del_dev) {
 		netdev_err(dev->net, "mux_id not present\n");

@@ -283,7 +283,6 @@ static int lapbeth_open(struct net_device *dev)
 		return -ENODEV;
 	}
 
-	netif_start_queue(dev);
 	return 0;
 }
 
@@ -291,8 +290,6 @@ static int lapbeth_close(struct net_device *dev)
 {
 	int err;
 
-	netif_stop_queue(dev);
-
 	if ((err = lapb_unregister(dev)) != LAPB_OK)
		pr_err("lapb_unregister error: %d\n", err);
 

@@ -177,7 +177,8 @@ struct ath_frame_info {
 	s8 txq;
 	u8 keyix;
 	u8 rtscts_rate;
-	u8 retries : 7;
+	u8 retries : 6;
+	u8 dyn_smps : 1;
 	u8 baw_tracked : 1;
 	u8 tx_power;
 	enum ath9k_key_type keytype:2;
@@ -1271,6 +1271,11 @@ static void ath_buf_set_rate(struct ath_softc *sc, struct ath_buf *bf,
					 is_40, is_sgi, is_sp);
 		if (rix < 8 && (tx_info->flags & IEEE80211_TX_CTL_STBC))
			info->rates[i].RateFlags |= ATH9K_RATESERIES_STBC;
+		if (rix >= 8 && fi->dyn_smps) {
+			info->rates[i].RateFlags |=
+				ATH9K_RATESERIES_RTS_CTS;
+			info->flags |= ATH9K_TXDESC_CTSENA;
+		}
 
 		info->txpower[i] = ath_get_rate_txpower(sc, bf, rix,
							is_40, false);
@@ -2111,6 +2116,7 @@ static void setup_frame_info(struct ieee80211_hw *hw,
		fi->keyix = an->ps_key;
 	else
		fi->keyix = ATH9K_TXKEYIX_INVALID;
+	fi->dyn_smps = sta && sta->smps_mode == IEEE80211_SMPS_DYNAMIC;
 	fi->keytype = keytype;
 	fi->framelen = framelen;
 	fi->tx_power = txpower;

@@ -454,13 +454,13 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 {
 	struct sk_buff *skb = q->rx_head;
 	struct skb_shared_info *shinfo = skb_shinfo(skb);
+	int nr_frags = shinfo->nr_frags;
 
-	if (shinfo->nr_frags < ARRAY_SIZE(shinfo->frags)) {
+	if (nr_frags < ARRAY_SIZE(shinfo->frags)) {
 		struct page *page = virt_to_head_page(data);
 		int offset = data - page_address(page) + q->buf_offset;
 
-		skb_add_rx_frag(skb, shinfo->nr_frags, page, offset, len,
-				q->buf_size);
+		skb_add_rx_frag(skb, nr_frags, page, offset, len, q->buf_size);
 	} else {
 		skb_free_frag(data);
 	}
@@ -469,7 +469,10 @@ mt76_add_fragment(struct mt76_dev *dev, struct mt76_queue *q, void *data,
 		return;
 
 	q->rx_head = NULL;
-	dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+	if (nr_frags < ARRAY_SIZE(shinfo->frags))
+		dev->drv->rx_skb(dev, q - dev->q_rx, skb);
+	else
+		dev_kfree_skb(skb);
 }
 
 static int

@@ -455,7 +455,6 @@ static void nvme_free_ns_head(struct kref *ref)
 
 	nvme_mpath_remove_disk(head);
 	ida_simple_remove(&head->subsys->ns_ida, head->instance);
-	list_del_init(&head->entry);
 	cleanup_srcu_struct(&head->srcu);
 	nvme_put_subsystem(head->subsys);
 	kfree(head);
@@ -3374,7 +3373,6 @@ static int __nvme_check_ids(struct nvme_subsystem *subsys,
 
 	list_for_each_entry(h, &subsys->nsheads, entry) {
 		if (nvme_ns_ids_valid(&new->ids) &&
-		    !list_empty(&h->list) &&
		    nvme_ns_ids_equal(&new->ids, &h->ids))
			return -EINVAL;
 	}
@@ -3469,6 +3467,7 @@ static int nvme_init_ns_head(struct nvme_ns *ns, unsigned nsid,
				"IDs don't match for shared namespace %d\n",
					nsid);
			ret = -EINVAL;
+			nvme_put_ns_head(head);
			goto out_unlock;
 		}
 	}
@@ -3629,6 +3628,8 @@ static int nvme_alloc_ns(struct nvme_ctrl *ctrl, unsigned nsid)
 out_unlink_ns:
 	mutex_lock(&ctrl->subsys->lock);
 	list_del_rcu(&ns->siblings);
+	if (list_empty(&ns->head->list))
+		list_del_init(&ns->head->entry);
 	mutex_unlock(&ctrl->subsys->lock);
 	nvme_put_ns_head(ns->head);
 out_free_id:
@@ -3651,7 +3652,10 @@ static void nvme_ns_remove(struct nvme_ns *ns)
 
 	mutex_lock(&ns->ctrl->subsys->lock);
 	list_del_rcu(&ns->siblings);
+	if (list_empty(&ns->head->list))
+		list_del_init(&ns->head->entry);
 	mutex_unlock(&ns->ctrl->subsys->lock);
+
 	synchronize_rcu(); /* guarantee not available in head->list */
 	nvme_mpath_clear_current_path(ns);
 	synchronize_srcu(&ns->head->srcu); /* wait for concurrent submissions */

@@ -384,13 +384,9 @@ static int xgene_msi_hwirq_alloc(unsigned int cpu)
 		if (!msi_group->gic_irq)
			continue;
 
-		irq_set_chained_handler(msi_group->gic_irq,
-					xgene_msi_isr);
-		err = irq_set_handler_data(msi_group->gic_irq, msi_group);
-		if (err) {
-			pr_err("failed to register GIC IRQ handler\n");
-			return -EINVAL;
-		}
+		irq_set_chained_handler_and_data(msi_group->gic_irq,
+			xgene_msi_isr, msi_group);
 
 		/*
 		 * Statically allocate MSI GIC IRQs to each CPU core.
 		 * With 8-core X-Gene v1, 2 MSI GIC IRQs are allocated

@@ -1063,14 +1063,14 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 		err = of_pci_get_devfn(child);
 		if (err < 0) {
			dev_err(dev, "failed to parse devfn: %d\n", err);
-			return err;
+			goto error_put_node;
 		}
 
 		slot = PCI_SLOT(err);
 
 		err = mtk_pcie_parse_port(pcie, child, slot);
 		if (err)
-			return err;
+			goto error_put_node;
 	}
 
 	err = mtk_pcie_subsys_powerup(pcie);
@@ -1086,6 +1086,9 @@ static int mtk_pcie_setup(struct mtk_pcie *pcie)
 	mtk_pcie_subsys_powerdown(pcie);
 
 	return 0;
+error_put_node:
+	of_node_put(child);
+	return err;
 }
 
 static int mtk_pcie_probe(struct platform_device *pdev)

@@ -3904,6 +3904,10 @@ int pci_register_io_range(struct fwnode_handle *fwnode, phys_addr_t addr,
 	ret = logic_pio_register_range(range);
 	if (ret)
		kfree(range);
+
+	/* Ignore duplicates due to deferred probing */
+	if (ret == -EEXIST)
+		ret = 0;
 #endif
 
 	return ret;

@@ -426,11 +426,8 @@ static int olpc_ec_probe(struct platform_device *pdev)
 
 	/* get the EC revision */
 	err = olpc_ec_cmd(EC_FIRMWARE_REV, NULL, 0, &ec->version, 1);
-	if (err) {
-		ec_priv = NULL;
-		kfree(ec);
-		return err;
-	}
+	if (err)
+		goto error;
 
 	config.dev = pdev->dev.parent;
 	config.driver_data = ec;
@@ -440,12 +437,16 @@ static int olpc_ec_probe(struct platform_device *pdev)
 	if (IS_ERR(ec->dcon_rdev)) {
		dev_err(&pdev->dev, "failed to register DCON regulator\n");
		err = PTR_ERR(ec->dcon_rdev);
-		kfree(ec);
-		return err;
+		goto error;
 	}
 
 	ec->dbgfs_dir = olpc_ec_setup_debugfs();
 
 	return 0;
+
+error:
+	ec_priv = NULL;
+	kfree(ec);
+	return err;
 }
 
@@ -3087,7 +3087,8 @@ static blk_status_t do_dasd_request(struct blk_mq_hw_ctx *hctx,
 
 	basedev = block->base;
 	spin_lock_irq(&dq->lock);
-	if (basedev->state < DASD_STATE_READY) {
+	if (basedev->state < DASD_STATE_READY ||
+	    test_bit(DASD_FLAG_OFFLINE, &basedev->flags)) {
		DBF_DEV_EVENT(DBF_ERR, basedev,
			      "device not ready for request %p", req);
		rc = BLK_STS_IOERR;
@@ -3522,8 +3523,6 @@ void dasd_generic_remove(struct ccw_device *cdev)
 	struct dasd_device *device;
 	struct dasd_block *block;
 
-	cdev->handler = NULL;
-
 	device = dasd_device_from_cdev(cdev);
 	if (IS_ERR(device)) {
		dasd_remove_sysfs_files(cdev);
@@ -3542,6 +3541,7 @@ void dasd_generic_remove(struct ccw_device *cdev)
	 * no quite down yet.
	 */
 	dasd_set_target_state(device, DASD_STATE_NEW);
+	cdev->handler = NULL;
 	/* dasd_delete_device destroys the device reference. */
 	block = device->block;
 	dasd_delete_device(device);

@@ -506,7 +506,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
		if (ret)
			return ret;
 
-		return copy_to_user((void __user *)arg, &info, minsz);
+		return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
 	}
 	case VFIO_DEVICE_GET_REGION_INFO:
 	{
@@ -524,7 +524,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
		if (ret)
			return ret;
 
-		return copy_to_user((void __user *)arg, &info, minsz);
+		return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
 	}
 	case VFIO_DEVICE_GET_IRQ_INFO:
 	{
@@ -545,7 +545,7 @@ static ssize_t vfio_ccw_mdev_ioctl(struct mdev_device *mdev,
		if (info.count == -1)
			return -EINVAL;
 
-		return copy_to_user((void __user *)arg, &info, minsz);
+		return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
 	}
 	case VFIO_DEVICE_SET_IRQS:
 	{
@@ -1279,7 +1279,7 @@ static int vfio_ap_mdev_get_device_info(unsigned long arg)
 	info.num_regions = 0;
 	info.num_irqs = 0;
 
-	return copy_to_user((void __user *)arg, &info, minsz);
+	return copy_to_user((void __user *)arg, &info, minsz) ? -EFAULT : 0;
 }
 
 static ssize_t vfio_ap_mdev_ioctl(struct mdev_device *mdev,

@@ -1532,14 +1532,9 @@ check_mgmt:
 		}
 		rc = iscsi_prep_scsi_cmd_pdu(conn->task);
 		if (rc) {
-			if (rc == -ENOMEM || rc == -EACCES) {
-				spin_lock_bh(&conn->taskqueuelock);
-				list_add_tail(&conn->task->running,
-					      &conn->cmdqueue);
-				conn->task = NULL;
-				spin_unlock_bh(&conn->taskqueuelock);
-				goto done;
-			} else
+			if (rc == -ENOMEM || rc == -EACCES)
+				fail_scsi_task(conn->task, DID_IMM_RETRY);
+			else
				fail_scsi_task(conn->task, DID_ABORT);
			spin_lock_bh(&conn->taskqueuelock);
			continue;

@@ -924,8 +924,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
 		mask |= STM32H7_SPI_SR_RXP;
 
 	if (!(sr & mask)) {
-		dev_dbg(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
-			sr, ier);
+		dev_warn(spi->dev, "spurious IT (sr=0x%08x, ier=0x%08x)\n",
+			 sr, ier);
 		spin_unlock_irqrestore(&spi->lock, flags);
 		return IRQ_NONE;
 	}
@@ -952,15 +952,8 @@ static irqreturn_t stm32h7_spi_irq_thread(int irq, void *dev_id)
 	}
 
 	if (sr & STM32H7_SPI_SR_OVR) {
-		dev_warn(spi->dev, "Overrun: received value discarded\n");
-		if (!spi->cur_usedma && (spi->rx_buf && (spi->rx_len > 0)))
-			stm32h7_spi_read_rxfifo(spi, false);
-		/*
-		 * If overrun is detected while using DMA, it means that
-		 * something went wrong, so stop the current transfer
-		 */
-		if (spi->cur_usedma)
-			end = true;
+		dev_err(spi->dev, "Overrun: RX data lost\n");
+		end = true;
 	}
 
 	if (sr & STM32H7_SPI_SR_EOT) {

@@ -260,6 +260,7 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
 	struct apci1032_private *devpriv = dev->private;
 	struct comedi_subdevice *s = dev->read_subdev;
 	unsigned int ctrl;
+	unsigned short val;
 
 	/* check interrupt is from this device */
 	if ((inl(devpriv->amcc_iobase + AMCC_OP_REG_INTCSR) &
@@ -275,7 +276,8 @@ static irqreturn_t apci1032_interrupt(int irq, void *d)
 	outl(ctrl & ~APCI1032_CTRL_INT_ENA, dev->iobase + APCI1032_CTRL_REG);
 
 	s->state = inl(dev->iobase + APCI1032_STATUS_REG) & 0xffff;
-	comedi_buf_write_samples(s, &s->state, 1);
+	val = s->state;
+	comedi_buf_write_samples(s, &val, 1);
 	comedi_handle_events(dev, s);
 
 	/* enable the interrupt */

@@ -208,7 +208,7 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
 	struct comedi_device *dev = d;
 	struct apci1500_private *devpriv = dev->private;
 	struct comedi_subdevice *s = dev->read_subdev;
-	unsigned int status = 0;
+	unsigned short status = 0;
 	unsigned int val;
 
 	val = inl(devpriv->amcc + AMCC_OP_REG_INTCSR);
@@ -238,14 +238,14 @@ static irqreturn_t apci1500_interrupt(int irq, void *d)
	 *
	 *    Mask     Meaning
	 * ---------- ------------------------------------------
-	 * 0x00000001 Event 1 has occurred
-	 * 0x00000010 Event 2 has occurred
-	 * 0x00000100 Counter/timer 1 has run down (not implemented)
-	 * 0x00001000 Counter/timer 2 has run down (not implemented)
-	 * 0x00010000 Counter 3 has run down (not implemented)
-	 * 0x00100000 Watchdog has run down (not implemented)
-	 * 0x01000000 Voltage error
-	 * 0x10000000 Short-circuit error
+	 * 0b00000001 Event 1 has occurred
+	 * 0b00000010 Event 2 has occurred
+	 * 0b00000100 Counter/timer 1 has run down (not implemented)
+	 * 0b00001000 Counter/timer 2 has run down (not implemented)
+	 * 0b00010000 Counter 3 has run down (not implemented)
+	 * 0b00100000 Watchdog has run down (not implemented)
+	 * 0b01000000 Voltage error
+	 * 0b10000000 Short-circuit error
	 */
 	comedi_buf_write_samples(s, &status, 1);
 	comedi_handle_events(dev, s);

@@ -300,11 +300,11 @@ static int pci1710_ai_eoc(struct comedi_device *dev,
 static int pci1710_ai_read_sample(struct comedi_device *dev,
				  struct comedi_subdevice *s,
				  unsigned int cur_chan,
-				  unsigned int *val)
+				  unsigned short *val)
 {
 	const struct boardtype *board = dev->board_ptr;
 	struct pci1710_private *devpriv = dev->private;
-	unsigned int sample;
+	unsigned short sample;
 	unsigned int chan;
 
 	sample = inw(dev->iobase + PCI171X_AD_DATA_REG);
@@ -345,7 +345,7 @@ static int pci1710_ai_insn_read(struct comedi_device *dev,
 	pci1710_ai_setup_chanlist(dev, s, &insn->chanspec, 1, 1);
 
 	for (i = 0; i < insn->n; i++) {
-		unsigned int val;
+		unsigned short val;
 
		/* start conversion */
		outw(0, dev->iobase + PCI171X_SOFTTRG_REG);
@@ -395,7 +395,7 @@ static void pci1710_handle_every_sample(struct comedi_device *dev,
 {
 	struct comedi_cmd *cmd = &s->async->cmd;
 	unsigned int status;
-	unsigned int val;
+	unsigned short val;
 	int ret;
 
 	status = inw(dev->iobase + PCI171X_STATUS_REG);
@@ -455,7 +455,7 @@ static void pci1710_handle_fifo(struct comedi_device *dev,
 	}
 
 	for (i = 0; i < devpriv->max_samples; i++) {
-		unsigned int val;
+		unsigned short val;
		int ret;
 
		ret = pci1710_ai_read_sample(dev, s, s->async->cur_chan, &val);

@@ -186,7 +186,7 @@ static irqreturn_t das6402_interrupt(int irq, void *d)
 	if (status & DAS6402_STATUS_FFULL) {
		async->events |= COMEDI_CB_OVERFLOW;
 	} else if (status & DAS6402_STATUS_FFNE) {
-		unsigned int val;
+		unsigned short val;
 
		val = das6402_ai_read_sample(dev, s);
		comedi_buf_write_samples(s, &val, 1);

@@ -427,7 +427,7 @@ static irqreturn_t das800_interrupt(int irq, void *d)
 	struct comedi_cmd *cmd;
 	unsigned long irq_flags;
 	unsigned int status;
-	unsigned int val;
+	unsigned short val;
 	bool fifo_empty;
 	bool fifo_overflow;
 	int i;

@@ -404,7 +404,7 @@ static irqreturn_t dmm32at_isr(int irq, void *d)
 {
 	struct comedi_device *dev = d;
 	unsigned char intstat;
-	unsigned int val;
+	unsigned short val;
 	int i;
 
 	if (!dev->attached) {

@@ -924,7 +924,7 @@ static irqreturn_t me4000_ai_isr(int irq, void *dev_id)
 	struct comedi_subdevice *s = dev->read_subdev;
 	int i;
 	int c = 0;
-	unsigned int lval;
+	unsigned short lval;
 
 	if (!dev->attached)
		return IRQ_NONE;

@@ -184,7 +184,7 @@ static irqreturn_t pcl711_interrupt(int irq, void *d)
 	struct comedi_device *dev = d;
 	struct comedi_subdevice *s = dev->read_subdev;
 	struct comedi_cmd *cmd = &s->async->cmd;
-	unsigned int data;
+	unsigned short data;
 
 	if (!dev->attached) {
		dev_err(dev->class_dev, "spurious interrupt\n");

@@ -423,7 +423,7 @@ static int pcl818_ai_eoc(struct comedi_device *dev,
 
 static bool pcl818_ai_write_sample(struct comedi_device *dev,
				   struct comedi_subdevice *s,
-				   unsigned int chan, unsigned int val)
+				   unsigned int chan, unsigned short val)
 {
 	struct pcl818_private *devpriv = dev->private;
 	struct comedi_cmd *cmd = &s->async->cmd;

@@ -1120,6 +1120,7 @@ static int ks_wlan_set_scan(struct net_device *dev,
 {
 	struct ks_wlan_private *priv = netdev_priv(dev);
 	struct iw_scan_req *req = NULL;
+	int len;
 
 	if (priv->sleep_mode == SLP_SLEEP)
		return -EPERM;
@@ -1129,8 +1130,9 @@ static int ks_wlan_set_scan(struct net_device *dev,
 	if (wrqu->data.length == sizeof(struct iw_scan_req) &&
	    wrqu->data.flags & IW_SCAN_THIS_ESSID) {
		req = (struct iw_scan_req *)extra;
-		priv->scan_ssid_len = req->essid_len;
-		memcpy(priv->scan_ssid, req->essid, priv->scan_ssid_len);
+		len = min_t(int, req->essid_len, IW_ESSID_MAX_SIZE);
+		priv->scan_ssid_len = len;
+		memcpy(priv->scan_ssid, req->essid, len);
 	} else {
		priv->scan_ssid_len = 0;
 	}

@@ -784,6 +784,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
 	/* SSID */
 	p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _SSID_IE_, &ie_len, (pbss_network->ie_length - _BEACON_IE_OFFSET_));
 	if (p && ie_len > 0) {
+		ie_len = min_t(int, ie_len, sizeof(pbss_network->ssid.ssid));
		memset(&pbss_network->ssid, 0, sizeof(struct ndis_802_11_ssid));
		memcpy(pbss_network->ssid.ssid, (p + 2), ie_len);
		pbss_network->ssid.ssid_length = ie_len;
@@ -802,6 +803,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
 	/* get supported rates */
 	p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _SUPPORTEDRATES_IE_, &ie_len, (pbss_network->ie_length - _BEACON_IE_OFFSET_));
 	if (p) {
+		ie_len = min_t(int, ie_len, NDIS_802_11_LENGTH_RATES_EX);
		memcpy(supportRate, p + 2, ie_len);
		supportRateNum = ie_len;
 	}
@@ -809,6 +811,8 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
 	/* get ext_supported rates */
 	p = rtw_get_ie(ie + _BEACON_IE_OFFSET_, _EXT_SUPPORTEDRATES_IE_, &ie_len, pbss_network->ie_length - _BEACON_IE_OFFSET_);
 	if (p) {
+		ie_len = min_t(int, ie_len,
+			       NDIS_802_11_LENGTH_RATES_EX - supportRateNum);
		memcpy(supportRate + supportRateNum, p + 2, ie_len);
		supportRateNum += ie_len;
 	}
@@ -922,6 +926,7 @@ int rtw_check_beacon_data(struct adapter *padapter, u8 *pbuf, int len)
 
		pht_cap->mcs.rx_mask[0] = 0xff;
		pht_cap->mcs.rx_mask[1] = 0x0;
+		ie_len = min_t(int, ie_len, sizeof(pmlmepriv->htpriv.ht_cap));
		memcpy(&pmlmepriv->htpriv.ht_cap, p + 2, ie_len);
 	}
 
@@ -1160,9 +1160,11 @@ static int rtw_wx_set_scan(struct net_device *dev, struct iw_request_info *a,
				break;
			}
			sec_len = *(pos++); len -= 1;
-			if (sec_len > 0 && sec_len <= len) {
+			if (sec_len > 0 &&
+			    sec_len <= len &&
+			    sec_len <= 32) {
				ssid[ssid_index].ssid_length = sec_len;
-				memcpy(ssid[ssid_index].ssid, pos, ssid[ssid_index].ssid_length);
+				memcpy(ssid[ssid_index].ssid, pos, sec_len);
				ssid_index++;
			}
			pos += sec_len;