This is the 5.4.78 stable release
-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEEZH8oZUiU471FcZm+ONu9yGCSaT4FAl+1Zg0ACgkQONu9yGCS
aT75KBAAqvo33a5xoTM+FQQRsRSKaRNOhCQooXEB1bJcas3y+yQ6ehmwCJ8/K1tC
JilD+NQt6uuwH2f2cLrH0e4EQcvno390qF/wOCF377bUnKklsxydyaLSLhGYTqR9
5u/vZVf/QoWZc6BvDwPWNo/NwuRPgJ+sVjuFvtt08l0pGQou26WGujl6ElJKBiLV
SbbRDlx/f8cJa/oqN8TL/V/VDqJfVLcv6hFRvf44newSUJK05LgCVoM76WEcSQLj
GYrtCNwffJtnCUzUr/SctNymsgmjj65df6tKmS0vntWH5kTBnCKK/Mnly38gQbeB
nvci1siOUjnnrkBhydKixO4Q6OZmrbuM0g3vXmW5/Az7HjRcX84BRu+yE7aArE3/
GMAIO/D1Wj9Dhxs59cu12IWxRaljkT+5FsZYV55TgcRMmWHq/YzBYFSW15fZ9xEw
ehel9m5ou+HqVtz+bR+ar3v6M2bhedJ0fFvXnbN2OhMwHsEUTuYqfTb7k/21dUwE
P5k8qGGcYKE1q1gb/Dp3p/hDBjr5h4Mg7z7S8diGsVv3klgrtttgqkOo79JfTESz
BS5vsF9yS0k23xemCl3jZ41X9uReXnE3lvEeuDBDdYvHPwnjyzPeUN5jgN6abQm7
CTxp0oPIFW+O8MV+vgF1joK6ykbK8rJRjIUcfzHeI6oKt+HQBJY=
=gimO
-----END PGP SIGNATURE-----

Merge 5.4.78 into android11-5.4-lts

Changes in 5.4.78
	drm/i915/gem: Flush coherency domains on first set-domain-ioctl
	time: Prevent undefined behaviour in timespec64_to_ns()
	nbd: don't update block size after device is started
	KVM: arm64: Force PTE mapping on fault resulting in a device mapping
	PCI: qcom: Make sure PCIe is reset before init for rev 2.1.0
	usb: dwc3: gadget: Continue to process pending requests
	usb: dwc3: gadget: Reclaim extra TRBs after request completion
	btrfs: tracepoints: output proper root owner for trace_find_free_extent()
	btrfs: sysfs: init devices outside of the chunk_mutex
	btrfs: reschedule when cloning lots of extents
	ASoC: Intel: kbl_rt5663_max98927: Fix kabylake_ssp_fixup function
	genirq: Let GENERIC_IRQ_IPI select IRQ_DOMAIN_HIERARCHY
	hv_balloon: disable warning when floor reached
	net: xfrm: fix a race condition during allocing spi
	ASoC: codecs: wcd9335: Set digital gain range correctly
	xfs: set xefi_discard when creating a deferred agfl free log intent item
	netfilter: use actual socket sk rather than skb sk when routing harder
	netfilter: nf_tables: missing validation from the abort path
	netfilter: ipset: Update byte and packet counters regardless of whether they match
	powerpc/eeh_cache: Fix a possible debugfs deadlock
	perf trace: Fix segfault when trying to trace events by cgroup
	perf tools: Add missing swap for ino_generation
	ALSA: hda: prevent undefined shift in snd_hdac_ext_bus_get_link()
	iommu/vt-d: Fix a bug for PDP check in prq_event_thread
	afs: Fix warning due to unadvanced marshalling pointer
	can: rx-offload: don't call kfree_skb() from IRQ context
	can: dev: can_get_echo_skb(): prevent call to kfree_skb() in hard IRQ context
	can: dev: __can_get_echo_skb(): fix real payload length return value for RTR frames
	can: can_create_echo_skb(): fix echo skb generation: always use skb_clone()
	can: j1939: swap addr and pgn in the send example
	can: j1939: j1939_sk_bind(): return failure if netdev is down
	can: ti_hecc: ti_hecc_probe(): add missed clk_disable_unprepare() in error path
	can: xilinx_can: handle failure cases of pm_runtime_get_sync
	can: peak_usb: add range checking in decode operations
	can: peak_usb: peak_usb_get_ts_time(): fix timestamp wrapping
	can: peak_canfd: pucan_handle_can_rx(): fix echo management when loopback is on
	can: flexcan: remove FLEXCAN_QUIRK_DISABLE_MECR quirk for LS1021A
	can: flexcan: flexcan_remove(): disable wakeup completely
	xfs: flush new eof page on truncate to avoid post-eof corruption
	xfs: fix scrub flagging rtinherit even if there is no rt device
	tpm: efi: Don't create binary_bios_measurements file for an empty log
	random32: make prandom_u32() output unpredictable
	KVM: arm64: ARM_SMCCC_ARCH_WORKAROUND_1 doesn't return SMCCC_RET_NOT_REQUIRED
	KVM: x86: don't expose MSR_IA32_UMWAIT_CONTROL unconditionally
	ath9k_htc: Use appropriate rs_datalen type
	ASoC: qcom: sdm845: set driver name correctly
	ASoC: cs42l51: manage mclk shutdown delay
	usb: dwc3: pci: add support for the Intel Alder Lake-S
	opp: Reduce the size of critical section in _opp_table_kref_release()
	usb: gadget: goku_udc: fix potential crashes in probe
	selftests/ftrace: check for do_sys_openat2 in user-memory test
	selftests: pidfd: fix compilation errors due to wait.h
	ALSA: hda: Separate runtime and system suspend
	ALSA: hda: Reinstate runtime_allow() for all hda controllers
	gfs2: Free rd_bits later in gfs2_clear_rgrpd to fix use-after-free
	gfs2: Add missing truncate_inode_pages_final for sd_aspace
	gfs2: check for live vs. read-only file system in gfs2_fitrim
	scsi: hpsa: Fix memory leak in hpsa_init_one()
	drm/amdgpu: perform srbm soft reset always on SDMA resume
	drm/amd/pm: perform SMC reset on suspend/hibernation
	drm/amd/pm: do not use ixFEATURE_STATUS for checking smc running
	mac80211: fix use of skb payload instead of header
	cfg80211: initialize wdev data earlier
	cfg80211: regulatory: Fix inconsistent format argument
	tracing: Fix the checking of stackidx in __ftrace_trace_stack
	scsi: scsi_dh_alua: Avoid crash during alua_bus_detach()
	scsi: mpt3sas: Fix timeouts observed while reenabling IRQ
	nvme: introduce nvme_sync_io_queues
	nvme-rdma: avoid race between time out and tear down
	nvme-tcp: avoid race between time out and tear down
	nvme-rdma: avoid repeated request completion
	nvme-tcp: avoid repeated request completion
	iommu/amd: Increase interrupt remapping table limit to 512 entries
	s390/smp: move rcu_cpu_starting() earlier
	vfio: platform: fix reference leak in vfio_platform_open
	vfio/pci: Bypass IGD init in case of -ENODEV
	i2c: mediatek: move dma reset before i2c reset
	amd/amdgpu: Disable VCN DPG mode for Picasso
	selftests: proc: fix warning: _GNU_SOURCE redefined
	riscv: Set text_offset correctly for M-Mode
	i2c: sh_mobile: implement atomic transfers
	tpm_tis: Disable interrupts on ThinkPad T490s
	spi: bcm2835: remove use of uninitialized gpio flags variable
	tick/common: Touch watchdog in tick_unfreeze() on all CPUs
	mfd: sprd: Add wakeup capability for PMIC IRQ
	pinctrl: intel: Set default bias in case no particular value given
	ARM: 9019/1: kprobes: Avoid fortify_panic() when copying optprobe template
	bpf: Don't rely on GCC __attribute__((optimize)) to disable GCSE
	pinctrl: aspeed: Fix GPI only function problem.
	net/mlx5: Fix deletion of duplicate rules
	SUNRPC: Fix general protection fault in trace_rpc_xdr_overflow()
	bpf: Zero-fill re-used per-cpu map element
	nbd: fix a block_device refcount leak in nbd_release
	igc: Fix returning wrong statistics
	xfs: fix flags argument to rmap lookup when converting shared file rmaps
	xfs: set the unwritten bit in rmap lookup flags in xchk_bmap_get_rmapextents
	xfs: fix rmap key and record comparison functions
	xfs: fix brainos in the refcount scrubber's rmap fragment processor
	lan743x: fix "BUG: invalid wait context" when setting rx mode
	xfs: fix a missing unlock on error in xfs_fs_map_blocks
	of/address: Fix of_node memory leak in of_dma_is_coherent
	cosa: Add missing kfree in error path of cosa_write
	vrf: Fix fast path output packet handling with async Netfilter rules
	perf: Fix get_recursion_context()
	erofs: derive atime instead of leaving it empty
	ext4: correctly report "not supported" for {usr,grp}jquota when !CONFIG_QUOTA
	ext4: unlock xattr_sem properly in ext4_inline_data_truncate()
	btrfs: ref-verify: fix memory leak in btrfs_ref_tree_mod
	btrfs: fix min reserved size calculation in merge_reloc_root
	btrfs: dev-replace: fail mount if we don't have replace item with target device
	KVM: arm64: Don't hide ID registers from userspace
	thunderbolt: Fix memory leak if ida_simple_get() fails in enumerate_services()
	thunderbolt: Add the missed ida_simple_remove() in ring_request_msix()
	uio: Fix use-after-free in uio_unregister_device()
	usb: cdc-acm: Add DISABLE_ECHO for Renesas USB Download mode
	xhci: hisilicon: fix refercence leak in xhci_histb_probe
	virtio: virtio_console: fix DMA memory allocation for rproc serial
	mei: protect mei_cl_mtu from null dereference
	futex: Don't enable IRQs unconditionally in put_pi_state()
	jbd2: fix up sparse warnings in checkpoint code
	mm/slub: fix panic in slab_alloc_node()
	Revert "kernel/reboot.c: convert simple_strtoul to kstrtoint"
	reboot: fix overflow parsing reboot cpu number
	ocfs2: initialize ip_next_orphan
	btrfs: fix potential overflow in cluster_pages_for_defrag on 32bit arch
	selinux: Fix error return code in sel_ib_pkey_sid_slow()
	gpio: pcie-idio-24: Fix irq mask when masking
	gpio: pcie-idio-24: Fix IRQ Enable Register value
	gpio: pcie-idio-24: Enable PEX8311 interrupts
	mmc: sdhci-of-esdhc: Handle pulse width detection erratum for more SoCs
	mmc: renesas_sdhi_core: Add missing tmio_mmc_host_free() at remove
	don't dump the threads that had been already exiting when zapped.
	drm/gma500: Fix out-of-bounds access to struct drm_device.vblank[]
	pinctrl: amd: use higher precision for 512 RtcClk
	pinctrl: amd: fix incorrect way to disable debounce filter
	swiotlb: fix "x86: Don't panic if can not alloc buffer for swiotlb"
	IPv6: Set SIT tunnel hard_header_len to zero
	net/af_iucv: fix null pointer dereference on shutdown
	net: udp: fix UDP header access on Fast/frag0 UDP GRO
	net: Update window_clamp if SOCK_RCVBUF is set
	net/x25: Fix null-ptr-deref in x25_connect
	tipc: fix memory leak in tipc_topsrv_start()
	r8169: fix potential skb double free in an error path
	drm/i915: Correctly set SFC capability for video engines
	powerpc/603: Always fault when _PAGE_ACCESSED is not set
	x86/speculation: Allow IBPB to be conditionally enabled on CPUs with always-on STIBP
	perf scripting python: Avoid declaring function pointers with a visibility attribute
	perf/core: Fix race in the perf_mmap_close() function
	net: sch_generic: fix the missing new qdisc assignment bug
	Convert trailing spaces and periods in path components
	Linux 5.4.78

Signed-off-by: Greg Kroah-Hartman <gregkh@google.com>
Change-Id: Iac77690a370f99dc3518ab5bd4660fc31d0832c0
commit 118da4b0e4
@@ -414,8 +414,8 @@ Send:
 		.can_family = AF_CAN,
 		.can_addr.j1939 = {
 			.name = J1939_NO_NAME;
-			.pgn = 0x30,
-			.addr = 0x12300,
+			.addr = 0x30,
+			.pgn = 0x12300,
 		},
 	};
 
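The two fields above were simply swapped in the documentation's send example: .addr carries the 8-bit device address and .pgn the Parameter Group Number. For reference, a minimal sketch of the corrected initializer in user code (surrounding socket setup assumed; the stray ';' after J1939_NO_NAME is in the original document):

	struct sockaddr_can saddr = {
		.can_family = AF_CAN,
		.can_addr.j1939 = {
			.name = J1939_NO_NAME,
			.addr = 0x30,		/* 8-bit source address */
			.pgn = 0x12300,		/* Parameter Group Number */
		},
	};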
--- a/Makefile
+++ b/Makefile
@@ -1,7 +1,7 @@
 # SPDX-License-Identifier: GPL-2.0
 VERSION = 5
 PATCHLEVEL = 4
-SUBLEVEL = 77
+SUBLEVEL = 78
 EXTRAVERSION =
 NAME = Kleptomaniac Octopus
 
@@ -44,20 +44,20 @@ int kprobe_exceptions_notify(struct notifier_block *self,
 			     unsigned long val, void *data);
 
 /* optinsn template addresses */
-extern __visible kprobe_opcode_t optprobe_template_entry;
-extern __visible kprobe_opcode_t optprobe_template_val;
-extern __visible kprobe_opcode_t optprobe_template_call;
-extern __visible kprobe_opcode_t optprobe_template_end;
-extern __visible kprobe_opcode_t optprobe_template_sub_sp;
-extern __visible kprobe_opcode_t optprobe_template_add_sp;
-extern __visible kprobe_opcode_t optprobe_template_restore_begin;
-extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn;
-extern __visible kprobe_opcode_t optprobe_template_restore_end;
+extern __visible kprobe_opcode_t optprobe_template_entry[];
+extern __visible kprobe_opcode_t optprobe_template_val[];
+extern __visible kprobe_opcode_t optprobe_template_call[];
+extern __visible kprobe_opcode_t optprobe_template_end[];
+extern __visible kprobe_opcode_t optprobe_template_sub_sp[];
+extern __visible kprobe_opcode_t optprobe_template_add_sp[];
+extern __visible kprobe_opcode_t optprobe_template_restore_begin[];
+extern __visible kprobe_opcode_t optprobe_template_restore_orig_insn[];
+extern __visible kprobe_opcode_t optprobe_template_restore_end[];
 
 #define MAX_OPTIMIZED_LENGTH	4
 #define MAX_OPTINSN_SIZE				\
-	((unsigned long)&optprobe_template_end -	\
-	 (unsigned long)&optprobe_template_entry)
+	((unsigned long)optprobe_template_end -	\
+	 (unsigned long)optprobe_template_entry)
 #define RELATIVEJUMP_SIZE	4
 
 struct arch_optimized_insn {
@@ -85,21 +85,21 @@ asm (
 			"optprobe_template_end:\n");
 
 #define TMPL_VAL_IDX \
-	((unsigned long *)&optprobe_template_val - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_val - (unsigned long *)optprobe_template_entry)
 #define TMPL_CALL_IDX \
-	((unsigned long *)&optprobe_template_call - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_call - (unsigned long *)optprobe_template_entry)
 #define TMPL_END_IDX \
-	((unsigned long *)&optprobe_template_end - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_end - (unsigned long *)optprobe_template_entry)
 #define TMPL_ADD_SP \
-	((unsigned long *)&optprobe_template_add_sp - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_add_sp - (unsigned long *)optprobe_template_entry)
 #define TMPL_SUB_SP \
-	((unsigned long *)&optprobe_template_sub_sp - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_sub_sp - (unsigned long *)optprobe_template_entry)
 #define TMPL_RESTORE_BEGIN \
-	((unsigned long *)&optprobe_template_restore_begin - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_restore_begin - (unsigned long *)optprobe_template_entry)
 #define TMPL_RESTORE_ORIGN_INSN \
-	((unsigned long *)&optprobe_template_restore_orig_insn - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_restore_orig_insn - (unsigned long *)optprobe_template_entry)
 #define TMPL_RESTORE_END \
-	((unsigned long *)&optprobe_template_restore_end - (unsigned long *)&optprobe_template_entry)
+	((unsigned long *)optprobe_template_restore_end - (unsigned long *)optprobe_template_entry)
 
 /*
  * ARM can always optimize an instruction when using ARM ISA, except
@@ -234,7 +234,7 @@ int arch_prepare_optimized_kprobe(struct optimized_kprobe *op, struct kprobe *orig)
 	}
 
 	/* Copy arch-dep-instance from template. */
-	memcpy(code, (unsigned long *)&optprobe_template_entry,
+	memcpy(code, (unsigned long *)optprobe_template_entry,
 	       TMPL_END_IDX * sizeof(kprobe_opcode_t));
 
 	/* Adjust buffer according to instruction. */
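Background for the []-type change in these kprobes hunks: with scalar externs, newer GCC (GCSE in particular) may assume distinct objects cannot overlap and miscompile address arithmetic between labels that are really defined in an asm template. Declaring them as incomplete arrays makes the symbol itself denote the address. A hedged sketch of the general pattern, with hypothetical label names:

	extern const unsigned char tmpl_entry[];	/* asm-defined label */
	extern const unsigned char tmpl_end[];		/* asm-defined label */

	static unsigned long tmpl_size(void)
	{
		/* arrays decay to addresses; no '&' needed or wanted */
		return (unsigned long)tmpl_end - (unsigned long)tmpl_entry;
	}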
@@ -1132,16 +1132,6 @@ static unsigned int sve_visibility(const struct kvm_vcpu *vcpu,
 	return REG_HIDDEN_USER | REG_HIDDEN_GUEST;
 }
 
-/* Visibility overrides for SVE-specific ID registers */
-static unsigned int sve_id_visibility(const struct kvm_vcpu *vcpu,
-				      const struct sys_reg_desc *rd)
-{
-	if (vcpu_has_sve(vcpu))
-		return 0;
-
-	return REG_HIDDEN_USER;
-}
-
 /* Generate the emulated ID_AA64ZFR0_EL1 value exposed to the guest */
 static u64 guest_id_aa64zfr0_el1(const struct kvm_vcpu *vcpu)
 {
@@ -1168,9 +1158,6 @@ static int get_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
 {
 	u64 val;
 
-	if (WARN_ON(!vcpu_has_sve(vcpu)))
-		return -ENOENT;
-
 	val = guest_id_aa64zfr0_el1(vcpu);
 	return reg_to_user(uaddr, &val, reg->id);
 }
@@ -1183,9 +1170,6 @@ static int set_id_aa64zfr0_el1(struct kvm_vcpu *vcpu,
 	int err;
 	u64 val;
 
-	if (WARN_ON(!vcpu_has_sve(vcpu)))
-		return -ENOENT;
-
 	err = reg_from_user(&val, uaddr, id);
 	if (err)
 		return err;
@@ -1448,7 +1432,7 @@ static const struct sys_reg_desc sys_reg_descs[] = {
 	ID_SANITISED(ID_AA64PFR1_EL1),
 	ID_UNALLOCATED(4,2),
 	ID_UNALLOCATED(4,3),
-	{ SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, .visibility = sve_id_visibility },
+	{ SYS_DESC(SYS_ID_AA64ZFR0_EL1), access_id_aa64zfr0_el1, .get_user = get_id_aa64zfr0_el1, .set_user = set_id_aa64zfr0_el1, },
 	ID_UNALLOCATED(4,5),
 	ID_UNALLOCATED(4,6),
 	ID_UNALLOCATED(4,7),
@@ -272,8 +272,9 @@ static int eeh_addr_cache_show(struct seq_file *s, void *v)
 {
 	struct pci_io_addr_range *piar;
 	struct rb_node *n;
+	unsigned long flags;
 
-	spin_lock(&pci_io_addr_cache_root.piar_lock);
+	spin_lock_irqsave(&pci_io_addr_cache_root.piar_lock, flags);
 	for (n = rb_first(&pci_io_addr_cache_root.rb_root); n; n = rb_next(n)) {
 		piar = rb_entry(n, struct pci_io_addr_range, rb_node);
 
@@ -281,7 +282,7 @@ static int eeh_addr_cache_show(struct seq_file *s, void *v)
 			   (piar->flags & IORESOURCE_IO) ? "i/o" : "mem",
 			   &piar->addr_lo, &piar->addr_hi, pci_name(piar->pcidev));
 	}
-	spin_unlock(&pci_io_addr_cache_root.piar_lock);
+	spin_unlock_irqrestore(&pci_io_addr_cache_root.piar_lock, flags);
 
 	return 0;
 }
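The locking rule behind this fix: a spinlock that is also taken from interrupt context must always be acquired with interrupts disabled, or an interrupt arriving on the same CPU can spin forever on the lock the interrupted code holds. A minimal sketch of the pattern, with a hypothetical lock:

	static DEFINE_SPINLOCK(demo_lock);		/* hypothetical */

	static void demo_reader(void)
	{
		unsigned long flags;

		spin_lock_irqsave(&demo_lock, flags);
		/* ... walk the shared structure ... */
		spin_unlock_irqrestore(&demo_lock, flags);
	}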
@@ -418,11 +418,7 @@ InstructionTLBMiss:
 	cmplw	0,r1,r3
 #endif
 	mfspr	r2, SPRN_SPRG_PGDIR
-#ifdef CONFIG_SWAP
 	li	r1,_PAGE_PRESENT | _PAGE_ACCESSED | _PAGE_EXEC
-#else
-	li	r1,_PAGE_PRESENT | _PAGE_EXEC
-#endif
 #if defined(CONFIG_MODULES) || defined(CONFIG_DEBUG_PAGEALLOC)
 	bge-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
@@ -484,11 +480,7 @@ DataLoadTLBMiss:
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_PGDIR
-#ifdef CONFIG_SWAP
 	li	r1, _PAGE_PRESENT | _PAGE_ACCESSED
-#else
-	li	r1, _PAGE_PRESENT
-#endif
 	bge-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
@@ -564,11 +556,7 @@ DataStoreTLBMiss:
 	lis	r1,PAGE_OFFSET@h		/* check if kernel address */
 	cmplw	0,r1,r3
 	mfspr	r2, SPRN_SPRG_PGDIR
-#ifdef CONFIG_SWAP
 	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT | _PAGE_ACCESSED
-#else
-	li	r1, _PAGE_RW | _PAGE_DIRTY | _PAGE_PRESENT
-#endif
 	bge-	112f
 	lis	r2, (swapper_pg_dir - PAGE_OFFSET)@ha	/* if kernel address, use */
 	addi	r2, r2, (swapper_pg_dir - PAGE_OFFSET)@l	/* kernel page table */
@@ -26,12 +26,17 @@ ENTRY(_start)
 	/* reserved */
 	.word 0
 	.balign 8
+#ifdef CONFIG_RISCV_M_MODE
+	/* Image load offset (0MB) from start of RAM for M-mode */
+	.dword 0
+#else
 #if __riscv_xlen == 64
 	/* Image load offset(2MB) from start of RAM */
 	.dword 0x200000
 #else
 	/* Image load offset(4MB) from start of RAM */
 	.dword 0x400000
 #endif
+#endif
 	/* Effective size of kernel image */
 	.dword _end - _start
@@ -845,13 +845,14 @@ void __init smp_detect_cpus(void)
 
 static void smp_init_secondary(void)
 {
-	int cpu = smp_processor_id();
+	int cpu = raw_smp_processor_id();
 
 	S390_lowcore.last_update_clock = get_tod_clock();
 	restore_access_regs(S390_lowcore.access_regs_save_area);
 	set_cpu_flag(CIF_ASCE_PRIMARY);
 	set_cpu_flag(CIF_ASCE_SECONDARY);
 	cpu_init();
+	rcu_cpu_starting(cpu);
 	preempt_disable();
 	init_cpu_timer();
 	vtime_init();
@@ -1252,6 +1252,14 @@ static int ssb_prctl_set(struct task_struct *task, unsigned long ctrl)
 	return 0;
 }
 
+static bool is_spec_ib_user_controlled(void)
+{
+	return spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
+		spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
+		spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
+		spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP;
+}
+
 static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 {
 	switch (ctrl) {
@@ -1259,17 +1267,26 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
 		    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
 			return 0;
-		/*
-		 * Indirect branch speculation is always disabled in strict
-		 * mode. It can neither be enabled if it was force-disabled
-		 * by a previous prctl call.
 
+		/*
+		 * With strict mode for both IBPB and STIBP, the instruction
+		 * code paths avoid checking this task flag and instead,
+		 * unconditionally run the instruction. However, STIBP and IBPB
+		 * are independent and either can be set to conditionally
+		 * enabled regardless of the mode of the other.
+		 *
+		 * If either is set to conditional, allow the task flag to be
+		 * updated, unless it was force-disabled by a previous prctl
+		 * call. Currently, this is possible on an AMD CPU which has the
+		 * feature X86_FEATURE_AMD_STIBP_ALWAYS_ON. In this case, if the
+		 * kernel is booted with 'spectre_v2_user=seccomp', then
+		 * spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP and
+		 * spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED.
 		 */
-		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED ||
+		if (!is_spec_ib_user_controlled() ||
 		    task_spec_ib_force_disable(task))
 			return -EPERM;
+
 		task_clear_spec_ib_disable(task);
 		task_update_spec_tif(task);
 		break;
@@ -1282,10 +1299,10 @@ static int ib_prctl_set(struct task_struct *task, unsigned long ctrl)
 		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
 		    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
 			return -EPERM;
-		if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
-		    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
+
+		if (!is_spec_ib_user_controlled())
 			return 0;
+
 		task_set_spec_ib_disable(task);
 		if (ctrl == PR_SPEC_FORCE_DISABLE)
 			task_set_spec_ib_force_disable(task);
@@ -1350,20 +1367,17 @@ static int ib_prctl_get(struct task_struct *task)
 	if (spectre_v2_user_ibpb == SPECTRE_V2_USER_NONE &&
 	    spectre_v2_user_stibp == SPECTRE_V2_USER_NONE)
 		return PR_SPEC_ENABLE;
-	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
-		return PR_SPEC_DISABLE;
-	else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_PRCTL ||
-	    spectre_v2_user_ibpb == SPECTRE_V2_USER_SECCOMP ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_PRCTL ||
-	    spectre_v2_user_stibp == SPECTRE_V2_USER_SECCOMP) {
+	else if (is_spec_ib_user_controlled()) {
 		if (task_spec_ib_force_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_FORCE_DISABLE;
 		if (task_spec_ib_disable(task))
 			return PR_SPEC_PRCTL | PR_SPEC_DISABLE;
 		return PR_SPEC_PRCTL | PR_SPEC_ENABLE;
-	} else
+	} else if (spectre_v2_user_ibpb == SPECTRE_V2_USER_STRICT ||
+	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT ||
+	    spectre_v2_user_stibp == SPECTRE_V2_USER_STRICT_PREFERRED)
+		return PR_SPEC_DISABLE;
+	else
 		return PR_SPEC_NOT_AFFECTED;
 }
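The behaviour changed above is visible to userspace through the speculation-control prctl. A hedged sketch of querying and force-disabling indirect-branch speculation for the current task (constants from <linux/prctl.h>, error handling trimmed):

	long st = prctl(PR_GET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH, 0, 0, 0);

	if (st & PR_SPEC_PRCTL)			/* per-task control available */
		prctl(PR_SET_SPECULATION_CTRL, PR_SPEC_INDIRECT_BRANCH,
		      PR_SPEC_FORCE_DISABLE, 0, 0);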
@@ -5235,6 +5235,10 @@ static void kvm_init_msr_list(void)
 			if (!kvm_x86_ops->rdtscp_supported())
 				continue;
 			break;
+		case MSR_IA32_UMWAIT_CONTROL:
+			if (!boot_cpu_has(X86_FEATURE_WAITPKG))
+				continue;
+			break;
 		case MSR_IA32_RTIT_CTL:
 		case MSR_IA32_RTIT_STATUS:
 			if (!kvm_x86_ops->pt_supported())
@@ -296,7 +296,7 @@ static void nbd_size_clear(struct nbd_device *nbd)
 	}
 }
 
-static void nbd_size_update(struct nbd_device *nbd)
+static void nbd_size_update(struct nbd_device *nbd, bool start)
 {
 	struct nbd_config *config = nbd->config;
 	struct block_device *bdev = bdget_disk(nbd->disk, 0);
@@ -312,7 +312,8 @@ static void nbd_size_update(struct nbd_device *nbd)
 	if (bdev) {
 		if (bdev->bd_disk) {
 			bd_set_size(bdev, config->bytesize);
-			set_blocksize(bdev, config->blksize);
+			if (start)
+				set_blocksize(bdev, config->blksize);
 		} else
 			bdev->bd_invalidated = 1;
 		bdput(bdev);
@@ -327,7 +328,7 @@ static void nbd_size_set(struct nbd_device *nbd, loff_t blocksize,
 	config->blksize = blocksize;
 	config->bytesize = blocksize * nr_blocks;
 	if (nbd->task_recv != NULL)
-		nbd_size_update(nbd);
+		nbd_size_update(nbd, false);
 }
 
 static void nbd_complete_rq(struct request *req)
@@ -1293,7 +1294,7 @@ static int nbd_start_device(struct nbd_device *nbd)
 		args->index = i;
 		queue_work(nbd->recv_workq, &args->work);
 	}
-	nbd_size_update(nbd);
+	nbd_size_update(nbd, true);
 	return error;
 }
 
@@ -1502,6 +1503,7 @@ static void nbd_release(struct gendisk *disk, fmode_t mode)
 	if (test_bit(NBD_RT_DISCONNECT_ON_CLOSE, &nbd->config->runtime_flags) &&
 			bdev->bd_openers == 0)
 		nbd_disconnect_and_put(nbd);
+	bdput(bdev);
 
 	nbd_config_put(nbd);
 	nbd_put(nbd);
@@ -1249,7 +1249,6 @@ void add_interrupt_randomness(int irq, int irq_flags)
 
 	fast_mix(fast_pool);
 	add_interrupt_bench(cycles);
-	this_cpu_add(net_rand_state.s1, fast_pool->pool[cycles & 3]);
 
 	if (unlikely(crng_init == 0)) {
 		if ((fast_pool->count >= 64) &&
@@ -41,6 +41,11 @@ int tpm_read_log_efi(struct tpm_chip *chip)
 	log_size = log_tbl->size;
 	memunmap(log_tbl);
 
+	if (!log_size) {
+		pr_warn("UEFI TPM log area empty\n");
+		return -EIO;
+	}
+
 	log_tbl = memremap(efi.tpm_log, sizeof(*log_tbl) + log_size,
 			   MEMREMAP_WB);
 	if (!log_tbl) {
@@ -27,6 +27,7 @@
 #include <linux/of.h>
 #include <linux/of_device.h>
 #include <linux/kernel.h>
+#include <linux/dmi.h>
 #include "tpm.h"
 #include "tpm_tis_core.h"
 
@@ -49,8 +50,8 @@ static inline struct tpm_tis_tcg_phy *to_tpm_tis_tcg_phy(struct tpm_tis_data *data)
 	return container_of(data, struct tpm_tis_tcg_phy, priv);
 }
 
-static bool interrupts = true;
-module_param(interrupts, bool, 0444);
+static int interrupts = -1;
+module_param(interrupts, int, 0444);
 MODULE_PARM_DESC(interrupts, "Enable interrupts");
 
 static bool itpm;
@@ -63,6 +64,28 @@ module_param(force, bool, 0444);
 MODULE_PARM_DESC(force, "Force device probe rather than using ACPI entry");
 #endif
 
+static int tpm_tis_disable_irq(const struct dmi_system_id *d)
+{
+	if (interrupts == -1) {
+		pr_notice("tpm_tis: %s detected: disabling interrupts.\n", d->ident);
+		interrupts = 0;
+	}
+
+	return 0;
+}
+
+static const struct dmi_system_id tpm_tis_dmi_table[] = {
+	{
+		.callback = tpm_tis_disable_irq,
+		.ident = "ThinkPad T490s",
+		.matches = {
+			DMI_MATCH(DMI_SYS_VENDOR, "LENOVO"),
+			DMI_MATCH(DMI_PRODUCT_VERSION, "ThinkPad T490s"),
+		},
+	},
+	{}
+};
+
 #if defined(CONFIG_PNP) && defined(CONFIG_ACPI)
 static int has_hid(struct acpi_device *dev, const char *hid)
 {
@@ -192,6 +215,8 @@ static int tpm_tis_init(struct device *dev, struct tpm_info *tpm_info)
 	int irq = -1;
 	int rc;
 
+	dmi_check_system(tpm_tis_dmi_table);
+
 	rc = check_acpi_tpm2(dev);
 	if (rc)
 		return rc;
@@ -435,12 +435,12 @@ static struct port_buffer *alloc_buf(struct virtio_device *vdev, size_t buf_size,
 		/*
 		 * Allocate DMA memory from ancestor. When a virtio
 		 * device is created by remoteproc, the DMA memory is
-		 * associated with the grandparent device:
-		 * vdev => rproc => platform-dev.
+		 * associated with the parent device:
+		 * virtioY => remoteprocX#vdevYbuffer.
 		 */
-		if (!vdev->dev.parent || !vdev->dev.parent->parent)
+		buf->dev = vdev->dev.parent;
+		if (!buf->dev)
 			goto free_buf;
-		buf->dev = vdev->dev.parent->parent;
 
 		/* Increase device refcnt to avoid freeing it */
 		get_device(buf->dev);
@@ -28,6 +28,47 @@
 #include <linux/spinlock.h>
 #include <linux/types.h>
 
+/*
+ * PLX PEX8311 PCI LCS_INTCSR Interrupt Control/Status
+ *
+ * Bit: Description
+ *   0: Enable Interrupt Sources (Bit 0)
+ *   1: Enable Interrupt Sources (Bit 1)
+ *   2: Generate Internal PCI Bus Internal SERR# Interrupt
+ *   3: Mailbox Interrupt Enable
+ *   4: Power Management Interrupt Enable
+ *   5: Power Management Interrupt
+ *   6: Slave Read Local Data Parity Check Error Enable
+ *   7: Slave Read Local Data Parity Check Error Status
+ *   8: Internal PCI Wire Interrupt Enable
+ *   9: PCI Express Doorbell Interrupt Enable
+ *  10: PCI Abort Interrupt Enable
+ *  11: Local Interrupt Input Enable
+ *  12: Retry Abort Enable
+ *  13: PCI Express Doorbell Interrupt Active
+ *  14: PCI Abort Interrupt Active
+ *  15: Local Interrupt Input Active
+ *  16: Local Interrupt Output Enable
+ *  17: Local Doorbell Interrupt Enable
+ *  18: DMA Channel 0 Interrupt Enable
+ *  19: DMA Channel 1 Interrupt Enable
+ *  20: Local Doorbell Interrupt Active
+ *  21: DMA Channel 0 Interrupt Active
+ *  22: DMA Channel 1 Interrupt Active
+ *  23: Built-In Self-Test (BIST) Interrupt Active
+ *  24: Direct Master was the Bus Master during a Master or Target Abort
+ *  25: DMA Channel 0 was the Bus Master during a Master or Target Abort
+ *  26: DMA Channel 1 was the Bus Master during a Master or Target Abort
+ *  27: Target Abort after internal 256 consecutive Master Retrys
+ *  28: PCI Bus wrote data to LCS_MBOX0
+ *  29: PCI Bus wrote data to LCS_MBOX1
+ *  30: PCI Bus wrote data to LCS_MBOX2
+ *  31: PCI Bus wrote data to LCS_MBOX3
+ */
+#define PLX_PEX8311_PCI_LCS_INTCSR	0x68
+#define INTCSR_INTERNAL_PCI_WIRE	BIT(8)
+#define INTCSR_LOCAL_INPUT		BIT(11)
+
 /**
  * struct idio_24_gpio_reg - GPIO device registers structure
  * @out0_7: Read: FET Outputs 0-7
@@ -92,6 +133,7 @@ struct idio_24_gpio_reg {
 struct idio_24_gpio {
 	struct gpio_chip chip;
 	raw_spinlock_t lock;
+	__u8 __iomem *plx;
 	struct idio_24_gpio_reg __iomem *reg;
 	unsigned long irq_mask;
 };
@@ -360,13 +402,13 @@ static void idio_24_irq_mask(struct irq_data *data)
 	unsigned long flags;
 	const unsigned long bit_offset = irqd_to_hwirq(data) - 24;
 	unsigned char new_irq_mask;
-	const unsigned long bank_offset = bit_offset/8 * 8;
+	const unsigned long bank_offset = bit_offset / 8;
 	unsigned char cos_enable_state;
 
 	raw_spin_lock_irqsave(&idio24gpio->lock, flags);
 
-	idio24gpio->irq_mask &= BIT(bit_offset);
-	new_irq_mask = idio24gpio->irq_mask >> bank_offset;
+	idio24gpio->irq_mask &= ~BIT(bit_offset);
+	new_irq_mask = idio24gpio->irq_mask >> bank_offset * 8;
 
 	if (!new_irq_mask) {
 		cos_enable_state = ioread8(&idio24gpio->reg->cos_enable);
@@ -389,12 +431,12 @@ static void idio_24_irq_unmask(struct irq_data *data)
 	unsigned long flags;
 	unsigned char prev_irq_mask;
 	const unsigned long bit_offset = irqd_to_hwirq(data) - 24;
-	const unsigned long bank_offset = bit_offset/8 * 8;
+	const unsigned long bank_offset = bit_offset / 8;
 	unsigned char cos_enable_state;
 
 	raw_spin_lock_irqsave(&idio24gpio->lock, flags);
 
-	prev_irq_mask = idio24gpio->irq_mask >> bank_offset;
+	prev_irq_mask = idio24gpio->irq_mask >> bank_offset * 8;
 	idio24gpio->irq_mask |= BIT(bit_offset);
 
 	if (!prev_irq_mask) {
@@ -481,6 +523,7 @@ static int idio_24_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	struct device *const dev = &pdev->dev;
 	struct idio_24_gpio *idio24gpio;
 	int err;
+	const size_t pci_plx_bar_index = 1;
 	const size_t pci_bar_index = 2;
 	const char *const name = pci_name(pdev);
 
@@ -494,12 +537,13 @@ static int idio_24_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		return err;
 	}
 
-	err = pcim_iomap_regions(pdev, BIT(pci_bar_index), name);
+	err = pcim_iomap_regions(pdev, BIT(pci_plx_bar_index) | BIT(pci_bar_index), name);
 	if (err) {
 		dev_err(dev, "Unable to map PCI I/O addresses (%d)\n", err);
 		return err;
 	}
 
+	idio24gpio->plx = pcim_iomap_table(pdev)[pci_plx_bar_index];
 	idio24gpio->reg = pcim_iomap_table(pdev)[pci_bar_index];
 
 	idio24gpio->chip.label = name;
@@ -520,6 +564,12 @@ static int idio_24_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 
 	/* Software board reset */
 	iowrite8(0, &idio24gpio->reg->soft_reset);
+	/*
+	 * enable PLX PEX8311 internal PCI wire interrupt and local interrupt
+	 * input
+	 */
+	iowrite8((INTCSR_INTERNAL_PCI_WIRE | INTCSR_LOCAL_INPUT) >> 8,
+		 idio24gpio->plx + PLX_PEX8311_PCI_LCS_INTCSR + 1);
 
 	err = devm_gpiochip_add_data(dev, &idio24gpio->chip, idio24gpio);
 	if (err) {
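Two arithmetic fixes are visible above: clearing a bit needs '&= ~BIT(...)', not '&= BIT(...)', and the old bank_offset conflated a bank index with a bit shift. The corrected extraction of one 8-bit bank from the 24-bit IRQ mask reduces to this sketch (not literal driver code):

	static unsigned char bank_mask(unsigned long irq_mask,
				       unsigned long bit_offset)
	{
		const unsigned long bank = bit_offset / 8;	/* bank index */

		return irq_mask >> (bank * 8);			/* 8-bit slice */
	}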
@@ -1071,22 +1071,19 @@ static int cik_sdma_soft_reset(void *handle)
 {
 	u32 srbm_soft_reset = 0;
 	struct amdgpu_device *adev = (struct amdgpu_device *)handle;
-	u32 tmp = RREG32(mmSRBM_STATUS2);
+	u32 tmp;
 
-	if (tmp & SRBM_STATUS2__SDMA_BUSY_MASK) {
-		/* sdma0 */
-		tmp = RREG32(mmSDMA0_F32_CNTL + SDMA0_REGISTER_OFFSET);
-		tmp |= SDMA0_F32_CNTL__HALT_MASK;
-		WREG32(mmSDMA0_F32_CNTL + SDMA0_REGISTER_OFFSET, tmp);
-		srbm_soft_reset |= SRBM_SOFT_RESET__SOFT_RESET_SDMA_MASK;
-	}
-	if (tmp & SRBM_STATUS2__SDMA1_BUSY_MASK) {
-		/* sdma1 */
-		tmp = RREG32(mmSDMA0_F32_CNTL + SDMA1_REGISTER_OFFSET);
-		tmp |= SDMA0_F32_CNTL__HALT_MASK;
-		WREG32(mmSDMA0_F32_CNTL + SDMA1_REGISTER_OFFSET, tmp);
-		srbm_soft_reset |= SRBM_SOFT_RESET__SOFT_RESET_SDMA1_MASK;
-	}
+	/* sdma0 */
+	tmp = RREG32(mmSDMA0_F32_CNTL + SDMA0_REGISTER_OFFSET);
+	tmp |= SDMA0_F32_CNTL__HALT_MASK;
+	WREG32(mmSDMA0_F32_CNTL + SDMA0_REGISTER_OFFSET, tmp);
+	srbm_soft_reset |= SRBM_SOFT_RESET__SOFT_RESET_SDMA_MASK;
+
+	/* sdma1 */
+	tmp = RREG32(mmSDMA0_F32_CNTL + SDMA1_REGISTER_OFFSET);
+	tmp |= SDMA0_F32_CNTL__HALT_MASK;
+	WREG32(mmSDMA0_F32_CNTL + SDMA1_REGISTER_OFFSET, tmp);
+	srbm_soft_reset |= SRBM_SOFT_RESET__SOFT_RESET_SDMA1_MASK;
 
 	if (srbm_soft_reset) {
 		tmp = RREG32(mmSRBM_SOFT_RESET);
@@ -1144,8 +1144,7 @@ static int soc15_common_early_init(void *handle)
 
 			adev->pg_flags = AMD_PG_SUPPORT_SDMA |
 				AMD_PG_SUPPORT_MMHUB |
-				AMD_PG_SUPPORT_VCN |
-				AMD_PG_SUPPORT_VCN_DPG;
+				AMD_PG_SUPPORT_VCN;
 		} else {
 			adev->cg_flags = AMD_CG_SUPPORT_GFX_MGCG |
 				AMD_CG_SUPPORT_GFX_MGLS |
@@ -1533,6 +1533,10 @@ int smu7_disable_dpm_tasks(struct pp_hwmgr *hwmgr)
 	PP_ASSERT_WITH_CODE((tmp_result == 0),
 			"Failed to reset to default!", result = tmp_result);
 
+	tmp_result = smum_stop_smc(hwmgr);
+	PP_ASSERT_WITH_CODE((tmp_result == 0),
+			"Failed to stop smc!", result = tmp_result);
+
 	tmp_result = smu7_force_switch_to_arbf0(hwmgr);
 	PP_ASSERT_WITH_CODE((tmp_result == 0),
 			"Failed to force to switch arbf0!", result = tmp_result);
@@ -229,6 +229,7 @@ struct pp_smumgr_func {
 	bool (*is_hw_avfs_present)(struct pp_hwmgr  *hwmgr);
 	int (*update_dpm_settings)(struct pp_hwmgr *hwmgr, void *profile_setting);
 	int (*smc_table_manager)(struct pp_hwmgr *hwmgr, uint8_t *table, uint16_t table_id, bool rw); /*rw: true for read, false for write */
+	int (*stop_smc)(struct pp_hwmgr *hwmgr);
 };
 
 struct pp_hwmgr_func {
@@ -114,4 +114,6 @@ extern int smum_update_dpm_settings(struct pp_hwmgr *hwmgr, void *profile_setting);
 
 extern int smum_smc_table_manager(struct pp_hwmgr *hwmgr, uint8_t *table, uint16_t table_id, bool rw);
 
+extern int smum_stop_smc(struct pp_hwmgr *hwmgr);
+
 #endif
@@ -2725,10 +2725,7 @@ static int ci_initialize_mc_reg_table(struct pp_hwmgr *hwmgr)
 
 static bool ci_is_dpm_running(struct pp_hwmgr *hwmgr)
 {
-	return (1 == PHM_READ_INDIRECT_FIELD(hwmgr->device,
-					     CGS_IND_REG__SMC, FEATURE_STATUS,
-					     VOLTAGE_CONTROLLER_ON))
-		? true : false;
+	return ci_is_smc_ram_running(hwmgr);
 }
 
 static int ci_smu_init(struct pp_hwmgr *hwmgr)
@@ -2936,6 +2933,29 @@ static int ci_update_smc_table(struct pp_hwmgr *hwmgr, uint32_t type)
 	return 0;
 }
 
+static void ci_reset_smc(struct pp_hwmgr *hwmgr)
+{
+	PHM_WRITE_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC,
+				  SMC_SYSCON_RESET_CNTL,
+				  rst_reg, 1);
+}
+
+
+static void ci_stop_smc_clock(struct pp_hwmgr *hwmgr)
+{
+	PHM_WRITE_INDIRECT_FIELD(hwmgr->device, CGS_IND_REG__SMC,
+				  SMC_SYSCON_CLOCK_CNTL_0,
+				  ck_disable, 1);
+}
+
+static int ci_stop_smc(struct pp_hwmgr *hwmgr)
+{
+	ci_reset_smc(hwmgr);
+	ci_stop_smc_clock(hwmgr);
+
+	return 0;
+}
+
 const struct pp_smumgr_func ci_smu_funcs = {
 	.name = "ci_smu",
 	.smu_init = ci_smu_init,
@@ -2960,4 +2980,5 @@ const struct pp_smumgr_func ci_smu_funcs = {
 	.is_dpm_running = ci_is_dpm_running,
 	.update_dpm_settings = ci_update_dpm_settings,
 	.update_smc_table = ci_update_smc_table,
+	.stop_smc = ci_stop_smc,
 };
@@ -217,3 +217,11 @@ int smum_smc_table_manager(struct pp_hwmgr *hwmgr, uint8_t *table, uint16_t table_id, bool rw)
 
 	return -EINVAL;
 }
+
+int smum_stop_smc(struct pp_hwmgr *hwmgr)
+{
+	if (hwmgr->smumgr_funcs->stop_smc)
+		return hwmgr->smumgr_funcs->stop_smc(hwmgr);
+
+	return 0;
+}
@@ -337,6 +337,7 @@ int psb_irq_postinstall(struct drm_device *dev)
 {
 	struct drm_psb_private *dev_priv = dev->dev_private;
 	unsigned long irqflags;
+	unsigned int i;
 
 	spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
 
@@ -349,20 +350,12 @@ int psb_irq_postinstall(struct drm_device *dev)
 	PSB_WVDC32(dev_priv->vdc_irq_mask, PSB_INT_ENABLE_R);
 	PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM);
 
-	if (dev->vblank[0].enabled)
-		psb_enable_pipestat(dev_priv, 0, PIPE_VBLANK_INTERRUPT_ENABLE);
-	else
-		psb_disable_pipestat(dev_priv, 0, PIPE_VBLANK_INTERRUPT_ENABLE);
-
-	if (dev->vblank[1].enabled)
-		psb_enable_pipestat(dev_priv, 1, PIPE_VBLANK_INTERRUPT_ENABLE);
-	else
-		psb_disable_pipestat(dev_priv, 1, PIPE_VBLANK_INTERRUPT_ENABLE);
-
-	if (dev->vblank[2].enabled)
-		psb_enable_pipestat(dev_priv, 2, PIPE_VBLANK_INTERRUPT_ENABLE);
-	else
-		psb_disable_pipestat(dev_priv, 2, PIPE_VBLANK_INTERRUPT_ENABLE);
+	for (i = 0; i < dev->num_crtcs; ++i) {
+		if (dev->vblank[i].enabled)
+			psb_enable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
+		else
+			psb_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
+	}
 
 	if (dev_priv->ops->hotplug_enable)
 		dev_priv->ops->hotplug_enable(dev, true);
@@ -375,6 +368,7 @@ void psb_irq_uninstall(struct drm_device *dev)
 {
 	struct drm_psb_private *dev_priv = dev->dev_private;
 	unsigned long irqflags;
+	unsigned int i;
 
 	spin_lock_irqsave(&dev_priv->irqmask_lock, irqflags);
 
@@ -383,14 +377,10 @@ void psb_irq_uninstall(struct drm_device *dev)
 
 	PSB_WVDC32(0xFFFFFFFF, PSB_HWSTAM);
 
-	if (dev->vblank[0].enabled)
-		psb_disable_pipestat(dev_priv, 0, PIPE_VBLANK_INTERRUPT_ENABLE);
-
-	if (dev->vblank[1].enabled)
-		psb_disable_pipestat(dev_priv, 1, PIPE_VBLANK_INTERRUPT_ENABLE);
-
-	if (dev->vblank[2].enabled)
-		psb_disable_pipestat(dev_priv, 2, PIPE_VBLANK_INTERRUPT_ENABLE);
+	for (i = 0; i < dev->num_crtcs; ++i) {
+		if (dev->vblank[i].enabled)
+			psb_disable_pipestat(dev_priv, i, PIPE_VBLANK_INTERRUPT_ENABLE);
+	}
 
 	dev_priv->vdc_irq_mask &= _PSB_IRQ_SGX_FLAG |
 				  _PSB_IRQ_MSVDX_FLAG |
@@ -605,21 +605,6 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	if (!obj)
 		return -ENOENT;
 
-	/*
-	 * Already in the desired write domain? Nothing for us to do!
-	 *
-	 * We apply a little bit of cunning here to catch a broader set of
-	 * no-ops. If obj->write_domain is set, we must be in the same
-	 * obj->read_domains, and only that domain. Therefore, if that
-	 * obj->write_domain matches the request read_domains, we are
-	 * already in the same read/write domain and can skip the operation,
-	 * without having to further check the requested write_domain.
-	 */
-	if (READ_ONCE(obj->write_domain) == read_domains) {
-		err = 0;
-		goto out;
-	}
-
 	/*
 	 * Try to flush the object off the GPU without holding the lock.
 	 * We will repeat the flush holding the lock in the normal manner
@@ -657,6 +642,19 @@ i915_gem_set_domain_ioctl(struct drm_device *dev, void *data,
 	if (err)
 		goto out;
 
+	/*
+	 * Already in the desired write domain? Nothing for us to do!
+	 *
+	 * We apply a little bit of cunning here to catch a broader set of
+	 * no-ops. If obj->write_domain is set, we must be in the same
+	 * obj->read_domains, and only that domain. Therefore, if that
+	 * obj->write_domain matches the request read_domains, we are
+	 * already in the same read/write domain and can skip the operation,
+	 * without having to further check the requested write_domain.
+	 */
+	if (READ_ONCE(obj->write_domain) == read_domains)
+		goto out_unpin;
+
 	err = i915_gem_object_lock_interruptible(obj);
 	if (err)
 		goto out_unpin;
@@ -354,7 +354,8 @@ static void __setup_engine_capabilities(struct intel_engine_cs *engine)
 		 * instances.
 		 */
 		if ((INTEL_GEN(i915) >= 11 &&
-		     RUNTIME_INFO(i915)->vdbox_sfc_access & engine->mask) ||
+		     (RUNTIME_INFO(i915)->vdbox_sfc_access &
+		      BIT(engine->instance))) ||
 		    (INTEL_GEN(i915) >= 9 && engine->instance == 0))
 			engine->uabi_capabilities |=
 				I915_VIDEO_AND_ENHANCE_CLASS_CAPABILITY_SFC;
@@ -1277,7 +1277,7 @@ static void balloon_up(struct work_struct *dummy)
 
 	/* Refuse to balloon below the floor. */
 	if (avail_pages < num_pages || avail_pages - num_pages < floor) {
-		pr_warn("Balloon request will be partially fulfilled. %s\n",
+		pr_info("Balloon request will be partially fulfilled. %s\n",
 			avail_pages < num_pages ? "Not enough memory." :
 			"Balloon floor reached.");
@@ -389,6 +389,10 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
 {
 	u16 control_reg;
 
+	writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST);
+	udelay(50);
+	writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
+
 	mtk_i2c_writew(i2c, I2C_SOFT_RST, OFFSET_SOFTRESET);
 
 	/* Set ioconfig */
@@ -419,10 +423,6 @@ static void mtk_i2c_init_hw(struct mtk_i2c *i2c)
 
 	mtk_i2c_writew(i2c, control_reg, OFFSET_CONTROL);
 	mtk_i2c_writew(i2c, I2C_DELAY_LEN, OFFSET_DELAY_LEN);
-
-	writel(I2C_DMA_HARD_RST, i2c->pdmabase + OFFSET_RST);
-	udelay(50);
-	writel(I2C_DMA_CLR_FLAG, i2c->pdmabase + OFFSET_RST);
 }
 
 /*
@@ -129,6 +129,7 @@ struct sh_mobile_i2c_data {
 	int sr;
 	bool send_stop;
 	bool stop_after_dma;
+	bool atomic_xfer;
 
 	struct resource *res;
 	struct dma_chan *dma_tx;
@@ -333,13 +334,15 @@ static unsigned char i2c_op(struct sh_mobile_i2c_data *pd, enum sh_mobile_i2c_op op)
 		ret = iic_rd(pd, ICDR);
 		break;
 	case OP_RX_STOP: /* enable DTE interrupt, issue stop */
-		iic_wr(pd, ICIC,
-		       ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE);
+		if (!pd->atomic_xfer)
+			iic_wr(pd, ICIC,
+			       ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE);
 		iic_wr(pd, ICCR, ICCR_ICE | ICCR_RACK);
 		break;
 	case OP_RX_STOP_DATA: /* enable DTE interrupt, read data, issue stop */
-		iic_wr(pd, ICIC,
-		       ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE);
+		if (!pd->atomic_xfer)
+			iic_wr(pd, ICIC,
+			       ICIC_DTEE | ICIC_WAITE | ICIC_ALE | ICIC_TACKE);
 		ret = iic_rd(pd, ICDR);
 		iic_wr(pd, ICCR, ICCR_ICE | ICCR_RACK);
 		break;
@@ -435,7 +438,8 @@ static irqreturn_t sh_mobile_i2c_isr(int irq, void *dev_id)
 
 	if (wakeup) {
 		pd->sr |= SW_DONE;
-		wake_up(&pd->wait);
+		if (!pd->atomic_xfer)
+			wake_up(&pd->wait);
 	}
 
 	/* defeat write posting to avoid spurious WAIT interrupts */
@@ -587,6 +591,9 @@ static void start_ch(struct sh_mobile_i2c_data *pd, struct i2c_msg *usr_msg,
 	pd->pos = -1;
 	pd->sr = 0;
 
+	if (pd->atomic_xfer)
+		return;
+
 	pd->dma_buf = i2c_get_dma_safe_msg_buf(pd->msg, 8);
 	if (pd->dma_buf)
 		sh_mobile_i2c_xfer_dma(pd);
@@ -643,15 +650,13 @@ static int poll_busy(struct sh_mobile_i2c_data *pd)
 	return i ? 0 : -ETIMEDOUT;
 }
 
-static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter,
-			      struct i2c_msg *msgs,
-			      int num)
+static int sh_mobile_xfer(struct sh_mobile_i2c_data *pd,
+			  struct i2c_msg *msgs, int num)
 {
-	struct sh_mobile_i2c_data *pd = i2c_get_adapdata(adapter);
 	struct i2c_msg	*msg;
 	int err = 0;
 	int i;
-	long timeout;
+	long time_left;
 
 	/* Wake up device and enable clock */
 	pm_runtime_get_sync(pd->dev);
@@ -668,15 +673,35 @@ static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter,
 		if (do_start)
 			i2c_op(pd, OP_START);
 
-		/* The interrupt handler takes care of the rest... */
-		timeout = wait_event_timeout(pd->wait,
-				       pd->sr & (ICSR_TACK | SW_DONE),
-				       adapter->timeout);
-
-		/* 'stop_after_dma' tells if DMA transfer was complete */
-		i2c_put_dma_safe_msg_buf(pd->dma_buf, pd->msg, pd->stop_after_dma);
+		if (pd->atomic_xfer) {
+			unsigned long j = jiffies + pd->adap.timeout;
+
+			time_left = time_before_eq(jiffies, j);
+			while (time_left &&
+			       !(pd->sr & (ICSR_TACK | SW_DONE))) {
+				unsigned char sr = iic_rd(pd, ICSR);
+
+				if (sr & (ICSR_AL   | ICSR_TACK |
+					  ICSR_WAIT | ICSR_DTE)) {
+					sh_mobile_i2c_isr(0, pd);
+					udelay(150);
+				} else {
+					cpu_relax();
+				}
+				time_left = time_before_eq(jiffies, j);
+			}
+		} else {
+			/* The interrupt handler takes care of the rest... */
+			time_left = wait_event_timeout(pd->wait,
+					pd->sr & (ICSR_TACK | SW_DONE),
+					pd->adap.timeout);
+
+			/* 'stop_after_dma' tells if DMA xfer was complete */
+			i2c_put_dma_safe_msg_buf(pd->dma_buf, pd->msg,
+						 pd->stop_after_dma);
+		}
 
-		if (!timeout) {
+		if (!time_left) {
 			dev_err(pd->dev, "Transfer request timed out\n");
 			if (pd->dma_direction != DMA_NONE)
 				sh_mobile_i2c_cleanup_dma(pd);
@@ -702,14 +727,35 @@ static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter,
 	return err ?: num;
 }
 
+static int sh_mobile_i2c_xfer(struct i2c_adapter *adapter,
+			      struct i2c_msg *msgs,
+			      int num)
+{
+	struct sh_mobile_i2c_data *pd = i2c_get_adapdata(adapter);
+
+	pd->atomic_xfer = false;
+	return sh_mobile_xfer(pd, msgs, num);
+}
+
+static int sh_mobile_i2c_xfer_atomic(struct i2c_adapter *adapter,
+				     struct i2c_msg *msgs,
+				     int num)
+{
+	struct sh_mobile_i2c_data *pd = i2c_get_adapdata(adapter);
+
+	pd->atomic_xfer = true;
+	return sh_mobile_xfer(pd, msgs, num);
+}
+
 static u32 sh_mobile_i2c_func(struct i2c_adapter *adapter)
 {
 	return I2C_FUNC_I2C | I2C_FUNC_SMBUS_EMUL | I2C_FUNC_PROTOCOL_MANGLING;
 }
 
 static const struct i2c_algorithm sh_mobile_i2c_algorithm = {
-	.functionality = sh_mobile_i2c_func,
-	.master_xfer = sh_mobile_i2c_xfer,
+	.functionality = sh_mobile_i2c_func,
+	.master_xfer = sh_mobile_i2c_xfer,
+	.master_xfer_atomic = sh_mobile_i2c_xfer_atomic,
 };
 
 static const struct i2c_adapter_quirks sh_mobile_i2c_quirks = {
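Atomic transfers are used when the kernel must talk to a device with interrupts unusable, e.g. sending a PMIC shutdown command very late in reboot; the driver then polls status registers instead of sleeping. An adapter opts in by filling master_xfer_atomic, roughly like this sketch (helper names hypothetical):

	static const struct i2c_algorithm demo_algorithm = {
		.functionality		= demo_func,		/* hypothetical */
		.master_xfer		= demo_xfer,		/* may sleep */
		.master_xfer_atomic	= demo_xfer_atomic,	/* polls, IRQs off */
	};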
@@ -406,7 +406,11 @@ extern bool amd_iommu_np_cache;
 /* Only true if all IOMMUs support device IOTLBs */
 extern bool amd_iommu_iotlb_sup;
 
-#define MAX_IRQS_PER_TABLE	256
+/*
+ * AMD IOMMU hardware only support 512 IRTEs despite
+ * the architectural limitation of 2048 entries.
+ */
+#define MAX_IRQS_PER_TABLE	512
 #define IRQ_TABLE_ALIGNMENT	128
 
 struct irq_remap_table {
@@ -646,7 +646,7 @@ static irqreturn_t prq_event_thread(int irq, void *d)
 		resp.qw0 = QI_PGRP_PASID(req->pasid) |
 			QI_PGRP_DID(req->rid) |
 			QI_PGRP_PASID_P(req->pasid_present) |
-			QI_PGRP_PDP(req->pasid_present) |
+			QI_PGRP_PDP(req->priv_data_present) |
 			QI_PGRP_RESP_CODE(result) |
 			QI_PGRP_RESP_TYPE;
 		resp.qw1 = QI_PGRP_IDX(req->prg_index) |
@@ -128,11 +128,11 @@ static inline u8 mei_cl_me_id(const struct mei_cl *cl)
  *
  * @cl: host client
  *
- * Return: mtu
+ * Return: mtu or 0 if client is not connected
  */
 static inline size_t mei_cl_mtu(const struct mei_cl *cl)
 {
-	return cl->me_cl->props.max_msg_length;
+	return cl->me_cl ? cl->me_cl->props.max_msg_length : 0;
 }
 
 /**
@@ -874,6 +874,7 @@ int renesas_sdhi_remove(struct platform_device *pdev)
 
 	tmio_mmc_host_remove(host);
 	renesas_sdhi_clk_disable(host);
+	tmio_mmc_host_free(host);
 
 	return 0;
 }
@@ -1212,6 +1212,8 @@ static struct soc_device_attribute soc_fixup_sdhc_clkdivs[] = {
 
 static struct soc_device_attribute soc_unreliable_pulse_detection[] = {
 	{ .family = "QorIQ LX2160A", .revision = "1.0", },
+	{ .family = "QorIQ LX2160A", .revision = "2.0", },
+	{ .family = "QorIQ LS1028A", .revision = "1.0", },
 	{ },
 };
@@ -486,9 +486,13 @@ __can_get_echo_skb(struct net_device *dev, unsigned int idx, u8 *len_ptr)
 	 */
 	struct sk_buff *skb = priv->echo_skb[idx];
 	struct canfd_frame *cf = (struct canfd_frame *)skb->data;
-	u8 len = cf->len;
-
-	*len_ptr = len;
+
+	/* get the real payload length for netdev statistics */
+	if (cf->can_id & CAN_RTR_FLAG)
+		*len_ptr = 0;
+	else
+		*len_ptr = cf->len;
+
 	priv->echo_skb[idx] = NULL;
 
 	return skb;
@@ -512,7 +516,11 @@ unsigned int can_get_echo_skb(struct net_device *dev, unsigned int idx)
 	if (!skb)
 		return 0;
 
-	netif_rx(skb);
+	skb_get(skb);
+	if (netif_rx(skb) == NET_RX_SUCCESS)
+		dev_consume_skb_any(skb);
+	else
+		dev_kfree_skb_any(skb);
 
 	return len;
 }
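On the first hunk: an RTR (remote transmission request) frame advertises a data length code but carries no payload, so netdev byte counters must account it as zero; the check is equivalent to this sketch (not part of the patch). The second hunk keeps an extra reference across netif_rx() so the skb can be released with the IRQ-safe helpers whatever the verdict.

	static u8 stats_len(const struct canfd_frame *cf)
	{
		return (cf->can_id & CAN_RTR_FLAG) ? 0 : cf->len;
	}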
@@ -321,8 +321,7 @@ static const struct flexcan_devtype_data fsl_vf610_devtype_data = {
 
 static const struct flexcan_devtype_data fsl_ls1021a_r2_devtype_data = {
 	.quirks = FLEXCAN_QUIRK_DISABLE_RXFG | FLEXCAN_QUIRK_ENABLE_EACEN_RRS |
-		FLEXCAN_QUIRK_DISABLE_MECR | FLEXCAN_QUIRK_BROKEN_PERR_STATE |
-		FLEXCAN_QUIRK_USE_OFF_TIMESTAMP,
+		FLEXCAN_QUIRK_BROKEN_PERR_STATE | FLEXCAN_QUIRK_USE_OFF_TIMESTAMP,
 };
 
 static const struct can_bittiming_const flexcan_bittiming_const = {
@@ -1677,6 +1676,8 @@ static int flexcan_remove(struct platform_device *pdev)
 {
 	struct net_device *dev = platform_get_drvdata(pdev);
 
+	device_set_wakeup_enable(&pdev->dev, false);
+	device_set_wakeup_capable(&pdev->dev, false);
 	unregister_flexcandev(dev);
 	pm_runtime_disable(&pdev->dev);
 	free_candev(dev);
@@ -248,8 +248,7 @@ static int pucan_handle_can_rx(struct peak_canfd_priv *priv,
 	cf_len = get_can_dlc(pucan_msg_get_dlc(msg));
 
 	/* if this frame is an echo, */
-	if ((rx_msg_flags & PUCAN_MSG_LOOPED_BACK) &&
-	    !(rx_msg_flags & PUCAN_MSG_SELF_RECEIVE)) {
+	if (rx_msg_flags & PUCAN_MSG_LOOPED_BACK) {
 		unsigned long flags;
 
 		spin_lock_irqsave(&priv->echo_lock, flags);
@@ -263,7 +262,13 @@ static int pucan_handle_can_rx(struct peak_canfd_priv *priv,
 			netif_wake_queue(priv->ndev);
 
 		spin_unlock_irqrestore(&priv->echo_lock, flags);
-		return 0;
+
+		/* if this frame is only an echo, stop here. Otherwise,
+		 * continue to push this application self-received frame into
+		 * its own rx queue.
+		 */
+		if (!(rx_msg_flags & PUCAN_MSG_SELF_RECEIVE))
+			return 0;
 	}
 
 	/* otherwise, it should be pushed into rx fifo */
@@ -272,7 +272,7 @@ int can_rx_offload_queue_sorted(struct can_rx_offload *offload,
 
 	if (skb_queue_len(&offload->skb_queue) >
 	    offload->skb_queue_len_max) {
-		kfree_skb(skb);
+		dev_kfree_skb_any(skb);
 		return -ENOBUFS;
 	}
 
@@ -317,7 +317,7 @@ int can_rx_offload_queue_tail(struct can_rx_offload *offload,
 {
 	if (skb_queue_len(&offload->skb_queue) >
 	    offload->skb_queue_len_max) {
-		kfree_skb(skb);
+		dev_kfree_skb_any(skb);
 		return -ENOBUFS;
 	}
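kfree_skb() must not be called from hard-IRQ context, and these offload queueing paths can run there. dev_kfree_skb_any() is safe in any context because it defers the free when required; its long-standing behaviour is roughly this sketch (not code from the patch):

	static inline void drop_any_context(struct sk_buff *skb)
	{
		if (in_irq() || irqs_disabled())
			dev_kfree_skb_irq(skb);		/* defer to softirq */
		else
			dev_kfree_skb(skb);
	}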
@@ -936,7 +936,7 @@ static int ti_hecc_probe(struct platform_device *pdev)
 	err = clk_prepare_enable(priv->clk);
 	if (err) {
 		dev_err(&pdev->dev, "clk_prepare_enable() failed\n");
-		goto probe_exit_clk;
+		goto probe_exit_release_clk;
 	}
 
 	priv->offload.mailbox_read = ti_hecc_mailbox_read;
@@ -945,7 +945,7 @@ static int ti_hecc_probe(struct platform_device *pdev)
 	err = can_rx_offload_add_timestamp(ndev, &priv->offload);
 	if (err) {
 		dev_err(&pdev->dev, "can_rx_offload_add_timestamp() failed\n");
-		goto probe_exit_clk;
+		goto probe_exit_disable_clk;
 	}
 
 	err = register_candev(ndev);
@@ -963,7 +963,9 @@ static int ti_hecc_probe(struct platform_device *pdev)
 
 probe_exit_offload:
 	can_rx_offload_del(&priv->offload);
-probe_exit_clk:
+probe_exit_disable_clk:
+	clk_disable_unprepare(priv->clk);
+probe_exit_release_clk:
 	clk_put(priv->clk);
 probe_exit_candev:
 	free_candev(ndev);
@@ -130,14 +130,55 @@ void peak_usb_get_ts_time(struct peak_time_ref *time_ref, u32 ts, ktime_t *time)
 	/* protect from getting time before setting now */
 	if (ktime_to_ns(time_ref->tv_host)) {
 		u64 delta_us;
+		s64 delta_ts = 0;
+
+		/* General case: dev_ts_1 < dev_ts_2 < ts, with:
+		 *
+		 * - dev_ts_1 = previous sync timestamp
+		 * - dev_ts_2 = last sync timestamp
+		 * - ts = event timestamp
+		 * - ts_period = known sync period (theoretical)
+		 *             ~ dev_ts2 - dev_ts1
+		 * *but*:
+		 *
+		 * - time counters wrap (see adapter->ts_used_bits)
+		 * - sometimes, dev_ts_1 < ts < dev_ts2
+		 *
+		 * "normal" case (sync time counters increase):
+		 * must take into account case when ts wraps (tsw)
+		 *
+		 *      < ts_period > <          >
+		 *     |             |            |
+		 *  ---+--------+----+-------0-+--+-->
+		 *     ts_dev_1 |    ts_dev_2  |
+		 *              ts             tsw
+		 */
+		if (time_ref->ts_dev_1 < time_ref->ts_dev_2) {
+			/* case when event time (tsw) wraps */
+			if (ts < time_ref->ts_dev_1)
+				delta_ts = 1 << time_ref->adapter->ts_used_bits;
+
+		/* Otherwise, sync time counter (ts_dev_2) has wrapped:
+		 * handle case when event time (tsn) hasn't.
+		 *
+		 *      < ts_period > <          >
+		 *     |             |            |
+		 *  ---+--------+--0-+---------+--+-->
+		 *     ts_dev_1 |    ts_dev_2  |
+		 *              tsn            ts
+		 */
+		} else if (time_ref->ts_dev_1 < ts) {
+			delta_ts = -(1 << time_ref->adapter->ts_used_bits);
+		}
 
-		delta_us = ts - time_ref->ts_dev_2;
-		if (ts < time_ref->ts_dev_2)
-			delta_us &= (1 << time_ref->adapter->ts_used_bits) - 1;
+		/* add delay between last sync and event timestamps */
+		delta_ts += (signed int)(ts - time_ref->ts_dev_2);
 
-		delta_us += time_ref->ts_total;
+		/* add time from beginning to last sync */
+		delta_ts += time_ref->ts_total;
 
-		delta_us *= time_ref->adapter->us_per_ts_scale;
+		/* convert ticks number into microseconds */
+		delta_us = delta_ts * time_ref->adapter->us_per_ts_scale;
 		delta_us >>= time_ref->adapter->us_per_ts_shift;
 
 		*time = ktime_add_us(time_ref->tv_host_0, delta_us);
|
||||
|
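Note: to see why the signed correction above works, here is a small stand-alone model of the delta computation with a toy 16-bit device counter (toy widths and values, not the driver's types):

	#include <stdint.h>
	#include <stdio.h>

	#define TS_BITS 16
	#define TS_MOD  (1u << TS_BITS)	/* toy 16-bit device counter */

	/* signed tick distance from the last sync (ts_dev_2) to the event (ts),
	 * given the previous sync (ts_dev_1); mirrors the delta_ts logic above */
	static int32_t event_delta(uint32_t ts_dev_1, uint32_t ts_dev_2, uint32_t ts)
	{
		int32_t delta = 0;

		if (ts_dev_1 < ts_dev_2) {
			if (ts < ts_dev_1)	/* event counter wrapped past 0 */
				delta = TS_MOD;
		} else if (ts_dev_1 < ts) {	/* sync wrapped, event did not */
			delta = -TS_MOD;
		}
		return delta + (int32_t)(ts - ts_dev_2);
	}

	int main(void)
	{
		/* event after sync, counter wrapped: 65530 -> 5 is +11 ticks */
		printf("%d\n", event_delta(65000, 65530, 5));	/* prints 11 */
		/* sync wrapped, event taken just before the wrap: -4 ticks */
		printf("%d\n", event_delta(65530, 2, 65534));	/* prints -4 */
		return 0;
	}

The old code masked an unsigned difference, which could only represent the event being *after* the last sync; the signed form also handles the event landing shortly *before* it.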
@@ -468,12 +468,18 @@ static int pcan_usb_fd_decode_canmsg(struct pcan_usb_fd_if *usb_if,
 				     struct pucan_msg *rx_msg)
 {
 	struct pucan_rx_msg *rm = (struct pucan_rx_msg *)rx_msg;
-	struct peak_usb_device *dev = usb_if->dev[pucan_msg_get_channel(rm)];
-	struct net_device *netdev = dev->netdev;
+	struct peak_usb_device *dev;
+	struct net_device *netdev;
 	struct canfd_frame *cfd;
 	struct sk_buff *skb;
 	const u16 rx_msg_flags = le16_to_cpu(rm->flags);

+	if (pucan_msg_get_channel(rm) >= ARRAY_SIZE(usb_if->dev))
+		return -ENOMEM;
+
+	dev = usb_if->dev[pucan_msg_get_channel(rm)];
+	netdev = dev->netdev;
+
 	if (rx_msg_flags & PUCAN_MSG_EXT_DATA_LEN) {
 		/* CANFD frame case */
 		skb = alloc_canfd_skb(netdev, &cfd);
@@ -519,15 +525,21 @@ static int pcan_usb_fd_decode_status(struct pcan_usb_fd_if *usb_if,
 				     struct pucan_msg *rx_msg)
 {
 	struct pucan_status_msg *sm = (struct pucan_status_msg *)rx_msg;
-	struct peak_usb_device *dev = usb_if->dev[pucan_stmsg_get_channel(sm)];
-	struct pcan_usb_fd_device *pdev =
-			container_of(dev, struct pcan_usb_fd_device, dev);
+	struct pcan_usb_fd_device *pdev;
 	enum can_state new_state = CAN_STATE_ERROR_ACTIVE;
 	enum can_state rx_state, tx_state;
-	struct net_device *netdev = dev->netdev;
+	struct peak_usb_device *dev;
+	struct net_device *netdev;
 	struct can_frame *cf;
 	struct sk_buff *skb;

+	if (pucan_stmsg_get_channel(sm) >= ARRAY_SIZE(usb_if->dev))
+		return -ENOMEM;
+
+	dev = usb_if->dev[pucan_stmsg_get_channel(sm)];
+	pdev = container_of(dev, struct pcan_usb_fd_device, dev);
+	netdev = dev->netdev;
+
 	/* nothing should be sent while in BUS_OFF state */
 	if (dev->can.state == CAN_STATE_BUS_OFF)
 		return 0;
@@ -579,9 +591,14 @@ static int pcan_usb_fd_decode_error(struct pcan_usb_fd_if *usb_if,
 				    struct pucan_msg *rx_msg)
 {
 	struct pucan_error_msg *er = (struct pucan_error_msg *)rx_msg;
-	struct peak_usb_device *dev = usb_if->dev[pucan_ermsg_get_channel(er)];
-	struct pcan_usb_fd_device *pdev =
-			container_of(dev, struct pcan_usb_fd_device, dev);
+	struct pcan_usb_fd_device *pdev;
+	struct peak_usb_device *dev;

+	if (pucan_ermsg_get_channel(er) >= ARRAY_SIZE(usb_if->dev))
+		return -EINVAL;
+
+	dev = usb_if->dev[pucan_ermsg_get_channel(er)];
+	pdev = container_of(dev, struct pcan_usb_fd_device, dev);
+
 	/* keep a trace of tx and rx error counters for later use */
 	pdev->bec.txerr = er->tx_err_cnt;
@@ -595,11 +612,17 @@ static int pcan_usb_fd_decode_overrun(struct pcan_usb_fd_if *usb_if,
 				      struct pucan_msg *rx_msg)
 {
 	struct pcan_ufd_ovr_msg *ov = (struct pcan_ufd_ovr_msg *)rx_msg;
-	struct peak_usb_device *dev = usb_if->dev[pufd_omsg_get_channel(ov)];
-	struct net_device *netdev = dev->netdev;
+	struct peak_usb_device *dev;
+	struct net_device *netdev;
 	struct can_frame *cf;
 	struct sk_buff *skb;

+	if (pufd_omsg_get_channel(ov) >= ARRAY_SIZE(usb_if->dev))
+		return -EINVAL;
+
+	dev = usb_if->dev[pufd_omsg_get_channel(ov)];
+	netdev = dev->netdev;
+
 	/* allocate an skb to store the error frame */
 	skb = alloc_can_err_skb(netdev, &cf);
 	if (!skb)
@@ -716,6 +739,9 @@ static int pcan_usb_fd_encode_msg(struct peak_usb_device *dev,
 	u16 tx_msg_size, tx_msg_flags;
 	u8 can_dlc;

+	if (cfd->len > CANFD_MAX_DLEN)
+		return -EINVAL;
+
 	tx_msg_size = ALIGN(sizeof(struct pucan_tx_msg) + cfd->len, 4);
 	tx_msg->size = cpu_to_le16(tx_msg_size);
 	tx_msg->type = cpu_to_le16(PUCAN_MSG_CAN_TX);
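Note: the recurring shape of the range-checking fixes above is validate-then-index: the channel number arrives from the USB device and is attacker/firmware controlled, so it must be checked against the array bound before it is used as a subscript. A stand-alone sketch of the pattern:

	#include <stddef.h>
	#include <errno.h>

	#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))

	struct device_state { int netdev; };
	static struct device_state devs[2];

	/* channel comes from an untrusted message, so validate before indexing */
	static int handle_msg(unsigned int channel)
	{
		struct device_state *dev;

		if (channel >= ARRAY_SIZE(devs))
			return -EINVAL;

		dev = &devs[channel];
		(void)dev;		/* ... decode using dev ... */
		return 0;
	}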
@@ -1384,7 +1384,7 @@ static int xcan_open(struct net_device *ndev)
 	if (ret < 0) {
 		netdev_err(ndev, "%s: pm_runtime_get failed(%d)\n",
 				__func__, ret);
-		return ret;
+		goto err;
 	}

 	ret = request_irq(ndev->irq, xcan_interrupt, priv->irq_flags,
@@ -1468,6 +1468,7 @@ static int xcan_get_berr_counter(const struct net_device *ndev,
 	if (ret < 0) {
 		netdev_err(ndev, "%s: pm_runtime_get failed(%d)\n",
 				__func__, ret);
+		pm_runtime_put(priv->dev);
 		return ret;
 	}

@@ -1783,7 +1784,7 @@ static int xcan_probe(struct platform_device *pdev)
 	if (ret < 0) {
 		netdev_err(ndev, "%s: pm_runtime_get failed(%d)\n",
 			__func__, ret);
-		goto err_pmdisable;
+		goto err_disableclks;
 	}

 	if (priv->read_reg(priv, XCAN_SR_OFFSET) != XCAN_SR_CONFIG_MASK) {
@@ -1818,7 +1819,6 @@ static int xcan_probe(struct platform_device *pdev)

 err_disableclks:
 	pm_runtime_put(priv->dev);
-err_pmdisable:
 	pm_runtime_disable(&pdev->dev);
 err_free:
 	free_candev(ndev);

@@ -2222,21 +2222,23 @@ static int igc_change_mtu(struct net_device *netdev, int new_mtu)
 }

 /**
- * igc_get_stats - Get System Network Statistics
+ * igc_get_stats64 - Get System Network Statistics
  * @netdev: network interface device structure
+ * @stats: rtnl_link_stats64 pointer
  *
- * Returns the address of the device statistics structure.
  * The statistics are updated here and also from the timer callback.
  */
-static struct net_device_stats *igc_get_stats(struct net_device *netdev)
+static void igc_get_stats64(struct net_device *netdev,
+			    struct rtnl_link_stats64 *stats)
 {
 	struct igc_adapter *adapter = netdev_priv(netdev);

+	spin_lock(&adapter->stats64_lock);
 	if (!test_bit(__IGC_RESETTING, &adapter->state))
 		igc_update_stats(adapter);
-
-	/* only return the current stats */
-	return &netdev->stats;
+	memcpy(stats, &adapter->stats64, sizeof(*stats));
+	spin_unlock(&adapter->stats64_lock);
 }

 static netdev_features_t igc_fix_features(struct net_device *netdev,
@@ -3984,7 +3986,7 @@ static const struct net_device_ops igc_netdev_ops = {
 	.ndo_start_xmit		= igc_xmit_frame,
 	.ndo_set_mac_address	= igc_set_mac,
 	.ndo_change_mtu		= igc_change_mtu,
-	.ndo_get_stats		= igc_get_stats,
+	.ndo_get_stats64	= igc_get_stats64,
 	.ndo_fix_features	= igc_fix_features,
 	.ndo_set_features	= igc_set_features,
 	.ndo_features_check	= igc_features_check,

@@ -1923,10 +1923,11 @@ void mlx5_del_flow_rules(struct mlx5_flow_handle *handle)
 	down_write_ref_node(&fte->node, false);
 	for (i = handle->num_rules - 1; i >= 0; i--)
 		tree_remove_node(&handle->rule[i]->node, true);
-	if (fte->modify_mask && fte->dests_size) {
-		modify_fte(fte);
+	if (fte->dests_size) {
+		if (fte->modify_mask)
+			modify_fte(fte);
 		up_write_ref_node(&fte->node, false);
-	} else {
+	} else if (list_empty(&fte->node.children)) {
 		del_hw_fte(&fte->node);
 		/* Avoid double call to del_hw_fte */
 		fte->node.del_hw_func = NULL;

@@ -672,14 +672,12 @@ clean_up:
 static int lan743x_dp_write(struct lan743x_adapter *adapter,
 			    u32 select, u32 addr, u32 length, u32 *buf)
 {
-	int ret = -EIO;
 	u32 dp_sel;
 	int i;

-	mutex_lock(&adapter->dp_lock);
 	if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
 				     1, 40, 100, 100))
-		goto unlock;
+		return -EIO;
 	dp_sel = lan743x_csr_read(adapter, DP_SEL);
 	dp_sel &= ~DP_SEL_MASK_;
 	dp_sel |= select;
@@ -691,13 +689,10 @@ static int lan743x_dp_write(struct lan743x_adapter *adapter,
 		lan743x_csr_write(adapter, DP_CMD, DP_CMD_WRITE_);
 		if (lan743x_csr_wait_for_bit(adapter, DP_SEL, DP_SEL_DPRDY_,
 					     1, 40, 100, 100))
-			goto unlock;
+			return -EIO;
 	}
-	ret = 0;

-unlock:
-	mutex_unlock(&adapter->dp_lock);
-	return ret;
+	return 0;
 }

 static u32 lan743x_mac_mii_access(u16 id, u16 index, int read)
@@ -2674,7 +2669,6 @@ static int lan743x_hardware_init(struct lan743x_adapter *adapter,

 	adapter->intr.irq = adapter->pdev->irq;
 	lan743x_csr_write(adapter, INT_EN_CLR, 0xFFFFFFFF);
-	mutex_init(&adapter->dp_lock);

 	ret = lan743x_gpio_init(adapter);
 	if (ret)

@@ -706,9 +706,6 @@ struct lan743x_adapter {
 	struct lan743x_csr      csr;
 	struct lan743x_intr     intr;

-	/* lock, used to prevent concurrent access to data port */
-	struct mutex		dp_lock;
-
 	struct lan743x_gpio	gpio;
 	struct lan743x_ptp	ptp;

@@ -5846,7 +5846,8 @@ static bool rtl8169_tso_csum_v2(struct rtl8169_private *tp,
 		opts[1] |= transport_offset << TCPHO_SHIFT;
 	} else {
 		if (unlikely(rtl_test_hw_pad_bug(tp, skb)))
-			return !eth_skb_pad(skb);
+			/* eth_skb_pad would free the skb on error */
+			return !__skb_put_padto(skb, ETH_ZLEN, false);
 	}

 	return true;

@@ -332,8 +332,7 @@ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev)
 	return ret;
 }

-static int vrf_finish_direct(struct net *net, struct sock *sk,
-			     struct sk_buff *skb)
+static void vrf_finish_direct(struct sk_buff *skb)
 {
 	struct net_device *vrf_dev = skb->dev;

@@ -352,7 +351,8 @@ static netdev_tx_t vrf_xmit(struct sk_buff *skb, struct net_device *dev)
 		skb_pull(skb, ETH_HLEN);
 	}

-	return 1;
+	/* reset skb device */
+	nf_reset_ct(skb);
 }

 #if IS_ENABLED(CONFIG_IPV6)
@@ -431,15 +431,41 @@ static struct sk_buff *vrf_ip6_out_redirect(struct net_device *vrf_dev,
 	return skb;
 }

+static int vrf_output6_direct_finish(struct net *net, struct sock *sk,
+				     struct sk_buff *skb)
+{
+	vrf_finish_direct(skb);
+
+	return vrf_ip6_local_out(net, sk, skb);
+}
+
 static int vrf_output6_direct(struct net *net, struct sock *sk,
 			      struct sk_buff *skb)
 {
+	int err = 1;
+
 	skb->protocol = htons(ETH_P_IPV6);

-	return NF_HOOK_COND(NFPROTO_IPV6, NF_INET_POST_ROUTING,
-			    net, sk, skb, NULL, skb->dev,
-			    vrf_finish_direct,
-			    !(IPCB(skb)->flags & IPSKB_REROUTED));
+	if (!(IPCB(skb)->flags & IPSKB_REROUTED))
+		err = nf_hook(NFPROTO_IPV6, NF_INET_POST_ROUTING, net, sk, skb,
+			      NULL, skb->dev, vrf_output6_direct_finish);
+
+	if (likely(err == 1))
+		vrf_finish_direct(skb);
+
+	return err;
+}
+
+static int vrf_ip6_out_direct_finish(struct net *net, struct sock *sk,
+				     struct sk_buff *skb)
+{
+	int err;
+
+	err = vrf_output6_direct(net, sk, skb);
+	if (likely(err == 1))
+		err = vrf_ip6_local_out(net, sk, skb);
+
+	return err;
 }

 static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,
@@ -452,18 +478,15 @@ static struct sk_buff *vrf_ip6_out_direct(struct net_device *vrf_dev,
 	skb->dev = vrf_dev;

 	err = nf_hook(NFPROTO_IPV6, NF_INET_LOCAL_OUT, net, sk,
-		      skb, NULL, vrf_dev, vrf_output6_direct);
+		      skb, NULL, vrf_dev, vrf_ip6_out_direct_finish);

 	if (likely(err == 1))
 		err = vrf_output6_direct(net, sk, skb);

-	/* reset skb device */
 	if (likely(err == 1))
-		nf_reset_ct(skb);
-	else
-		skb = NULL;
+		return skb;

-	return skb;
+	return NULL;
 }

 static struct sk_buff *vrf_ip6_out(struct net_device *vrf_dev,
@@ -643,15 +666,41 @@ static struct sk_buff *vrf_ip_out_redirect(struct net_device *vrf_dev,
 	return skb;
 }

+static int vrf_output_direct_finish(struct net *net, struct sock *sk,
+				    struct sk_buff *skb)
+{
+	vrf_finish_direct(skb);
+
+	return vrf_ip_local_out(net, sk, skb);
+}
+
 static int vrf_output_direct(struct net *net, struct sock *sk,
 			     struct sk_buff *skb)
 {
+	int err = 1;
+
 	skb->protocol = htons(ETH_P_IP);

-	return NF_HOOK_COND(NFPROTO_IPV4, NF_INET_POST_ROUTING,
-			    net, sk, skb, NULL, skb->dev,
-			    vrf_finish_direct,
-			    !(IPCB(skb)->flags & IPSKB_REROUTED));
+	if (!(IPCB(skb)->flags & IPSKB_REROUTED))
+		err = nf_hook(NFPROTO_IPV4, NF_INET_POST_ROUTING, net, sk, skb,
+			      NULL, skb->dev, vrf_output_direct_finish);
+
+	if (likely(err == 1))
+		vrf_finish_direct(skb);
+
+	return err;
+}
+
+static int vrf_ip_out_direct_finish(struct net *net, struct sock *sk,
+				    struct sk_buff *skb)
+{
+	int err;
+
+	err = vrf_output_direct(net, sk, skb);
+	if (likely(err == 1))
+		err = vrf_ip_local_out(net, sk, skb);
+
+	return err;
 }

 static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,
@@ -664,18 +713,15 @@ static struct sk_buff *vrf_ip_out_direct(struct net_device *vrf_dev,
 	skb->dev = vrf_dev;

 	err = nf_hook(NFPROTO_IPV4, NF_INET_LOCAL_OUT, net, sk,
-		      skb, NULL, vrf_dev, vrf_output_direct);
+		      skb, NULL, vrf_dev, vrf_ip_out_direct_finish);

 	if (likely(err == 1))
 		err = vrf_output_direct(net, sk, skb);

-	/* reset skb device */
 	if (likely(err == 1))
-		nf_reset_ct(skb);
-	else
-		skb = NULL;
+		return skb;

-	return skb;
+	return NULL;
 }

 static struct sk_buff *vrf_ip_out(struct net_device *vrf_dev,
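Note: the vrf rework above replaces NF_HOOK_COND() with an open-coded nf_hook() plus explicit finish calls. My reading of the change: NF_HOOK_COND() only behaves correctly when the hook verdict is delivered synchronously; with asynchronous rules the continuation (okfn) runs later from another context, so the code after the hook must not assume the packet was already finished, and the local-out continuation has to live in the okfn itself. A toy stand-alone model of that split (this is an illustration of the control flow, not the netfilter API):

	#include <stdio.h>

	/* toy verdicts: 1 = accepted synchronously, 0 = queued for later;
	 * a queued packet is finished when its callback finally runs */
	static int (*queued_okfn)(int pkt);
	static int queued_pkt;

	static int toy_nf_hook(int pkt, int async, int (*okfn)(int))
	{
		if (!async)
			return 1;		/* caller may continue inline */
		queued_okfn = okfn;		/* verdict deferred */
		queued_pkt = pkt;
		return 0;
	}

	static int finish(int pkt)
	{
		printf("finish pkt %d\n", pkt);
		return 1;
	}

	static int output_direct(int pkt, int async)
	{
		int err = toy_nf_hook(pkt, async, finish);

		if (err == 1)			/* sync accept: finish here */
			err = finish(pkt);
		return err;			/* 0 means: hands off the pkt now */
	}

	int main(void)
	{
		output_direct(1, 0);		/* sync path: finish runs inline */
		output_direct(2, 1);		/* async path: nothing yet... */
		queued_okfn(queued_pkt);	/* ...finish runs from the callback */
		return 0;
	}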
@@ -889,6 +889,7 @@ static ssize_t cosa_write(struct file *file,
 			chan->tx_status = 1;
 			spin_unlock_irqrestore(&cosa->lock, flags);
 			up(&chan->wsem);
+			kfree(kbuf);
 			return -ERESTARTSYS;
 		}
 	}

@@ -973,7 +973,7 @@ static bool ath9k_rx_prepare(struct ath9k_htc_priv *priv,
 	struct ath_htc_rx_status *rxstatus;
 	struct ath_rx_status rx_stats;
 	bool decrypt_error = false;
-	__be16 rs_datalen;
+	u16 rs_datalen;
 	bool is_phyerr;

 	if (skb->len < HTC_RX_FRAME_HEADER_SIZE) {

@@ -4226,8 +4226,7 @@ void nvme_start_queues(struct nvme_ctrl *ctrl)
 }
 EXPORT_SYMBOL_GPL(nvme_start_queues);

-
-void nvme_sync_queues(struct nvme_ctrl *ctrl)
+void nvme_sync_io_queues(struct nvme_ctrl *ctrl)
 {
 	struct nvme_ns *ns;

@@ -4235,7 +4234,12 @@ void nvme_sync_queues(struct nvme_ctrl *ctrl)
 	list_for_each_entry(ns, &ctrl->namespaces, list)
 		blk_sync_queue(ns->queue);
 	up_read(&ctrl->namespaces_rwsem);
+}
+EXPORT_SYMBOL_GPL(nvme_sync_io_queues);

+void nvme_sync_queues(struct nvme_ctrl *ctrl)
+{
+	nvme_sync_io_queues(ctrl);
 	if (ctrl->admin_q)
 		blk_sync_queue(ctrl->admin_q);
 }

@@ -494,6 +494,7 @@ void nvme_stop_queues(struct nvme_ctrl *ctrl);
 void nvme_start_queues(struct nvme_ctrl *ctrl);
 void nvme_kill_queues(struct nvme_ctrl *ctrl);
 void nvme_sync_queues(struct nvme_ctrl *ctrl);
+void nvme_sync_io_queues(struct nvme_ctrl *ctrl);
 void nvme_unfreeze(struct nvme_ctrl *ctrl);
 void nvme_wait_freeze(struct nvme_ctrl *ctrl);
 int nvme_wait_freeze_timeout(struct nvme_ctrl *ctrl, long timeout);

@@ -110,7 +110,6 @@ struct nvme_rdma_ctrl {
 	struct sockaddr_storage src_addr;

 	struct nvme_ctrl	ctrl;
-	struct mutex		teardown_lock;
 	bool			use_inline_data;
 	u32			io_queues[HCTX_MAX_TYPES];
 };
@@ -933,8 +932,8 @@ out_free_io_queues:
 static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
 		bool remove)
 {
-	mutex_lock(&ctrl->teardown_lock);
 	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
+	blk_sync_queue(ctrl->ctrl.admin_q);
 	nvme_rdma_stop_queue(&ctrl->queues[0]);
 	if (ctrl->ctrl.admin_tagset) {
 		blk_mq_tagset_busy_iter(ctrl->ctrl.admin_tagset,
@@ -944,16 +943,15 @@ static void nvme_rdma_teardown_admin_queue(struct nvme_rdma_ctrl *ctrl,
 	if (remove)
 		blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
 	nvme_rdma_destroy_admin_queue(ctrl, remove);
-	mutex_unlock(&ctrl->teardown_lock);
 }

 static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
 		bool remove)
 {
-	mutex_lock(&ctrl->teardown_lock);
 	if (ctrl->ctrl.queue_count > 1) {
 		nvme_start_freeze(&ctrl->ctrl);
 		nvme_stop_queues(&ctrl->ctrl);
+		nvme_sync_io_queues(&ctrl->ctrl);
 		nvme_rdma_stop_io_queues(ctrl);
 		if (ctrl->ctrl.tagset) {
 			blk_mq_tagset_busy_iter(ctrl->ctrl.tagset,
@@ -964,7 +962,6 @@ static void nvme_rdma_teardown_io_queues(struct nvme_rdma_ctrl *ctrl,
 		nvme_start_queues(&ctrl->ctrl);
 		nvme_rdma_destroy_io_queues(ctrl, remove);
 	}
-	mutex_unlock(&ctrl->teardown_lock);
 }

 static void nvme_rdma_free_ctrl(struct nvme_ctrl *nctrl)
@@ -1728,16 +1725,12 @@ static void nvme_rdma_complete_timed_out(struct request *rq)
 {
 	struct nvme_rdma_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_rdma_queue *queue = req->queue;
-	struct nvme_rdma_ctrl *ctrl = queue->ctrl;

-	/* fence other contexts that may complete the command */
-	mutex_lock(&ctrl->teardown_lock);
 	nvme_rdma_stop_queue(queue);
-	if (!blk_mq_request_completed(rq)) {
+	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) {
 		nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
 		blk_mq_complete_request(rq);
 	}
-	mutex_unlock(&ctrl->teardown_lock);
 }

 static enum blk_eh_timer_return
@@ -2029,7 +2022,6 @@ static struct nvme_ctrl *nvme_rdma_create_ctrl(struct device *dev,
 		return ERR_PTR(-ENOMEM);
 	ctrl->ctrl.opts = opts;
 	INIT_LIST_HEAD(&ctrl->list);
-	mutex_init(&ctrl->teardown_lock);

 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
 		opts->trsvcid =

@@ -110,7 +110,6 @@ struct nvme_tcp_ctrl {
 	struct sockaddr_storage src_addr;
 	struct nvme_ctrl	ctrl;

-	struct mutex		teardown_lock;
 	struct work_struct	err_work;
 	struct delayed_work	connect_work;
 	struct nvme_tcp_request async_req;
@@ -1797,8 +1796,8 @@ out_free_queue:
 static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
 		bool remove)
 {
-	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	blk_mq_quiesce_queue(ctrl->admin_q);
+	blk_sync_queue(ctrl->admin_q);
 	nvme_tcp_stop_queue(ctrl, 0);
 	if (ctrl->admin_tagset) {
 		blk_mq_tagset_busy_iter(ctrl->admin_tagset,
@@ -1808,18 +1807,17 @@ static void nvme_tcp_teardown_admin_queue(struct nvme_ctrl *ctrl,
 	if (remove)
 		blk_mq_unquiesce_queue(ctrl->admin_q);
 	nvme_tcp_destroy_admin_queue(ctrl, remove);
-	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
 }

 static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 		bool remove)
 {
-	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	if (ctrl->queue_count <= 1)
-		goto out;
+		return;
+	blk_mq_quiesce_queue(ctrl->admin_q);
 	nvme_start_freeze(ctrl);
 	nvme_stop_queues(ctrl);
+	nvme_sync_io_queues(ctrl);
 	nvme_tcp_stop_io_queues(ctrl);
 	if (ctrl->tagset) {
 		blk_mq_tagset_busy_iter(ctrl->tagset,
@@ -1829,8 +1827,6 @@ static void nvme_tcp_teardown_io_queues(struct nvme_ctrl *ctrl,
 	if (remove)
 		nvme_start_queues(ctrl);
 	nvme_tcp_destroy_io_queues(ctrl, remove);
-out:
-	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
 }

 static void nvme_tcp_reconnect_or_remove(struct nvme_ctrl *ctrl)
@@ -2074,14 +2070,11 @@ static void nvme_tcp_complete_timed_out(struct request *rq)
 	struct nvme_tcp_request *req = blk_mq_rq_to_pdu(rq);
 	struct nvme_ctrl *ctrl = &req->queue->ctrl->ctrl;

-	/* fence other contexts that may complete the command */
-	mutex_lock(&to_tcp_ctrl(ctrl)->teardown_lock);
 	nvme_tcp_stop_queue(ctrl, nvme_tcp_queue_id(req->queue));
-	if (!blk_mq_request_completed(rq)) {
+	if (blk_mq_request_started(rq) && !blk_mq_request_completed(rq)) {
 		nvme_req(rq)->status = NVME_SC_HOST_ABORTED_CMD;
 		blk_mq_complete_request(rq);
 	}
-	mutex_unlock(&to_tcp_ctrl(ctrl)->teardown_lock);
 }

 static enum blk_eh_timer_return
@@ -2344,7 +2337,6 @@ static struct nvme_ctrl *nvme_tcp_create_ctrl(struct device *dev,
 			nvme_tcp_reconnect_ctrl_work);
 	INIT_WORK(&ctrl->err_work, nvme_tcp_error_recovery_work);
 	INIT_WORK(&ctrl->ctrl.reset_work, nvme_reset_ctrl_work);
-	mutex_init(&ctrl->teardown_lock);

 	if (!(opts->mask & NVMF_OPT_TRSVCID)) {
 		opts->trsvcid =

@@ -1019,11 +1019,13 @@ out:
  */
 bool of_dma_is_coherent(struct device_node *np)
 {
-	struct device_node *node = of_node_get(np);
+	struct device_node *node;

 	if (IS_ENABLED(CONFIG_OF_DMA_DEFAULT_COHERENT))
 		return true;

+	node = of_node_get(np);
+
 	while (node) {
 		if (of_property_read_bool(node, "dma-coherent")) {
 			of_node_put(node);

@@ -1047,6 +1047,10 @@ static void _opp_table_kref_release(struct kref *kref)
 	struct opp_table *opp_table = container_of(kref, struct opp_table, kref);
 	struct opp_device *opp_dev, *temp;

+	/* Drop the lock as soon as we can */
+	list_del(&opp_table->node);
+	mutex_unlock(&opp_table_lock);
+
 	_of_clear_opp_table(opp_table);

 	/* Release clk */
@@ -1068,10 +1072,7 @@ static void _opp_table_kref_release(struct kref *kref)

 	mutex_destroy(&opp_table->genpd_virt_dev_lock);
 	mutex_destroy(&opp_table->lock);
-	list_del(&opp_table->node);
 	kfree(opp_table);
-
-	mutex_unlock(&opp_table_lock);
 }

 void _opp_remove_all_static(struct opp_table *opp_table)

@@ -314,6 +314,9 @@ static void qcom_pcie_deinit_2_1_0(struct qcom_pcie *pcie)
 	clk_disable_unprepare(res->core_clk);
 	clk_disable_unprepare(res->aux_clk);
 	clk_disable_unprepare(res->ref_clk);
+
+	writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
 	regulator_bulk_disable(ARRAY_SIZE(res->supplies), res->supplies);
 }

@@ -326,6 +329,16 @@ static int qcom_pcie_init_2_1_0(struct qcom_pcie *pcie)
 	u32 val;
 	int ret;

+	/* reset the PCIe interface as uboot can leave it undefined state */
+	reset_control_assert(res->pci_reset);
+	reset_control_assert(res->axi_reset);
+	reset_control_assert(res->ahb_reset);
+	reset_control_assert(res->por_reset);
+	reset_control_assert(res->ext_reset);
+	reset_control_assert(res->phy_reset);
+
+	writel(1, pcie->parf + PCIE20_PARF_PHY_CTRL);
+
 	ret = regulator_bulk_enable(ARRAY_SIZE(res->supplies), res->supplies);
 	if (ret < 0) {
 		dev_err(dev, "cannot enable regulators\n");

@@ -277,13 +277,14 @@ int aspeed_pinmux_set_mux(struct pinctrl_dev *pctldev, unsigned int function,
 static bool aspeed_expr_is_gpio(const struct aspeed_sig_expr *expr)
 {
 	/*
-	 * The signal type is GPIO if the signal name has "GPIO" as a prefix.
+	 * The signal type is GPIO if the signal name has "GPI" as a prefix.
 	 * strncmp (rather than strcmp) is used to implement the prefix
 	 * requirement.
 	 *
-	 * expr->signal might look like "GPIOT3" in the GPIO case.
+	 * expr->signal might look like "GPIOB1" in the GPIO case.
+	 * expr->signal might look like "GPIT0" in the GPI case.
 	 */
-	return strncmp(expr->signal, "GPIO", 4) == 0;
+	return strncmp(expr->signal, "GPI", 3) == 0;
 }

 static bool aspeed_gpio_in_exprs(const struct aspeed_sig_expr **exprs)

@@ -662,6 +662,10 @@ static int intel_config_set_pull(struct intel_pinctrl *pctrl, unsigned int pin,

 		value |= PADCFG1_TERM_UP;

+		/* Set default strength value in case none is given */
+		if (arg == 1)
+			arg = 5000;
+
 		switch (arg) {
 		case 20000:
 			value |= PADCFG1_TERM_20K << PADCFG1_TERM_SHIFT;
@@ -684,6 +688,10 @@ static int intel_config_set_pull(struct intel_pinctrl *pctrl, unsigned int pin,
 	case PIN_CONFIG_BIAS_PULL_DOWN:
 		value &= ~(PADCFG1_TERM_UP | PADCFG1_TERM_MASK);

+		/* Set default strength value in case none is given */
+		if (arg == 1)
+			arg = 5000;
+
 		switch (arg) {
 		case 20000:
 			value |= PADCFG1_TERM_20K << PADCFG1_TERM_SHIFT;

@@ -153,7 +153,7 @@ static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
 			pin_reg |= BIT(DB_TMR_OUT_UNIT_OFF);
 			pin_reg &= ~BIT(DB_TMR_LARGE_OFF);
 		} else if (debounce < 250000) {
-			time = debounce / 15600;
+			time = debounce / 15625;
 			pin_reg |= time & DB_TMR_OUT_MASK;
 			pin_reg &= ~BIT(DB_TMR_OUT_UNIT_OFF);
 			pin_reg |= BIT(DB_TMR_LARGE_OFF);
@@ -163,14 +163,14 @@ static int amd_gpio_set_debounce(struct gpio_chip *gc, unsigned offset,
 			pin_reg |= BIT(DB_TMR_OUT_UNIT_OFF);
 			pin_reg |= BIT(DB_TMR_LARGE_OFF);
 		} else {
-			pin_reg &= ~DB_CNTRl_MASK;
+			pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF);
 			ret = -EINVAL;
 		}
 	} else {
 		pin_reg &= ~BIT(DB_TMR_OUT_UNIT_OFF);
 		pin_reg &= ~BIT(DB_TMR_LARGE_OFF);
 		pin_reg &= ~DB_TMR_OUT_MASK;
-		pin_reg &= ~DB_CNTRl_MASK;
+		pin_reg &= ~(DB_CNTRl_MASK << DB_CNTRL_OFF);
 	}
 	writel(pin_reg, gpio_dev->base + offset * 4);
 	raw_spin_unlock_irqrestore(&gpio_dev->lock, flags);
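Note: a quick sanity check on the new 15625 divisor. Assuming the debounce timer in this range ticks once per 512 cycles of a 32.768 kHz RTC clock (my reading of the upstream commit title, "use higher precision for 512 RtcClk"; the clock rate is an assumption here, not stated in the hunk):

	512 cycles / 32768 Hz = 0.015625 s = 15625 us per tick

so a requested debounce of, say, 100000 us programs 100000 / 15625 = 6 ticks (93.75 ms), whereas the old divisor 15600 was a rounded approximation of the same unit. The second change fixes a separate bug: DB_CNTRl_MASK must be shifted by DB_CNTRL_OFF before it can clear the control field in the register.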
@@ -658,8 +658,8 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
 					rcu_read_lock();
 					list_for_each_entry_rcu(h,
 						&tmp_pg->dh_list, node) {
-						/* h->sdev should always be valid */
-						BUG_ON(!h->sdev);
+						if (!h->sdev)
+							continue;
 						h->sdev->access_state = desc[0];
 					}
 					rcu_read_unlock();
@@ -705,7 +705,8 @@ static int alua_rtpg(struct scsi_device *sdev, struct alua_port_group *pg)
 			pg->expiry = 0;
 			rcu_read_lock();
 			list_for_each_entry_rcu(h, &pg->dh_list, node) {
-				BUG_ON(!h->sdev);
+				if (!h->sdev)
+					continue;
 				h->sdev->access_state =
 					(pg->state & SCSI_ACCESS_STATE_MASK);
 				if (pg->pref)
@@ -1147,7 +1148,6 @@ static void alua_bus_detach(struct scsi_device *sdev)
 	spin_lock(&h->pg_lock);
 	pg = rcu_dereference_protected(h->pg, lockdep_is_held(&h->pg_lock));
 	rcu_assign_pointer(h->pg, NULL);
-	h->sdev = NULL;
 	spin_unlock(&h->pg_lock);
 	if (pg) {
 		spin_lock_irq(&pg->lock);
@@ -1156,6 +1156,7 @@ static void alua_bus_detach(struct scsi_device *sdev)
 		kref_put(&pg->kref, release_port_group);
 	}
 	sdev->handler_data = NULL;
+	synchronize_rcu();
 	kfree(h);
 }

@@ -8854,7 +8854,7 @@ reinit_after_soft_reset:
 	/* hook into SCSI subsystem */
 	rc = hpsa_scsi_add_host(h);
 	if (rc)
-		goto clean7;	/* perf, sg, cmd, irq, shost, pci, lu, aer/h */
+		goto clean8;	/* lastlogicals, perf, sg, cmd, irq, shost, pci, lu, aer/h */

 	/* Monitor the controller for firmware lockups */
 	h->heartbeat_sample_interval = HEARTBEAT_SAMPLE_INTERVAL;
@@ -8869,6 +8869,8 @@ reinit_after_soft_reset:
 				HPSA_EVENT_MONITOR_INTERVAL);
 	return 0;

+clean8:	/* lastlogicals, perf, sg, cmd, irq, shost, pci, lu, aer/h */
+	kfree(h->lastlogicals);
 clean7:	/* perf, sg, cmd, irq, shost, pci, lu, aer/h */
 	hpsa_free_performant_mode(h);
 	h->access.set_intr_mask(h, HPSA_INTR_OFF);

@@ -1641,6 +1641,13 @@ _base_irqpoll(struct irq_poll *irqpoll, int budget)
 		reply_q->irq_poll_scheduled = false;
 		reply_q->irq_line_enable = true;
 		enable_irq(reply_q->os_irq);
+		/*
+		 * Go for one more round of processing the
+		 * reply descriptor post queue incase if HBA
+		 * Firmware has posted some reply descriptors
+		 * while reenabling the IRQ.
+		 */
+		_base_process_reply_queue(reply_q);
 	}

 	return num_entries;

@@ -1179,7 +1179,6 @@ static int bcm2835_spi_setup(struct spi_device *spi)
 	struct spi_controller *ctlr = spi->controller;
 	struct bcm2835_spi *bs = spi_controller_get_devdata(ctlr);
 	struct gpio_chip *chip;
-	enum gpio_lookup_flags lflags;
 	u32 cs;

 	/*
@@ -1247,7 +1246,7 @@ static int bcm2835_spi_setup(struct spi_device *spi)

 	spi->cs_gpiod = gpiochip_request_own_desc(chip, 8 - spi->chip_select,
 						  DRV_NAME,
-						  lflags,
+						  GPIO_LOOKUP_FLAGS_DEFAULT,
 						  GPIOD_OUT_LOW);
 	if (IS_ERR(spi->cs_gpiod))
 		return PTR_ERR(spi->cs_gpiod);

@@ -410,12 +410,23 @@ static int ring_request_msix(struct tb_ring *ring, bool no_suspend)

 	ring->vector = ret;

-	ring->irq = pci_irq_vector(ring->nhi->pdev, ring->vector);
-	if (ring->irq < 0)
-		return ring->irq;
+	ret = pci_irq_vector(ring->nhi->pdev, ring->vector);
+	if (ret < 0)
+		goto err_ida_remove;
+
+	ring->irq = ret;

 	irqflags = no_suspend ? IRQF_NO_SUSPEND : 0;
-	return request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
+	ret = request_irq(ring->irq, ring_msix, irqflags, "thunderbolt", ring);
+	if (ret)
+		goto err_ida_remove;
+
+	return 0;
+
+err_ida_remove:
+	ida_simple_remove(&nhi->msix_ida, ring->vector);
+
+	return ret;
 }

 static void ring_release_msix(struct tb_ring *ring)

@@ -830,6 +830,7 @@ static void enumerate_services(struct tb_xdomain *xd)

 		id = ida_simple_get(&xd->service_ids, 0, 0, GFP_KERNEL);
 		if (id < 0) {
+			kfree(svc->key);
 			kfree(svc);
 			break;
 		}

@@ -413,10 +413,10 @@ static int uio_get_minor(struct uio_device *idev)
 	return retval;
 }

-static void uio_free_minor(struct uio_device *idev)
+static void uio_free_minor(unsigned long minor)
 {
 	mutex_lock(&minor_lock);
-	idr_remove(&uio_idr, idev->minor);
+	idr_remove(&uio_idr, minor);
 	mutex_unlock(&minor_lock);
 }

@@ -990,7 +990,7 @@ err_request_irq:
 err_uio_dev_add_attributes:
 	device_del(&idev->dev);
 err_device_create:
-	uio_free_minor(idev);
+	uio_free_minor(idev->minor);
 	put_device(&idev->dev);
 	return ret;
 }
@@ -1004,11 +1004,13 @@ EXPORT_SYMBOL_GPL(__uio_register_device);
 void uio_unregister_device(struct uio_info *info)
 {
 	struct uio_device *idev;
+	unsigned long minor;

 	if (!info || !info->uio_dev)
 		return;

 	idev = info->uio_dev;
+	minor = idev->minor;

 	mutex_lock(&idev->info_lock);
 	uio_dev_del_attributes(idev);
@@ -1024,7 +1026,7 @@ void uio_unregister_device(struct uio_info *info)

 	device_unregister(&idev->dev);

-	uio_free_minor(idev);
+	uio_free_minor(minor);

 	return;
 }
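Note: the uio change is a use-after-free fix with a simple shape: device_unregister() can drop the last reference and free idev, so anything needed afterwards (here, the minor number) must be copied out first. A stand-alone illustration of the pattern:

	#include <stdio.h>
	#include <stdlib.h>

	struct dev { unsigned long minor; };

	static void free_minor(unsigned long minor)
	{
		printf("freeing minor %lu\n", minor);
	}

	static void unregister(struct dev *d)
	{
		unsigned long minor = d->minor;	/* copy out before d may be freed */

		free(d);			/* models the final put inside device_unregister() */
		free_minor(minor);		/* safe: no access to *d here */
	}

	int main(void)
	{
		struct dev *d = malloc(sizeof(*d));

		if (!d)
			return 1;
		d->minor = 3;
		unregister(d);
		return 0;
	}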
@@ -1706,6 +1706,15 @@ static const struct usb_device_id acm_ids[] = {
 	{ USB_DEVICE(0x0870, 0x0001), /* Metricom GS Modem */
 	.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
 	},
+	{ USB_DEVICE(0x045b, 0x023c),	/* Renesas USB Download mode */
+	.driver_info = DISABLE_ECHO,	/* Don't echo banner */
+	},
+	{ USB_DEVICE(0x045b, 0x0248),	/* Renesas USB Download mode */
+	.driver_info = DISABLE_ECHO,	/* Don't echo banner */
+	},
+	{ USB_DEVICE(0x045b, 0x024D),	/* Renesas USB Download mode */
+	.driver_info = DISABLE_ECHO,	/* Don't echo banner */
+	},
 	{ USB_DEVICE(0x0e8d, 0x0003), /* FIREFLY, MediaTek Inc; andrey.arapov@gmail.com */
 	.driver_info = NO_UNION_NORMAL, /* has no union descriptor */
 	},

@@ -40,6 +40,7 @@
 #define PCI_DEVICE_ID_INTEL_TGPLP		0xa0ee
 #define PCI_DEVICE_ID_INTEL_TGPH		0x43ee
 #define PCI_DEVICE_ID_INTEL_JSP			0x4dee
+#define PCI_DEVICE_ID_INTEL_ADLS		0x7ae1

 #define PCI_INTEL_BXT_DSM_GUID	"732b85d5-b7a7-4a1b-9ba0-4bbd00ffd511"
 #define PCI_INTEL_BXT_FUNC_PMU_PWR	4
@@ -367,6 +368,9 @@ static const struct pci_device_id dwc3_pci_id_table[] = {
 	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_JSP),
 	  (kernel_ulong_t) &dwc3_pci_intel_properties, },

+	{ PCI_VDEVICE(INTEL, PCI_DEVICE_ID_INTEL_ADLS),
+	  (kernel_ulong_t) &dwc3_pci_intel_properties, },
+
 	{ PCI_VDEVICE(AMD, PCI_DEVICE_ID_AMD_NL_USB),
 	  (kernel_ulong_t) &dwc3_pci_amd_properties, },
 	{  }	/* Terminating Entry */

@@ -2627,6 +2627,11 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
 		ret = dwc3_gadget_ep_reclaim_trb_linear(dep, req, event,
 				status);

+	req->request.actual = req->request.length - req->remaining;
+
+	if (!dwc3_gadget_ep_request_completed(req))
+		goto out;
+
 	if (req->needs_extra_trb) {
 		unsigned int maxp = usb_endpoint_maxp(dep->endpoint.desc);

@@ -2642,13 +2647,6 @@ static int dwc3_gadget_ep_cleanup_completed_request(struct dwc3_ep *dep,
 		req->needs_extra_trb = false;
 	}

-	req->request.actual = req->request.length - req->remaining;
-
-	if (!dwc3_gadget_ep_request_completed(req)) {
-		__dwc3_gadget_kick_transfer(dep);
-		goto out;
-	}
-
 	dwc3_gadget_giveback(dep, req, status);

 out:
@@ -2671,6 +2669,24 @@ static void dwc3_gadget_ep_cleanup_completed_requests(struct dwc3_ep *dep,
 	}
 }

+static bool dwc3_gadget_ep_should_continue(struct dwc3_ep *dep)
+{
+	struct dwc3_request	*req;
+
+	if (!list_empty(&dep->pending_list))
+		return true;
+
+	/*
+	 * We only need to check the first entry of the started list. We can
+	 * assume the completed requests are removed from the started list.
+	 */
+	req = next_request(&dep->started_list);
+	if (!req)
+		return false;
+
+	return !dwc3_gadget_ep_request_completed(req);
+}
+
 static void dwc3_gadget_endpoint_frame_from_event(struct dwc3_ep *dep,
 		const struct dwc3_event_depevt *event)
 {
@@ -2700,6 +2716,8 @@ static void dwc3_gadget_endpoint_transfer_in_progress(struct dwc3_ep *dep,

 	if (stop)
 		dwc3_stop_active_transfer(dep, true, true);
+	else if (dwc3_gadget_ep_should_continue(dep))
+		__dwc3_gadget_kick_transfer(dep);

 	/*
 	 * WORKAROUND: This is the 2nd half of U1/U2 -> U0 workaround.
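Note: the new dwc3_gadget_ep_should_continue() helper above encodes a simple invariant: kick the controller again only if there is still work to do, i.e. a request waiting to start, or a started request whose transfer is not yet complete. A toy stand-alone version of the same decision (simplified types, not the driver's structures):

	#include <stdbool.h>
	#include <stddef.h>

	struct req { bool completed; struct req *next; };

	/* mirrors dwc3_gadget_ep_should_continue(): completed requests are
	 * assumed to have already been removed from the started list */
	static bool should_continue(const struct req *pending, const struct req *started)
	{
		if (pending)			/* queued but not yet started */
			return true;
		if (!started)			/* nothing in flight */
			return false;
		return !started->completed;	/* first in-flight entry still running */
	}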
@@ -1757,6 +1757,7 @@ static int goku_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 		goto err;
 	}

+	pci_set_drvdata(pdev, dev);
 	spin_lock_init(&dev->lock);
 	dev->pdev = pdev;
 	dev->gadget.ops = &goku_ops;
@@ -1790,7 +1791,6 @@ static int goku_probe(struct pci_dev *pdev, const struct pci_device_id *id)
 	}
 	dev->regs = (struct goku_udc_regs __iomem *) base;

-	pci_set_drvdata(pdev, dev);
 	INFO(dev, "%s\n", driver_desc);
 	INFO(dev, "version: " DRIVER_VERSION " %s\n", dmastr());
 	INFO(dev, "irq %d, pci mem %p\n", pdev->irq, base);

@@ -241,7 +241,7 @@ static int xhci_histb_probe(struct platform_device *pdev)
 	/* Initialize dma_mask and coherent_dma_mask to 32-bits */
 	ret = dma_set_mask_and_coherent(dev, DMA_BIT_MASK(32));
 	if (ret)
-		return ret;
+		goto disable_pm;

 	hcd = usb_create_hcd(driver, dev, dev_name(dev));
 	if (!hcd) {

@@ -334,7 +334,7 @@ static int vfio_pci_enable(struct vfio_pci_device *vdev)
 	    pdev->vendor == PCI_VENDOR_ID_INTEL &&
 	    IS_ENABLED(CONFIG_VFIO_PCI_IGD)) {
 		ret = vfio_pci_igd_init(vdev);
-		if (ret) {
+		if (ret && ret != -ENODEV) {
 			pci_warn(pdev, "Failed to setup Intel IGD regions\n");
 			goto disable_exit;
 		}

@@ -267,7 +267,7 @@ static int vfio_platform_open(void *device_data)

 		ret = pm_runtime_get_sync(vdev->device);
 		if (ret < 0)
-			goto err_pm;
+			goto err_rst;

 		ret = vfio_platform_call_reset(vdev, &extra_dbg);
 		if (ret && vdev->reset_required) {
@@ -284,7 +284,6 @@ static int vfio_platform_open(void *device_data)

 err_rst:
 	pm_runtime_put(vdev->device);
-err_pm:
 	vfio_platform_irq_cleanup(vdev);
 err_irq:
 	vfio_platform_regions_cleanup(vdev);

@@ -2162,6 +2162,7 @@ int yfs_fs_store_opaque_acl2(struct afs_fs_cursor *fc, const struct afs_acl *acl
 	memcpy(bp, acl->data, acl->size);
 	if (acl->size != size)
 		memset((void *)bp + acl->size, 0, size - acl->size);
+	bp += size / sizeof(__be32);
 	yfs_check_req(call, bp);

 	trace_afs_make_fs_call(call, &vnode->fid);

@@ -55,6 +55,17 @@ int btrfs_init_dev_replace(struct btrfs_fs_info *fs_info)
 	ret = btrfs_search_slot(NULL, dev_root, &key, path, 0, 0);
 	if (ret) {
 no_valid_dev_replace_entry_found:
+		/*
+		 * We don't have a replace item or it's corrupted. If there is
+		 * a replace target, fail the mount.
+		 */
+		if (btrfs_find_device(fs_info->fs_devices,
+				      BTRFS_DEV_REPLACE_DEVID, NULL, NULL, false)) {
+			btrfs_err(fs_info,
+			"found replace target device without a valid replace item");
+			ret = -EUCLEAN;
+			goto out;
+		}
 		ret = 0;
 		dev_replace->replace_state =
 			BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED;
@@ -107,8 +118,19 @@ no_valid_dev_replace_entry_found:
 	case BTRFS_IOCTL_DEV_REPLACE_STATE_NEVER_STARTED:
 	case BTRFS_IOCTL_DEV_REPLACE_STATE_FINISHED:
 	case BTRFS_IOCTL_DEV_REPLACE_STATE_CANCELED:
-		dev_replace->srcdev = NULL;
-		dev_replace->tgtdev = NULL;
+		/*
+		 * We don't have an active replace item but if there is a
+		 * replace target, fail the mount.
+		 */
+		if (btrfs_find_device(fs_info->fs_devices,
+				      BTRFS_DEV_REPLACE_DEVID, NULL, NULL, false)) {
+			btrfs_err(fs_info,
+			"replace devid present without an active replace item");
+			ret = -EUCLEAN;
+		} else {
+			dev_replace->srcdev = NULL;
+			dev_replace->tgtdev = NULL;
+		}
 		break;
 	case BTRFS_IOCTL_DEV_REPLACE_STATE_STARTED:
 	case BTRFS_IOCTL_DEV_REPLACE_STATE_SUSPENDED:

@@ -3800,11 +3800,12 @@ static int find_free_extent_update_loop(struct btrfs_fs_info *fs_info,
 *     |- Push harder to find free extents
 *        |- If not found, re-iterate all block groups
 */
-static noinline int find_free_extent(struct btrfs_fs_info *fs_info,
+static noinline int find_free_extent(struct btrfs_root *root,
 				u64 ram_bytes, u64 num_bytes, u64 empty_size,
 				u64 hint_byte, struct btrfs_key *ins,
 				u64 flags, int delalloc)
 {
+	struct btrfs_fs_info *fs_info = root->fs_info;
 	int ret = 0;
 	int cache_block_group_error = 0;
 	struct btrfs_free_cluster *last_ptr = NULL;
@@ -3833,7 +3834,7 @@ static noinline int find_free_extent(struct btrfs_root *root,
 	ins->objectid = 0;
 	ins->offset = 0;

-	trace_find_free_extent(fs_info, num_bytes, empty_size, flags);
+	trace_find_free_extent(root, num_bytes, empty_size, flags);

 	space_info = btrfs_find_space_info(fs_info, flags);
 	if (!space_info) {
@@ -4141,7 +4142,7 @@ int btrfs_reserve_extent(struct btrfs_root *root, u64 ram_bytes,
 	flags = get_alloc_profile_by_root(root, is_data);
 again:
 	WARN_ON(num_bytes < fs_info->sectorsize);
-	ret = find_free_extent(fs_info, ram_bytes, num_bytes, empty_size,
+	ret = find_free_extent(root, ram_bytes, num_bytes, empty_size,
 			       hint_byte, ins, flags, delalloc);
 	if (!ret && !is_data) {
 		btrfs_dec_block_group_reservations(fs_info, ins->objectid);

@@ -1255,6 +1255,7 @@ static int cluster_pages_for_defrag(struct inode *inode,
 	u64 page_start;
 	u64 page_end;
 	u64 page_cnt;
+	u64 start = (u64)start_index << PAGE_SHIFT;
 	int ret;
 	int i;
 	int i_done;
@@ -1271,8 +1272,7 @@ static int cluster_pages_for_defrag(struct inode *inode,
 	page_cnt = min_t(u64, (u64)num_pages, (u64)file_end - start_index + 1);

 	ret = btrfs_delalloc_reserve_space(inode, &data_reserved,
-			start_index << PAGE_SHIFT,
-			page_cnt << PAGE_SHIFT);
+			start, page_cnt << PAGE_SHIFT);
 	if (ret)
 		return ret;
 	i_done = 0;
@@ -1361,8 +1361,7 @@ again:
 		btrfs_mod_outstanding_extents(BTRFS_I(inode), 1);
 		spin_unlock(&BTRFS_I(inode)->lock);
 		btrfs_delalloc_release_space(inode, data_reserved,
-				start_index << PAGE_SHIFT,
-				(page_cnt - i_done) << PAGE_SHIFT, true);
+				start, (page_cnt - i_done) << PAGE_SHIFT, true);
 	}

@@ -1389,8 +1388,7 @@ out:
 		put_page(pages[i]);
 	}
 	btrfs_delalloc_release_space(inode, data_reserved,
-			start_index << PAGE_SHIFT,
-			page_cnt << PAGE_SHIFT, true);
+			start, page_cnt << PAGE_SHIFT, true);
 	btrfs_delalloc_release_extents(BTRFS_I(inode), page_cnt << PAGE_SHIFT);
 	extent_changeset_free(data_reserved);
 	return ret;
@@ -3752,6 +3750,8 @@ process_slot:
 			ret = -EINTR;
 			goto out;
 		}
+
+		cond_resched();
 	}
 	ret = 0;

@@ -851,6 +851,7 @@ int btrfs_ref_tree_mod(struct btrfs_fs_info *fs_info,
 "dropping a ref for a root that doesn't have a ref on the block");
 			dump_block_entry(fs_info, be);
 			dump_ref_action(fs_info, ra);
+			kfree(ref);
 			kfree(ra);
 			goto out_unlock;
 		}

@@ -2287,6 +2287,7 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
 	struct btrfs_root_item *root_item;
 	struct btrfs_path *path;
 	struct extent_buffer *leaf;
+	int reserve_level;
 	int level;
 	int max_level;
 	int replaced = 0;
@@ -2335,7 +2336,8 @@ static noinline_for_stack int merge_reloc_root(struct reloc_control *rc,
 	 * Thus the needed metadata size is at most root_level * nodesize,
 	 * and * 2 since we have two trees to COW.
 	 */
-	min_reserved = fs_info->nodesize * btrfs_root_level(root_item) * 2;
+	reserve_level = max_t(int, 1, btrfs_root_level(root_item));
+	min_reserved = fs_info->nodesize * reserve_level * 2;
 	memset(&next_key, 0, sizeof(next_key));

 	while (1) {

@@ -1245,22 +1245,13 @@ again:
 			continue;
 		}

-		if (device->devid == BTRFS_DEV_REPLACE_DEVID) {
-			/*
-			 * In the first step, keep the device which has
-			 * the correct fsid and the devid that is used
-			 * for the dev_replace procedure.
-			 * In the second step, the dev_replace state is
-			 * read from the device tree and it is known
-			 * whether the procedure is really active or
-			 * not, which means whether this device is
-			 * used or whether it should be removed.
-			 */
-			if (step == 0 || test_bit(BTRFS_DEV_STATE_REPLACE_TGT,
-						  &device->dev_state)) {
-				continue;
-			}
-		}
+		/*
+		 * We have already validated the presence of BTRFS_DEV_REPLACE_DEVID,
+		 * in btrfs_init_dev_replace() so just continue.
+		 */
+		if (device->devid == BTRFS_DEV_REPLACE_DEVID)
+			continue;
+
 		if (device->bdev) {
 			blkdev_put(device->bdev, device->mode);
 			device->bdev = NULL;
@@ -1269,9 +1260,6 @@ again:
 		if (test_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state)) {
 			list_del_init(&device->dev_alloc_list);
 			clear_bit(BTRFS_DEV_STATE_WRITEABLE, &device->dev_state);
-			if (!test_bit(BTRFS_DEV_STATE_REPLACE_TGT,
-				      &device->dev_state))
-				fs_devices->rw_devices--;
+			fs_devices->rw_devices--;
 		}
 		list_del_init(&device->dev_list);
 		fs_devices->num_devices--;
@@ -2728,9 +2716,6 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
 	btrfs_set_super_num_devices(fs_info->super_copy,
 				    orig_super_num_devices + 1);

-	/* add sysfs device entry */
-	btrfs_sysfs_add_device_link(fs_devices, device);
-
 	/*
 	 * we've got more storage, clear any full flags on the space
 	 * infos
@@ -2738,6 +2723,10 @@ int btrfs_init_new_device(struct btrfs_fs_info *fs_info, const char *device_path
 	btrfs_clear_space_info_full(fs_info);

 	mutex_unlock(&fs_info->chunk_mutex);
+
+	/* Add sysfs device entry */
+	btrfs_sysfs_add_device_link(fs_devices, device);
+
 	mutex_unlock(&fs_devices->device_list_mutex);

 	if (seeding_dev) {

@@ -488,7 +488,13 @@ cifsConvertToUTF16(__le16 *target, const char *source, int srclen,
 		else if (map_chars == SFM_MAP_UNI_RSVD) {
 			bool end_of_string;

-			if (i == srclen - 1)
+			/**
+			 * Remap spaces and periods found at the end of every
+			 * component of the path. The special cases of '.' and
+			 * '..' do not need to be dealt with explicitly because
+			 * they are addressed in namei.c:link_path_walk().
+			 **/
+			if ((i == srclen - 1) || (source[i+1] == '\\'))
 				end_of_string = true;
 			else
 				end_of_string = false;

@@ -107,11 +107,9 @@ static struct page *erofs_read_inode(struct inode *inode,
 		i_gid_write(inode, le32_to_cpu(die->i_gid));
 		set_nlink(inode, le32_to_cpu(die->i_nlink));

-		/* ns timestamp */
-		inode->i_mtime.tv_sec = inode->i_ctime.tv_sec =
-			le64_to_cpu(die->i_ctime);
-		inode->i_mtime.tv_nsec = inode->i_ctime.tv_nsec =
-			le32_to_cpu(die->i_ctime_nsec);
+		/* extended inode has its own timestamp */
+		inode->i_ctime.tv_sec = le64_to_cpu(die->i_ctime);
+		inode->i_ctime.tv_nsec = le32_to_cpu(die->i_ctime_nsec);

 		inode->i_size = le64_to_cpu(die->i_size);

@@ -149,11 +147,9 @@ static struct page *erofs_read_inode(struct inode *inode,
 		i_gid_write(inode, le16_to_cpu(dic->i_gid));
 		set_nlink(inode, le16_to_cpu(dic->i_nlink));

-		/* use build time to derive all file time */
-		inode->i_mtime.tv_sec = inode->i_ctime.tv_sec =
-			sbi->build_time;
-		inode->i_mtime.tv_nsec = inode->i_ctime.tv_nsec =
-			sbi->build_time_nsec;
+		/* use build time for compact inodes */
+		inode->i_ctime.tv_sec = sbi->build_time;
+		inode->i_ctime.tv_nsec = sbi->build_time_nsec;

 		inode->i_size = le32_to_cpu(dic->i_size);
 		if (erofs_inode_is_data_compressed(vi->datalayout))
@@ -167,6 +163,11 @@ static struct page *erofs_read_inode(struct inode *inode,
 		goto err_out;
 	}

+	inode->i_mtime.tv_sec = inode->i_ctime.tv_sec;
+	inode->i_atime.tv_sec = inode->i_ctime.tv_sec;
+	inode->i_mtime.tv_nsec = inode->i_ctime.tv_nsec;
+	inode->i_atime.tv_nsec = inode->i_ctime.tv_nsec;
+
 	if (!nblks)
 		/* measure inode.i_blocks as generic filesystems */
 		inode->i_blocks = roundup(inode->i_size, EROFS_BLKSIZ) >> 9;

@@ -1937,6 +1937,7 @@ int ext4_inline_data_truncate(struct inode *inode, int *has_inline)

 	ext4_write_lock_xattr(inode, &no_expand);
 	if (!ext4_has_inline_data(inode)) {
+		ext4_write_unlock_xattr(inode, &no_expand);
 		*has_inline = 0;
 		ext4_journal_stop(handle);
 		return 0;

@@ -1781,8 +1781,8 @@ static const struct mount_opts {
 	{Opt_noquota, (EXT4_MOUNT_QUOTA | EXT4_MOUNT_USRQUOTA |
 		       EXT4_MOUNT_GRPQUOTA | EXT4_MOUNT_PRJQUOTA),
 							MOPT_CLEAR | MOPT_Q},
-	{Opt_usrjquota, 0, MOPT_Q},
-	{Opt_grpjquota, 0, MOPT_Q},
+	{Opt_usrjquota, 0, MOPT_Q | MOPT_STRING},
+	{Opt_grpjquota, 0, MOPT_Q | MOPT_STRING},
 	{Opt_offusrjquota, 0, MOPT_Q},
 	{Opt_offgrpjquota, 0, MOPT_Q},
 	{Opt_jqfmt_vfsold, QFMT_VFS_OLD, MOPT_QFMT},

@@ -736,9 +736,9 @@ void gfs2_clear_rgrpd(struct gfs2_sbd *sdp)
 		}

 		gfs2_free_clones(rgd);
+		return_all_reservations(rgd);
 		kfree(rgd->rd_bits);
 		rgd->rd_bits = NULL;
-		return_all_reservations(rgd);
 		kmem_cache_free(gfs2_rgrpd_cachep, rgd);
 	}
 }
@@ -1410,6 +1410,9 @@ int gfs2_fitrim(struct file *filp, void __user *argp)
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;

+	if (!test_bit(SDF_JOURNAL_LIVE, &sdp->sd_flags))
+		return -EROFS;
+
 	if (!blk_queue_discard(q))
 		return -EOPNOTSUPP;

@@ -689,6 +689,7 @@ restart:
 	gfs2_jindex_free(sdp);
 	/* Take apart glock structures and buffer lists */
 	gfs2_gl_hash_clear(sdp);
+	truncate_inode_pages_final(&sdp->sd_aspace);
 	gfs2_delete_debugfs_file(sdp);
 	/* Unmount the locking protocol */
 	gfs2_lm_unmount(sdp);

@@ -106,6 +106,8 @@ static int __try_to_free_cp_buf(struct journal_head *jh)
 * for a checkpoint to free up some space in the log.
 */
 void __jbd2_log_wait_for_space(journal_t *journal)
+__acquires(&journal->j_state_lock)
+__releases(&journal->j_state_lock)
 {
 	int nblocks, space_left;
 	/* assert_spin_locked(&journal->j_state_lock); */

@@ -171,8 +171,10 @@ static void wait_transaction_switching(journal_t *journal)
 	DEFINE_WAIT(wait);

 	if (WARN_ON(!journal->j_running_transaction ||
-		    journal->j_running_transaction->t_state != T_SWITCH))
+		    journal->j_running_transaction->t_state != T_SWITCH)) {
+		read_unlock(&journal->j_state_lock);
 		return;
+	}
 	prepare_to_wait(&journal->j_wait_transaction_locked, &wait,
 			TASK_UNINTERRUPTIBLE);
 	read_unlock(&journal->j_state_lock);

@@ -1692,6 +1692,7 @@ static void ocfs2_inode_init_once(void *data)

 	oi->ip_blkno = 0ULL;
 	oi->ip_clusters = 0;
+	oi->ip_next_orphan = NULL;

 	ocfs2_resv_init_once(&oi->ip_la_data_resv);

@@ -2209,6 +2209,7 @@ xfs_defer_agfl_block(
 	new->xefi_startblock = XFS_AGB_TO_FSB(mp, agno, agbno);
 	new->xefi_blockcount = 1;
 	new->xefi_oinfo = *oinfo;
+	new->xefi_skip_discard = false;

 	trace_xfs_agfl_free_defer(mp, agno, 0, agbno, 1);

@@ -52,9 +52,9 @@ struct xfs_extent_free_item
 {
 	xfs_fsblock_t		xefi_startblock;/* starting fs block number */
 	xfs_extlen_t		xefi_blockcount;/* number of blocks in extent */
+	bool			xefi_skip_discard;
 	struct list_head	xefi_list;
 	struct xfs_owner_info	xefi_oinfo;	/* extent owner */
-	bool			xefi_skip_discard;
 };

 #define XFS_BMAP_MAX_NMAP	4

@@ -1379,7 +1379,7 @@ xfs_rmap_convert_shared(
 	 * record for our insertion point. This will also give us the record for
 	 * start block contiguity tests.
 	 */
-	error = xfs_rmap_lookup_le_range(cur, bno, owner, offset, flags,
+	error = xfs_rmap_lookup_le_range(cur, bno, owner, offset, oldext,
 			&PREV, &i);
 	if (error)
 		goto done;

@@ -243,8 +243,8 @@ xfs_rmapbt_key_diff(
 	else if (y > x)
 		return -1;

-	x = XFS_RMAP_OFF(be64_to_cpu(kp->rm_offset));
-	y = rec->rm_offset;
+	x = be64_to_cpu(kp->rm_offset);
+	y = xfs_rmap_irec_offset_pack(rec);
 	if (x > y)
 		return 1;
 	else if (y > x)
@@ -275,8 +275,8 @@ xfs_rmapbt_diff_two_keys(
 	else if (y > x)
 		return -1;

-	x = XFS_RMAP_OFF(be64_to_cpu(kp1->rm_offset));
-	y = XFS_RMAP_OFF(be64_to_cpu(kp2->rm_offset));
+	x = be64_to_cpu(kp1->rm_offset);
+	y = be64_to_cpu(kp2->rm_offset);
 	if (x > y)
 		return 1;
 	else if (y > x)
@@ -390,8 +390,8 @@ xfs_rmapbt_keys_inorder(
 		return 1;
 	else if (a > b)
 		return 0;
-	a = XFS_RMAP_OFF(be64_to_cpu(k1->rmap.rm_offset));
-	b = XFS_RMAP_OFF(be64_to_cpu(k2->rmap.rm_offset));
+	a = be64_to_cpu(k1->rmap.rm_offset);
+	b = be64_to_cpu(k2->rmap.rm_offset);
 	if (a <= b)
 		return 1;
 	return 0;
@@ -420,8 +420,8 @@ xfs_rmapbt_recs_inorder(
 		return 1;
 	else if (a > b)
 		return 0;
-	a = XFS_RMAP_OFF(be64_to_cpu(r1->rmap.rm_offset));
-	b = XFS_RMAP_OFF(be64_to_cpu(r2->rmap.rm_offset));
+	a = be64_to_cpu(r1->rmap.rm_offset);
+	b = be64_to_cpu(r2->rmap.rm_offset);
 	if (a <= b)
 		return 1;
 	return 0;
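Note: the rmapbt fix above is about comparing keys in their packed on-disk form. rm_offset packs flag bits above the logical offset, and masking them off with XFS_RMAP_OFF() before comparing makes keys that differ only in their flags compare equal, which corrupts btree ordering. A stand-alone model of the failure mode (toy bit layout, not the real XFS packing):

	#include <stdint.h>
	#include <stdio.h>

	#define FLAG_ATTR_FORK	(1ULL << 62)	/* toy flag bit packed above the offset */
	#define OFF_MASK	((1ULL << 54) - 1)

	static uint64_t pack(uint64_t off, uint64_t flags) { return off | flags; }

	int main(void)
	{
		uint64_t a = pack(100, 0);
		uint64_t b = pack(100, FLAG_ATTR_FORK);

		/* masked comparison: two distinct keys look identical */
		printf("masked equal: %d\n", (a & OFF_MASK) == (b & OFF_MASK));	/* 1 */
		/* packed comparison keeps them distinct and totally ordered */
		printf("packed a < b: %d\n", a < b);				/* 1 */
		return 0;
	}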
@@ -113,6 +113,8 @@ xchk_bmap_get_rmap(

 	if (info->whichfork == XFS_ATTR_FORK)
 		rflags |= XFS_RMAP_ATTR_FORK;
+	if (irec->br_state == XFS_EXT_UNWRITTEN)
+		rflags |= XFS_RMAP_UNWRITTEN;

 	/*
 	 * CoW staging extents are owned (on disk) by the refcountbt, so

@@ -121,8 +121,7 @@ xchk_inode_flags(
 		goto bad;

 	/* rt flags require rt device */
-	if ((flags & (XFS_DIFLAG_REALTIME | XFS_DIFLAG_RTINHERIT)) &&
-	    !mp->m_rtdev_targp)
+	if ((flags & XFS_DIFLAG_REALTIME) && !mp->m_rtdev_targp)
 		goto bad;

 	/* new rt bitmap flag only valid for rbmino */

@@ -170,7 +170,6 @@ xchk_refcountbt_process_rmap_fragments(
 	 */
 	INIT_LIST_HEAD(&worklist);
 	rbno = NULLAGBLOCK;
-	nr = 1;

 	/* Make sure the fragments actually /are/ in agbno order. */
 	bno = 0;
@@ -184,15 +183,14 @@ xchk_refcountbt_process_rmap_fragments(
 	 * Find all the rmaps that start at or before the refc extent,
 	 * and put them on the worklist.
 	 */
+	nr = 0;
 	list_for_each_entry_safe(frag, n, &refchk->fragments, list) {
-		if (frag->rm.rm_startblock > refchk->bno)
-			goto done;
+		if (frag->rm.rm_startblock > refchk->bno || nr > target_nr)
+			break;
 		bno = frag->rm.rm_startblock + frag->rm.rm_blockcount;
 		if (bno < rbno)
 			rbno = bno;
 		list_move_tail(&frag->list, &worklist);
-		if (nr == target_nr)
-			break;
 		nr++;
 	}

@@ -885,6 +885,16 @@ xfs_setattr_size(
 		error = iomap_zero_range(inode, oldsize, newsize - oldsize,
 				&did_zeroing, &xfs_iomap_ops);
 	} else {
+		/*
+		 * iomap won't detect a dirty page over an unwritten block (or a
+		 * cow block over a hole) and subsequently skips zeroing the
+		 * newly post-EOF portion of the page. Flush the new EOF to
+		 * convert the block before the pagecache truncate.
+		 */
+		error = filemap_write_and_wait_range(inode->i_mapping, newsize,
+						     newsize);
+		if (error)
+			return error;
 		error = iomap_truncate_page(inode, newsize, &did_zeroing,
 				&xfs_iomap_ops);
 	}

@@ -134,7 +134,7 @@ xfs_fs_map_blocks(
 		goto out_unlock;
 	error = invalidate_inode_pages2(inode->i_mapping);
 	if (WARN_ON_ONCE(error))
-		return error;
+		goto out_unlock;

 	end_fsb = XFS_B_TO_FSB(mp, (xfs_ufsize_t)offset + length);
 	offset_fsb = XFS_B_TO_FSBT(mp, offset);
Some files were not shown because too many files have changed in this diff.