search for: atomic_xchg

Displaying 20 results from an estimated 42 matches for "atomic_xchg".

2013 Aug 20
7
[PATCH] btrfs-progs: use btrfs error code for kernel errors
Now, with the kernel patch below, the excl operations (dev add/replace/resize and balance) return the btrfs error codes defined in btrfs.h. This patch helps btrfs-progs (and thus the user) to see the error string on the terminal instead of in /var/log/messages, as the kernel previously did. This patch depends on the btrfs kernel patch: btrfs: return btrfs error code for dev excl ops err
2016 Apr 18
0
[PATCH v4 30/37] clk: separate the locking from the implementation in nvkm_clk_update
..._prog(struct nvkm_clk *clk, int pstateid) } static void -nvkm_clk_update_work(struct work_struct *work) +nvkm_clk_update_impl(struct nvkm_clk *clk) { - struct nvkm_clk *clk = container_of(work, typeof(*clk), work); struct nvkm_subdev *subdev = &clk->subdev; int pstate, ret; - if (!atomic_xchg(&clk->waiting, 0)) - return; clk->pwrsrc = power_supply_is_system_supplied(); if (clk->pstate) @@ -350,6 +347,17 @@ nvkm_clk_update_work(struct work_struct *work) nvkm_error(subdev, "error setting pstate %d: %d\n", pstate, ret); } +} + +static void +nvkm_cl...
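The atomic_xchg(&clk->waiting, 0) call quoted above is the usual one-shot "update pending" flag idiom for deferred work: requesters set the flag and schedule the work item, and the worker consumes the flag atomically so a request is never lost or handled twice. A minimal sketch of that idiom, with hypothetical names (my_dev, my_update_work) rather than the real nouveau structures:

#include <linux/atomic.h>
#include <linux/kernel.h>
#include <linux/workqueue.h>

struct my_dev {
	atomic_t waiting;		/* set by requesters, cleared by the worker */
	struct work_struct work;
};

/* Request side: mark an update as pending and kick the worker. */
static void my_update_request(struct my_dev *d)
{
	atomic_set(&d->waiting, 1);
	schedule_work(&d->work);
}

/* Worker side: consume the flag atomically; bail out if nothing was pending. */
static void my_update_work(struct work_struct *work)
{
	struct my_dev *d = container_of(work, struct my_dev, work);

	if (!atomic_xchg(&d->waiting, 0))
		return;

	/* ... perform the actual reconfiguration here ... */
}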
2010 Aug 17
1
BUG? a racy code at o2hb_heartbeat_group_drop_item()
...iterations and then updates its value. If other threads manipulate the same &reg->hr_steady_iterations concurrently, a race condition is possible. I think it would be better to guarantee consecutive execution of the read and the write with a special-purpose atomic operation (e.g. atomic_xchg). Please examine the issue and let me know your opinion. Thank you. Sincerely Shin Hong
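The suggestion above boils down to collapsing a separate read and write of the shared atomic_t into a single atomic_xchg, so no other thread can observe or modify the value in between. A hedged sketch of the two variants (generic names, not the actual ocfs2 fix):

#include <linux/atomic.h>
#include <linux/types.h>

/*
 * Racy version: another thread can act on the old value between the
 * atomic_read() and the atomic_set().
 */
static void stop_steady_racy(atomic_t *steady_iterations)
{
	if (atomic_read(steady_iterations) != 0)
		atomic_set(steady_iterations, 0);
}

/*
 * Suggested shape: the read and the write happen as one atomic step, and
 * the returned old value tells us whether we were the thread that cleared it.
 */
static bool stop_steady_atomic(atomic_t *steady_iterations)
{
	return atomic_xchg(steady_iterations, 0) != 0;
}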
2013 Mar 04
1
[PATCH] Btrfs: allow running defrag in parallel to administrative tasks
...l.c | 8 -------- 1 file changed, 8 deletions(-) diff --git a/fs/btrfs/ioctl.c b/fs/btrfs/ioctl.c index b908960..40631cf 100644 --- a/fs/btrfs/ioctl.c +++ b/fs/btrfs/ioctl.c @@ -2245,13 +2245,6 @@ static int btrfs_ioctl_defrag(struct file *file, void __user *argp) if (ret) return ret; - if (atomic_xchg(&root->fs_info->mutually_exclusive_operation_running, - 1)) { - pr_info("btrfs: dev add/delete/balance/replace/resize operation in progress\n"); - mnt_drop_write_file(file); - return -EINVAL; - } - if (btrfs_root_readonly(root)) { ret = -EROFS; goto out; @@ -2306,7...
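The guard being removed in this hunk is the common atomic_xchg try-lock idiom: unconditionally store 1 and look at the previous value to decide whether another exclusive operation is already running. A generic sketch of that idiom (illustrative names, not the btrfs internals):

#include <linux/atomic.h>
#include <linux/errno.h>

static atomic_t excl_op_running = ATOMIC_INIT(0);

/* Returns 0 if we claimed the slot, -EINVAL if another operation holds it. */
static int excl_op_try_start(void)
{
	if (atomic_xchg(&excl_op_running, 1))
		return -EINVAL;		/* someone else already set the flag */
	return 0;
}

/* Release the slot once the exclusive operation is finished. */
static void excl_op_finish(void)
{
	atomic_set(&excl_op_running, 0);
}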
2014 Mar 02
1
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...y usage of this arg, perhaps it would be better to simply remove it and shrink the caller's code a bit? It is also used in 3/8, but we can read the "fresh" value of ->qlcode (trylock does this anyway), and perhaps it can actually help if it is already unlocked. > + prev_qcode = atomic_xchg(&lock->qlcode, my_qcode); > + /* > + * It is possible that we may accidentally steal the lock. If this is > + * the case, we need to either release it if not the head of the queue > + * or get the lock and be done with it. > + */ > + if (unlikely(!(prev_qcode & _QSP...
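The atomic_xchg() under discussion is an MCS-style tail swap: the CPU publishes its own queue code and, in the same step, learns what was there before; if the previous value had no lock bit set, the exchange has "accidentally" taken the lock, which is exactly the case the reviewer is asking about. A simplified sketch of that step (the _MY_LOCKED bit and the names are made up, not the real qspinlock encoding):

#include <linux/atomic.h>
#include <linux/types.h>

#define _MY_LOCKED	0x1		/* illustrative lock bit */

struct my_qlock {
	atomic_t qlcode;		/* lock bit + encoded queue tail */
};

/*
 * Swap in our queue code and report whether the lock turned out to be
 * free (previous value had no lock bit), i.e. we now own it outright.
 */
static bool my_qlock_exchange(struct my_qlock *lock, u32 my_qcode, u32 *prev)
{
	*prev = atomic_xchg(&lock->qlcode, my_qcode);

	return !(*prev & _MY_LOCKED);
}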
2016 Mar 21
0
[PATCH v2 20/22] clk: add nvkm_clk_reclock function
...y(pstate, &clk->states, head) { if (idx++ == pstatei) break; @@ -292,7 +295,7 @@ nvkm_pstate_work(struct work_struct *work) { struct nvkm_clk *clk = container_of(work, typeof(*clk), work); struct nvkm_subdev *subdev = &clk->subdev; - int pstate; + int pstate, ret; if (!atomic_xchg(&clk->waiting, 0)) return; @@ -312,12 +315,10 @@ nvkm_pstate_work(struct work_struct *work) } nvkm_trace(subdev, "-> %d\n", pstate); - if (pstate != clk->pstate) { - int ret = nvkm_pstate_prog(clk, pstate); - if (ret) { - nvkm_error(subdev, "error setting ps...
2017 Mar 05
0
[PATCH 9/9] clk: Check pm_runtime status before reclocking
...++ b/drm/nouveau/nvkm/subdev/clk/base.c @@ -320,6 +320,7 @@ nvkm_clk_update_work(struct work_struct *work) { struct nvkm_clk *clk = container_of(work, typeof(*clk), work); struct nvkm_subdev *subdev = &clk->subdev; + struct device *dev = subdev->device->dev; int pstate; if (!atomic_xchg(&clk->waiting, 0)) @@ -345,7 +346,14 @@ nvkm_clk_update_work(struct work_struct *work) pstate = NVKM_CLK_PSTATE_DEFAULT; } - clk->func->update(clk, pstate); + // only call into the code if the GPU is powered on + if (!pm_runtime_suspended(dev)) { + // it would be a shame if the...
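The hunk above gates the hardware access on the runtime-PM state of the device. A minimal sketch of that kind of guard, assuming a generic driver work handler (names are hypothetical):

#include <linux/device.h>
#include <linux/pm_runtime.h>

static void my_reclock_if_powered(struct device *dev)
{
	/* Do not touch the hardware while it is runtime-suspended. */
	if (pm_runtime_suspended(dev))
		return;

	/* ... program the new clock state here ... */
}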
2016 Apr 18
0
[PATCH v4 35/37] clk: set clocks to pre suspend state after suspend
...&clk->subdev; int pstate; @@ -349,7 +349,7 @@ nvkm_clk_update_impl(struct nvkm_clk *clk) pstate = -1; } - clk->func->update(clk, pstate); + clk->func->update(clk, pstate, force); } static void @@ -360,7 +360,7 @@ nvkm_clk_update_work(struct work_struct *work) if (!atomic_xchg(&clk->waiting, 0)) return; - nvkm_clk_update_impl(clk); + nvkm_clk_update_impl(clk, false); wake_up_all(&clk->wait); nvkm_notify_get(&clk->pwrsrc_ntfy); @@ -613,11 +613,7 @@ nvkm_clk_init(struct nvkm_subdev *subdev) if (clk->func->init) return clk->func...
2016 Apr 18
0
[PATCH v4 22/37] clk: rename nvkm_pstate_calc to nvkm_clk_update
...nvkm_clk *clk, int pstatei) } static void -nvkm_pstate_work(struct work_struct *work) +nvkm_clk_update_work(struct work_struct *work) { struct nvkm_clk *clk = container_of(work, typeof(*clk), work); struct nvkm_subdev *subdev = &clk->subdev; - int pstate; + int pstate, ret; if (!atomic_xchg(&clk->waiting, 0)) return; @@ -327,21 +331,25 @@ nvkm_pstate_work(struct work_struct *work) } nvkm_trace(subdev, "-> %d\n", pstate); - if (pstate != clk->pstate) { - int ret = nvkm_pstate_prog(clk, pstate); - if (ret) { - nvkm_error(subdev, "error setting ps...
2017 Mar 05
15
[PATCH 0/9] clk subdev updates
This series addresses various issues inside the reclocking code: 1. after resume the set clocks are reset, 2. reclocking is not possible while the GPU is suspended, 3. nouveau always does a full reclock even if only a change of the voltage is required. Some of the patches were part of the bigger reclocking series I sent months ago, though some things have changed. This is also preparation work of
2016 Apr 18
63
[PATCH v4 00/37] Volting/Clocking improvements for Fermi and newer
We are slowly getting there! v4 of the series with some really good improvements, so I am sure this is like 95% done and only needs some proper polishing and proper reviews! I also added the NvVoltOffsetmV module parameter, so that a user is able to over- and !under!-volt the GPU. Overvolting makes sense when there are still some reclocking issues left, which might be solved by a higher voltage.
2014 Apr 02
0
[PATCH v8 01/10] qspinlock: A generic 4-byte queue spinlock implementation
...UT] + * @ncode: New queue code to be exchanged + * Return: An enum exitval value + */ +static inline enum exitval +queue_code_xchg(struct qspinlock *lock, u32 *ocode, u32 ncode) +{ + ncode |= _QLOCK_LOCKED; /* Set lock bit */ + + /* + * Exchange current copy of the queue node code + */ + *ocode = atomic_xchg(&lock->qlcode, ncode); + + if (likely(*ocode & _QLOCK_LOCKED)) { + *ocode &= ~_QLOCK_LOCKED; /* Clear the lock bit */ + return NORMAL_EXIT; + } + /* + * It is possible that we may accidentally steal the lock during + * the unlock-lock transition. If this is the case, we need to e...
2014 Feb 26
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...L; + + /* + * The lock may be available at this point, try again if no task was + * waiting in the queue. + */ + if (!(qsval >> _QCODE_OFFSET) && queue_spin_trylock(lock)) { + put_qnode(); + return; + } + + /* + * Exchange current copy of the queue node code + */ + prev_qcode = atomic_xchg(&lock->qlcode, my_qcode); + /* + * It is possible that we may accidentally steal the lock. If this is + * the case, we need to either release it if not the head of the queue + * or get the lock and be done with it. + */ + if (unlikely(!(prev_qcode & _QSPINLOCK_LOCKED))) { + if (prev...
2014 Feb 27
0
[PATCH v5 1/8] qspinlock: Introducing a 4-byte queue spinlock implementation
...L; + + /* + * The lock may be available at this point, try again if no task was + * waiting in the queue. + */ + if (!(qsval >> _QCODE_OFFSET) && queue_spin_trylock(lock)) { + put_qnode(); + return; + } + + /* + * Exchange current copy of the queue node code + */ + prev_qcode = atomic_xchg(&lock->qlcode, my_qcode); + /* + * It is possible that we may accidentally steal the lock. If this is + * the case, we need to either release it if not the head of the queue + * or get the lock and be done with it. + */ + if (unlikely(!(prev_qcode & _QSPINLOCK_LOCKED))) { + if (prev...
2017 Jul 01
7
[PATCH v2 0/7] clk subdev updates
This series addresses various issues inside the reclocking code: 1. after resume the set clocks are reset, 2. reclocking is not possible while the GPU is suspended. Some of the patches were part of the bigger reclocking series I sent months ago, though some things have changed. This is also preparation work for changing the clock state due to temperature changes and dynamic reclocking. v2: remove commits
2011 Dec 09
10
[PATCH 0/3] Btrfs: add IO error device stats
The goal is to detect when drives start to show an increased error rate and should be replaced soon. Therefore, statistics counters are added that count IO errors (read, write and flush). Additionally, software-detected errors like checksum errors and corrupted blocks are counted. An ioctl interface is added to get the device statistics counters. A second ioctl is added to atomically get
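One common way to hand such counters to userspace "atomically" is to read and reset each one with atomic_xchg, so no increment is lost between the read and the reset. This is only an illustrative sketch, not the actual btrfs implementation:

#include <linux/atomic.h>
#include <linux/types.h>

struct my_dev_stats {
	atomic_t read_errs;
	atomic_t write_errs;
	atomic_t flush_errs;
};

/* Copy each counter out and reset it in a single atomic step. */
static void my_stats_get_and_reset(struct my_dev_stats *s,
				   u64 *rd, u64 *wr, u64 *fl)
{
	*rd = atomic_xchg(&s->read_errs, 0);
	*wr = atomic_xchg(&s->write_errs, 0);
	*fl = atomic_xchg(&s->flush_errs, 0);
}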
2014 Feb 27
14
[PATCH v5 0/8] qspinlock: a 4-byte queue spinlock with PV support
v4->v5: - Move the optimized 2-task contending code to the generic file to enable more architectures to use it without code duplication. - Address some of the style-related comments by PeterZ. - Allow the use of unfair queue spinlock in a real para-virtualized execution environment. - Add para-virtualization support to the qspinlock code by ensuring that the lock holder and queue
2020 Jul 03
7
[PATCH v2 0/6] powerpc: queued spinlocks and rwlocks
v2 is updated to account for feedback from Will, Peter, and Waiman (thank you), and trims off a couple of RFC and unrelated patches. Thanks, Nick Nicholas Piggin (6): powerpc/powernv: must include hvcall.h to get PAPR defines powerpc/pseries: move some PAPR paravirt functions to their own file powerpc: move spinlock implementation to simple_spinlock powerpc/64s: implement queued