search for: waiter

Displaying 20 results from an estimated 268 matches for "waiter".

2020 Mar 23
2
Samba still DNS Exit Code 23
...man:samba(7) man:smb.conf(5)
 Main PID: 4339 (samba)
   Status: "smbd: ready to serve connections..."
    Tasks: 61 (limit: 4915)
   Memory: 190.5M
   CGroup: /system.slice/samba-ad-dc.service
           +-4339 samba: root process
           +-4340 samba: tfork waiter process
           +-4341 samba: task[s3fs] pre-fork master
           +-4342 samba: tfork waiter process
           +-4343 samba: task[rpc] pre-fork master
           +-4344 samba: tfork waiter process
           +-4345 samba: tfork waiter process
           +-4346 /usr/sbin/smbd -D --option...
2020 Mar 23
0
Samba still DNS Exit Code 23
...: 4339 (samba)
> > Status: "smbd: ready to serve connections..."
> > Tasks: 61 (limit: 4915)
> > Memory: 190.5M
> > CGroup: /system.slice/samba-ad-dc.service
> >   +-4339 samba: root process
> >   +-4340 samba: tfork waiter process
> >   +-4341 samba: task[s3fs] pre-fork master
> >   +-4342 samba: tfork waiter process
> >   +-4343 samba: task[rpc] pre-fork master
> >   +-4344 samba: tfork waiter process
> >   +-4345 samba: tfork waiter pro...
2020 Oct 18
2
samba start issues after classic upgrade
...==== and mine ended with the 2nd to last line, that is "Setting password for administrator" did not show up. While samba is trying to start I see a lot of processes:
===========================
 3388 ?        Ss     0:00 samba: root process .
 3389 ?        S      0:00 samba: tfork waiter process(3390)
 3390 ?        S      0:00 samba: task[s3fs] pre-fork master
 3391 ?        S      0:00 samba: tfork waiter process(3393)
 3392 ?        S      0:00 samba: tfork waiter process(3394)
 3393 ?        S      0:00 samba: task[rpc] pre-fork master
 3394 ?        Ss     0:00 /usr/local/samb...
2014 Mar 13
1
[PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support
...aolo Bonzini wrote: > Il 13/03/2014 12:21, David Vrabel ha scritto: >> On 12/03/14 18:54, Waiman Long wrote: >>> This patch adds para-virtualization support to the queue spinlock in >>> the same way as was done in the PV ticket lock code. In essence, the >>> lock waiters will spin for a specified number of times (QSPIN_THRESHOLD >>> = 2^14) and then halted itself. The queue head waiter will spins >>> 2*QSPIN_THRESHOLD times before halting itself. When it has spinned >>> QSPIN_THRESHOLD times, the queue head will assume that the lock >...
2014 Mar 13
3
[PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support
On 12/03/14 18:54, Waiman Long wrote: > This patch adds para-virtualization support to the queue spinlock in > the same way as was done in the PV ticket lock code. In essence, the > lock waiters will spin for a specified number of times (QSPIN_THRESHOLD > = 2^14) and then halted itself. The queue head waiter will spins > 2*QSPIN_THRESHOLD times before halting itself. When it has spinned > QSPIN_THRESHOLD times, the queue head will assume that the lock > holder may be scheduled...
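The spin-then-halt behaviour the quoted patch describes (ordinary waiters spin QSPIN_THRESHOLD = 2^14 times, the queue head twice that, then halt the vCPU) can be sketched as a small user-space model. This is illustrative only: `spin_then_halt` and its return convention are invented here, and the real patch halts via a hypervisor call rather than returning to the caller.

```c
#include <stdatomic.h>
#include <stdbool.h>

#define QSPIN_THRESHOLD (1u << 14)   /* 2^14 spins, per the patch text */

/* Spin on `locked` for at most `limit` iterations. Returns true if the
 * lock came free within the budget; false means the vCPU would now halt
 * (in the real patch, via a hypercall -- modeled here as a return). */
static bool spin_then_halt(atomic_bool *locked, unsigned limit,
                           unsigned *spins_out)
{
    unsigned spins = 0;
    while (atomic_load(locked) && spins < limit)
        spins++;
    *spins_out = spins;
    return !atomic_load(locked);
}
```

A queue-head waiter would call this with `2 * QSPIN_THRESHOLD` as its budget, matching the doubled threshold in the patch description.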
2020 Nov 04
2
Active - Deactivating
...--no-process-group $SAMBAOPTIONS (code=exited, status=1/FAILURE)
 Main PID: 21886 (code=exited, status=1/FAILURE)
   Status: "samba: ready to serve connections..."
    Tasks: 8 (limit: 4701)
   Memory: 32.0M
   CGroup: /system.slice/samba-ad-dc.service
           ├─21887 samba: tfork waiter process(21888)
           ├─21889 samba: tfork waiter process(21890)
           ├─21891 samba: tfork waiter process(21895)
           ├─21896 samba: tfork waiter process(21898)
           ├─21899 samba: tfork waiter process(21901)
           ├─21902 samba: tfork waiter process(21906)...
2018 Jan 11
0
[PATCH 1/3] gpu: host1x: Add support for DMA fences
...include "intr.h"
+#include "syncpt.h"
+#include "cdma.h"
+#include "channel.h"
+#include "dev.h"
+
+struct host1x_fence {
+	struct dma_fence base;
+	spinlock_t lock;
+
+	struct host1x_syncpt *syncpt;
+	u32 threshold;
+
+	struct host1x *host;
+	void *waiter;
+
+	char timeline_name[10];
+};
+
+static inline struct host1x_fence *to_host1x_fence(struct dma_fence *fence)
+{
+	return (struct host1x_fence *)fence;
+}
+
+static const char *host1x_fence_get_driver_name(struct dma_fence *fence)
+{
+	return "host1x";
+}
+
+static const char *host1x_fe...
2018 Jan 11
6
[PATCH 0/3] drm/tegra: Add support for fence FDs
From: Thierry Reding <treding at nvidia.com> This set of patches adds support for fences to Tegra DRM and complements the fence FD support for Nouveau. Technically this isn't necessary for a fence-based synchronization loop with Nouveau because the KMS core takes care of all that, but engines behind host1x can use the IOCTL extensions provided here to emit fence FDs that in turn can be
2014 Mar 13
0
[PATCH RFC v6 09/11] pvqspinlock, x86: Add qspinlock para-virtualization support
Il 13/03/2014 12:21, David Vrabel ha scritto: > On 12/03/14 18:54, Waiman Long wrote: >> This patch adds para-virtualization support to the queue spinlock in >> the same way as was done in the PV ticket lock code. In essence, the >> lock waiters will spin for a specified number of times (QSPIN_THRESHOLD >> = 2^14) and then halted itself. The queue head waiter will spins >> 2*QSPIN_THRESHOLD times before halting itself. When it has spinned >> QSPIN_THRESHOLD times, the queue head will assume that the lock >> holder...
2010 Jul 21
2
Issues reshaping data
...WORD2.RT ... WORD25.RT 1 1 My 100 friend 200 ... ... 1 2 John's 250 dog 320 ... ... 1 3 His 120 waiter 400 ... ... 2 1 My 100 friend 200 ... ... 2 2 John's 250 dog 320 ... ... 2 3 His 120...
2014 Mar 14
4
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
...y kind of queueing gets you into a world of hurt with virt. The simple test-and-set lock (as per the above) still sucks due to lock holder preemption, but at least the suckage doesn't queue. Because with queueing you not only have to worry about the lock holder getting preemption, but also the waiter(s). Take the situation of 3 (v)CPUs where cpu0 holds the lock but is preempted. cpu1 queues, cpu2 queues. Then cpu1 gets preempted, after which cpu0 gets back online. The simple test-and-set lock will now let cpu2 acquire. Your queue however will just sit there spinning, waiting for cpu1 to come...
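The "simple test-and-set lock" the message contrasts against queueing can be sketched with C11 atomics. This is a generic textbook TAS lock, not code from the thread, and the names are made up here; its role in the argument is that with no queue, a preempted waiter (like cpu1 in the example) blocks nobody else from acquiring.

```c
#include <stdatomic.h>

/* Unfair test-and-set spinlock: any runnable CPU can grab the lock the
 * moment the holder drops it, so waiter preemption cannot stall others. */
typedef struct { atomic_flag locked; } tas_lock;

static void tas_init(tas_lock *l)
{
    atomic_flag_clear(&l->locked);
}

static void tas_lock_acquire(tas_lock *l)
{
    while (atomic_flag_test_and_set_explicit(&l->locked,
                                             memory_order_acquire))
        ;  /* spin */
}

static void tas_lock_release(tas_lock *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}
```

The cost, as the message says, is that lock-holder preemption still hurts and fairness is gone entirely.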
2014 May 14
2
[PATCH v10 03/19] qspinlock: Add pending bit
...> >Thanks. > > Yes, the performance gain is worth it. The primary goal is to be not worse > than ticket spinlock in light load situation which is the most common case. > This feature is need to achieve that. Ok. I've seen merit in pvqspinlock even with slightly slower first-waiter, so I would have happily sacrificed those horrible branches. (I prefer elegant to optimized code, but I can see why we want to be strictly better than ticketlock.) Peter mentioned that we are focusing on bare-metal patches, so I'll withold my other paravirt rants until they are polished. And...
2014 Jul 01
2
[RFC PATCH v2] Implement Batched (group) ticket lock
On 07/01/2014 01:35 PM, Peter Zijlstra wrote: > On Sat, Jun 28, 2014 at 02:47:04PM +0530, Raghavendra K T wrote: >> In virtualized environment there are mainly three problems >> related to spinlocks that affects performance. >> 1. LHP (lock holder preemption) >> 2. Lock Waiter Preemption (LWP) >> 3. Starvation/fairness >> >> Though Ticketlocks solve fairness problem it worsens LWP, LHP problems. Though >> pv-ticketlocks tried to address these problems we can further improve at the >> cost of relaxed fairness. The following patch tries to a...
2014 May 28
7
[RFC] Implement Batched (group) ticket lock
In a virtualized environment there are mainly three problems related to spinlocks that affect performance. 1. LHP (lock holder preemption) 2. Lock Waiter Preemption (LWP) 3. Starvation/fairness Though ticketlocks solve the fairness problem, they worsen the LWP and LHP problems. pv-ticketlocks tried to address this, but we can improve further at the cost of relaxed fairness. In this patch, we form a batch of eligible lock holders and we serve the eligible...