Displaying 20 results from an estimated 40 matches for "sched_setaffinity".
2018 Sep 05
2
Domain vCPU threads affinity
Hello,
According to the docs, vcpupin will use either cgroups or sched_setaffinity
to pin vcpu threads to cpus. How is this decision made?
I observe differences even between hosts running the same version of
libvirtd (1.3.1): on one host vcpupin affects cpuset.cpus (the cgroup), while on
the other it affects the vCPU threads' affinity (observed through taskset).
Thanks,
Nikos
------...
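For anyone reproducing the comparison, the sched_setaffinity side is easy to inspect from a small program. A minimal sketch, assuming Linux and glibc, that prints a task's allowed CPUs much as `taskset -p` does (a pid of 0 means the calling thread):

/* Print a task's affinity mask, roughly what taskset -p reports.
 * Minimal sketch, assuming Linux and glibc. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    pid_t pid = (argc > 1) ? (pid_t)atoi(argv[1]) : 0;  /* 0 = calling thread */
    cpu_set_t mask;

    if (sched_getaffinity(pid, sizeof(mask), &mask) == -1) {
        perror("sched_getaffinity");
        return EXIT_FAILURE;
    }
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        if (CPU_ISSET(cpu, &mask))
            printf("cpu %d allowed\n", cpu);
    return 0;
}

The cgroup side of the comparison is the cpuset.cpus file the poster mentions.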
2018 Jan 24
0
libasan bug: pthread_create never returns
/* The archive snippet is truncated at both ends; the includes below are
 * reconstructed so it builds standalone (compile with -pthread; SCHED_RR
 * needs root privileges). */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *worker(void *arg) { (void)arg; printf("Hey from thread\n"); return NULL; }

int
main(void)
{
    struct sched_param schedule;
    schedule.sched_priority = 50;
    /* sched_setscheduler() returns -1 on error; the original compared against 1 */
    if (sched_setscheduler(getpid(), SCHED_RR, &schedule) == -1) {
        perror("sched_setscheduler");
        exit(EXIT_FAILURE);
    }
    cpu_set_t cpuset;
    CPU_ZERO(&cpuset);
    CPU_SET(1, &cpuset);
    if (sched_setaffinity(getpid(), sizeof(cpuset), &cpuset) == -1) {
        perror("sched_setaffinity");
        exit(EXIT_FAILURE);
    }
    printf("Hey from main\n");
    schedule.sched_priority = 20;
    pthread_attr_t attr;
    pthread_attr_init(&attr);
    pthread_attr_setschedpolicy(&attr, SCHED_RR);
    /* The snippet is cut off at "pthre..."; a plausible minimal continuation,
     * consistent with the subject line, follows. */
    pthread_attr_setschedparam(&attr, &schedule);
    pthread_t tid;
    pthread_create(&tid, &attr, worker, NULL);  /* the call that never returns */
    pthread_join(tid, NULL);
    return 0;
}
2011 Jul 27
0
klibc 1.5.24 release
Enough small fixes have piled up to make it worth a release:
A Google patch adds sched_setaffinity and sched_getaffinity support.
OpenEmbedded uses kexec_load(). The Gentoo folks add a Kbuild fix.
ipconfig no longer wildly guesses at a nameserver when none is provided
by the DHCP server. strndup() and unlinkat() saw fixes for various
problems. Coding-style cleanups in kinit and tools.
git repository:
git:/...
2010 Dec 06
1
R with ATLAS avoids Linux cpu affinity
...y request on the cluster, thus preventing badly behaved multi-threaded libraries from consuming more cores than requested. An example of this is R compiled against multi-threaded ATLAS, which needs to be bound to a single core if a user submits a 1-core job. Grid Engine achieves this through the sched_setaffinity system call under Linux 2.6. For most applications (including a test C program I wrote that uses the ATLAS BLAS), this works well and prevents threads from 'leaking' outside the CPU set they are assigned to. However, R appears to be able to avoid the core binding. This is *very* strange as...
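Affinity masks set from outside are not tamper-proof: code inside the process can simply widen its own mask again, which is one plausible mechanism for the 'leak' described above. A minimal sketch of that mechanism, assuming Linux and glibc; it is not a claim about what R or ATLAS actually does:

/* Sketch: a process widening its own affinity mask after an external tool
 * (e.g. Grid Engine via sched_setaffinity) has narrowed it. Illustrative
 * only; not a claim about R or ATLAS internals. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t mask;

    sched_getaffinity(0, sizeof(mask), &mask);       /* 0 = calling thread */
    printf("CPUs allowed before: %d\n", CPU_COUNT(&mask));

    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < CPU_SETSIZE; cpu++)
        CPU_SET(cpu, &mask);                         /* request every CPU */
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1)
        perror("sched_setaffinity");

    /* Under a plain sched_setaffinity binding this prints the full core
     * count; under a cpuset cgroup the kernel clamps the mask to the
     * cgroup's CPUs, so the binding holds. */
    sched_getaffinity(0, sizeof(mask), &mask);
    printf("CPUs allowed after:  %d\n", CPU_COUNT(&mask));
    return 0;
}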
2016 Jul 21
0
[PATCH v3 1/4] kernel/sched: introduce vcpu preempted check interface
...currently running or not.
+ *
+ * This allows us to terminate optimistic spin loops and block, analogous to
+ * the native optimistic spin heuristic of testing if the lock owner task is
+ * running or not.
+ */
+#ifndef vcpu_is_preempted
+#define vcpu_is_preempted(cpu) false
+#endif
+
extern long sched_setaffinity(pid_t pid, const struct cpumask *new_mask);
extern long sched_getaffinity(pid_t pid, struct cpumask *mask);
--
2.4.11
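The point of the trivially-false default in the hunk above is that existing optimistic-spin sites can call it unconditionally. A self-contained userspace sketch of the pattern the comment describes (lock state is stubbed out; the real kernel call sites differ in detail):

/* Userspace sketch of the optimistic-spin pattern the interface serves:
 * spin while the lock owner is running, but stop spinning and block once
 * the owner's vCPU is known to be preempted. */
#include <stdbool.h>
#include <stdio.h>

#ifndef vcpu_is_preempted
#define vcpu_is_preempted(cpu) false   /* the patch's arch-independent default */
#endif

struct lock { int owner_cpu; bool owner_running; };

static bool optimistic_spin(struct lock *l)
{
    while (l->owner_running) {
        /* With the bare-metal default this is never true, so behavior is
         * unchanged; an arch override makes it true under preemption. */
        if (vcpu_is_preempted(l->owner_cpu))
            return false;              /* give up spinning, block instead */
        /* cpu_relax() would go here in kernel code */
    }
    return true;                       /* owner released or stopped running */
}

int main(void)
{
    struct lock l = { .owner_cpu = 1, .owner_running = false };
    printf("spin result: %d\n", optimistic_spin(&l));
    return 0;
}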
2018 Sep 05
0
Re: Domain vCPU threads affinity
On Wed, Sep 05, 2018 at 03:48:45PM +0300, Nikos Anastopoulos wrote:
>Hello,
>
>According to the docs, vcpupin will use either cgroups or sched_setaffinity
>to pin vcpu threads to cpus. How is this decision made?
>I observe differences even between hosts running the same version of
>libvirtd (1.3.1): on one host vcpupin affects cpuset.cpus (the cgroup), while on
>the other it affects the vCPU threads' affinity (observed through taskset).
It a...
2010 Aug 12
2
thread locked while flushing to database
...'m running a simple C test program to
evaluate Xapian performance and to look at the advantages of parallel indexing.
I'm starting two threads, and each thread writes to its own database.
main th -> indexing thread_1 -> db1
(dispatcher) -> indexing thread_2 -> db2
I use sched_setaffinity to bind each indexing thread to a specific core.
During the indexing phase I see both cores running, but when my threads try to
flush to the databases, one of them keeps working while the other stops
executing (0% CPU), and stracing its pid shows that it is blocked on a futex.
Why does this happ...
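For reference, the binding step described above takes only a few lines. A minimal sketch, assuming Linux and glibc; on Linux a pid of 0 applies the mask to the calling thread only, so each indexing thread can bind itself right after it starts (the core number is arbitrary):

/* Pin the calling thread to one core, as each indexing thread would do.
 * Minimal sketch, assuming Linux and glibc. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

static int bind_self_to_core(int core)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(core, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {  /* 0 = this thread */
        perror("sched_setaffinity");
        return -1;
    }
    return 0;
}

int main(void)
{
    return bind_self_to_core(1) ? 1 : 0;   /* e.g. core 1 for thread_1 */
}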
2016 May 23
9
CentOS 7, container question
Hi, folks,
We would like to run a container on a server, the reason being the COST
of a Sybase license (it's by core), and what we can afford is a 4-core
license. Now, the server's a nice Dell w/ 32 cores, so, ideally, what
we want to do is set up containers, then, in one container, *only* have
it see 4 cores, while the rest of the server, including (possibly)
other containers, can see
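Besides containers and cpuset cgroups, a plain affinity mask is the lighter-weight way to keep a process, and the children it forks (the mask is inherited), on four cores, though unlike a cgroup it can be undone from inside the process. A minimal launcher sketch, assuming Linux and glibc:

/* Restrict the current process and its children to CPUs 0-3, then exec the
 * target program; a sketch of the non-container alternative. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    for (int cpu = 0; cpu < 4; cpu++)     /* a 4-core license's worth */
        CPU_SET(cpu, &mask);
    if (sched_setaffinity(0, sizeof(mask), &mask) == -1) {
        perror("sched_setaffinity");
        return 1;
    }
    if (argc > 1)
        execvp(argv[1], &argv[1]);        /* launch the licensed program */
    return 0;
}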
2016 Jul 21
5
[PATCH v3 0/4] implement vcpu preempted check
change from v2:
no code change, fix typos, update some comments
change from v1:
a simpler definition of default vcpu_is_preempted
skip machine type check on ppc, and add config. remove dedicated macro.
add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner.
add more comments
Thanks to Boqun and Peter for their suggestions.
This patch set aims to fix lock holder preemption
2016 Jun 28
11
[PATCH v2 0/4] implement vcpu preempted check
change from v1:
a simpler definition of default vcpu_is_preempted
skip machine type check on ppc, and add config. remove dedicated macro.
add one patch to drop overload of rwsem_spin_on_owner and mutex_spin_on_owner.
add more comments
Thanks to Boqun and Peter for their suggestions.
This patch set aims to fix lock holder preemption issues.
test-case:
perf record -a perf bench sched messaging -g
2011 Mar 04
0
Wine release 1.3.15
...a missing break.
d3dcompiler_43: Avoid an unintended fall-through.
Andrew Eikum (1):
dsound: Also handle two-to-six-channel conversions.
Andrew Nguyen (2):
configure: Check for additional libxml2 headers to reject inadequate libxml2 versions.
configure: Check for a modern sched_setaffinity prototype.
André Hentschel (9):
advapi32: Add stub for EnableTraceEx.
odbccp32: Improve some stubs.
msvcrt/tests: Don't test function directly when reporting errno.
ntoskrnl.exe: Be more verbose in MmGetSystemRoutineAddress.
msvcrt: Implement _wfindfirst64....
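On the sched_setaffinity item in the list above: modern glibc declares `int sched_setaffinity(pid_t, size_t, const cpu_set_t *)`, whereas very old glibc used `(pid_t, unsigned int, unsigned long *)`. A configure-style compile probe for the modern form might look like the following; this is an assumption about the check's shape, not Wine's actual test:

/* Compile probe: type-checks only against the modern three-argument glibc
 * prototype of sched_setaffinity. An assumed sketch, not Wine's real check. */
#define _GNU_SOURCE
#include <sched.h>

int main(void)
{
    cpu_set_t mask;
    CPU_ZERO(&mask);
    CPU_SET(0, &mask);
    return sched_setaffinity(0, sizeof(mask), &mask);
}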
2013 Feb 18
9
[PATCH 0/5] vringh
This introduces vringh, a set of generic accessors for virtio rings (host side).
There's a host-side implementation in vhost, but it assumes that the rings are
in userspace, and is tied to the vhost implementation. I have patches to adapt
it to use vringh, but I'm pushing this in the next merge window for Sjur, who has
CAIF patches which need it.
This also includes a test program in
2013 Jan 17
8
[PATCH 1/6] virtio_host: host-side implementation of virtio rings.
Getting use of virtio rings correct is tricky, and a recent patch saw
an implementation of in-kernel rings (as separate from userspace).
This patch attempts to abstract the business of dealing with the
virtio ring layout from the access (userspace or direct); to do this,
we use function pointers, which gcc inlines correctly.
Signed-off-by: Rusty Russell <rusty at rustcorp.com.au>
---
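The function-pointer technique the cover letter describes is the usual "ops table" pattern: the ring-walking code is written once against a struct of function pointers, and when the ops struct is visible at compile time gcc can inline the indirect calls. A generic sketch of the idea; the names are illustrative, not the actual vringh interface:

/* Generic sketch: abstracting "how descriptor bytes are fetched" behind
 * function pointers so one walking routine serves userspace and direct
 * access. Names are illustrative, not the real vringh API. */
#include <stdio.h>
#include <string.h>

struct ring_ops {
    int (*getdesc)(void *dst, const void *src, size_t len);
};

/* direct (kernel-memory style) access: a plain copy */
static int getdesc_direct(void *dst, const void *src, size_t len)
{
    memcpy(dst, src, len);
    return 0;
}

/* walking code written once against the ops table */
static int read_entry(const struct ring_ops *ops, void *dst,
                      const void *ring, size_t len)
{
    return ops->getdesc(dst, ring, len);
}

int main(void)
{
    static const struct ring_ops direct_ops = { .getdesc = getdesc_direct };
    char ring[8] = "desc0";
    char out[8];

    read_entry(&direct_ops, out, ring, sizeof(ring));
    printf("%s\n", out);
    return 0;
}

A second ops instance (e.g. one that copies from userspace) would reuse read_entry unchanged, which is the abstraction the patch is after.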
2014 Sep 01
6
[PATCH v4 0/4] virtio: Clean up scatterlists and use the DMA API
This fixes virtio on Xen guests as well as on any other platform
that uses virtio_pci on which physical addresses don't match bus
addresses.
This can be tested with:
virtme-run --xen xen --kimg arch/x86/boot/bzImage --console
using virtme from here:
https://git.kernel.org/cgit/utils/kernel/virtme/virtme.git
Without these patches, the guest hangs forever. With these patches,
2014 Aug 28
6
[PATCH v3 0/5] virtio: Clean up scatterlists and use the DMA API
This fixes virtio on Xen guests as well as on any other platform
that uses virtio_pci on which physical addresses don't match bus
addresses.
This can be tested with:
virtme-run --xen xen --kimg arch/x86/boot/bzImage --console
using virtme from here:
https://git.kernel.org/cgit/utils/kernel/virtme/virtme.git
Without these patches, the guest hangs forever. With these patches,