Displaying 20 results from an estimated 51 matches for "thread1".
2007 Jul 27
2
[LLVMdev] Forcing JIT of all functions before execution starts (was: Implementing sizeof)
...xt stuff, which
surprisingly enough seems to work *almost* perfectly under the JITted
environment, with the exception that the JITter doesn't like running in
anything other than the primary fibre. For example,
#include <stdio.h>
#include "../../mcp/tools/mcp2/mcpapi.h"
void thread1(void *udata)
{
    int i;
    for(i=0; i<10; i++)
    {
        printf("--- thread1: Hello (%d)\n", i);
    }
}

void thread2(void *udata)
{
    int i;
    for(i=0; i<10; i++)
    {
        printf("--- thread2 Hello (%d)\n", i);
    }
}

int main(int argc, char **argv)
{
    printf("--- Running threads ser...
2007 Jul 27
0
[LLVMdev] Implementing sizeof
Check out http://nondot.org/sabre/LLVMNotes
-Chris
http://nondot.org/sabre
http://llvm.org
On Jul 27, 2007, at 12:00 PM, Sarah Thompson <thompson at email.arc.nasa.gov> wrote:
> Hi folks,
>
> Assuming that I'm writing a pass and that for bizarre reasons I need
> to
> programmatically do the equivalent of a C/C++ sizeof on a Value (or a
> Type, it doesn't
2007 Jul 27
3
[LLVMdev] Implementing sizeof
Hi folks,
Assuming that I'm writing a pass and that for bizarre reasons I need to
programmatically do the equivalent of a C/C++ sizeof on a Value (or a
Type, it doesn't matter which really), yielding a result in bytes, what
is the known-safe way to do this? I notice that doing something like
struct thingy
{
... some stuff ...
};
...
printf("Size = %d",
2018 Jun 08
2
XRay FDR mode doesn’t log main thread calls
...< __xray_log_flushLog() << std::endl;
// CHECK: Flush status {{.*}}
__xray_unpatch();
return 0;
}
// Check that we're able to see two threads, each entering and exiting fA().
// TRACE-DAG: - { type: 0, func-id: [[FIDA:[0-9]+]], function: {{.*fA.*}}, cpu: {{.*}}, thread: [[THREAD1:[0-9]+]], kind: function-enter, tsc: {{[0-9]+}} }
// TRACE: - { type: 0, func-id: [[FIDA]], function: {{.*fA.*}}, cpu: {{.*}}, thread: [[THREAD1]], kind: function-{{exit|tail-exit}}, tsc: {{[0-9]+}} }
// TRACE-DAG: - { type: 0, func-id: [[FIDA]], function: {{.*fA.*}}, cpu: {{.*}}, thread: [[T...
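For orientation, the shape of such a test is two threads each calling an XRay-instrumented fA(), so both an enter and an exit record per thread should appear in the trace. A rough reconstruction of that skeleton (only fA and the check lines above come from the post; everything else is an assumption):

#include <pthread.h>

[[clang::xray_always_instrument]] void fA() { /* some visible work */ }

void *thread_body(void *) {
  fA();                         // each thread enters and exits fA once
  return nullptr;
}

void run_two_threads() {
  pthread_t t1, t2;
  pthread_create(&t1, nullptr, thread_body, nullptr);
  pthread_create(&t2, nullptr, thread_body, nullptr);
  pthread_join(t1, nullptr);
  pthread_join(t2, nullptr);
}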
2017 Feb 21
2
no connectivity to some hosts behind tinc for the first few seconds
...e is one issue I'm not able to
solve: *sometimes*, connectivity to *some* destinations does not work
for the first few seconds.
To demonstrate:
$ mongo mongo.example.com:27017
MongoDB shell version: 3.2.12
connecting to: mongo.example.com:27017/test
2017-02-21T03:29:30.243+0000 W NETWORK [thread1] Failed to connect to
10.1.2.3:27017 after 5000ms milliseconds, giving up.
2017-02-21T03:29:30.243+0000 E QUERY [thread1] Error: couldn't
connect to server mongo.example.com:27017, connection attempt failed :
connect at src/mongo/shell/mongo.js:231:14
@(connect):1:6
exception: connect fai...
2017 Dec 05
2
How can I check backtrace files ?
Hello,
I carefully read [1] which details how backtrace files can be produced.
Maybe this seems natural to some, but how can I go one step further and
check that the produced XXX-thread1.txt, XXX-brief.txt, ... files are OK?
In other words, where can I find an example of how to use one of those
files and check for myself that, if the system ever fails, I won't have to
wait for another failure to provide the required data to support teams?
Best regards
[1] https://wiki.asterisk.or...
2014 Jun 19
2
About memory index/search in multithread program
...;t support memory index/search?
I know there is a method that can create an in-memory database, like this:
Xapian::WritableDatabase db(Xapian::InMemory::open());
But if I use this in a multithreaded program, I need to create many
databases:
Xapian::WritableDatabase db1(Xapian::InMemory::open()); // used in thread1
Xapian::WritableDatabase db2(Xapian::InMemory::open()); // used in thread2
because the WritableDatabase object isn't thread-safe, and using a lock is slow.
So, is there a solution where one database can be used by many threads?
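One alternative to a database per thread is to share a single WritableDatabase and serialize access with a mutex; whether that beats per-thread databases depends on the workload. A minimal sketch, assuming the InMemory backend from the quoted code (the class and method names here are illustrative):

// Sketch: one shared in-memory Xapian database guarded by a mutex,
// since Xapian::WritableDatabase itself is not thread-safe.
#include <mutex>
#include <string>
#include <xapian.h>

class SharedIndex {
  Xapian::WritableDatabase db_{Xapian::InMemory::open()};
  std::mutex mu_;
public:
  void add(const std::string &text) {
    Xapian::Document doc;             // build the document outside the lock
    doc.set_data(text);
    Xapian::TermGenerator tg;
    tg.set_document(doc);
    tg.index_text(text);
    std::lock_guard<std::mutex> lock(mu_);  // serialize all database access
    db_.add_document(doc);
  }
};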
2008 Jun 26
4
Pfilestat vs. prstat
...e pfilestat script (I think) assumes that a "read entry" will be followed by a "read return" probe. On a per-thread basis, I can see this happening, but if the application has multiple threads/readers/writers, isn't it possible to get interleaved syscalls? For example,
Thread1-> read entry
Thread2 -> write entry
Thread1 -> read return
Thread2 -> write return
If so, then shouldn't the pfilestat script be using thread-local variables for timing instead of global variables?
2014 Apr 25
2
[LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
...real: 3.743; user: 7.280; system: 0.000
>>
>> % cat coverage_mt_vec.cc
>> #include <pthread.h>
>> #include <vector>
>>
>> std::vector<int> v(1000);
>>
>> __attribute__((noinline)) void foo() { v[0] = 42; }
>>
>> void *Thread1(void *) {
>> for (int i = 0; i < 100000000; i++)
>> foo();
>> return 0;
>> }
>>
>> __attribute__((noinline)) void bar() { v[999] = 66; }
>>
>> void *Thread2(void *) {
>> for (int i = 0; i < 100000000; i++)
>> bar();
>&g...
2018 Feb 02
0
sanitizer problems with dynamic thread local storage
...test/msan/POWERPC64/Output/dtls_test.c.tmp-so.so
-shared
/home/seurer/llvm/build/llvm-test/projects/compiler-rt/test/msan/POWERPC64/Output/dtls_test.c.tmp
2>&1
--
Exit Code: 77
Command Output (stdout):
--
==12029==WARNING: MemorySanitizer: use-of-uninitialized-value
#0 0x10d636528 in Thread1
/home/seurer/llvm/llvm-test/projects/compiler-rt/test/msan/dtls_test.c:22:7
#1 0x10d635630 in __msan::MsanThread::ThreadStart()
/home/seurer/llvm/llvm-test/projects/compiler-rt/lib/msan/msan_thread.cc:77
#2 0x10d5c07c0 in MsanThreadStartFunc(void*)
/home/seurer/llvm/llvm-test/projects/...
2014 Apr 23
4
[LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
...c && time ./a.out
> TIME: real: 3.743; user: 7.280; system: 0.000
>
> % cat coverage_mt_vec.cc
> #include <pthread.h>
> #include <vector>
>
> std::vector<int> v(1000);
>
> __attribute__((noinline)) void foo() { v[0] = 42; }
>
> void *Thread1(void *) {
> for (int i = 0; i < 100000000; i++)
> foo();
> return 0;
> }
>
> __attribute__((noinline)) void bar() { v[999] = 66; }
>
> void *Thread2(void *) {
> for (int i = 0; i < 100000000; i++)
> bar();
> return 0;
> }
>
>...
2014 Apr 18
4
[LLVMdev] multithreaded performance disaster with -fprofile-instr-generate (contention on profile counters)
On Fri, Apr 18, 2014 at 12:13 AM, Dmitry Vyukov <dvyukov at google.com> wrote:
> Hi,
>
> This is a long thread, so I will combine several comments into a single email.
>
>
> >> - 8-bit per-thread counters, dumping into central counters on overflow.
> >The overflow will happen very quickly with 8bit counter.
>
> Yes, but it reduces contention by 256x (a thread
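The scheme under discussion, small per-thread counters that spill into the shared 64-bit counters only on overflow, can be sketched as follows. The names are made up; this is only the contention-reduction idea, not the actual -fprofile-instr-generate implementation:

// Sketch: an 8-bit per-thread counter that flushes into a central atomic
// 64-bit counter every 256 increments, so the shared cache line is touched
// 256x less often than with a plain atomic increment per event.
#include <atomic>
#include <cstdint>

std::atomic<uint64_t> central_counter{0};   // shared, contended cache line
thread_local uint8_t local_counter = 0;     // private to each thread

inline void count_event() {
  if (++local_counter == 0)                 // wrapped past 255: 256 events seen
    central_counter.fetch_add(256, std::memory_order_relaxed);
}

// On thread exit the remaining (<256) increments still have to be flushed:
void flush_remainder() {
  central_counter.fetch_add(local_counter, std::memory_order_relaxed);
  local_counter = 0;
}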
2023 Jun 06
2
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
...6/6/23 4:49 AM, Stefano Garzarella wrote:
> On Mon, Jun 05, 2023 at 01:57:30PM -0500, Mike Christie wrote:
>> If userspace does VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER we
>> can race where:
>> 1. thread0 calls vhost_transport_send_pkt -> vhost_work_queue
>> 2. thread1 does VHOST_SET_OWNER which calls vhost_worker_create.
>> 3. vhost_worker_create will set the dev->worker pointer before setting
>> the worker->vtsk pointer.
>> 4. thread0's vhost_work_queue will see the dev->worker pointer is
>> set and try to call vhost_task_wa...
2023 Jun 06
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
...arzarella wrote:
> > On Mon, Jun 05, 2023 at 01:57:30PM -0500, Mike Christie wrote:
> >> If userspace does VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER we
> >> can race where:
> >> 1. thread0 calls vhost_transport_send_pkt -> vhost_work_queue
> >> 2. thread1 does VHOST_SET_OWNER which calls vhost_worker_create.
> >> 3. vhost_worker_create will set the dev->worker pointer before setting
> >> the worker->vtsk pointer.
> >> 4. thread0's vhost_work_queue will see the dev->worker pointer is
> >> set and try t...
2017 Oct 13
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
...out_of_memory() when it reached __alloc_pages_may_oom() and held oom_lock
> mutex. Since vb->balloon_lock mutex is already held by fill_balloon(), it
> will cause OOM lockup. Thus, do not wait for vb->balloon_lock mutex if
> leak_balloon() is called from out_of_memory().
>
> Thread1 Thread2
> fill_balloon()
> takes a balloon_lock
> balloon_page_enqueue()
> alloc_page(GFP_HIGHUSER_MOVABLE)
> direct reclaim (__GFP_FS context) takes a fs lock
> waits for that fs lock...
2017 Oct 13
2
[PATCH] virtio: avoid possible OOM lockup at virtballoon_oom_notify()
...out_of_memory() when it reached __alloc_pages_may_oom() and held oom_lock
> mutex. Since vb->balloon_lock mutex is already held by fill_balloon(), it
> will cause OOM lockup. Thus, do not wait for vb->balloon_lock mutex if
> leak_balloon() is called from out_of_memory().
>
> Thread1 Thread2
> fill_balloon()
> takes a balloon_lock
> balloon_page_enqueue()
> alloc_page(GFP_HIGHUSER_MOVABLE)
> direct reclaim (__GFP_FS context) takes a fs lock
> waits for that fs lock...
2023 Jun 06
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
On Mon, Jun 05, 2023 at 01:57:30PM -0500, Mike Christie wrote:
>If userspace does VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER we
>can race where:
>1. thread0 calls vhost_transport_send_pkt -> vhost_work_queue
>2. thread1 does VHOST_SET_OWNER which calls vhost_worker_create.
>3. vhost_worker_create will set the dev->worker pointer before setting
>the worker->vtsk pointer.
>4. thread0's vhost_work_queue will see the dev->worker pointer is
>set and try to call vhost_task_wake using not yet set...
2023 Jun 05
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
If userspace does VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER we
can race where:
1. thread0 calls vhost_transport_send_pkt -> vhost_work_queue
2. thread1 does VHOST_SET_OWNER which calls vhost_worker_create.
3. vhost_worker_create will set the dev->worker pointer before setting
the worker->vtsk pointer.
4. thread0's vhost_work_queue will see the dev->worker pointer is
set and try to call vhost_task_wake using not yet set worker->vtsk...
2023 Jun 05
1
[PATCH 1/1] vhost: Fix crash during early vhost_transport_send_pkt calls
If userspace does VHOST_VSOCK_SET_GUEST_CID before VHOST_SET_OWNER we
can race where:
1. thread0 calls vhost_transport_send_pkt -> vhost_work_queue
2. thread1 does VHOST_SET_OWNER which calls vhost_worker_create.
3. vhost_worker_create will set the dev->worker pointer before setting
the worker->vtsk pointer.
4. thread0's vhost_work_queue will see the dev->worker pointer is
set and try to call vhost_task_wake using not yet set worker->vtsk...
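The underlying bug is publishing a pointer to a structure before the structure is fully initialized. A userspace analogue of the fix (not the vhost kernel code itself, and all names below are illustrative) is to finish initialization first and then publish the pointer with release semantics, reading it back with acquire semantics:

// Sketch: initialize the worker's task (standing in for worker->vtsk)
// *before* publishing the worker pointer (standing in for dev->worker),
// so a concurrent reader never sees a half-initialized object.
#include <atomic>

struct Worker {
  void *task = nullptr;
};

std::atomic<Worker *> g_worker{nullptr};

void create_worker(void *task) {
  Worker *w = new Worker;
  w->task = task;                                 // fully initialize first...
  g_worker.store(w, std::memory_order_release);   // ...then publish
}

void queue_work() {
  Worker *w = g_worker.load(std::memory_order_acquire);
  if (!w)
    return;   // no worker yet; the caller must handle this case
  // safe to use w->task here: it was set before the pointer became visible
}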
2017 Oct 13
4
[PATCH] virtio_balloon: fix deadlock on OOM
...all_chain() callback via
out_of_memory() when it reached __alloc_pages_may_oom() and held oom_lock
mutex. Since vb->balloon_lock mutex is already held by fill_balloon(), it
will cause OOM lockup. Thus, do not wait for vb->balloon_lock mutex if
leak_balloon() is called from out_of_memory().
Thread1                                      Thread2
 fill_balloon()
   takes a balloon_lock
   balloon_page_enqueue()
     alloc_page(GFP_HIGHUSER_MOVABLE)
       direct reclaim (__GFP_FS context)
         takes a fs lock
           waits for that fs lock            alloc_page(GFP_NOFS...
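The fix described above comes down to a simple rule: a callback that can run while the lock is already held further up the call chain must not block on that lock. A generic userspace sketch of that rule using try_lock (illustrative only, not the virtio_balloon code; the names are made up):

// Sketch: an OOM-style callback gives up immediately if the lock is
// unavailable instead of deadlocking against the thread that already
// holds it and is (indirectly) waiting for this callback to make progress.
#include <cstddef>
#include <mutex>

std::mutex balloon_lock;   // illustrative stand-in for vb->balloon_lock

// May be called from a reclaim/OOM context that is nested under a fill
// operation which already holds balloon_lock.
size_t oom_release_some_pages() {
  std::unique_lock<std::mutex> lock(balloon_lock, std::try_to_lock);
  if (!lock.owns_lock())
    return 0;              // do not wait: report "freed nothing" and let OOM continue
  size_t released = 0;
  // ... release pages while holding the lock, counting them in `released` ...
  return released;
}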