Displaying 5 results from an estimated 5 matches for "lwps".
2008 Oct 06 - 7 - RFE for lwpkill() action
Hi,
I'm trying to use dtrace to signal threads in my app when certain events happen, and the raise() action seemed adequate -- looking at the kernel sources, it sends a signal to the currently executing thread, and a quick microbenchmark confirmed this. However, further testing showed that if lots of threads hit the event simultaneously, the signal only gets delivered once, and may also
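A minimal sketch of the kind of probe being described, assuming destructive actions are enabled with -w and using a placeholder function name (process_event) and pid; raise() delivers the signal to the thread that fired the probe:

  # dtrace -w -n '
    pid$target::process_event:entry
    {
        raise(SIGUSR1);   /* signal the currently executing thread */
    }' -p 12345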
2009 Apr 22 - 1 - prstat -Z and load average values in different zones give same numeric results
Folks,
Perplexing question about load average display with prstat -Z.
Solaris 10 OS U4 (08/07).
We have 4 zones with very different processes and workloads.
The prstat -Z command issued within each of the zones correctly displays
the number of processes and lwps, but the load average values look
exactly the same on all non-global zones. I mean all 3 values (1, 5, 15 minute
load averages) are the same, which is all but impossible given the different
workloads.
Is there a bug here?
Thanks,
-Nobel
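One way to compare the values directly (a sketch; zoneA/zoneB are placeholder zone names, prstat -Z and zlogin are standard Solaris 10 tools):

  # prstat -Z 5 1          (per-zone summary, one 5-second sample, from the global zone)
  # zlogin zoneA uptime    (load averages as seen inside each zone)
  # zlogin zoneB uptime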
2008 Apr 18 - 2 - plockstat: failed to add to aggregate: Abort due to drop
...0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 149 0 150 0 java/23
21162 7677 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 165 0 167 0 java/21
21162 7677 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 163 0 165 0 java/18
21162 7677 0.0 0.0 0.0 0.0 0.0 100 0.0 0.0 135 0 136 0 java/16
Total: 1 processes, 88 lwps, load averages: 9.14, 10.79, 7.43
# plockstat -e 5 -s 10 -A -p 21162
plockstat: failed to add to aggregate: Abort due to drop
Any idea why it failed?
This is Solaris 10 11/06 s10s_u3wos_10 SPARC with KU 118833-36 on an E2900
with UltraSPARC-IV+ processors.
Thanks,
James Yang
Global Unix Suppor...
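Aggregation drops can often be reduced by enlarging the DTrace buffers plockstat uses; a sketch, assuming the -x option for passing DTrace runtime options (bufsize, aggsize) is available on this release:

  # plockstat -x bufsize=16m -x aggsize=16m -e 5 -s 10 -A -p 21162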
2010 Jun 25 - 11 - Maximum zfs send/receive throughput
It seems we are hitting a boundary with zfs send/receive over a network
link (10Gb/s). We can see peak values of up to 150MB/s, but on average
about 40-50MB/s are replicated. This is far below the bandwidth that
a 10Gb link can offer.
Is it possible that ZFS is giving replication too low a priority or
throttling it too much?
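One way to isolate where the limit is (a sketch; the dataset, snapshot, and host names are placeholders): time the send stream locally, then the same stream over the wire with no receive, and divide the snapshot size by the elapsed time to get the sustained rate for each stage:

  # time zfs send -R tank/data@snap > /dev/null                      (send side only)
  # time zfs send -R tank/data@snap | ssh recvhost "cat > /dev/null" (send plus network)

If both stages run at roughly the same 40-50MB/s, the sending side rather than the link is the bottleneck.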
2009 Nov 11 - 0 - High load when 'zfs send' to a file
Hello,
when I run 'zfs send' to a file, the system (Ultra Sparc 45) showed this load:
# zfs send -R backup/zones@moving_09112009 > /tank/archive_snapshots/exa_all_zones_09112009.snap
Total: 107 processes, 951 lwps, load averages: 54.95, 59.46, 50.25
Is this normal?
Regards,
Jan Hlodan
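A sketch of follow-up commands that would show whether the load comes from CPU time or from threads waiting (standard Solaris tools; the 5-second interval is arbitrary):

  # prstat -mL 5            (per-thread microstates: USR/SYS vs SLP/LAT)
  # iostat -xnz 5           (per-device utilization and wait times)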