Displaying 12 results from an estimated 12 matches for "248k".
2011 Sep 01
1
No buffer space available - loses network connectivity
 OBJS ACTIVE  USE OBJ SIZE  SLABS OBJ/SLAB CACHE SIZE NAME
...96% 1.00K 198 4 792K size-1024
649 298 45% 0.06K 11 59 44K pid
600 227 37% 0.09K 15 40 60K journal_head
590 298 50% 0.06K 10 59 40K delayacct_cache
496 424 85% 0.50K 62 8 248K size-512
413 156 37% 0.06K 7 59 28K fs_cache
404 44 10% 0.02K 2 202 8K biovec-1
390 293 75% 0.12K 13 30 52K bio
327 327 100% 4.00K 327 1 1308K size-4096
320 190 59% 0.38K 32...
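A cache whose footprint keeps climbing between samples is the usual suspect when "No buffer space available" points at kernel memory exhaustion. A minimal sketch for watching that (hypothetical, not from the thread; assumes the /proc/slabinfo 2.x format and root access):

#!/usr/bin/env python3
# Print the largest slab caches, roughly like the slabtop-style
# listing quoted above. /proc/slabinfo 2.x rows look like:
#   name active_objs num_objs objsize objperslab pagesperslab ...

def top_slabs(n=10):
    rows = []
    with open("/proc/slabinfo") as f:
        for line in f:
            if line.startswith(("slabinfo", "#")):
                continue  # skip the version banner and column comment
            fields = line.split()
            active, total, objsize = (int(x) for x in fields[1:4])
            rows.append((total * objsize, fields[0], active, total))
    for size, name, active, total in sorted(rows, reverse=True)[:n]:
        print(f"{name:<24} {active:>8}/{total:<8} {size // 1024:>8}K")

if __name__ == "__main__":
    top_slabs()

Sample it a few times over the day; the cache that only ever grows is the one to dig into.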
2013 Sep 26
1
Problems sending log to rsyslog
...then check the asterisk log directory:
root@voip:~# ls -lh /var/log/asterisk/
total 3.7M
drwxr-xr-x 2 asterisk asterisk 4.0K Jul 22 20:57 cdr-csv
drwxr-xr-x 2 asterisk asterisk 4.0K Jun 28 14:16 cdr-custom
-rw-rw---- 1 asterisk asterisk 252K Sep 26 09:37 messages
-rw-rw---- 1 asterisk asterisk 248K Sep 22 05:14 messages.1
-rw-r----- 1 syslog adm 0 Sep 26 06:47 messages.log
-rw-rw---- 1 asterisk asterisk 118 Sep 26 10:07 queue_log
root@voip:~#
Not much seems to be getting written to messages.log compared with
messages. Is there anything I missed?
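One quick way to split the problem in half (a hypothetical test, assuming rsyslog listens on the local /dev/log socket) is to inject a tagged message yourself and grep for it:

#!/usr/bin/env python3
# Push a tagged test message into the local syslog socket, then run:
#   grep asterisk-test /var/log/asterisk/messages.log

import logging
import logging.handlers

logger = logging.getLogger("asterisk-test")
handler = logging.handlers.SysLogHandler(
    address="/dev/log",
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter("asterisk-test: %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("rsyslog delivery check")

If the test line shows up but Asterisk's own messages don't, the gap is likely on the Asterisk side (e.g. no syslog destination in logger.conf) rather than in rsyslog.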
2012 Jul 18
1
About GlusterFS
...en I run "glusterfs -f /etc/glusterfs/glusterfs.vol /mnt/glusterfs" on the
client side, it works. But when I check "df -h",
the error is:
Filesystem Size Used Avail Use% Mounted on
/dev/sda6 290G 83G 193G 30% /
none 984M 248K 983M 1% /dev
none 988M 180K 988M 1% /dev/shm
none 988M 224K 988M 1% /var/run
none 988M 0 988M 0% /var/lock
none 988M 0 988M 0% /lib/init/rw
none 290G 83G 193G 30% /var/lib/ureadahead/de...
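Nothing glusterfs-shaped appears in that df output, which suggests the mount never actually attached. A small check (a sketch; /mnt/glusterfs is the mount point from the post):

#!/usr/bin/env python3
# Is the target really a mount point, and what filesystem backs it?
# df -h silently omits a mount that never attached, so check directly.

import os

target = "/mnt/glusterfs"
print("mounted:", os.path.ismount(target))

with open("/proc/mounts") as f:
    for line in f:
        if target in line:
            print(line.strip())  # device, mount point, fstype, options

If ismount prints False, the glusterfs client exited after parsing the volfile, and its log file is the next place to look.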
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of its 2.8TB in use (3 stripes of 950GB or so, each of
which is a RAID5 volume on the adaptec card). We have snapshots every 4
hours for the first few days. If you add up the snapshot references, the
total appears somewhat high versus daily use (mostly mailboxes, spam,
etc. changing), but say an aggregate of no more than 400+MB a
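One accounting detail worth ruling out first: a snapshot's USED value only counts blocks unique to that snapshot, so summing the per-snapshot numbers typically understates total snapshot space, which by itself can make growth look unaccounted for. A sketch for eyeballing it (hypothetical, assuming a zfs list that supports -H and -p):

#!/usr/bin/env python3
# Total the USED column across snapshots to compare against the
# expected daily churn. -H drops the header, -p prints exact bytes.

import subprocess

out = subprocess.run(
    ["zfs", "list", "-H", "-p", "-t", "snapshot", "-o", "name,used"],
    capture_output=True, text=True, check=True).stdout

total = 0
for line in out.splitlines():
    name, used = line.split("\t")
    total += int(used)
    print(f"{name:<50} {int(used) / 2**20:10.1f} MB")
print(f"{'sum of per-snapshot USED':<50} {total / 2**20:10.1f} MB")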
2004 Mar 27
0
Oops with md/ext3 on 2.4.25 on alpha architecture
...t;hdc1,2>
md: running: <hdc1><hda1>
md: hdc1's event counter: 00000001
md: hda1's event counter: 00000001
md: md0: raid array is not clean -- starting background reconstruction
md: RAID level 1 does not need chunksize! Continuing anyway.
md0: max total readahead window set to 248k
md0: 1 data-disks, max readahead per data-disk: 248k
raid1: device hdc1 operational as mirror 0
raid1: device hda1 operational as mirror 1
raid1: raid set md0 not clean; reconstructing mirrors
raid1: raid set md0 active with 2 out of 2 mirrors
md: updating md0 RAID superblock on device
md: hdc1 [ev...
2010 Apr 08
1
ZFS monitoring - best practices?
We're starting to grow our ZFS environment and really need to start
standardizing our monitoring procedures.
OS tools are great for spot troubleshooting and sar can be used for
some trending, but we'd really like to tie this into an SNMP based
system that can generate graphs for us (via RRD or other).
Whether or not we do this via our standard enterprise monitoring tool
or
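As a starting point for the trending side, a minimal collector (a sketch, assuming a zpool list that supports -H for headerless output and -o for column selection) that emits one timestamped line per pool, ready to feed an RRD update or an SNMP extend script:

#!/usr/bin/env python3
# Sample pool capacity figures in a machine-readable form.

import subprocess
import time

out = subprocess.run(
    ["zpool", "list", "-H", "-o", "name,size,alloc,free,cap,health"],
    capture_output=True, text=True, check=True).stdout

ts = int(time.time())
for line in out.splitlines():
    print(ts, *line.split("\t"))  # timestamp name size alloc free cap health

Run it from cron and append to a file, or wire it into net-snmp's "extend" mechanism and graph from there.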
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub.)
Both zpool iostat and iostat -Xn show lots of idle disk time, no
above-average service times, and no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
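For scale, a quick back-of-the-envelope (with a hypothetical 1TB of data, since the post doesn't give the pool's used size) shows why 400K/s is pathological:

#!/usr/bin/env python3
# How long a scrub takes at the reported rate.
rate = 400 * 1024          # 400K/s in bytes per second
per_tb = 2**40 / rate      # seconds to scrub one terabyte
print(f"{per_tb / 86400:.0f} days per TB at 400K/s")  # ~31 days

At 100MB/s the same terabyte scrubs in about three hours, which matches the MB/s-range rates the poster saw when the pool was new.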
2010 Mar 02
3
Very unresponsive, sometimes stalling domU (5.4, x86_64)
----total-cpu-usage---- -dsk/total- -net/total- ---paging-- ---system--
usr sys idl wai hiq siq| read  writ| recv  send|  in   out | int   csw
...0 0| 40k 0 | 126B 178B| 0 0 | 27 31
0 0 32 68 0 0| 0 0 |3034B 178B| 0 0 | 61 15
0 3 63 34 0 0| 280k 1840k|6350B 820B| 0 0 | 378 354
0 0 44 56 0 0| 0 336k| 66B 178B| 0 0 | 73 88
0 0 50 50 0 0|8192B 248k| 66B 178B| 0 0 | 62 52
0 0 50 50 0 0| 336k 200k| 126B 178B| 0 0 | 65 71
0 0 55 45 0 0| 72k 368k| 126B 178B| 0 0 | 80 100
0 0 52 48 0 0| 192k 176k| 66B 178B| 0 0 | 54 69
0 0 41 59 0 0| 112k 272k| 66B 178B|...
2010 Nov 11
8
zpool import panics
Object  lvl  iblk  dblk  dsize  lsize  %full  type
...2 16K 128K 650K 3.12M 100.00 bplist
1893 1 16K 512 1.50K 512 100.00 DSL dataset next clones
1910 1 16K 128K 21.0K 128K 100.00 bplist
1911 1 16K 512 1.50K 512 100.00 DSL dataset next clones
1913 2 16K 4K 248K 144K 100.00 SPA space map
1986 2 16K 4K 227K 140K 100.00 SPA space map
2203 2 16K 4K 4.50K 4K 100.00 SPA space map
2204 2 16K 4K 36.0K 16K 100.00 SPA space map
2205 2 16K 4K 218K 136K 100.00 SPA space m...
2019 Apr 30
6
Disk space and RAM requirements in docs
...X/special
260K build/utils/not/CMakeFiles
260K build/tools/clang/test/Index/Output/comment-custom-block-command.cpp.tmp
256K build/utils/not/CMakeFiles/not.dir
252K build/tools/polly/lib/External/CMakeFiles/PollyISL.dir/isl/imath
252K build/tools/clang/test/Modules/Output/macros.c.tmp/3K8X5FQSVUXUN
248K build/tools/clang/test/CXX/class
248K build/tools/clang/test/CoverageMapping
248K build/examples/Kaleidoscope
244K build/tools/clang/test/PCH/Output/modified-module-dependency.m.tmp-dir
244K build/tools/clang/test/CoverageMapping/Output
240K build/tools/clang/test/CXX/dcl.dcl/dcl.spec
232K build/to...
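The listing reads like plain du -k output sorted by size; for completeness, a rough equivalent (a sketch, not what the original poster ran; unlike du it counts only the files directly inside each directory, not subdirectory totals):

#!/usr/bin/env python3
# Rank directories under a tree by the size of the files they contain.

import os
import sys

def dir_sizes(root):
    for dirpath, _dirnames, filenames in os.walk(root):
        total = 0
        for name in filenames:
            try:
                total += os.lstat(os.path.join(dirpath, name)).st_size
            except OSError:
                pass  # vanished or unreadable file; skip it
        yield total, dirpath

root = sys.argv[1] if len(sys.argv) > 1 else "build"
for total, path in sorted(dir_sizes(root), reverse=True)[:20]:
    print(f"{total // 1024:>8}K {path}")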