Displaying 7 results from an estimated 7 matches for "248m".
2010 Jun 29 · 0 replies · Problem in migrating DRBD from CentOS4.4 to CentOS5.5
...=====================
Filesystem                   Size  Used Avail Use% Mounted on
/dev/mapper/vgroot-LogVol02  3.0G  1.1G  1.7G  39% /
/dev/mapper/vgroot-LogVol03   11G  8.5G  1.7G  84% /usr
/dev/hda1                    494M   37M  432M   8% /boot
tmpfs                        248M     0  248M   0% /dev/shm
/dev/mapper/vgroot-LogVol01  8.5G  148M  7.9G   2% /home
/dev/mapper/vgroot-LogVol00   13G  161M   12G   2% /backup
Currently we are migrating to CentOS 5.5.
Step 1: we have migrated to CentOS 5.5 alone and used the older DRBD 8.0.0
Ve...
2012 Mar 24 · 3 replies · FreeBSD 9.0 - GPT boot problems?
...part show ada0
=>        34  250069613  ada0  GPT  (119G)
          34        128     1  freebsd-boot  (64k)
         162  119537664     2  freebsd-ufs  (57G)
   119537826    8388608     3  freebsd-swap  (4.0G)
   127926434  121634816     4  freebsd-ufs  (58G)
   249561250     508397        - free -  (248M)
and root is on ada0p2, with swap on ada0p3:
root@kg-vm2# df -h
Filesystem    Size  Used  Avail  Capacity  Mounted on
/dev/ada0p2    56G  2.3G    49G        4%  /
devfs         1.0k  1.0k     0B      100%  /dev
root@kg-vm2# swapinfo -h
Device      1K-blocks  Used  Avail  Capacity
/...
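Not from the thread itself, just a hedged sketch: with this layout, a common first step for a 9.0 GPT boot failure is to rewrite the protective MBR and the gptboot stage into the freebsd-boot partition, which is index 1 on ada0 in the gpart listing above.
gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 ada0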
2008 Jan 23 · 1 reply · FreeBSD 6.3-Release + squid 2.6.17 = Hang process.
...0004 in ?? ()
#7 0x080e6edb in ?? ()
#8 0xbfbfe4a4 in ?? ()
#9 0x00000004 in ?? ()
#10 0x00000000 in ?? ()
netstat -h 8:
118K 0 37M 204K 0 262M 0
121K 0 37M 204K 0 255M 0
124K 0 30M 204K 0 248M 0
116K 0 36M 201K 0 257M 0
117K 0 40M 202K 0 260M 0
120K 0 45M 205K 0 261M 0
120K 0 49M 201K 0 253M 0
106K 0 41M 178K 0...
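An aside that is not part of the original report: frames shown as ?? () usually mean gdb was attached without the squid executable or without debug symbols. A minimal sketch of re-attaching to the hung process (the path and PID are placeholders):
gdb /usr/local/sbin/squid 12345
(gdb) bt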
2013 Sep 26 · 29 replies · [Bug 69827] New: Uneven, jerky mouse movement, increasing CPU usage
...TIME+ COMMAND
14487 jmoe 20 0 2916m 1.0g 982m S 8.0 12.8 5:47.24 VirtualBox
14430 jmoe 20 0 1937m 178m 49m S 7.0 2.2 6:00.63 gnome-shell
14125 root 20 0 342m 93m 25m S 2.3 1.2 3:20.56 Xorg
15310 jmoe 20 0 1878m 248m 45m S 1.7 3.1 6:13.28 thunderbird-bin
16267 jmoe 20 0 340m 17m 12m S 0.7 0.2 0:28.97 gkrellm
14445 jmoe 20 0 1099m 30m 19m S 0.3 0.4 0:06.15 nautilus
18756 root 20 0 0 0 0 S 0.3 0.0 0:01.58 kworker/2:2
19684 roo...
2011 May 13 · 27 replies · Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3 GHz, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
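For reference, a sketch of the commands being described, using the placeholder pool name tank (the post does not name the pool):
zpool scrub tank          (start or restart a scrub)
zpool status -v tank      (scrub progress and any errors)
zpool iostat -v tank 5    (per-vdev throughput, sampled every 5 seconds)
iostat -xn 5              (extended per-device statistics)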
2012 Nov 03 · 0 replies · mtrr_gran_size and mtrr_chunk_size
...e: 16M chunk_size: 64M num_reg: 10 lose cover RAM: 8M
gran_size: 16M chunk_size: 128M num_reg: 10 lose cover RAM: 8M
gran_size: 16M chunk_size: 256M num_reg: 10 lose cover RAM: 8M
*BAD*gran_size: 16M chunk_size: 512M num_reg: 10 lose cover RAM: -248M
*BAD*gran_size: 16M chunk_size: 1G num_reg: 10 lose cover RAM: -504M
*BAD*gran_size: 16M chunk_size: 2G num_reg: 10 lose cover RAM: -504M
gran_size: 32M chunk_size: 32M num_reg: 10 lose cover RAM: 40M
gran_size: 32M chunk_size: 64M num_reg: 10 l...
2006 Aug 24 · 5 replies · unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1 TB of its 2.8 TB in use (3 stripes of 950 GB or so, each of
which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours
for the first few days. If you add up the snapshot references it appears
somewhat high versus daily use (mostly mailboxes, spam, etc. changing), but
say an aggregate of no more than 400+MB a