Displaying 5 results from an estimated 5 matches for "302m".
2018 May 08
1
mount failing client to gluster cluster.
.../dev/shm
tmpfs 5.0M 4.0K 5.0M 1% /run/lock
tmpfs 7.8G 0 7.8G 0% /sys/fs/cgroup
/dev/mapper/kvm01--vg-home 243G 61M 231G 1% /home
/dev/mapper/kvm01--vg-tmp 1.8G 5.6M 1.7G 1% /tmp
/dev/mapper/kvm01--vg-var 9.2G 302M 8.4G 4% /var
/dev/sda1 236M 63M 161M 28% /boot
tmpfs 1.6G 4.0K 1.6G 1% /run/user/115
tmpfs 1.6G 0 1.6G 0% /run/user/1000
glusterp1.graywitch.co.nz:/gv0 932G 247G 685G 27% /isos
also, I can mount the su...
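The df output above shows a GlusterFS volume (glusterp1.graywitch.co.nz:/gv0) already mounted at /isos. As a rough sketch only (server name and mount point are taken from the snippet; this is not the thread's actual fix), the usual native-client mount and a per-path check look like:

```shell
# Sketch only: mounting the volume shown in the df output above.
# Needs the glusterfs client package and root, and depends on the
# listed server being reachable, so it is left commented out:
#   mount -t glusterfs glusterp1.graywitch.co.nz:/gv0 /isos

# Any mount can then be confirmed the way the snippet does,
# limited to a single path (here / as a stand-in):
df -h / | tail -n 1
```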
2015 Jun 27
3
Anyone else think the latest Xorg fix is hogging stuff?
...4152 hardtolo  20   0  915m  28m  20m S  1.7  0.4 182:27.00 knotify4
     27 root      20   0     0    0    0 S  1.0  0.0  28:58.96 events/0
  12581 wild-bil  20   0  302m  14m 9.9m S  1.0  0.2   0:02.43 gnome-terminal
  26741 hardtolo  20   0 15300 1420  892 S  1.0  0.0   0:02.47 top
Anyone else pound the crap out of a desktop with FF and see Xorg getting
"fat"?
TIA for any clues or response.
Bill
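For the "Xorg getting fat" question above, a generic way to spot-check one process's memory footprint is to print its resident (RSS) and virtual (VSZ) sizes the way top does. A minimal sketch; the current shell's PID ($$) stands in for Xorg's so it runs anywhere:

```shell
# Print PID, resident set size (kB), virtual size (kB), and command name.
# Substitute Xorg's PID (e.g. from pidof Xorg) for $$ on a real desktop.
ps -o pid,rss,vsz,comm -p $$
```

Repeating this periodically and watching RSS climb is the simplest way to confirm a leak like the one the thread suspects.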
2016 Mar 08
0
OCFS2 showing "No space left on device" on a device with free space
...contig-bg /dev/drbd0
Here is the output before tunefs.ocfs2 was run
user at server:/storage/www/site.com/current/sites/default/files/tmp$ df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 10G 302M 9.2G 4% /
udev 10M 0 10M 0% /dev
snippet removed
/dev/mapper/vg1-storage 99G 2.5G 91G 3% /storage
/dev/drbd0 100G 43G 58G 43% /storage/shared <= Look here.
The question I would like to ask: is this a feature I can safely just roll
out,...
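The OCFS2 thread above involves tunefs.ocfs2, so its actual cause may be filesystem-specific. Still, a generic first check whenever a filesystem reports "No space left on device" while df -h shows free blocks is inode exhaustion, which produces the same ENOSPC error:

```shell
# Generic diagnostic, not the thread's specific fix:
df -h /   # block usage, as in the snippet
df -i /   # inode usage: IUse% at 100% also yields "No space left on device"
```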
2015 Jun 28
0
Anyone else think the latest Xorg fix is hogging stuff?
...>  4152 hardtolo  20   0  915m  28m  20m S  1.7  0.4 182:27.00 knotify4
>      27 root      20   0     0    0    0 S  1.0  0.0  28:58.96 events/0
>   12581 wild-bil  20   0  302m  14m 9.9m S  1.0  0.2   0:02.43 gnome-terminal
>   26741 hardtolo  20   0 15300 1420  892 S  1.0  0.0   0:02.47 top
>
> Anyone else pound the crap out of a desktop with FF and see Xorg getting
> "fat"?
>
> TIA for any clues o...
2008 Nov 18
3
High %system load.
...----------------
#top
last pid: 47964;  load averages: 1.26, 1.62, 1.75    up 0+19:17:13  17:11:06
287 processes: 10 running, 277 sleeping
CPU states:  2.2% user,  0.0% nice, 28.3% system,  0.2% interrupt, 69.3% idle
Mem: 1286M Active, 1729M Inact, 478M Wired, 131M Cache, 214M Buf, 302M Free
Swap: 8192M Total, 8192M Free
-------------------------------------------------------------------
#vmstat 5
procs      memory      page                    disks     faults      cpu
r b w   avm    fre   flt  re  pi  po  fr  sr  ad4 ad6   in   sy  cs  us sy id
1 24 0 4357672 419648 10598...
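The top and vmstat output in this last thread is from FreeBSD. On Linux the same user/system/idle split can be derived from /proc/stat; a minimal sketch (since-boot averages in jiffies, not an instantaneous sample like top's):

```shell
# /proc/stat's first "cpu" line: user nice system idle iowait irq softirq ...
# Compute the same percentages top reports in its "CPU states" line.
awk '/^cpu / { total = $2+$3+$4+$5+$6+$7+$8;
               printf "user %.1f%%  system %.1f%%  idle %.1f%%\n",
                      100*$2/total, 100*$4/total, 100*$5/total }' /proc/stat
```

A persistently high system percentage, as in the 28.3% shown above, points at kernel-side work (syscalls, I/O, interrupts) rather than the application's own computation.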