Displaying 8 results from an estimated 8 matches for "236m".
2008 Jun 01
1
capacity query
..._hwcap2.so.1          14G   4.0G    10G   28%  /lib/libc.so.1
fd                        0K     0K     0K    0%  /dev/fd
swap                    6.4G    28K   6.4G    1%  /tmp
swap                    6.4G    24K   6.4G    1%  /var/run
swap                    9.8G    24K   236M    1%  /swap
# swap -l
swapfile                 dev  swaplo    blocks      free
/dev/zvol/dsk/swap/vol 181,1       8  19922936  19687704
# zfs list
NAME       USED  AVAIL  REFER  MOUNTPOINT
swap      9.61G   236M  24.5K  /swap
swap/vol  9.61G   236M  9.61G  -
# zp...
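A hedged aside, not from the thread: one way to grow the nearly full swap zvol shown above. The pool and volume names come from the output; the new size is only illustrative.

# swap -d /dev/zvol/dsk/swap/vol   # remove the zvol from swap while resizing
# zfs set volsize=16g swap/vol     # grow the volume; 16g is an arbitrary example
# swap -a /dev/zvol/dsk/swap/vol   # add it back as a swap device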
2018 May 08
1
mount failing client to gluster cluster.
...                                      5.0M    1%  /run/lock
tmpfs                           7.8G     0  7.8G   0%  /sys/fs/cgroup
/dev/mapper/kvm01--vg-home      243G   61M  231G   1%  /home
/dev/mapper/kvm01--vg-tmp       1.8G  5.6M  1.7G   1%  /tmp
/dev/mapper/kvm01--vg-var       9.2G  302M  8.4G   4%  /var
/dev/sda1                       236M   63M  161M  28%  /boot
tmpfs                           1.6G  4.0K  1.6G   1%  /run/user/115
tmpfs                           1.6G     0  1.6G   0%  /run/user/1000
glusterp1.graywitch.co.nz:/gv0  932G  247G  685G  27%  /isos
also, I can mount the sub-directory fine on the gluster cluster itself,
====...
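As a hedged sketch of the client-side mount being discussed, not taken from the thread: recent GlusterFS releases accept a volume/subdirectory path on the native FUSE client. The server and volume come from the df output above; the subdirectory name is hypothetical.

$ sudo mount -t glusterfs glusterp1.graywitch.co.nz:/gv0/some-subdir /mnt/test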
2017 Sep 14
1
Re: [PATCH v3 4/6] lib: qemu: Allow parallel qemu binaries to be used without cache conflicts.
On Tuesday, 12 September 2017 19:04:22 CEST Richard W.M. Jones wrote:
> Rename the cache files like ‘qemu.stat’ etc so they include the qemu
> binary "key" (ie. size and mtime) in the name. This allows a single
> user to use multiple qemu binaries in parallel without conflicts.
> ---
My concern here is that these files will pile up in the caches of the
various users --
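A hypothetical illustration, not the libguestfs code itself: a size-and-mtime key for a qemu binary can be derived with GNU stat, roughly as the renamed cache files are described. The qemu path is an example.

$ qemu=/usr/bin/qemu-system-x86_64
$ stat -c '%s-%Y' "$qemu"    # prints "<size>-<mtime>", usable as a cache-file key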
2007 Aug 21
0
Software RAID1 or Hardware RAID1 with Asterisk (Vidura Senadeera)
...md5 : active raid1 hdc5[1] hda5[0]
> 38081984 blocks [2/2] [UU]
>
> md6 : active raid1 hdc6[1] hda6[0]
> 38708480 blocks [2/2] [UU]
>
> unused devices: <none>
>
> $ df -h
> Filesystem  Size  Used  Avail  Use%  Mounted on
> /dev/md1    236M   38M   186M   17%  /
> tmpfs       249M     0   249M    0%  /dev/shm
> /dev/md3    1.9G  1.2G   643M   65%  /usr
> /dev/md5     36G   29G   5.3G   85%  /var
> /dev/md6     37G   30G   4.7G   87%  /archive
>
> $ cat /proc/swaps
> Filename...
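A hedged aside, not from the thread: the [2/2] [UU] flags above mean both mirror members are active; a failed half would show as [_U]. Per-array detail is available via mdadm, e.g.:

# mdadm --detail /dev/md5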
2004 Jun 09
1
Samba client filesize problems
...rwxr-xr-x 1 root root  14M Mar 24 08:26 BBC TWO - 2004,03,24 08,26,05.mpg
-rwxr-xr-x 1 root root 788M Mar 23 18:00 Neighbours-NB-BBC ONE - 2004,03,23 17,37,13.mpg
-rwxr-xr-x 1 root root 909M Mar 21 05:31 911COMMS.mpg
-rwxr-xr-x 1 root root 236M Mar 20 13:54 BBC TWO - 2004,03,20 13,46,31.mpg
-rwxr-xr-x 1 root root 1.2G Mar 20 13:46 BBC TWO - 2004,03,20 13,10,54.mpg
-rwxr-xr-x 1 root root 111M Mar 20 13:10 BBC TWO - 2004,03,20 13,06,54.mpg
-rwxr-xr-x 1 root root 396M Mar 20 13:06 BBC TWO - 2004,0...
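A hedged sketch, not from the thread, of narrowing down a client-side size mismatch: compare what the server and an SMB client report for the same file. The share name here is hypothetical.

$ ls -l 911COMMS.mpg                                    # size as the server sees it (909M above)
$ smbclient //server/recordings -c 'ls 911COMMS.mpg'    # size as reported over SMB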
2007 Aug 21
6
Software RAID1 or Hardware RAID1 with Asterisk
Dear All,
I would like to get the community's feedback on RAID1 (software or
hardware) implementations with Asterisk.
This is my setup:
Motherboard with SATA RAID1 support
CentOS 4.4
Asterisk 1.2.19
Libpri/Zaptel latest release
2.8 GHz Intel processor
2 x 80 GB SATA hard disks
256 MB RAM
Digium PRI/E1 card
Following are the concerns I have. I'm planning to put this Asterisk
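As a hedged illustration of the software option being weighed, not from the thread: a Linux RAID1 mirror like the md arrays in the previous post is assembled with mdadm. The device names are examples.

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1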
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB of 2.8TB in use (3 stripes of 950GB or so, each of
which is a RAID5 volume on the Adaptec card). We have snapshots every 4
hours for the first few days. If you add up the snapshot references, the
total appears somewhat high versus daily use (mostly mailboxes, spam,
etc. changing), but say an aggregate of no more than 400+MB a
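A hedged sketch, not from the thread, of making that snapshot accounting visible with the standard listing command:

$ zfs list -t snapshot -o name,used,referenced

Note that a snapshot's USED column counts only space unique to that snapshot, so summing it can understate what the snapshots hold collectively.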
2009 Jun 15
33
compression at zfs filesystem creation
Hi,
I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump?
Thanks,
~~sa
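A hedged sketch, not from the thread, of the property being discussed; the dataset name is an example.

# zfs create -o compression=on rpool/data   # enable compression at creation time
$ zfs get compression rpool/data            # verify the property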