search for: 232m

Displaying 5 results from an estimated 5 matches for "232m".

2007 May 09
1
dsl iso and xen
...a2,w''] root = "/dev/sda2 ro" <configuration file>

The filesystem on the Xen server is shown below.

da10:/etc/xen/vm # df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             9.6G  4.2G  5.4G  45% /
udev                  232M  164K  231M   1% /dev

And the DSL ISO is sitting in /tmp (on sda2):

da10:/etc/xen/vm # ls -l /tmp/dsl-3.3.iso
-rw-r--r-- 1 root root 52056064 May  8 2007 /tmp/dsl-3.3.iso

I'll then start up the domain with:

xm create /etc/xen/vm/dsl -c

The domain starts up, but I get a kernel...
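
For booting a live-CD ISO like DSL, one approach is an HVM guest that attaches the ISO as a virtual CD-ROM instead of pointing root at a physical partition. A minimal sketch, assuming Xen 3.x xm-style configs; the memory size and helper paths are illustrative, not taken from the thread:

# /etc/xen/vm/dsl -- hypothetical HVM config that boots the live ISO
kernel       = "/usr/lib/xen/boot/hvmloader"
builder      = "hvm"
device_model = "/usr/lib/xen/bin/qemu-dm"
name         = "dsl"
memory       = 256
disk         = [ 'file:/tmp/dsl-3.3.iso,hdc:cdrom,r' ]
boot         = "d"    # boot from the CD device
vnc          = 1

The domain would then be started exactly as in the post: xm create /etc/xen/vm/dsl -c
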
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
...fresh-timeout: 32
cluster.min-free-disk: 200GB
network.ping-timeout: 5
performance.io-thread-count: 64
performance.cache-size: 8GB
performance.readdir-ahead: on
features.trash: off
features.trash-max-filesize: 1GB

[ 11:31:56 ] - root at gl-master-03 ~ $

Host : gl-master-01
-rw-r----- 1 root root 232M Jun 23 17:49 /var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
Host : gl-master-02
-rw-r----- 1 root root 226M Jun 23 17:49 /var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
Host : gl-master-03
-rw-r----- 1 ro...
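
For reference, the trash translator is toggled per volume from the gluster CLI. A sketch, using the volume name mvol1 mentioned in the follow-up reply; adjust for your setup ("volume get" needs a reasonably recent GlusterFS release):

gluster volume set mvol1 features.trash off   # disable the trash feature
gluster volume get mvol1 features.trash       # confirm the effective value
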
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
...ead: on
> features.trash: off

mvol1 has the trash feature disabled, so you should not be seeing the above-mentioned errors in the brick logs any further.

> features.trash-max-filesize: 1GB
> [ 11:31:56 ] - root at gl-master-03 ~ $
>
>
> Host : gl-master-01
> -rw-r----- 1 root root 232M Jun 23 17:49
> /var/crash/_usr_sbin_glusterfsd.0.crash
> -----------------------------------------------------
> Host : gl-master-02
> -rw-r----- 1 root root 226M Jun 23 17:49
> /var/crash/_usr_sbin_glusterfsd.0.crash
> -----------------------------------------------------
>...
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
...ntioned errors in
> brick logs further.

Yes, right after the second outage we decided to disable the trash feature...

>> features.trash-max-filesize: 1GB
>> [ 11:31:56 ] - root at gl-master-03 ~ $
>>
>>
>> Host : gl-master-01
>> -rw-r----- 1 root root 232M Jun 23 17:49
>> /var/crash/_usr_sbin_glusterfsd.0.crash
>> -----------------------------------------------------
>> Host : gl-master-02
>> -rw-r----- 1 root root 226M Jun 23 17:49
>> /var/crash/_usr_sbin_glusterfsd.0.crash
>> -------------------------------------...
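
The /var/crash/*.crash files listed in these messages follow apport's naming scheme (the binary's path with "/" replaced by "_"), so on an Ubuntu-style host a backtrace can usually be recovered from one of them; the target directory below is arbitrary:

apport-unpack /var/crash/_usr_sbin_glusterfsd.0.crash /tmp/glusterfsd-crash
gdb /usr/sbin/glusterfsd /tmp/glusterfsd-crash/CoreDump
(gdb) bt        # backtrace of the crashed brick process
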
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB in use (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We take snapshots every 4 hours for the first few days. If you add up the snapshot references it appears somewhat high versus daily use (mostly mailboxes, spam, etc. changing), but say an aggregate of no more than 400+MB a
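
One detail that often explains this kind of discrepancy: a snapshot's USED column only counts blocks unique to that snapshot, so summing the per-snapshot figures understates what the snapshots hold collectively. Listing them at least shows where the unique space sits; the pool and dataset names here are placeholders:

zfs list -r -t snapshot -o name,used,referenced tank
zfs get usedbysnapshots tank/mail    # newer ZFS releases only; this property postdates the 2006 post
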