search for: 254m

Displaying 10 results from an estimated 10 matches for "254m".

2001 Nov 11
2
Software RAID and ext3 problem
.... I'm running 2.4.13 with the appropriate ext3 patch and a software RAID array with partitions as shown below:
Filesystem   Size  Used Avail Use% Mounted on
/dev/md5     939M  237M  654M  27% /
/dev/md0      91M   22M   65M  25% /boot
/dev/md6     277M  8.1M  254M   4% /tmp
/dev/md7     1.8G  1.3G  595M  69% /usr
/dev/md8     938M  761M  177M  82% /var
/dev/md9     9.2G  2.6G  6.1G  30% /home
/dev/md10     11G  2.1G  8.7G  19% /scratch
/dev/md12     56G   43G   13G  77% /global
The /usr and /var filesystems keep...
2006 Apr 28
4
ZFS RAID-Z for Two-Disk Workstation Setup?
After reading the ZFS docs it does appear that RAID-Z can be used on a two-disk system, and I was wondering if the system would basically work as Intel's Matrix RAID for two disks?
Intel Matrix RAID info:
http://www.intel.com/design/chipsets/matrixstorage_sb.htm
http://techreport.com/reviews/2005q1/matrix-raid/index.x?pg=1
My focus with this thread is some
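For comparison, a minimal sketch of the two usual two-disk ZFS layouts, assuming a pool named tank and two whole disks c1t0d0 and c1t1d0 (all hypothetical names). A raidz vdev can be built from two devices, but each stripe then holds one data copy plus parity, so usable space comes out roughly the same as a mirror and a plain mirror is the more common choice:

  # two-way mirror: about one disk of usable space, survives one disk failure
  zpool create tank mirror c1t0d0 c1t1d0

  # two-device raidz: also roughly one disk of usable space
  zpool create tank raidz c1t0d0 c1t1d0

  zpool status tank   # shows the vdev layout that was created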
2001 Nov 11
0
(no subject)
.... I'm running 2.4.13 with the appropriate ext3 patch and a software RAID array with partitions as shown below:
Filesystem   Size  Used Avail Use% Mounted on
/dev/md5     939M  237M  654M  27% /
/dev/md0      91M   22M   65M  25% /boot
/dev/md6     277M  8.1M  254M   4% /tmp
/dev/md7     1.8G  1.3G  595M  69% /usr
/dev/md8     938M  761M  177M  82% /var
/dev/md9     9.2G  2.6G  6.1G  30% /home
/dev/md10     11G  2.1G  8.7G  19% /scratch
/dev/md12     56G   43G   13G  77% /global
The /usr and /var filesystems keep...
2013 Feb 22
2
High CPU Usage with 2.2
I am seeing rather high CPU usage with 2.2 now:

last pid: 30725;  load averages:  4.58, 27.36, 25.49   up 0+15:36:12  16:02:53
103 processes: 4 running, 98 sleeping, 1 zombie
CPU: 34.5% user, 0.0% nice, 65.4% system, 0.2% interrupt, 0.0% idle
Mem: 602M Active, 1767M Inact, 254M Wired, 6116K Cache, 112M Buf, 490M Free
Swap: 5900M Total, 5900M Free

  PID USERNAME THR PRI NICE   SIZE    RES STATE  C   TIME   WCPU COMMAND
30569 mailnull   1 112    0  4956K  2784K RUN    0   1:27 68.65% pop3
30704 mailnull   1 108    0  4956K  2876K CPU0   0   0:10 57.37%...
2017 Jun 28
2
setting gfid on .trashcan/... failed - total outage
...17:49 /var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
Host : gl-master-02
-rw-r----- 1 root root 226M Jun 23 17:49 /var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
Host : gl-master-03
-rw-r----- 1 root root 254M Jun 23 16:35 /var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
Host : gl-master-04
-rw-r----- 1 root root 239M Jun 23 16:35 /var/crash/_usr_sbin_glusterfsd.0.crash
-----------------------------------------------------
--
Dietmar Putz
3Q GmbH
Wetzlare...
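Those /var/crash/_usr_sbin_glusterfsd.0.crash files are apport crash reports, so one hedged way to get a backtrace out of them (assuming apport-unpack and gdb are installed and matching glusterfs debug symbols are available) is:

  apport-unpack /var/crash/_usr_sbin_glusterfsd.0.crash /tmp/glusterfsd-crash
  gdb /usr/sbin/glusterfsd /tmp/glusterfsd-crash/CoreDump
  (gdb) bt full      # backtrace with local variables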
2017 Jun 29
0
setting gfid on .trashcan/... failed - total outage
...fsd.0.crash
> -----------------------------------------------------
> Host : gl-master-02
> -rw-r----- 1 root root 226M Jun 23 17:49
> /var/crash/_usr_sbin_glusterfsd.0.crash
> -----------------------------------------------------
> Host : gl-master-03
> -rw-r----- 1 root root 254M Jun 23 16:35
> /var/crash/_usr_sbin_glusterfsd.0.crash
> -----------------------------------------------------
> Host : gl-master-04
> -rw-r----- 1 root root 239M Jun 23 16:35
> /var/crash/_usr_sbin_glusterfsd.0.crash
> -----------------------------------------------------
If t...
2017 Jun 29
1
setting gfid on .trashcan/... failed - total outage
...-----------------------------------------------
>> Host : gl-master-02
>> -rw-r----- 1 root root 226M Jun 23 17:49
>> /var/crash/_usr_sbin_glusterfsd.0.crash
>> -----------------------------------------------------
>> Host : gl-master-03
>> -rw-r----- 1 root root 254M Jun 23 16:35
>> /var/crash/_usr_sbin_glusterfsd.0.crash
>> -----------------------------------------------------
>> Host : gl-master-04
>> -rw-r----- 1 root root 239M Jun 23 16:35
>> /var/crash/_usr_sbin_glusterfsd.0.crash
>> -------------------------------------...
2012 Jan 15
0
[CENTOS6] mtrr_cleanup: can not find optimal value - during server startup
...ize: 4M          chunk_size: 64M   num_reg: 10  lose cover RAM: 2M
gran_size: 4M       chunk_size: 128M  num_reg: 10  lose cover RAM: 2M
gran_size: 4M       chunk_size: 256M  num_reg: 10  lose cover RAM: 2M
*BAD*gran_size: 4M  chunk_size: 512M  num_reg: 10  lose cover RAM: -254M
gran_size: 4M       chunk_size: 1G    num_reg: 10  lose cover RAM: 2M
*BAD*gran_size: 4M  chunk_size: 2G    num_reg: 10  lose cover RAM: -1022M
gran_size: 8M       chunk_size: 8M    num_reg: 10  lose cover RAM: 126M
gran_size: 8M       chunk_size: 16M   num_reg: 10  lose cover RAM: 6M
gran_si...
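The table above is the kernel's MTRR sanitizer trying gran_size/chunk_size combinations; when it reports "can not find optimal value" it expects a pair to be supplied by hand via the mtrr_gran_size and mtrr_chunk_size boot parameters documented in kernel-parameters.txt. A sketch, assuming the 8M/16M row above (lose cover RAM: 6M) is acceptable on this particular machine rather than a general recommendation:

  # appended to the kernel line in /boot/grub/grub.conf on CentOS 6
  mtrr_gran_size=8M mtrr_chunk_size=16M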
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with approximately 1TB of 2.8TB in use (3 stripes of 950GB or so, each of which is a RAID5 volume on the Adaptec card). We have snapshots every 4 hours for the first few days. If you add up the snapshot references it appears somewhat high versus daily use (mostly mailboxes, spam, etc. changing), but say an aggregate of no more than 400+MB a
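A snapshot's USED column only counts blocks unique to that snapshot, while REFER counts all data it can see, so per-snapshot numbers are hard to add up by hand. A minimal sketch for inspecting the accounting, assuming a filesystem named tank/mail (hypothetical name):

  zfs list -t snapshot -o name,used,referenced
  # 'used' = space unique to that snapshot; 'referenced' = data it references.
  # Later ZFS releases also expose an aggregate figure via:
  zfs get usedbysnapshots tank/mail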
2002 Feb 28
5
Problems with ext3 fs
...1 hdk4[1] hde4[0] 170240 blocks [2/2] [UU]
Now, the filesystems are set up as shown:
jlm@nijinsky:~$ df -h
Filesystem   Size  Used Avail Use% Mounted on
/dev/md5     939M  238M  653M  27% /
/dev/md0      91M   23M   63M  27% /boot
/dev/md6     277M  8.1M  254M   4% /tmp
/dev/md7     1.8G  1.5G  360M  81% /usr
/dev/md8     939M  398M  541M  43% /var
/dev/md9     9.2G  5.1G  3.6G  59% /home
/dev/md10     11G  1.7G  9.1G  16% /scratch
/dev/md12     56G   49G  7.7G  87% /global
with /etc/fstab as follows:
jlm@ni...