Displaying 8 results from an estimated 8 matches for "262m".
2002 Jun 06
1
Backup problem from ext3 file system
...oj1 is working fine.
Filesystem            Size  Used Avail Use% Mounted on
/dev/md1              8.1G  3.4G  4.3G  44% /
/dev/sda1              48M   14M   31M  30% /boot
/dev/md0               39G   29G  7.7G  79% /proj
/dev/md2               16G   13G  2.4G  84% /proj1
none                  262M     0  262M   0% /dev/shm
Backup Device: HP SURESTORE DAT40
Dump Version: dump-0.4b28-1.i386.rpm
Regards,
Rajeesh Kumar M.P
System Administrator
Aalayance E-Com Services Ltd,
Bangalore - India
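For context, a full backup of an ext3 filesystem with that dump release is typically invoked along these lines (a minimal sketch; the tape device /dev/st0 and the /proj1 target are assumptions, not taken from the original post):
# Level-0 (full) dump of /proj1 to the first SCSI tape drive
# -0 = full backup, -u = record the dump in /etc/dumpdates, -f = output device
dump -0u -f /dev/st0 /proj1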
2009 May 04
2
FW: Oracle 9204 installation on linux x86-64 on ocfs
...                   2.0G   94M  1.8G   5% /var
/dev/mapper/VolGroup01-u01
                      148G   93M  141G   1% /u01
/dev/sdc              600G  1.1G  599G   1% /u02
/dev/sdd              300G  1.1G  299G   1% /u03
/dev/sde              1.0G  274M  751M  27% /u04/quorum
/dev/sdf              1.0G  262M  763M  26% /u05
[root@s602749nj3el19 bin]# cd /u04/quorum/
[root@s602749nj3el19 quorum]# ls -ltr
total 8
drwxrwxr-x 2 oracle orainventory 4096 May 1 21:43 lost+found
drwxrwxr-x 2 oracle orainventory 4096 May 4 05:14 quorum.dbf
[root@s602749nj3el19 quorum]#
Keyur Patel | Principal Se...
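Worth noting in the ls -ltr output above: quorum.dbf carries a directory mode (drwxrwxr-x), while a quorum file would normally be a regular file. A quick way to confirm the type (a sketch using GNU coreutils stat; not from the original thread):
# Prints "directory" or "regular file"
stat -c '%F' /u04/quorum/quorum.dbf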
2008 Jan 23
1
FreeBSD 6.3-Release + squid 2.6.17 = Hang process.
...bpthread.so.2
#3 0x88164680 in fork () from /lib/libpthread.so.2
#4 0x08091091 in ?? ()
#5 0x00000012 in ?? ()
#6 0x00000004 in ?? ()
#7 0x080e6edb in ?? ()
#8 0xbfbfe4a4 in ?? ()
#9 0x00000004 in ?? ()
#10 0x00000000 in ?? ()
netstat -h 8:
118K 0 37M 204K 0 262M 0
121K 0 37M 204K 0 255M 0
124K 0 30M 204K 0 248M 0
116K 0 36M 201K 0 257M 0
117K 0 40M 202K 0 260M 0
120K 0 45M 205K 0...
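The netstat columns above appear to be the usual per-interval traffic summary: input packets/errs/bytes, output packets/errs/bytes, and collisions. A backtrace like the one shown is typically captured by attaching gdb to the hung process (a sketch; the squid path and PID are placeholders):
gdb /usr/local/sbin/squid 1234
(gdb) thread apply all bt
(gdb) detach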
2002 Jun 14
4
Slow response from new Athlon 1.4GHz machine?
...V:...
I now have a second server: an Athlon 1.6GHz CPU with 512MB RAM, a SCSI
drive as the main drive and an IDE drive as auxiliary storage, running a
Red Hat 7.3 installation. The filesystem layout is:
[darryl@cascade darryl]$ df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda6             372M   91M  262M  26% /
/dev/sda1              45M  8.9M   34M  21% /boot
/dev/sda5             703M  222M  445M  34% /home
none                  251M     0  251M   0% /dev/shm
/dev/sda2             1.9G  1.7G  125M  94% /usr
/dev/sda7             251M  116M  122M  49% /var
/dev/hda1              19G  2.2G   16G  1...
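With an IDE drive in the mix, a common first check on Red Hat 7.3-era hardware was whether DMA was enabled on that drive (a sketch, assuming the IDE disk is /dev/hda as in the df output):
hdparm -d /dev/hda    # shows whether using_dma is on
hdparm -tT /dev/hda   # rough cached and buffered read speeds
hdparm -d1 /dev/hda   # enables DMA if it was off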
2005 Apr 02
4
mkbootdisk!
...Mounted on
/dev/md5               17G  1.8G   14G  12% /
/dev/md1               99M  8.4M   86M   9% /boot
none                  126M     0  126M   0% /dev/shm
/dev/md3              9.7G  2.7G  6.5G  30% /home
/dev/md0              981M   26M  905M   3% /usr/local/salva
/dev/md4              4.9G  262M  4.4G   6% /var/log
/dev/md2              981M   24M  908M   3% /var/spool
regards,
Israel
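For reference, mkbootdisk on Red Hat-style systems is given the boot device and the kernel version to copy onto the disk (a minimal sketch; the floppy device is an assumption):
mkbootdisk --device /dev/fd0 `uname -r`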
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub.)
Both zpool iostat and iostat -Xn show lots of idle disk time, no
above-average service times, and no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz CPUs, 32GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
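The scrub rate quoted is the figure zpool status reports while a scrub runs; a typical way to watch it alongside per-vdev and per-device activity (a sketch; the pool name tank is a placeholder):
zpool status tank        # scrub progress and current scan rate
zpool iostat -v tank 5   # per-vdev bandwidth every 5 seconds
iostat -xn 5             # per-device service times and busy percentages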
2006 Aug 24
5
unaccounted for daily growth in ZFS disk space usage
We finally flipped the switch on one of our ZFS-based servers, with
approximately 1TB in use of 2.8TB (3 stripes of 950GB or so, each of
which is a RAID5 volume on the Adaptec card). We have snapshots every
4 hours for the first few days. If you add up the snapshot references
it appears somewhat high versus daily use (mostly mailboxes, spam,
etc. changing), but say an aggregate of no more than 400+MB a
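One way to chase this kind of growth is to list per-snapshot usage directly (a sketch; the dataset name tank/mail is a placeholder). A snapshot's 'used' column counts only blocks unique to that snapshot, so the per-snapshot sum can differ from the true total held by all snapshots together:
zfs list -rt snapshot -o name,used,referenced tank/mail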
2008 May 21
35
DO NOT REPLY [Bug 5478] New: rsync: writefd_unbuffered failed to write 4092 bytes [sender]: Broken pipe (32)
https://bugzilla.samba.org/show_bug.cgi?id=5478
Summary: rsync: writefd_unbuffered failed to write 4092 bytes
[sender]: Broken pipe (32)
Product: rsync
Version: 3.0.3
Platform: Other
URL: https://bugzilla.samba.org/show_bug.cgi?id=1959
OS/Version: Linux
Status: NEW
Severity: normal
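The "Broken pipe" in the summary means the sender's write failed because the receiving side had already exited; a common first diagnostic is rerunning with extra verbosity and a generous I/O timeout (a sketch; the paths are placeholders, not from the bug report):
rsync -avvv --timeout=300 /src/ user@host:/dst/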