search for: 107m

Displaying 13 results from an estimated 13 matches for "107m".

2010 Jan 07
2
Random directory/files gets unavailable after sometime
Hello, I am using glusterfs v3.0.0 and having some problems with random directories/files. They work fine for some time (hours) and then suddenly become unavailable:

# ls -lh
ls: cannot access MyDir: No such file or directory
total 107M
d????????? ? ? ? ? ? MyDir
( long dir list, intentionally hidden )

In the logs I get a lot of messages like these:
[2010-01-07 13:36:16] W [fuse-bridge.c:793:fuse_getattr] glusterfs-fuse: 270708: GETATTR 3057375160 (fuse_loc_fill() failed)
[2010-01-07 13:36:16] W [fus...
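A stale FUSE mount that shows "?" for every attribute, as above, is usually cleared by lazily unmounting and remounting the client; a minimal sketch, assuming a hypothetical mount point /mnt/gluster with a matching fstab entry:

  # lazy unmount of the stale FUSE mount (hypothetical mount point)
  umount -l /mnt/gluster
  # remount it from its fstab entry and verify attributes resolve again
  mount /mnt/gluster
  ls -lh /mnt/gluster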
2009 Dec 29
2
ext3 partition size
...ext3 (rw,relatime)

$ df -hT
Filesystem     Type   Size  Used Avail Use% Mounted on
/dev/sdb2      ext3    30G  1.1G   28G   4% /
/dev/sdb7      ext3    20G  1.3G   18G   7% /var
/dev/sdb6      ext3    30G   12G   17G  43% /usr
/dev/sdb5      ext3    40G   25G   13G  67% /home
/dev/sdb1      ext3   107M   52M   50M  52% /boot
*/dev/sdb8     ext3   111G   79G   27G  76% /srv/multimedia*
tmpfs          tmpfs  2.9G   35M  2.9G   2% /dev/shm

Parted info:
(parted) select /dev/sdb
Using /dev/sdb
(parted) print
Model: ATA ST3500630AS (scsi)
Disk /dev/sdb: 500GB
Sector size (logical/physical): 512B/512...
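When df and parted appear to disagree about a partition's size, the filesystem's own block count can be compared against the size of the block device; a minimal sketch, using /dev/sdb8 from the output above:

  # partition size as the kernel sees it, in bytes
  blockdev --getsize64 /dev/sdb8
  # ext3 filesystem size: Block count x Block size, in bytes
  tune2fs -l /dev/sdb8 | egrep 'Block count|Block size'

If the filesystem turns out smaller than the partition, resize2fs can grow it to fill the available space.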
2015 Apr 29
2
nfs (or tcp or scheduler) changes between centos 5 and 6?
>> ...nalysis/simulation jobs that constantly read data off
>> the NAS.
>
> <snip>
> *IF* I understand you, I've got one question: what parms are you using to
> mount the storage? We had *real* performance problems when we went from 5
> to 6 - as in, unzipping a 26M file to 107M, while writing to an
> NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
> answer was that once we mounted the NFS filesystem with nobarrier in fstab
> instead of default, the time dropped to 35 or 40 sec again.
>
> barrier is in 6, and tries to make writes ato...
2015 Apr 29
5
nfs (or tcp or scheduler) changes between centos 5 and 6?
We have a "compute cluster" of about 100 machines that do a read-only NFS mount to a big NAS filer (a NetApp FAS6280). The jobs running on these boxes are analysis/simulation jobs that constantly read data off the NAS. We recently upgraded all these machines from CentOS 5.7 to CentOS 6.5. We did a "piecemeal" upgrade, usually upgrading five or so machines at a time, every few
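When NFS behaviour changes between OS releases like this, the usual first step is to compare the mount options each client actually negotiated; a minimal sketch, assuming a hypothetical mount point /mnt/nas:

  # options actually negotiated for each NFS mount (vers, proto, rsize, wsize, ...)
  nfsstat -m
  # or read them straight from the kernel for one mount point
  grep /mnt/nas /proc/mounts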
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
> ...results in net performance loss because it creates a bottleneck on our
> centralized storage.
<snip>

*IF* I understand you, I've got one question: what parms are you using to mount the storage? We had *real* performance problems when we went from 5 to 6 - as in, unzipping a 26M file to 107M, while writing to an NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final answer was that once we mounted the NFS filesystem with nobarrier in fstab instead of default, the time dropped to 35 or 40 sec again.

barrier is in 6, and tries to make writes atomic transactions; its int...
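The fix described here hinges on the nobarrier mount option; nobarrier is defined for local journalling filesystems such as ext4 and xfs, so a minimal sketch of an fstab entry using it, with a purely hypothetical device and mount point:

  # /etc/fstab -- hypothetical entry; "nobarrier" disables write barriers for this filesystem
  /dev/sdb1   /export/data   ext4   defaults,nobarrier   0  2

Disabling barriers trades crash safety for write speed, which is consistent with the 7 min vs. 35-40 sec difference reported above.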
2015 Apr 29
0
nfs (or tcp or scheduler) changes between centos 5 and 6?
>>> ...that constantly read data off
>>> the NAS.
>>
>> <snip>
>> *IF* I understand you, I've got one question: what parms are you using
>> to mount the storage? We had *real* performance problems when we went from
>> 5 to 6 - as in, unzipping a 26M file to 107M, while writing to an
>> NFS-mounted drive, went from 30 sec or so to a *timed* 7 min. The final
>> answer was that once we mounted the NFS filesystem with nobarrier in
>> fstab instead of default, the time dropped to 35 or 40 sec again.
>>
>> barrier is in 6, and tries...
2008 Jul 05
2
Question on number of processes engendered by BDRb
...s an extract of the output from my top command. The PIDs are 7084, 7085 and 7083.
============================================
  PID  USER    PR  NI  VIRT  RES  SHR S %CPU %MEM     TIME+  COMMAND
10961  raghus  15   0  193m 110m 3508 S    0 10.8   0:52.87  mongrel_rails
10971  raghus  15   0  188m 107m 3440 S    0 10.5   0:50.61  mongrel_rails
11013  raghus  15   0  179m 103m 3348 S    0 10.1   0:45.18  mongrel_rails
* 7084  raghus  15   0  152m  73m 2036 S   11  7.2 116:31.68  packet_worker_r*
11129  raghus  15   0  134m  58m 3336 S    0  5.7   0:05.20  mongrel_rails
* 7085  raghus  15   0  131m...
2002 Jun 18
0
error writing 4 unbuffered bytes - exiting
...l Use% Mounted on
/dev/sda6   372M  338M   15M  96% /
/dev/sda1    45M  8.9M   34M  21% /boot
/dev/sda5   703M  218M  449M  33% /home
none        251M     0  251M   0% /dev/shm
/dev/sda2   1.9G  1.7G  101M  95% /usr
/dev/sda7   251M  130M  107M  55% /var
/dev/hda1    19G  2.3G   16G  13% /usr/local/backup
/dev/hda2    17G  2.6G   14G  16% /opt/archive

I am trying to maintain a rsync'ed copy of the data from most of the filesystems onto /opt/archive. (in fact the complete command I am using is);
rsync -vaW --exc...
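A sketch of the kind of mirroring command being described - the exclude patterns here are hypothetical stand-ins, since the original command is truncated above:

  # mirror the root filesystem into /opt/archive, skipping the backup/archive trees themselves
  rsync -vaW --exclude=/usr/local/backup --exclude=/opt/archive / /opt/archive

-a preserves permissions, ownership and timestamps, and -W copies whole files instead of using the delta algorithm, which is usually faster on local disks.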
2011 Jul 07
4
Question on memory usage, garbage collector, 'top' stats on linux
...y
 6208 webappus  20   0  179m  59m 4788 S  0.0  1.5   0:07.50 ruby
 6295 postgres  20   0  102m  32m  28m S  0.0  0.8  17:54.62 postgres
 1034 postgres  20   0 98.7m  26m  25m S  0.0  0.7   0:23.67 postgres
  843 mysql     20   0  174m  26m 6648 S  0.0  0.7   0:31.82 mysqld
 6222 postgres  20   0  107m  19m  11m S  0.0  0.5   0:00.61 postgres
 6158 root      20   0 42668 8684 2344 S  0.0  0.2   0:02.48 ruby
  907 postgres  20   0 98.6m 6680 5528 S  0.0  0.2   0:13.14 postgres
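When top numbers like these raise questions about memory use, the RES column can be cross-checked against what the kernel itself reports for a process; a minimal sketch, using the first ruby PID (6208) from the output above:

  # resident and virtual size for one process, straight from the kernel
  grep -E 'VmRSS|VmSize' /proc/6208/status
  # RSS for every process owned by that user, largest first
  ps -o pid,rss,vsz,comm -u webappus --sort=-rss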
2012 Jul 02
7
puppetmasterd continuously consuming high CPU, with many interrupts
...y,  0.0%ni,  6.6%id, 39.7%wa,  0.0%hi,  2.0%si,  0.0%st
Mem:   8128776k total,  6830276k used,  1298500k free,   381356k buffers
Swap:  8388604k total,   555328k used,  7833276k free,   180096k cached

  PID USER    PR  NI  VIRT  RES  SHR S  %CPU %MEM     TIME+  COMMAND
10660 puppet  20   0  214m 107m 4040 S  61.9  1.3   8:46.81  puppetmasterd
    3 root    20   0     0    0    0 S  21.4  0.0 320:38.54  ksoftirqd/0
   10 root    20   0     0    0    0 R  20.2  0.0 549:30.88  ksoftirqd/1
10296 qemu    20   0 2470m 1.4g 8888 S  13.1 18.1   4:23.70  qemu-kvm
17334 qemu    20   0 2788m 1.7g  540 S...
2013 Jun 13
4
puppet: 3.1.1 -> 3.2.1 load increase
Hi, I recently updated from puppet 3.1.1 to 3.2.1 and noticed quite a bit of increased load on the puppetmaster machine. I'm using the Apache/passenger/rack way of puppetmastering. Main symptom is: higher load on puppetmaster machine (8 cores):
- 3.1.1: around 4
- 3.2.1: around 9-10
Any idea why there's more load on the machine with 3.2.1?
2007 Sep 06
6
Build your own "appliance" concept
I've been working on this the past few days and thought I would put it out there to see if anyone else has interest in it. It really has nothing to do with the Digium appliance; I've just been looking for some mass-produced solid-state hardware to run small branch offices off of for a while now, and I think I've finally landed on something I like. Basically I've taken an HP thin
2012 Nov 13
1
thread taskq / unp_gc() using 100% cpu and stalling unix socket IPC
...ser,  0.0% nice, 57.1% system,  0.0% interrupt, 32.3% idle
CPU 22:  5.9% user,  0.0% nice, 58.8% system,  0.0% interrupt, 35.3% idle
CPU 23:  6.3% user,  0.0% nice, 59.6% system,  0.0% interrupt, 34.1% idle
Mem: 3551M Active, 1351M Inact, 2905M Wired, 8K Cache, 7488K Buf, 85G Free
ARC: 1073M Total, 107M MRU, 828M MFU, 784K Anon, 7647K Header, 130M Other
Swap: 8192M Total, 8192M Free

  PID USERNAME  THR PRI NICE   SIZE    RES STATE   C    TIME    WCPU COMMAND
   11 root       24 155 ki31     0K   384K CPU23  23  431.4H 847.95% idle
    0 root      248  -8    0     0K  3968K -       1...