search for: 186m

Displaying 9 results from an estimated 9 matches for "186m".

2005 Oct 04 - Samba process (0 replies)
Hi All, Using the top command, the samba process shows the entries below:
  10093 bvuat   13 58 0  186M  129M sleep    2:49 2.76% bvsmgr
   8611 root     1 58 0 7696K 6080K cpu/3  574:56 0.85% smbd
  10077 bvuat   13 59 0  186M  129M sleep    2:53 0.57% bvsmgr
   1355 oracle   1 54 0  596M  550M sleep    0:18 0.30% oracle
  25577 bvuat    1 58 0 2720K 1736K cpu/1    0:02...
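For a quick look at just the smbd processes and their memory footprint outside of top, a ps one-liner can help; a minimal sketch, assuming a Linux-style procps ps (the output above looks like Solaris top, where prstat would be the closer equivalent):

  # List all smbd processes with resident (RSS) and virtual (VSZ) memory in KB
  ps -C smbd -o pid,user,rss,vsz,etime,args

  # Rough total resident memory used by smbd, in MB
  ps -C smbd -o rss= | awk '{sum += $1} END {printf "%.0f MB\n", sum/1024}'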
2011 Jul 15 - Strange Behavior using FUSE client (1 reply)
...ci.harvard.edu:/mnt/data02
Brick1: cortex-dr-001.dfci.harvard.edu:/mnt/data03
Brick1: cortex-dr-002.dfci.harvard.edu:/mnt/data03
A df -h on the cortex-dr-001:
Filesystem  Size  Used Avail Use% Mounted on
/dev/md2     66G  4.6G   58G   8% /
/dev/md0     71G  186M   68G   1% /opt
tmpfs       5.9G     0  5.9G   0% /dev/shm
/dev/sde1    17T   83G   17T   1% /mnt/data03
/dev/sdd1    39T  182G   38T   1% /mnt/data02
/dev/sdc1    39T   81G   39T   1% /mnt/data01
A df -h from the client:
Filesystem  Siz...
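When brick-side and client-side df figures are being compared like this, it can help to dump the volume layout and then check each mount explicitly; a minimal sketch, assuming the brick paths shown above and a hypothetical FUSE mount point on the client:

  # Show the brick list and volume type as the servers see it
  gluster volume info

  # Compare free space on a brick filesystem vs. the FUSE mount on the client
  df -h /mnt/data03        # on the brick server
  df -h /mnt/glusterfs     # on the client (hypothetical mount point)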
2016 Aug 11 - Software RAID and GRUB on CentOS 7 (5 replies)
Hi, When I perform a software RAID 1 or RAID 5 installation on a LAN server with several hard disks, I wonder if GRUB already gets installed on each individual MBR, or if I have to do that manually. On CentOS 5.x and 6.x, this had to be done like this:
# grub
grub> device (hd0) /dev/sda
grub> device (hd1) /dev/sdb
grub> root (hd0,0)
grub> setup (hd0)
grub> root (hd1,0)
grub>
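On CentOS 7 the interactive grub shell above is replaced by grub2-install, so putting the boot loader on every RAID member is usually a matter of running it once per disk; a minimal sketch, assuming a BIOS/MBR setup with /dev/sda and /dev/sdb as the two RAID 1 members:

  # Install GRUB2 into the MBR of each disk so the box still boots if one fails
  grub2-install /dev/sda
  grub2-install /dev/sdb

  # Regenerate the configuration (shared by both installs)
  grub2-mkconfig -o /boot/grub2/grub.cfg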
2016 Aug 11 - Software RAID and GRUB on CentOS 7 (0 replies)
...o, can't remember where, sorry. $0.02, no more, no less ....
[root@Q6600:/etc, Thu Aug 11, 08:25 AM] 1018 # df -h
Filesystem      Type   Size  Used Avail Use% Mounted on
/dev/md1        ext4   917G  8.0G  863G   1% /
tmpfs           tmpfs  4.0G     0  4.0G   0% /dev/shm
/dev/md0        ext4   186M   60M  117M  34% /boot
/dev/md3        ext4   1.8T  1.4T  333G  81% /home
[root@Q6600:/etc, Thu Aug 11, 08:26 AM] 1019 # uname -a
Linux Q6600 2.6.35.14-106.fc14.x86_64 #1 SMP Wed Nov 23 13:07:52 UTC 2011 x86_64 x86_64 x86_64 GNU/Linux
[root@Q6600:/etc, Thu Aug 11, 08:26 AM] 1020 #
-- Wil...
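To verify which disks actually carry a boot loader in their MBR, the first sector can be inspected directly; a minimal sketch, assuming /dev/sda and /dev/sdb are the disks in question:

  # Look for the GRUB signature in the first 512 bytes of each disk
  for d in /dev/sda /dev/sdb; do
      echo "== $d =="
      dd if="$d" bs=512 count=1 2>/dev/null | strings | grep -i grub || echo "no GRUB string found"
  done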
2013 Oct 24 - failed: Message has been copied too many times (1 reply)
Hello, I'm running dovecot 2.1.16 on an Ubuntu 12.04 server, with lazy_expunge, SiS and the mdbox format. The problem I'm having is that the index for one of the mailboxes of one of my users is growing too much. This is not the first time this problem has occurred. In previous cases, it was because a message was duplicated thousands of times (I haven't found any reason for this). In these cases,
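When a single mailbox index keeps growing like this, doveadm can report the current message count and rebuild the index from the mdbox storage; a minimal sketch, assuming a hypothetical user jdoe@example.com and the INBOX mailbox:

  # Show how many messages and how much space the mailbox currently reports
  doveadm mailbox status -u jdoe@example.com "messages vsize" INBOX

  # Rebuild the index/metadata for that mailbox from the underlying storage
  doveadm force-resync -u jdoe@example.com INBOX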
2007 Aug 21 - Saftware RAID1 or Hardware RAID1 with Asterisk (Vidura Senadeera) (0 replies)
...raid1 hdc5[1] hda5[0]
> 38081984 blocks [2/2] [UU]
>
> md6 : active raid1 hdc6[1] hda6[0]
> 38708480 blocks [2/2] [UU]
>
> unused devices: <none>
>
> $ df -h
> Filesystem   Size  Used Avail Use% Mounted on
> /dev/md1     236M   38M  186M  17% /
> tmpfs        249M     0  249M   0% /dev/shm
> /dev/md3     1.9G  1.2G  643M  65% /usr
> /dev/md5      36G   29G  5.3G  85% /var
> /dev/md6      37G   30G  4.7G  87% /archive
>
> $ cat /proc/swaps
> Filename...
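Beyond /proc/mdstat, mdadm reports per-array detail that makes a degraded mirror easy to spot; a minimal sketch, assuming the md devices shown above:

  # Summary of all software RAID arrays
  cat /proc/mdstat

  # Per-array state, member disks and any failed devices
  mdadm --detail /dev/md1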
2013 Jun 13 - puppet: 3.1.1 -> 3.2.1 load increase (4 replies)
Hi, I recently updated from puppet 3.1.1 to 3.2.1 and noticed quite a bit of increased load on the puppetmaster machine. I'm using the Apache/passenger/rack way of puppetmastering. Main symptom is: higher load on the puppetmaster machine (8 cores):
- 3.1.1: around 4
- 3.2.1: around 9-10
Any idea why there's more load on the machine with 3.2.1?
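One way to narrow down a load jump like this is to compare catalog compile times before and after the upgrade, since the master logs one line per compiled catalog; a minimal sketch, assuming the master logs via syslog to /var/log/messages:

  # Average catalog compile time reported by the puppet master
  grep 'Compiled catalog' /var/log/messages \
    | grep -o 'in [0-9.]* seconds' \
    | awk '{sum += $2; n++} END {if (n) printf "%d catalogs, %.2f s average\n", n, sum/n}'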
2007 Aug 21 - Saftware RAID1 or Hardware RAID1 with Asterisk (6 replies)
Dear All, I would like to get the community's feedback with regard to RAID1 (software or hardware) implementations with asterisk. This is my setup:
- Motherboard with SATA RAID1 support
- CentOS 4.4
- Asterisk 1.2.19
- Libpri/zaptel latest release
- 2.8 GHz Intel processor
- 2 x 80 GB SATA hard disks
- 256 MB RAM
- Digium PRI/E1 card
Following are the concerns I am having. I'm planning to put this asterisk
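For the software option on a setup like this, the mirror is normally created with mdadm and then monitored the same way; a minimal sketch, assuming the two SATA disks appear as /dev/sda and /dev/sdb with matching partition layouts:

  # Create a RAID 1 mirror from the first partition of each disk
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1

  # Persist the array definition and watch the initial sync
  mdadm --detail --scan >> /etc/mdadm.conf
  cat /proc/mdstat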
2013 Jan 26 - Write failure on distributed volume with free space available (4 replies)
...pace left on device
Filesystem           Size  Used Avail Use% Mounted on
192.168.192.5:/test  291M  170M  121M  59% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.0842241 s, 199 MB/s
Filesystem           Size  Used Avail Use% Mounted on
192.168.192.5:/test  291M  186M  105M  64% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.102602 s, 164 MB/s
Filesystem           Size  Used Avail Use% Mounted on
192.168.192.5:/test  291M  202M   89M  70% /mnt/gluster1
dd: opening `16_15': No space left on device
Filesystem Size U...
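On a distributed volume each file is hashed to a single brick, so a write can fail once that one brick fills up even though the volume as a whole still shows free space; checking per-brick usage rather than the volume total makes this visible. A minimal sketch, assuming the volume is named test as above and the brick paths are known:

  # Per-brick disk usage as reported by gluster
  gluster volume status test detail

  # Or check each brick filesystem directly on its server
  df -h /path/to/brick    # run on every brick host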