Displaying 6 results from an estimated 6 matches for "238m".
2010 Oct 20 · 0 replies · Increased memory usage between 4.8 and 5.5
...inux
PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
25117 nobody 16 0 244m 57m 41m S 0.0 1.5 0:58.23 httpd
21318 nobody 15 0 240m 53m 41m S 0.0 1.3 0:53.25 httpd
18517 nobody 16 0 239m 51m 41m S 0.0 1.3 0:41.97 httpd
10383 nobody 15 0 238m 50m 40m S 0.0 1.3 0:31.07 httpd
29560 nobody 16 0 239m 49m 39m S 0.0 1.3 0:40.39 httpd
32459 nobody 16 0 238m 49m 40m S 0.0 1.3 0:23.07 httpd
8441 nobody 15 0 238m 48m 39m S 0.0 1.2 0:22.87 httpd
1350 nobody 15 0 238m 48m 39m S 0.0 1.2 0:29.83...
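top's per-process RES column makes it easy to eyeball individual workers but awkward to compare totals between versions. A quick way to total resident memory across all httpd workers (a sketch assuming GNU ps/awk; the process name is taken from the COMMAND column above):

```shell
# Sum resident set size (RSS) over every httpd process.
# ps reports RSS in KiB; awk converts the total to MiB.
ps -C httpd -o rss= | awk '{sum += $1} END {printf "%.1f MiB\n", sum/1024}'
```

Note that shared pages (the SHR column) are counted once per process here, so the sum overstates real memory use when workers share libraries.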
2014 Mar 19 · 3 replies · Disk usage incorrectly reported by du
...the source machine.
Here is the du output for one directory exhibiting the problem:
#du -h |grep \/51
201M ./51/msg/8
567M ./51/msg/9
237M ./51/msg/6
279M ./51/msg/0
174M ./51/msg/10
273M ./51/msg/2
341M ./51/msg/7
408M ./51/msg/4
222M ./51/msg/11
174M ./51/msg/5
238M ./51/msg/1
271M ./51/msg/3
3.3G ./51/msg
3.3G ./51
After changing into the directory and running du again, I get different numbers:
#cd 51
du -h
306M ./msg/8
676M ./msg/9
351M ./msg/6
338M ./msg/0
347M ./msg/10
394M ./msg/2
480M ./msg/7
544M ./msg/4
407M ./msg/11
3...
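One classic reason du reports different totals depending on where it is started is hardlinks: within a single invocation, du charges each inode only to the first path it traverses, so files hardlinked across the msg/ subdirectories get attributed differently (or not at all) depending on the starting point. A minimal reproduction, with made-up paths, not the ones from the post:

```shell
# du counts a hardlinked inode only once per invocation, in whichever
# directory it happens to visit first.
mkdir -p demo/a demo/b
dd if=/dev/zero of=demo/a/file bs=1M count=10 2>/dev/null
ln demo/a/file demo/b/file    # second name for the same inode
du -sh demo/a demo/b          # one invocation: demo/b looks nearly empty
( cd demo/b && du -sh . )     # separate invocation: the full 10M reappears
```

This matches the symptom above: totals taken from inside a subdirectory come out larger than the same subdirectory's line in a parent-level run.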
2012 Sep 12 · 1 reply · cyrus2dovecot script converts mailboxes to bigger sizes
...'m using the
cyrus2dovecot script (by Freie Universität Berlin) to convert users'
mailboxes, but I'm wondering why the size of a mailbox in Maildir++
format is so much bigger than the mailbox in cyrus format after conversion:
linux-a9qw:~/ # du -sh /mnt/imap/z/user/zinovik
/srv/vmail/petrsu.ru/z/zinovik/Maildir
238M /mnt/imap/z/user/zinovik
1.2G /srv/vmail/mydom.ru/z/zinovik/Maildir
I was planning to set a quota of about 1 gigabyte per mailbox,
but after conversion I would not be able to receive messages to my own
box, because I'm over quota.
I think the only way would be to set the quota up to 15 GB...
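Part of the growth is expected: Maildir++ stores one file per message, so every small message is rounded up to at least one filesystem block, on top of whatever indexing differences exist between the two formats. GNU du can separate logical bytes from allocated blocks to show how much of the difference is rounding (the path below is a placeholder, not the one from the post):

```shell
# Logical bytes in the mailbox vs. blocks actually allocated on disk.
# With one file per message, per-file block rounding inflates the latter.
du -sh --apparent-size /path/to/Maildir
du -sh /path/to/Maildir
```

If the two numbers are close, the extra space is real message data rather than filesystem overhead.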
2019 Feb 13 · 4 replies · /boot partition running out of space randomly. Please help!
.../dev/shm
tmpfs 2.8G 8.5M 2.8G 1% /run
tmpfs 2.8G 0 2.8G 0% /sys/fs/cgroup
/dev/mapper/VolGroup00-LogVolRoot 30G 19G 12G 63% /
/dev/sda2 594M 594M 0 100% /boot
/dev/sda1 238M 9.7M 229M 5% /boot/efi
/dev/mapper/VolGroup00-LogVolHome 3.3G 415M 2.9G 13% /home
tmpfs 565M 0 565M 0% /run/user/54321
tmpfs 565M 0 565M 0% /run/user/1000
]$ ls -lh /boot
total 92M
-rw-r--r-- 1 root root 179K Dec 12 2...
2019 Feb 13 · 0 replies · /boot partition running out of space randomly. Please help!
...2.8G 8.5M 2.8G 1% /run
> tmpfs 2.8G 0 2.8G 0% /sys/fs/cgroup
> /dev/mapper/VolGroup00-LogVolRoot 30G 19G 12G 63% /
> /dev/sda2 594M 594M 0 100% /boot
> /dev/sda1 238M 9.7M 229M 5% /boot/efi
> /dev/mapper/VolGroup00-LogVolHome 3.3G 415M 2.9G 13% /home
> tmpfs 565M 0 565M 0% /run/user/54321
> tmpfs 565M 0 565M 0% /run/user/1000
>
> ]$ ls -lh /boot
> total 92M
> -rw...
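A /boot partition filling up on an RPM-based system is rarely random; accumulated kernel images from past updates are the usual cause. A quick way to see what is consuming the partition and how many kernels are installed (a sketch assuming GNU coreutils and an RPM-based distro, which the VolGroup00 naming above suggests):

```shell
# Largest entries in /boot, sorted human-readably; old vmlinuz/initramfs
# images from superseded kernels are the usual culprits.
du -xsh /boot/* 2>/dev/null | sort -h | tail -n 10
# List every installed kernel package (RPM-based systems only).
rpm -q kernel
```

On CentOS/RHEL 7, `package-cleanup --oldkernels --count=2` (from yum-utils) removes all but the newest two kernels, and setting `installonly_limit=2` in /etc/yum.conf keeps future updates from piling up again.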
2002 Feb 28 · 5 replies · Problems with ext3 fs
...hdc2[1] hda2[0]
59978304 blocks level 5, 32k chunk, algorithm 2 [4/4] [UUUU]
md11 : active raid1 hdk4[1] hde4[0]
170240 blocks [2/2] [UU]
Now, the filesystems are set up as shown:
jlm@nijinsky:~$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/md5 939M 238M 653M 27% /
/dev/md0 91M 23M 63M 27% /boot
/dev/md6 277M 8.1M 254M 4% /tmp
/dev/md7 1.8G 1.5G 360M 81% /usr
/dev/md8 939M 398M 541M 43% /var
/dev/md9 9.2G 5.1G 3.6G 59% /home
/dev/md10 11G 1.7G 9.1G...