Displaying 11 results from an estimated 11 matches for "144m".
2012 Dec 27
4
Samba vs. Firewall and/or SELinux
...comment = files can be copied in here
path = /data/public
read only = No
create mask = 0777
guest only = Yes
guest ok = Yes
sh-4.1# cat /etc/samba/smbusers
# Unix_name = SMB_name1 SMB_name2 ...
root = administrator admin
nobody = guest pcguest smbguest
sh-4.1# ls -lisah /data/public
total 144M
1703938 12K drwxrwxrwx. 4 nobody users 12K Dec 27 13:39 .
1703937 4.0K drwxr-xr-x. 3 root root 4.0K Dec 22 19:43 ..
1706985 144M -rwxrw-rw- 1 nobody nobody 144M Dec 27 13:39 Disney_ Alice im Wunderland (1951).mp4
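For a guest share like the one above to accept anonymous connections, Samba also needs a guest mapping in [global]; a minimal sketch (these values are assumptions for illustration, not taken from the thread):

```
[global]
    # map unknown usernames to the guest account instead of rejecting them
    map to guest = Bad User
    # account that guest connections run as; matches the "nobody" owner above
    guest account = nobody
```

With SELinux enforcing (the thread's subject), the share directory typically also needs the samba_share_t label, or a boolean such as samba_export_all_rw enabled.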
--
Ibrahim "Arastirmacilar" Yurtseven
2...
2006 Sep 13
3
FreeBSD 6.1-RELEASE/kqueue high CPU load
Hi to all!
I have dovecot-1.0r7 installed on FreeBSD 6.1, using kqueue and Maildir
(<20 mailboxes, <0.5 GB in size). Periodically the CPU load of the imap
processes increases to 60-80%.
Is this normal behavior or not? Has anybody seen this kind of problem?
2005 Feb 28
1
Mail server on DMZ
.../0                   0.0.0.0/0  tcp dpt:20
    0     0 ACCEPT     icmp --  *  *  0.0.0.0/0  0.0.0.0/0
 3220  288K dmz2all    all  --  *  *  0.0.0.0/0  0.0.0.0/0
Chain dmz2loc (1 references)
 pkts bytes target     prot opt in out source     destination
 537K  144M ACCEPT     all  --  *  *  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
    7   790 newnotsyn  tcp  --  *  *  0.0.0.0/0  0.0.0.0/0  state NEW tcp flags:!0x16/0x02
    0     0 ACCEPT     tcp  --  *  *  0.0.0.0/0  0.0.0.0/0  tcp dpt:26
 8561  411K...
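The counters above come from `iptables -nvL`. A hypothetical reconstruction of the visible dmz2loc rules in iptables-save syntax (the newnotsyn chain is characteristic of Shorewall-style rulesets; the flags:!0x16/0x02 display decodes to a negated SYN,RST,ACK/SYN match):

```
# Hypothetical sketch reconstructed from the listing above, not the
# poster's actual ruleset.
*filter
:dmz2loc - [0:0]
:newnotsyn - [0:0]
-A dmz2loc -m state --state RELATED,ESTABLISHED -j ACCEPT
-A dmz2loc -p tcp -m tcp ! --tcp-flags SYN,RST,ACK SYN -m state --state NEW -j newnotsyn
-A dmz2loc -p tcp -m tcp --dport 26 -j ACCEPT
COMMIT
```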
2005 Mar 07
10
DNS Name problem with mail server on LAN
.../0                   0.0.0.0/0  tcp dpt:20
    0     0 ACCEPT     icmp --  *  *  0.0.0.0/0  0.0.0.0/0
 3220  288K dmz2all    all  --  *  *  0.0.0.0/0  0.0.0.0/0
Chain dmz2loc (1 references)
 pkts bytes target     prot opt in out source     destination
 537K  144M ACCEPT     all  --  *  *  0.0.0.0/0  0.0.0.0/0  state RELATED,ESTABLISHED
    7   790 newnotsyn  tcp  --  *  *  0.0.0.0/0  0.0.0.0/0  state NEW tcp flags:!0x16/0x02
    0     0 ACCEPT     tcp  --  *  *  0.0.0.0/0  0.0.0.0/0  tcp dpt:26
 8561  411K...
2023 Jul 04
1
remove_me files building up
...128K 113 128K 1% /var/lib/glusterd
/dev/sdd1 7.5M 2.6M 5.0M 35% /data/glusterfs/gv1/brick3
/dev/sdc1 7.5M 600K 7.0M 8% /data/glusterfs/gv1/brick1
/dev/sde1 6.4M 2.9M 3.5M 46% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1 150M 6.5M 144M 5% /mnt/gfs
tmpfs 995K 21 995K 1% /run/user/1004
root at uk3-prod-gfs-arb-01:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 796M 916K 795M 1% /run
/dev/sda1 12G 3.9G...
2023 Jul 04
1
remove_me files building up
...                    128K   113  128K   1% /var/lib/glusterd
/dev/sdd1              7.5M  2.6M  5.0M  35% /data/glusterfs/gv1/brick3
/dev/sdc1              7.5M  600K  7.0M   8% /data/glusterfs/gv1/brick1
/dev/sde1              6.4M  2.9M  3.5M  46% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1   150M  6.5M  144M   5% /mnt/gfs
tmpfs                  995K    21  995K   1% /run/user/1004
root at uk3-prod-gfs-arb-01:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.9G     0  3.9G   0% /dev
tmpfs                 796M  916K  795M   1% /run
/dev/sda1              12G  3.9G  7.3...
2023 Jul 04
1
remove_me files building up
...128K 113 128K 1% /var/lib/glusterd
/dev/sdd1 7.5M 2.6M 5.0M 35% /data/glusterfs/gv1/brick3
/dev/sdc1 7.5M 600K 7.0M 8% /data/glusterfs/gv1/brick1
/dev/sde1 6.4M 2.9M 3.5M 46% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1 150M 6.5M 144M 5% /mnt/gfs
tmpfs 995K 21 995K 1% /run/user/1004
root at uk3-prod-gfs-arb-01:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 796M 916K 795M 1% /run
/dev/sda1 12G 3.9G...
2007 Mar 06
59
Memory leaks in my site
Hi all,
My environment is ruby-1.8.4, Rails 1.2.2, Mongrel 1.0.1, Linux 2.6. I
have a problem with memory leaks in Mongrel. My site runs 5 Mongrel
processes on a 2 GB RAM machine; the memory of each process grows from
about 20M to about 250M and never returns to the initial 20M, so I have
to restart the Mongrel processes once per day. The load is about 1M hits
per day.
Waiting for
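A common stopgap for this kind of growth is to recycle processes once they cross a memory threshold. A hypothetical sketch (the process name pattern, the 200 MB threshold, and the use of USR2 to restart Mongrel are assumptions about the poster's setup, not from the thread):

```shell
# Recycle any Mongrel whose resident set exceeds ~200 MB.
# Reads VmRSS (in kB) from /proc; safe no-op if no process matches.
for pid in $(pgrep -f mongrel_rails); do
  rss_kb=$(awk '/^VmRSS/ {print $2}' "/proc/$pid/status")
  if [ "${rss_kb:-0}" -gt 204800 ]; then
    kill -USR2 "$pid"    # assumed restart signal for this setup
  fi
done
```

Run from cron, this caps per-process growth without a daily full restart, at the cost of briefly dropping in-flight requests on the recycled worker.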
2023 Jul 04
1
remove_me files building up
...                    128K   113  128K   1% /var/lib/glusterd
/dev/sdd1              7.5M  2.6M  5.0M  35% /data/glusterfs/gv1/brick3
/dev/sdc1              7.5M  600K  7.0M   8% /data/glusterfs/gv1/brick1
/dev/sde1              6.4M  2.9M  3.5M  46% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1   150M  6.5M  144M   5% /mnt/gfs
tmpfs                  995K    21  995K   1% /run/user/1004
root at uk3-prod-gfs-arb-01:~# df -h
Filesystem            Size  Used Avail Use% Mounted on
udev                  3.9G     0  3.9G   0% /dev
tmpfs                 796M  916K  795M   1% /run
/dev/sda1              12G  3.9G  7.3...
2023 Jul 05
1
remove_me files building up
...128K 113 128K 1% /var/lib/glusterd
/dev/sdd1 7.5M 2.6M 5.0M 35% /data/glusterfs/gv1/brick3
/dev/sdc1 7.5M 600K 7.0M 8% /data/glusterfs/gv1/brick1
/dev/sde1 6.4M 2.9M 3.5M 46% /data/glusterfs/gv1/brick2
uk1-prod-gfs-01:/gv1 150M 6.5M 144M 5% /mnt/gfs
tmpfs 995K 21 995K 1% /run/user/1004
root at uk3-prod-gfs-arb-01:~# df -h
Filesystem Size Used Avail Use% Mounted on
udev 3.9G 0 3.9G 0% /dev
tmpfs 796M 916K 795M 1% /run
/dev/sda1 12G 3.9G...
2023 Jul 03
1
remove_me files building up
Hi,
you mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards,
Strahil Nikolov
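The K/M figures in the earlier listings are inode counts (`df -i` output), not bytes; the second block is ordinary `df -h`. A sketch of how to confirm an inode shortage (the commented brick path is the one from the thread; substitute your own mount):

```shell
# Inode usage for a mounted filesystem; IFree/IUse% show the headroom.
df -ih /
# For an XFS brick, the inode geometry (imaxpct etc.) comes from:
#   xfs_info /data/glusterfs/gv1/brick1
```

On XFS, a brick can show plenty of free space in `df -h` while `df -i` is at 100%, which matches the symptoms described in the thread.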
On Sat, Jul 1, 2023 at 19:41, Liam Smith<liam.smith at ek.co> wrote: Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the server's