Displaying 15 results from an estimated 15 matches for "179m".
2004 Jul 22
4
0.99.10.x auth memory leak?
...0  220m 216m 5560 S  0.0 10.7  25:01.54 dovecot-auth
31234 root      16   0  205m 202m 5560 S  0.0 10.0  24:08.84 dovecot-auth
31231 root      16   0  200m 196m 5560 S  0.7  9.7  23:25.37 dovecot-auth
31232 root      16   0  196m 192m 5560 S  0.0  9.5  23:10.44 dovecot-auth
31233 root      15   0  179m 175m 5560 S  0.3  8.6  22:13.07 dovecot-auth
---
So, I guess my questions to Timo are:
Do you think it's leaking, and if so, any idea where?
Given the load, would a single auth process be a bad idea?
(it is quite a fast dual Opteron box)
Regards,
Christian Balzer
-- 
Christian Balzer        Network/Systems...
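A quick way to check whether dovecot-auth is really leaking (a sketch; assumes a Linux box with procps and that the process name matches the top output above) is to sample RSS under steady load:

# Log the RSS (KB) of every dovecot-auth process once a minute;
# steady growth under constant load points at a leak.
while true; do
    date
    ps -C dovecot-auth -o pid=,rss=
    sleep 60
done >> dovecot-auth-rss.log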
2007 Apr 18
1
[Bridge] Multilink + bridge + nat problem
...prot opt in     out     source               destination
1    3348K 1832M CONNMARK   0    --  *      *       0.0.0.0/0            0.0.0.0/0           CONNMARK restore
2    2841K 1653M RETURN     0    --  *      *       0.0.0.0/0            0.0.0.0/0           MARK match !0x0/0xf000
3     507K  179M MARCAR_IFACE_TRAFICO  0    --  *      *       0.0.0.0/0           0.0.0.0/0           MARK match 0x0/0xf000
4    40690 2721K MARK       0    --  wan0   *       0.0.0.0/0            0.0.0.0/0           MARK match 0x0/0xf000 PHYSDEV match --physdev-in eth1 state NEW MARK or 0x8000
5    48680 3062K M...
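For reference, the restore/mark/save pattern visible above is typically built along these lines (a sketch using the mark values from the listing; MARCAR_IFACE_TRAFICO is the poster's own chain, so the rules below are illustrative, not their exact ruleset):

# Restore any mark saved on the connection, then skip already-marked packets.
iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark
iptables -t mangle -A PREROUTING -m mark ! --mark 0x0/0xf000 -j RETURN
# Mark new connections arriving on wan0 and save the mark to the conntrack entry.
iptables -t mangle -A PREROUTING -i wan0 -m state --state NEW -j MARK --set-mark 0x8000
iptables -t mangle -A PREROUTING -j CONNMARK --save-mark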
2018 May 08
1
mount failing client to gluster cluster.
...33M  3.9G   1% /tmp
/dev/mapper/centos-home                         50G  4.3G   46G   9% /home
/dev/mapper/centos-var                          20G  341M   20G   2% /var
/dev/mapper/centos-data1                       120G   36M  120G   1% /data1
/dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
/dev/mapper/vg--gluster--prod1-gluster--prod1  932G  233G  699G  25% /bricks/brick1
tmpfs                                          771M   12K  771M   1% /run/user/42
tmpfs                                          771M   32K  771M   1% /run/user/1000
glusterp1:gv0/glusterp1/image...
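When a client mount fails, mounting by hand with an explicit client log usually shows the reason (a sketch; glusterp1 and gv0 are the server and volume names from the df output above, /mnt/test is hypothetical):

# Try the FUSE mount directly and capture a verbose client-side log.
mount -t glusterfs -o log-level=DEBUG,log-file=/tmp/gv0-client.log glusterp1:gv0 /mnt/test
tail -n 50 /tmp/gv0-client.log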
2008 Jul 05
2
Question on number of processes engendered by BDRb
...5 and 7083.
============================================
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10961 raghus    15   0  193m 110m 3508 S    0 10.8   0:52.87 mongrel_rails
10971 raghus    15   0  188m 107m 3440 S    0 10.5   0:50.61 mongrel_rails
11013 raghus    15   0  179m 103m 3348 S    0 10.1   0:45.18 mongrel_rails
* 7084 raghus    15   0  152m  73m 2036 S   11  7.2 116:31.68 packet_worker_r*
11129 raghus    15   0  134m  58m 3336 S    0  5.7   0:05.20 mongrel_rails
* 7085 raghus    15   0  131m  53m 2020 S    0  5.2   2:23.61 packet_worker_r*
 5094 mysql     15...
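To relate the top output to BackgrounDRb's worker pool, counting the processes by name is the quickest check (a sketch; the names are taken from the COMMAND column above):

# How many mongrels and how many backgroundrb packet workers are running?
pgrep -fl mongrel_rails
pgrep -f packet_worker | wc -l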
2011 Jul 07
4
Question on memory usage, garbage collector, 'top' stats on linux
...rs
Swap:   905208k total,        0k used,   905208k free,  3176008k cached
  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 6291 webappus  20   0  274m 151m 3276 S  0.3  3.9   8:27.12 ruby
 6218 webappus  20   0  206m  82m 3544 R 98.9  2.1   0:48.81 ruby
 6208 webappus  20   0  179m  59m 4788 S  0.0  1.5   0:07.50 ruby
 6295 postgres  20   0  102m  32m  28m S  0.0  0.8  17:54.62 postgres
 1034 postgres  20   0 98.7m  26m  25m S  0.0  0.7   0:23.67 postgres
  843 mysql     20   0  174m  26m 6648 S  0.0  0.7   0:31.82 mysqld
 6222 postgres  20   0  107m  19m  11m S  0.0  0.5   0...
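top's VIRT column counts mapped address space, not live memory, so RES is the number to watch for Ruby GC questions. A minimal sketch for sampling both over time (PID 6218 is taken from the output above):

# Print virtual size and resident set (KB) every 10 s while the process lives.
while kill -0 6218 2>/dev/null; do
    ps -o vsz=,rss= -p 6218
    sleep 10
done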
2018 May 21
2
split brain? but where?
.../dev/mapper/centos-data1                       120G   36M  120G   1% /data1
    /dev/mapper/vg--gluster--prod1-gluster--prod1  932G  260G  673G  28% /bricks/brick1
    /dev/mapper/centos-var                          20G  413M   20G   3% /var
    /dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
    tmpfs                                          771M  8.0K  771M   1% /run/user/42
    glusterp1:gv0                                  932G  273G  659G  30% /isos
    glusterp1:gv0/glusterp1/images                 932G  273G  659G  30% /var/lib/libvirt/images
glusterp3.graywit...
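When split brain is suspected, Gluster's heal counters say which files and which bricks are affected (gv0 is the volume name from the df output above; the output format varies by release):

# List entries pending heal, then only those flagged as split-brain.
gluster volume heal gv0 info
gluster volume heal gv0 info split-brain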
2005 Oct 05
0
Asterisk 1.0.9-BRIstuffed-0.2.0-RC8o memory leak when using call files?
...oncurrent 30
call files to /var/spool/asterisk/outgoing/ on box A, which initiate, via a
dialplan context/extension, an outbound call (redirected via chan_local) to
box B, playing some preexisting wav files followed by a hangup.
I see memory growth for the asterisk process from 39M VIRT / 11M RES to
179M VIRT / 22M RES after 1000 completed calls.
Box B, which accepts/records the calls, doesn't show such memory growth
over time.
Are there known issues with call files / memory growth in Asterisk 1.0.9?
The dial construct via chan_local is there to have some additional
information about the call...
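For context, a call file of the kind described looks roughly like this (a sketch; the number, context and extension are hypothetical, not the poster's dialplan). Composing the file elsewhere and mv'ing it into the spool avoids Asterisk reading a half-written file:

# Build the call file, then move it into the outgoing spool atomically.
cat > /tmp/test.call <<'EOF'
Channel: Local/1234@outbound
MaxRetries: 0
Context: play-then-hangup
Extension: s
Priority: 1
EOF
mv /tmp/test.call /var/spool/asterisk/outgoing/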
2018 May 21
0
split brain? but where?
...ata1                       120G   36M  120G   1% /data1
>   /dev/mapper/vg--gluster--prod1-gluster--prod1  932G  260G  673G  28% /bricks/brick1
>   /dev/mapper/centos-var                          20G  413M   20G   3% /var
>   /dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
>   tmpfs                                          771M  8.0K  771M   1% /run/user/42
>   glusterp1:gv0                                  932G  273G  659G  30% /isos
>   glusterp1:gv0/glusterp1/images                 932G  273G  659G  30% /var/lib/libvirt...
2018 May 22
2
split brain? but where?
...G   36M  120G   1% /data1
> >   /dev/mapper/vg--gluster--prod1-gluster--prod1  932G  260G  673G  28% /bricks/brick1
> >   /dev/mapper/centos-var                          20G  413M   20G   3% /var
> >   /dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
> >   tmpfs                                          771M  8.0K  771M   1% /run/user/42
> >   glusterp1:gv0                                  932G  273G  659G  30% /isos
> >   glusterp1:gv0/glusterp1/images                 932G  273G...
2018 May 22
0
split brain? but where?
...> >/data1
>> >   /dev/mapper/vg--gluster--prod1-gluster--prod1  932G  260G  673G  28% /bricks/brick1
>> >   /dev/mapper/centos-var                          20G  413M   20G   3% /var
>> >   /dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
>> >   tmpfs                                          771M  8.0K  771M   1% /run/user/42
>> >   glusterp1:gv0                                  932G  273G  659G  30% /isos
>> >   glusterp1:gv0/glusterp1/images...
2007 Mar 30
6
1.0.rc29 released
http://dovecot.org/releases/dovecot-1.0.rc29.tar.gz
http://dovecot.org/releases/dovecot-1.0.rc29.tar.gz.sig
Probably one more RC after this.
	* Security fix: If zlib plugin was loaded, it was possible to open
	  gzipped mbox files outside the user's mail directory.
	+ Added auth_gssapi_hostname setting.
	- IMAP: LIST "" "" didn't return anything if there didn't
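The zlib fix only matters for setups that load the plugin; in Dovecot 1.0 that is done per protocol, so affected configs contain something like the following (a sketch of the 1.0 config syntax):

protocol imap {
  mail_plugins = zlib
}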
2018 May 22
1
split brain? but where?
...>> >   /dev/mapper/vg--gluster--prod1-gluster--prod1  932G  260G  673G  28% /bricks/brick1
>>> >   /dev/mapper/centos-var                          20G  413M   20G   3% /var
>>> >   /dev/mapper/centos00-var_lib                   9.4G  179M  9.2G   2% /var/lib
>>> >   tmpfs                                          771M  8.0K  771M   1% /run/user/42
>>> >   glusterp1:gv0                                  932G  273G  659G  30% /isos
>>> >   glusterp1:g...
2018 Mar 28
3
Announcing Gluster release 4.0.1 (Short Term Maintenance)
On Wed, Mar 28, 2018 at 02:57:55PM +1300, Thing wrote:
> Hi,
> 
> Thanks, yes; not being very familiar with CentOS, googling took a while
> to find a 4.0 version at,
> 
> https://wiki.centos.org/SpecialInterestGroup/Storage
The announcement for Gluster 4.0 in CentOS should contain all the
details that you need as well:
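On CentOS 7 the Storage SIG packages are normally enabled via a release package from extras (a sketch; the exact package name for the 4.0 stream is an assumption worth checking against the wiki page above):

# Enable the Storage SIG repo for the Gluster 4.0 stream, then install.
yum install -y centos-release-gluster40
yum install -y glusterfs-server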
2013 Jan 26
4
Write failure on distributed volume with free space available
...3 s, 191 MB/s
Filesystem            Size  Used Avail Use% Mounted on
192.168.192.5:/test   291M   96M  195M  33% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.0890243 s, 188 MB/s
Filesystem            Size  Used Avail Use% Mounted on
192.168.192.5:/test   291M  112M  179M  39% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.0853196 s, 197 MB/s
Filesystem            Size  Used Avail Use% Mounted on
192.168.192.5:/test   291M  128M  163M  45% /mnt/gluster1
1+0 records in
1+0 records out
16777216 bytes (17 MB) copied, 0.0923682 s, 182 MB/s...
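The write-then-check pattern in the snippet is easy to script when hunting for the point where a distributed volume refuses writes despite free space (the mount point matches the output above; the file prefix is hypothetical):

# Keep writing 16 MB files until a write fails, printing usage each round.
i=0
while dd if=/dev/zero of=/mnt/gluster1/f$i bs=1M count=16 2>/dev/null; do
    df -h /mnt/gluster1
    i=$((i+1))
done
echo "write failed at file f$i"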