Displaying 20 results from an estimated 3000 matches similar to: "Rate limiting"
2003 Jan 29
2
shorewall ( and everything else) quit logging
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
I have a machine that is running Shorewall 1.3.11a. Last night it
quit all logging. The "messages" file just ends at 4:20 PM, no
entries since. I was vim'ing in that file around that time.... Any
ideas how to start logging without a reboot?
Thanks for your time,
Steve Postma
Sys Admin
Travizon
-----BEGIN PGP SIGNATURE-----
Version:
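Editing a live log with vim is a likely culprit here: vim's save can replace the file's inode, and syslogd keeps writing to the old, now-invisible inode. A minimal sketch of the effect (the paths are illustrative, not from the system above); the usual no-reboot fix is to send syslogd a SIGHUP, e.g. `kill -HUP $(cat /var/run/syslogd.pid)`, so it reopens its output files:

```shell
# A writer holding an open fd keeps appending to the *old* inode
# after the file is replaced, so the visible file stops growing.
echo "line1" > /tmp/demo.log
exec 3>>/tmp/demo.log                # simulate syslogd's long-lived fd
mv /tmp/demo.log /tmp/demo.log.old   # roughly what vim's save can do
echo "line2" >&3                     # still lands in the renamed file
echo "fresh" > /tmp/demo.log         # the file everyone is reading
cat /tmp/demo.log                    # prints only "fresh"
cat /tmp/demo.log.old                # line1 and line2
exec 3>&-
```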
2017 Aug 08
1
Slow write times to gluster disk
Soumya,
its
[root at mseas-data2 ~]# glusterfs --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
2017 Aug 08
0
Slow write times to gluster disk
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Soumya Koduri" <skoduri at redhat.com>, gluster-users at gluster.org, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>, "Raghavendra
2017 Aug 07
2
Slow write times to gluster disk
Hi Soumya,
We just had the opportunity to try the option of disabling the
kernel-NFS and restarting glusterd to start gNFS. However, the gluster
daemon crashes immediately on startup. What additional information
besides what we provide below would help debugging this?
Thanks,
Pat
-------- Forwarded Message --------
Subject: gluster-nfs crashing on start
Date: Mon, 7 Aug 2017 16:05:09
2003 Jan 11
1
interesting problem
I have shorewall 1.3.12 installed on a Red Hat 8 fully patched machine with
three NICs. Eth0 has 10 IPs bound to it and has been successfully routing
web traffic to servers on the dmz.
This morning I added another server to the DMZ, configured my network with
the correct IP, configured dnat in "rules" and restarted both. From a
standalone machine that is
on the same segment
2003 Jan 11
1
ulogd
I am unable to "make" ulogd on any of my Red Hat 8 machines (error 1); has
anybody used shorewall with any of the other logging daemon packages out
there?
Steve Postma
System Admin
Travizon
2003 Jan 14
1
logging
I would like to cut down on packets logged from "loc2net". I have modified
my policy file so that the logging for loc2net is "err", but DNS packets and
SMTP are still being logged. Is it possible to filter these out?
On a separate note, if I define ULOG in policy, I get an error on shorewall
startup "ULOG not defined" or something of that nature. Sorry about being
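One common way to keep specific traffic out of the logs under a logging policy is to accept it explicitly in the rules file with no log level, so it never falls through to the loc2net policy entry. A sketch, assuming the Shorewall 1.3 rules-file column layout; zones and ports are illustrative:

```
# /etc/shorewall/rules -- rules match before policies and log only
# when a log level is given, so these stay out of "messages"
#ACTION    SOURCE  DEST    PROTO   DEST PORT(S)
ACCEPT     loc     net     udp     53
ACCEPT     loc     net     tcp     53,25
```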
2003 Oct 21
2
problems
In the last 15 minutes I have had a major firewall running Shorewall display
some problems. This machine has been working fine for the better part of a
year, no changes made in the last week. This machine has three zones. There
is a DNAT running from the net zone and the loc zone to a webserver in the
dmz (port 80 only). The DNAT from the loc zone seems not to be working
correctly. If I make a web
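For DNAT originating in the local zone, Shorewall's documentation notes the rule usually needs the ORIGINAL DEST column filled in with the external address the loc clients actually connect to. A sketch, assuming the 1.3-era rules format; the addresses are placeholders, not taken from the message above:

```
# /etc/shorewall/rules
#ACTION  SOURCE  DEST            PROTO  DEST PORT  CLIENT  ORIG. DEST.
DNAT     net     dmz:192.0.2.80  tcp    80
DNAT     loc     dmz:192.0.2.80  tcp    80         -       203.0.113.10
```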
2017 Jun 24
0
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 9:10 AM, Pranith Kumar Karampuri <
pkarampu at redhat.com> wrote:
>
>
> On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote:
>
>>
>> Hi,
>>
>> Today we experimented with some of the FUSE options that we found in the
>> list.
>>
>> Changing these options had no effect:
>>
>>
2017 Jun 27
0
Slow write times to gluster disk
On Mon, Jun 26, 2017 at 7:40 PM, Pat Haley <phaley at mit.edu> wrote:
>
> Hi All,
>
> Decided to try another test of gluster mounted via FUSE vs gluster
> mounted via NFS, this time using the software we run in production (i.e.
> our ocean model writing a netCDF file).
>
> gluster mounted via NFS the run took 2.3 hr
>
> gluster mounted via FUSE: the run took
2017 Jun 23
2
Slow write times to gluster disk
On Fri, Jun 23, 2017 at 2:23 AM, Pat Haley <phaley at mit.edu> wrote:
>
> Hi,
>
> Today we experimented with some of the FUSE options that we found in the
> list.
>
> Changing these options had no effect:
>
> gluster volume set test-volume performance.cache-max-file-size 2MB
> gluster volume set test-volume performance.cache-refresh-timeout 4
> gluster
2017 Jun 26
3
Slow write times to gluster disk
Hi All,
Decided to try another test of gluster mounted via FUSE vs gluster
mounted via NFS, this time using the software we run in production (i.e.
our ocean model writing a netCDF file).
gluster mounted via NFS the run took 2.3 hr
gluster mounted via FUSE: the run took 44.2 hr
The only problem with using gluster mounted via NFS is that it does not
respect the group write permissions which
2017 Jun 22
0
Slow write times to gluster disk
Hi,
Today we experimented with some of the FUSE options that we found in the
list.
Changing these options had no effect:
gluster volume set test-volume performance.cache-max-file-size 2MB
gluster volume set test-volume performance.cache-refresh-timeout 4
gluster volume set test-volume performance.cache-size 256MB
gluster volume set test-volume performance.write-behind-window-size 4MB
gluster
2017 Jun 20
2
Slow write times to gluster disk
Hi Ben,
Sorry this took so long, but we had a real-time forecasting exercise
last week and I could only get to this now.
Backend Hardware/OS:
* Much of the information on our back end system is included at the
top of
http://lists.gluster.org/pipermail/gluster-users/2017-April/030529.html
* The specific model of the hard disks is Seagate ENTERPRISE CAPACITY
V.4 6TB
2017 Jun 12
0
Slow write times to gluster disk
Hi Guys,
I was wondering what our next steps should be to solve the slow write times.
Recently I was debugging a large code and writing a lot of output at
every time step. When I tried writing to our gluster disks, it was
taking over a day to do a single time step whereas if I had the same
program (same hardware, network) write to our nfs disk the time per
time-step was about 45 minutes.
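When chasing gaps like a day versus 45 minutes per time step, a quick dd probe against each mount point can separate raw write throughput from application behavior. A sketch; /tmp stands in here for the gluster and NFS mount points, so substitute your real paths:

```shell
# Time a 16 MiB write, forcing data to disk before dd reports,
# so page-cache effects don't hide the real write path.
probe() {
    target="$1/ddtest.$$"
    dd if=/dev/zero of="$target" bs=1M count=16 conv=fdatasync 2>&1 | tail -n 1
    rm -f "$target"
}
probe /tmp    # run once per mount point and compare the throughput figures
```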
2017 Jun 02
2
Slow write times to gluster disk
Are you sure using conv=sync is what you want? I normally use conv=fdatasync; I'll look up the difference between the two and see if it affects your test.
-b
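The difference matters for timing tests: conv=sync only pads each partial input block with NULs up to the block size (it flushes nothing), while conv=fdatasync calls fdatasync(2) on the output once at the end, so the elapsed time includes the real write-out. A quick illustration, assuming GNU dd:

```shell
# conv=sync pads short input blocks with NULs -- no disk flush involved
printf 'abc' | dd of=/tmp/padded.bin bs=8 conv=sync 2>/dev/null
wc -c < /tmp/padded.bin      # 8: three real bytes padded to one full block

# conv=fdatasync flushes file data to disk before dd exits, so the
# reported time reflects actual write throughput
dd if=/dev/zero of=/tmp/flushed.bin bs=1M count=4 conv=fdatasync 2>/dev/null
```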
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ravishankar N" <ravishankar at redhat.com>,
2003 Nov 21
7
FORWARD:REJECT
I have a 3-NIC setup with shorewall 1.4.8-1 running on Red Hat 9. My eth2
(dmz zone) has 7 secondary addresses attached to it. I can ping a machine in
each subnet, and dmz-to-net rules seem to be working fine on all machines. I
have my policy set as dmz to dmz accept. If I try to ping between subnets I
get
Nov 21 12:18:45 kbeewall kernel: Shorewall:FORWARD:REJECT:IN=eth2 OUT=eth2
SRC=172.17.0.2
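Log lines with IN and OUT on the same interface mean the traffic is being routed back out eth2 between the secondary subnets, and in Shorewall that normally requires the routeback option on the interface; the dmz-to-dmz ACCEPT policy alone is not enough. A sketch of the interfaces entry, assuming the 1.4 interfaces-file format:

```
# /etc/shorewall/interfaces
#ZONE   INTERFACE   BROADCAST   OPTIONS
dmz     eth2        detect      routeback
```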
2012 Oct 05
0
No subject
for all three nodes:
Brick1: gluster-0-0:/mseas-data-0-0
Brick2: gluster-0-1:/mseas-data-0-1
Brick3: gluster-data:/data
Which node are you trying to mount to /data? If it is not the
gluster-data node, then it will fail if there is not a /data directory.
In this case, it is a good thing, since mounting to /data on gluster-0-0
or gluster-0-1 would not accomplish what you need.
To clarify, there
2004 Jul 02
1
logon problems
I am really not sure if this is a Samba problem or what, but it is starting
to annoy me. Here is what is happening.
Server 1: Squid,squidguard,dansguardian,iptables (This is firewall/filter)
Server2: Samba (used for file storage and squid authentification)
Clients are Windows 98, with a few XPs.
Everything has run fine for almost 2 years. The other day when trying
to log on to the internet,