Displaying 20 results from an estimated 3000 matches similar to: "Shorewall 1.4.2"

2003 Mar 26
5
Where do we go from here?
As I recently announced on the Shorewall Development list, the version of Shorewall 1.4 currently in the CVS development tree improves the performance of complex zones (those requiring entries in /etc/shorewall/hosts). With that change, I've completed the product cleanup that I envisioned for 1.4. Before I wrap up 1.4.2 and begin thinking about 2.0, is there anything else that
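
For context, a "complex zone" here is one defined through /etc/shorewall/hosts rather than by a whole interface. A minimal sketch of such an entry, with a hypothetical zone name and subnet:

    #ZONE   HOST(S)                 OPTIONS
    dmz     eth2:172.17.1.0/24
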
2007 Jun 27
9
Newbie questions...
I've spent the last week or two poring over the documentation and setting up my first puppet environment, and while I've figured out how to do most of what I want to do with it, I have some questions that I haven't been able to find answers for... * Can I match parts of a facter fact? In particular I have hostnames that include the environment as part of the
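
A hedged sketch of one way to do this in current Puppet (the hostname scheme and fact path are assumptions; regex captures such as $1 inside an `if ... =~` block are standard Puppet behaviour):

    # match e.g. "web-prod-01" and extract the environment part
    if $facts['networking']['hostname'] =~ /^\w+-(dev|test|prod)-\d+$/ {
      $app_env = $1
    }
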
2003 Nov 21
7
FORWARD:REJECT
I have a 3-NIC setup with shorewall 1.4.8-1 running on Red Hat 9. My eth2 (dmz zone) has 7 secondary addresses attached to it. I can ping a machine in each subnet, and dmz-to-net rules seem to be working fine on all machines. I have my policy set as dmz to dmz accept. If I try to ping between subnets I get Nov 21 12:18:45 kbeewall kernel: Shorewall:FORWARD:REJECT:IN=eth2 OUT=eth2 SRC=172.17.0.2
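
The usual cause of a REJECT where IN and OUT are the same interface is a missing routeback option, which lets traffic leave on the interface it arrived on. A hedged sketch for /etc/shorewall/interfaces (the column values are assumed from the post, not confirmed by it):

    #ZONE   INTERFACE   BROADCAST   OPTIONS
    dmz     eth2        detect      routeback
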
2008 Dec 31
5
Problem with "routeback, blacklist, tcpflags" in Shorewall 4.2.4-2
Hi, enabling this line in hosts file "WAN eth2:0.0.0.0/0!1.0.0.0/8,10.0.0.0/8,169.254.0.0/16,172.16.0.0/12,192.168.0.0/16 routeback,blacklist,tcpflags" results in this error message -- Preparing iptables-restore input... Running /usr/sbin/iptables-restore... iptables-restore v1.3.8: error creating chain 'ACCEPT': File exists Error occurred at line: 29 Try
2003 Nov 08
1
Sourceforge updates, webmin
Great piece of software there... Just a few minor problems. First, the sourceforge site doesn't seem to be kept up to date. This should be pointed out more (Sourceforge probably shouldn't be the first mirror either). It caused me some long hours trying to solve a bug in 1.4.6, thinking this was the latest version, when in fact this bug was solved in 1.4.8 (routeback for if+).
2017 Jun 20
2
trash can feature, crashed???
All, I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last week, I enabled the trashcan feature on one of my volumes: gluster volume set date01 features.trash on I also limited the max file size to 500MB: gluster volume set data01 features.trash-max-filesize 500MB Three hours after I enabled this, this specific gluster volume went down: [2017-06-16
2017 Jun 20
0
trash can feature, crashed???
On Tue, 2017-06-20 at 08:52 -0400, Ludwig Gamache wrote: > All, > > I currently have 2 bricks running Gluster 3.10.1. This is a Centos installation. On Friday last > week, I enabled the trashcan feature on one of my volumes: > gluster volume set date01 features.trash on I think you misspelled the volume name. Is it data01 or date01? > I also limited the max file size to 500MB:
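
For reference, a corrected sequence under the assumption that the volume is really data01 (the name and options are taken from the thread itself):

    gluster volume set data01 features.trash on
    gluster volume set data01 features.trash-max-filesize 500MB
    gluster volume get data01 features.trash
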
2009 Aug 21
2
Multiple interfaces in a zone (not a standard case)
Hi, This subject has been brought up in the forum, but it's a bit different. If I have a set of tun interfaces: I already defined tun+ as zone A, and I have excluded tun15 as zone B (a subset of zone A). I need to add tun16 to zone B. My config: /etc/shorewall/interfaces: A tun+ - routeback B tun15 /etc/shorewall/ A ipv4 B:A ipv4 I tried to define in
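
One hedged way to express this nesting (the truncated filename above is presumably /etc/shorewall/zones, an assumption; defining the subzone members via the hosts file instead of interfaces is a sketch, not a confirmed fix):

    # /etc/shorewall/zones
    A    ipv4
    B:A  ipv4

    # /etc/shorewall/interfaces
    A    tun+    -    routeback

    # /etc/shorewall/hosts -- place tun15 and tun16 in subzone B
    B    tun15:0.0.0.0/0
    B    tun16:0.0.0.0/0
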
2017 Jul 30
1
Lose gnfs connection during test
Hi all, I use a Distributed-Replicate (12 x 2 = 24) hot tier plus a Distributed-Replicate (36 x (6 + 2) = 288) cold tier with gluster 3.8.4 for a performance test. When I set client/server.event-threads to small values, e.g. 2, it works OK. But if I set client/server.event-threads to big values, e.g. 32, the network connections always become unavailable during the test, with the following error messages in stree
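
If the thread counts do need raising, a hedged middle ground is to step up gradually rather than jumping straight to 32 (the volume name is hypothetical; the option names come from the post):

    gluster volume set testvol client.event-threads 8
    gluster volume set testvol server.event-threads 8
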
2017 Nov 07
2
Enabling Halo sets volume RO
Hi all, I'm taking a stab at deploying a storage cluster to explore the Halo AFR feature and running into some trouble. In GCE, I have 4 instances, each with one 10GB brick. 2 instances are in the US and the other 2 are in Asia (with the hope that this will drive up latency sufficiently). The bricks make up a Replica-4 volume. Before I enable halo, I can mount the volume and r/w files. The
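
A hedged sketch of the options involved; the names below are assumptions against this particular Gluster build, not confirmed by the post. Note also that a Replica-4 volume split across two regions can trip client quorum, which by itself forces read-only:

    gluster volume set halovol cluster.halo-enabled yes
    gluster volume set halovol cluster.halo-max-latency 10
    gluster volume set halovol cluster.halo-min-replicas 2
    # if quorum is what forces RO, check what is in effect:
    gluster volume get halovol cluster.quorum-type
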
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list, recently I've noted a strange behaviour of my gluster storage: sometimes while executing a simple command like "gluster volume status vm-images-repo", as a response I get "Another transaction is in progress for vm-images-repo. Please try again after sometime.". This situation does not resolve simply by waiting; I have to restart glusterd on the node that
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
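
Rather than copying individual options out of that file, the same settings can usually be applied in one step via the virt group (the volume name is hypothetical):

    gluster volume set myvol group virt
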
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: Whilst I don't know the issue nor the root cause of your problem using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article for it describes as a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; some of that is because most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems'. -wk On 6/1/23 8:42 AM, Christian Schoepplein wrote: > Hi, > > we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody, regarding the issue with mount, usually I am using this systemd service to bring up the mount points:

/etc/systemd/system/glusterfsmounts.service

    [Unit]
    Description=Glustermounting
    Requires=glusterd.service
    Wants=glusterd.service
    After=network.target network-online.target glusterd.service

    [Service]
    Type=simple
    RemainAfterExit=true
    ExecStartPre=/usr/sbin/gluster volume list
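
The snippet cuts off at ExecStartPre, and a working unit still needs something that actually performs the mounts. A hedged completion (the ExecStart line is an assumption, mounting the glusterfs entries from fstab), followed by the usual activation commands:

    ExecStart=/bin/mount -a -t glusterfs

    systemctl daemon-reload
    systemctl enable --now glusterfsmounts.service
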
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noted a strange behaviour of my gluster storage, sometimes > while executing a simple command like "gluster volume status > vm-images-repo" as a response I got "Another transaction
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes. Thanks, Paolo On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara > <paolo.margara at polito.it <mailto:paolo.margara at polito.it>> wrote: > > Hi list, > > recently I've
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Red Hat Virtualization, the following Gluster volume settings are recommended to be applied (preferably at the creation of the volume). These settings are important for data reliability (note that Replica 3 or Replica 2+1 is expected):

    performance.quick-read=off
    performance.read-ahead=off
    performance.io-cache=off
    performance.low-prio-threads=32
    network.remote-dio=enable
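
A hedged way to apply that list in one pass from a shell (the volume name is hypothetical):

    VOL=vmstore
    for opt in performance.quick-read=off performance.read-ahead=off \
               performance.io-cache=off performance.low-prio-threads=32 \
               network.remote-dio=enable; do
        gluster volume set "$VOL" "${opt%%=*}" "${opt#*=}"
    done
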
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
So from the cmd_history.logs across all the nodes it's evident that multiple commands on the same volume are run simultaneously, which can result in transaction collisions, and you can end up with one command succeeding and the others failing. Ideally, if you are running the volume status command for monitoring, it is suggested to run it from only one node. On Thu, Jul 20, 2017 at 3:54 PM, Paolo
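
A hedged sketch of that advice: designate one node as the monitor and poll only from there (the interval and log path are hypothetical; the volume name comes from the thread):

    # crontab entry on a single designated node
    */5 * * * * /usr/sbin/gluster volume status vm-images-repo > /var/log/gluster-status.out 2>&1
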
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options: performance.write-behind performance.flush-behind --- Gilberto Nunes Ferreira (47) 99676-7530 - Whatsapp / Telegram On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese < guillaume.pavese at interactiv-group.com> wrote: > On oVirt / Redhat Virtualization, > the following Gluster volumes settings are recommended to be applied > (preferably at
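
As a hedged sketch, the suggested change would be (the volume name is hypothetical; the option names come from the post):

    gluster volume set gv0 performance.write-behind off
    gluster volume set gv0 performance.flush-behind off
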