Greg Scott
2013-Jul-09 01:17 UTC
[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore
I don't get this. I have a replicated volume and 2 nodes. My challenge is, when I take one node offline, the other node can no longer access the volume until both nodes are back online again.

Details:

I have 2 nodes, fw1 and fw2. Each node has an XFS file system, /gluster-fw1 on node fw1 and /gluster-fw2 on node fw2. Node fw1 is at IP address 192.168.253.1. Node fw2 is at 192.168.253.2.

I create a gluster volume named firewall-scripts which is a replica of those two XFS file systems. The volume holds a bunch of config files common to both fw1 and fw2. The application is an active/standby pair of firewalls and the idea is to keep config files in a gluster volume.

When both nodes are online, everything works as expected. But when I take either node offline, node fw2 behaves badly:

[root@chicago-fw2 ~]# ls /firewall-scripts
ls: cannot access /firewall-scripts: Transport endpoint is not connected

And when I bring the offline node back online, node fw2 eventually behaves normally again.

What's up with that? Gluster is supposed to be resilient and self-healing and able to stand up to this sort of abuse. So I must be doing something wrong. Here is how I set up everything - it doesn't get much simpler than this, and my setup is right out of the Getting Started Guide but using my own names.

Here are the steps I followed, all from fw1:

gluster peer probe 192.168.253.2
gluster peer status

Create and start the volume:

gluster volume create firewall-scripts replica 2 transport tcp 192.168.253.1:/gluster-fw1 192.168.253.2:/gluster-fw2
gluster volume start firewall-scripts

On fw1:

mkdir /firewall-scripts
mount -t glusterfs 192.168.253.1:/firewall-scripts /firewall-scripts

and add this line to /etc/fstab:

192.168.253.1:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 0 0

On fw2:

mkdir /firewall-scripts
mount -t glusterfs 192.168.253.2:/firewall-scripts /firewall-scripts

and add this line to /etc/fstab:

192.168.253.2:/firewall-scripts /firewall-scripts glusterfs defaults,_netdev 0 0

That's it. That's the whole setup. When both nodes are online, everything replicates beautifully. But take one node offline and it all falls apart.
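[With both nodes up, a quick way to confirm the replica described above is actually working is to write through one mount and read through the other. A minimal sketch; the test filename is arbitrary:]

# On fw1: write through the gluster mount
echo hello > /firewall-scripts/replica-test.txt
# On fw2: the same file should appear through its own mount right away
cat /firewall-scripts/replica-test.txt
# It should also exist on both bricks (but never write to the bricks directly)
ls -l /gluster-fw1/replica-test.txt   # on fw1
ls -l /gluster-fw2/replica-test.txt   # on fw2
# Clean up through the mount
rm /firewall-scripts/replica-test.txt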
Here is the output from gluster volume info, identical on both nodes:

[root@chicago-fw1 etc]# gluster volume info

Volume Name: firewall-scripts
Type: Replicate
Volume ID: 239b6401-e873-449d-a2d3-1eb2f65a1d4c
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: 192.168.253.1:/gluster-fw1
Brick2: 192.168.253.2:/gluster-fw2
[root@chicago-fw1 etc]#

Looking at /var/log/glusterfs/firewall-scripts.log on fw2, I see errors like this every couple of seconds:

[2013-07-09 00:59:04.706390] I [afr-common.c:3856:afr_local_init] 0-firewall-scripts-replicate-0: no subvolumes up
[2013-07-09 00:59:04.706515] W [fuse-bridge.c:1132:fuse_err_cbk] 0-glusterfs-fuse: 3160: FLUSH() ERR => -1 (Transport endpoint is not connected)

And then when I bring fw1 back online, I see these messages on fw2:

[2013-07-09 01:01:35.006782] I [rpc-clnt.c:1648:rpc_clnt_reconfig] 0-firewall-scripts-client-0: changing port to 49152 (from 0)
[2013-07-09 01:01:35.006932] W [socket.c:514:__socket_rwv] 0-firewall-scripts-client-0: readv failed (No data available)
[2013-07-09 01:01:35.018546] I [client-handshake.c:1658:select_server_supported_programs] 0-firewall-scripts-client-0: Using Program GlusterFS 3.3, Num (1298437), Version (330)
[2013-07-09 01:01:35.019273] I [client-handshake.c:1456:client_setvolume_cbk] 0-firewall-scripts-client-0: Connected to 192.168.253.1:49152, attached to remote volume '/gluster-fw1'.
[2013-07-09 01:01:35.019356] I [client-handshake.c:1468:client_setvolume_cbk] 0-firewall-scripts-client-0: Server and Client lk-version numbers are not same, reopening the fds
[2013-07-09 01:01:35.019441] I [client-handshake.c:1308:client_post_handshake] 0-firewall-scripts-client-0: 1 fds open - Delaying child_up until they are re-opened
[2013-07-09 01:01:35.020070] I [client-handshake.c:930:client_child_up_reopen_done] 0-firewall-scripts-client-0: last fd open'd/lock-self-heal'd - notifying CHILD-UP
[2013-07-09 01:01:35.020282] I [afr-common.c:3698:afr_notify] 0-firewall-scripts-replicate-0: Subvolume 'firewall-scripts-client-0' came back up; going online.
[2013-07-09 01:01:35.020616] I [client-handshake.c:450:client_set_lk_version_cbk] 0-firewall-scripts-client-0: Server lk version = 1

So how do I make glusterfs survive a node failure, which is the whole point of all this?

thanks - Greg Scott
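[A note on reading that log: "no subvolumes up" from the replicate translator means the client on fw2 has lost contact with both bricks, not just the one on the offline node. A minimal sketch of what to check on the surviving node while the other one is down:]

# On the surviving node, while the other node is offline
gluster peer status                              # glusterd's view of the other peer
tail -f /var/log/glusterfs/firewall-scripts.log  # "no subvolumes up" = client sees
                                                 # neither brick, local one included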
Greg Scott
2013-Jul-09 17:36 UTC
[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore
No takers? I am running gluster 3.4beta3 that came with Fedora 19. Is my issue a consequence of some kind of quorum split-brain thing?

thanks - Greg Scott
raghav
2013-Jul-10 10:17 UTC
[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore
On 07/09/2013 06:47 AM, Greg Scott wrote:
> I don't get this. I have a replicated volume and 2 nodes. My challenge is, when I take one node offline, the other node can no longer access the volume until both nodes are back online again.
> [...]
> So how do I make glusterfs survive a node failure, which is the whole point of all this?

It looks like the brick processes on the fw2 machine are not running, and hence when fw1 is down the entire replication process is stalled. Can you do a ps and get the status of all the gluster processes, and ensure that the brick process is up on fw2?

Regards
Raghav
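[A concrete way to run the check Raghav asks for on fw2, as a sketch; `gluster volume start ... force` is one way to respawn a dead brick process without disturbing bricks that are already up:]

# On fw2: confirm both the management daemon and the brick process are running
ps -ef | grep -E '[g]lusterd|[g]lusterfsd'
gluster volume status firewall-scripts
# If the 192.168.253.2:/gluster-fw2 brick is reported offline, try restarting it
gluster volume start firewall-scripts force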
Greg Scott
2013-Jul-15 20:19 UTC
[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore
Woops, didn't copy the list on this one.

*****

I have SElinux set to permissive mode so those SELinux warnings should not be important. If they were real, I would also have trouble mounting by hand, right?

- Greg

-----Original Message-----
From: Joe Julian [mailto:joe@julianfamily.org]
Sent: Monday, July 15, 2013 2:37 PM
To: Greg Scott
Subject: Re: [Gluster-users] One node goes offline, the other node can't see the replicated volume anymore

It's a known selinux bug: https://bugzilla.redhat.com/show_bug.cgi?id=984465

Either add your own via audit2allow or wait for a fix. (I'd do the former).

On 07/15/2013 12:28 PM, Greg Scott wrote:
> Maybe I am dealing with a systemd timing glitch because I can do my mount by hand on both nodes.
>
> I do
>
> ls /firewall-scripts, confirm it's empty, then
>
> mount -av, and then another
>
> ls /firewall-scripts and now my files show up. Both nodes behave identically.
>
> [root@chicago-fw2 rc.d]# nano /var/log/messages
> [root@chicago-fw2 rc.d]# ls /firewall-scripts
> [root@chicago-fw2 rc.d]# mount -av
> / : ignored
> /boot : already mounted
> /boot/efi : already mounted
> /gluster-fw2 : already mounted
> swap : ignored
> extra arguments at end (ignored)
> /firewall-scripts : successfully mounted
> [root@chicago-fw2 rc.d]# ls /firewall-scripts
> allow-all           failover-monitor.sh  lost+found       route-monitor.sh
> allow-all-with-nat  fwdate.txt           rc.firewall      start-failover-monitor.sh
> etc                 initial_rc.firewall  rcfirewall.conf  var
> [root@chicago-fw2 rc.d]#
>
> - Greg
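[For anyone who would rather follow Joe's audit2allow suggestion than wait for the fix in bug 984465, a minimal sketch; the module name "glusterlocal" is arbitrary, and audit2allow ships in the policycoreutils-python package:]

# Build a local policy module from the AVC denials logged for gluster, then load it;
# repeat after retesting until no new denials appear in the audit log.
grep gluster /var/log/audit/audit.log | audit2allow -M glusterlocal
semodule -i glusterlocal.pp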
Greg Scott
2013-Jul-15 20:29 UTC
[Gluster-users] One node goes offline, the other node can't see the replicated volume anymore
Re: Joe
> I see the glusterfsd.service, but not the glusterd.service. Try:
>
> systemctl disable glusterfsd.service
> systemctl enable glusterd.service

Tried this on both nodes and rebooted. Life in the Twilight Zone. First fw1 immediately after logging back in:

[root@chicago-fw1 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root           14G  3.8G  8.7G  31% /
devtmpfs                         990M     0  990M   0% /dev
tmpfs                            996M     0  996M   0% /dev/shm
tmpfs                            996M  892K  996M   1% /run
tmpfs                            996M     0  996M   0% /sys/fs/cgroup
tmpfs                            996M     0  996M   0% /tmp
/dev/sda2                        477M   87M  365M  20% /boot
/dev/sda1                        200M  9.4M  191M   5% /boot/efi
/dev/mapper/fedora-gluster--fw1  7.9G   33M  7.8G   1% /gluster-fw1
192.168.253.1:/firewall-scripts  7.6G   19M  7.2G   1% /firewall-scripts
[root@chicago-fw1 ~]#
[root@chicago-fw1 ~]# ls /firewall-scripts
allow-all           failover-monitor.sh  lost+found       route-monitor.sh
allow-all-with-nat  fwdate.txt           rc.firewall      start-failover-monitor.sh
etc                 initial_rc.firewall  rcfirewall.conf  var
[root@chicago-fw1 ~]#

But it's not mounted on fw2.

[root@chicago-fw2 rc.d]# reboot
login as: root
root@10.10.10.72's password:
Last login: Mon Jul 15 13:53:40 2013 from tinahp100b.infrasupport.local
[root@chicago-fw2 ~]# df -h
Filesystem                       Size  Used Avail Use% Mounted on
/dev/mapper/fedora-root           14G  4.1G  8.4G  33% /
devtmpfs                         990M     0  990M   0% /dev
tmpfs                            996M     0  996M   0% /dev/shm
tmpfs                            996M  892K  996M   1% /run
tmpfs                            996M     0  996M   0% /sys/fs/cgroup
tmpfs                            996M     0  996M   0% /tmp
/dev/sda2                        477M   90M  362M  20% /boot
/dev/sda1                        200M  9.4M  191M   5% /boot/efi
/dev/mapper/fedora-gluster--fw2  7.6G   19M  7.2G   1% /gluster-fw2
[root@chicago-fw2 ~]#

Here is an extract from /var/log/messages on fw2.

. . .
Jul 15 15:18:26 chicago-fw2 audispd: queue is full - dropping event
Jul 15 15:18:26 chicago-fw2 audispd: queue is full - dropping event
Jul 15 15:18:28 chicago-fw2 systemd[1]: Started GlusterFS an clustered file-system server.
Jul 15 15:18:28 chicago-fw2 systemd[1]: Starting GlusterFS an clustered file-system server...
Jul 15 15:18:28 chicago-fw2 glusterfsd[1220]: [2013-07-15 20:18:28.304028] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
Jul 15 15:18:28 chicago-fw2 glusterfsd[1220]: USAGE: /usr/sbin/glusterfsd [options] [mountpoint]
Jul 15 15:18:28 chicago-fw2 GlusterFS[1220]: [2013-07-15 20:18:28.304028] C [glusterfsd.c:1374:parse_cmdline] 0-glusterfs: ERROR: parsing the volfile failed (No such file or directory)
Jul 15 15:18:28 chicago-fw2 systemd[1]: glusterfsd.service: control process exited, code=exited status=255
Jul 15 15:18:28 chicago-fw2 systemd[1]: Failed to start GlusterFS an clustered file-system server.
Jul 15 15:18:28 chicago-fw2 systemd[1]: Unit glusterfsd.service entered failed state.
Jul 15 15:18:28 chicago-fw2 mount[997]: Mount failed. Please check the log file for more details.
Jul 15 15:18:28 chicago-fw2 rpc.statd[1258]: Version 1.2.7 starting
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: Mount failed. Please check the log file for more details.
Jul 15 15:18:28 chicago-fw2 systemd[1]: firewall\x2dscripts.mount mount process exited, code=exited status=1
Jul 15 15:18:28 chicago-fw2 systemd[1]: Unit firewall\x2dscripts.mount entered failed state.
Jul 15 15:18:28 chicago-fw2 sm-notify[1259]: Version 1.2.7 starting
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: / : ignored
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /boot : already mounted
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /boot/efi : already mounted
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /gluster-fw2 : already mounted
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: swap : ignored
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /firewall-scripts : successfully mounted
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: Mounted after mount -av
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: Filesystem Size Used Avail Use% Mounted on
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /dev/mapper/fedora-root 14G 4.1G 8.4G 33% /
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: devtmpfs 990M 0 990M 0% /dev
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: tmpfs 996M 0 996M 0% /dev/shm
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: tmpfs 996M 880K 996M 1% /run
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: tmpfs 996M 0 996M 0% /sys/fs/cgroup
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: tmpfs 996M 4.0K 996M 1% /tmp
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /dev/sda2 477M 90M 362M 20% /boot
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /dev/sda1 200M 9.4M 191M 5% /boot/efi
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: /dev/mapper/fedora-gluster--fw2 7.6G 19M 7.2G 1% /gluster-fw2
Jul 15 15:18:28 chicago-fw2 rc.local[1001]: Starting up firewall common items
Jul 15 15:18:28 chicago-fw2 systemd[1]: Started /etc/rc.d/rc.local Compatibility.
Jul 15 15:18:28 chicago-fw2 systemd[1]: Starting Terminate Plymouth Boot Screen...
Jul 15 15:18:28 chicago-fw2 systemd[1]: Starting Wait for Plymouth Boot Screen to Quit...
Jul 15 15:18:28 chicago-fw2 systemd[1]: Started Terminate Plymouth Boot Screen.
Jul 15 15:18:28 chicago-fw2 systemd[1]: Started Wait for Plymouth Boot Screen to Quit.
. . .

And the extract from /var/log/messages from fw1

. . .
Jul 15 15:18:07 chicago-fw1 systemd[1]: Starting OpenSSH server daemon...
Jul 15 15:18:07 chicago-fw1 systemd[1]: Starting /etc/rc.d/rc.local Compatibility...
Jul 15 15:18:07 chicago-fw1 systemd[1]: Started Vsftpd ftp daemon.
Jul 15 15:18:07 chicago-fw1 systemd[1]: Started RPC bind service.
Jul 15 15:18:07 chicago-fw1 systemd[1]: Starting GlusterFS an clustered file-system server...
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: Making sure the Gluster stuff is mounted
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: Mounted before mount -av
Jul 15 15:18:07 chicago-fw1 systemd[1]: Started OpenSSH server daemon.
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: Filesystem Size Used Avail Use% Mounted on
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: /dev/mapper/fedora-root 14G 3.8G 8.7G 31% /
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: devtmpfs 990M 0 990M 0% /dev
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: tmpfs 996M 0 996M 0% /dev/shm
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: tmpfs 996M 2.1M 994M 1% /run
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: tmpfs 996M 0 996M 0% /sys/fs/cgroup
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: tmpfs 996M 0 996M 0% /tmp
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: /dev/sda2 477M 87M 365M 20% /boot
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: /dev/sda1 200M 9.4M 191M 5% /boot/efi
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: /dev/mapper/fedora-gluster--fw1 7.9G 33M 7.8G 1% /gluster-fw1
Jul 15 15:18:07 chicago-fw1 rc.local[1006]: extra arguments at end (ignored)
Jul 15 15:18:07 chicago-fw1 dbus-daemon[457]: dbus[457]: [system] Activating service name='org.fedoraproject.Setroubleshootd' (using servicehelper)
Jul 15 15:18:07 chicago-fw1 dbus[457]: [system] Activating service name='org.fedoraproject.Setroubleshootd' (using servicehelper)
Jul 15 15:18:07 chicago-fw1 kernel: [   24.022605] fuse init (API version 7.21)
Jul 15 15:18:07 chicago-fw1 systemd[1]: Mounted /firewall-scripts.
Jul 15 15:18:07 chicago-fw1 systemd[1]: Starting Remote File Systems.
Jul 15 15:18:07 chicago-fw1 systemd[1]: Reached target Remote File Systems.
Jul 15 15:18:07 chicago-fw1 systemd[1]: Starting Trigger Flushing of Journal to Persistent Storage...
Jul 15 15:18:07 chicago-fw1 systemd[1]: Mounting FUSE Control File System...
Jul 15 15:18:07 chicago-fw1 systemd[1]: Mounted FUSE Control File System.
Jul 15 15:18:09 chicago-fw1 systemd[1]: Started Trigger Flushing of Journal to Persistent Storage.
Jul 15 15:18:09 chicago-fw1 systemd[1]: Starting Permit User Sessions...
Jul 15 15:18:09 chicago-fw1 systemd[1]: Started Permit User Sessions.
Jul 15 15:18:09 chicago-fw1 systemd[1]: Starting Command Scheduler...
Jul 15 15:18:09 chicago-fw1 systemd[1]: Started Command Scheduler.
Jul 15 15:18:09 chicago-fw1 systemd[1]: Starting Job spooling tools...
Jul 15 15:18:09 chicago-fw1 systemd[1]: Started Job spooling tools.
. . .
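[The fw1 and fw2 logs above show the rc.local workaround already in use on these boxes: print df, run mount -av, print df again. A slightly more defensive version of that idea, as a sketch only and assuming the fstab entries shown earlier in the thread, would retry until the gluster mount actually appears:]

#!/bin/sh
# Appended to /etc/rc.d/rc.local (sketch): the fstab _netdev mount can race
# glusterd at boot, so retry a few times until /firewall-scripts is mounted.
echo "Making sure the Gluster stuff is mounted"
for attempt in 1 2 3 4 5; do
    if mountpoint -q /firewall-scripts; then
        break
    fi
    sleep 5
    mount -av
done
df -h /firewall-scripts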