Displaying 20 results from an estimated 26 matches for "shared_storage".
2017 Nov 13
1
Shared storage showing 100% used
Hello list,
I recently enabled shared storage on a working cluster with nfs-ganesha
and am just storing my ganesha.conf file there so that all 4 nodes can
access it (baby steps). It was all working great for a couple of weeks
until I was alerted that /run/gluster/shared_storage was full, see
below. There was no warning; it went from fine to critical overnight.
Filesystem                          Size  Used Avail Use% Mounted on
/dev/md125                           50G  102M   47G   1% /
devtmpfs                             32G     0   32G   0% /dev
tmpfs                               ...
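A quick way to narrow down what is filling the volume (a sketch; it assumes the default mount point and volume name that cluster.enable-shared-storage creates):

    # client-side view: overall usage vs. what is actually stored there
    df -h /run/gluster/shared_storage
    du -sh /run/gluster/shared_storage/*

    # server-side view: per-brick disk usage for the shared-storage volume
    gluster volume info gluster_shared_storage
    gluster volume status gluster_shared_storage detail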
2017 Jun 05
2
Gluster and NFS-Ganesha - cluster is down after reboot
...skills are better.
>
> Thank you.
>
> Kind regards.
>
> Adam
>
> ----------
>
> sudo sh -c 'cat > /root/gluster-run-ganesha << EOF
> #!/bin/bash
>
> while true; do
> echo "Wait"
> sleep 30
> if [[ -f /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf ]]; then
> echo "Start Ganesha"
> systemctl start nfs-ganesha.service
> exit \$?
> else
> echo "Not mounted"
> fi
> done
> EOF'
>
> sudo chmod +x /root/gluster-run-ganesha...
2017 Jun 05
0
Gluster and NFS-Ganesha - cluster is down after reboot
...Kind regards.
>>
>> Adam
>>
>> ----------
>>
>> sudo sh -c 'cat > /root/gluster-run-ganesha << EOF
>> #!/bin/bash
>>
>> while true; do
>> echo "Wait"
>> sleep 30
>> if [[ -f /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf ]]; then
>> echo "Start Ganesha"
>> systemctl start nfs-ganesha.service
>> exit \$?
>> else
>> echo "Not mounted"
>> fi
>> done
>> EOF'
>>
>> s...
2017 Jun 06
1
Gluster and NFS-Ganesha - cluster is down after reboot
...d it.
In the meantime I wrote something very simple, but I assume your scripting
skills are better.
Thank you.
Kind regards.
Adam
----------
sudo sh -c 'cat > /root/gluster-run-ganesha << EOF
#!/bin/bash
while true; do
echo "Wait"
sleep 30
if [[ -f /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf ]]; then
echo "Start Ganesha"
systemctl start nfs-ganesha.service
exit \$?
else
echo "Not mounted"
fi
done
EOF'
sudo chmod +x /root/gluster-run-ganesha
sudo sh -c 'cat > /etc/systemd/system/custom-gluster-ganesha.service <<...
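The unit file itself is cut off in the excerpt above; a minimal sketch of what such a wrapper unit could contain (the unit and script names come from the post, everything else is an assumption):

    # /etc/systemd/system/custom-gluster-ganesha.service (sketch)
    [Unit]
    Description=Start nfs-ganesha once the Gluster shared storage is available
    After=network-online.target glusterd.service

    [Service]
    Type=oneshot
    # the polling script created above
    ExecStart=/root/gluster-run-ganesha

    [Install]
    WantedBy=multi-user.target

followed by "systemctl daemon-reload" and "systemctl enable custom-gluster-ganesha.service".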
2019 Oct 01
3
CTDB and nfs-ganesha
...dbd, read by ctdbd_wrapper(1)
#
# See ctdbd.conf(5) for more information about CTDB configuration variables.
# Shared recovery lock file to avoid split brain. No default.
#
# Do NOT run CTDB without a recovery lock file unless you know exactly
# what you are doing.
CTDB_RECOVERY_LOCK=/run/gluster/shared_storage/.CTDB-lockfile
# List of nodes in the cluster. Default is below.
CTDB_NODES=/etc/ctdb/nodes
# List of public addresses for providing NAS services. No default.
CTDB_PUBLIC_ADDRESSES=/etc/ctdb/public_addresses
# What services should CTDB manage? Default is none.
# CTDB_MANAGES_SAMBA=yes
# CTDB_...
2017 Dec 02
2
gluster and nfs-ganesha
...b-cca9d77871c5
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode1:/data/brick1/gv0
Brick2: glnode2:/data/brick1/gv0
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
cluster.enable-shared-storage: enable
Volume Name: gluster_shared_storage
Type: Replicate
Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: glnode2:/var/lib/glusterd/ss_brick
Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
Options Reconfigured:
transport.address-family: inet
n...
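For reference, the gluster_shared_storage volume shown above is created automatically when the option is enabled cluster-wide; a short sketch of the relevant commands:

    # creates a replica volume from /var/lib/glusterd/ss_brick on the
    # participating nodes and mounts it at /run/gluster/shared_storage
    gluster volume set all cluster.enable-shared-storage enable

    # verify the mount on every node
    mount | grep shared_storage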
2019 Oct 01
0
CTDB and nfs-ganesha
...> # See ctdbd.conf(5) for more information about CTDB configuration variables.
>
> # Shared recovery lock file to avoid split brain. No default.
> #
> # Do NOT run CTDB without a recovery lock file unless you know exactly
> # what you are doing.
> CTDB_RECOVERY_LOCK=/run/gluster/shared_storage/.CTDB-lockfile
This should be in ctdb.conf:
[cluster]
recovery lock = /run/gluster/shared_storage/.CTDB-lockfile
> # List of nodes in the cluster. Default is below.
> CTDB_NODES=/etc/ctdb/nodes
>
> # List of public addresses for providing NAS services. No default.
> CTDB_PUB...
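Putting that advice together, a minimal modern-style configuration sketch (the lock-file path comes from the thread; everything else is an assumption, and the node list and public addresses normally stay in /etc/ctdb/nodes and /etc/ctdb/public_addresses):

    # /etc/ctdb/ctdb.conf
    [cluster]
        recovery lock = /run/gluster/shared_storage/.CTDB-lockfile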
2019 Oct 02
3
CTDB and nfs-ganesha
...or more information about CTDB configuration variables.
>
> # Shared recovery lock file to avoid split brain. No default.
> #
> # Do NOT run CTDB without a recovery lock file unless you know exactly
> # what you are doing.
> CTDB_RECOVERY_LOCK=/run/gluster/shared_storage/.CTDB-lockfile
This should be in ctdb.conf:
[cluster]
recovery lock = /run/gluster/shared_storage/.CTDB-lockfile
> # List of nodes in the cluster. Default is below.
> CTDB_NODES=/etc/ctdb/nodes
>
> # List of public addresses for providi...
2017 Dec 04
2
gluster and nfs-ganesha
...Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glnode1:/data/brick1/gv0
> Brick2: glnode2:/data/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> cluster.enable-shared-storage: enable
>
> Volume Name: gluster_shared_storage
> Type: Replicate
> Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glnode2:/var/lib/glusterd/ss_brick
> Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
> Opti...
2017 Dec 06
0
gluster and nfs-ganesha
...icks:
>> Brick1: glnode1:/data/brick1/gv0
>> Brick2: glnode2:/data/brick1/gv0
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>> cluster.enable-shared-storage: enable
>>
>> Volume Name: gluster_shared_storage
>> Type: Replicate
>> Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: glnode2:/var/lib/glusterd/ss_bric...
2019 Oct 02
0
CTDB and nfs-ganesha
...t CTDB configuration variables.
>
> # Shared recovery lock file to avoid split brain. No default.
> #
> # Do NOT run CTDB without a recovery lock file unless you know exactly
> # what you are doing.
> CTDB_RECOVERY_LOCK=/run/gluster/shared_storage/.CTDB-lockfile
This should be in ctdb.conf:
[cluster]
recovery lock = /run/gluster/shared_storage/.CTDB-lockfile
> # List of nodes in the cluster. Default is below.
> CTDB_NODES=/etc/ctdb/nodes
>
> # List of...
2019 Oct 02
1
CTDB and nfs-ganesha
...or more information about CTDB configuration variables.
>
> # Shared recovery lock file to avoid split brain. No default.
> #
> # Do NOT run CTDB without a recovery lock file unless you know exactly
> # what you are doing.
> CTDB_RECOVERY_LOCK=/run/gluster/shared_storage/.CTDB-lockfile
This should be in ctdb.conf:
[cluster]
recovery lock = /run/gluster/shared_storage/.CTDB-lockfile
> # List of nodes in the cluster. Default is below.
> CTDB_NODES=/etc/ctdb/nodes
>
> # List of public addresses for providi...
2017 May 01
1
Gluster and NFS-Ganesha - cluster is down after reboot
...few exceptions:
1. All RPMs are from the "centos-gluster310" repo that is installed by "yum -y
install centos-release-gluster".
2. I have three nodes (not four) with a "replica 3" volume.
3. I created an empty ganesha.conf and a non-empty ganesha-ha.conf in
"/var/run/gluster/shared_storage/nfs-ganesha/" (the referenced blog post is
outdated; this is now a requirement).
4. ganesha-ha.conf doesn't have "HA_VOL_SERVER" since this isn't needed
anymore.
When I finish the configuration, all is good. nfs-ganesha.service is active and
running, and from the client I can ping all three...
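A minimal ganesha-ha.conf along those lines (node names and VIPs below are placeholders, not values from the thread):

    # /var/run/gluster/shared_storage/nfs-ganesha/ganesha-ha.conf
    HA_NAME="ganesha-ha-cluster"
    HA_CLUSTER_NODES="node1,node2,node3"
    VIP_node1="192.168.0.201"
    VIP_node2="192.168.0.202"
    VIP_node3="192.168.0.203"

Note the absence of HA_VOL_SERVER, matching point 4 above.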
2017 Dec 04
0
gluster and nfs-ganesha
...Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glnode1:/data/brick1/gv0
> Brick2: glnode2:/data/brick1/gv0
> Options Reconfigured:
> nfs.disable: on
> transport.address-family: inet
> cluster.enable-shared-storage: enable
>
> Volume Name: gluster_shared_storage
> Type: Replicate
> Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 2 = 2
> Transport-type: tcp
> Bricks:
> Brick1: glnode2:/var/lib/glusterd/ss_brick
> Brick2: 192.168.0.95:/var/lib/glusterd/ss_brick
> Opti...
2017 Dec 06
2
gluster and nfs-ganesha
...nsport-type: tcp
>> Bricks:
>> Brick1: glnode1:/data/brick1/gv0
>> Brick2: glnode2:/data/brick1/gv0
>> Options Reconfigured:
>> nfs.disable: on
>> transport.address-family: inet
>> cluster.enable-shared-storage: enable
>>
>> Volume Name: gluster_shared_storage
>> Type: Replicate
>> Volume ID: caf36f36-0364-4ab9-a158-f0d1205898c4
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 2 = 2
>> Transport-type: tcp
>> Bricks:
>> Brick1: glnode2:/var/lib/glusterd/ss_brick
>> Brick2: 192.168.0.95:...
2019 Oct 03
2
CTDB and nfs-ganesha
Hi Max,
On Wed, 2 Oct 2019 15:08:43 +0000, Max DiOrio
<Max.DiOrio at ieeeglobalspec.com> wrote:
> As soon as I made the configuration change and restarted CTDB, it crashed.
>
> Oct 2 11:05:14 hq-6pgluster01 systemd: Started CTDB.
> Oct 2 11:05:21 hq-6pgluster01 systemd: ctdb.service: main process exited, code=exited, status=1/FAILURE
> Oct 2 11:05:21 hq-6pgluster01
2019 Oct 05
2
CTDB and nfs-ganesha
...4]: Recovery has started
> 2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0
> 2019/10/04 09:51:29.175021 ctdbd[17244]: Recovery lock configuration inconsistent: recmaster has NULL, this node has /run/gluster/shared_storage/.CTDB-lockfile, shutting down
> 2019/10/04 09:51:29.175045 ctdbd[17244]: Shutdown sequence commencing.
> 2019/10/04 09:51:29.175056 ctdbd[17244]: Set runstate to SHUTDOWN (6)
Yep. CTDB refuses to work if the recovery lock is configured to be
different on different nodes, since that is an im...
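A quick way to confirm every node agrees on the lock setting (a sketch; onnode ships with CTDB):

    # print the configured recovery lock on each node
    onnode -p all ctdb getreclock

Every node should print the same path; an empty result on any node means that node has no recovery lock configured at all.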
2018 Feb 01
2
Upgrading a ctdb cluster: samba not listening on TCP port 445
...nning
again without any problems, but this is only a temporary solution.
Any help with debugging this problem would be appreciated.
Best regards,
Nicolas
ctdb.conf
# Do NOT run CTDB without a recovery lock file unless you know exactly
# what you are doing.
CTDB_RECOVERY_LOCK=/var/run/gluster/shared_storage/ctdb/lockfile
# List of public addresses for providing NAS services. No default.
CTDB_PUBLIC_ADDRESSES=/var/run/gluster/shared_storage/ctdb/public_addresses
# List of nodes in the cluster.
CTDB_NODES=/var/run/gluster/shared_storage/ctdb/nodes
# What services should CTDB manage? Default is none.
CT...
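Since the port-445 symptom usually means CTDB is no longer managing smbd after the upgrade, one thing worth checking is whether the samba event script is still enabled; a sketch, assuming a recent CTDB release where CTDB_MANAGES_SAMBA no longer exists:

    # enable and list the legacy samba event script
    ctdb event script enable legacy 50.samba
    ctdb event script list legacy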
2019 Oct 04
0
CTDB and nfs-ganesha
...ctdbd[17244]: Recovery has started
2019/10/04 09:51:29.174982 ctdbd[17244]: ../ctdb/server/ctdb_server.c:188 ctdb request 2147483554 of type 8 length 48 from node 1 to 0
2019/10/04 09:51:29.175021 ctdbd[17244]: Recovery lock configuration inconsistent: recmaster has NULL, this node has /run/gluster/shared_storage/.CTDB-lockfile, shutting down
2019/10/04 09:51:29.175045 ctdbd[17244]: Shutdown sequence commencing.
2019/10/04 09:51:29.175056 ctdbd[17244]: Set runstate to SHUTDOWN (6)
I'm attaching the full log from this startup.
The other thing that baffles me is that I have most of the legacy scripts...
2017 Aug 17
1
shared-storage bricks
Hi,
I enabled shared storage on my four-node cluster, but when I look at the volume info, I only have 3 bricks. Is that supposed to be normal?
Thank you