Displaying 17 results from an estimated 17 matches for "backupvolfil".
2015 May 16
4
fault tolerance
Hi people,
Now I am using gluster version 3.6.2 and I want to configure the system for fault tolerance. The point is that I want two servers in replication mode, so that if one server goes down the client does not notice the failure. How do I need to mount the volume on the client for this purpose?
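A replica-2 volume is usually mounted with the native FUSE client, naming the second server as a fallback source for the volume file; a minimal sketch, with placeholder hostnames and volume name (not from the original post):

```shell
# Mount the replicated volume via the native FUSE client.
# If server1 is unreachable at mount time, the client fetches
# the volume file from server2 instead. After mounting, the
# client talks to all bricks directly, so the replication
# translator handles a single node failure transparently.
mount -t glusterfs -o backupvolfile-server=server2 server1:/myvol /mnt/myvol
```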
2019 Dec 20
1
GFS performance under heavy traffic
Hi David,
Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases).
That way, when the primary is lost, your client can reach a backup one without disruption.
P.S.: Client may 'hang' - if the primary server...
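A permanent mount with several backup volfile servers might look like this in /etc/fstab (server and volume names are placeholders):

```shell
# /etc/fstab: colon-separated backup volfile servers are tried in
# order if server1 cannot serve the volume file at mount time.
server1:/myvol  /mnt/myvol  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0
```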
2013 Dec 09
1
[CentOS 6] Upgrade to the glusterfs version in base or in glusterfs-epel
Hi,
I'm using glusterfs version 3.4.0 from gluster-epel[1].
Recently, I find out that there's a glusterfs version in base repo
(3.4.0.36rhs).
So, is it recommended to use that version instead of the gluster-epel version?
If yes, is there a guide to make the switch with no downtime?
When I ran yum update glusterfs, I got the following error[2].
I found a guide[3]:
> If you have replicated or
2017 Jul 11
1
Replica 3 with arbiter - heal error?
...a 3 arbiter volume mounted and run
there a following script:
while true; do echo "$(date)" >> a.txt; sleep 2; done
After a few seconds I add a rule to the firewall on the client that
blocks access to the node specified during mount, e.g. if the volume is mounted
with:
mount -t glusterfs -o backupvolfile-server=10.0.0.2 10.0.0.1:/vol /mnt/vol
I add:
iptables -A OUTPUT -d 10.0.0.1 -j REJECT
This causes the script above to block for approximately 40 seconds,
until the gluster client tries the backupvolfile-server (can this timeout be
changed?), and then everything continues as expected.
Heal info shows that th...
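The ~40 second stall likely corresponds to network.ping-timeout, whose default is 42 seconds; it can be tuned per volume (the value below is only illustrative, and very low values risk spurious disconnects):

```shell
# Reduce how long clients wait before declaring an unresponsive
# brick dead (default: 42 seconds).
gluster volume set vol network.ping-timeout 10
```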
2019 Dec 24
1
GFS performance under heavy traffic
...ess to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards the arbiter.
> Presumably we shouldn't have an arbiter node listed under backupvolfile-server when mounting the filesystem? Since it doesn't store all the data surely it can't be used to serve the data.
I have my arbiter defined as last backup and no issues so far. At least the admin can easily identify the bricks from the mount options.
> We did have direct-io-mode=dis...
2013 Nov 01
1
Gluster "Cheat Sheet"
...- deleting xattr
showing a file's relationship with the bricks and translators
Disaster recovery
establish georep
check status - list of states - Initializing || Stable || Failed
Client Side Help
Windows drive map example (net use)
NFS mount example for fstab
glusterfs native mount (with backupvolfile-server syntax)
Common tuning options Table
network.ping-timeout
server.root-squash
cluster.server-quorum-type
cluster.server-quorum-ratio
cluster.min-free-disk
File locations
log files
config files (vol file, hooks)
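For the "glusterfs native mount (with backupvolfile-server syntax)" item above, a minimal sketch (hostnames and volume name are placeholders):

```shell
# Native FUSE mount with a fallback volfile server, fstab form:
# server1:/vol  /mnt/vol  glusterfs  defaults,_netdev,backupvolfile-server=server2  0 0
# Command-line form:
mount -t glusterfs -o backupvolfile-server=server2 server1:/vol /mnt/vol
```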
2017 Sep 09
0
GlusterFS as virtual machine storage
...egards
> to the crashing/read-only behaviour?
I switched to FUSE now and the VM crashed (read-only remount)
immediately after one node started rebooting.
I tried to mount.glusterfs same volume on different server (not VM),
running Ubuntu Xenial and gluster client 3.10.5.
mount -t glusterfs -o backupvolfile-server=10.0.1.202
10.0.1.201:/gv_openstack_1 /mnt/gv_openstack_1/
I ran fio job I described earlier. As soon as I killall glusterfsd,
fio reported:
fio: io_u error on file /mnt/gv_openstack_1/fio.data: Transport
endpoint is not connected: read offset=7022575616, buflen=262144
fio: pid=7205, err=...
2019 Dec 28
1
GFS performance under heavy traffic
...difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
>>> It makes sense, as no data is being generated towards the arbiter.
>>> > Presumably we shouldn't have an arbiter node listed under backupvolfile-server when mounting the filesystem? Since it doesn't store all the data surely it can't be used to serve the data.
>>>
>>> I have my arbiter defined as last backup and no issues so far. At least the admin can easily identify the bricks from the mount options.
>>>...
2017 Aug 12
0
preferred replica?
...et they're in) and reject at TCP
level any connection from the remote subnet. As to why this is useful: when
the remote cluster/building/availability zone fails, all its clients are
considered unavailable, forcefully powered off if needed, and services
restarted in the active zone. Mount option backupvolfile-server would
ensure clients do reach a working replica on startup. The decision as to
which zone is the active one is done outside of Gluster, although the
2*replica+arbiter option looks helpful too.
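The TCP-level rejection described above could be done with something like this (the subnet is illustrative, not from the original post):

```shell
# Fence off clients from the failed zone: reject any TCP connection
# from the remote subnet with an immediate reset, so they fail fast
# instead of hanging until timeout.
iptables -A INPUT -s 10.1.0.0/16 -p tcp -j REJECT --reject-with tcp-reset
```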
2018 Jan 07
0
performance.readdir-ahead on volume folders not showing with ls command 3.13.1-1.el7
With
performance.readdir-ahead on
on the volume, folders on mounts become invisible to the ls command, but it
shows files fine.
Folders show fine with ls on the bricks.
What am I missing? Maybe some settings are incompatible;
I guess over-tuning happened.
vm1:/t1 /home/t1 glusterfs
defaults,_netdev,backupvolfile-server=vm2,attribute-timeout=600,entry-timeout=600,negative-timeout=600,fopen-keep-cache,use-readdirp=no,fetch-attempts=5
0 0
glusterfs.x86_64 3.13.1-1.el7 installed
glusterfs-api.x86_64 3.13.1-1.el7 installed
glusterfs-cli.x...
2017 Sep 09
3
GlusterFS as virtual machine storage
Pavel.
Is there a difference between native client (fuse) and libgfapi in
regards to the crashing/read-only behaviour?
We use Rep2 + Arb and can shutdown a node cleanly, without issue on our
VMs. We do it all the time for upgrades and maintenance.
However we are still on native client as we haven't had time to work on
libgfapi yet. Maybe that is more tolerant.
We have linux VMs mostly
2019 Dec 27
0
GFS performance under heavy traffic
...made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
>> It makes sense, as no data is being generated towards the arbiter.
>> > Presumably we shouldn't have an arbiter node listed under backupvolfile-server when mounting the filesystem? Since it doesn't store all the data surely it can't be used to serve the data.
>>
>> I have my arbiter defined as last backup and no issues so far. At least the admin can easily identify the bricks from the mount options.
>>
>> &...
2018 May 04
0
Crashing applications, RDMA_ERROR in logs
...that were bigger than
RDMA_INLINE_THRESHOLD (2048)
At the same time on gluster nodes in brick logs:
[2018-05-04 10:00:43.468470] W [MSGID: 103027]
[rdma.c:2498:__gf_rdma_send_reply_type_nomsg] 0-rpc-transport/rdma:
encoding write chunks failed
The gluster volume is mounted with options
"backupvolfile-server=cn03-ib,transport=rdma,log-level=WARNING"
The same applications run perfectly on non-Gluster filesystems. Could you please
help to debug and fix this?
# gluster volume status gv0
Status of volume: gv0
Gluster process TCP Port RDMA Port Online
Pid
---------...
2017 Sep 09
2
GlusterFS as virtual machine storage
...behaviour?
>
> I switched to FUSE now and the VM crashed (read-only remount)
> immediately after one node started rebooting.
>
> I tried to mount.glusterfs same volume on different server (not VM),
> running Ubuntu Xenial and gluster client 3.10.5.
>
> mount -t glusterfs -o backupvolfile-server=10.0.1.202
> 10.0.1.201:/gv_openstack_1 /mnt/gv_openstack_1/
>
> I ran fio job I described earlier. As soon as I killall glusterfsd,
> fio reported:
>
> fio: io_u error on file /mnt/gv_openstack_1/fio.data: Transport
> endpoint is not connected: read offset=7022575616,...
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
...oad the systemd daemon like:
systemctl enable glusterfsmounts.service
systemctl daemon-reload
Also, I am using /etc/fstab to mount the glusterfs mount point properly,
since the Proxmox GUI seems to me a little broken in this regard.
gluster1:VMS1 /vms1 glusterfs
defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Wed, Jun 7, 2023 at 01:51, Strahil Nikolov <hunter86_bg at yahoo.com>
wrote:
> Hi Chris,
>
> here is a link to the settings needed for VM storage:
> https://github.com/glust...
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below and first test look good.
Before changing the settings I was able to crash a new installed VM every
time after a fresh installation by producing much i/o, e.g. when installing
Libre Office. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
Best Regards,
Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,