search for: xlators

Displaying 20 results from an estimated 431 matches for "xlators".

2008 Oct 31
3
Problem with xlator
Hi, I have the following scenario: SERVER SIDE (64-bit architecture): two storage machines with: HARDWARE: DELL PE2900 III, Intel Quad Core Xeon E5420 2.5 GHz, 2x6 MB cache, 1333 MHz FSB, RAM 4 GB FB 667 MHz (2x2 GB), 8 HDD 1 TB,
2017 Nov 07
2
Enabling Halo sets volume RO
...on /mnt/vol type fuse.glusterfs (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072) Thanks in advance, -Jon Setup info: CentOS Linux release 7.4.1708 (Core), 4 GCE Instances (2 US, 2 Asia), 1 10gb Brick/Instance, replica 4 volume. Packages: glusterfs-client-xlators-3.12.1-2.el7.x86_64 glusterfs-cli-3.12.1-2.el7.x86_64 python2-gluster-3.12.1-2.el7.x86_64 glusterfs-3.12.1-2.el7.x86_64 glusterfs-api-3.12.1-2.el7.x86_64 glusterfs-fuse-3.12.1-2.el7.x86_64 glusterfs-server-3.12.1-2.el7.x86_64 glusterfs-libs-3.12.1-2.el7.x86_64 glusterfs-geo-replication-3.12...
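For reference, halo replication in this release is driven by volume options along the lines below; the option names are from memory and the volume name is a placeholder, so treat this as a sketch to verify against "gluster volume set help" rather than the poster's exact configuration. A too-strict halo-min-replicas combined with quorum settings is one way a halo volume can end up effectively read-only.

  # enable halo replication on an existing replica 4 volume (volume name "vol" is hypothetical)
  gluster volume set vol cluster.halo-enabled yes
  # maximum network latency (ms) for a brick to participate in the halo
  gluster volume set vol cluster.halo-max-latency 10
  # writes are refused when fewer than this many replicas are reachable,
  # which can surface as a read-only mount; check quorum settings as well
  gluster volume set vol cluster.halo-min-replicas 2
  gluster volume get vol cluster.quorum-type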
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three-node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, I think after a lot of I/O inside a VM, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly everything is OK, I've
2017 Jun 29
1
issue with trash feature and arbiter volumes
Gluster 3.10.2. I have a replica 3 (2+1) volume and I have just seen both data bricks go down (the arbiter stayed up). I had to disable the trash feature to get the bricks to start. I had a quick look on bugzilla but did not see anything that looked similar. I just wanted to check that I was not hitting some known issue and/or doing something stupid before I open a bug. This is from the brick log:
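For context, the trash translator is toggled per volume; a minimal sketch of turning it off and back on (the volume name is a placeholder, not taken from the original mail):

  # turn the trash feature off so the bricks can start
  gluster volume set myvol features.trash off
  # later, re-enable it and cap the size of files it will keep
  gluster volume set myvol features.trash on
  gluster volume set myvol features.trash-max-filesize 500MB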
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
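The linked file ships with glusterfs as the "virt" option group, so it can usually be applied in one step rather than option by option; a minimal sketch, assuming a volume named gv0:

  # apply the packaged virt option group (extras/group-virt.example) in one go
  gluster volume set gv0 group virt
  # review what was applied, e.g. sharding and direct-io related options
  gluster volume get gv0 all | grep -E 'shard|remote-dio|quick-read|eager-lock'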
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: While I don't know the issue or the root cause of your problem using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; part of that is that most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems'. -wk On 6/1/23 8:42 AM, Christian Schoepplein wrote: > Hi, > > we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody, regarding the issue with the mount, I usually use this systemd service to bring up the mount points: /etc/systemd/system/glusterfsmounts.service

[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
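The snippet is cut off after ExecStartPre; a minimal sketch of how such a unit is commonly completed (the mount-everything-from-fstab approach and the [Install] target are assumptions, not taken from the original mail):

[Service]
Type=simple
RemainAfterExit=true
# wait until glusterd actually answers before trying to mount
ExecStartPre=/usr/sbin/gluster volume list
# mount every glusterfs entry from /etc/fstab (assumes such entries exist, e.g. with _netdev)
ExecStart=/bin/mount -a -t glusterfs
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target

Once the fstab entries are in place the unit would be activated with "systemctl enable --now glusterfsmounts.service".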
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Red Hat Virtualization, the following Gluster volume settings are recommended (preferably applied at creation of the volume). These settings are important for data reliability (note that replica 3 or replica 2+1 is expected):

performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
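From the CLI these map one-to-one onto volume set commands; a minimal sketch assuming a volume named data (only the options quoted above are shown, the full recommended list is longer):

  gluster volume set data performance.quick-read off
  gluster volume set data performance.read-ahead off
  gluster volume set data performance.io-cache off
  gluster volume set data performance.low-prio-threads 32
  gluster volume set data network.remote-dio enable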
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options: performance.write-behind performance.flush-behind --- Gilberto Nunes Ferreira (47) 99676-7530 - WhatsApp / Telegram On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <guillaume.pavese at interactiv-group.com> wrote: > On oVirt / Redhat Virtualization, > the following Gluster volumes settings are recommended to be applied > (preferably at
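As CLI commands that suggestion looks roughly like this (the volume name is a placeholder; both options are normally on, so setting them off disables write-behind caching at the cost of some write performance):

  gluster volume set myvol performance.write-behind off
  gluster volume set myvol performance.flush-behind off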
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Hi list, recently I've noticed a strange behaviour of my gluster storage: sometimes while executing a simple command like "gluster volume status vm-images-repo" I get "Another transaction is in progress for vm-images-repo. Please try again after sometime." as a response. This situation does not resolve itself simply by waiting; I have to restart glusterd on the node that
2017 Nov 08
0
Enabling Halo sets volume RO
...up_id=0,default_permissions,allow_other,max_read=131072) > > > Thanks in advance, > -Jon > > > Setup info > CentOS Linux release 7.4.1708 (Core) > 4 GCE Instances (2 US, 2 Asia) > 1 10gb Brick/Instance > replica 4 volume > > Packages: > > glusterfs-client-xlators-3.12.1-2.el7.x86_64 > glusterfs-cli-3.12.1-2.el7.x86_64 > python2-gluster-3.12.1-2.el7.x86_64 > glusterfs-3.12.1-2.el7.x86_64 > glusterfs-api-3.12.1-2.el7.x86_64 > glusterfs-fuse-3.12.1-2.el7.x86_64 > glusterfs-server-3.12.1-2.el7.x86_64 > glusterfs-libs-3.12.1-2.el7.x86_64 >...
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all, thanks a lot for all your answers. At first I changed both settings mentioned below and the first tests look good. Before changing the settings I was able to crash a newly installed VM every time after a fresh installation by producing a lot of I/O, e.g. when installing LibreOffice. This always resulted in corrupt files inside the VM, but researching the qcow2 file with the
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Attached are the requested logs for all three nodes. Thanks, Paolo On 20/07/2017 11:38, Atin Mukherjee wrote: > Please share the cmd_history.log file from all the storage nodes. > > On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > > Hi list, > > recently I've
2010 May 11
1
Problems with gluster and autofs
There appears to be a race condition or a cycle with autofs and gluster 3.0.4. When gluster tries to stat the mount point in fuse-bridge.c, it hangs. When I comment out the code at lines 3389-3415 it hangs on the call to mount() in fuse-lib/mount.c:538. This is true whether or not --ghost is specified. Has this problem been resolved? Is there a patch somewhere? --- gdb output after
2017 Jun 20
2
trash can feature, crashed???
All, I currently have 2 bricks running Gluster 3.10.1. This is a CentOS installation. On Friday last week, I enabled the trashcan feature on one of my volumes: gluster volume set date01 features.trash on I also limited the max file size to 500MB: gluster volume set data01 features.trash-max-filesize 500MB 3 hours after I enabled this, this specific gluster volume went down: [2017-06-16
2017 Jul 20
2
glusterd-locks.c:572:glusterd_mgmt_v3_lock
OK, on my nagios instance I've disabled the gluster status check on all nodes except one; I'll check if this is enough. Thanks, Paolo On 20/07/2017 13:50, Atin Mukherjee wrote: > So from the cmd_history.log files across all the nodes it's evident that > multiple commands on the same volume are run simultaneously, which can > result in transaction collisions and you can
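One quick way to confirm that overlapping status calls are the cause is to compare the glusterd command history across the nodes; a minimal sketch (the log path is the default glusterd location, the hostnames are placeholders):

  # overlapping "volume status" timestamps across nodes point to colliding
  # management transactions, i.e. the mgmt_v3 lock contention seen above
  for h in node1 node2 node3; do
      echo "== $h =="
      ssh "$h" "grep 'volume status' /var/log/glusterfs/cmd_history.log | tail -n 20"
  done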
2017 Jul 20
0
glusterd-locks.c:572:glusterd_mgmt_v3_lock
Please share the cmd_history.log file from all the storage nodes. On Thu, Jul 20, 2017 at 2:34 PM, Paolo Margara <paolo.margara at polito.it> wrote: > Hi list, > > recently I've noted a strange behaviour of my gluster storage, sometimes > while executing a simple command like "gluster volume status > vm-images-repo" as a response I got "Another transaction
2017 Sep 05
0
Glusterd proccess hangs on reboot
Some corrections to the previous mails. The problem does not happen when no volumes are created. It happens when volumes are created but in the stopped state, and also when volumes are in the started state. Below are the 5 stack traces taken at 10 minute intervals with the volumes in the stopped state. --1-- Thread 8 (Thread 0x7f413f3a7700 (LWP 104249)): #0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0 #1
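Traces like these are typically gathered by dumping the glusterd threads on an interval; a minimal sketch using gstack from the gdb package (the interval, count and output path are assumptions):

  # capture the glusterd thread stacks every 10 minutes, 5 times
  pid=$(pidof glusterd)
  for i in 1 2 3 4 5; do
      gstack "$pid" > "/tmp/glusterd-stack-$i.txt"
      sleep 600
  done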
2017 Sep 05
1
Glusterd proccess hangs on reboot
On Tue, Sep 5, 2017 at 6:13 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Some corrections to the previous mails. The problem does not happen > when no volumes are created. It happens when volumes are created but in the > stopped state, and also when volumes are in the started state. > Below are the 5 stack traces taken at 10 minute intervals with the volumes in the stopped > state. > As