similar to: No subject

Displaying 20 results from an estimated 1000 matches similar to: "No subject"

2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
Hi, I've got this strange problem where a striped endpoint will crash when I try to use cp to copy files off of it but not when I use rsync to copy files off: [user at gluster5 user]$ cp -r Python-2.6.4/ ~/tmp/ cp: reading `Python-2.6.4/Lib/lib2to3/tests/data/fixers/myfixes/__init__.py': Software caused connection abort cp: closing
2013 Jul 02
1
problem expanding a volume
Hello, I am having trouble expanding a volume. Every time I try to add bricks to the volume, I get this error: [root at gluster1 sdb1]# gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1 /export/brick2/sdb1 or a prefix of it is already part of a volume Here is the volume info: [root at gluster1 sdb1]# gluster volume info vg0 Volume Name: vg0 Type:
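
This "already part of a volume" error usually means the target directory (or one of its parents, hence "a prefix of it") still carries Gluster extended attributes from an earlier volume. A minimal diagnostic sketch, run on the server that owns the brick; clearing the attributes assumes the directory holds no data you need, and is not necessarily how this thread was resolved:

# Inspect the Gluster extended attributes on the new brick directory
getfattr -m . -d -e hex /export/brick2/sdb1

# If the directory is a leftover and safe to reuse, clear the old markers
setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
setfattr -x trusted.gfid /export/brick2/sdb1
rm -rf /export/brick2/sdb1/.glusterfs

# Then retry the expansion
gluster volume add-brick vg0 gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1
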
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris, here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4 You can also ask in ovirt-users for real-world settings. Test well before changing production!!! IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!! Best Regards, Strahil Nikolov On Mon, Jun 5, 2023 at 13:55,
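
The linked group-virt.example options can also be applied in one step with the built-in "virt" group profile; a minimal sketch, assuming a placeholder volume name "myvol" and that the exact option list depends on the installed Gluster version:

# Apply the whole group profile shipped with glusterfs
gluster volume set myvol group virt

# Review what ended up being set
gluster volume get myvol all | grep -E 'shard|remote-dio|quick-read|eager-lock'
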
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris: Whilst I don't know the issue nor the root cause of your problem with using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article describes as a distributed object storage system. Maybe that might work better with Proxmox? Hope this helps. Sorry that I wasn't able
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have found it to be extremely reliable, though maybe not the fastest; some of that is because most of our storage is SATA SSDs in a software RAID1 config for each brick. What problems are you running into? You just mention 'problems'. -wk On 6/1/23 8:42 AM, Christian Schoepplein wrote: > Hi, > > we'd like to use
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody, Regarding the issue with the mount, I usually use this systemd service to bring up the mount points: /etc/systemd/system/glusterfsmounts.service [Unit] Description=Glustermounting Requires=glusterd.service Wants=glusterd.service After=network.target network-online.target glusterd.service [Service] Type=simple RemainAfterExit=true ExecStartPre=/usr/sbin/gluster volume list
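
A complete unit along these lines might look like the sketch below; the ExecStart command, the [Install] section, and the systemctl steps are assumptions added for illustration, since only the beginning of the unit is quoted above:

cat > /etc/systemd/system/glusterfsmounts.service <<'EOF'
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service

[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
ExecStart=/bin/mount -a -t glusterfs

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable --now glusterfsmounts.service
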
2010 Apr 19
1
Permission Problems
Hello List, first of all my configuration: I have 2 GlusterPlatform 3.0.3 servers virtualized on VMware ESXi 4, with one volume exported as "raid 1". I mounted the share with the GlusterClient 3.0.2 using the following /etc/fstab line: /etc/glusterfs/client.vol /mnt/images glusterfs defaults 0 0 The client.vol looks like this: # auto generated by
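
For comparison, on current GlusterFS versions the client normally mounts the volume directly from a server rather than through a local client.vol file; a hedged sketch with placeholder server and volume names, not taken from this 3.0-era setup:

# Mount the volume straight from a server instead of a local volfile
echo 'gluster1:/images  /mnt/images  glusterfs  defaults,_netdev,backup-volfile-servers=gluster2  0 0' >> /etc/fstab
mount /mnt/images
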
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Red Hat Virtualization, the following Gluster volume settings are recommended to be applied (preferably at the creation of the volume). These settings are important for data reliability (note that Replica 3 or Replica 2+1 is expected): performance.quick-read=off performance.read-ahead=off performance.io-cache=off performance.low-prio-threads=32 network.remote-dio=enable
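
A sketch of applying the options listed above with the gluster CLI; the volume name "vmstore" is a placeholder, and the option values are exactly those quoted in the message:

# Apply each of the listed options to the volume
for opt in performance.quick-read=off \
           performance.read-ahead=off \
           performance.io-cache=off \
           performance.low-prio-threads=32 \
           network.remote-dio=enable; do
    gluster volume set vmstore "${opt%%=*}" "${opt#*=}"
done
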
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options: performance.write-behind performance.flush-behind --- Gilberto Nunes Ferreira (47) 99676-7530 - Whatsapp / Telegram On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <guillaume.pavese at interactiv-group.com> wrote: > On oVirt / Red Hat Virtualization, > the following Gluster volume settings are recommended to be applied > (preferably at
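
In CLI form, the suggestion above would be roughly the following, with "vmstore" as a placeholder volume name:

gluster volume set vmstore performance.write-behind off
gluster volume set vmstore performance.flush-behind off
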
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all, thanks a lot for all your answers. At first I changed both settings mentioned below, and the first tests look good. Before changing the settings I was able to crash a newly installed VM every time after a fresh installation by generating a lot of I/O, e.g. when installing LibreOffice. This always resulted in corrupt files inside the VM, but researching the qcow2 file with the
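
A common way to inspect a qcow2 image for this kind of corruption is qemu-img; this is an assumption about the tooling, not necessarily what was used in the original message, and the image path is a placeholder:

# Check the image for corruption and leaked clusters
qemu-img check /var/lib/vz/images/100/vm-100-disk-0.qcow2

# Show format, virtual size and snapshot/backing information
qemu-img info /var/lib/vz/images/100/vm-100-disk-0.qcow2
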
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi, we'd like to use glusterfs for Proxmox and virtual machines with qcow2 disk images. We have a three-node glusterfs setup with one volume; Proxmox is attached and VMs are created, but after some time, and I think after a VM has done a lot of I/O, the data inside the virtual machine gets corrupted. When I copy files from or to our glusterfs directly, everything is OK, I've
2023 Jan 19
1
really large number of skipped files after a scrub
Hi, Just to follow up on my first observation from this email from December: automatic scheduled scrubs that do not happen. We have now upgraded glusterfs from 7.4 to 10.1 and see that the automated scrubs ARE running now. Not sure why they didn't in 7.4, but issue solved. :-) MJ On Mon, 12 Dec 2022 at 13:38, cYuSeDfZfb cYuSeDfZfb <cyusedfzfb at gmail.com> wrote: > Hi, > > I
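
For anyone verifying the same thing, scrub activity can be checked from the CLI; a minimal sketch with a placeholder volume name:

# When was the scrubber last run, and how many files were scrubbed/skipped per node?
gluster volume bitrot myvol scrub status

# Trigger a scrub right away instead of waiting for the schedule
gluster volume bitrot myvol scrub ondemand
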
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is held up; IO is not interrupted for existing transfers. I think this points to the heat metadata in the sqlite DB for the tier. Is it possible that a table is temporarily locked while the promotion daemon runs, so the calls to update the access count on files are blocked? On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
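
The locking hypothesis is easy to reproduce in isolation with the sqlite3 shell: while one connection holds a write transaction, other writers get "database is locked" until it finishes. This only illustrates SQLite's behaviour, not the actual tier database schema:

# Create a toy "heat" table (purely illustrative, not the real tier schema)
sqlite3 /tmp/heat-demo.db 'CREATE TABLE heat (file TEXT PRIMARY KEY, hits INTEGER);'

# Terminal 1: open a write transaction and hold it (stands in for the promotion scan)
#   sqlite3 /tmp/heat-demo.db
#   sqlite> BEGIN IMMEDIATE;

# Terminal 2: a concurrent access-count update is refused with "database is locked"
# until terminal 1 commits or rolls back (unless a busy_timeout is configured)
#   sqlite3 /tmp/heat-demo.db
#   sqlite> UPDATE heat SET hits = hits + 1 WHERE file = '/some/file';
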
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I grabbed that from a file, not realizing it was out of date. Here's a current configuration showing the active hot tier: [root at pod-sjc1-gluster1 ~]# gluster volume info Volume Name: gv0 Type: Tier Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196 Status: Started Snapshot Count: 13 Number of Bricks: 8 Transport-type: tcp Hot
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, the brick sizes are now reporting the correct size of all bricks combined instead of just one brick. Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect? On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote: > Sure! > > > 1 -
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied; there are millions of small (<1 MB) files and thousands of files larger than 1 GB. Attached are the tier logs for gluster1 and gluster2. These are full of "demotion failed" messages, which is also shown in the status: [root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status Node Promoted files Demoted files
2017 Dec 21
3
Wrong volume size with df
Sure! > 1 - output of gluster volume heal <volname> info Brick pod-sjc1-gluster1:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster1:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick
2018 Jan 17
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
Here's the volume info: Volume Name: gv2a2 Type: Replicate Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f Status: Started Snapshot Count: 0 Number of Bricks: 1 x (2 + 1) = 3 Transport-type: tcp Bricks: Brick1: gluster1:/bricks/brick2/gv2a2 Brick2: gluster3:/bricks/brick3/gv2a2 Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter) Options Reconfigured: storage.owner-gid: 107
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom, The volume info doesn't show the hot bricks. I think you took the volume info output before attaching the hot tier. Can you send the volume info of the current setup where you see this issue? The logs you sent are from a later point in time; the issue is hit earlier than what is available in the logs. I need the logs from an earlier time. And along with the entire tier
2018 Jan 23
1
[Possibile SPAM] Re: Strange messages in mnt-xxx.log
On 17 January 2018 at 16:04, Ing. Luca Lazzeroni - Trend Servizi Srl < luca at trendservizi.it> wrote: > Here's the volume info: > > > Volume Name: gv2a2 > Type: Replicate > Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f > Status: Started > Snapshot Count: 0 > Number of Bricks: 1 x (2 + 1) = 3 > Transport-type: tcp > Bricks: > Brick1: