Displaying 20 results from an estimated 5000 matches similar to: "Libvirt and Glusterfs pool"
2018 Feb 07
0
Fwd: Troubleshooting glusterfs
Hello Nithya! Thank you for your help on figuring this out!
We changed our configuration and, after a successful test yesterday,
we have run into a new issue today.
The test, which included moderate read/write load (~20-30 Mb/s) and scaling the
storage, had been running for about 3 hours when at some point the system got stuck.
At the user level, errors like the following appear when trying to work with the filesystem:
OSError:
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hello Nithya!
Thank you so much, I think we are close to building a stable storage solution
based on your recommendations. Here's our rebalance log - please don't
pay attention to the error messages after 9AM - that is when we manually
destroyed the volume to recreate it for further testing. Also, all remove-brick
operations you can see in the log were executed manually while recreating the
volume.
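A minimal sketch of how such rebalance and remove-brick runs can be monitored from the CLI; the volume name gv0 is an assumption, not taken from the log:
# gluster volume rebalance gv0 status
# gluster volume remove-brick gv0 <brick> status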
2024 Sep 29
1
Growing cluster: peering worked, staging failed
Fellow gluster users,
trying to extend a 3-node cluster that has been serving me very reliably
for a long time now.
Cluster is serving two volumes:
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: 9bafc4d2-d9b6-4b6d-a631-1cf42d1d2559
Status: Started
Snapshot Count: 0
Number of Bricks: 6 x (2 + 1) = 18
Transport-type: tcp
Volume Name: gv1
Type: Replicate
Volume ID:
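A minimal sketch of the peering checks typically run before staging new bricks into such a cluster; the hostname node4 is hypothetical:
# gluster peer probe node4
# gluster peer status
# gluster pool list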
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hey,
Did the heal complete, or do you still have some entries pending heal?
If yes, can you provide the following information to debug the issue:
1. Which version of gluster you are running
2. gluster volume heal <volname> info summary or gluster volume heal
<volname> info
3. getfattr -d -e hex -m . <filepath-on-brick> output of any one of the
files which is pending heal from all
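Collected together, the requested commands look roughly like this (placeholders as in the message; the getfattr should be run against the affected file on every brick that hosts it):
# gluster --version
# gluster volume heal <volname> info summary
# gluster volume heal <volname> info
# getfattr -d -e hex -m . <filepath-on-brick>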
2018 Feb 09
0
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you for your reply. The heal is still in progress, as /var/log/glusterfs/glustershd.log keeps growing, and there are a lot of pending entries in the heal info.
The gluster version is 3.10.9 and 3.10.10 (a version update is in progress). It doesn't have 'info summary' [yet?], and the heal info is way too long to attach here. (It takes more than 20 minutes just to collect
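On 3.10, where 'info summary' is not available, a rough count of pending entries can be obtained without dumping the full list; a sketch assuming the volume is called myvol, not a command taken from this thread:
# gluster volume heal myvol statistics heal-count
# gluster volume heal myvol info | grep -c 'gfid:'   # approximate, counts only gfid-style entries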
2018 Feb 09
1
self-heal trouble after changing arbiter brick
Hi Karthik,
Thank you very much, you made me much more relaxed. Below is getfattr output for a file from all the bricks:
root@gv2 ~ # getfattr -d -e hex -m . /data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
getfattr: Removing leading '/' from absolute path names
# file: data/glusterfs/testset/306/30677af808ad578916f54783904e6342.pack
2018 Feb 08
5
self-heal trouble after changing arbiter brick
Hi folks,
I'm having trouble moving an arbiter brick to another server because of I/O load issues. My setup is as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3:
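For reference, the replace-brick form usually used to move an arbiter brick in one step looks like the sketch below; old-arbiter and new-arbiter are hypothetical hostnames, and the subsequent self-heal toward the new brick still puts read load on the data bricks:
# gluster volume replace-brick myvol old-arbiter:/data/glusterfs new-arbiter:/data/glusterfs commit force
# gluster volume heal myvol info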
2019 Aug 19
2
Starting VM fails with: "Setting different DAC user or group on /path... which is already in use" after upgrading to libvirt 5.6.0-1
Hi,
I upgraded to a Fedora 29 host using virt-preview repo to
libvirt-daemon-5.6.0-1.fc29.x86_64
The host was using plain Fedora 29 without virt-preview before that.
After the upgrade, starting some vms that were running fine fail now with
this error:
Error starting domain: internal error: child reported (status=125):
Requested operation is not valid: Setting different DAC user or group on
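A sketch of how the DAC labels on the affected domain can be inspected; the domain name vm1 is hypothetical, and marking the shared path with <seclabel model='dac' relabel='no'/> is only a commonly reported workaround, not a fix confirmed in this thread:
# virsh dumpxml vm1 | grep -B2 -A2 seclabel
# virsh edit vm1   # optionally add <seclabel model='dac' relabel='no'/> inside the shared disk's <source> so libvirt does not re-label the in-use path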
2013 Apr 30
0
Libvirt and Glusterfs
Hi,
On a Fedora 18 host, I am trying to launch a VM with the QEMU-GlusterFS native integration.
I have enabled the fedora-virt-preview repo and the gluster-alpha3 repo.
Below is the list of installed packages:
glusterfs-3.4.0-0.3.alpha3.fc18.x86_64
glusterfs-devel-3.4.0-0.3.alpha3.fc18.x86_64
glusterfs-fuse-3.4.0-0.3.alpha3.fc18.x86_64
glusterfs-server-3.4.0-0.3.alpha3.fc18.x86_64
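As a quick smoke test of qemu's native gluster:// support (assuming qemu was built with it), something like the following can be tried first; the volume name gvol and the image name are assumptions:
# qemu-img create -f qcow2 gluster://localhost/gvol/test.qcow2 1G
# qemu-img info gluster://localhost/gvol/test.qcow2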
2017 Jan 03
2
shadow_copy and glusterfs not working
Hello,
we are trying to configure a CTDB cluster with GlusterFS. We are using
Samba 4.5 together with gluster 3.9. We set up an lvm2 thin-provisioned
volume so we can use gluster snapshots.
Then we configured the first share without using shadow_copy2 and
everything was working fine.
Then we added the shadow_copy2 parameters; when we did a "smbclient" we
got the following message:
root at
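For context, shadow_copy2 on a gluster volume is usually paired with user-serviceable snapshots on the gluster side; a minimal sketch, assuming the volume is called gv0 (the names and exact option set are assumptions, not taken from this message):
# gluster volume set gv0 features.uss enable
# gluster volume set gv0 features.show-snapshot-directory on
# gluster snapshot create snap1 gv0
# gluster snapshot activate snap1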
2009 Sep 10
0
Re: persistent ssh_host_keys
I believe you should support authorized_keys as well.
On Wednesday 09 September 2009 19:01:06 ovirt-devel-request at redhat.com wrote:
2018 Feb 04
1
Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup:
Distributed volume without replication. Sharding enabled.
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1:
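The shard settings in play can be confirmed per volume; gv0 is the volume name from the info output above:
# gluster volume get gv0 features.shard
# gluster volume get gv0 features.shard-block-size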
2018 Feb 04
1
Fwd: Troubleshooting glusterfs
Please help troubleshoot glusterfs with the following setup:
Distributed volume without replication. Sharding enabled.
# cat /etc/centos-release
CentOS release 6.9 (Final)
# glusterfs --version
glusterfs 3.12.3
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status:
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
On 5 February 2018 at 15:40, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
>
> I see a lot of the following messages in the logs:
> [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile,continuing
> [2018-02-04 07:41:16.189349] W [MSGID: 109011]
> [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no
2018 Feb 05
2
Fwd: Troubleshooting glusterfs
Hi,
I see a lot of the following messages in the logs:
[2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk]
0-glusterfs: No change in volfile,continuing
[2018-02-04 07:41:16.189349] W [MSGID: 109011]
[dht-layout.c:186:dht_layout_search]
48-gv0-dht: no subvolume for hash (value) = 122440868
[2018-02-04 07:41:16.244261] W [fuse-bridge.c:2398:fuse_writev_cbk]
0-glusterfs-fuse:
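Warnings of the 'no subvolume for hash' kind are often followed up with a fix-layout rebalance after bricks have been added or removed; a sketch only, not a fix confirmed in this thread:
# gluster volume rebalance gv0 fix-layout start
# gluster volume rebalance gv0 status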
2024 Jan 03
0
Pre Validation failed on 192.168.3.31. Volume gv1 does not exist
Background ....
Three Fedora servers, one on Fedora 34 and two on 37. Upgraded one of the
servers to Fedora 39. For some reason the 39 server would not rejoin the party.
So I deleted the Fedora 39 server and manually tidied up the brick on Fed39
(192.168.3.31). I ran peer probe and all three servers are connected.
Now the volume displays [FQDN switched to IPs] as below. So I think I
just need to add the brick
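A minimal sketch of the add-brick step being considered, with a hypothetical brick path; a replicated volume would also need the replica count (and arbiter count, if any) on the same line:
# gluster volume add-brick gv1 192.168.3.31:/gluster/brick1/gv1
# gluster volume info gv1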
2017 Dec 05
1
Slow seek times on stat calls to glusterfs metadata
Hi all,
I have a distributed / replicated pool consisting of 2 boxes, with 3 bricks
apiece. Each brick is mounted via a RAID 6 array consisting of eleven 6 TB
disks. I'm running CentOS 7 with XFS and LVM. The 150 TB pool is loaded
with about 15 TB of data. Clients are connected via FUSE. I'm using
glusterfs 3.12.1.
I've found that running large rsyncs to populate the pool is taking a
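Metadata-heavy rsync workloads over FUSE are often tuned with the client-side md-cache options; a sketch of commonly suggested settings, assuming the pool is called gv0 and with no guarantee they help this particular workload:
# gluster volume set gv0 performance.stat-prefetch on
# gluster volume set gv0 performance.readdir-ahead on
# gluster volume set gv0 performance.md-cache-timeout 60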
2023 Jul 05
1
remove_me files building up
Hi Strahil,
This is the output from the commands:
root at uk3-prod-gfs-arb-01:~# du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
2.2G /data/glusterfs/gv1/brick1/brick/.glusterfs
24M /data/glusterfs/gv1/brick1/brick/scalelite-recordings
16K /data/glusterfs/gv1/brick1/brick/mytute
18M /data/glusterfs/gv1/brick1/brick/.shard
0
2023 Jul 04
1
remove_me files building up
Thanks for the clarification.
That behaviour is quite weird, as arbiter bricks should hold only metadata.
What does the following show on host uk3-prod-gfs-arb-01:
du -h -x -d 1 /data/glusterfs/gv1/brick1/brick
du -h -x -d 1 /data/glusterfs/gv1/brick3/brick
du -h -x -d 1 /data/glusterfs/gv1/brick2/brick
If indeed the shards are taking space - that is a really strange situation. From which version
2010 Feb 23
2
Freezing Rails in ovirt-server...
The ovirt-server package has started to slip behind the curve WRT Ruby
on Rails development; i.e., the current version of ovirt-server is not
quite runnable as is on F12. And with F13 coming, the gap's only going
to widen.
So I'm considering the idea of freezing the version of Rails on the
ovirt-server project for the time being. I'll check with the Fedora
packaging team to see if