Displaying 20 results from an estimated 10000 matches similar to: "glusterfs distributed volume and samba, help pls."
2018 Jan 02
0
Wrong volume size with df
For what it's worth, after I added a hot tier to the pool, df is now
reporting the correct size of all bricks combined instead of
just one brick.
Not sure if that gives you any clues for this... maybe adding another brick
to the pool would have a similar effect?
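For reference, attaching a hot tier with the 3.x-era CLI looked roughly like
the sketch below. Volume and brick names are made up, and tiering was
deprecated and later removed in newer Gluster releases, so treat this purely
as an illustration:
# attach two SSD-backed bricks as a replicated hot tier (hypothetical names)
gluster volume tier gv0 attach replica 2 \
    node1:/data/ssd/gv0-hot node2:/data/ssd/gv0-hot
# check tier activity, and detach again if it is not needed
gluster volume tier gv0 status
gluster volume tier gv0 detach start
gluster volume tier gv0 detach commit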
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 -
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
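For anyone reading along: the options in that group-virt.example file are
normally applied in one go with the "virt" group profile. The volume name
below is only a placeholder, and you should check that the group file exists
for your Gluster version before relying on it:
# apply the virt profile shipped in /var/lib/glusterd/groups/virt
gluster volume set vmstore group virt
# the profile enables sharding among other things; remember that is one-way
gluster volume get vmstore features.shard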
Best Regards, Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris:
While I don't know the issue nor the root cause of your problem with using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article for it describes as a distributed object storage system.
Maybe that might work better with Proxmox?
Hope this helps.
Sorry that I wasn't able
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the issue with the mounts, I usually use this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
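The unit file is cut off in this excerpt. A common way to finish such a unit
(my assumption, not necessarily the original poster's file) is to mount all
glusterfs fstab entries and enable the unit at boot:
# assumed remainder of the unit, shown for illustration only
ExecStart=/usr/bin/mount -a -t glusterfs

[Install]
WantedBy=multi-user.target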
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have
found it to be extremely reliable, though maybe not the fastest; part of
that is that most of our storage is SATA SSDs in a software RAID1
config for each brick.
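As a rough sketch of that kind of brick layout (device names, paths and the
mdadm/XFS details here are my own assumptions, not taken from the poster's
systems):
# mirror two SATA SSDs with mdadm and put an XFS-formatted brick on top
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mkfs.xfs -i size=512 /dev/md0
mkdir -p /data/brick1
mount /dev/md0 /data/brick1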
What problems are you running into?
You just mention 'problems'
-wk
On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Redhat Virtualization,
the following Gluster volume settings are recommended to be applied
(preferably at the creation of the volume).
These settings are important for data reliability (note that Replica 3 or
Replica 2+1 is expected):
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
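The list is cut off here, but the options that are shown can be applied one
by one; a small loop like the one below does it (volume name "vmstore" is
just a placeholder):
for opt in performance.quick-read=off performance.read-ahead=off \
           performance.io-cache=off performance.low-prio-threads=32 \
           network.remote-dio=enable; do
    gluster volume set vmstore "${opt%=*}" "${opt#*=}"
done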
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options:
performance.write-behind
performance.flush-behind
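In CLI terms that would be something like the following; the volume name is
a placeholder, and it is worth checking the current values first:
gluster volume get vmstore performance.write-behind
gluster volume set vmstore performance.write-behind off
gluster volume set vmstore performance.flush-behind off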
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volume settings are recommended to be applied
> (preferably at
2018 Apr 30
1
Gluster rebalance taking many years
I cannot count the number of files in the normal way.
Through df -i I got that the approximate number of files is 63694442:
[root at CentOS-73-64-minimal ~]# df -i
Filesystem        Inodes     IUsed      IFree IUse% Mounted on
/dev/md2       131981312  30901030  101080282   24% /
devtmpfs         8192893       435    8192458    1% /dev
tmpfs
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below, and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing a lot of I/O, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
2018 Apr 30
0
Gluster rebalance taking many years
I ran into a big problem: the cluster rebalance takes a long time after adding a
new node.
gluster volume rebalance web status
        Node   Rebalanced-files         size      scanned     failures      skipped       status   run time in h:m:s
   ---------        -----------  -----------  -----------  -----------  -----------
2023 Jun 01
3
Using glusterfs for virtual machines with qcow2 images
Hi,
we'd like to use glusterfs for Proxmox and virtual machines with qcow2
disk images. We have a three-node glusterfs setup with one volume;
Proxmox is attached and VMs are created, but after some time, and I think
after a lot of I/O has happened in a VM, the data inside the virtual machine
gets corrupted. When I copy files from or to our glusterfs
directly, everything is OK, I've
2018 Apr 30
0
Gluster rebalance taking many years
Hi,
This value is an ongoing rough estimate based on the amount of data
rebalance has migrated since it started. The values will change as the
rebalance progresses.
A few questions:
1. How many files/dirs do you have on this volume?
2. What is the average size of the files?
3. What is the total size of the data on the volume?
Can you send us the rebalance log?
Thanks,
Nithya
On 30
2018 Apr 30
2
Gluster rebalance taking many years
2017 Dec 05
1
Slow seek times on stat calls to glusterfs metadata
Hi all,
I have a distributed / replicated pool consisting of 2 boxes, with 3 bricks
a piece. Each brick is backed by a RAID 6 array consisting of eleven 6 TB
disks. I'm running CentOS 7 with XFS and LVM. The 150 TB pool is loaded
with about 15 TB of data. Clients are connected via FUSE. I'm using
glusterfs 3.12.1.
I've found that running large rsyncs to populate the pool are taking a
2024 Oct 17
0
Bricks with different sizes.
Hi there.
I am deploying a glusterfs 2-node server, just for fun!
So in each server, I have:
2x - 500G - operating system
2x - 2TB
1x - 1TB
I formatted the 2x 2TB and the 1x 1TB with XFS and mounted them in this order:
/dev/sdc -> /disk1 -----> 2TB
/dev/sdd -> /disk2 -----> 2TB
/dev/sde -> /disk3 -----> 1TB
And then created a gluster vol with this command:
gluster vol create VMS
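The command is cut off at this point. A replica 2 layout across those mount
points could look like the sketch below, but this is only my guess at the
intent; host names and the "vms" subdirectories are hypothetical, and plain
replica 2 is prone to split-brain, so replica 3 or an arbiter is usually
recommended:
gluster vol create VMS replica 2 \
    server1:/disk1/vms server2:/disk1/vms \
    server1:/disk2/vms server2:/disk2/vms \
    server1:/disk3/vms server2:/disk3/vms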
2013 Jul 02
1
problem expanding a volume
Hello,
I am having trouble expanding a volume. Every time I try to add bricks to
the volume, I get this error:
[root at gluster1 sdb1]# gluster volume add-brick vg0
gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1
/export/brick2/sdb1 or a prefix of it is already part of a volume
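That message usually means the brick directory (or one of its parents) still
carries Gluster metadata from an earlier volume. The commonly used cleanup,
run on the affected servers and only when you are sure nothing on the brick
is needed, looks roughly like this:
setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1
setfattr -x trusted.gfid /export/brick2/sdb1
rm -rf /export/brick2/sdb1/.glusterfs
service glusterd restart    # 2013-era CentOS; use systemctl on newer systems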
Here is the volume info:
[root at gluster1 sdb1]# gluster volume info vg0
Volume Name: vg0
Type:
2017 Sep 17
2
Volume Heal issue
Hi all,
I have a replica 3 with 1 arbiter.
Over the last few days I have seen that one file in a volume is always showing
as needing healing:
gluster volume heal vms info
Brick gluster0:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster1:/gluster/vms/brick
Status: Connected
Number of entries: 0
Brick gluster2:/gluster/vms/brick
*<gfid:66d3468e-00cf-44dc-a835-7624da0c5370>*
Status:
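One way to see which file that gfid entry refers to is to look it up on a
brick through the .glusterfs hard-link tree; the brick path below matches the
output above, but the exact steps are a general technique, not something from
this thread:
GFID=66d3468e-00cf-44dc-a835-7624da0c5370
ls -l /gluster/vms/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID
# for a regular file, the real path can be found via the shared inode
find /gluster/vms/brick -samefile \
    /gluster/vms/brick/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID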
2017 Sep 17
0
Volume Heal issue
I am using gluster 3.8.12, the default on CentOS 7.3
(I will update to 3.10 at some point).
On Sun, Sep 17, 2017 at 11:30 AM, Alex K <rightkicktech at gmail.com> wrote:
> Hi all,
>
> I have a replica 3 with 1 arbiter.
>
> Over the last few days I have seen that one file in a volume is always showing
> as needing healing:
>
> gluster volume heal vms info
> Brick
2024 Nov 05
1
Add an arbiter when you have multiple bricks on the same server.
Still getting the error.
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms