Displaying 20 results from an estimated 1000 matches similar to: "Gluster 3.3: Unable to delete xattrs"
2011 Feb 24
0
No subject
which is a stripe of the gluster storage servers, this is the
performance I get (note: use a file size larger than the amount of RAM on the
client and server systems, 13GB in this case):
4k block size :
111 pir4:/pirstripe% /sb/admin/scripts/nfsSpeedTest -s 13g -y
pir4: Write test (dd): 142.281 MB/s 1138.247 mbps 93.561 seconds
pir4: Read test (dd): 274.321 MB/s 2194.570 mbps 48.527 seconds
testing from 8k -
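For reference, a minimal sketch of the kind of dd-based test such a script presumably runs; the path, size, and cache handling below are assumptions, not taken from nfsSpeedTest itself:

# write ~13GB in 4k blocks, forcing data to disk before dd reports its rate
dd if=/dev/zero of=/pirstripe/ddtest bs=4k count=3407872 conv=fdatasync
# drop the page cache (as root) so the read test is not served from RAM
echo 3 > /proc/sys/vm/drop_caches
# read the file back
dd if=/pirstripe/ddtest of=/dev/null bs=4k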
2017 Jun 17
1
client reconnect fails (was gluster heal entry reappears)
Hi Ravi,
back to our client-cannot-reconnect-to-gluster-brick problem ...
> From: Ravishankar N [ravishankar at redhat.com]
> Sent: Monday, 29 May 2017 06:34
> To: Markus Stockhausen; gluster-users at gluster.org
> Subject: Re: [Gluster-users] gluster heal entry reappears
>
> > On 05/28/2017 10:31 PM, Markus Stockhausen wrote:
> > Hi,
> >
> > I'm
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Still getting error
pve01:~# gluster vol info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: gluster1:/disco2TB-0/vms
Brick2: gluster2:/disco2TB-0/vms
Brick3: gluster1:/disco1TB-0/vms
Brick4: gluster2:/disco1TB-0/vms
Brick5: gluster1:/disco1TB-1/vms
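For a 3 x 2 distributed-replicate volume like this one, adding an arbiter means supplying one arbiter brick per replica pair, along these lines (the host and paths here are hypothetical; the actual command appears in a later message in this thread):

gluster volume add-brick VMS replica 3 arbiter 1 \
  arbiter:/arb/vms-0 arbiter:/arb/vms-1 arbiter:/arb/vms-2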
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello,
Thanks to "partner" on IRC who told me about this (quite big) problem.
Apparently in a distributed setup once a brick fills up you start
getting write failures. Is there a way to work around this?
I would have thought gluster would check for free space before writing
to a brick.
It's very easy to test: I created a distributed volume from 2 uneven
bricks and started to
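One commonly suggested mitigation (not a complete fix) is cluster.min-free-disk, which tells the distribute translator to steer new files away from bricks that drop below the threshold; a sketch, assuming a volume named dist-vol:

gluster volume set dist-vol cluster.min-free-disk 10%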
2024 Nov 05
1
Add an arbiter when there are multiple bricks on the same server.
Yes, but I want to add one. Is it the same logic?
---
Gilberto Nunes Ferreira
+55 (47) 99676-7530
Proxmox VE
VinChin Backup & Restore
On Tue, Nov 5, 2024 at 14:09, Aravinda <aravinda at kadalu.tech> wrote:
> Hello Gilberto,
>
> You can create an Arbiter volume using three bricks. Two of them will be
> data bricks and one will be the Arbiter brick.
>
> gluster volume
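The command is cut off above; the usual arbiter creation syntax looks roughly like this (host and brick paths are placeholders):

gluster volume create testvol replica 3 arbiter 1 \
  server1:/bricks/b1 server2:/bricks/b2 server3:/bricks/arb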
2024 Oct 17
0
Bricks with different sizes.
Hi there.
I am deploying a glusterfs 2-node server, just for fun!
So in each server, I have:
2x - 500G - Operating System
2x - 2TB
1x - 1TB
I formatted the 2x 2TB and the 1x 1TB with XFS and mounted them in this order:
/dev/sdc -> /disk1 -----> 2TB
/dev/sdd -> /disk2 -----> 2TB
/dev/sde -> /disk3 -----> 1TB
And then created a gluster vol with this command:
gluster vol create VMS
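The create command is truncated here; given the mount points above, it was presumably something of this shape (the replica layout and hostnames are assumptions):

gluster vol create VMS replica 2 server1:/disk1/vms server2:/disk1/vms \
  server1:/disk2/vms server2:/disk2/vms server1:/disk3/vms server2:/disk3/vms

Note that gluster warns that replica 2 volumes are prone to split-brain and asks for confirmation.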
2018 Jan 18
0
Blocking IO when hot tier promotion daemon runs
Thanks for the info, Hari. Sorry about the bad gluster volume info; I
grabbed that from a file without realizing it was out of date. Here's a current
configuration showing the active hot tier:
[root at pod-sjc1-gluster1 ~]# gluster volume info
Volume Name: gv0
Type: Tier
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 8
Transport-type: tcp
Hot
2018 Jan 10
0
Blocking IO when hot tier promotion daemon runs
I should add that additional testing has shown that only accessing files is
held up; IO is not interrupted for existing transfers. I think this points
to the heat metadata in the sqlite DB for the tier. Is it possible that a
table is temporarily locked while the promotion daemon runs, so that the calls
to update the access count on files are blocked?
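The locking theory is easy to reproduce with plain sqlite3, independent of whatever schema gluster's tier actually uses (the DB path and table here are made up):

# session 1, left open: take a write lock, as a promotion scan might
#   sqlite3 /tmp/tier.db
#   sqlite> CREATE TABLE heat(gfid TEXT, hits INT);
#   sqlite> BEGIN IMMEDIATE;
# session 2, while session 1 holds the lock: writers get SQLITE_BUSY
sqlite3 /tmp/tier.db "UPDATE heat SET hits = hits + 1;"
# -> Error: database is locked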
On Wed, Jan 10, 2018 at 10:17 AM, Tom Fite
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi Chris,
here is a link to the settings needed for VM storage: https://github.com/gluster/glusterfs/blob/03592930239c3b43cbbdce17607c099ae075fd6d/extras/group-virt.example#L4
You can also ask in ovirt-users for real-world settings. Test well before changing production!!!
IMPORTANT: ONCE SHARDING IS ENABLED, IT CANNOT BE DISABLED !!!
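As a hedged aside: the options in that group file can typically be applied in one step via gluster's settings-group mechanism rather than one volume set per option (volume name is a placeholder):

gluster volume set myvol group virt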
Best Regards,
Strahil Nikolov
On Mon, Jun 5, 2023 at 13:55,
2023 Jun 07
1
Using glusterfs for virtual machines with qcow2 images
Hi everybody
Regarding the issue with mount, usually I am using this systemd service to
bring up the mount points:
/etc/systemd/system/glusterfsmounts.service
[Unit]
Description=Glustermounting
Requires=glusterd.service
Wants=glusterd.service
After=network.target network-online.target glusterd.service
[Service]
Type=simple
RemainAfterExit=true
ExecStartPre=/usr/sbin/gluster volume list
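The unit file is cut off above; a complete sketch of such a service might continue like this (the remainder is an assumption, not the poster's actual file):

ExecStart=/bin/mount -a -t glusterfs
TimeoutSec=600

[Install]
WantedBy=multi-user.target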
2023 Jun 01
1
Using glusterfs for virtual machines with qcow2 images
Chris:
While I don't know the issue or the root cause of your problem with using GlusterFS with Proxmox, I am going to guess that you might already know that Proxmox "natively" supports Ceph, which the Wikipedia article for it describes as a distributed object storage system.
Maybe that might work better with Proxmox?
Hope this helps.
Sorry that I wasn't able
2018 Jan 10
2
Blocking IO when hot tier promotion daemon runs
The sizes of the files are extremely varied, there are millions of small
(<1 MB) files and thousands of files larger than 1 GB.
Attached is the tier log for gluster1 and gluster2. These are full of
"demotion failed" messages, which is also shown in the status:
[root at pod-sjc1-gluster1 gv0]# gluster volume tier gv0 status
Node Promoted files Demoted files
2023 Jun 01
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
We use qcow2 with libvirt-based KVM on many small clusters and have
found it to be extremely reliable, though maybe not the fastest; part
of that is that most of our storage is SATA SSDs in a software RAID1
config for each brick.
What problems are you running into?
You just mention 'problems'.
-wk
On 6/1/23 8:42 AM, Christian Schoepplein wrote:
> Hi,
>
> we'd like to use
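For context, a software RAID1 brick like the one described might be assembled roughly as follows (device names, filesystem, and mount point are hypothetical):

# mirror two SSDs, format, and mount as a brick
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mkfs.xfs /dev/md0
mount /dev/md0 /bricks/brick1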
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
On oVirt / Redhat Virtualization,
the following Gluster volume settings are recommended to be applied
(preferably at the creation of the volume).
These settings are important for data reliability. (Note that Replica 3 or
Replica 2+1 is expected.)
performance.quick-read=off
performance.read-ahead=off
performance.io-cache=off
performance.low-prio-threads=32
network.remote-dio=enable
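The option list above is cut off mid-way; each entry is applied with gluster volume set, for example (volume name assumed):

gluster volume set myvol performance.quick-read off
gluster volume set myvol network.remote-dio enable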
2018 Jan 02
0
Wrong volume size with df
For what it's worth here, after I added a hot tier to the pool, the brick
sizes are now reporting the correct size of all bricks combined instead of
just one brick.
Not sure if that gives you any clues for this... maybe adding another brick
to the pool would have a similar effect?
On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote:
> Sure!
>
> > 1 -
2023 Jun 02
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Try turning off these options (example commands are sketched after the list):
performance.write-behind
performance.flush-behind
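For example, assuming a volume placeholder <volname>:

gluster volume set <volname> performance.write-behind off
gluster volume set <volname> performance.flush-behind off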
---
Gilberto Nunes Ferreira
(47) 99676-7530 - Whatsapp / Telegram
On Fri, Jun 2, 2023 at 07:55, Guillaume Pavese <
guillaume.pavese at interactiv-group.com> wrote:
> On oVirt / Redhat Virtualization,
> the following Gluster volume settings are recommended to be applied
> (preferably at
2023 Jun 05
1
[EXT] [Glusterusers] Using glusterfs for virtual machines with qco
Hi Gilberto, hi all,
thanks a lot for all your answers.
At first I changed both settings mentioned below, and the first tests look good.
Before changing the settings I was able to crash a newly installed VM every
time after a fresh installation by producing a lot of I/O, e.g. when installing
LibreOffice. This always resulted in corrupt files inside the VM, but
researching the qcow2 file with the
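The message breaks off at the tool used; inspecting a suspect image is commonly done with something like qemu-img, though that is an assumption here:

qemu-img check image.qcow2
# optionally attempt a repair of leaked clusters
qemu-img check -r leaks image.qcow2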
2017 Dec 21
3
Wrong volume size with df
Sure!
> 1 - output of gluster volume heal <volname> info
Brick pod-sjc1-gluster1:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick1/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick
2018 Jan 18
2
Blocking IO when hot tier promotion daemon runs
Hi Tom,
The volume info doesn't show the hot bricks. I think you took the
volume info output before attaching the hot tier.
Can you send the volume info of the current setup where you see this issue?
The logs you sent are from a later point in time; the issue is hit
earlier than what is available in the logs. I need the logs
from an earlier time.
And along with the entire tier
2024 Nov 08
1
Add an arbiter when there are multiple bricks on the same server.
After forcing the add-brick:
gluster volume add-brick VMS replica 3 arbiter 1 arbiter:/arbiter1
arbiter:/arbiter2 arbiter:/arbiter3 force
volume add-brick: success
pve01:~# gluster volume info
Volume Name: VMS
Type: Distributed-Replicate
Volume ID: e1a4f787-3f62-441e-a7ce-c0ae6b111ebf
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: