Results similar to: "cluster.min-free-disk separate for each, brick"

Displaying 20 results from an estimated 10000 matches similar to: "cluster.min-free-disk separate for each, brick"

2010 Jun 09
4
health monitoring of replicated volume
Hello, Is there any reasonable way to monitor the health of a replicated volume and sync it if it is out of sync? Regards,
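For reference, on GlusterFS releases that ship the self-heal daemon (these commands postdate the 2010 question, so treat this as a sketch rather than the answer given in the thread; "myvol" is a placeholder volume name):

  gluster volume heal myvol info                # list entries still pending heal
  gluster volume heal myvol info split-brain    # list entries in split-brain
  gluster volume heal myvol full                # trigger a full crawl/resync of the volume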
2011 Aug 07
1
Using volumes during fix-layout after add/remove-brick
Hello All- I regularly increase the size of volumes using "add-brick" followed by "rebalance VOLNAME fix-layout". I usually allow normal use of an expanded volume (i.e. reading and writing files) to continue while "fix-layout" is taking place, and I have not experienced any apparent problems as a result. The documentation does not say that volumes cannot be used
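The expansion workflow described above typically looks like the following sketch; the volume name and brick path are placeholders, not values taken from the thread:

  gluster volume add-brick VOLNAME server3:/export/brick1    # grow the volume with a new brick
  gluster volume rebalance VOLNAME fix-layout start          # spread the directory layout over the new brick
  gluster volume rebalance VOLNAME status                    # watch progress; fix-layout does not migrate existing data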
2012 Feb 22
2
"mismatching layouts" errors after expanding volume
Dear All- There are a lot of errors of the following type in my client and NFS logs following a recent volume expansion. [2012-02-16 22:59:42.504907] I [dht-layout.c:682:dht_layout_dir_mismatch] 0-atmos-dht: subvol: atmos-replicate-0; inode layout - 0 - 0; disk layout - 9203501 34 - 1227133511 [2012-02-16 22:59:42.534399] I [dht-common.c:524:dht_revalidate_cbk] 0-atmos-dht: mismatching
2013 Jan 26
4
Write failure on distributed volume with free space available
Hello, Thanks to "partner" on IRC who told me about this (quite big) problem. Apparently, in a distributed setup, once a brick fills up you start getting write failures. Is there a way to work around this? I would have thought gluster would check for free space before writing to a brick. It's very easy to test: I created a distributed volume from 2 uneven bricks and started to
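This is where the cluster.min-free-disk option from the search term comes in: it tells DHT to stop placing new files on bricks whose free space has dropped below a threshold. A minimal sketch with VOLNAME as a placeholder (it only affects where new files are created; it does not move existing data):

  gluster volume set VOLNAME cluster.min-free-disk 10%    # percentage, or an absolute size such as 20GB
  gluster volume info VOLNAME                             # the setting appears under "Options Reconfigured"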
2023 Mar 30
1
Performance: lots of small files, hdd, nvme etc.
Well, you have *way* more files than we do... :) On 30/03/2023 11:26, Hu Bert wrote: > Just an observation: is there a performance difference between a sw > raid10 (10 disks -> one brick) or 5x raid1 (each raid1 a brick) Err... RAID10 is not 10 disks unless you stripe 5 mirrors of 2 disks. > with > the same disks (10TB hdd)? The heal processes on the 5xraid1-scenario >
2018 Apr 27
2
How to set up a 4 way gluster file system
Hi, I have 4 nodes, so a quorum would be 3 of 4. The question is, I suppose, why does the documentation give this command as an example without qualifying it? So am I running the wrong command? I want a "raid10" On 27 April 2018 at 18:05, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi, > > With replica 2 volumes one can easily end up in split-brains if there are
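A "raid10"-style layout over 4 nodes is a 2 x 2 distributed-replicate volume; consecutive bricks in the argument list form a replica pair. This is only a sketch with placeholder hostnames and paths, and, as the quoted reply warns, plain replica 2 is prone to split-brain, so an arbiter (replica 3 arbiter 1) or full replica 3 layout is generally preferred:

  gluster volume create VOLNAME replica 2 \
      node1:/export/brick1 node2:/export/brick1 \
      node3:/export/brick1 node4:/export/brick1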
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts, We're running glusterfs 3.3 and we have met file permission problems after gluster volume rebalance. Files got sticky permissions T--------- after rebalance, which breaks our clients' normal fops unexpectedly. Does anyone know about this issue? Thank you for your help.
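Zero-byte files with only the sticky bit set are normally DHT link files (pointers created or left behind during rebalance). A hedged way to check whether the affected files are such link files, with the brick path as a placeholder:

  find /export/brick1 -type f -perm 1000 -size 0c    # candidate DHT link files: sticky bit only, empty
  getfattr -n trusted.glusterfs.dht.linkto -e text /export/brick1/path/to/file    # a link file carries this xattr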
2023 Mar 30
2
Performance: lots of small files, hdd, nvme etc.
Hello there, as Strahil suggested, a separate thread might be better. Current state: - servers with 10TB hdds - 2 hdds build up a sw raid1 - each raid1 is a brick - so 5 bricks per server - Volume info (complete below): Volume Name: workdata Type: Distributed-Replicate Number of Bricks: 5 x 3 = 15 Bricks: Brick1: gls1:/gluster/md3/workdata Brick2: gls2:/gluster/md3/workdata Brick3:
2012 Jan 22
2
Best practices?
Suppose I start building nodes with (say) 24 drives each in them. Would the standard/recommended approach be to make each drive its own filesystem, and export 24 separate bricks, server1:/data1 .. server1:/data24 ? Making a distributed replicated volume between this and another server would then have to list all 48 drives individually. At the other extreme, I could put all 24 drives into some
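For the per-drive approach, the order in which bricks are passed to volume create determines the replica pairing: consecutive bricks form a replica set, so each drive on server1 is mirrored by the same-numbered drive on the second server. A sketch using the thread's server1 naming, with server2 standing in for the unnamed second server:

  gluster volume create VOLNAME replica 2 \
      server1:/data1 server2:/data1 \
      server1:/data2 server2:/data2 \
      server1:/data3 server2:/data3
  # ...continue listing pairs up to server1:/data24 server2:/data24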
2017 Dec 11
2
reset-brick command questions
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me > > Introducing reset-brick command > > /Notes for users:/ The reset-brick command provides support to > reformat/replace the disk(s) represented by a brick within a volume. > This is helpful when a disk goes bad etc > That's what I need, the use case is a disk goes bad on
2013 Dec 09
3
Gluster infrastructure question
Heyho guys, I have been running glusterfs for years in a small environment without big problems. Now I'm going to use glusterFS for a bigger cluster, but I have some questions :) Environment: * 4 Servers * 20 x 2TB HDD each * Raidcontroller * Raid 10 * 4x bricks => Replicated, Distributed volume * Gluster 3.4 1) I'm asking myself if I can
2006 Jan 05
4
Workshops or groups in Boston?
I have been hearing a lot about workshops and/or groups of Ruby on Rails users meeting up in numerous cities, but I haven't been too successful finding any such gatherings or events in Boston. On the wiki there is a link to a Boston group for Ruby, but the page refuses to load. Anyone from the Boston area know of anything? Cheers, Eric Czarny eczarny@stonehill.edu
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. One more case could be where you just want to change the hostname of the node hosting the bricks to its IP address. In this case too you will follow the same steps, but just have to provide the IP
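The two-step flow the reply describes maps onto the commands below; this is a sketch with placeholder names, where the source and "new" brick are the same HOSTNAME:BRICKPATH when only the underlying disk is replaced:

  gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start
  # ...reformat/replace the disk and mount it back at the same path...
  gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH HOSTNAME:BRICKPATH commit force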
2003 Nov 06
2
Assistance request
Hi Samba team, first I want to apologize for this spam, you probably get tons of such messages per day. However I am desperate and sick of digging into newsgroups, mailing lists and everything else ;)) For a year now I have been trying to find a way to make samba receive winpopup messages from Windows 2000 or XP, using the "net send" command. Receiving messages from other linux machines
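Samba hands an incoming WinPopup message to whatever the smb.conf "message command" parameter points at, so a handler has to be configured for the message to go anywhere; a hedged sketch (the log file path is made up for illustration, %s is the temporary file holding the message, %f the sender, and the trailing & keeps smbd from blocking):

  [global]
     message command = /bin/sh -c 'cat %s >> /var/log/winpopup.log; rm %s' &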
2017 Sep 27
2
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi gluster experts, I have met a tough problem with a "split-brain" issue. Sometimes, after a hard reboot, we find some files in split-brain; however, neither they nor their parent directories are shown by the "gluster volume heal <volume-name> info" command, and there is no entry in the .glusterfs/indices/xattrop directory. Can you help shed some light on this issue? Thanks! Following is some info from
2009 Aug 11
2
Win98 client can't connect after Samba upgrade
Hi All, I was running Samba 2.2 as a PDC for a small office server for several years. Hardware was getting old enough I finally bit the bullet and upgraded to Samba 3.3.4 on FreeBSD 7.2 and a new box. The smbpasswd file and all that stuff was copied from the old box. Clients are Win2000 and XP Professional, and for the most part were back up and connected, practically unaware of the switch
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi, To resolve the gfid split-brain you can follow the steps at [1]. Since we don't have the pending markers set on the files, it is not showing in the heal info. To debug this issue, I need some more data from you. Could you provide these things? 1. volume info 2. mount log 3. brick logs 4. shd log May I also know which version of gluster you are running? From the info you have provided it
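For split-brain that does show up in heal info, recent GlusterFS releases can also resolve entries directly from the CLI; the commands below are a sketch with placeholder volume, brick and file names, not the steps referenced as [1] in the reply:

  gluster volume heal VOLNAME info split-brain
  gluster volume heal VOLNAME split-brain bigger-file <FILE>                        # keep the larger copy
  gluster volume heal VOLNAME split-brain source-brick HOSTNAME:BRICKPATH <FILE>    # keep the copy on a chosen brick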
2017 Aug 19
2
Add brick to a disperse volume
Hello, I have been using Gluster for 2 years, but only with distributed volumes. I'm now trying to set up dispersed volumes to have some redundancy. I had no problem creating a functional test volume with 4 bricks and 1 redundancy (Number of Bricks: 1 x (3 + 1) = 4). I also had no problem replacing a supposedly faulty brick with another one. My problem is that I can not add a brick to
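A likely explanation, offered here as an assumption rather than the answer given in the thread: bricks can only be added to a disperse volume a full disperse set at a time, so a 1 x (3 + 1) volume grows by another 4 bricks and becomes a 2 x (3 + 1) distributed-disperse volume. Sketch with placeholder hosts and paths:

  gluster volume add-brick VOLNAME \
      node5:/export/brick1 node6:/export/brick1 \
      node7:/export/brick1 node8:/export/brick1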
2017 Sep 28
2
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
Hi, Thanks for the reply! I've checked [1]. But the problem is that there is nothing shown by the command "gluster volume heal <volume-name> info", so these split-brain entries can only be detected when an application tries to access them. I can find the gfid mismatch for those in-split-brain entries in the mount log; however, nothing shows up in the shd log, and the shd log does not know about those split-brain entries. Because there
2017 Sep 28
0
after hard reboot, split-brain happened, but nothing showed in gluster volume heal info command!
On Thu, Sep 28, 2017 at 11:41 AM, Zhou, Cynthia (NSB - CN/Hangzhou) < cynthia.zhou at nokia-sbell.com> wrote: > Hi, > > Thanks for the reply! > > I've checked [1]. But the problem is that there is nothing shown by the > command "gluster volume heal <volume-name> info", so these split-brain > entries can only be detected when an application tries to access them. > > I can find