
Displaying 20 results from an estimated 10000 matches similar to: "Re-adding an existing brick to a volume"

2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this. Remove attrs from the brick and delete the .glusterfs folder. Data stays in place. Add the brick to the volume. Since most of the data is the same as on the actual volume it does not need to be synced, and the heal operation finishes much faster. Do I have this right? Kind regards, Mitja On 25/02/2018 17:02, Vlad Kopylov wrote: > .gluster and attr already in
2018 Feb 25
0
Re-adding an existing brick to a volume
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect a brick back". What I did many times: delete .glusterfs and reset the attrs on the folder, connect the brick, and then update those attrs with stat commands; example here http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html Vlad On Sun, Feb 25, 2018
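A rough sketch of that reuse procedure, assuming a hypothetical volume myvol, a brick at server2:/data/brick1, and a plain replica 2 layout; the exact xattrs to clear and the replica count depend on your volume:

# on the server holding the old brick: clear the marker xattrs and the
# .glusterfs metadata, leaving the user data itself in place
setfattr -x trusted.glusterfs.volume-id /data/brick1
setfattr -x trusted.gfid /data/brick1
rm -rf /data/brick1/.glusterfs

# re-attach the brick (the replica count must match the intended layout)
gluster volume add-brick myvol replica 2 server2:/data/brick1

# trigger a full self-heal; since most files already match, it should
# finish much faster than syncing from scratch
gluster volume heal myvol full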
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? Kind regards, Mitja On 25/02/2018 13:55, Jim Kinney wrote: > gluster volume add-brick volname replica 3 arbiter 1 > brickhost:brickpath/to/new/arbitervol > > Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a > change in command will happen so it won't count the
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
Hi, It should be there, see https://review.gluster.org/#/c/14502/ BR, Martin > On 25 Feb 2018, at 15:52, Mitja Mihelič <mitja.mihelic at arnes.si> wrote: > > I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? > > Kind regards, > Mitja > > On 25/02/2018 13:55, Jim Kinney wrote:
2018 Feb 25
2
Convert replica 2 to replica 2+1 arbiter
Hi! I am using GlusterFS on CentOS7 with glusterfs-3.8.15 RPM version. I currently have a replica 2 running and I would like to get rid of the split-brain problem before it occurs. This is one of the possible solutions. Is it possible to add an arbiter to this volume? I have read in a thread from 2016 that this feature is planned for version 3.8. Is the feature available? If so, could you give
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a change in command will happen so it won't count the arbiter as a replica. On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič" <mitja.mihelic at arnes.si> wrote: >Hi! > >I am using GlusterFS on CentOS7 with
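A hedged sketch of the full sequence Jim describes, with arbiterhost and the brick path as placeholders; the arbiter brick must be a new, empty directory:

# create an empty directory for the arbiter brick on the third host
mkdir -p /data/gluster/arbitervol

# grow the volume from replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 arbiterhost:/data/gluster/arbitervol

# watch the arbiter brick populate its metadata
gluster volume heal volname info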
2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users, Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit. Suppose I have a Gluster volume made up of four 1 MB bricks, like this:
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
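Not an answer to the source-code question, but one related knob that influences where new files land as bricks fill up; a sketch using the test volume above (option availability and defaults vary by gluster version):

# avoid creating new files on bricks with less than 10% free space
# (existing files still grow on whichever brick already holds them)
gluster volume set test cluster.min-free-disk 10%

# check the current value
gluster volume get test cluster.min-free-disk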
2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody, Please help me fix a problem. I have a distributed-replicated volume between two servers. On each server I have 2 RAID-10 arrays that are replicated between the servers.
Brick gl1:/mnt/brick1/gm0    49153    0      Y    13910
Brick gl0:/mnt/brick0/gm0    N/A      N/A    N    N/A
Brick gl0:/mnt/brick1/gm0    N/A
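Not from the original thread, but a common recovery sketch when brick processes show offline like this, assuming the volume name gm0 from the output above:

# restart any brick processes that are not running, without disturbing clients
gluster volume start gm0 force

# confirm the bricks now show Online = Y
gluster volume status gm0

# kick off a full self-heal so the restarted bricks catch up
gluster volume heal gm0 full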
2010 Sep 30
1
Routing of outgoing packets
Hi! I am trying to use hping to check the latency of our network. Somehow things are not going to plan and I thought someone might be able to shed some light on the subject. Here is the setup (the IP addresses given here are fake, but they do represent the correct state of the networking setup):
vlan    interface    IP               mask
V2      eth0         192.168.20.20    32
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than
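A hedged sketch for narrowing down where the extra ~3G lives, using the brick path above and a placeholder volume name; the shell glob deliberately skips the hidden .glusterfs metadata directory, which can legitimately differ in size between replicas:

# per-directory usage on azathoth, ignoring the hidden .glusterfs metadata
du -s /var/local/brick0/data/* | sort -n > /tmp/du.azathoth

# run the same command on yog-sothoth, copy the result over, then compare
diff /tmp/du.azathoth /tmp/du.yog-sothoth

# entries with pending heals can also account for a size difference
gluster volume heal VOLNAME info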
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
I'm using gluster for a virt-store with 3x2 distributed/replicated servers for 16 qemu/kvm/libvirt virtual machines using image files stored in gluster and accessed via libgfapi. Eight of these disk images are standalone, while the other eight are qcow2 images which all share a single backing file. For the most part, this is all working very well. However, one of the gluster servers
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster > wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > > It would be great if you can consider configuring an arbiter or > > > replica 3 volume. > >
2008 Oct 15
1
Glusterfs performance with large directories
We at Wiseguys are looking into GlusterFS to run our Internet Archive. The archive stores webpages collected by our spiders. The test setup consists of three data machines, each exporting a volume of about 3.7TB, and one nameserver machine. File layout is such that each host has its own directory, for example the GlusterFS website would be located in:
2017 Aug 09
1
Gluster performance with VM's
Hi, community. Please help me with my problem. I have 2 Gluster nodes, with 2 bricks on each. Configuration: Node1 brick1 is replicated on Node0 brick0, and Node0 brick1 is replicated on Node1 brick0. Volume Name: gm0 Type: Distributed-Replicate Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1:
2018 Feb 01
2
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, My volume home is configured in replicate mode (version 3.12.4) with the bricks server1:/data/gluster/brick1 and server2:/data/gluster/brick1. server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it and did a > gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit
2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need to reset-brick if the brick path does not change. Just format and mount the replacement brick, then run gluster v start volname force. To start self-heal, just run gluster v heal volname full. On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote: > Hi, > > > My volume home is configured in replicate mode (version 3.12.4) with the bricks >
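A compact sketch of that procedure, using the volume home and brick path from the question; the device name is a placeholder and the filesystem choice is only an example:

# on server2: format and mount the replacement filesystem at the old brick path
mkfs.xfs /dev/sdX1                  # device name is a placeholder
mount /dev/sdX1 /data/gluster/brick1

# restart the missing brick process
gluster volume start home force

# trigger a full self-heal to repopulate the empty brick
gluster volume heal home full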
2014 Jan 03
1
SSSD and usermod
Hi! How can I get usermod working with SSSD/389DS? We have SSSD set up on our server and it uses 389DS. SSSD was enabled with the following command: authconfig --enablesssd --enablesssdauth --ldapbasedn=dc=example,dc=com --enableshadow --enablemkhomedir --enablelocauthorize --update Running for example "usermod -L username" returns: usermod: user 'username' does not exist in
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks on 6 of them and 2 bricks on the remaining 2. Full healing will just take ages... for just a single brick to resync! > gluster v status home
Status of volume: home
Gluster process    TCP Port    RDMA Port    Online    Pid
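One way to watch how far the resync of that single brick has progressed (a sketch; exact subcommands vary a little between gluster versions):

# number of entries still pending heal, per brick
gluster volume heal home statistics heal-count

# detailed list of entries still needing heal
gluster volume heal home info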
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > "In a replica 2 volume... If we set the client-quorum option to > > auto, then the first brick must always be up, irrespective of the > > status of the second brick. If only the second brick is up, the > > subvolume becomes read-only." > > > By default client-quorum is
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote: > On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote: > > > "In a replica 2 volume... If we set the client-quorum option to > > > auto, then the first brick must always be up, irrespective of the > > > status of the second brick. If only the second brick is up,
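For reference, a sketch of the quorum knobs being discussed; volname is a placeholder and the values shown are only the commonly suggested defaults, not a recommendation for this particular cluster:

# client-side quorum: with 'auto' on a replica 2 subvolume, the first brick
# must be up for writes, otherwise the subvolume goes read-only
gluster volume set volname cluster.quorum-type auto

# server-side quorum across the trusted pool
gluster volume set volname cluster.server-quorum-type server
gluster volume set all cluster.server-quorum-ratio 51%

# inspect the current setting
gluster volume get volname cluster.quorum-type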