similar to: How to trigger a resync of a newly replaced empty brick in replicate config ?

Displaying 20 results from an estimated 7000 matches similar to: "How to trigger a resync of a newly replaced empty brick in replicate config ?"

2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need to use reset-brick if the brick path does not change. Format and mount the replacement brick, then run gluster v start volname force. To start the self-heal, just run gluster v heal volname full. On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote: > Hi, > > > My volume home is configured in replicate mode (version 3.12.4) with the bricks >
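The reply above can be sketched as a command sequence. The volume name `home` comes from the thread; the device, filesystem options, and mount point are assumptions for illustration:

```shell
# Sketch of replacing a failed brick in-place (same brick path), per the reply above.
# /dev/sdb1 and the mount point are hypothetical; adapt to the real layout.

mkfs.xfs -i size=512 /dev/sdb1          # format the replacement disk
mount /dev/sdb1 /data/glusterfs/home    # mount it at the original brick path

# Restart the volume so glusterd spawns a brick process on the empty brick
gluster volume start home force

# Trigger a full self-heal so the replica copies repopulate the new brick
gluster volume heal home full

# Watch progress
gluster volume heal home info
```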
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in replicate-distribute, with 4 bricks each on 6 of them and 2 bricks on the remaining 2. Full healing will just take ages... for just a single brick to resync! > gluster v status home volume status home Status of volume: home Gluster process TCP Port RDMA Port Online Pid
2010 Jan 03
2
Where is log file of GlusterFS 3.0?
I cannot find the log file of GlusterFS 3.0! In the past, I installed GlusterFS 2.0.6 without problems, and the server and client log files were placed in /var/log/glusterfs/... But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit, 4 servers + 1 client), I started the GlusterFS servers and client, and typing *df -H* at the client gives: "Transport endpoint is not connected" *I want to detect the BUG, but I cannot find
2013 Dec 09
3
Gluster infrastructure question
Heyho guys, I've been running glusterfs for years in a small environment without big problems. Now I'm going to use GlusterFS for a bigger cluster, but I have some questions :) Environment: * 4 servers * 20 x 2TB HDD each * RAID controller * RAID 10 * 4x bricks => replicated, distributed volume * Gluster 3.4 1) I'm wondering whether I can
2018 Jan 12
1
Reading over than the file size on dispersed volume
Hi All, I'm using gluster as a dispersed volume and I need to ask about a very serious issue. I have 3 servers and there are 9 bricks. My volume looks like this: ------------------------------------------------------ Volume Name: TEST_VOL Type: Disperse Volume ID: be52b68d-ae83-46e3-9527-0e536b867bcc Status: Started Snapshot Count: 0 Number of Bricks: 1 x (6 + 3) = 9 Transport-type: tcp Bricks:
2011 Oct 17
1
brick out of space, unmounted brick
Hello Gluster users, Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change these behaviors. My experience is with glusterfs 3.2.4 on CentOS 6 64-bit. Suppose I have a Gluster volume made up of four 1 MB bricks, like this: Volume Name: test Type: Distributed-Replicate Status: Started Number of
2018 Feb 28
1
Intermittent mount disconnect due to socket poller error
We've been on the Gluster 3.7 series for several years with things pretty stable. Given that it's reached EOL, yesterday I upgraded to 3.13.2. Every Gluster mount and server was disabled, then brought back up after the upgrade, changing the op-version to 31302 and then trying it all out. It went poorly. Every sizable read and write (hundreds of MB) led to 'Transport endpoint not
2011 Oct 05
1
Directory listings not working
Hello, I just finished installing gluster on two machines in server mode in EC2. I have mounted it via fuse on one of the boxes. Here is my volume info: # gluster volume info Volume Name: fast Type: Stripe Status: Started Number of Bricks: 2 Transport-type: tcp Bricks: Brick1: server1:/data2 Brick2: server2:/data All of this works great, and I can write files at a fairly high throughput.
2017 Jun 01
1
Restore a node in a replicating Gluster setup after data loss
Hi We have a Replica 2 + Arbiter Gluster setup with 3 nodes Server1, Server2 and Server3 where Server3 is the Arbiter node. There are several Gluster volumes ontop of that setup. They all look a bit like this: gluster volume info gv-tier1-vm-01 [...] Number of Bricks: 1 x (2 + 1) = 3 [...] Bricks: Brick1: Server1:/var/data/lv-vm-01 Brick2: Server2:/var/data/lv-vm-01 Brick3:
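One way to resynchronise a rebuilt node is the `reset-brick` command. This sketch is not from the thread itself: it assumes Gluster ≥ 3.9 (where `reset-brick` is available) and reuses the volume and brick names shown in the snippet:

```shell
# Sketch: bring a rebuilt Server2 back into the replica, reusing the same brick path.
# Volume and path names are from the snippet; the procedure is a suggested approach.

# Tell gluster the brick is about to be rebuilt
gluster volume reset-brick gv-tier1-vm-01 Server2:/var/data/lv-vm-01 start

# Recreate the (now empty) brick directory on the restored node, then reattach it
gluster volume reset-brick gv-tier1-vm-01 \
    Server2:/var/data/lv-vm-01 Server2:/var/data/lv-vm-01 commit force

# Self-heal repopulates the brick from the surviving replica and the arbiter metadata
gluster volume heal gv-tier1-vm-01 full
```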
2011 May 10
3
ERROR: -91 after Kernel Upgrade
Hey guys, I have an OCFS2 cluster mounted on 4 xen servers (gentoo). Today I upgraded the xen kernel for tests on one server (server2) from 2.6.34-xen to 2.6.38-xen-r1. After the reboot, the server couldn't mount the ocfs2 device anymore. ocfs2-tools version: sys-fs/ocfs2-tools-1.4.3 Modules are loaded, and /config type configfs and /dlm type ocfs2_dlmfs are mounted. server2 ~ # mount
2017 Sep 03
3
Poor performance with shard
Hey everyone! I have deployed gluster on 3 nodes with 4 SSDs each and 10Gb Ethernet connection. The storage is configured with 3 gluster volumes, every volume has 12 bricks (4 bricks on every server, 1 per ssd in the server). With the 'features.shard' off option my writing speed (using the 'dd' command) is approximately 250 Mbs and when the feature is on the writing speed is
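When shard hurts write throughput, the usual knob to examine is the shard block size. This is a sketch, not a fix confirmed by the thread: the option names are real gluster options, but the volume name and the 64MB value are assumptions:

```shell
# Sketch: inspect and tune sharding on a volume ('myvol' is a placeholder).
# A larger shard size reduces per-shard overhead for big sequential writes.

gluster volume get myvol features.shard-block-size   # show the current size

gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB
```

Note that changing the block size only affects newly created files; existing files keep the shard size they were written with.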
2011 Jan 14
1
mixing tcp/ip and ib/rdma in distributed replicated volume for disaster recovery.
Hi, we would like to build a gluster storage system that combines our need for performance with our need for disaster recovery. I saw a couple of posts indicating that this is possible (http://gluster.org/pipermail/gluster-users/2010-February/003862.html) but am not 100% clear if that is possible. Let's assume I have a total of 6 storage servers and bricks and want to spread them across 2
2011 Aug 01
1
[Gluster 3.2.1] Replication issues on a two-brick volume
Hello, I installed GlusterFS one month ago, and replication has many issues. First of all, our infrastructure: 2 storage arrays of 8 TB in replication mode... We have our backup files on these arrays, so 6 TB of data. I want to replicate the data to the second storage array, so I used this command: # gluster volume rebalance REP_SVG migrate-data start And gluster started to replicate; in 2 weeks
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi, Maybe someone can point me to documentation or explain this? I can't find it myself. Do we have any other useful resources besides doc.gluster.org? As I see it, many gluster options are not described there, or there is no explanation of what they do... On 2018-03-12 15:58, Anatoliy Dmytriyev wrote: > Hello, > > We have a very fresh gluster 3.10.10 installation. > Our volume
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after the 10% gluster disk space reservation). For some reason I can't 'heal' the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
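For an error like this, the usual first check is whether every brick process is actually online. A sketch, using the volume name from the snippet:

```shell
# Sketch: verify all brick processes are up before retrying a heal.
# 'gv0' is the volume name from the snippet.

gluster volume status gv0          # each brick should show Online: Y and a Pid

# If a brick process is down, force-start the volume to respawn it
gluster volume start gv0 force

# Note: on a purely distributed volume there are no replica copies,
# so 'gluster volume heal' has nothing to heal, as the replies in this
# thread go on to explain.
```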
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi! I am running a replica 3 volume. On server2 I wanted to move the brick to a new disk. I removed the brick from the volume: gluster volume remove-brick VOLUME rep 2 server2:/gluster/VOLUME/brick0/brick force I unmounted the old brick and mounted the new disk to the same location. I added the empty new brick to the volume: gluster volume add-brick VOLUME rep 3
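The steps in the snippet can be sketched end-to-end. Names come from the snippet (the CLI keyword is `replica`, which the snippet abbreviates as `rep`); the final heal step is an assumption about what follows the truncation:

```shell
# Sketch: move a brick to a new disk on server2 in a replica 3 volume.
# VOLUME and the brick path are the placeholders used in the snippet.

# Drop the old brick (the volume temporarily runs as replica 2)
gluster volume remove-brick VOLUME replica 2 \
    server2:/gluster/VOLUME/brick0/brick force

# Swap the disk: unmount the old one, mount the new one at the same path,
# then add the empty brick back, restoring replica 3
gluster volume add-brick VOLUME replica 3 \
    server2:/gluster/VOLUME/brick0/brick

# Let self-heal repopulate the new brick from the other two replicas
gluster volume heal VOLUME full
```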
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume type first? Cheers, Laura B On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > Hi Anatoliy, > > The heal command is basically used to heal any mismatching contents > between replica copies of the files. > For the command "gluster volume heal <volname>"
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik, Thanks a lot for the explanation. Does it mean a distributed volume's health can be checked only by the "gluster volume status" command? And one more question: cluster.min-free-disk is 10% by default. What kind of "side effects" could we face if this option were reduced to, for example, 5%? Could you point to any best-practice document(s)? Regards, Anatoliy
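Inspecting and changing that reservation looks like the sketch below. The 5% value is the figure the poster asks about, not a recommendation, and `gv0` is assumed from earlier in the thread:

```shell
# Sketch: inspect and lower the reserved-space watermark on a volume.

gluster volume get gv0 cluster.min-free-disk   # default: 10%

# Lower the reservation to 5% -- bricks then accept new files until only
# 5% free space remains, which raises the risk of bricks filling up and
# of uneven file placement as DHT avoids nearly-full bricks
gluster volume set gv0 cluster.min-free-disk 5%
```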
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
On Wed, Mar 14, 2018 at 5:42 PM, Karthik Subrahmanya <ksubrahm at redhat.com> wrote: > > > On Wed, Mar 14, 2018 at 3:36 PM, Anatoliy Dmytriyev <tolid at tolid.eu.org> > wrote: > >> Hi Karthik, >> >> >> Thanks a lot for the explanation. >> >> Does it mean a distributed volume health can be checked only by "gluster >> volume
2017 Nov 15
1
unable to remove brick, please help
Hi, I am trying to remove a brick, from a server which is no longer part of the gluster pool, but I keep running into errors for which I cannot find answers on google. [root at virt2 ~]# gluster peer status Number of Peers: 3 Hostname: srv1 Uuid: 2bed7e51-430f-49f5-afbc-06f8cec9baeb State: Peer in Cluster (Disconnected) Hostname: srv3 Uuid: 0e78793c-deca-4e3b-a36f-2333c8f91825 State: Peer in
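A common way out of this state is sketched below. This is not from the thread: `srv1` is taken from the peer listing, VOLUME and the brick path are hypothetical, and the right remedy depends on the actual volume layout:

```shell
# Sketch: removing a brick that lives on a peer which has left the pool.
# Replace VOLUME and the brick path with real values; srv1 is from the snippet.

# First check which bricks the volume still references
gluster volume info VOLUME

# Remove the stale brick; 'force' skips data migration since the peer is gone
gluster volume remove-brick VOLUME srv1:/path/to/brick force

# Then detach the disconnected peer from the pool
gluster peer detach srv1 force
```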