
Displaying 20 results from an estimated 11000 matches similar to: "brick out of space, unmounted brick"

2017 Aug 09
1
Gluster performance with VMs
Hi community, please help me with my problem. I have 2 Gluster nodes, with 2 bricks on each. Configuration: Node1 brick1 is replicated on Node0 brick0, Node0 brick1 is replicated on Node1 brick0. Volume Name: gm0 Type: Distributed-Replicate Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf Status: Started Snapshot Count: 0 Number of Bricks: 2 x 2 = 4 Transport-type: tcp Bricks: Brick1:
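
A minimal sketch of how such a topology is usually inspected from the CLI, assuming the volume name gm0 taken from the snippet above (output format varies by GlusterFS version):

    # Show volume type, brick order and options
    gluster volume info gm0
    # Show which bricks are online, with their ports and PIDs
    gluster volume status gm0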
2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody, please help me to fix a problem. I have a distributed-replicated volume between two servers. On each server I have 2 RAID-10 arrays that are replicated between the servers. Brick gl1:/mnt/brick1/gm0 49153 0 Y 13910 Brick gl0:/mnt/brick0/gm0 N/A N/A N N/A Brick gl0:/mnt/brick1/gm0 N/A
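
A common way to bring an offline brick back up and resync it is sketched below; this assumes the volume is named gm0 as in the snippet and that the underlying filesystem on gl0 is mounted and healthy, which the thread does not confirm:

    # Restart only the brick processes that are down; running bricks are untouched
    gluster volume start gm0 force
    # Kick off self-heal and watch the pending-heal entries drain
    gluster volume heal gm0
    gluster volume heal gm0 info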
2009 Feb 23
1
Interleave or not
Let's say you had 4 servers and you wanted to set up replicate and distribute. Which method would be better: server sdb1 xen0 brick0 xen1 mirror0 xen2 brick1 xen3 mirror1 replicate block0 - brick0 mirror0 replicate block1 - brick1 mirror1 distribute unify - block0 block1 or server sdb1 sdb2 xen0 brick0 mirror3 xen1 brick1 mirror0 xen2 brick2 mirror1 xen3 brick3 mirror2 replicate block0 -
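
With the modern CLI (rather than the hand-written vol files of that era), the pairing is decided purely by the order of the bricks on the create command; a hypothetical sketch of the first layout, with made-up volume name and brick paths:

    # Consecutive bricks form a replica pair: (xen0,xen1) and (xen2,xen3);
    # distribution then spreads files across the two pairs
    gluster volume create distrep replica 2 \
        xen0:/export/sdb1/brick xen1:/export/sdb1/brick \
        xen2:/export/sdb1/brick xen3:/export/sdb1/brick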
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 12:00:29PM +0530, Karthik Subrahmanya wrote: > I will try to explain how you can end up in split-brain even with cluster > wide quorum: Yep, the explanation made sense. I hadn't considered the possibility of alternating outages. Thanks! > > > It would be great if you can consider configuring an arbiter or > > > replica 3 volume. > >
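
For reference, a replica 2 volume can normally be converted to an arbiter setup by adding one arbiter brick per replica pair; a hedged sketch with placeholder volume, host and path names:

    # The arbiter brick stores only metadata, so it can live on a small disk
    gluster volume add-brick VOLNAME replica 3 arbiter 1 arbiter-host:/bricks/arb/brick
    # Let self-heal populate the arbiter
    gluster volume heal VOLNAME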
2012 Nov 27
1
Performance after failover
Hey, all. I'm currently trying out GlusterFS 3.3. I've got two servers and four clients, all on separate boxes. I've got a Distributed-Replicated volume with 4 bricks, two from each server, and I'm using the FUSE client. I was trying out failover, currently testing for reads. I was reading a big file, using iftop to see which server was actually being read from. I put up an
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi! I am running a replica 3 volume. On server2 I wanted to move the brick to a new disk. I removed the brick from the volume: gluster volume remove-brick VOLUME rep 2 server2:/gluster/VOLUME/brick0/brick force I unmounted the old brick and mounted the new disk to the same location. I added the empty new brick to the volume: gluster volume add-brick VOLUME rep 3
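
The full swap the poster describes generally follows the pattern below (volume and brick paths are placeholders; note that the CLI keyword is spelled out as "replica"):

    # Drop the old brick from the replica set
    gluster volume remove-brick VOLNAME replica 2 server2:/old-disk/brick force
    # Mount the new disk, then add the empty brick back
    gluster volume add-brick VOLNAME replica 3 server2:/new-disk/brick
    # Rebuild its contents from the surviving replicas
    gluster volume heal VOLNAME full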
2018 Feb 25
0
Re-adding an existing brick to a volume
.glusterfs and the attrs are already in that folder, so it would not connect it as a brick. I don't think there is an option to "reconnect the brick back". What I did many times: delete .glusterfs and reset the attrs on the folder, connect the brick, and then update those attrs with stat commands. Example here: http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html Vlad On Sun, Feb 25, 2018
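
The attribute reset Vlad describes is typically done on the brick root itself; a hedged sketch (brick path is a placeholder, and this deliberately wipes GlusterFS metadata, so it is only appropriate on a brick you intend to rebuild from its replicas):

    # On the server holding the stale brick
    setfattr -x trusted.glusterfs.volume-id /bricks/brick0/data
    setfattr -x trusted.gfid /bricks/brick0/data
    rm -rf /bricks/brick0/data/.glusterfs
    # After add-brick, force a full heal to rebuild the metadata
    gluster volume heal VOLNAME full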
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this. Remove attrs from the brick and delete the .glusterfs folder. Data stays in place. Add the brick to the volume. Since most of the data is the same as on the actual volume it does not need to be synced, and the heal operation finishes much faster. Do I have this right? Kind regards, Mitja On 25/02/2018 17:02, Vlad Kopylov wrote: > .glusterfs and the attrs are already in
2014 Jun 27
1
geo-replication status faulty
Venky Shankar, can you follow up on these questions? I too have this issue and cannot resolve the reference to '/nonexistent/gsyncd'. As Steve mentions, the nonexistent reference in the logs looks like the culprit, especially since the ssh command being run is printed on an earlier line with the incorrect remote path. I have followed the configuration steps as documented in
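
One workaround reported for the '/nonexistent/gsyncd' path is to point the session at the real gsyncd binary on the slave; the option name and path below are assumptions that differ between versions and distributions, so treat this as a sketch only:

    # Hypothetical example; check the session's 'config' output for the real option name
    gluster volume geo-replication MASTERVOL slavehost::slavevol \
        config remote-gsyncd /usr/libexec/glusterfs/gsyncd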
2017 Nov 21
1
Brick and Subvolume Info
Hello, I have a Distributed-Replicate volume and I would like to know if it is possible to see which sub-volume a brick belongs to, e.g.: A Distributed-Replicate volume containing: Number of Bricks: 2 x 2 = 4 Brick1: node1.localdomain:/mnt/data1/brick1 Brick2: node2.localdomain:/mnt/data1/brick1 Brick3: node1.localdomain:/mnt/data2/brick2 Brick4: node2.localdomain:/mnt/data2/brick2 Is it possible
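
In a distributed-replicate volume the bricks printed by 'gluster volume info' are grouped into replica subvolumes in order, so with 2 x 2 = 4 Brick1/Brick2 form one replica pair and Brick3/Brick4 the other. The generated client vol file spells this out; a sketch, where the vol file path is an assumption that varies by version:

    gluster volume info VOLNAME
    # Show the replicate subvolumes and which client bricks feed each one
    grep -E 'cluster/replicate|subvolumes' /var/lib/glusterd/vols/VOLNAME/trusted-VOLNAME.tcp-fuse.vol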
2017 Sep 04
2
Slow performance of gluster volume
Hi all, I have a gluster volume used to host several VMs (managed through oVirt). The volume is a replica 3 with arbiter and the 3 servers use a 1 Gbit network for the storage. When testing with dd (dd if=/dev/zero of=testfile bs=1G count=1 oflag=direct) outside the volume (e.g. writing to /root/), the performance of the dd is reported to be ~ 700MB/s, which is quite decent. When testing the dd on
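
The comparison being described boils down to the following pair of runs (file and mount paths are placeholders; oflag=direct bypasses the page cache so the figures reflect the storage path rather than RAM):

    # Baseline on the local filesystem
    dd if=/dev/zero of=/root/testfile bs=1G count=1 oflag=direct
    # Same write through the Gluster FUSE mount
    dd if=/dev/zero of=/mnt/glustervol/testfile bs=1G count=1 oflag=direct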
2017 Sep 06
2
Slow performance of gluster volume
Hi Krutika, Is there anything in the profile indicating what is causing this bottleneck? In case I can collect any other info, let me know. Thanks On Sep 5, 2017 13:27, "Abi Askushi" <rightkicktech at gmail.com> wrote: Hi Krutika, Attached the profile stats. I enabled profiling and then ran some dd tests. Also, 3 Windows VMs are running on top of this volume but did not do any stress
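
Profile stats like the ones mentioned are normally collected with the volume profile commands; a minimal sketch, assuming the volume name vms that appears later in the thread:

    gluster volume profile vms start
    # ... run the dd tests or the VM workload ...
    gluster volume profile vms info
    gluster volume profile vms stop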
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
Dear all, I'm running GlusterFS 3.4 alpha2 together with oVirt 3.2. This is solely a test system and it doesn't have much data or anything important in it. Currently it has only 2 VMs running and disk usage is around 15 GB. I have been trying to set up geo-replication for disaster recovery testing. For geo-replication I did the following: All machines are running CentOS 6.4 and using
2017 Sep 05
3
Slow performance of gluster volume
Hi Krutika, I already have a preallocated disk on the VM. Now I am checking performance with dd on the hypervisors which have the gluster volume configured. I also tried several values of shard-block-size and I keep getting the same low values on write performance. Enabling client-io-threads also did not have any effect. The version of gluster I am using is glusterfs 3.8.12 built on May 11 2017
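
For reference, both tunables mentioned are ordinary volume options; the values below are illustrative only, not recommendations:

    # Larger shards mean fewer shard creations per large sequential write
    gluster volume set VOLNAME features.shard-block-size 64MB
    # Extra client-side I/O threads
    gluster volume set VOLNAME performance.client-io-threads on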
2017 Sep 05
0
Slow performance of gluster volume
I'm assuming you are using this volume to store vm images, because I see shard in the options list. Speaking from shard translator's POV, one thing you can do to improve performance is to use preallocated images. This will at least eliminate the need for shard to perform multiple steps as part of the writes - such as creating the shard and then writing to it and then updating the
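
A hedged example of creating a preallocated image, assuming raw images stored on the Gluster mount (file name, size and mount path are placeholders):

    # Fully preallocated, so shard never has to create shards on the write path
    qemu-img create -f raw -o preallocation=full /mnt/glustervol/vm1.img 50G
    # Or preallocate an existing sparse file in place
    fallocate -l 50G /mnt/glustervol/vm1.img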
2017 Sep 06
0
Slow performance of gluster volume
Do you see any improvement with 3.11.1, as that has a patch that improves perf for this kind of workload? Also, could you disable eager-lock and check if that helps? I see that the max time is being spent in acquiring locks. -Krutika On Wed, Sep 6, 2017 at 1:38 PM, Abi Askushi <rightkicktech at gmail.com> wrote: > Hi Krutika, > > Is there anything in the profile indicating what is
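
Disabling eager-lock, as suggested, is a single volume option toggle; a minimal sketch with a placeholder volume name:

    gluster volume set VOLNAME cluster.eager-lock off
    # Turn it back on if it makes no measurable difference
    gluster volume set VOLNAME cluster.eager-lock on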
2017 Sep 06
2
Slow performance of gluster volume
I tried to follow the steps from https://wiki.centos.org/SpecialInterestGroup/Storage to install the latest gluster on the first node. It installed 3.10 and not 3.11. I am not sure how to install 3.11 without compiling it. Then, when I tried to start gluster on the node, the bricks were reported down (the other 2 nodes still have 3.8). Not sure why. The logs were showing the below (even after rebooting the
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional digging myself: azathoth replicates with yog-sothoth, so I compared their brick directories. `ls -R /var/local/brick0/data | md5sum` gives the same result on both servers, so the filenames are identical in both bricks. However, `du -s /var/local/brick0/data` shows that azathoth has about 3G more data (445G vs 442G) than
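
One quick check for a size gap like this is whether either side still has pending heals; a sketch with a placeholder volume name, since the snippet does not show it:

    # Entries listed here still differ between the two replicas
    gluster volume heal VOLNAME info
    # Per-brick counts of files awaiting heal
    gluster volume heal VOLNAME statistics heal-count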
2017 Sep 08
0
Slow performance of gluster volume
The following changes resolved the perf issue: added to /etc/glusterfs/glusterd.vol the option 'option rpc-auth-allow-insecure on' and restarted glusterd, then set the volume option 'gluster volume set vms server.allow-insecure on'. I am now reaching the max network bandwidth and the performance of the VMs is quite good. Did not upgrade glusterd. As a next step I am thinking of upgrading gluster to 3.12 + test
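
Spelled out, the fix amounts to one glusterd setting plus one volume option (the restart command assumes a systemd-based host):

    # In /etc/glusterfs/glusterd.vol, inside the management volume block:
    #     option rpc-auth-allow-insecure on
    systemctl restart glusterd
    # Then allow unprivileged client ports on the volume itself
    gluster volume set vms server.allow-insecure on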
2017 Sep 10
2
Slow performance of gluster volume
Great to hear! ----- Original Message ----- > From: "Abi Askushi" <rightkicktech at gmail.com> > To: "Krutika Dhananjay" <kdhananj at redhat.com> > Cc: "gluster-user" <gluster-users at gluster.org> > Sent: Friday, September 8, 2017 7:01:00 PM > Subject: Re: [Gluster-users] Slow performance of gluster volume > > Following