similar to: Single brick expansion

Displaying 20 results from an estimated 40000 matches similar to: "Single brick expansion"

2017 Jun 30
0
How to shutdown a node properly ?
Yes, but why does killing gluster notify all clients while a graceful shutdown doesn't? I think this is a bug: if I'm shutting down a server, it's obvious that all clients should stop connecting to it. On 30 Jun 2017 3:24 AM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the
2017 Jun 30
2
How to shutdown a node properly ?
On 06/30/2017 12:40 AM, Renaud Fortier wrote: > > On my nodes, when I use the systemd script to kill gluster (service > glusterfs-server stop) only glusterd is killed. Then I guess the > shutdown doesn't kill everything! > Killing glusterd does not kill the other gluster processes. When you shut down a node, everything obviously gets killed, but the client does not get notified
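A graceful shutdown, in other words, means stopping every gluster process, not just glusterd, so clients see the TCP connections close instead of waiting out the ping timeout. A minimal sketch, assuming a systemd-based node (the service name and helper-script path vary by distribution, so treat them as illustrative):

    # Stop the management daemon (this only kills glusterd itself)
    systemctl stop glusterd

    # Brick and auxiliary daemons keep running; stop them explicitly so
    # clients get a clean connection close instead of a 42s timeout
    pkill glusterfsd    # brick processes
    pkill glusterfs     # self-heal, NFS and other auxiliary processes

    # Many packages ship a helper that does this in the right order:
    # /usr/share/glusterfs/scripts/stop-all-gluster-processes.sh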
2017 Oct 05
2
data corruption - any update?
On 4 October 2017 at 23:34, WK <wkmail at bneit.com> wrote: > Just so I know. > > Is it correct to assume that this corruption issue is ONLY involved if you > are doing rebalancing with sharding enabled? > > So if I am not doing rebalancing I should be fine? > That is correct. > -bill > > > > On 10/3/2017 10:30 PM, Krutika Dhananjay wrote: > >
2017 Oct 13
1
small files performance
Where did you read 2K IOPS? Each disk is able to do about 75 IOPS as I'm using SATA disks; getting even close to 2,000 is impossible. On 13 Oct 2017 9:42 AM, "Szymon Miotk" <szymon.miotk at gmail.com> wrote: > Depends what you need. > 2K IOPS for small-file writes is not a bad result. > In my case I had a system that was just poorly written and it was >
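The arithmetic behind that: with replica 2, every write lands on two bricks, so the aggregate random-write ceiling is roughly (number of disks x 75) / 2 before caches help. A hedged way to measure small-file IOPS on a mounted volume, assuming fio is installed and with the mount point /mnt/glustervol purely illustrative:

    # 4K random writes with the page cache bypassed, 60s steady-state run
    fio --name=smallfile --directory=/mnt/glustervol \
        --ioengine=libaio --rw=randwrite --bs=4k --direct=1 --size=256M \
        --numjobs=4 --iodepth=16 --runtime=60 --time_based --group_reporting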
2017 Jun 29
0
How to shutdown a node properly ?
On my nodes, when I use the systemd script to kill gluster (service glusterfs-server stop) only glusterd is killed. Then I guess the shutdown doesn't kill everything! From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com] Sent: 29 June 2017 13:41 To: Ravishankar N <ravishankar at redhat.com> Cc: gluster-users at gluster.org; Renaud Fortier <Renaud.Fortier at
2017 Jul 03
0
Very slow performance on Sharded GlusterFS
Hi, I want to give an update on this. I also tested READ speed. It seems the sharded volume has a lower read speed than the striped volume. This machine has 24 cores with 64GB of RAM. I really don't think it's caused by a weak system. A stripe is kind of a shard, but with a fixed size based on the stripe value / file size. Hence, I would expect at least the same speed or maybe a little slower. What I get is
2017 Jun 30
3
Very slow performance on Sharded GlusterFS
I already tried 512MB but retried again now and the results are the same. Both without tuning; stripe 2 replica 2: dd performs ~250 MB/s but shard gives 77 MB/s. I attached two logs (shard and stripe logs). Note: I also noticed that you said "order". Do you mean that when we create via volume set we have to specify an order for the bricks? I thought gluster handles that (and does the math) itself. Gencer
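For comparison runs like this, a direct-I/O dd takes the page cache out of the picture so both volume types are measured on equal footing; a sketch, with the mount points purely illustrative:

    # Sequential write test against each mount, bypassing the page cache
    dd if=/dev/zero of=/mnt/stripe/testfile bs=1M count=1024 oflag=direct
    dd if=/dev/zero of=/mnt/shard/testfile  bs=1M count=1024 oflag=direct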
2017 Jun 29
4
How to shutdown a node properly ?
The init.d/systemd script doesn't kill gluster automatically on reboot/shutdown? On 29 Jun 2017 5:16 PM, "Ravishankar N" <ravishankar at redhat.com> wrote: > On 06/29/2017 08:31 PM, Renaud Fortier wrote: > > Hi, > > Every time I shut down a node, I lose access (from clients) to the volumes > for 42 seconds (network.ping-timeout). Is there a special way to
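Those 42 seconds are the default value of network.ping-timeout. It can be inspected and tuned per volume, though very low values risk spurious disconnects under load; a sketch, with <volname> standing in for the real volume name:

    gluster volume get <volname> network.ping-timeout      # default is 42
    gluster volume set <volname> network.ping-timeout 10   # shorter client hang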
2017 Oct 11
0
data corruption - any update?
Just to clarify, as I'm planning to put gluster in production (after fixing some issues, but for that I need the community's help): corruption happens only in these cases: - volume with shard enabled AND - rebalance operation. In any other case, corruption should not happen (or at least is not known to happen). So, what if I have to replace a failed brick/disk? Will this trigger a rebalance and
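For what it's worth, replacing a failed brick goes through replace-brick and the self-heal daemon rather than through rebalance, so it should not hit the rebalance code path discussed above; a sketch with illustrative host and brick paths:

    # Swap the dead brick for a new one; self-heal (not rebalance) copies data
    gluster volume replace-brick myvol \
        server2:/bricks/old server2:/bricks/new commit force
    gluster volume heal myvol info    # watch the heal catch up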
2018 May 30
2
shard corruption bug
What shard corruption bug? Bugzilla URL? I'm running into some odd behavior in my lab with shards and RHEV/KVM data, and am trying to figure out if it's related. Thanks. On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote: > I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it > to settle. No problems. I am now running replica 4
2017 Aug 30
0
single brick logging errors endlessly
Hey gluster experts, We have a 20-physical-server, replica 2, 40-brick cluster, and the first brick is showing errors such as in the attached paste. It's around a 1PB system which is nearly full. https://paste.ee/p/Dqdde This seems to be a file-name-too-long error, as the link is going ../folder/../folder/../folder/../folder/ around 30+ times. Any suggestions as to why this has occurred and what we
2018 Apr 22
4
Reconstructing files from shards
On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> wrote: > IMHO the easiest path would be to turn off sharding on the volume and > simply do a copy of the files (to a different directory, or rename and > then copy, i.e.) > > This should simply store the files without sharding. > If you turn off sharding on a sharded volume with data in it, all sharded
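As background for this thread: shards live in a hidden .shard directory on the bricks, named <gfid>.1, <gfid>.2, ... after the base file's GFID, with the base file holding the first shard-sized chunk. A manual reconstruction sketch, assuming direct root access to a single brick that holds all pieces (paths and the GFID value are illustrative, and this naive concatenation is only safe if no shard in the sequence is missing or sparse):

    # Read the base file's GFID from its extended attributes
    getfattr -n trusted.gfid -e hex /bricks/b1/vmdir/disk.img

    # Append the shards to a copy of the base file, in numeric order
    GFID=25340676-1dd2-4d55-bf23-0a0b6e2c7af1   # example; the xattr prints hex, add dashes
    cp /bricks/b1/vmdir/disk.img /tmp/restored.img
    for s in $(ls /bricks/b1/.shard | grep "^$GFID\." | sort -t. -k2 -n); do
        cat "/bricks/b1/.shard/$s" >> /tmp/restored.img
    done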
2018 Apr 22
0
Reconstructing files from shards
So a stock oVirt-with-gluster install that uses sharding: A. can't safely have sharding turned off once files are in use, and B. can't be expanded with additional bricks. Ouch. On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: >On Sun, 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> >wrote: > >> IMHO
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/ The major issue in 3.12.6 is not present in 3.12.7. The Bugzilla ID is listed in the link. On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote: >What shard corruption bug? Bugzilla URL? I'm running into some odd >behavior >in my lab with shards and RHEV/KVM data, trying to figure out if it's >related. >
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few minutes. SIGTERM, on the other hand, causes a crash, but this time it is not a read-only remount, but around 10 IOPS tops and 2 IOPS on average. -ps On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote: > I currently only have a Windows 2012 R2 server VM in testing on top of > the gluster storage,
2017 Jul 05
2
[New Release] GlusterD2 v4.0dev-7
After nearly 3 months, we have another preview release for GlusterD-2.0. The highlights for this release are: - GD2 now uses an auto-scaling etcd cluster, which automatically selects and maintains the required number of etcd servers in the cluster. - Preliminary support for volume expansion has been added. (Note that rebalancing is not available yet.) - An end-to-end functional testing framework
2017 Sep 23
3
EC 1+2
Is it possible to create a dispersed volume 1+2? (Almost the same as replica 3, the same as RAID-6.) If yes, how many servers do I have to add in the future to expand the storage? 1 or 3?
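For context: gluster requires the brick count of a disperse set to exceed twice the redundancy, so a 1+2 layout is rejected; the smallest set that survives two failures is 3+2, and expansion then happens one whole set at a time. A sketch, with server names purely illustrative:

    # 5 bricks, redundancy 2 (3 data + 2 parity), tolerates two failures
    gluster volume create ecvol disperse 5 redundancy 2 \
        server{1..5}:/bricks/ecvol
    # Growing the volume means adding another full disperse set (5 bricks)
    gluster volume add-brick ecvol server{6..10}:/bricks/ecvol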
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replica 4 (preparing to remove a brick and a host to get to replica 3). On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote: > On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com> > wrote: > > It stopped being an outstanding
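Going from replica 4 back to replica 3 is done by removing one brick per replica set while lowering the count in the same command; a sketch with illustrative names:

    # Drop one replica leg and reduce the replica count at the same time
    gluster volume remove-brick myvol replica 3 server4:/bricks/myvol force
    gluster volume info myvol    # confirm: Number of Bricks: 1 x 3 = 3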
2017 Jul 01
0
Very slow performance on Sharded GlusterFS
I did the changes (one brick from the 09th server and one replica from the 10th server, and continued in this order) and re-tested. Nothing changed. Still slow. (Exactly the same result.) -Gencer. From: Gandalf Corvotempesta [mailto:gandalf.corvotempesta at gmail.com] Sent: Friday, June 30, 2017 8:19 PM To: gencer at gencgiyen.com Cc: Krutika Dhananjay <kdhananj at redhat.com>; gluster-user
2017 Jun 05
1
Rebalance + VM corruption - current status and request for feedback
Great, thanks! On 5 Jun 2017 6:49 AM, "Krutika Dhananjay" <kdhananj at redhat.com> wrote: > The fixes are already available in 3.10.2, 3.8.12 and 3.11.0 > > -Krutika > > On Sun, Jun 4, 2017 at 5:30 PM, Gandalf Corvotempesta < > gandalf.corvotempesta at gmail.com> wrote: > >> Great news. >> Is this planned to be published in next