similar to: Is the size of bricks limiting the size of files I can store?


2018 Apr 02
0
Is the size of bricks limiting the size of files I can store?
On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote: > On Mon, 2 Apr 2018, Nithya Balachandran wrote: > > > On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote: > > > > > Hi > > > > > > I've found something that works so weird I'm certain I have > > > missed how > > > gluster is supposed to be
2018 Apr 13
0
Is the size of bricks limiting the size of files I can store?
Sorry about the late reply, I missed seeing your mail. To begin with, what is your use-case? Sharding is currently supported only for virtual machine image storage use-case. It *could* work in other single-writer use-cases but it's only tested thoroughly for the vm use-case. If yours is not a vm store use-case, you might want to do some tests first to see if it works fine. If you find any
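For context, a minimal sketch of how sharding is typically enabled for a VM-store volume (the volume name and block size below are assumed, not taken from this thread); note that only files created after the option is set get sharded, existing files are not retroactively split:

    gluster volume set vmstore features.shard on
    gluster volume set vmstore features.shard-block-size 64MB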
2018 Apr 03
0
Is the size of bricks limiting the size of files I can store?
On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour <ante at update.uu.se> wrote: > On Mon, 2 Apr 2018, Nithya Balachandran wrote: > > On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote: >> >> >>> Hi >>> >>> I've found something that works so weird I'm certain I have missed how >>> gluster is supposed to
2018 Apr 02
0
Is the size of bricks limiting the size of files I can store?
On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote: > > Hi > > I've found something that works so weird I'm certain I have missed how > gluster is supposed to be used, but I can not figure out how. This is my > scenario. > > I have a volume, created from 16 nodes, each with a brick of the same > size. The total of that volume thus is in
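For illustration only (the numbers are assumed, not quoted from the thread): with 16 bricks of 1 TB each, the mounted volume reports roughly 16 TB of free space, but on a plain distributed volume each file is stored whole on exactly one brick, so no single file can exceed the free space of the brick it hashes to; that per-brick ceiling is what sharding is meant to lift.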
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/ The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link. On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote: >What shard corruption bug? bugzilla url? I'm running into some odd >behavior >in my lab with shards and RHEV/KVM data, trying to figure out if it's >related. >
2018 May 30
2
shard corruption bug
What shard corruption bug? bugzilla url? I'm running into some odd behavior in my lab with shards and RHEV/KVM data, trying to figure out if it's related. Thanks. On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote: > I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it > to settle. No problems. I am now running replica 4
2018 Apr 22
0
Reconstructing files from shards
So a stock oVirt-with-Gluster install that uses sharding: A. Can't safely have sharding turned off once files are in use. B. Can't be expanded with additional bricks. Ouch. On April 22, 2018 5:39:20 AM EDT, Gandalf Corvotempesta <gandalf.corvotempesta at gmail.com> wrote: >On Sun 22 Apr 2018 at 10:46, Alessandro Briosi <ab1 at metalit.com> >wrote: > >> Imho
2018 Feb 25
3
Convert replica 2 to replica 2+1 arbiter
I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? Kind regards, Mitja On 25/02/2018 13:55, Jim Kinney wrote: > gluster volume add-brick volname replica 3 arbiter 1 > brickhost:brickpath/to/new/arbitervol > > Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a > change in command will happen so it won't count the
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
Hi, It should be there, see https://review.gluster.org/#/c/14502/ BR, Martin > On 25 Feb 2018, at 15:52, Mitja Mihelič <mitja.mihelic at arnes.si> wrote: > > I must ask again, just to be sure. Is what you are proposing definitely supported in v3.8? > > Kind regards, > Mitja > > On 25/02/2018 13:55, Jim Kinney wrote:
2018 Jan 08
0
different names for bricks
I just noticed that gluster volume info foo and gluster volume heal foo statistics use different indices for brick numbers. Info uses 1-based but heal statistics uses 0-based.

gluster volume info clifford
Volume Name: clifford
Type: Distributed-Replicate
Volume ID: 0e33ff98-53e8-40cf-bdb0-3e18406a945a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
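For comparison, a rough sketch of the two numbering schemes (output abridged and paraphrased from memory, not copied from this thread):

    gluster volume info clifford            -> Brick1: ..., Brick2: ..., Brick3: ..., Brick4: ...
    gluster volume heal clifford statistics -> "Crawl statistics for brick no 0" ... "brick no 3"

So brick N in the heal statistics output corresponds to Brick N+1 in the info output.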
2017 Aug 08
0
How are bricks healed in Debian Jessie 3.11
On 08/08/2017 04:51 PM, Gerry O'Brien wrote: > Hi, > > How are bricks healed in Debian Jessie 3.11? Is it at the file or > block level? The scenario we have in mind is a 2-brick replica volume > for storing VM file systems in a self-service IaaS, e.g. OpenNebula. If > one of the bricks is off-line for a period of time, all the VM file > systems will have been
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it to settle. No problems. I am now running replica 4 (preparing to remove a brick and host to replica 3). On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote: > Il giorno ven 4 mag 2018 alle ore 14:06 Jim Kinney <jim.kinney at gmail. > com> > ha scritto: > > It stopped being an outstanding
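For reference, dropping back from replica 4 to replica 3 is normally done with remove-brick; a sketch with a hypothetical host and brick path, not taken from this thread:

    gluster volume remove-brick myvol replica 3 host4:/bricks/myvol/brick force

with heal status checked before the removed host is decommissioned.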
2017 Nov 06
0
gfid entries in volume heal info that do not heal
That took a while! I have the following stats:
4085169 files in both bricks
3162940 files only have a single hard link.
All of the files exist on both servers. bmidata2 (below) WAS running when bmidata1 died.

gluster volume heal clifford statistics heal-count
Gathering count of entries to be healed on volume clifford has been successful
Brick bmidata1:/data/glusterfs/clifford/brick/brick
Number of
2017 Sep 25
1
Adding bricks to an existing installation.
Sharding is not enabled. Ludwig On Mon, Sep 25, 2017 at 2:34 PM, <lemonnierk at ulrar.net> wrote: > Do you have sharding enabled ? If yes, don't do it. > If no I'll let someone who knows better answer you :) > > On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > > All, > > > > We currently have a Gluster installation which is made of 2
2017 Sep 25
0
Adding bricks to an existing installation.
Do you have sharding enabled ? If yes, don't do it. If no I'll let someone who knows better answer you :) On Mon, Sep 25, 2017 at 02:27:13PM -0400, Ludwig Gamache wrote: > All, > > We currently have a Gluster installation which is made of 2 servers. Each > server has 10 drives on ZFS. And I have a gluster mirror between these 2. > > The current config looks like: >
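For anyone in the same spot, a quick way to check whether sharding is enabled on a volume (volume name assumed):

    gluster volume get myvol features.shard

or look for features.shard: on under Options Reconfigured in the output of gluster volume info myvol.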
2017 Aug 08
2
How are bricks healed in Debian Jessie 3.11
Hi, How are bricks healed in Debian Jessie 3.11? Is it at the file or block level? The scenario we have in mind is a 2-brick replica volume for storing VM file systems in a self-service IaaS, e.g. OpenNebula. If one of the bricks is off-line for a period of time, all the VM file systems will have been modified when the brick comes back on-line. As some of these VM file systems are quite
2017 Nov 09
0
Adding a slack for communication?
On Wed, Nov 8, 2017 at 3:23 PM, Jim Kinney <jim.kinney at gmail.com> wrote: > The archival process of the mailing list makes searching for past issues > possible. Slack, and irc in general, is a more closed garden than a public > archived mailing list. > > That said, irc/slack is good for immediate interaction between people, say, > gluster user with a nightmare and a
2017 Oct 24
0
gfid entries in volume heal info that do not heal
I have 14,734 GFIDs that are different. All the different ones are only on the brick that was live during the outage and concurrent file copy-in. The brick that was down at that time has no GFIDs that are not also on the up brick. As the bricks are 10TB, the find is going to be a long-running process. I'm running several finds at once with GNU parallel, but it will still take some time.
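For readers hitting the same problem, the usual way to map a bare GFID back to a path is through the hard link kept under the brick's .glusterfs directory; a sketch with a hypothetical GFID and brick path:

    GFID=fa876a65-1234-4321-9876-abcdef012345
    BRICK=/data/glusterfs/clifford/brick/brick
    find "$BRICK" -samefile "$BRICK/.glusterfs/${GFID:0:2}/${GFID:2:2}/$GFID" -not -path '*/.glusterfs/*'

This prints the regular file sharing that inode, and is the kind of per-GFID find the poster is running in parallel.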
2018 Feb 25
0
Convert replica 2 to replica 2+1 arbiter
gluster volume add-brick volname replica 3 arbiter 1 brickhost:brickpath/to/new/arbitervol Yes. The replica 3 looks odd. Somewhere in 3.12 (?) or not until v4 a change in command will happen so it won't count the arbiter as a replica. On February 25, 2018 5:05:04 AM EST, "Mitja Mihelič" <mitja.mihelic at arnes.si> wrote: >Hi! > >I am using GlusterFS on CentOS7 with
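Spelled out with concrete (hypothetical) names, the replica 2 to replica 2+1 arbiter conversion looks like:

    gluster volume add-brick myvol replica 3 arbiter 1 arbiterhost:/bricks/myvol/arbiter

The arbiter brick holds only file names and metadata, not file data, so it can live on a much smaller disk than the two data bricks.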
2018 Jan 15
0
[Gluster-devel] Integration of GPU with glusterfs
On Mon, Jan 15, 2018 at 12:06 AM, Ashish Pandey <aspandey at redhat.com> wrote: > > It is disappointing to see the limitation being put by Nvidia on low cost > GPU usage on data centers. > https://www.theregister.co.uk/2018/01/03/nvidia_server_gpus/ > > We thought of providing an option in glusterfs by which we can control if > we want to use GPU or not. > So, the