Displaying 20 results from an estimated 1600 matches similar to: "Is the size of bricks limiting the size of files I can store?"
2018 Apr 03
0
Is the size of bricks limiting the size of files I can store?
On Mon, Apr 2, 2018 at 11:37 PM, Andreas Davour <ante at update.uu.se> wrote:
> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>
> On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
>>
>>
>>> Hi
>>>
I've found something that behaves so strangely that I'm certain I have missed how
gluster is supposed to
2018 Apr 13
0
Is the size of bricks limiting the size of files I can store?
Sorry about the late reply, I missed seeing your mail.
To begin with, what is your use-case? Sharding is currently supported
only for the virtual machine image storage use-case.
It *could* work in other single-writer use-cases, but it's only tested
thoroughly for the VM use-case.
If yours is not a VM store use-case, you might want to run some tests
first to see if it works fine.
If you find any
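For such a test, sharding can be switched on per volume. A minimal sketch (the volume name `testvol` is hypothetical; `features.shard` and `features.shard-block-size` are real volume options, but check the defaults for your gluster version):

```shell
# Enable sharding on a scratch volume before writing any data
# (sharding only applies to files created after it is enabled).
gluster volume set testvol features.shard on

# Optionally pick a shard size for the test (64MB is the usual default).
gluster volume set testvol features.shard-block-size 64MB
```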
2018 Apr 13
0
Is the size of bricks limiting the size of files I can store?
On April 12, 2018 3:48:32 PM EDT, Andreas Davour <ante at Update.UU.SE> wrote:
>On Mon, 2 Apr 2018, Jim Kinney wrote:
>
>> On Mon, 2018-04-02 at 20:07 +0200, Andreas Davour wrote:
>>> On Mon, 2 Apr 2018, Nithya Balachandran wrote:
>>>
>>>> On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
>>>>
2018 Apr 02
0
Is the size of bricks limiting the size of files I can store?
On 2 April 2018 at 14:48, Andreas Davour <ante at update.uu.se> wrote:
>
> Hi
>
> I've found something that behaves so strangely that I'm certain I have missed
> how gluster is supposed to be used, but I cannot figure out how. This is my
> scenario.
>
> I have a volume, created from 16 nodes, each with a brick of the same
> size. The total of that volume thus is in
2018 Jan 08
0
different names for bricks
I just noticed that gluster volume info foo and gluster volume heal
foo statistics use different indices for brick numbers. Info is
1-based but heal statistics is 0-based.
gluster volume info clifford
Volume Name: clifford
Type: Distributed-Replicate
Volume ID: 0e33ff98-53e8-40cf-bdb0-3e18406a945a
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1:
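The mismatch is easy to see side by side (a sketch; the volume name `clifford` is from the report above, and the exact heal-statistics wording may vary by version):

```shell
# "volume info" numbers bricks from 1:
gluster volume info clifford | grep -E '^Brick[0-9]'
#   Brick1: ...
#   Brick2: ...

# "volume heal ... statistics" numbers the same bricks from 0:
gluster volume heal clifford statistics | grep -i 'brick'
#   Crawl statistics for brick no 0 ...
#   Crawl statistics for brick no 1 ...
```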
2018 May 04
0
shard corruption bug
I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left
it to settle. No problems. I am now running replica 4 (preparing to
remove a brick and host to go back to replica 3).
On Fri, 2018-05-04 at 14:24 +0000, Gandalf Corvotempesta wrote:
> On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com>
> wrote:
> > It stopped being an outstanding
2018 May 30
2
shard corruption bug
What shard corruption bug? Bugzilla URL? I'm running into some odd
behavior in my lab with shards and RHEV/KVM data, and I'm trying to
figure out if it's related.
Thanks.
On Fri, May 4, 2018 at 11:13 AM, Jim Kinney <jim.kinney at gmail.com> wrote:
> I upgraded my ovirt stack to 3.12.9, added a brick to a volume and left it
> to settle. No problems. I am now running replica 4
2018 May 30
0
shard corruption bug
https://docs.gluster.org/en/latest/release-notes/3.12.6/
The major issue in 3.12.6 is not present in 3.12.7. Bugzilla ID listed in link.
On May 29, 2018 8:50:56 PM EDT, Dan Lavu <dan at redhat.com> wrote:
>What shard corruption bug? bugzilla url? I'm running into some odd
>behavior
>in my lab with shards and RHEV/KVM data, trying to figure out if it's
>related.
>
2018 May 09
0
Some more questions
Correct. A new server will NOT add space in this manner. But the
original Q was about rebalancing after adding a 4th server. If you are
using distributed/replication, then yes, a new server will be adding a
portion of its space to add more space to the cluster.
But in a purely replica mode, nope.
On Wed, 2018-05-09 at 19:25 +0000, Gandalf Corvotempesta wrote:
> On Wed, 9 May 2018
2018 May 09
0
Some more questions
It all depends on how you are set up on the distribute. Think RAID 10
with 4 drives - each pair mirrors (replicate) and data is striped
across the pairs (distribute).
On Wed, 2018-05-09 at 19:34 +0000, Gandalf Corvotempesta wrote:
> On Wed, 9 May 2018 at 21:31, Jim Kinney <jim.kinney at gmail.com>
> wrote:
> > correct. a new server will NOT add space in this manner. But the
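The RAID 10 analogy maps onto a volume layout like this (a sketch with hypothetical hostnames and brick paths):

```shell
# 2 x 2 distributed-replicate volume: consecutive bricks form a replica
# pair (the mirror half of the analogy), and files are distributed
# across the two pairs (the striping half).
gluster volume create vol0 replica 2 \
    host1:/bricks/b1 host2:/bricks/b1 \
    host3:/bricks/b1 host4:/bricks/b1
gluster volume start vol0
```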
2018 May 30
1
peer detach fails
All,
I added a third peer as an arbiter brick host to a replica 2 cluster.
Then I realized I can't use it, since it has no InfiniBand like the
other two hosts (InfiniBand and Ethernet for clients). So I removed the
new arbiter bricks from all of the volumes. However, I can't detach the
peer, as it keeps saying there are bricks it hosts. Nothing in volume
status or info shows that host to be
2017 Sep 05
0
returning from a failed RAID
All,
I had a "bad timing" event where I lost 3 drives in a RAID6 array and
the structure of all of the LVM pools and nodes was lost.
This array was 1/2 of a redundant (replica 2) gluster config (will be
adding additional 3rd soon for split brain/redundancy with failure
issues).
The failed drives were replaced, the array rebuilt, all the thin_pools
and thin_volumes recreated, LUKS
2018 May 09
0
3.12, ganesha and storhaug
All,
I am upgrading the storage cluster from 3.8 to 3.10 or 3.12. I have
3.12 on the ovirt cluster. I would like to change the client connection
method to NFS/NFS-Ganesha as the FUSE method causes some issues with
heavy python users (mmap errors on file open for write).
I see that nfs-ganesha was dropped after 3.10 yet there is an updated
version in the 3.12 repo for CentOS 7 (which I am
2018 May 09
0
Some more questions
On Wed, 2018-05-09 at 18:26 +0000, Gandalf Corvotempesta wrote:
> Ok, some more questions, as I'm still planning our SDS (but I'm prone
> to use
> LizardFS; gluster is too inflexible)
>
> Let's assume a replica 3:
>
> 1) currently, it is not possible to add a single server and rebalance
> like any
> other SDS (Ceph, Lizard, Moose, DRBD, ...), right? In replica
2018 Mar 07
0
Kernel NFS on GlusterFS
Gluster does the sync part better than corosync. It's not an
active/passive failover system; it's more all-active. Gluster handles
the recovery once all nodes are back online.
That requires the client tool chain to understand that a write goes to
all storage devices, not just the active one.
3.10 is a long-term-support release. Upgrading to 3.12 or 4 is not a
significant issue once a replacement
2018 Jan 26
0
Replacing a third data node with an arbiter one
On Fri, 2018-01-26 at 07:12 +0530, Sankarshan Mukhopadhyay wrote:
> On Fri, Jan 26, 2018 at 7:05 AM, Ravishankar N <ravishankar at redhat.co
> m> wrote:
> >
> > On 01/24/2018 07:20 PM, Hoggins! wrote:
> >
> > Hello,
> >
> > The subject says it all. I have a replica 3 cluster :
> >
> > gluster> volume info thedude
> >
2018 May 04
2
shard corruption bug
On Fri, 4 May 2018 at 14:06, Jim Kinney <jim.kinney at gmail.com>
wrote:
> It stopped being an outstanding issue at 3.12.7. I think it's now fixed.
So, is it not possible to extend and rebalance a working cluster with
sharded data?
Can someone confirm this? Maybe the ones that hit the bug in the past
2018 May 09
2
Some more questions
On Wed, 9 May 2018 at 21:22, Jim Kinney <jim.kinney at gmail.com>
wrote:
> You can change the replica count. Add a fourth server, add its brick to
> the existing volume with gluster volume add-brick vol0 replica 4
> newhost:/path/to/brick
This doesn't add space; it only adds a new replica, increasing the
number of copies.
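The distinction shows up in the add-brick invocation itself. A sketch: `vol0` and `newhost` are from the quoted mail, `hostA`/`hostB`/`hostC` are hypothetical, and a replica 3 volume is assumed as earlier in the thread:

```shell
# More copies, no more capacity: raise the replica count to 4.
gluster volume add-brick vol0 replica 4 newhost:/path/to/brick

# More capacity instead: add a whole new replica set (3 bricks for a
# replica 3 volume), then rebalance so existing data spreads onto it.
gluster volume add-brick vol0 hostA:/brick hostB:/brick hostC:/brick
gluster volume rebalance vol0 start
```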
2018 May 09
2
Some more questions
On Wed, 9 May 2018 at 21:31, Jim Kinney <jim.kinney at gmail.com>
wrote:
> Correct. A new server will NOT add space in this manner. But the original
> Q was about rebalancing after adding a 4th server. If you are using
> distributed/replication, then yes, a new server will be adding a portion of
> its space to add more space to the cluster.
Wait, in a distribute-replicate,
2010 Mar 18
2
CCrb configuration for many projects
Hi!
I have about 10 projects in my Cruise. I want to use metric_fu (roodi,
rcov, flay) metrics. Right now I have a :cruise task defined in the
Rakefile of each project. Is there any convention for managing the
cruise configuration from one place? A gem? A plugin for Rails
projects? Any good practices for custom artifacts, too?
--
Best regards, Sebastian Nowak
http://blog.sebastiannowak.net
XMPP: seban at chrome.pl