Displaying 20 results from an estimated 10000 matches similar to: "Directory structure replication on distributed volumes"
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
There is a known issue with gluster 3.12.x builds (see [1]), so you may be
running into this.
The "shared-brick-count" values seem fine on stor1. Please send us the
output of "grep -n "share" /var/lib/glusterd/vols/volumedisk1/*" from the
other nodes so we can check if they are the cause.
Regards,
Nithya
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
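For anyone hitting the same symptom, a minimal sketch of that check, assuming the volume name volumedisk1 from this thread; the expectation (per the bug [1], as I understand it) is a shared-brick-count of 1 for every brick that sits on its own filesystem.
# Run on each node in the cluster; the brick volfiles live under
# /var/lib/glusterd/vols/<volume>/.
grep -n "shared-brick-count" /var/lib/glusterd/vols/volumedisk1/*
# Healthy output repeats "option shared-brick-count 1" for every brick.
# Values of 0 or greater than 1 make the posix translator scale down the
# reported brick size, which is what shrinks the df total.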
2013 Nov 06
0
remove-brick very slow for (distributed-)replicated volumes?
We have a gigabit Ethernet LAN with no other traffic on it, and I am
getting the following numbers when I do a remove-brick. The sequence of
steps is that I create a 2-way replicated volume and populate it with
300 files totalling 100 MB. I then add a pair of bricks to the volume
and run a remove-brick on the original two bricks.
Is this the expected speed for the operation, or could there be
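For readers who want to reproduce the test, a hedged sketch of that sequence with hypothetical host and brick names (the exact CLI may differ slightly across gluster releases):
# 1. Create and start a 2-way replicated volume.
gluster volume create testvol replica 2 node1:/bricks/b1 node2:/bricks/b2
gluster volume start testvol
# 2. Populate it through a client mount, then add a second pair of bricks.
gluster volume add-brick testvol node3:/bricks/b3 node4:/bricks/b4
# 3. Drain the original pair; data is migrated off before the final commit.
gluster volume remove-brick testvol node1:/bricks/b1 node2:/bricks/b2 start
gluster volume remove-brick testvol node1:/bricks/b1 node2:/bricks/b2 status
gluster volume remove-brick testvol node1:/bricks/b1 node2:/bricks/b2 commit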
2018 Feb 27
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi,
Some days ago my whole glusterfs configuration was working fine. Today I
realized that the total size reported by the df command has changed and is
smaller than the aggregated capacity of all the bricks in the volume.
I checked that the status of all the volumes is fine, all the glusterd
daemons are running, and there are no errors in the logs; however, df
shows a wrong total size.
My configuration for one volume:
2018 Feb 28
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 18:28, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> I applied the workaround for this bug and now df shows the right size:
>
> That is good to hear.
> [root at stor1 ~]# df -h
> Filesystem    Size  Used  Avail  Use%  Mounted on
> /dev/sdb1      26T  1,1T    25T    4%  /mnt/glusterfs/vol0
> /dev/sdc1
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Jose,
On 28 February 2018 at 22:31, Jose V. Carrión <jocarbur at gmail.com> wrote:
> Hi Nithya,
>
> My initial setup was composed of 2 similar nodes: stor1data and stor2data.
> A month ago I expanded both volumes with a new node: stor3data (2 bricks
> per volume).
> Of course, to add the new peer with its bricks I then ran the 'rebalance
> force' operation.
2014 Apr 18
0
Re: Many orphaned inodes after resize2fs
On Fri, Apr 18, 2014 at 06:56:57PM +0200, Patrik Horn?k wrote:
>
> yesterday I experienced the following problem with my ext3 filesystem:
>
> - I had an ext3 filesystem of a few TB, with a journal. I correctly
> unmounted it and it was marked clean.
>
> - I then ran fsck.ext3 -f on it and it did not find any problem.
>
> - After increasing the size of its LVM volume by
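For context, a hedged sketch of the usual offline grow sequence being described here, with hypothetical device names; this is the generic lvextend/resize2fs workflow, not the poster's exact commands:
# Grow the LVM volume, then the filesystem (offline, hypothetical names).
umount /mnt/data
e2fsck -f /dev/vg0/data          # resize2fs wants a freshly checked fs
lvextend -L +1T /dev/vg0/data    # grow the logical volume by 1 TiB
resize2fs /dev/vg0/data          # grow ext3 to fill the new LV size
e2fsck -f /dev/vg0/data          # optional re-check before remounting
mount /dev/vg0/data /mnt/data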
2018 Mar 01
0
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
I'm sorry for my last incomplete message.
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
     Node   Rebalanced-files        size     scanned    failures     skipped      status   run time in h:m:s
---------   ----------------   ---------   ---------   ---------   ---------   ---------   -----------------
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
I applied the workaround for this bug and now df shows the right size:
[root at stor1 ~]# df -h
Filesystem              Size  Used  Avail  Use%  Mounted on
/dev/sdb1                26T  1,1T    25T    4%  /mnt/glusterfs/vol0
/dev/sdc1                50T   16T    34T   33%  /mnt/glusterfs/vol1
stor1data:/volumedisk0  101T  3,3T    97T    4%  /volumedisk0
stor1data:/volumedisk1
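A hedged way to cross-check those numbers against what glusterd itself reports per brick (volume name from this thread; field names may vary by release):
# Per-brick capacity as glusterd sees it; the df total on the fuse mount
# should roughly match the sum across the volume's bricks.
gluster volume status volumedisk0 detail \
    | grep -E 'Brick|Total Disk Space|Disk Space Free'
df -h /volumedisk0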
2014 Apr 18
3
Re: Many orphaned inodes after resize2fs
Hi,
it seems you got it right! I don't know if you read the email I sent you
before posting to the mailing list, but I accidentally diagnosed the cause... :)
I've noticed that the inodes fsck warned me about, at least the ones that I
checked, all have all four timestamps in 2010 at the latest...
The filesystem has a maximum of 1281998848 inodes, which, read as a Unix
timestamp, falls in August 2010. I don't know how it got
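The arithmetic is easy to double-check (GNU date assumed): interpreting that inode count as a Unix timestamp does land in August 2010, which is what points to a timestamp having been written where the inode count belongs.
# Read the filesystem's maximum inode count as seconds since the epoch.
date -u -d @1281998848
# Mon Aug 16 22:47:28 UTC 2010  -- a plausible mtime, not an inode count.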
2018 Feb 28
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
My initial setup was composed of 2 similar nodes: stor1data and stor2data.
A month ago I expanded both volumes with a new node: stor3data (2 bricks
per volume).
Of course, to add the new peer with its bricks I then ran the 'rebalance
force' operation. This task finished successfully (you can see the info
below) and the number of files on the 3 nodes was very similar.
For volumedisk1 I
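A hedged sketch of that expansion, using the host and volume names from the thread but hypothetical brick paths; 'rebalance force' here is taken to mean the rebalance start force operation:
# Add stor3data and two new bricks per volume (paths are hypothetical).
gluster peer probe stor3data
gluster volume add-brick volumedisk0 \
    stor3data:/bricks/vol0/brick1 stor3data:/bricks/vol0/brick2
gluster volume add-brick volumedisk1 \
    stor3data:/bricks/vol1/brick1 stor3data:/bricks/vol1/brick2
# Spread the existing files onto the new bricks.
gluster volume rebalance volumedisk0 start force
gluster volume rebalance volumedisk1 start force
gluster volume rebalance volumedisk1 status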
2009 Mar 07
0
Distributed Applications Operations Engineer - Yahoo! Inc.
Hello,
Yahoo! is looking for a Distributed Applications Operations Engineer.
The full job posting is as follows (also available at
http://careers.yahoo.com/jdescription.php?frm=jsres&oid=19743):
====
Think about impacting 1 out of every 2 people online--in innovative and
imaginative ways that are uniquely Yahoo!. We do just that each and
every day, and you could too. After all, it's big
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 5:35 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > > Since arbiter bricks need not be of the same size as the data bricks, if
> you
> > > > can configure three more arbiter bricks
> > > > based on the guidelines in the doc [1], you can do it live and
2017 Sep 20
0
Sharding option for distributed volumes
Hello folks,
Could someone please advise how to use the sharding option for distributed
volumes? At the moment I'm facing a problem with exporting big files, which
are not going to be distributed across bricks inside one volume.
Thank you in advance.
--
Best regards
Pavel Kutishchev
Golang DevOps Engineer, self-employed.
2017 Sep 21
0
Sharding option for distributed volumes
Hello Ji-Hyeon,
Thanks, is that option available in the 3.12 Gluster release? We're still
on 3.8 and are just playing around with the latest version in order to get
our solution migrated.
Thank you!
On 9/21/17 2:26 PM, Ji-Hyeon Gim wrote:
> Hello Pavel!
>
> In my opinion, you need to check the features.shard-block-size option first.
> If a file is no bigger than this value, it would not be
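A hedged sketch of checking and enabling what Ji-Hyeon describes, on a hypothetical volume named bigvol; sharding has been in GlusterFS since the 3.7 series as far as I recall, and it only applies to files created after the option is turned on:
# Inspect the current shard settings.
gluster volume get bigvol features.shard
gluster volume get bigvol features.shard-block-size
# Enable sharding and choose a shard size; existing files are not re-sharded.
gluster volume set bigvol features.shard on
gluster volume set bigvol features.shard-block-size 64MB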
2018 Feb 27
0
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 4:18 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> > If you want to use the first two bricks as arbiter, then you need to be
> > aware of the following things:
> > - Your distribution count will be decreased to 2.
>
> What's the significance of this? I'm
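On the 'distribution count' question: it is the first number in the brick layout that gluster volume info prints, i.e. how many replica sets (DHT subvolumes) files are spread across. A small, hypothetical illustration:
# 6 bricks as replica 2 -> distribution count 3:
#   Number of Bricks: 3 x 2 = 6
# Re-using two of them as arbiters leaves 2 replica sets:
#   Number of Bricks: 2 x (2 + 1) = 6
gluster volume info myvol | grep 'Number of Bricks'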
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 04:59:36PM +0530, Karthik Subrahmanya wrote:
> > > Since arbiter bricks need not be of the same size as the data bricks, if you
> > > can configure three more arbiter bricks
> > > based on the guidelines in the doc [1], you can do it live and you will
> > > have the distribution count also unchanged.
> >
> > I can probably find
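A hedged sketch of the live conversion referred to here, with hypothetical hosts and paths: one small arbiter brick is appended to each existing replica-2 set, so the distribution count stays the same.
# Convert a 3 x 2 distributed-replicate volume to replica 3 arbiter 1 by
# adding one arbiter brick per replica set (order follows the brick list).
gluster volume add-brick myvol replica 3 arbiter 1 \
    arb1:/bricks/arb0 arb2:/bricks/arb1 arb3:/bricks/arb2
# Self-heal then populates the arbiter bricks with metadata only.
gluster volume heal myvol info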
2018 Mar 01
2
df reports wrong full capacity for distributed volumes (Glusterfs 3.12.6-1)
Hi Nithya,
Below the output of both volumes:
[root at stor1t ~]# gluster volume rebalance volumedisk1 status
     Node   Rebalanced-files        size     scanned    failures     skipped      status   run time in h:m:s
---------   ----------------   ---------   ---------   ---------   ---------   ---------   -----------------
2018 Feb 26
0
Quorum in distributed-replicate volume
Hi Dave,
On Mon, Feb 26, 2018 at 4:45 PM, Dave Sherohman <dave at sherohman.org> wrote:
> I've configured 6 bricks as distributed-replicated with replica 2,
> expecting that all active bricks would be usable so long as a quorum of
> at least 4 live bricks is maintained.
>
The client quorum is configured per replica subvolume and not for the
entire volume.
Since you have a
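For reference, a hedged sketch of the client-quorum options involved (hypothetical volume name); the point above is that quorum is evaluated inside each replica set, not across all six bricks:
# Inspect and set client quorum, which applies per replica subvolume.
gluster volume get myvol cluster.quorum-type
gluster volume get myvol cluster.quorum-count
gluster volume set myvol cluster.quorum-type auto   # majority within each replica set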
2011 Sep 02
0
Copying data failed on distributed replicated volume (ver. 3.1.3)
Hi,
I am trying to back up data from a distributed replicated volume.
The volume was built from 6 units of 2 TB hard disks:
gluster> volume info
Volume Name: 6TB-Vol
Type: Distributed-Replicate
Status: Started
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: c107:/exp0
Brick2: c108:/exp0
Brick3: c109:/exp0
Brick4: c110:/exp0
Brick5: c111:/exp0
Brick6: c112:/exp0
Options
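Not the poster's exact procedure, but a hedged sketch of the usual way to copy data off such a volume: read it through a client mount rather than from the bricks, so replication and distribution stay transparent to the copy.
# Mount the volume via FUSE on the backup host, then copy from the mount.
mount -t glusterfs c107:/6TB-Vol /mnt/6TB-Vol
rsync -aH --progress /mnt/6TB-Vol/ /backup/6TB-Vol/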
2018 Feb 27
2
Quorum in distributed-replicate volume
On Tue, Feb 27, 2018 at 03:20:25PM +0530, Karthik Subrahmanya wrote:
> If you want to use the first two bricks as arbiter, then you need to be
> aware of the following things:
> - Your distribution count will be decreased to 2.
What's the significance of this? I'm trying to find documentation on
distribution counts in gluster, but my google-fu is failing me.
> - Your data on