Displaying 20 results from an estimated 712 matches for "brick2".
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...n XFS bricks.
Here are the informations:
[root at ovh-ov1 bricks]# gluster volume info gv2a2
Volume Name: gv2a2
Type: Replicate
Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick2/gv2a2
Brick2: gluster3:/bricks/brick3/gv2a2
Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
Options Reconfigured:
storage.owner-gid: 107
storage.owner-uid: 107
user.cifs: off
features.shard: on
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular...
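For reference, a 1 x (2 + 1) layout like this is normally created with the arbiter syntax below; this is only a sketch reconstructed from the brick list and options shown above, not the poster's actual command history:
gluster volume create gv2a2 replica 3 arbiter 1 \
    gluster1:/bricks/brick2/gv2a2 \
    gluster3:/bricks/brick3/gv2a2 \
    gluster2:/bricks/arbiter_brick_gv2a2/gv2a2
gluster volume set gv2a2 features.shard on      # sharding, as in the info output
gluster volume set gv2a2 storage.owner-uid 107  # brick ownership for the hypervisor user
gluster volume set gv2a2 storage.owner-gid 107
gluster volume start gv2a2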
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...at ovh-ov1 bricks]# gluster volume info gv2a2
>
> Volume Name: gv2a2
> Type: Replicate
> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/bricks/brick2/gv2a2
> Brick2: gluster3:/bricks/brick3/gv2a2
> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
> Options Reconfigured:
> storage.owner-gid: 107
> storage.owner-uid: 107
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max...
2018 Jan 16
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...gv2a2
>>
>> Volume Name: gv2a2
>> Type: Replicate
>> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/bricks/brick2/gv2a2
>> Brick2: gluster3:/bricks/brick3/gv2a2
>> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
>> Options Reconfigured:
>> storage.owner-gid: 107
>> storage.owner-uid: 107
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qle...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...at ovh-ov1 bricks]# gluster volume info gv2a2
>
> Volume Name: gv2a2
> Type: Replicate
> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x (2 + 1) = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/bricks/brick2/gv2a2
> Brick2: gluster3:/bricks/brick3/gv2a2
> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
> Options Reconfigured:
> storage.owner-gid: 107
> storage.owner-uid: 107
> user.cifs: off
> features.shard: on
> cluster.shd-wait-qlength: 10000
> cluster.shd-max...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...: Replicate
>>> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x (2 + 1) = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: gluster1:/bricks/brick2/gv2a2
>>> Brick2: gluster3:/bricks/brick3/gv2a2
>>> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
>>> Options Reconfigured:
>>> storage.owner-gid: 107
>>> storage.owner-uid: 107
>>> user.cifs: off
>>>...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...gv2a2
>>
>> Volume Name: gv2a2
>> Type: Replicate
>> Volume ID: 83c84774-2068-4bfc-b0b9-3e6b93705b9f
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x (2 + 1) = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: gluster1:/bricks/brick2/gv2a2
>> Brick2: gluster3:/bricks/brick3/gv2a2
>> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
>> Options Reconfigured:
>> storage.owner-gid: 107
>> storage.owner-uid: 107
>> user.cifs: off
>> features.shard: on
>> cluster.shd-wait-qle...
2013 Jul 02
1
problem expanding a volume
Hello,
I am having trouble expanding a volume. Every time I try to add bricks to
the volume, I get this error:
[root at gluster1 sdb1]# gluster volume add-brick vg0
gluster5:/export/brick2/sdb1 gluster6:/export/brick2/sdb1
/export/brick2/sdb1 or a prefix of it is already part of a volume
Here is the volume info:
[root at gluster1 sdb1]# gluster volume info vg0
Volume Name: vg0
Type: Distributed-Replicate
Volume ID: 7ebad06f-2b44-4769-a395-475f300608e6
Status: Started
Number of Bri...
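The "or a prefix of it is already part of a volume" error usually means the new brick directories still carry GlusterFS extended attributes from an earlier volume. A commonly cited cleanup, assuming the directories on gluster5 and gluster6 are meant to be reused and hold no data that must be kept, is:
setfattr -x trusted.glusterfs.volume-id /export/brick2/sdb1  # drop the old volume id
setfattr -x trusted.gfid /export/brick2/sdb1                 # drop the old root gfid
rm -rf /export/brick2/sdb1/.glusterfs                        # remove stale gluster metadata
# run on each new brick host, restart glusterd if the error persists, then retry add-brick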
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...-2068-4bfc-b0b9-3e6b93705b9f
>>>> Status: Started
>>>> Snapshot Count: 0
>>>> Number of Bricks: 1 x (2 + 1) = 3
>>>> Transport-type: tcp
>>>> Bricks:
>>>> Brick1: gluster1:/bricks/brick2/gv2a2
>>>> Brick2: gluster3:/bricks/brick3/gv2a2
>>>> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
>>>> Options Reconfigured:
>>>> storage.owner-gid: 107
>>>> storage.owner-uid: 107
>>>>...
2018 Jan 17
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...5b9f
>>>>> Status: Started
>>>>> Snapshot Count: 0
>>>>> Number of Bricks: 1 x (2 + 1) = 3
>>>>> Transport-type: tcp
>>>>> Bricks:
>>>>> Brick1: gluster1:/bricks/brick2/gv2a2
>>>>> Brick2: gluster3:/bricks/brick3/gv2a2
>>>>> Brick3: gluster2:/bricks/arbiter_brick_gv2a2/gv2a2 (arbiter)
>>>>> Options Reconfigured:
>>>>> storage.owner-gid: 107
>>>>> stor...
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...rrupted, exactly at the same point.
Here's the volume info of the created volume:
Volume Name: gvtest
Type: Replicate
Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: gluster1:/bricks/brick1/gvtest
Brick2: gluster2:/bricks/brick1/gvtest
Brick3: gluster3:/bricks/brick1/gvtest
Options Reconfigured:
user.cifs: off
features.shard: off
cluster.shd-wait-qlength: 10000
cluster.shd-max-threads: 8
cluster.locking-scheme: granular
cluster.data-self-heal-algorithm: full
cluster.server-quorum-type: server
clust...
2018 Jan 16
1
Problem with Gluster 3.12.4, VM and sharding
Also to help isolate the component, could you answer these:
1. on a different volume with shard not enabled, do you see this issue?
2. on a plain 3-way replicated volume (no arbiter), do you see this issue?
On Tue, Jan 16, 2018 at 4:03 PM, Krutika Dhananjay <kdhananj at redhat.com>
wrote:
> Please share the volume-info output and the logs under /var/log/glusterfs/
> from all your
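One way to collect what is being asked for here, assuming the default log location, is simply:
gluster volume info                                                    # volume-info output
tar czf /tmp/glusterfs-logs-$(hostname -s).tar.gz /var/log/glusterfs   # logs, run on every node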
2018 Jan 18
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...lume info of the created volume:
>
> Volume Name: gvtest
> Type: Replicate
> Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/bricks/brick1/gvtest
> Brick2: gluster2:/bricks/brick1/gvtest
> Brick3: gluster3:/bricks/brick1/gvtest
> Options Reconfigured:
> user.cifs: off
> features.shard: off
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster.data-self-heal-algorithm: full...
2018 Jan 19
1
[Possibile SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...; Volume Name: gvtest
> Type: Replicate
> Volume ID: e2ddf694-ba46-4bc7-bc9c-e30803374e9d
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: gluster1:/bricks/brick1/gvtest
> Brick2: gluster2:/bricks/brick1/gvtest
> Brick3: gluster3:/bricks/brick1/gvtest
> Options Reconfigured:
> user.cifs: off
> features.shard: off
> cluster.shd-wait-qlength: 10000
> cluster.shd-max-threads: 8
> cluster.locking-scheme: granular
> cluster...
2017 Sep 29
1
Gluster geo replication volume is faulty
...lica 2 arbiter 1 volumes with 9 bricks
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2: gfs3:/gfs/brick1/gv0
Brick3: gfs1:/gfs/arbiter/gv0 (arbiter)
Brick4: gfs1:/gfs/brick1/gv0
Brick5: gfs3:/gfs/brick2/gv0
Brick6: gfs2:/gfs/arbiter/gv0 (arbiter)
Brick7: gfs1:/gfs/brick2/gv0
Brick8: gfs2:/gfs/brick2/gv0
Brick9: gfs3:/gfs/arbiter/gv0 (arbiter)
Options Reconfigured:
nfs.disable: on
tra...
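For a faulty session on a source volume like this, the usual first checks are the geo-replication status and its logs. The slave host and volume names below are placeholders, since they are not shown in the snippet:
gluster volume geo-replication gfsvol SLAVEHOST::slavevol status detail
gluster volume geo-replication gfsvol SLAVEHOST::slavevol stop
gluster volume geo-replication gfsvol SLAVEHOST::slavevol start
# per-session logs are under /var/log/glusterfs/geo-replication/ on the master nodes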
2017 Oct 06
0
Gluster geo replication volume is faulty
...gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gfs2:/gfs/brick1/gv0
> Brick2: gfs3:/gfs/brick1/gv0
> Brick3: gfs1:/gfs/arbiter/gv0 (arbiter)
> Brick4: gfs1:/gfs/brick1/gv0
> Brick5: gfs3:/gfs/brick2/gv0
> Brick6: gfs2:/gfs/arbiter/gv0 (arbiter)
> Brick7: gfs1:/gfs/brick2/gv0
> Brick8: gfs2:/gfs/brick2/gv0
> Brick9: gfs3:/gfs/arbiter/gv0 (arbiter)
> O...
2008 Dec 14
1
Is that iozone result normal?
...========
volume brick1-raw
type storage/posix # POSIX FS translator
option directory /exports/disk1 # Export this directory
end-volume
volume brick1
type performance/io-threads
subvolumes brick1-raw
option thread-count 16
option cache-size 256m
end-volume
volume brick2-raw
type storage/posix # POSIX FS translator
option directory /exports/disk2 # Export this directory
end-volume
volume brick2
type performance/io-threads
subvolumes brick2-raw
option thread-count 16
option cache-size 256m
end-volume
volume brick-ns
type storag...
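The spec above is the legacy hand-written volfile style: each disk exported through storage/posix and wrapped in performance/io-threads, with brick-ns suggesting a namespace brick for cluster/unify. The aggregation section itself is cut off; a sketch of how such bricks were typically combined in that era (legacy 2.x syntax, not the poster's actual volfile) might be:
volume unify0
  type cluster/unify
  option namespace brick-ns   # namespace brick holds the unified directory structure
  option scheduler rr         # round-robin file placement across bricks
  subvolumes brick1 brick2
end-volume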
2018 Apr 25
0
Turn off replication
Hello Karthik
I'm having trouble adding the two bricks back online. Any help is appreciated,
thanks.
When I try the add-brick command, this is what I get:
[root at gluster01 ~]# gluster volume add-brick scratch gluster02ib:/gdata/brick2/scratch/
volume add-brick: failed: Pre Validation failed on gluster02ib. Brick: gluster02ib:/gdata/brick2/scratch not available. Brick may be containing or be contained by an existing brick
I have run the following commands and removed the .glusterfs hidden directories
[root at gluster02 ~]# setf...
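Before re-running the add-brick it can help to confirm that no GlusterFS extended attributes are left on the refused path, since any remaining trusted.glusterfs.volume-id or trusted.gfid will trigger exactly this pre-validation failure. A quick check on gluster02, for example:
getfattr -d -m . -e hex /gdata/brick2/scratch   # lists all extended attributes still set on the brick directory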
2018 Apr 25
2
Turn off replication
...tatus scratch
Status of volume: scratch
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gluster01ib:/gdata/brick1/scratch 49152 49153 Y 1819
Brick gluster01ib:/gdata/brick2/scratch 49154 49155 Y 1827
Brick gluster02ib:/gdata/brick1/scratch N/A N/A N N/A
Task Status of Volume scratch
------------------------------------------------------------------------------
There are no active volume tasks
[root at gluster02 glusterf...
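With one brick showing N/A here, a common first step is to ask glusterd to respawn any brick processes that are down and then re-check:
gluster volume start scratch force   # restarts missing brick processes without touching running ones
gluster volume status scratch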
2018 Apr 12
2
Turn off replication
...> that I'm running the right commands.
>
> 1. gluster volume heal scratch info
>
If the count is non zero, trigger the heal and wait for heal info count to
become zero.
> 2. gluster volume remove-brick scratch replica 1
> gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
>
3. gluster volume add-brick scratch gluster02ib:/gdata/brick1/scratch
> gluster02ib:/gdata/brick2/scratch
>
>
> Based on the configuration I have, Brick 1 from Node A and B are tied
> together and Brick 2 from Node A and B are also tied together. Looki...
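Put together, the procedure discussed in this thread amounts to: verify heals are finished, drop to replica 1, then re-add the freed bricks as plain distribute bricks. A sketch of the whole sequence under that assumption:
gluster volume heal scratch info                  # counts must be zero before shrinking
gluster volume remove-brick scratch replica 1 \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch force
gluster volume add-brick scratch \
    gluster02ib:/gdata/brick1/scratch gluster02ib:/gdata/brick2/scratch
gluster volume rebalance scratch start            # spread existing data onto the re-added bricks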
2018 Apr 27
0
Turn off replication
...us of volume: scratch
> Gluster process TCP Port RDMA Port Online Pid
> ------------------------------------------------------------------------------
> Brick gluster01ib:/gdata/brick1/scratch 49152 49153 Y
> 1819
> Brick gluster01ib:/gdata/brick2/scratch 49154 49155 Y
> 1827
> Brick gluster02ib:/gdata/brick1/scratch N/A N/A N N/A
>
>
>
> Task Status of Volume scratch
> ------------------------------------------------------------------------------
> There are no active volume tasks...