Displaying 20 results from an estimated 26 matches for "brick0".
2018 Feb 27
2
Quorum in distributed-replicate volume
..."gluster volume info <volname>"
> and which brick is of what size.
Volume Name: palantir
Type: Distributed-Replicate
Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x 2 = 6
Transport-type: tcp
Bricks:
Brick1: saruman:/var/local/brick0/data
Brick2: gandalf:/var/local/brick0/data
Brick3: azathoth:/var/local/brick0/data
Brick4: yog-sothoth:/var/local/brick0/data
Brick5: cthulhu:/var/local/brick0/data
Brick6: mordiggian:/var/local/brick0/data
Options Reconfigured:
features.scrub: Inactive
features.bitrot: off
transport.address-famil...
2011 Oct 17
1
brick out of space, unmounted brick
...change these behaviors. My experiences are with glusterfs 3.2.4 on CentOS 6 64-bit.
Suppose I have a Gluster volume made up of four 1 MB bricks, like this
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gluster0-node0:/brick0
Brick2: gluster0-node1:/brick1
Brick3: gluster0-node0:/brick2
Brick4: gluster0-node1:/brick3
The mounted Gluster volume will report that the size of the volume is 2 MB, which creates a false impression that it can hold a 2 MB file. This isn't too bad, since people are used to a file system'...
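A minimal sketch, using the gluster CLI, of how the 2 x 2 test volume above could be created (brick paths are taken from the listing; the 1 MB bricks would be small loopback filesystems mounted at those paths):

  # With "replica 2", consecutive bricks form the replica pairs, so
  # brick0+brick1 and brick2+brick3 become the two replicated subvolumes
  # that distribute then spans.
  gluster volume create test replica 2 \
      gluster0-node0:/brick0 gluster0-node1:/brick1 \
      gluster0-node0:/brick2 gluster0-node1:/brick3
  gluster volume start test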
2018 Feb 13
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...7 14:07:11
Repository revision: git://git.gluster.com/glusterfs.git
# gluster volume status
Status of volume: palantir
Gluster process                              TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------
Brick saruman:/var/local/brick0/data         49154     0          Y       10690
Brick gandalf:/var/local/brick0/data         49155     0          Y       18732
Brick azathoth:/var/local/brick0/data        49155     0          Y       9507
Brick yog-sothoth:/var/local/brick0/data     49153     0          Y       39559
Brick cthulhu:/var/local/brick0/data...
2018 Feb 15
0
Failover problems with gluster 3.8.8-1 (latest Debian stable)
Well, it looks like I've stumped the list, so I did a bit of additional
digging myself:
azathoth replicates with yog-sothoth, so I compared their brick
directories. `ls -R /var/local/brick0/data | md5sum` gives the same
result on both servers, so the filenames are identical in both bricks.
However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
more data (445G vs 442G) than yog.
This seems consistent with my assumption that the problem is on
yog-sothoth (everything i...
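A hedged sketch of extending that comparison to localize where the extra ~3G sits (hostnames are taken from the thread; run from any machine with ssh access to both servers):

  # Confirm the bricks hold the same file names (a variant of the ls -R check).
  ssh azathoth    'cd /var/local/brick0/data && find . -type f | sort | md5sum'
  ssh yog-sothoth 'cd /var/local/brick0/data && find . -type f | sort | md5sum'

  # Per-top-level-directory size comparison; the hidden .glusterfs/ metadata
  # directory on each brick is listed explicitly since * does not match it.
  ssh azathoth    'du -sk /var/local/brick0/data/* /var/local/brick0/data/.glusterfs' > azathoth.du
  ssh yog-sothoth 'du -sk /var/local/brick0/data/* /var/local/brick0/data/.glusterfs' > yog.du
  diff azathoth.du yog.du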
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME rep 2
server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added the empty new brick to the volume:
gluster volume add-brick VOLUME rep 3 server2:/gluster/VOLUME/brick0/brick
There is about 2 TB of data on the volume, all of it small files
(photos and documents)....
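A hedged sketch of the full replace sequence being described, spelled with the unabbreviated replica keyword (the mount point is inferred from the brick path; the device name is hypothetical):

  # Drop server2's brick, going from replica 3 to replica 2 (the data stays
  # on the other two replicas).
  gluster volume remove-brick VOLUME replica 2 \
      server2:/gluster/VOLUME/brick0/brick force

  # Swap the disk under the same mount point.
  umount /gluster/VOLUME/brick0
  mount /dev/newdisk /gluster/VOLUME/brick0    # /dev/newdisk is hypothetical
  mkdir -p /gluster/VOLUME/brick0/brick

  # Add the empty brick back, returning to replica 3, then let self-heal
  # repopulate it from the other replicas.
  gluster volume add-brick VOLUME replica 3 \
      server2:/gluster/VOLUME/brick0/brick
  gluster volume heal VOLUME full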
2009 Feb 23
1
Interleave or not
Let's say you had 4 servers and you wanted to set up replicate and
distribute. Which method would be better:
server  sdb1
xen0    brick0
xen1    mirror0
xen2    brick1
xen3    mirror1

replicate block0 - brick0 mirror0
replicate block1 - brick1 mirror1
distribute unify - block0 block1

or

server  sdb1    sdb2
xen0    brick0  mirror3
xen1    brick1  mirror0
xen2    brick2  mirror1
xen3    brick3  mirror2

replicate block0 - brick0 mirror0
replicate block1 - b...
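On CLI-based GlusterFS releases the first layout maps directly onto brick ordering; a rough sketch with hypothetical export paths (with replica 2, consecutive bricks form the replica pairs and distribute spans the pairs):

  # xen0+xen1 hold brick0/mirror0, xen2+xen3 hold brick1/mirror1.
  gluster volume create unify replica 2 \
      xen0:/export/sdb1 xen1:/export/sdb1 \
      xen2:/export/sdb1 xen3:/export/sdb1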
2018 Feb 27
0
Quorum in distributed-replicate volume
... and which brick is of what size.
>
> Volume Name: palantir
> Type: Distributed-Replicate
> Volume ID: 48379a50-3210-41b4-9a77-ae143c8bcac0
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x 2 = 6
> Transport-type: tcp
> Bricks:
> Brick1: saruman:/var/local/brick0/data
> Brick2: gandalf:/var/local/brick0/data
> Brick3: azathoth:/var/local/brick0/data
> Brick4: yog-sothoth:/var/local/brick0/data
> Brick5: cthulhu:/var/local/brick0/data
> Brick6: mordiggian:/var/local/brick0/data
> Options Reconfigured:
> features.scrub: Inactive
> feat...
2014 Jun 27
1
geo-replication status faulty
...n log grab #
[2014-06-26 17:09:08.794359] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:08.795387] I [monitor(monitor):130:monitor] Monitor: starting gsyncd worker
[2014-06-26 17:09:09.358588] I [gsyncd(/data/glusterfs/vol0/brick0/brick):532:main_i] <top>: syncing: gluster://localhost:gluster_vol0 -> ssh://root at node003:gluster://localhost:gluster_vol1
[2014-06-26 17:09:09.537219] I [monitor(monitor):129:monitor] Monitor: ------------------------------------------------------------
[2014-06-26 17:09:09.540030] I [...
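A hedged sketch of the corresponding status check (the master volume gluster_vol0 and the slave gluster_vol1 on node003 are taken from the log line above):

  # Per-brick worker state (Active / Passive / Faulty); "detail" adds crawl
  # status and last-synced information.
  gluster volume geo-replication gluster_vol0 node003::gluster_vol1 status detail

  # The gsyncd worker logs quoted above live under
  # /var/log/glusterfs/geo-replication/<master-volume>/ on each master node.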
2018 Feb 27
0
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 6:14 PM, Dave Sherohman <dave at sherohman.org> wrote:
> On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > > "In a replica 2 volume... If we set the client-quorum option to
> > > auto, then the first brick must always be up, irrespective of the
> > > status of the second brick. If only the second brick is up,
2018 Feb 25
0
Re-adding an existing brick to a volume
...2018 at 4:37 AM, Mitja Mihelič <mitja.mihelic at arnes.si> wrote:
> Hi!
>
> I am running a replica 3 volume. On server2 I wanted to move the brick to a
> new disk.
> I removed the brick from the volume:
> gluster volume remove-brick VOLUME rep 2
> server2:/gluster/VOLUME/brick0/brick force
>
> I unmounted the old brick and mounted the new disk to the same location.
> I added the empty new brick to the volume:
> gluster volume add-brick VOLUME rep 3 server2:/gluster/VOLUME/brick0/brick
>
> There is about 2TB of data on the volume and they are all small fi...
2017 Aug 09
1
Gluster performance with VM's
Hi, community
Please help me with my problem.
I have 2 Gluster nodes, with 2 bricks on each.
Configuration:
Node1 brick1 replicated on Node0 brick0
Node0 brick1 replicated on Node1 brick0
Volume Name: gm0
Type: Distributed-Replicate
Volume ID: 5e55f511-8a50-46e4-aa2f-5d4f73c859cf
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: gl1:/mnt/brick1/gm0
Brick2: gl0:/mnt/brick0/gm0
Brick3: gl0:/mnt/br...
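A hedged reconstruction of how such a cross-replicated 2 x 2 volume would be created; the first two bricks match the excerpt, while the last two are assumptions based on the description above:

  # Each replica pair spans both nodes, so either node can fail without
  # losing a whole subvolume:
  #   pair 1: gl1:/mnt/brick1 + gl0:/mnt/brick0
  #   pair 2: gl0:/mnt/brick1 + gl1:/mnt/brick0  (assumed from the description)
  gluster volume create gm0 replica 2 \
      gl1:/mnt/brick1/gm0 gl0:/mnt/brick0/gm0 \
      gl0:/mnt/brick1/gm0 gl1:/mnt/brick0/gm0
  gluster volume start gm0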
2018 Feb 25
1
Re-adding an existing brick to a volume
...lič <mitja.mihelic at arnes.si> wrote:
>> Hi!
>>
>> I am running a replica 3 volume. On server2 I wanted to move the brick to a
>> new disk.
>> I removed the brick from the volume:
>> gluster volume remove-brick VOLUME rep 2
>> server2:/gluster/VOLUME/brick0/brick force
>>
>> I unmounted the old brick and mounted the new disk to the same location.
>> I added the empty new brick to the volume:
>> gluster volume add-brick VOLUME rep 3 server2:/gluster/VOLUME/brick0/brick
>>
>> There is about 2TB of data on the volume a...
2018 Feb 15
2
Failover problems with gluster 3.8.8-1 (latest Debian stable)
...n Feb 15, 2018 14:20, "Dave Sherohman" <dave at sherohman.org> wrote:
> Well, it looks like I've stumped the list, so I did a bit of additional
> digging myself:
>
> azathoth replicates with yog-sothoth, so I compared their brick
> directories. `ls -R /var/local/brick0/data | md5sum` gives the same
> result on both servers, so the filenames are identical in both bricks.
> However, `du -s /var/local/brick0/data` shows that azathoth has about 3G
> more data (445G vs 442G) than yog.
>
> This seems consistent with my assumption that the problem is on
>...
2018 Feb 26
2
Quorum in distributed-replicate volume
On Mon, Feb 26, 2018 at 05:45:27PM +0530, Karthik Subrahmanya wrote:
> > "In a replica 2 volume... If we set the client-quorum option to
> > auto, then the first brick must always be up, irrespective of the
> > status of the second brick. If only the second brick is up, the
> > subvolume becomes read-only."
> >
> By default client-quorum is
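For reference, a hedged sketch of the quorum options under discussion (the volume name is taken from the thread; the values shown illustrate the available choices, not a recommendation):

  # "auto": in a replica 2 pair the first brick must be up for writes,
  # matching the documentation text quoted above.
  gluster volume set palantir cluster.quorum-type auto

  # Alternative: require a fixed number of bricks per replica set to be up.
  gluster volume set palantir cluster.quorum-type fixed
  gluster volume set palantir cluster.quorum-count 1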
2008 Oct 15
1
Glusterfs performance with large directories
...tures/posix-locks
  subvolumes brick-posix0
end-volume

volume brick-fixed0
  type features/fixed-id
  option fixed-uid 2224
  option fixed-gid 224
  subvolumes brick-lock0
end-volume

volume brick-iothreads0
  type performance/io-threads
  option thread-count 4
  subvolumes brick-fixed0
end-volume

volume brick0
  type performance/read-ahead
  subvolumes brick-iothreads0
end-volume

volume server
  type protocol/server
  option transport-type tcp/server
  subvolumes brick0
  option auth.ip.brick0.allow 10.1.0.*
end-volume

**** GlusterFS namespace config ****

volume brick-posix
  type storage/posix
  option dire...
2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody,
Please help me fix a problem.
I have a distributed-replicated volume between two servers. On each
server I have 2 RAID-10 arrays that are replicated between the servers.
Brick gl1:/mnt/brick1/gm0    49153    0      Y    13910
Brick gl0:/mnt/brick0/gm0    N/A      N/A    N    N/A
Brick gl0:/mnt/brick1/gm0    N/A      N/A    N    N/A
Brick gl1:/mnt/brick0/gm0    49154    0      Y    13613
On the gl0 node the arrays were terminated and removed. After that, new arrays
were created a...
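A hedged sketch of the usual recovery steps once the recreated arrays are mounted back at the original brick paths (the volume name comes from the status output above; exact steps vary by GlusterFS version):

  # Start only the brick processes that are currently offline; running bricks
  # are left untouched.
  gluster volume start gm0 force

  # If a recreated (empty) brick refuses to start because the
  # trusted.glusterfs.volume-id extended attribute is missing, that attribute
  # has to be restored (or the brick swapped via replace-brick) first.

  # Verify all bricks show Online = Y, then repopulate the new bricks from
  # their replica partners.
  gluster volume status gm0
  gluster volume heal gm0 full
  gluster volume heal gm0 info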
2012 Nov 27
1
Performance after failover
...failed over from one
server to another?
torbjorn at srv18:~$ sudo gluster volume
Volume Name: testvol
Type: Distributed-Replicate
Volume ID: 90636b5d-0d57-483c-bbfd-c0cdab2adaaa
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: srv18.trollweb.net:/srv/gluster/brick0
Brick2: srv17.trollweb.net:/srv/gluster/brick0
Brick3: srv18.trollweb.net:/srv/gluster/brick1
Brick4: srv17.trollweb.net:/srv/gluster/brick1
--
Kind regards
Torbjørn Thorsen
Trollweb Solutions AS
2009 Jul 29
2
Xen - Backend or Frontend or Both?
I have a client config (see below) across 6 boxes. I am using
distribute across 3 replicate pairs. Since I am running Xen I need to
disable-direct-io and that slows things down quite a bit. My thought was
to move the replicate / distribute to the backend server config so that
self-heal can happen on the faster backend rather than on the frontend
client with disable-direct-io.
Does this
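For context, a rough sketch of the client mount with direct I/O disabled that the poster refers to (the volfile path and mount point are hypothetical; newer releases spell the option --direct-io-mode=disable):

  # GlusterFS 2.x-era FUSE mount for file-backed Xen images.
  glusterfs --disable-direct-io-mode -f /etc/glusterfs/client.vol /mnt/gluster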
2013 Mar 20
2
Geo-replication broken in 3.4 alpha2?
...essfully with 3.4 alpha or if I'm missing something obvious.
Output of gluster volume info:
Volume Name: vmstorage
Type: Distributed-Replicate
Volume ID: a800e5b7-089e-4b55-9515-c9cc72502aea
Status: Started
Number of Bricks: 2 x 2 = 4
Transport-type: tcp
Bricks:
Brick1: mc1.ovirt.local:/gluster/brick0/vmstorage
Brick2: mc5.ovirt.local:/gluster/brick0/vmstorage
Brick3: mc1.ovirt.local:/gluster/brick1/vmstorage
Brick4: mc5.ovirt.local:/gluster/brick1/vmstorage
Options Reconfigured:
performance.stat-prefetch: off
performance.io-cache: off
performance.read-ahead: off
performance.quick-read: off
netw...
2008 Sep 15
0
Trace log of unify when glusterfs freezes
...ly=@0x524cd0
2008-09-15 20:18:24 E [client-protocol.c:3310:client_getdents_cbk] brick-ns: no proper reply from server, returning ENOTCONN
2008-09-15 20:18:24 W [client-protocol.c:1711:client_closedir] brick-ns: no proper fd found, returning
2008-09-15 20:19:14 W [client-protocol.c:205:call_bail] brick0: activating bail-out. pending frames = 1. last sent = 2008-09-15 20:18:24. last received = 2008-09-15 20:17:37 transport-timeout = 42
2008-09-15 20:19:14 C [client-protocol.c:212:call_bail] brick0: bailing transport
2008-09-15 20:19:14 W [client-protocol.c:205:call_bail] brick1: activating bail-o...