Displaying 20 results from an estimated 20000 matches similar to: "volume status report brick port NA while online status is Y after restore brick"
2013 Aug 20
1
files got sticky permissions T--------- after gluster volume rebalance
Dear gluster experts,
We're running glusterfs 3.3 and we have run into file permission problems after a
gluster volume rebalance. Files got sticky permissions T--------- after the
rebalance, which breaks our clients' normal fops unexpectedly.
Has anyone seen this issue?
Thank you for your help.
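If it helps to narrow this down, here is a quick check of whether the T--------- files are DHT link files rather than real data (a sketch; the path below is just a placeholder for an affected file on one of the bricks):
getfattr -d -m . -e hex /data/brick1/affected_file
# a pure DHT link file is zero bytes, carries only the sticky bit, and has the
# trusted.glusterfs.dht.linkto xattr naming the subvolume that holds the real data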
2023 Sep 29
0
gluster volume status shows -> Online "N" after node reboot.
Hi list,
I am using a gluster replica volume (3 nodes) in an oVirt environment, and
after setting one node to maintenance mode and rebooting it, the "Online"
flag in gluster volume status does not go back to "Y".
[root at node1 glusterfs]# gluster volume status
Status of volume: my_volume
Gluster process TCP Port RDMA Port Online Pid
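When a brick stays offline like this after a reboot, a common first step is to ask glusterd to respawn the missing brick process (a sketch, using the my_volume name from above):
gluster volume start my_volume force    # restarts bricks that are not running; already-online bricks are untouched
gluster volume status my_volume         # the Online column should now show "Y" for the brick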
2013 Jan 03
0
Resolve brick failed in restore
Hi,
I have a lab with 10 machines acting as storage servers for some compute
machines, using glusterfs to distribute the data as two volumes.
Created using:
gluster volume create vol1 192.168.10.{221..230}:/data/vol1
gluster volume create vol2 replica 2 192.168.10.{221..230}:/data/vol2
and mounted on the client and server machines using:
mount -t glusterfs 192.168.10.221:/vol1 /mnt/vol1
mount
2017 Oct 24
0
brick is down but gluster volume status says it's fine
On Tue, Oct 24, 2017 at 11:13 PM, Alastair Neil <ajneil.tech at gmail.com>
wrote:
> gluster version 3.10.6, replica 3 volume, daemon is present but does not
> appear to be functioning
>
> peculiar behaviour. If I kill the glusterfs brick daemon and restart
> glusterd then the brick becomes available - but one of my other volumes
> bricks on the same server goes down in
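One way to cross-check what glusterd reports against what is actually running (a sketch; VOLNAME is a placeholder):
gluster volume status VOLNAME    # note the Pid column for each brick
ps aux | grep glusterfsd         # the brick daemons actually running on the server
# a brick listed as online but missing from the process list points to stale state in glusterd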
2017 Jul 17
1
Gluster set brick online and start sync.
Hello everybody,
Please help me fix a problem.
I have a distributed-replicated volume between two servers. On each
server I have 2 RAID-10 arrays that are replicated between the servers.
Brick gl1:/mnt/brick1/gm0 49153 0 Y 13910
Brick gl0:/mnt/brick0/gm0 N/A N/A N N/A
Brick gl0:/mnt/brick1/gm0 N/A
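The usual sequence for bringing those bricks back and letting them catch up is roughly the following (a sketch; it assumes the volume is named gm0, matching the brick directories above):
gluster volume start gm0 force   # respawn the brick processes shown as N / N/A
gluster volume heal gm0 full     # trigger a full self-heal so the bricks resync
gluster volume heal gm0 info     # watch the count of entries still pending heal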
2018 Feb 25
0
Re-adding an existing brick to a volume
.gluster and the attrs are already in that folder, so it would not connect it as a brick.
I don't think there is an option to "reconnect the brick back".
What I did many times: delete .gluster and reset the attrs on the folder,
connect the brick, and then update those attrs with stat.
Commands example here:
http://lists.gluster.org/pipermail/gluster-users/2018-January/033352.html
Vlad
On Sun, Feb 25, 2018
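For reference, the attr reset described above usually amounts to something like this (a sketch; the brick path is a placeholder, and the exact steps are in the linked post):
setfattr -x trusted.glusterfs.volume-id /gluster/VOLUME/brick0/brick   # drop the old volume-id xattr
setfattr -x trusted.gfid /gluster/VOLUME/brick0/brick                  # drop the old root gfid xattr
rm -rf /gluster/VOLUME/brick0/brick/.glusterfs                         # remove the old .glusterfs metadata tree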
2018 Feb 25
1
Re-adding an existing brick to a volume
Let me see if I understand this.
Remove attrs from the brick and delete the .glusterfs folder. Data stays
in place. Add the brick to the volume.
Since most of the data is the same as on the actual volume it does not
need to be synced, and the heal operation finishes much faster.
Do I have this right?
Kind regards,
Mitja
On 25/02/2018 17:02, Vlad Kopylov wrote:
> .gluster and attr already in
2013 Dec 23
0
How to ensure the new data write to other bricks, if one brick offline of gluster distributed volume
Hi all,
How can I ensure that new data is written to the other bricks if one brick of a gluster distributed volume is offline, i.e. that the client can write data that originally belonged on the offline brick to the other online bricks?
The distributed volume breaks even if only one brick is offline; that is very unreliable.
When the failed brick comes back online, how does it rejoin the original distributed volume? I don't want the newly written data to
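As far as I know, a pure distribute volume has no redundancy, so creates that hash to the offline brick simply fail; the usual answer is to add replication. A rough sketch, assuming a two-brick distribute volume (server and brick names are placeholders):
gluster volume add-brick VOLNAME replica 2 server3:/data/brick1 server4:/data/brick1
gluster volume heal VOLNAME full   # populate the new replica bricks from the existing ones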
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi,
Could someone point me to documentation for this, or explain it? I can't
find it myself.
Do we have any other useful resources besides doc.gluster.org? As far as I can
see, many gluster options are not described there, or there is no explanation
of what they actually do...
On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME rep 2
server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added the empty new brick to the volume:
gluster volume add-brick VOLUME rep 3
2018 Apr 18
1
Replicated volume read request are served by remote brick
I have created a 2 brick replicated volume.
gluster> volume status
Status of volume: storage
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick master:/glusterfs/bricks/storage/mountpoint
49153 0 Y 5301
Brick worker1:/glusterfs/bricks/storage/mountpoint
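If the goal is to keep reads on the local copy, the option I would look at is cluster.choose-local (a sketch, using the storage volume name from the status output; whether it changes anything depends on the gluster version):
gluster volume set storage cluster.choose-local on   # prefer the local brick for reads when the client has one
gluster volume get storage cluster.choose-local      # confirm the value in effect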
2024 Feb 20
0
How to replace a brick in a dispersed volume?
Hi,
I set up a 4+2 dispersed volume and it has worked well so far.
gluster volume info
Volume Name: disperseVol
Type: Disperse
Volume ID: 35386b55-829c-4bac-bdba-609427269cf4
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x (4 + 2) = 6
Transport-type: tcp
Bricks:
Brick1: 192.168.129.227:/mnt/gluster/disperseVol
Brick2:
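For a dispersed volume the replacement is normally done with replace-brick, after which the self-heal daemon reconstructs the missing fragments (a sketch; the new brick path is a placeholder):
gluster volume replace-brick disperseVol \
    192.168.129.227:/mnt/gluster/disperseVol \
    192.168.129.227:/mnt/gluster/disperseVol_new commit force
gluster volume heal disperseVol info   # watch the replaced brick being healed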
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume
type first?
Cheers,
Laura B
On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
> Hi Anatoliy,
>
> The heal command is basically used to heal any mismatching contents
> between replica copies of the files.
> For the command "gluster volume heal <volname>"
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello,
We have a very fresh gluster 3.10.10 installation.
Our volume is created as a distributed volume, 9 bricks, 96TB in total
(87TB after the 10% gluster disk space reservation).
For some reason I can't "heal" the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has
been unsuccessful on bricks that are down. Please check if all brick
processes
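That message generally means at least one brick process is not running; a quick way to see which one and why (a sketch; log paths assume the default /var/log/glusterfs location):
gluster volume status gv0               # look for bricks with Online "N" or no Pid
less /var/log/glusterfs/bricks/*.log    # the brick log usually records why the process exited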
2017 Aug 22
0
Brick count limit in a volume
Hi, I think this is the line limiting brick count:
https://github.com/gluster/glusterfs/blob/c136024613c697fec87aaff3a070862b92c57977/cli/src/cli-cmd-parser.c#L84
Can gluster-devs increase this limit? Should I open a github issue?
On Mon, Aug 21, 2017 at 7:01 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi,
> Gluster version is 3.10.5. I am trying to create a 5500 brick
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik,
Thanks a lot for the explanation.
Does that mean the health of a distributed volume can only be checked with the
"gluster volume status" command?
And one more question: cluster.min-free-disk is 10% by default. What
kind of "side effects" could we face if this option is reduced to,
for example, 5%? Could you point me to any best-practice document(s)?
Regards,
Anatoliy
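For reference, cluster.min-free-disk is an ordinary volume option and can be inspected and changed with volume get/set (a sketch; gv0 is the volume from this thread):
gluster volume get gv0 cluster.min-free-disk      # show the current value (10% by default)
gluster volume set gv0 cluster.min-free-disk 5%   # lower the reserve; DHT then avoids bricks below 5% free for new files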
2017 Aug 23
0
Brick count limit in a volume
Can you also please provide more detail on why that many bricks are needed
in a single volume?
Thanks,
Vijay
On Wed, Aug 23, 2017 at 12:43 AM, Atin Mukherjee <amukherj at redhat.com>
wrote:
> An upstream bug would be ideal as github issue is mainly used for
> enhancements. In the mean time, could you point to the exact failure shown
> at the command line and the log entry from
2023 Jun 05
1
How to find out data alignment for LVM thin volume brick
Hello,
I am preparing a brick as an LVM thin volume for a test slave node using this documentation:
https://docs.gluster.org/en/main/Administrator-Guide/formatting-and-mounting-bricks/
but I am confused regarding the right "--dataalignment" option to be used for pvcreate. The documentation mentions the following under point 1:
"Create a physical volume(PV) by using the pvcreate
2017 Aug 23
2
Brick count limit in a volume
An upstream bug would be ideal, as github issues are mainly used for
enhancements. In the meantime, could you point to the exact failure shown
at the command line and the log entry from cli.log?
On Wed, Aug 23, 2017 at 12:10 AM, Serkan Çoban <cobanserkan at gmail.com>
wrote:
> Hi, I think this is the line limiting brick count:
>
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from the above info, please provide the glusterd logs and cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
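A sketch of collecting what is being asked for here (log locations assume the default /var/log/glusterfs directory; the glusterd log file name varies by version):
gluster volume info > volume_info.txt
gluster volume status > volume_status.txt
gluster peer status > peer_status.txt
# then attach /var/log/glusterfs/cmd_history.log and the glusterd log from each node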