search for: volname

Displaying 20 results from an estimated 316 matches for "volname".

2018 Mar 04
1
tiering
...detach start volume tier detach start: failed: Pre Validation failed on labgfs51. Found stopped brick labgfs51:/gfs/p1-tier/mount. Use force option to remove the offline brick Tier command failed 'force' results in Usage: # gluster volume tier labgreenbin detach start force Usage: volume tier <VOLNAME> status volume tier <VOLNAME> start [force] volume tier <VOLNAME> stop volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force] volume tier <VOLNAME> detach <start|stop|status|commit|[force]> So trying to remove the brick: # gluster v remov...
2017 Dec 21
3
Wrong volume size with df
Sure! > 1 - output of gluster volume heal <volname> info Brick pod-sjc1-gluster1:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick1/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster1:/data/brick2/gv0 Status: Connected Number of entries: 0 Brick pod-sjc1-gluster2:/data/brick2/gv0 Statu...
2017 Jun 20
2
trash can feature, crashed???
...glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdb46a) [0x7f30460bc46a] -->/usr/lib64/glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdaf2d) [0x7f30460bbf2d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f3051532255] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=intermediate01 -o features.trash=on --gd-workdir=/var/lib/glusterd [2017-06-16 16:08:14.453290] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdb46a) [0x7f30460bc46a] -->/usr/lib64/glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdaf2d) [0x7f30460bbf2d] -->...
2018 Mar 16
2
Disperse volume recovery and healing
Xavi, does that mean that even if every node was rebooted one at a time even without issuing a heal that the volume would have no issues after running gluster volume heal [volname] when all bricks are back online? ________________________________ From: Xavi Hernandez <jahernan at redhat.com> Sent: Thursday, March 15, 2018 12:09:05 AM To: Victor T Cc: gluster-users at gluster.org Subject: Re: [Gluster-users] Disperse volume recovery and healing Hi Victor, On Wed, M...
2018 Jan 02
0
Wrong volume size with df
...combined instead of just one brick. Not sure if that gives you any clues for this... maybe adding another brick to the pool would have a similar effect? On Thu, Dec 21, 2017 at 11:44 AM, Tom Fite <tomfite at gmail.com> wrote: > Sure! > > > 1 - output of gluster volume heal <volname> info > > Brick pod-sjc1-gluster1:/data/brick1/gv0 > Status: Connected > Number of entries: 0 > > Brick pod-sjc1-gluster2:/data/brick1/gv0 > Status: Connected > Number of entries: 0 > > Brick pod-sjc1-gluster1:/data/brick2/gv0 > Status: Connected > Number of e...
2018 Mar 16
0
Disperse volume recovery and healing
On Fri, Mar 16, 2018 at 4:57 AM, Victor T <hero_of_nothing_1 at hotmail.com> wrote: > Xavi, does that mean that even if every node was rebooted one at a time > even without issuing a heal that the volume would have no issues after > running gluster volume heal [volname] when all bricks are back online? > No. After bringing up one brick and before stopping the next one, you need to be sure that there are no damaged files. You shouldn't reboot a node if "gluster volume heal <volname> info" shows damaged files. The command "gluster volu...
2017 Dec 11
2
reset-brick command questions
...the disk(s) represented by a brick within a volume. > This is helpful when a disk goes bad etc > That's what I need, the use case is a disk goes bad on a disperse gluster node and we want to replace it with a new disk > > Start reset process - > > |gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start | This works, I can see in gluster volume status the brick is not there anymore > > The above command kills the respective brick process. Now the brick > can be reformatted. > > To restart the brick after modifying configuration - > > |gluster volume...
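The reset-brick cycle discussed in this thread can be sketched as a short shell script. The volume and brick names below are hypothetical placeholders, and the gluster commands are echoed through a wrapper rather than executed, so the sequence can be reviewed before running it for real:

```shell
#!/bin/sh
# Hypothetical names; substitute your own volume and brick path.
VOLNAME=myvol
BRICK=node1:/data/brick1/myvol

# Echo each command instead of executing it; drop the wrapper to run for real.
run() { echo "$@"; }

# Stop the brick process so the disk behind it can be replaced/reformatted.
run gluster volume reset-brick "$VOLNAME" "$BRICK" start

# ... replace or reformat the disk, remount it at the same path ...

# Bring the (same-path) brick back into the volume.
run gluster volume reset-brick "$VOLNAME" "$BRICK" "$BRICK" commit force
```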
2018 Mar 18
1
Disperse volume recovery and healing
No. After bringing up one brick and before stopping the next one, you need to be sure that there are no damaged files. You shouldn't reboot a node if "gluster volume heal <volname> info" shows damaged files. What happens in this case then? I'm thinking about a situation where the servers are kept in an environment that we don't control - i.e. the cloud. If the VMs are forcibly rebooted without enough time to complete a heal before the next one goes down, the...
2017 Jun 20
0
trash can feature, crashed???
...xlator/mgmt/glusterd.so(+0xdb46a) [0x7f30460bc46a] -->/usr/lib64/glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdaf2d) [0x7f30460bbf2d] -->/lib64/libglusterfs.so.0(runner_log+0x115) [0x7f3051532255] ) 0-management: Ran script: /var/lib/glusterd/hooks/1/set/post/S30samba-set.sh --volname=intermediate01 -o features.trash=on --gd-workdir=/var/lib/glusterd [2017-06-16 16:08:14.453290] I [run.c:191:runner_log] (-->/usr/lib64/glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdb46a) [0x7f30460bc46a] -->/usr/lib64/glusterfs/3.10.1/xlator/mgmt/glusterd.so(+0xdaf2d) [0...
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following - 1 - output of gluster volume heal <volname> info 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log 3 - output of gluster volume info <volname> 4 - output of gluster volume status <volname> 5 - Also, could you try unmounting the volume and mounting it again and check the size? ----- Original Messag...
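The diagnostics requested above can be gathered with a few CLI calls. A minimal sketch, assuming a hypothetical volume name `gv0`; commands are echoed through a wrapper so the script is safe to review before running:

```shell
#!/bin/sh
# Hypothetical volume name; adjust to match your setup.
VOLNAME=gv0

# Echo instead of executing; remove the wrapper to run for real.
run() { echo "$@"; }

run gluster volume heal "$VOLNAME" info   # 1 - pending heal entries per brick
run gluster volume info "$VOLNAME"        # 3 - volume layout and options
run gluster volume status "$VOLNAME"      # 4 - brick processes and ports
# 2 - the client log lives under /var/log/glusterfs/ and is named after
#     the mount point and volume, e.g. <mountpoint>-$VOLNAME.log
```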
2012 Oct 22
1
How to add new bricks to a volume?
...p=2) volume, which consists of only two bricks and is mounted by multiple clients. Can I just use the following commands to add new bricks without stopping the services using the volume, as mentioned? 1) gluster peer probe new-node1 2) gluster peer probe new-node2 3) gluster volume add-brick VOLNAME new-brick1 new-brick2 4) gluster volume rebalance VOLNAME fix-layout start 5) gluster volume rebalance VOLNAME migrate-data start Is there something I missed? Thanks a lot. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://supercolony.gluster.org/pi...
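The five steps above can be sketched as a small shell script. Node and brick names are hypothetical, and the gluster commands are echoed rather than executed so the sequence can be reviewed first:

```shell
#!/bin/sh
# Hypothetical cluster names; adjust before use.
VOLNAME=gv0
NEW_BRICK1=new-node1:/data/brick1/gv0
NEW_BRICK2=new-node2:/data/brick1/gv0

# Echo each command for review; drop the wrapper to execute for real.
run() { echo "$@"; }

run gluster peer probe new-node1
run gluster peer probe new-node2
# For a replica 2 volume, bricks must be added in multiples of the
# replica count, so both new bricks go in one add-brick call.
run gluster volume add-brick "$VOLNAME" "$NEW_BRICK1" "$NEW_BRICK2"
# Spread the directory layout over the new bricks, then move data.
run gluster volume rebalance "$VOLNAME" fix-layout start
run gluster volume rebalance "$VOLNAME" start
```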
2018 Mar 15
0
Disperse volume recovery and healing
...triggered once the bricks come online, however there was a bug (https://bugzilla.redhat.com/show_bug.cgi?id=1547662) that could cause delays in the self-heal process. This bug should be fixed in the next version. Meantime you can force self-heal to progress by issuing "gluster volume heal <volname>" commands each time it seems to have stopped. Once the output of "gluster volume heal <volname> info" reports 0 pending files on all bricks, you can proceed with the maintenance of the next server. No need to do any rebalance for down bricks. Rebalance is basically needed...
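The advice above — repeatedly re-issuing the heal and waiting until `heal info` reports 0 pending files on all bricks — can be sketched as a polling loop. The `gluster` call is stubbed with a function here so the sketch runs without a cluster; replace the stub with the real command in practice:

```shell
#!/bin/sh
# Hypothetical volume name.
VOLNAME=gv0

# Stand-in for: gluster volume heal "$VOLNAME" info
# (the real command prints a "Number of entries:" line per brick).
gluster_stub() { echo "Number of entries: 0"; }

# Sum the per-brick pending-entry counts.
pending() {
  gluster_stub | awk '/Number of entries:/ {sum += $4} END {print sum+0}'
}

while [ "$(pending)" -gt 0 ]; do
  # Re-kick self-heal in case it stalled (see bug 1547662 above).
  echo "gluster volume heal $VOLNAME"
  sleep 60
done
echo "no pending heals; safe to service the next server"
```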
2017 Dec 19
3
Wrong volume size with df
I have a glusterfs setup with distributed disperse volumes 5 * ( 4 + 2 ). After a server crash, "gluster peer status" reports all peers as connected. "gluster volume status detail" shows that all bricks are up and running with the right size, but when I use df from a client mount point, the size displayed is about 1/6 of the total size. When browsing the data, they seem to
2017 Dec 12
0
reset-brick command questions
...lace the disk(s) represented by a brick within a volume. This is helpful when a disk goes bad etc That's what I need, the use case is a disk goes bad on a disperse gluster node and we want to replace it with a new disk > Start reset process - > gluster volume reset-brick VOLNAME HOSTNAME:BRICKPATH start This works, I can see in gluster volume status the brick is not there anymore > The above command kills the respective brick process. Now the brick can be reformatted. > To restart the brick after modifying configuration - glust...
2018 Mar 20
0
Disperse volume recovery and healing
...e case of "file damage," it would show up as files > that could not be healed in logfiles or gluster volume heal [volume] info? > If the damage affects more bricks than the volume redundancy, then probably yes. These files or directories will appear in "gluster volume heal <volname> info" permanently. In some cases, especially for directories, they could be manually healed. But this is always something that needs to be done with extra care and depends on each case, so I don't recommend doing it without help from someone who knows what is happening. Say we have ac...
2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need to reset the brick if the brick path does not change. Reformat and remount the replacement brick, then run gluster v start volname force. To start self-heal, just run gluster v heal volname full. On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote: > Hi, > > > My volume home is configured in replicate mode (version 3.12.4) with the bricks > server1:/data/gluster/brick1 > se...
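The resync sequence suggested in this reply can be sketched in two commands. The volume name is taken from the thread but stands in for your own; commands are echoed rather than executed so the sketch is safe to review:

```shell
#!/bin/sh
# Volume name from the thread; substitute your own.
VOLNAME=home

# Echo instead of executing; drop the wrapper to run for real.
run() { echo "$@"; }

# After reformatting and remounting the brick at its original path:
run gluster volume start "$VOLNAME" force   # respawn the brick process
run gluster volume heal "$VOLNAME" full     # trigger a full self-heal onto the empty brick
```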
2018 Mar 05
0
tiering
...ine brick > > > > Tier command failed > > > > > > > > 'force' results in Usage: > > > > # gluster volume tier labgreenbin detach start force > > > > Usage: > > > > volume tier <VOLNAME> status > > > > volume tier <VOLNAME> start [force] > > > > volume tier <VOLNAME> stop > > > > volume tier <VOLNAME> attach [<replica COUNT>] <NEW-BRICK>... [force] > > > > volume tier &...
2012 Jun 12
1
What is glustershd ?
...can't find any documentation which explains the role of glustershd. The only thing I (think I) understand is that glustershd-server.vol is only generated for replicated volumes. It contains a cluster/replicate volume which replicates my bricks. It's the same pattern I found in <volname>-fuse.vol. I guess <volname>-fuse.vol is the volfile used by gluster clients to mount the volume. But what about glustershd-server.vol, and the gluster instance using it? What part of gluster communicates with glustershd? Thanks for your help! Philippe -------------- next part -------...
2018 Feb 27
0
Quorum in distributed-replicate volume
..., do a peer probe to add them to the cluster, and then > > gluster volume create palantir replica 3 arbiter 1 [saruman brick] > [gandalf brick] [arbiter 1] [azathoth brick] [yog-sothoth brick] [arbiter > 2] [cthulhu brick] [mordiggian brick] [arbiter 3] > gluster volume add-brick <volname> replica 3 arbiter 1 <arbiter 1> <arbiter 2> <arbiter 3> is the command. It will convert the existing volume to arbiter volume and add the specified bricks as arbiter bricks to the existing subvols. Once they are successfully added, self heal should start automatically and you...
2018 Feb 01
2
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, My volume home is configured in replicate mode (version 3.12.4) with the bricks server1:/data/gluster/brick1 server2:/data/gluster/brick1 server2:/data/gluster/brick1 was corrupted, so I killed gluster daemon for that brick on server2, umounted it, reformated it, remounted it and did a > gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit