
Displaying 20 results from an estimated 100000 matches similar to: "Top Reset"

2011 Sep 15
1
Gluster 3.2 configurations + translators
Hello, I'm a little confused about the Gluster configuration interface. I started with Gluster 3.2 and did all configuration using the gluster CLI. Now, while looking into how to tune performance, I found in the documentation, in many places, pieces of text-based configuration files, but usually with a warning that they are old and should not be used. Right now I'm trying to work out how to turn
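
For context, in the Gluster 3.x line tuning is done through the CLI rather than hand-edited volfiles, which is why the documentation warns against the old text configuration files. A minimal sketch, assuming a volume named myvol (the volume name and values are illustrative, not recommendations):

    # list the volume's current settings and the available options
    $ gluster volume info myvol
    $ gluster volume set help
    # set performance tunables via the CLI
    $ gluster volume set myvol performance.cache-size 256MB
    $ gluster volume set myvol performance.io-thread-count 16

The CLI regenerates the volfiles itself, so manual edits are overwritten and unnecessary.
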
2011 Jun 06
2
Gluster 3.2.0 and ucarp not working
Hello everybody. I have a problem setting up Gluster failover functionality. Based on the manual I set up ucarp, which is working well (tested with ping/ssh etc.). But when I use the virtual address for the Gluster volume mount and turn off one of the nodes, the machine/Gluster will freeze until the node is back online. My virtual IP is 3.200 and the machines' real IPs are 3.233 and 3.5. In the Gluster log I can see: [2011-06-06
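
A hedged aside on the freeze: with the native client the virtual IP only matters while fetching the volfile at mount time; afterwards the client connects to every brick directly, and the hang length is governed by the ping timeout. A sketch, assuming a replica volume myvol on servers 192.168.3.233 and 192.168.3.5 (addresses illustrative; the mount option name varies slightly across versions):

    # give the client a fallback volfile server at mount time
    $ mount -t glusterfs -o backupvolfile-server=192.168.3.5 192.168.3.233:/myvol /mnt/myvol
    # shorten how long clients block when a brick server disappears (default 42s)
    $ gluster volume set myvol network.ping-timeout 10
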
2017 Dec 12
0
reset-brick command questions
Hi Jorick, 1 - Why would I even need to specify the "HOSTNAME:BRICKPATH" twice? I just want to replace the disk and get it back into the volume. The reset-brick command can be used in different scenarios. One more case could be where you just want to change the hostname of a node's bricks to its IP address. In that case you follow the same steps but provide the IP
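
A sketch of that hostname-to-IP case, assuming a volume myvol, a node name gfs2 that resolves to 192.168.3.5, and a brick path /data/brick1 (all names illustrative):

    # take the brick offline
    $ gluster volume reset-brick myvol gfs2:/data/brick1 start
    # re-add the same brick, now addressed by IP
    $ gluster volume reset-brick myvol gfs2:/data/brick1 192.168.3.5:/data/brick1 commit force
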
2017 Jul 06
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 8:38 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> >> Eventually I can destroy and recreate this "export" volume again with the >> old names (ovirt0N.localdomain.local) if you give me the sequence of >> commands,
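
The requested command sequence is elided in this snippet; generically, destroying and recreating a replica volume looks roughly like the sketch below (hostnames, paths and the replica count are placeholders, and any oVirt-specific options are omitted):

    $ gluster volume stop export
    $ gluster volume delete export
    # brick directories must be wiped (or their volume-id xattr cleared) before reuse
    $ gluster volume create export replica 3 \
        ovirt01:/gluster/export/brick ovirt02:/gluster/export/brick ovirt03:/gluster/export/brick
    $ gluster volume start export
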
2017 Dec 11
2
reset-brick command questions
Hi, I'm trying to use the reset-brick command, but it's not completely clear to me. > > Introducing reset-brick command > > /Notes for users:/ The reset-brick command provides support to > reformat/replace the disk(s) represented by a brick within a volume. > This is helpful when a disk goes bad, etc. > That's what I need; the use case is a disk goes bad on
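
For that bad-disk case the intended flow is start, swap the disk, then commit back to the same host:path. A sketch, assuming volume myvol and brick gfs2:/data/brick1 (illustrative names):

    $ gluster volume reset-brick myvol gfs2:/data/brick1 start
    # replace the failed disk, mkfs and mount it again at /data/brick1, then:
    $ gluster volume reset-brick myvol gfs2:/data/brick1 gfs2:/data/brick1 commit force
    # self-heal then repopulates the empty brick from the surviving replica
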
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > > > On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> >> >> On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote: >> >>> >>> >>>> ... >>>> >>>> then
2017 Jul 06
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 6:55 AM, Atin Mukherjee <amukherj at redhat.com> wrote: > > >> > You can switch back to info mode the moment this is hit one more time with > the debug log enabled. What I'd need here is the glusterd log (with debug > mode) to figure out the exact cause of the failure. > > >> >> Let me know, >> thanks >> >>
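
For reference, one way to capture a debug-level glusterd log (hedged; exact flags and paths differ by distribution):

    # stop the service and run glusterd with debug logging
    $ systemctl stop glusterd
    $ glusterd --log-level DEBUG
    # reproduce the failure, then inspect
    $ less /var/log/glusterfs/glusterd.log

On RPM-based systems the same effect can usually be had by setting LOG_LEVEL=DEBUG in /etc/sysconfig/glusterd and restarting the service.
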
2017 Jul 05
2
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > > > On Wed, Jul 5, 2017 at 7:42 AM, Sahina Bose <sabose at redhat.com> wrote: > >> >> >>> ... >>> >>> then the commands I need to run would be: >>> >>> gluster volume reset-brick export
2017 Jul 06
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Thu, Jul 6, 2017 at 3:47 AM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> > wrote: > >> OK, so the log just hints at the following: >> >> [2017-07-05 15:04:07.178204] E [MSGID: 106123] >> [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit >>
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 6:39 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > OK, so the log just hints at the following: > > [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] > 0-management: Commit failed for operation Reset Brick on local node > [2017-07-05 15:04:07.178214] E [MSGID: 106123] >
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
OK, so the log just hints at the following: [2017-07-05 15:04:07.178204] E [MSGID: 106123] [glusterd-mgmt.c:1532:glusterd_mgmt_v3_commit] 0-management: Commit failed for operation Reset Brick on local node [2017-07-05 15:04:07.178214] E [MSGID: 106123] [glusterd-replace-brick.c:649:glusterd_mgmt_v3_initiate_replace_brick_cmd_phases] 0-management: Commit Op Failed. While going through the code,
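
Given the thread subject, the suspected culprit is the cluster op-version (reset-brick needs a sufficiently recent one). A hedged sketch for checking and raising it, using 3.10+ syntax and an illustrative target value:

    # show the current effective cluster op-version
    $ gluster volume get all cluster.op-version
    # raise it once every node runs the new version (value is illustrative)
    $ gluster volume set all cluster.op-version 31001
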
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
And what does the glusterd log indicate for these failures? On Wed, Jul 5, 2017 at 8:43 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > > > On Wed, Jul 5, 2017 at 5:02 PM, Sahina Bose <sabose at redhat.com> wrote: > >> >> >> On Wed, Jul 5, 2017 at 8:16 PM, Gianluca Cecchi < >> gianluca.cecchi at gmail.com> wrote: >> >>>
2017 Jul 10
0
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee <amukherj at redhat.com> >> wrote: >> >>> >>> >>> On Thu, Jul 6, 2017 at 5:26 PM, Gianluca Cecchi
2017 Jul 07
0
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
You'd need to allow some more time to dig into the logs. I'll try to get back on this by Monday. On Fri, Jul 7, 2017 at 2:23 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com> wrote: > On Thu, Jul 6, 2017 at 3:22 PM, Gianluca Cecchi <gianluca.cecchi at gmail.com > > wrote: > >> On Thu, Jul 6, 2017 at 2:16 PM, Atin Mukherjee <amukherj at redhat.com>
2017 Jul 05
1
op-version for reset-brick (Was: Re: [ovirt-users] Upgrading HC from 4.0 to 4.1)
On Wed, Jul 5, 2017 at 5:22 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > And what does glusterd log indicate for these failures? > See here in gzip format https://drive.google.com/file/d/0BwoPbcrMv8mvYmlRLUgyV0pFN0k/view?usp=sharing It seems that on each host the peer files have been updated with a new entry "hostname2": [root at ovirt01 ~]# cat
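
For anyone inspecting the same thing: peer records live under /var/lib/glusterd/peers/, one file per peer UUID, and an extra known address for a peer appears as an additional hostnameN line. An illustrative (invented) example:

    $ cat /var/lib/glusterd/peers/<peer-uuid>
    uuid=<peer-uuid>
    state=3
    hostname1=ovirt02.localdomain.local
    hostname2=10.10.2.102
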
2017 Aug 01
3
How to delete geo-replication session?
Hi, I would like to delete a geo-replication session on my GlusterFS 3.8.11 replica 2 volume in order to re-create it. Unfortunately the "delete" command does not work, as you can see below: $ sudo gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete Staging failed on arbiternode.domain.tld. Error: Geo-replication session between myvolume and
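
For reference, the usual teardown is a stop (force if needed) followed by delete, and the delete has to reach every master node, which is why a down or unreachable node makes staging fail. A sketch using the poster's names:

    $ gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo stop force
    $ gluster volume geo-replication myvolume gfs1geo.domain.tld::myvolume-geo delete
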
2017 Aug 07
0
How to delete geo-replication session?
Hi, I would really like to get rid of this geo-replication session as I am stuck with it right now. For example, I can't even stop my volume as it complains about that geo-replication... Can someone let me know how I can delete it? Thanks > -------- Original Message -------- > Subject: How to delete geo-replication session? > Local Time: August 1, 2017 12:15 PM > UTC Time: August
2017 Aug 08
2
How to delete geo-replication session?
Do you see any session listed when the geo-replication status command is run (without any volume name): gluster volume geo-replication status. Volume stop force should work even if a geo-replication session exists. From the error it looks like the node "arbiternode.domain.tld" in the master cluster is down or not reachable. regards Aravinda VK On 08/07/2017 10:01 PM, mabi wrote: > Hi, >
2018 Feb 01
0
How to trigger a resync of a newly replaced empty brick in replicate config ?
You do not need reset-brick if the brick path does not change. Replace the disk, format and mount the brick, then run gluster v start volname force. To start self-heal, just run gluster v heal volname full. On Thu, Feb 1, 2018 at 6:39 PM, Alessandro Ipe <Alessandro.Ipe at meteo.be> wrote: > Hi, > > > My volume home is configured in replicate mode (version 3.12.4) with the bricks >
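
Spelled out, that suggestion looks like this (a sketch; the volume name and brick path are placeholders):

    # after formatting and mounting the new disk at the old brick path
    $ gluster volume start volname force
    # trigger a full self-heal to repopulate the empty brick
    $ gluster volume heal volname full
    # watch progress
    $ gluster volume heal volname info
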
2018 Feb 01
2
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, My volume home is configured in replicate mode (version 3.12.4) with the bricks server1:/data/gluster/brick1 and server2:/data/gluster/brick1. server2:/data/gluster/brick1 was corrupted, so I killed the gluster daemon for that brick on server2, unmounted it, reformatted it, remounted it and did a > gluster volume reset-brick home server2:/data/gluster/brick1 server2:/data/gluster/brick1 commit
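
After a commit like the one quoted above, a hedged way to confirm the brick rejoined and healing is progressing:

    $ gluster volume status home
    $ gluster volume heal home info
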