Displaying 20 results from an estimated 120 matches similar to: "RMAN backups on Glusters"
2017 Sep 12
2
Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode
I am looking to upgrade the Gluster server from version 3.5.x to 3.8.x.
I have already tried it in offline upgrade mode and that works; I am
interested in knowing whether this server upgrade can also be done in
online upgrade mode.
Many thanks in advance.
--
- Hemant Mamtora
2017 Oct 26
2
Change IP address of few nodes in GFS 3.8
Folks,
We have a 12-node replicated Gluster volume laid out as 2 x 6.
I need to change the IP address on 6 of the 12 nodes while keeping the
host names the same. The 6 nodes whose IPs I plan to change are part of
the 3 sub-volumes.
Is this possible, and if so, is there a formal process for telling the
cluster that an IP address has changed?
My requirement is to keep the volume up and not bring it down. I can
change IP
2017 Sep 13
1
Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode
Thanks for your reply.
So the way I understand it, the upgrade can be done, but with downtime,
meaning that no clients are writing to the gluster volume because the
volume is stopped.
But after the upgrade we will still have the data on the gluster volume
that we had before the upgrade (just with downtime).
- Hemant
On 9/13/17 2:33 PM, Diego Remolina wrote:
> Nope, not gonna work... I
2017 Sep 13
0
Gluster Server upgrade from 3.5 to 3.8 in online upgrade mode
Nope, not gonna work... I could never go even from 3.6 to 3.7 without
downtime because of the settings change, see:
http://lists.gluster.org/pipermail/gluster-users.old/2015-September/023470.html
Even after changing options on the older 3.6.x I had installed, my new
3.7.x server would not connect, so I had to pretty much stop gluster on
all servers, update to 3.7.x offline, then start gluster and
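For readers following the same path, a minimal sketch of the offline upgrade described above, assuming an RPM-based distro with the target Gluster version's repository already enabled (package and service names are assumptions; adapt to your platform):

  # on every server, after client I/O to the volume has stopped
  systemctl stop glusterd
  pkill glusterfsd                 # stop the brick processes
  pkill glusterfs                  # stop any remaining self-heal/fuse helper processes
  yum -y update 'glusterfs*'       # pull the new server packages from the enabled repo
  systemctl start glusterd
  gluster volume status            # confirm bricks are back online before resuming clients
  # once every server is upgraded, bump the cluster op-version (placeholder value)
  gluster volume set all cluster.op-version <new-op-version>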
2005 Jun 06
1
FW: RMAN backup error
2008 Jul 24
2
ORA-19870 and ORA-19502 During RMAN restore to OCFS2 filesystem
Hi,
When attempting to restore to a Linux RHEL5 OCFS2 filesystem, I received the
following error during the RMAN restore for nearly all of the datafiles
being restored, with the exception of a couple of smaller datafiles
that were under 2 GB.
ORA-19870: error reading backup piece /db/dumps/TR1_1/rmanbackup/TR1_88_1
ORA-19502: write error on file "/db/devices/db1/PR2/pr2_1/pr2.data1",
2017 Nov 03
0
Change IP address of few nodes in GFS 3.8
Thanks Atin.
Peer probes were done using FQDNs and I was able to make these changes.
The only thing I had to do on the rest of the nodes was to flush the nscd cache; after that everything was fine and I did not have to restart the gluster services on those nodes.
- Hemant
On 10/30/17 11:46 AM, Atin Mukherjee wrote:
If the gluster nodes are peer probed through FQDNs then you're good. If they're done
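A minimal sketch of the nscd flush mentioned above, assuming nscd is the name-service caching daemon in use on those nodes (the FQDN is a placeholder):

  nscd --invalidate=hosts          # drop the cached host lookups
  getent hosts <peer-fqdn>         # verify the new IP is now returned for the peer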
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
How critical is the above?
I get plenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see that $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
This means the shd (self-heal daemon) client is not able to establish a
connection with the brick on port 49155. This can happen if glusterd has
handed back a stale port that is not the one the brick is actually
listening on. If you killed any brick process with SIGKILL instead of
SIGTERM this is expected, as the portmap_signout is not received by
glusterd in that case and the old portmap entry is
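A minimal sketch of how one might confirm the stale-port situation described above and recover from it (volume name is a placeholder):

  gluster volume status <volname>            # port glusterd advertises for each brick
  ss -ltnp | grep glusterfsd                 # ports the brick processes actually listen on
  # if the two disagree, restarting the brick re-registers its port with glusterd
  gluster volume start <volname> force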
2017 Oct 30
0
Change IP address of few nodes in GFS 3.8
If the gluster nodes are peer probed through FQDNs then you're good. If
they're done through IPs then for every node you'd need to replace the old
IP with the new IP in all the files under /var/lib/glusterd, rename the
files that have the old IP in their names, and restart all gluster
services. I used to have a script for this which I shared earlier on the
users ML; I need to dig through my mailbox
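A minimal sketch of the replacement described above, using purely illustrative old/new addresses 10.0.0.1 and 10.0.0.2; stop glusterd on the node and take a backup of /var/lib/glusterd before editing:

  systemctl stop glusterd
  cp -a /var/lib/glusterd /var/lib/glusterd.bak
  # rewrite the old IP inside every file
  grep -rl '10.0.0.1' /var/lib/glusterd | xargs sed -i 's/10\.0\.0\.1/10.0.0.2/g'
  # rename any files that carry the old IP in their name
  find /var/lib/glusterd -depth -name '*10.0.0.1*' | while read -r f; do
      mv "$f" "${f//10.0.0.1/10.0.0.2}"
  done
  systemctl start glusterd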
2017 Aug 29
0
modifying data via fuse causes heal problem
hi there
I am running 3.10.5 and have 3 peers with volumes in replication.
Each time I copy some data on a client (which is also a peer)
I see something like this:
# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs
has been successful
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 0
Brick
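Output of this shape typically comes from the heal-count statistics query; a minimal example, reusing the volume name from the excerpt above:

  gluster volume heal QEMU-VMs statistics heal-count
  gluster volume heal QEMU-VMs info          # per-brick detail of any pending entries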
2023 Mar 22
0
subnets other than bricks' - volume availability ?
Hi guys.
I am confused; I cannot remember whether volumes were always
unavailable to clients outside of a volume's subnet,
or whether this is new, or...
something is just not working here for me... I wonder.
eg.
...
Bricks:
Brick1: 10.1.0.100:/devs/00.GLUSTERs/VMs
Brick2: 10.1.0.101:/devs/00.GLUSTERs/VMs
Brick3: 10.1.0.99:/devs/00.GLUSTERs/VMs-arbiter (arbiter)
and on a client with an IP of
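One thing worth checking in a situation like this (an assumption about the cause, not a confirmed diagnosis) is whether the volume restricts clients by address via auth.allow; the volume name VMs below is inferred from the brick paths and the address values are hypothetical:

  gluster volume get VMs auth.allow
  # the default '*' permits all clients; to allow an additional subnet explicitly:
  gluster volume set VMs auth.allow '10.1.0.*,192.168.0.*'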
2017 Sep 28
1
one brick one volume process dies?
On 13/09/17 20:47, Ben Werthmann wrote:
> These symptoms appear to be the same as I've recorded in
> this post:
>
> http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
>
> On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee
> <atin.mukherjee83 at gmail.com
> <mailto:atin.mukherjee83 at gmail.com>> wrote:
>
> Additionally the
2011 Nov 17
3
merging corpora and metadata
Greetings!
I lose all my metadata after concatenating corpora. This is an
example of what happens:
> meta(corpus.1)
   MetaID cid fid selfirst selend                      fname
1       0   1  11     2169   2518 WCPD-2001-01-29-Pg217.scrb
2       0   1  14     9189   9702  WCPD-2003-01-13-Pg39.scrb
3       0   1  14     2109   2577  WCPD-2003-01-13-Pg39.scrb
....
....
17      0
2017 Sep 13
0
one brick one volume process dies?
Please send me the logs as well, i.e. the glusterd logs and cmd_history.log.
On Wed, Sep 13, 2017 at 1:45 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
>
>
> On 13/09/17 06:21, Gaurav Yadav wrote:
>
>> Please provide the output of gluster volume info, gluster volume status
>> and gluster peer status.
>>
>> Apart from above info, please provide glusterd logs,
2017 Sep 13
0
one brick one volume process dies?
These symptoms appear to be the same as I've recorded in this post:
http://lists.gluster.org/pipermail/gluster-users/2017-September/032435.html
On Wed, Sep 13, 2017 at 7:01 AM, Atin Mukherjee <atin.mukherjee83 at gmail.com>
wrote:
> Additionally, the brick log file of that same brick would be required.
> Please check whether the brick process went down or crashed. Doing a volume start
2017 Sep 13
3
one brick one volume process dies?
On 13/09/17 06:21, Gaurav Yadav wrote:
> Please provide the output of gluster volume info, gluster
> volume status and gluster peer status.
>
> Apart from above info, please provide glusterd logs,
> cmd_history.log.
>
> Thanks
> Gaurav
>
> On Tue, Sep 12, 2017 at 2:22 PM, lejeczek
> <peljasz at yahoo.co.uk <mailto:peljasz at yahoo.co.uk>> wrote:
2017 Sep 04
0
heal info OK but statistics not working
Ravi/Karthick,
If one of the self-heal processes is down, will the statistics heal-count
command work?
On Mon, Sep 4, 2017 at 7:24 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> 1) one peer, out of four, got separated from the network, from the rest of
> the cluster.
> 2) while it was unavailable, that peer got detached with the
> "gluster peer detach" command
2017 Sep 13
2
one brick one volume process dies?
Additionally, the brick log file of that same brick would be required. Please
check whether the brick process went down or crashed. Doing a volume start force
should resolve the issue.
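A minimal sketch of the check-and-recover steps suggested above (volume name and brick log filename are placeholders):

  less /var/log/glusterfs/bricks/<brick-path>.log   # look for a crash or shutdown message
  gluster volume start <volname> force              # respawn only the missing brick process
  gluster volume status <volname>                   # confirm the brick is online again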
On Wed, 13 Sep 2017 at 16:28, Gaurav Yadav <gyadav at redhat.com> wrote:
> Please send me the logs as well i.e glusterd.logs and cmd_history.log.
>
>
> On Wed, Sep 13, 2017 at 1:45 PM, lejeczek
2017 Sep 04
2
heal info OK but statistics not working
1) one peer, out of four, got separated from the network,
from the rest of the cluster.
2) while it was unavailable, that peer got detached with the
"gluster peer detach" command, which succeeded,
so the cluster now comprises three peers
3) the self-heal daemon (for some reason) does not start (even with an
attempt to restart glusterd) on the peer which had probed that
fourth peer.
4) fourth
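A minimal sketch of how one might confirm whether the self-heal daemon is running on each peer and bring it back (volume name is a placeholder):

  gluster volume status <volname>        # shows a "Self-heal Daemon on <host>" line per peer
  gluster volume start <volname> force   # respawns a missing self-heal daemon or brick process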