Displaying 20 results from an estimated 600 matches similar to: "unable to remove brick, pleas help"
2001 Nov 28 | 1 | Routing question!!
I have the following system where I'm using Suse 7.1 on the servers:
[truncated ASCII diagram: an Internet uplink and three subnets of Win95 clients (172.22.2.0/24, 172.22.3.0/24, 172.22.4.0/24), each connected through a server (Srv1, Srv2, ...)]
2017 Dec 21 | 0 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik and Ben,
I'll try and reply to you inline.
On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com> wrote:
> Hey,
>
> Can you give us the volume info output for this volume?
# gluster volume info virt_images
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks:
2010 Dec 13 | 2 | Deploying libvirt with live migration
I have two physical servers: Virt1 and Virt2. I'm setting up live
migration with CentOS 5.5 between the two. I've done this by NFS
mounting /etc/libvirt and /var/lib/libvirt/images on both servers. This
is working well for me except for one thing.
I see the same list of VMs on each server (as expected), but each server (Virt1 and Virt2) is able to start the same VM at the same time.
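One common guard against this double-start (not mentioned in this excerpt, and it needs a newer libvirt than CentOS 5.5 shipped) is the lockd lock manager, which refuses to start a domain whose disks another host has already locked. A minimal sketch, assuming the NFS mount is shared by both hosts:
# On both Virt1 and Virt2, in /etc/libvirt/qemu.conf:
lock_manager = "lockd"
# In /etc/libvirt/qemu-lockd.conf, keep the lockspace on the shared NFS mount:
file_lockspace_dir = "/var/lib/libvirt/lockd/files"
# Restart libvirtd (and virtlockd) on both hosts afterwards.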
2017 Dec 22 | 2 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Henrik,
Thanks for providing the required outputs. See my replies inline.
On Thu, Dec 21, 2017 at 10:42 PM, Henrik Juul Pedersen <hjp at liab.dk> wrote:
> Hi Karthik and Ben,
>
> I'll try and reply to you inline.
>
> On 21 December 2017 at 07:18, Karthik Subrahmanya <ksubrahm at redhat.com>
> wrote:
> > Hey,
> >
> > Can you give us the
2017 Oct 05 | 0 | Inconsistent slave status output
Hello,
I have a gluster volume set up using geo-replication on two slaves; however, I'm seeing inconsistent status output on the slave nodes.
Here is the status shown by gluster volume geo-replication status on
each node.
[root at foo-gluster-srv3 ~]# gluster volume geo-replication status
MASTER NODE    MASTER VOL    MASTER BRICK    SLAVE USER    SLAVE
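For reference, the same CLI offers a more verbose view that can help disambiguate per-node state; a sketch assuming a master volume mastervol and slave volume slavevol (hypothetical names, with one of the nodes from this thread as the slave host):
# gluster volume geo-replication mastervol foo-gluster-srv3::slavevol status detail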
2017 Dec 20 | 2 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi,
I have the following volume:
Volume Name: virt_images
Type: Replicate
Volume ID: 9f3c8273-4d9d-4af2-a4e7-4cb4a51e3594
Status: Started
Snapshot Count: 2
Number of Bricks: 1 x (2 + 1) = 3
Transport-type: tcp
Bricks:
Brick1: virt3:/data/virt_images/brick
Brick2: virt2:/data/virt_images/brick
Brick3: printserver:/data/virt_images/brick (arbiter)
Options Reconfigured:
features.quota-deem-statfs:
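Before digging into xattrs by hand, the gluster CLI can itself report which files on this volume it considers split-brain; a minimal first step:
# gluster volume heal virt_images info split-brain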
2017 Dec 21 | 2 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey,
Can you give us the volume info output for this volume?
Why are you not able to get the xattrs from the arbiter brick? It works the same way as on the data bricks.
The changelog xattrs are named trusted.afr.virt_images-client-{1,2,3} in the getxattr outputs you have provided.
Did you do a remove-brick and add-brick at any time? Otherwise they would usually be trusted.afr.virt_images-client-{0,1,2}.
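For completeness, those changelog xattrs are read directly off each brick's backend filesystem with getfattr; a sketch with a hypothetical file path on one of the bricks listed earlier:
# getfattr -d -m . -e hex /data/virt_images/brick/some/image.qcow2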
2017 Dec 21 | 0 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Here is the process for resolving split brain on replica 2:
https://access.redhat.com/documentation/en-US/Red_Hat_Storage/2.1/html/Administration_Guide/Recovering_from_File_Split-brain.html
It should be pretty much the same for replica 3; you change the xattrs with something like:
# setfattr -n trusted.afr.vol-client-0 -v 0x000000000000000100000000 /gfs/brick-b/a
When I try to decide which
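After resetting the xattrs as the linked guide describes, the usual follow-up is to trigger a heal and watch it complete (volume name taken from this thread):
# gluster volume heal virt_images
# gluster volume heal virt_images info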
2020 Sep 28 | 1 | Centos8: Glusterd do not start correctly when I startup or reboot all server together
I have installed and configured GlusterFS in replica mode on two CentOS 8 servers, in this manner:
dnf install centos-release-gluster -y
dnf install glusterfs-server glusterfs glusterfs-fuse -y
systemctl enable --now glusterd
gluster peer probe virt1
gluster peer status
sh creavolume.sh gfsvol1 301G /gfsvol1 xfs
# NOTE: this is my shell script to create the filesystem on LVM
mkdir
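A frequent cause of this symptom (an assumption here, not confirmed in this excerpt) is glusterd starting before the network is fully up at boot; a systemd drop-in that orders it after network-online is a common workaround:
# mkdir -p /etc/systemd/system/glusterd.service.d
# cat > /etc/systemd/system/glusterd.service.d/wait-online.conf <<'EOF'
[Unit]
Wants=network-online.target
After=network-online.target
EOF
# systemctl daemon-reload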
2012 Oct 16 | 1 | Migrating a LV backed guest
I have a KVM VM that's backed by a logical volume on local disk.
I'd like to copy / move it to an identically configured host.
[root at virt2 cquinn]# virsh migrate --copy-storage-all --verbose --persistent node1.www qemu+ssh://10.102.1.11/system
error: Unable to read from monitor: Connection reset by peer
How should I effectively troubleshoot this? Am I misunderstanding how virsh
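One thing worth verifying first (my suggestion, not from the thread): with --copy-storage-all, libvirt expects the destination to already have a writable disk of the same size and path for each source disk, so the LV must be pre-created on the target. A sketch with a hypothetical volume group and size:
# On the destination (10.102.1.11):
lvcreate -L 20G -n node1.www vg_guests
# Then retry from the source:
virsh migrate --copy-storage-all --verbose --persistent node1.www qemu+ssh://10.102.1.11/system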
2001 Sep 20 | 1 | Complicated network problem
I am setting up a Linux system at my school, where I have designed the following system:
[truncated ASCII diagram: an Internet uplink and three subnets of Win95 clients (172.22.2.0/24, 172.22.3.0/24, 172.22.4.0/24), each behind a server (Srv1, ...)]
2018 Feb 05 | 0 | [ovirt-users] VM paused due unknown storage error
Adding gluster-users.
On Wed, Jan 31, 2018 at 3:55 PM, Misak Khachatryan <kmisak at gmail.com> wrote:
> Hi,
>
> here is the output from virt3 - problematic host:
>
> [root at virt3 ~]# gluster volume status
> Status of volume: data
> Gluster process                             TCP Port  RDMA Port  Online  Pid
>
2001 Nov 29 | 0 | [SLE] Routing question!!
Hi,
Can't really make sense of your diagram. How many clients have you got, and what are you trying to achieve? Also, what type of firewall are you after: a masquerading/NAT one (in which case you need routing turned on), or an application-level one (in which case you need it turned off)?
If you've got this many servers, I would suggest you install a masquerading/NAT firewall with
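For the masquerading/NAT case, the recipe looks roughly like this (shown with modern iptables syntax rather than the ipchains of that era; eth0 as the Internet-facing interface is an assumption):
# echo 1 > /proc/sys/net/ipv4/ip_forward
# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE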
2017 Dec 22 | 0 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hi Karthik,
Thanks for the info. Maybe the documentation should be updated to explain the different AFR versions; I know I was confused.
Also, when looking at the changelogs from my three bricks before fixing:
Brick 1:
trusted.afr.virt_images-client-1=0x000002280000000000000000
trusted.afr.virt_images-client-3=0x000000000000000000000000
Brick 2:
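For anyone decoding these values: each trusted.afr.* changelog xattr packs three 32-bit big-endian counters, for pending data, metadata, and entry operations respectively. The Brick 1 value above therefore splits as:
0x 00000228 00000000 00000000
   data     metadata entry
i.e. 0x228 = 552 data operations pending against client-1, with no metadata or entry changes pending.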
2017 Dec 22 | 0 | Gluster replicate 3 arbiter 1 in split brain. gluster cli seems unaware
Hey Henrik,
Good to know that the issue got resolved. I will try to answer some of the
questions you have.
- The time taken to heal the file depends on its size. That's why you were
seeing some delay in getting everything back to normal in the heal info
output.
- You did not hit a split-brain situation. In split-brain, all the bricks will be blaming the other bricks. But in your case the
2009 Aug 31 | 3 | default profile
Hi,
I installed a Samba PDC and a BDC. When I log in to an XP client as a new user, I sometimes get the initial profile settings from the netlogon share, but often from the local default. When I get the local default settings, they are not synchronized to the server at logout. Even when I do get the new profile from the server, the next time on the same client, with another new user, I again get the new profile from local. I
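The usual suspects for this symptom are the roaming-profile settings on the PDC; a minimal smb.conf sketch (the share path is a placeholder, not from the original mail):
[global]
   logon path = \\%L\profiles\%U

[profiles]
   path = /var/lib/samba/profiles
   read only = no
   browseable = no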
2005 Aug 15 | 16 | swig_up
Tracing down some things to add in validators and I've run across something that kinda bothers me...
In order to implement validators you have to override the clone method. The directors seem to be set up to specifically handle this situation. However, whenever C++ calls back to the object's methods, the swig_get_up function is returning false. It seems like swig_up
2013 Dec 06 | 1 | Authentification Dovecot + Samba4
Hello list,
I am struggling with setting up Dovecot 2.1.7 with Samba 4.1.2 on Debian Wheezy. Dovecot should authenticate via LDAP, but I cannot get it to work reliably. Sometimes auth works, sometimes not. Referrals are already activated in ldap.conf. LDAP authentication works fine with other clients (Apache Directory Studio, ...).
Has somebody got a similar setup running? I would love some hints
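For comparison, a minimal dovecot-ldap.conf.ext doing authenticated binds against Samba 4 AD (hostname and base DN are placeholders):
# /etc/dovecot/dovecot-ldap.conf.ext
uris = ldap://samba4.example.com
auth_bind = yes
base = dc=example,dc=com
user_filter = (&(objectClass=person)(sAMAccountName=%n))
pass_filter = (&(objectClass=person)(sAMAccountName=%n))
# A dn/dnpass pair may also be needed if the directory disallows anonymous search.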
2018 Oct 12 | 2 | Sieve scripts not replicated
Hello,
I use Dovecot replication, but the sieve scripts are not replicated. Mail replication is working fine.
Log when sieve script (with Rainloop webmail) is created:
Oct 12 12:57:57 srv1 dovecot: managesieve-login: Login:
user=<hativ at example.com>, method=PLAIN, rip=91.67.174.186,
lip=195.201.251.57, mpid=5360, TLS, session=<OXvK9QV4fOBbQ666>
Oct 12 12:57:57 srv1 dovecot:
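The usual explanation (based on the doveadm-sieve plugin) is that sieve scripts only travel with replication when the sieve plugin is also loaded for the doveadm protocol, e.g.:
# in /etc/dovecot/conf.d/90-sieve.conf or local.conf
protocol doveadm {
  mail_plugins = $mail_plugins sieve
}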
2016 May 20 | 3 | Suddenly Windows clients can't join Samba+ldap PDC anymore
Hi all,
Some years ago I configured a `Primary Domain Controller` through Samba and LDAP (slapd) on an Ubuntu machine (13.10) at 192.168.69.203, which should be accessible by the string/name `SRV1`. I must note I did not install winbind. I've never had any issue, and it looks like it's working fine, as about 10 Windows machines joined the PDC and Windows users can log in against the PDC on