Displaying 20 results from an estimated 300 matches similar to: "After restoring the failed host and synchronizing the data, it prompts that there are unsynchronized items"
2017 Oct 06 (0 replies): Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo-replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root@gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
2017 Sep 29 (1 reply): Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
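Once both volumes exist, a geo-replication session between them is created and started with the gluster CLI. A minimal sketch, assuming a slave host `geo-host` and slave volume `gfsvol_slave` (both hypothetical names):

```shell
# Generate and distribute the pem keys for passwordless sync (run on a master node)
gluster system:: execute gsec_create

# Create, start, and check the session for master volume gfsvol
gluster volume geo-replication gfsvol geo-host::gfsvol_slave create push-pem
gluster volume geo-replication gfsvol geo-host::gfsvol_slave start
gluster volume geo-replication gfsvol geo-host::gfsvol_slave status
```

If the session goes Faulty, the gsyncd logs on the master side (under /var/log/glusterfs/geo-replication/) are usually the first place to look.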
2017 Jun 28 (0 replies): Gluster volume not mounted
The mount log file of the volume would help in debugging the actual cause.
On Tue, Jun 27, 2017 at 6:33 PM, Joel Diaz <mrjoeldiaz@gmail.com> wrote:
> Good morning Gluster users,
>
> I'm very new to the Gluster file system. My apologies if this is not the
> correct way to seek assistance. However, I would appreciate some insight
> into understanding the issue I have.
2017 Jun 27 (2 replies): Gluster volume not mounted
Good morning Gluster users,
I'm very new to the Gluster file system. My apologies if this is not the
correct way to seek assistance. However, I would appreciate some insight
into understanding the issue I have.
I have three nodes running two volumes, engine and data. The third node is
the arbiter on both volumes. Both volumes were operating fine, but one of
the volumes, data, no longer
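When a volume stops mounting, the client-side mount log (named after the mount point) usually pinpoints the cause. A debugging sketch, assuming the failing volume is `data` and the paths are hypothetical:

```shell
# Is the volume started and are all bricks online?
gluster volume status data

# The client mount log is named after the mount point
less /var/log/glusterfs/mnt-data.log

# Retry the mount by hand to reproduce the error
mount -t glusterfs node1:/data /mnt/data
```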
2023 Sep 29 (0 replies): gluster volume status shows -> Online "N" after node reboot.
Hi list,
I am using a replica volume (3 nodes) of Gluster in an oVirt environment, and
after putting one node into maintenance mode and rebooting it, the "Online"
flag in gluster volume status does not go back to "Y".
[root@node1 glusterfs]# gluster volume status
Status of volume: my_volume
Gluster process TCP Port RDMA Port Online Pid
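When a brick process fails to come back after a reboot, `start ... force` respawns the missing brick daemons without disturbing the ones already running. A sketch, using the volume name from the output above:

```shell
# Respawn any brick process that is not running (safe on an already-started volume)
gluster volume start my_volume force

# The Online column should now show "Y" for the rebooted node's brick
gluster volume status my_volume
```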
2013 Jan 07 (0 replies): access a file on one node, split brain, while it's normal on another node
Hi, everyone:
We have a GlusterFS cluster, version 3.2.7. The volume info is as below:
Volume Name: gfs1
Type: Distributed-Replicate
Status: Started
Number of Bricks: 94 x 3 = 282
Transport-type: tcp
We natively mount the volume on all nodes. When we access the file
"/XMTEXT/gfs1_000/000/000/095" on one node, the error is split brain.
While we can access the same file on
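GlusterFS 3.2.7 predates the self-heal CLI, but on 3.3 and later, split-brained entries can be listed and resolved from the command line. A sketch, with the brick path hypothetical:

```shell
# List files the replicas disagree on
gluster volume heal gfs1 info split-brain

# Resolve one file by declaring a source replica (GlusterFS >= 3.7)
gluster volume heal gfs1 split-brain source-brick node1:/brick/gfs1 /XMTEXT/gfs1_000/000/000/095
```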
2012 Nov 06 (2 replies): I am very confused about Stripe: in what way does it hold space?
I have 4 Dell 2970 servers; three servers have 146G x 6 hard disks, one has 72G x 6.
Each server's mount info is:
/dev/sda4 on /exp1 type xfs (rw)
/dev/sdb1 on /exp2 type xfs (rw)
/dev/sdc1 on /exp3 type xfs (rw)
/dev/sdd1 on /exp4 type xfs (rw)
/dev/sde1 on /exp5 type xfs (rw)
/dev/sdf1 on /exp6 type xfs (rw)
I create a gluster volume with 4 stripes:
gluster volume create test-volume3 stripe 4
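As for how a striped volume "holds space": each file is chopped into chunks placed round-robin across the stripe's bricks, so a single file can only grow until the smallest brick in its stripe set fills up. A back-of-envelope check, assuming one of the 72G partitions lands in each stripe set:

```shell
# A file striped across 4 bricks hits its ceiling when the smallest brick fills:
# ceiling ~= stripe count x smallest brick size
STRIPES=4
SMALLEST_GB=72
echo "$((STRIPES * SMALLEST_GB)) GB"   # prints "288 GB"
```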
2007 Apr 27 (2 replies): Unsynchronized object state detection
Is there a way to specify on a per-object basis that Puppet should merely
report that an object needs to be updated without actually performing the
update?
This would make it possible to detect changes to critical objects (e.g. config
files) that Puppet shouldn't try to fix automatically.
--
Jos Backus
jos@catnook.com
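Puppet supports exactly this on a per-resource basis through the `noop` metaparameter: the run reports the drift but does not correct it. A sketch, with the file path and source hypothetical:

```puppet
file { '/etc/critical.conf':
  ensure => file,
  source => 'puppet:///modules/app/critical.conf',
  noop   => true,   # report a needed change without applying it
}
```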
2011 Jun 28 (0 replies): [Gluster-devel] volume rebalance still broken
Replying and adding gluster-users. That seems more appropriate?
________________________________________
From: gluster-devel-bounces+jwalker=gluster.com@nongnu.org [gluster-devel-bounces+jwalker=gluster.com@nongnu.org] on behalf of Emmanuel Dreyfus [manu@netbsd.org]
Sent: Tuesday, June 28, 2011 6:51 AM
To: gluster-devel@nongnu.org
Subject: [Gluster-devel] volume rebalance still broken
2007 Aug 15 (0 replies): [git patch] fstype support + minor stuff
hello hpa,
rebased my branch, please pull latest
git pull git://brane.itp.tuwien.ac.at/~mattems/klibc.git maks
for the following shortlog
maximilian attems (6):
fstype: add squashfs v3 support
reiser4_fs.h: add attribute packed to reiser4_master_sb
fstype: add ext4 support
.gitignore: add subdir specific entries
usr/klibc/Kbuild: beautify klibc build
fstype:
2017 Sep 22 (0 replies): fts_read failed
Hi,
I have a simple installation, using mostly defaults, of two mirrored
servers. One brick, one volume.
GlusterFS version is 3.12.1 (server and client). All hosts involved are
Debian 9.1.
On another host I have mounted two different directories from the
cluster using /etc/fstab:
gfs1,gfs2:/vol1/sites-available/ws0 /etc/nginx/sites-available glusterfs
defaults,_netdev 0 0
and
2014 Apr 28 (2 replies): volume start causes glusterd to core dump in 3.5.0
I just built a pair of AWS Red Hat 6.5 instances to create a gluster replicated pair file system. I can install everything, peer probe, and create the volume, but as soon as I try to start the volume, glusterd dumps core.
The tail of the log after the crash:
+------------------------------------------------------------------------------+
[2014-04-28 21:49:18.102981] I
2011 May 05 (5 replies): Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
We have done some benchmarking tests using Dovecot 2.0.12 to find the best
shared filesystem for hosting many users. Here I share the results with you;
notice the bad performance of all the shared filesystems against local
storage.
Is there any specific optimization/tuning in Dovecot for using GFS2 on
RHEL6? We have configured the director to make the user mailbox persistent
on a node; we will
1997 Mar 03 (0 replies): SECURITY: Important fixes for IMAP
-----BEGIN PGP SIGNED MESSAGE-----
The IMAP servers included with all versions of Red Hat Linux have a buffer
overrun which allows *remote* users to gain root access on systems which run
them. A fix for Red Hat 4.1 is now available (details on it at the end of this
note).
Users of Red Hat 4.0 should apply the Red Hat 4.1 fix. Users of previous
releases of Red Hat Linux are strongly encouraged to
2019 Dec 20 (1 reply): GFS performance under heavy traffic
Hi David,
Also consider using the mount option to specify backup servers via 'backupvolfile-server=server2:server3' (you can define more, but I don't think replica volumes greater than 3 are useful, except maybe in some special cases).
In such way, when the primary is lost, your client can reach a backup one without disruption.
P.S.: Client may 'hang' - if the primary server got
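The backup-server option can be passed either on the command line or in fstab. A sketch, with all hostnames and paths hypothetical:

```shell
# Manual mount with two fallback volfile servers
mount -t glusterfs -o backupvolfile-server=server2:server3 server1:/gvol /mnt/gvol

# The equivalent /etc/fstab entry:
# server1:/gvol  /mnt/gvol  glusterfs  defaults,_netdev,backupvolfile-server=server2:server3  0 0
```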
2010 Jul 18 (3 replies): Proxy IMAP/POP/ManageSieve/SMTP in a large cluster environment
Hi to all on the list. We are trying to set up a test lab for a large-scale
mail system with these requirements:
- Scale to maybe 1 million users (only for testing).
- Server side filters.
- User quotas.
- High concurrency.
- High performance and High Availability.
We plan to test this using RHEL5 and maybe RHEL6.
As storage we are going to use an HP EVA 8400 FC (8 GB/s)
We defined this
2020 Feb 27 (0 replies): CentOS 7 : SELinux trouble with Fail2ban
On Wed, 26 Feb 2020, Nicolas Kovacs wrote:
>Some time ago I had SELinux problems with Fail2ban.
>Unfortunately when I install [...] from EPEL, I still get the same error.
EPEL packages are often crap quality (as packages), merely blind imports
of the upstream package without any adjustments needed for the
RHEL/CentOS environment (sometimes not even for Fedora), which is often
somewhat
2019 Dec 24 (1 reply): GFS performance under heavy traffic
Hi David,
On Dec 24, 2019 02:47, David Cunningham <dcunningham@voisonics.com> wrote:
>
> Hello,
>
> In testing we found that actually the GFS client having access to all 3 nodes made no difference to performance. Perhaps that's because the 3rd node that wasn't accessible from the client before was the arbiter node?
It makes sense, as no data is being generated towards
2012 Mar 21 (2 replies): Echo cancellation with different sound card for speaker and microphone
I'm developing an application that has a video conferencing component.
For that I need echo cancellation, and am looking around for
algorithms/implementations; the one in Speex is one alternative. In the
documentation for Speex I find the following sentence, however:
"Using a different soundcard to do the capture and playback will *not*
work, regardless of what you may
2005 Mar 02 (4 replies): timing/clock problem
Hi all,
We have been fighting with the telco for an entire week.
Today they came here with a LITE3000 to analyze what is going on.
When I configure zaptel with no external clock, the E1 gets aligned/synchronized
at a bit rate of 2048000 bps, both for me and the telco:
span=4,0,0,ccs,hdb3,crc4
But when I configure span 4 to take its clock source from the telco, they become
unsynchronized. The telco's bit rate stays at
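For reference, zaptel's span line is `span=<num>,<timing>,<LBO>,<framing>,<coding>[,crc4]`, where the second field selects the clock source: 0 free-runs on the local oscillator, while 1 (or higher) takes clock from the far end with that priority. A sketch of taking clock from the telco on span 4, keeping the framing/coding above:

```
# timing=1: use the telco's received clock as the primary sync source
span=4,1,0,ccs,hdb3,crc4
```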