search for: gfs1

Displaying 15 results from an estimated 15 matches for "gfs1".

2013 Jan 07
0
access a file on one node, split brain, while it's normal on another node
Hi, everyone: We have a glusterfs cluster, version 3.2.7. The volume info is as below: Volume Name: gfs1 Type: Distributed-Replicate Status: Started Number of Bricks: 94 x 3 = 282 Transport-type: tcp We native mount the volume on all nodes. When we access the file "/XMTEXT/gfs1_000/000/000/095" on one node, the error is split brain, while we can access the same file on another node. At t...
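On a 3.2.x replica like the one above, split-brain is usually confirmed by comparing the trusted.afr changelog xattrs of the file's copy on each replica brick. A minimal sketch, assuming a hypothetical brick path under /export/brick (the real brick paths are not shown in the post):

    # On each replica server, dump the AFR changelog xattrs of the brick copy
    getfattr -d -m trusted.afr -e hex /export/brick/XMTEXT/gfs1_000/000/000/095
    # Conflicting non-zero pending counters on both copies confirm split-brain;
    # removing the bad copy lets self-heal rebuild the file from the good replica
    # (on newer releases the matching .glusterfs hard link must be removed too).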
2011 Jun 28
0
[Gluster-devel] volume rebalance still broken
...e with 2 x 2 bricks), and I can read it here. Files added after the rebalance can be accessed without any problem. I tried stopping glusterfsd, removing all extended attributes on a brick and restarting it. glusterfs was able to reconstruct the attributes: trusted.gfid trusted.glusterfs.test trusted.afr.gfs1-client-0 trusted.afr.gfs1-client-1 trusted.afr.gfs1-client-2 trusted.afr.gfs1-client-3 trusted.glusterfs.dht If I remove the desired file from a brick, afr will restore it from the other brick, but it is still not accessible. Removing the extended attributes from the desired file causes it to disappear fr...
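For reference, a rebalance on a volume like this is normally started and checked from the CLI; a minimal sketch, assuming the volume name gfs1 taken from the xattr names above and a placeholder brick path:

    # Start a rebalance after a brick layout change, then watch its progress
    gluster volume rebalance gfs1 start
    gluster volume rebalance gfs1 status
    # Inspect the DHT layout xattr on a brick directory to see the hash ranges
    # (the brick directory path below is a placeholder)
    getfattr -n trusted.glusterfs.dht -e hex /export/brick1/somedir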
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2: gfs3:/gfs/brick1/gv0 Brick3: gfs1:/gfs/arbite...
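For context, a geo-replication session between a master volume and a slave volume is normally created with the gluster CLI; a minimal root-based sketch, assuming the master volume gfsvol from the post and a hypothetical slave host and volume (slavehost, gfsvol_slave):

    # Generate the pem keys, then create, start and check the session
    gluster system:: execute gsec_create
    gluster volume geo-replication gfsvol slavehost::gfsvol_slave create push-pem
    gluster volume geo-replication gfsvol slavehost::gfsvol_slave start
    gluster volume geo-replication gfsvol slavehost::gfsvol_slave status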
2011 May 05
5
Dovecot imaptest on RHEL4/GFS1, RHEL6/GFS2, NFS and local storage results
...t for use GFS2 on rhel6?? We have configured the director to keep each user's mailbox persistent on one node; we would appreciate any help from you. We are interested in using GFS2 or NFS; we believe the problem is the locks. How can we improve this?? Best regards, Aliet. The results: rhel 4.8 x86_64/GFS1, two nodes, shared FC lun on a SAN Totals: Logi List Stat Sele Fetc Fet2 Stor Dele Expu Appe Logo 100% 50% 50% 100% 100% 100% 50% 100% 100% 100% 100% 30% 5% 1- 2608 1321 1311 2608 2508 3545 547 2001 2493 2702 5282 2- 2810 1440 1430 2810 2688 3...
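The Totals table above is the output format of Dovecot's imaptest benchmark; a hedged sketch of an invocation that produces comparable numbers (host, port, credentials, client count and duration are placeholders, not taken from the post):

    # Drive one mail backend with 30 concurrent IMAP clients for 30 minutes
    imaptest host=10.0.0.10 port=143 user=test%d pass=secret clients=30 secs=1800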
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote: > I am trying to set up geo-replication between two gluster volumes > > I have set up two replica 2 arbiter 1 volumes with 9 bricks > > [root at gfs1 ~]# gluster volume info > Volume Name: gfsvol > Type: Distributed-Replicate > Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 > Status: Started > Snapshot Count: 0 > Number of Bricks: 3 x (2 + 1) = 9 > Transport-type: tcp > Bricks: > Brick1: gfs2:/gfs/brick1/gv0 > Bric...
2007 Aug 15
0
[git patch] fstype support + minor stuff
...t gfs2_inum { + __be64 no_formal_ino; + __be64 no_addr; +}; + +/* + * Generic metadata head structure + * Every inplace buffer logged in the journal must start with this. + */ +struct gfs2_meta_header { + uint32_t mh_magic; + uint32_t mh_type; + uint64_t __pad0; /* Was generation number in gfs1 */ + uint32_t mh_format; + uint32_t __pad1; /* Was incarnation number in gfs1 */ +}; + +/* Requirement: GFS2_LOCKNAME_LEN % 8 == 0 + * Includes: the fencing zero at the end */ +#define GFS2_LOCKNAME_LEN 64 + +/* + * super-block structure + */ +struct gfs2_sb { + struct gfs2_meta_...
2017 Sep 22
0
fts_read failed
Hi, I have a simple installation, using mostly defaults, of two mirrored servers. One brick, one volume. GlusterFS version is 3.12.1 (server and client). All hosts involved are Debian 9.1. On another host I have mounted two different directories from the cluster using /etc/fstab: gfs1,gfs2:/vol1/sites-available/ws0 /etc/nginx/sites-available glusterfs defaults,_netdev 0 0 and gfs1,gfs2:/vol1/webroots/ws0 /var/www glusterfs defaults,_netdev 0 0 I cd to /var/www/example.com/public_html/www.example.com/ and run: chown -R auser:agroup . and get: chown: fts_read failed: No suc...
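One way to narrow this down is to reproduce the mount by hand with equivalent options and re-run the recursive chown while watching the client log; a sketch that assumes the fstab entries above are otherwise correct (the backup-volfile-servers option and the log file name are assumptions, not taken from the post):

    # Mount the vol1 sub-directory manually, using gfs2 as a fallback volfile server
    mount -t glusterfs -o backup-volfile-servers=gfs2 gfs1:/vol1/webroots/ws0 /var/www
    # Re-run the failing operation and watch the FUSE client log
    chown -R auser:agroup /var/www
    tail -f /var/log/glusterfs/var-www.log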
2010 Jul 18
3
Proxy IMAP/POP/ManageSieve/SMTP in a large cluster enviroment
...ng Dovecot - Postfix (Director or Proxy). 6- Mail backend. - (n servers) RHEL5/6 using Dovecot. Now for functional role 6 "Mail Backend" we have some dilemmas. - Recommended scalable filesystem to use for such a scenario (clustered or not). - GFS2?? We have very bad experiences with GFS1 and maildir, and GFS2 doesn't seem to improve on this either. Using techniques for session affinity with a backend server seems to help with the locking problems of GFS and the cache. With GFS, many IMAP/SMTP servers can write to or read from user mailboxes in parallel; if GFS can perform well we prefe...
2019 Dec 20
1
GFS performance under heavy traffic
...voisonics.com> wrote: >> >> >> Hi Strahil, >> >> The chart attached to my original email is taken from the GFS server. >> >> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this: >> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0 >> >> Should we do something different to access all bricks simultaneously? >> >> Thanks for your help! >> >> >> On Fri, 20 Dec 2019 at...
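To see how much of the traffic each brick of gvol0 actually serves from that single client mount, the volume profiler reports per-brick FOP counts and latencies; a minimal sketch, assuming the volume name gvol0 from the quoted fstab line:

    # Enable profiling, generate some load, then read the per-brick stats and disable it
    gluster volume profile gvol0 start
    gluster volume profile gvol0 info
    gluster volume profile gvol0 stop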
2024 Jun 25
0
After restoring the failed host and synchronizing the data, it still reports unsynchronized entries
gluster volume info Volume Name: data Type: Replicate Number of Bricks: 1 x (2 + 1) = 3 Bricks: Brick1: node-gfs1:/gluster_bricks/data/data1 Brick2: node-gfs2:/gluster_bricks/data/data1 Brick3: node-gfs3:/gluster_bricks/data/data1 (arbiter) gluster volume heal data info Number of entries: 39 /var/log/glusterfs/glustershd.log The following appears in the log: client-rpc-fops_v2.c:785:client4_0_fsync_cbk] 0-data-client-0: remote...
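A hedged sketch of the usual follow-up on a replica 2+1 (arbiter) volume that keeps reporting pending heal entries; the commands exist in the gluster CLI, though whether they clear the fsync errors quoted from glustershd.log is not implied:

    # List pending and split-brain entries, trigger a full self-heal,
    # and confirm all three bricks (including the arbiter) are up and connected
    gluster volume heal data info
    gluster volume heal data info split-brain
    gluster volume heal data full
    gluster volume status data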
1997 Mar 03
0
SECURITY: Important fixes for IMAP
...s/4.1/sparc/imap-4.1.BETA-3.sparc.rpm Erik -----BEGIN PGP SIGNATURE----- Version: 2.6.2 iQCVAwUBMxs/AaUg6PHLopv5AQG/ywQAilkPes+iLTI1r7HXRVeZawC3kjRbZAyx 3FcqswteuL482UeZadZoVo9cu0mnwhsjRAMkqs1hF+PgHGmUniR4JymdtIYTPXHa urZww4fc0A5AIeLwWEPStARipXk3jKDS3VPgKRd8EtQDaj8qAknGIfDBz/ZfFwV2 Aj4cF+TTKJY= =GfS1 -----END PGP SIGNATURE-----
2005 Dec 23
1
GFS2, OCFS2, and FUSE cause xenU to oops.
I really need to share a filesystem and I'd rather not have to export it from one domU to another so I tried mounting it with GFS2 and then OCFS2. Both caused the xenU kernel to oops just as the mount was attempted. I assumed that a FUSE-based solution would be a little less problematic (if only because it doesn't require kernel patches) but it also caused an oops right when
2019 Dec 24
1
GFS performance under heavy traffic
...ahil, >>>>> >>>>> The chart attached to my original email is taken from the GFS server. >>>>> >>>>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this: >>>>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0 >>>>> >>>>> Should we do something different to access all bricks simultaneously? >>>>> >>>>> Thanks for your help! &g...
2019 Dec 28
1
GFS performance under heavy traffic
...>> The chart attached to my original email is taken from the GFS server. >>> >>>>> >>> >>>>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this: >>> >>>>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0 >>> >>>>> >>> >>>>> Should we do something different to access all bricks simultaneously? >>> >>>>> >&g...
2019 Dec 27
0
GFS performance under heavy traffic
...>>>>> The chart attached to my original email is taken from the GFS server. >> >>>>> >> >>>>> I'm not sure what you mean by accessing all bricks simultaneously. We've mounted it from the client like this: >> >>>>> gfs1:/gvol0 /mnt/glusterfs/ glusterfs defaults,direct-io-mode=disable,_netdev,backupvolfile-server=gfs2,fetch-attempts=10 0 0 >> >>>>> >> >>>>> Should we do something different to access all bricks simultaneously? >> >>>>> >> >>&...