similar to: GlusterFS mounted home directories

Displaying 20 results from an estimated 10000 matches similar to: "GlusterFS mounted home directories"

2016 Jul 24
0
Re: KMail
On Sun, 24 Jul 2016 14:44, Timothy Murphy wrote: > Any hope of KMail (and Kontact) coming to CentOS-7? > What exactly is the problem? > KMail seems to work on other Linux OS's. Work? The keyword here is "seems". [rant] Since KMail1 on KDE3 there has not been a fully working version that does not bork up your mail semi-regularly. And do not even get me started on that
2016 Jul 24
4
KMail
Any hope of KMail (and Kontact) coming to CentOS-7? What exactly is the problem? KMail seems to work on other Linux OS's. -- Timothy Murphy gayleard /at/ eircom.net School of Mathematics, Trinity College, Dublin
2017 Jul 20
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/20/2017 03:42 PM, yayo (j) wrote: > > 2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > > Could you check if the self-heal daemon on all nodes is connected > to the 3 bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using
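For reference, one way to check the self-heal daemon's state, sketched with the volume name engine taken from the subject and the default log path assumed (neither appears verbatim in the quoted message):

    # show whether the self-heal daemon is running on each node
    gluster volume status engine shd
    # scan the shd log for connect/disconnect messages to the bricks
    grep -iE 'connect|disconnect' /var/log/glusterfs/glustershd.log | tail -20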
2009 Jun 24
2
Limit of Glusterfs help
Hi: Is there a limit on the number of servers that can be used as storage in Gluster? 2009-06-24 eagleeyes (the rest of the message is a mis-encoded quote of Gluster-users Digest, Vol 14, Issue 34 and its mailing-list subscription boilerplate)
2017 Jul 20
3
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 11:34 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > Could you check if the self-heal daemon on all nodes is connected to the 3 > bricks? You will need to check the glustershd.log for that. > If it is not connected, try restarting the shd using `gluster volume start > engine force`, then launch the heal command like you did earlier and see if > heals
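The sequence suggested in the thread, sketched as commands (the volume name engine comes from the subject; exact output varies by gluster version):

    # force-start restarts any daemons that are down, including the shd
    gluster volume start engine force
    # trigger a heal, then list entries still pending
    gluster volume heal engine
    gluster volume heal engine info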
2008 Nov 20
1
My ignorance and Fuse (or glusterfs)
I have a very simple test setup of 2 servers, each working as a glusterfs-server and glusterfs-client to the other in an AFR capacity. The gluster-c and gluster-s both start up with no errors and are handshaking properly. On one server, I get the expected behaviour: I touch a file in the export dir and it magically appears in the other's mount point. On the other server, however, the file
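A minimal sketch of the replication check being described, with hypothetical mount points (/mnt/gluster on each box):

    # on server A: create a file through the glusterfs client mount
    touch /mnt/gluster/testfile
    # on server B: the file should be visible through its own mount
    ls -l /mnt/gluster/testfile

Note that files should be created through the client mount point rather than the backend export directory; writes made directly into the export bypass glusterfs and are not replicated.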
2009 May 11
1
Problem of afr in glusterfs 2.0.0rc1
Hello: I have hit this problem twice when copying files into the GFS space. I have five clients and two servers. When I copy files into /data, which is the GFS space, on client A, the problem appears: in the same path, client A can see all of the files, but B, C, or D cannot see them all, as if some files were missing. But when I mount again, the files appear
2014 May 27
0
CEBA-2014:0539 CentOS 6 akonadi FASTTRACK Update
CentOS Errata and Bugfix Advisory 2014:0539
Upstream details at: https://rhn.redhat.com/errata/RHBA-2014-0539.html
The following updated files have been uploaded and are currently syncing to the mirrors: ( sha256sum Filename )
i386:
d6020e4949ddbfa76f2513216a6acc4d8105923a77343ce635b2bf3795bd7f3b akonadi-1.2.1-3.el6.i686.rpm
0ba9fd6d811b1e30d9c7bcc5885c263699e996008b4dc2185e8480f5bff74c71
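One of the listed checksums can be verified with sha256sum after downloading the package, assuming the RPM sits in the current directory:

    echo "d6020e4949ddbfa76f2513216a6acc4d8105923a77343ce635b2bf3795bd7f3b  akonadi-1.2.1-3.el6.i686.rpm" | sha256sum -c -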
2017 Jul 21
1
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > But it does say something. All these gfids of completed heals in the log > below are for the ones that you have given the getfattr output of. So > what is likely happening is there is an intermittent connection problem > between your mount and the brick process, leading to pending heals again >
2017 Jun 14
0
No NFS connection due to GlusterFS CPU load
When I run a load test with the FIO tool, executing the following job from a client, two CPU cores are driven to high load, up to 100%. While that happens, another client that has the volume NFS-mounted cannot connect over NFS, and the df command hangs without returning. The log below is output continuously. I believe that if the CPU utilization were distributed, the load problem would be eliminated.
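The job file itself is not included in the snippet; as a stand-in, a hypothetical fio invocation of roughly this shape generates a multi-job random-write load against a mounted volume (every parameter here is illustrative):

    fio --name=loadtest --directory=/mnt/glustervol \
        --ioengine=libaio --direct=1 --rw=randwrite --bs=4k \
        --size=1g --numjobs=4 --runtime=60 --time_based --group_reporting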
2017 Jul 21
0
[ovirt-users] ovirt 4.1 hosted engine hyper-converged on glusterfs 3.8.10: "engine" storage domain always complains about "unsynced" elements
On 07/21/2017 02:55 PM, yayo (j) wrote: > 2017-07-20 14:48 GMT+02:00 Ravishankar N <ravishankar at redhat.com>: > > > But it does say something. All these gfids of completed heals in > the log below are for the ones that you have given the > getfattr output of. So what is likely happening is there is an >
2011 Sep 20
1
About using Dovecot indexes with Thunderbird/kmail
I have Dovecot running well on my Mandriva mail hub, handing out IMAP to the household LAN. This is 1.2.15. Eventually I'll upgrade the OS and get 2.x, but this is working fine. So this question is really about the mail readers I use and how they make use of Dovecot. I have Thunderbird on my laptop and KMail2 on my desktop. Dovecot indexes. GOOD! The trouble is that the mail readers
2004 Dec 07
3
Problem with dovecot on home LAN
At present I get email directly on my laptop in /var/spool/mail/* through uucp. I'd like to get the email in the same directory on my desktop (alfred), and then run a mail server on the desktop and collect the email on my laptop (william) (or on other computers on my two little home LANs, ethernet and WiFi). I was advised that dovecot was a good imap server for this purpose (I tried
2012 Mar 12
0
Data consistency with Gluster 3.2.5
I have set up a replicated, four-node gluster config for a web farm. The idea is that each web node is its own Gluster server, and will have its own copy of the entire web root locally. It then serves the clustered volume to itself via a local mount. We're running it over dual bonded GigE NICs. The problem I am having is that when we switch live traffic to nodes in the cluster, they almost immediately get
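The per-node self-mount described above might look like this in /etc/fstab (the volume name webroot and the mount path are hypothetical):

    # each web node mounts the replicated volume from itself
    localhost:/webroot  /var/www  glusterfs  defaults,_netdev  0 0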
2014 May 28
0
CentOS-announce Digest, Vol 111, Issue 13
Send CentOS-announce mailing list submissions to centos-announce at centos.org To subscribe or unsubscribe via the World Wide Web, visit http://lists.centos.org/mailman/listinfo/centos-announce or, via email, send a message with subject or body 'help' to centos-announce-request at centos.org You can reach the person managing the list at centos-announce-owner at centos.org When
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert at host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client   1.3.8-0pre2   GlusterFS fuse client
ii  glusterfs-server   1.3.8-0pre2   GlusterFS fuse server
ii  libglusterfs0      1.3.8-0pre2   GlusterFS libraries and translator modules
I have 2 hosts set up to use AFR with
2013 Dec 03
3
Self Heal Issue GlusterFS 3.3.1
Hi, I'm running GlusterFS 3.3.1 on CentOS 6.4.

gluster volume status
Status of volume: glustervol
Gluster process                                         Port    Online  Pid
------------------------------------------------------------------------------
Brick KWTOCUATGS001:/mnt/cloudbrick                     24009   Y       20031
Brick KWTOCUATGS002:/mnt/cloudbrick
2018 Jan 26
0
parallel-readdir is not recognized in GlusterFS 3.12.4
Can you please test whether parallel-readdir or readdir-ahead gives the disconnects, so we know which one to disable? parallel-readdir's magic is covered in this PDF from last year: https://events.static.linuxfound.org/sites/events/files/slides/Gluster_DirPerf_Vault2017_0.pdf -v On Thu, Jan 25, 2018 at 8:20 AM, Alan Orth <alan.orth at gmail.com> wrote: > By the way, on a slightly related note, I'm pretty
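A sketch of toggling the two options one at a time to isolate the culprit (the volume name is a placeholder):

    # disable one option, retest, then the other
    gluster volume set <volname> performance.parallel-readdir off
    gluster volume set <volname> performance.readdir-ahead off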
2008 Dec 16
4
GlusterFS process takes a lot of memory
Hello!!! I am trying GlusterFS + OpenVZ, but the glusterfs process's memory usage increases by ~2MB every minute. How can I fix this? P.S. Sorry about my bad English. Cluster information: 1) 3 nodes (server-client), conf:
##############
# local data #
##############
volume vz
  type storage/posix
  option directory /home/local
end-volume

volume vz-locks
  type features/posix-locks
  subvolumes vz
end-volume
2018 Jan 26
1
parallel-readdir is not recognized in GlusterFS 3.12.4
Dear Vlad, I'm sorry, I don't want to test this again on my system just yet! It caused too much instability for my users and I don't have enough resources for a development environment. The only other variable that changed before the crashes was the group metadata-cache[0], which I enabled the same day as the parallel-readdir and readdir-ahead options: $ gluster volume set homes
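If the options need to be backed out, a hedged sketch of the revert, using the volume name homes from the snippet (gluster volume reset returns an option to its default):

    gluster volume reset homes performance.parallel-readdir
    gluster volume reset homes performance.readdir-ahead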