Displaying 20 results from an estimated 8000 matches similar to: "Samba with Global File System"
2002 Dec 19
0
Generic smb.conf file
Hi,
Does anybody know where I can get an untouched
smb.conf file? When I used samba-swat, it rewrote
smb.conf and deleted the comments that were in the
original file that came with Samba.
Minh
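A rough sketch of how the original, fully commented smb.conf can usually be
recovered, assuming an RPM-based install; the package name and paths below
are assumptions and vary by distribution and Samba version:

  # Extract the pristine config straight from the package, without reinstalling
  rpm2cpio samba-common-*.rpm | cpio -idv ./etc/samba/smb.conf
  # The Samba source tarball also carries a commented example
  ls examples/smb.conf.default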
2006 Apr 02
0
CESA:2006-0201-1 CentOS 4 i386 Cluster Suite / Global File System update (csgfs repo only)
CentOS Errata and Security Advisory 2006-0201-1
CentOS 4 i386 Cluster Suite / Global File System Update
The CESA is an update to the csgfs repository only and not the main
CentOS-4 repository.
This CESA is issued to upgrade Cluster Suite 4 and Global File System
6.1 to use the 2.6.9-22.0.2.EL CentOS-4 kernel. It updates all
CS/GFS packages to the latest versions.
The following files are
2006 Apr 02
0
CESA:2006-0201-1 CentOS 4 x86_64 Cluster Suite / Global File System update (csgfs repo only)
CentOS Errata and Security Advisory 2006-0201-1
CentOS 4 x86_64 Cluster Suite / Global File System Update
The CESA is an update to the csgfs repository only and not the main
CentOS-4 repository.
This CESA is issued to upgrade Cluster Suite 4 and Global File System
6.1 to use the 2.6.9-22.0.2.EL CentOS-4 kernel. It updates all
CS/GFS packages to the latest versions.
The following files are
2004 Dec 01
2
1.0-test CVS HEAD index problems
Running current CVS HEAD I get:
dovecot: Dec 01 13:27:38 Info: Dovecot v1.0-test52 starting up
dovecot: Dec 01 13:33:34 Info: imap-login: Login: jtl [69.162.177.245]
dovecot: Dec 01 13:41:07 Error: IMAP(jtl): UIDVALIDITY changed
(1100294646 -> 1101926435) in mbox file /var/spool/mail/j/t/jtl
dovecot: Dec 01 13:41:07 Error: IMAP(jtl): UIDVALIDITY changed
(1100294646 -> 1101926435) in
2005 Aug 01
0
Announcing CentOS 3 i386 Cluster Suite (CS) and Global File System (GFS)
Announcing CentOS 3 i386 Cluster Suite (CS) and Global File System (GFS)
CentOS csgfs is now available for CentOS-3 i386. This is built from the
sources found here:
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/3/en/RHCS/i386/SRPMS/
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/3/en/RHGFS/i386/SRPMS/
ftp://ftp.redhat.com/pub/redhat/linux/updates/enterprise/3AS/en/RHCS/SRPMS/
2005 May 28
5
CentOS and SL, together?
From: Lamar Owen <lowen at pari.edu>
> Referencing SL3 and CentOS 3 (as I haven't run SL4 as yet) there were some
> scientific applications and some Java stuff, eclipse for one,
You do understand the redistribution issues with Java, correct?
It's a Sun problem (a typical thorn for Red Hat in general), not a Red Hat one.
> part of cluster suite for another, included.
>
2006 Apr 01
0
Cluster Suite 4 and Global File System 6.1 for CentOS-4
The CentOS Development Team is pleased to announce the release of the
Cluster Suite 4 (CS) and Global File System 6.1 (GFS) for CentOS-4 for
the i386 and x86_64 architectures.
The files are located at:
http://mirror.centos.org/centos/4/csgfs/
Documentation for CS / GFS is with the other CentOS-4 docs at:
http://mirror.centos.org/centos/4/docs/
You can add the csgfs repository to your yum
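A hedged sketch of what such a yum repository definition might look like; the
baseurl comes from the mirror URL in the announcement, while the filename,
section name, arch layout and gpg settings are assumptions:

  # /etc/yum.repos.d/csgfs.repo  (illustrative only)
  [csgfs]
  name=CentOS-4 - csgfs
  baseurl=http://mirror.centos.org/centos/4/csgfs/$basearch/
  enabled=1
  gpgcheck=1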
2017 Oct 06
0
Gluster geo replication volume is faulty
On 09/29/2017 09:30 PM, rick sanchez wrote:
> I am trying to set up geo-replication between two gluster volumes
>
> I have set up two replica 2 arbiter 1 volumes with 9 bricks
>
> [root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number
2001 Aug 15
4
WSJ article
Found this on usenet:
August 13, 2001
E-Business
Inventors Release Free Alternative To MP3 Music, but Cost Is High
By MEI FONG
Staff Reporter of THE WALL STREET JOURNAL
SOMERVILLE, Mass. -- Christopher Montgomery wants to be the Linus Torvalds
of music, the creator of a piece of free software that has the sweeping
impact of Mr. Torvalds's Linux operating system. He soon may begin finding
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes
I have set up two replica 2 arbiter 1 volumes with 9 bricks
[root at gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2:
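For context, a layout like the one described (replica 2 plus arbiter,
3 x (2 + 1) = 9 bricks) is typically created along the lines of the sketch
below; the volume name and the gfs1/gfs2 brick paths are taken from the
quoted output, the arbiter host and remaining paths are placeholders, and the
exact replica/arbiter syntax can differ between gluster releases:

  # One data brick on gfs1 and gfs2 plus an arbiter brick per subvolume
  gluster volume create gfsvol replica 3 arbiter 1 \
      gfs1:/gfs/brick1/gv0 gfs2:/gfs/brick1/gv0 arb1:/gfs/arbiter1/gv0 \
      gfs1:/gfs/brick2/gv0 gfs2:/gfs/brick2/gv0 arb1:/gfs/arbiter2/gv0 \
      gfs1:/gfs/brick3/gv0 gfs2:/gfs/brick3/gv0 arb1:/gfs/arbiter3/gv0
  gluster volume start gfsvol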
2007 May 21
2
CentOS-5 - kmod-gfs dependency issue?
Hi all,
I'm wondering if I'm running into some dependency issues on my CentOS5
test-machines. I've installed with a fairly minimal package set,
updated, removed old kernels and am now experimenting with iscsi and gfs.
I think I need kmod-gfs to get gfs support, but there is only a version
that suits the base kernel, 2.6.18-8.el5.
[root at node02 ~]# yum install kmod-gfs
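A hedged way to confirm the mismatch between the running kernel and the
kernels the available kmod-gfs builds target (generic yum/shell commands,
not taken from the thread):

  # What kernel is actually running?
  uname -r
  # Which kmod-gfs builds does the repository offer?
  yum list available kmod-gfs\*
  # If only the build for the base kernel (2.6.18-8.el5) is listed, booting
  # that kernel or waiting for a kmod rebuilt against the updated kernel are
  # the usual options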
2018 May 04
0
Crashing applications, RDMA_ERROR in logs
Hello gluster users and professionals,
We are running a gluster 3.10.10 distributed volume (9 nodes) using the RDMA
transport.
From time to time applications crash with I/O errors (can't access file)
and in the client logs we can see messages like:
[2018-05-04 10:00:43.467490] W [MSGID: 114031]
[client-rpc-fops.c:2640:client3_3_readdirp_cbk] 0-gv0-client-2: remote
operation failed [Transport
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Hi,
Maybe someone can point me to documentation or explain this? I can't
find it myself.
Do we have any other useful resources besides doc.gluster.org? As far as I
can see, many gluster options are not described there, or there is no
explanation of what they do...
On 2018-03-12 15:58, Anatoliy Dmytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume
2023 Jun 30
1
remove_me files building up
Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers crashed; we got the server back up and running and ensured that all healing entries cleared, and also increased the server spec (CPU/Mem) as this seemed to be the potential cause.
Since then, however, we've seen some strange behaviour,
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello,
We have a very fresh gluster 3.10.10 installation.
Our volume is created as a distributed volume, 9 bricks, 96TB in total
(87TB after the 10% gluster disk space reservation).
For some reason I can't "heal" the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has
been unsuccessful on bricks that are down. Please check if all brick
processes
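A hedged first check for this situation, using the volume name from the
excerpt; as the replies further down note, heal only applies to replicated
volumes:

  # Are all brick processes actually up?
  gluster volume status gv0
  # Is the volume replicated or purely distributed?
  gluster volume info gv0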
2018 Mar 06
1
geo replication
Hi,
Have problems with geo-replication on glusterfs 3.12.6 / Ubuntu 16.04.
I can see a "master volinfo unavailable" message in the master logfile.
Any ideas?
Master:
Status of volume: testtomcat
Gluster process TCP Port RDMA Port Online Pid
------------------------------------------------------------------------------
Brick gfstest07:/gfs/testtomcat/mount 49153 0
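A hedged way to inspect the geo-replication session behind the "master
volinfo unavailable" message; the master volume name is taken from the
excerpt, while the slave host and slave volume are placeholders:

  gluster volume geo-replication testtomcat slavehost::slavevol status detail
  gluster volume geo-replication testtomcat slavehost::slavevol config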
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
Can we add a smarter error message for this situation by checking volume
type first?
Cheers,
Laura B
On Wednesday, March 14, 2018, Karthik Subrahmanya <ksubrahm at redhat.com>
wrote:
> Hi Anatoliy,
>
> The heal command is basically used to heal any mismatching contents
> between replica copies of the files.
> For the command "gluster volume heal <volname>"
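A brief usage sketch of the command being discussed, with the volume name
taken from the thread:

  # Trigger an index self-heal on a replicated volume
  gluster volume heal gv0
  # List entries still pending heal
  gluster volume heal gv0 info
  # On a purely distributed volume there are no replica copies to reconcile,
  # so the heal command is not applicable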
2018 Mar 14
0
Can't heal a volume: "Please check if all brick processes are running."
Hi Karthik,
Thanks a lot for the explanation.
Does it mean that a distributed volume's health can be checked only with
the "gluster volume status" command?
And one more question: cluster.min-free-disk is 10% by default. What
kind of "side effects" can we face if this option is reduced to,
for example, 5%? Could you point to any best practice document(s)?
Regards,
Anatoliy
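For the cluster.min-free-disk question, the option is set per volume; the
sketch below uses the volume name from the thread and the 5% value from the
question:

  # Check the current value (10% by default)
  gluster volume get gv0 cluster.min-free-disk
  # Lower the reserve to 5%: DHT then steers new files away from a brick only
  # once its free space drops below 5%, so bricks can run much closer to full
  gluster volume set gv0 cluster.min-free-disk 5%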
2023 Jul 04
1
remove_me files building up
Hi,
Thanks for your response; please find the xfs_info output for each brick on the arbiter below:
root at uk3-prod-gfs-arb-01:~# xfs_info /data/glusterfs/gv1/brick1
meta-data=/dev/sdc1              isize=512    agcount=31, agsize=131007 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =
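Since the underlying issue in this thread is inode exhaustion on the arbiter
bricks, a quick complementary check alongside xfs_info, using the brick path
from the excerpt:

  # Inode usage for the brick filesystem
  df -i /data/glusterfs/gv1/brick1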
2023 Jul 03
1
remove_me files building up
Hi,
You mentioned that the arbiter bricks run out of inodes. Are you using XFS? Can you provide the xfs_info of each brick?
Best Regards,
Strahil Nikolov
On Sat, Jul 1, 2023 at 19:41, Liam Smith <liam.smith at ek.co> wrote: Hi,
We're running a cluster with two data nodes and one arbiter, and have sharding enabled.
We had an issue a while back where one of the servers