rajasekhar gurram
2008-Jun-27 12:20 UTC
[Gluster-users] Glusterfs could not open spec file
Dear Team,

I have installed and configured GlusterFS on one server and one client. It worked fine at first, but later it stopped working. My configuration files:

server

[root@rhel2 ~]# cat /etc/glusterfs/glusterfs-server.vol
volume rhel2
  type storage/posix                 # POSIX FS translator
  option directory /opt              # Export this directory
end-volume

volume rhel2
  type protocol/server
  option transport-type tcp/server   # For TCP/IP transport
  option client-volume-filename /etc/glusterfs/glusterfs-client.vol
  subvolumes export test
  option auth.ip.rhel2.allow *       # Allow access to "brick" volume
end-volume
[root@rhel2 ~]#

client

[root@test ~]# cat /etc/glusterfs/glusterfs-client.vol
volume rhel2
  type protocol/client
  option transport-type tcp/client
  option remote-host 10.129.150.227
  option remote-subvolume rhel2
end-volume
[root@test ~]#

problem:

[root@test ~]# glusterfs --server 10.129.150.227 /mnt/glusterfs/ --volume-name rhel2
glusterfs: could not open specfile
[root@test ~]#

Apart from this, I have a few doubts:

1) Your website says that GlusterFS has no single point of failure, but from a configuration point of view it follows a server/client model, so if the server fails the client cannot mount.

2) On the server, how can I confirm whether the directory was exported? In NFS we have the showmount command to check this.

Kindly let me know if I am doing something wrong; it will help me with further checks.

Thanks and Regards,
G.Rajasekhar,
System Engineer.
Raghavendra G
2008-Jul-02
[Gluster-users] Glusterfs could not open spec file

Hi Rajasekhar,

Please find my comments inlined. A minimal sketch of the corrected spec files follows at the end of this message.

On Fri, Jun 27, 2008 at 4:20 PM, rajasekhar gurram <rajasekhar.gurram@locuz.com> wrote:

> server
> [root@rhel2 ~]# cat /etc/glusterfs/glusterfs-server.vol
> volume rhel2
>   type storage/posix                 # POSIX FS translator
>   option directory /opt              # Export this directory
> end-volume
>
> volume rhel2
>   type protocol/server
>   option transport-type tcp/server   # For TCP/IP transport
>   option client-volume-filename /etc/glusterfs/glusterfs-client.vol
>   subvolumes export test
>   option auth.ip.rhel2.allow *       # Allow access to "brick" volume
> end-volume

* Both xlators (protocol/server and storage/posix) are named rhel2. Each translator instance should be given a different name.

* protocol/server lists "export" as one of its subvolumes, but no such volume is defined in the specfile. Try removing it if you don't need it.

* The server should have "option client-volume-specfile <glusterfs-volume-specification-file>", or a client specfile should be present at <glusterfs-install-prefix>/etc/glusterfs-client.vol.

> client
> [root@test ~]# cat /etc/glusterfs/glusterfs-client.vol
> volume rhel2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.129.150.227
>   option remote-subvolume rhel2
> end-volume
>
> problem:
>
> [root@test ~]# glusterfs --server 10.129.150.227 /mnt/glusterfs/ --volume-name rhel2
> glusterfs: could not open specfile

> 1) Your website says that GlusterFS has no single point of failure, but
> from a configuration point of view it follows a server/client model, so if
> the server fails the client cannot mount.

There is no single point of failure when GlusterFS is run in clustered mode. What this means is that there is no single metadata server whose failure would render the cluster inoperable (though currently unify has a limitation in the form of the namespace volume, which will be fixed in future releases).

> 2) On the server, how can I confirm whether the directory was exported? In
> NFS we have the showmount command to check this.

There is currently no tool that lists the directories exported by a server, but this can be found out by checking the server's log files.

--
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Pray, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous
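For reference, a minimal sketch of the two spec files with the comments above applied. The volume names (brick, server, client) and the log path in the last command are illustrative assumptions, not taken from the original mails; adjust them to your installation.

  # /etc/glusterfs/glusterfs-server.vol -- on 10.129.150.227
  volume brick
    type storage/posix
    option directory /opt              # the directory being exported
  end-volume

  volume server
    type protocol/server
    option transport-type tcp/server
    option client-volume-filename /etc/glusterfs/glusterfs-client.vol
    subvolumes brick                   # list only volumes defined above
    option auth.ip.brick.allow *       # auth rule names the exported volume
  end-volume

  # /etc/glusterfs/glusterfs-client.vol -- on the client
  volume client
    type protocol/client
    option transport-type tcp/client
    option remote-host 10.129.150.227
    option remote-subvolume brick      # must match a volume name on the server
  end-volume

To see what the server is doing, including which volumes it serves, check its log file. The path depends on how glusterfsd was started (see its -l/--log-file option); for example:

  tail -f /var/log/glusterfs/glusterfsd.log   # assumed log location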
Raghavendra G
2008-Jul-02
[Gluster-users] Glusterfs could not open spec file

I had sent it earlier from my gmail id, which is not added to gluster-users.

On Wed, Jul 2, 2008 at 12:16 PM, Raghavendra G <raghavendra.hg@gmail.com> wrote:

> [...]

--
Raghavendra G
Raghavendra G
2008-Jul-07
[Gluster-users] Glusterfs could not open spec file

Hi Rajasekhar,

Please find my comments inlined.

On Mon, Jul 7, 2008 at 11:21 AM, rajasekhar gurram <rajasekhar.gurram@locuz.com> wrote:

> Hi Raghavendra,
>
> Thank you for your kind response. I have made some modifications to my
> configuration files, but I still cannot mount on the client side. Kindly
> correct my configuration.
>
> Server
> [root@rhel2 ~]# cat /etc/glusterfs/glusterfs-server.vol
> volume brick
>   type storage/posix        # POSIX FS translator
>   option directory /opt     # Export this directory
> end-volume
>
> volume rhel2
>   type protocol/server
>   option listen-port 6998

GlusterFS uses port 6996 by default. Either do not specify "option listen-port", or make sure to provide that port number in the --server option on the glusterfs command line.

>   option transport-type tcp/server
>   subvolumes brick
>   option client-volume-filename /usr/etc/glusterfs-client.vol
>   option auth.ip.brick.allow *   # Allow access to "brick" volume
> end-volume
> [root@rhel2 ~]#
>
> CLIENT
> [root@test ~]# cat /usr/etc/glusterfs-client.vol
> volume rhel2
>   type protocol/client
>   option transport-type tcp/client
>   option remote-host 10.129.150.227
>   option remote-subvolume brick
> end-volume
> [root@test ~]#
>
> [root@test ~]# glusterfs -f /usr/etc/glusterfs-client.vol /mnt/glusterfs/
> [root@test ~]#
>
> This command gives no response (it shows no error), but:
>
> [root@test ~]# mount.glusterfs 10.129.150.227:/opt/ /mnt/glusterfs/
> glusterfs: could not open specfile
> [root@test ~]#
>
> That is the error shown for the above command.
>
> Kindly correct me if I am doing something wrong.
>
> Thanks and Regards,
> G.Rajasekhar,
> System Engineer
>
> [...]

--
Raghavendra G

A centipede was happy quite, until a toad in fun,
Said, "Pray, which leg comes after which?",
This raised his doubts to such a pitch,
He fell flat into the ditch,
Not knowing how to run.
-Anonymous
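For reference, a minimal sketch of the two ways this port mismatch could be resolved. The "remote-port" option below comes from GlusterFS 1.3-era protocol/client documentation and is an assumption here, since the original mails never use it; verify it against the version you have installed.

  # Option A: remove the non-default port so both sides use the
  # default (6996). In glusterfs-server.vol, delete this line:
  #   option listen-port 6998

  # Option B: keep the server on 6998 and point the client at it.
  # In /usr/etc/glusterfs-client.vol:
  volume rhel2
    type protocol/client
    option transport-type tcp/client
    option remote-host 10.129.150.227
    option remote-port 6998          # assumed option; must match listen-port
    option remote-subvolume brick
  end-volume

After restarting glusterfsd on the server, the earlier client command should then be able to reach it:

  glusterfs -f /usr/etc/glusterfs-client.vol /mnt/glusterfs/
  df -h /mnt/glusterfs                # verify the mount actually appeared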