Displaying 20 results from an estimated 35 matches for "volumenam".
2000 Mar 23
1
URGENT: WIN2K INVALID VOLUMENAME
Dear Samba Users
I detected a strange behaviour between Win2k Prof. and Samba and I hope
somebody can help me fix it.
The Bug:
=======
Samba does not correctly return the volume name back to win2k.
The man pages say that by default the share name is returned as the
volume name.
This works fine for Win98 and NT4 but not for Win2k.
If I say "label v:" (needless to say that v: is my samba drive) from Win2k I
get the following result:
==========================================================================
Microsoft Windows 2000 [Version 5.00.2195]
(C) Copyrig...
2000 Mar 19
0
URGENT:WIN2K MISSING VOLUMENAME
Dear Samba Users
I detected a strange behaviour between Win2k Prof. and Samba and I hope somebody can help me
fix it.
The Bug:
=======
Samba does not correctly return the volume name back to win2k.
The man pages say that by default the share name is returned as the volume name.
This works fine for Win98 and NT4 but not for Win2k.
If I say "label v:" (needless to say that v: is my samba drive) from Win2k I get the following result:
==========================================================================
Microsoft Windows 2000 [Version 5.00.2195]
(C) Copyrig...
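For reference, a minimal smb.conf sketch of the behaviour being described: the share-level "volume" parameter overrides the label returned to clients, which otherwise defaults to the share name. The share name and path below are placeholders.

[data]
    path = /srv/samba/data
    ; override the volume label returned to clients; if omitted,
    ; the share name ("data") is returned as the label
    volume = DATA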
2017 Jul 05
1
Gluster failure due to "0-management: Lock not released for <volumename>"
...s on any node.
>
>
>
> *From:* Victor Nomura [mailto:victor at mezine.com]
> *Sent:* July-04-17 9:41 AM
> *To:* 'Atin Mukherjee'
> *Cc:* 'gluster-users'
> *Subject:* RE: [Gluster-users] Gluster failure due to "0-management: Lock
> not released for <volumename>"
>
>
>
> The nodes have all been rebooted numerous times with no difference in
> outcome. The nodes are all connected to the same switch and I also
> replaced it to see if it made any difference.
>
>
>
> There are no issues with connectivity network-wise and no...
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...ks. Is there a manual way to resolve the locks?
Regards,
Victor
From: Atin Mukherjee [mailto:amukherj at redhat.com]
Sent: June-30-17 3:40 AM
To: Victor Nomura
Cc: gluster-users
Subject: Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>"
On Thu, 29 Jun 2017 at 22:51, Victor Nomura <victor at mezine.com> wrote:
Thanks for the reply. What would be the best course of action? The data on the volume isn't important right now but I'm worried when our setup goes to production we don't have the same situation and...
2017 Jul 04
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...ther nodes in order to perform any gluster commands on any node.
From: Victor Nomura [mailto:victor at mezine.com]
Sent: July-04-17 9:41 AM
To: 'Atin Mukherjee'
Cc: 'gluster-users'
Subject: RE: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>"
The nodes have all been rebooted numerous times with no difference in outcome. The nodes are all connected to the same switch and I also replaced it to see if it made any difference.
There are no issues with connectivity network-wise and no firewall in place between the nodes....
2017 Jun 30
3
Gluster failure due to "0-management: Lock not released for <volumename>"
...t; Victor Nomura
>
>
>
> *From:* Atin Mukherjee [mailto:amukherj at redhat.com]
> *Sent:* June-27-17 12:29 AM
>
>
> *To:* Victor Nomura
> *Cc:* gluster-users
>
> *Subject:* Re: [Gluster-users] Gluster failure due to "0-management: Lock
> not released for <volumename>"
>
>
>
> I had looked at the logs shared by Victor privately and it seems there is
> a N/W glitch in the cluster which is causing glusterd to lose its
> connection with other peers, and as a side effect a lot of rpc
> requests are getting bailed out re...
2009 Oct 15
1
Patch depends on the previous storage patch...
This patch is dependent on the previously submitted storage admin patch.
2017 Jun 29
0
Gluster failure due to "0-management: Lock not released for <volumename>"
...ble after? (I don't delete the data on the bricks)
Regards,
Victor Nomura
From: Atin Mukherjee [mailto:amukherj at redhat.com]
Sent: June-27-17 12:29 AM
To: Victor Nomura
Cc: gluster-users
Subject: Re: [Gluster-users] Gluster failure due to "0-management: Lock not released for <volumename>"
I had looked at the logs shared by Victor privately and it seems there is a N/W glitch in the cluster which is causing glusterd to lose its connection with other peers, and as a side effect a lot of rpc requests are getting bailed out, leaving glusterd to end up in...
2017 Jun 22
0
Gluster failure due to "0-management: Lock not released for <volumename>"
Could you attach glusterd.log and cmd_history.log files from all the nodes?
On Wed, Jun 21, 2017 at 11:40 PM, Victor Nomura <victor at mezine.com> wrote:
> Hi All,
>
>
>
> I'm fairly new to Gluster (3.10.3) and got it going for a couple of months
> now but suddenly after a power failure in our building it all came crashing
> down. No client is able to connect after
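A sketch for gathering those logs from each node, assuming the default /var/log/glusterfs layout (on some releases the glusterd log is named etc-glusterfs-glusterd.vol.log instead of glusterd.log):

# run on every node; adjust paths if your install logs elsewhere
tar czf /tmp/gluster-logs-$(hostname).tar.gz \
    /var/log/glusterfs/glusterd.log \
    /var/log/glusterfs/cmd_history.log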
2010 May 04
1
Glusterfs and Xen - mount option
Hi,
I've installed Gluster Storage Platform on two servers (mirror) and mounted
the volume on a client.
I had to mount it using "mount -t glusterfs
<MANAGEMENT_SERVER>:<VOLUMENAME-ON-WEBGUI>-tcp <MOUNTPOINT>" because I
didn't have a vol file and couldn't generate one with volgen because we don't
have shell access to the gluster server, right?
How can I add the option to mount it with --disable-direct-io-mode?
I can't find how to do it.
Do you...
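A commonly suggested form of that mount, assuming the mount.glusterfs helper in this release accepts the direct-io-mode option (placeholders as in the message above):

# direct-io-mode=disable is the mount-option equivalent of the
# client binary's --disable-direct-io-mode flag
mount -t glusterfs -o direct-io-mode=disable \
    <MANAGEMENT_SERVER>:<VOLUMENAME-ON-WEBGUI>-tcp <MOUNTPOINT>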
2017 Dec 21
3
Wrong volume size with df
...pod-sjc1-gluster2:/data/brick2/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster1:/data/brick3/gv0
Status: Connected
Number of entries: 0
Brick pod-sjc1-gluster2:/data/brick3/gv0
Status: Connected
Number of entries: 0
> 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
Attached
> 3 - output of gluster volume <volname> info
[root at pod-sjc1-gluster2 ~]# gluster volume info
Volume Name: gv0
Type: Distributed-Replicate
Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
Status: Started
Snapshot Count: 13
Number of Bricks: 3 x 2 = 6
Transport-type: tcp...
2017 Jun 27
2
Gluster failure due to "0-management: Lock not released for <volumename>"
I had looked at the logs shared by Victor privately and it seems there is
a N/W glitch in the cluster which is causing glusterd to lose its
connection with other peers, and as a side effect a lot of rpc
requests are getting bailed out, leaving glusterd in a stale
lock; hence you see that some of the commands failed with "another
transaction is in progress or
2018 Jan 02
0
Wrong volume size with df
...> Number of entries: 0
>
> Brick pod-sjc1-gluster1:/data/brick3/gv0
> Status: Connected
> Number of entries: 0
>
> Brick pod-sjc1-gluster2:/data/brick3/gv0
> Status: Connected
> Number of entries: 0
>
> > 2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
>
> Attached
>
> > 3 - output of gluster volume <volname> info
>
> [root at pod-sjc1-gluster2 ~]# gluster volume info
>
> Volume Name: gv0
> Type: Distributed-Replicate
> Volume ID: d490a9ec-f9c8-4f10-a7f3-e1b6d3ced196
> Status: Started
> Snapshot C...
2018 Apr 04
0
Invisible files and directories
...der but on the bricks I
found some files. Renaming the directory caused it to reappear.
We're running gluster 3.12.7-1 on Debian 9 from the repositories provided by gluster.org, upgraded from 3.8 a while ago. The volume is mounted via the
fuse client. Our settings are:
> gluster volume info $VOLUMENAME
>
> Volume Name: $VOLUMENAME
> Type: Distribute
> Volume ID: 0d210c70-e44f-46f1-862c-ef260514c9f1
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 23
> Transport-type: tcp
> Bricks:
> Brick1: gluster02:/srv/glusterfs/bricks/DATA201/data
> Brick2: gluster...
2017 Jun 21
2
Gluster failure due to "0-management: Lock not released for <volumename>"
Hi All,
I'm fairly new to Gluster (3.10.3) and got it going for a couple of months
now but suddenly after a power failure in our building it all came crashing
down. No client is able to connect after powering back the 3 nodes I have
setup.
Looking at the logs, it looks like there's some sort of "Lock" placed on the
volume which prevents all the clients from connecting to
2017 Dec 21
0
Wrong volume size with df
Could you please provide the following -
1 - output of gluster volume heal <volname> info
2 - /var/log/glusterfs - provide log file with mountpoint-volumename.log
3 - output of gluster volume <volname> info
4 - output of gluster volume <volname> status
5 - Also, could you try unmounting the volume and mounting it again and check the size?
----- Original Message -----
From: "Teknologeek Teknologeek" <teknologeek06 at gmail....
2012 Sep 13
1
how to monitor glusterfs mounted client
Dear gluster experts,
I want to ask a question about how to monitor active mounted clients for
glusterfs.
I want to know how many clients mount a volume and I also want to
know the clients' hostname/IP.
I walked through the glusterfs admin guide and don't know how. Can anyone help?
--
Yongtao Fu
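One way to answer this, assuming a release whose status command supports the clients keyword (the excerpt above does not name a version), is:

# lists the clients (hostname:port) connected to each brick of the volume
gluster volume status <volname> clients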
2013 Nov 01
1
Hi all! Glusterfs Ipv6 support
Does Gluster 3.3.2 work with IPv6? I can't find an option in the CLI to turn it on and
can't find anything about it in the Admin guide.
When I searched Google I found a workaround: add to the volume config (file
var/lib/glusterd/vols/cluster/volumename.hostname.place-metadirname) the
string:
option transport.address-family inet6
to section:
volume cluster-server
type protocol/server
....
end-volume
And after a restart I see that glusterfsd is listening on an IPv6 address. But I
still can't connect through IPv6 and mount the dir from glusterf...
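Assembling the fragments from the message above, the described volfile edit looks roughly like this (other options elided):

volume cluster-server
    type protocol/server
    option transport.address-family inet6
    ...
end-volume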
2018 Apr 04
2
Invisible files and directories
Right now the volume is running with
readdir-optimize off
parallel-readdir off
On Wed, Apr 4, 2018 at 1:29 AM, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi Serg,
>
> Do you mean that turning off readdir-optimize did not work? Or did you
> mean turning off parallel-readdir did not work?
>
>
>
> On 4 April 2018 at 10:48, Serg Gulko <s.gulko at
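For reference, a sketch of how those two settings are toggled, assuming the usual option names cluster.readdir-optimize and performance.parallel-readdir and the $VOLUMENAME placeholder from the earlier excerpt:

# turn both options off and confirm the resulting values
gluster volume set $VOLUMENAME cluster.readdir-optimize off
gluster volume set $VOLUMENAME performance.parallel-readdir off
gluster volume get $VOLUMENAME cluster.readdir-optimize
gluster volume get $VOLUMENAME performance.parallel-readdir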
2018 Feb 23
0
Problem migration 3.7.6 to 3.13.2
Hi Daniele,
Do you mean that the df -h output is incorrect for the volume post the
upgrade?
If yes and the bricks are on separate partitions, you might be running into
[1]. Can you search for the string "option shared-brick-count" in the files
in /var/lib/glusterd/vols/<volumename> and let us know the value? The
workaround to get this working on the cluster is available in [2].
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1517260
[2] https://bugzilla.redhat.com/show_bug.cgi?id=1517260#c19
Regards,
Nithya
On 23 February 2018 at 15:12, Daniele Frulla <daniele.fru...
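A sketch of the shared-brick-count check being asked for above, assuming shell access on a node; <volumename> is a placeholder for the affected volume:

# per the bug referenced in [1], the value is expected to be 1 when
# each brick sits on its own partition
grep -r "option shared-brick-count" /var/lib/glusterd/vols/<volumename>/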