Displaying 20 results from an estimated 5000 matches similar to: "glusterfs: Failed to get the port number for remote subvolume"
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian. I'm able to start the volume and
mount it via mount -t glusterfs, and I can see everything. However, I am
still seeing the following error in /var/log/glusterfs/nfs.log:
[2011-02-04 13:09:16.404851] E
[client-handshake.c:1079:client_query_portmap_cbk]
bhl-volume-client-98: failed to get the port number for remote
subvolume
[2011-02-04 13:09:16.404909] I
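That error generally means the client (here the gluster NFS server) asked glusterd's portmapper for a brick's port and the brick was not registered yet. A minimal way to check, sketched under the assumption that the volume is named bhl-volume (taken from the log prefix) and that a release with the volume status command is in use:

  # mount with the native client; server1 is an illustrative hostname
  mount -t glusterfs server1:/bhl-volume /mnt/bhl
  # on any server node, confirm every brick shows a port and Online = Y
  gluster volume status bhl-volume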
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote:
Hey Guys,
Returning to this topic, after disabling the quorum:
cluster.quorum-type: none
cluster.server-quorum-type: none
I've run into a number of gluster errors (see below).
I'm using gluster as the backend for my NFS storage. I have gluster
running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
The path /n is
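For reference, those options are applied per volume from any server node; a minimal sketch, assuming the volume name gv01 from the subject line:

  # disable client-side and server-side quorum on the two-node volume
  gluster volume set gv01 cluster.quorum-type none
  gluster volume set gv01 cluster.server-quorum-type none
  # confirm the reconfigured options took effect
  gluster volume info gv01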
2018 Feb 24
0
Failed heal volume
When I try to heal the volume I get these log errors, and 3221 files are not
healing:
[2018-02-24 15:32:00.915219] W [socket.c:3216:socket_connect] 0-glusterfs:
Error disabling sockopt IPV6_V6ONLY: "Protocollo non disponibile" (Protocol not available)
[2018-02-24 15:32:00.915854] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-02-24 15:32:01.925714] E
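When files are reported as not healing, a common first step is to list the pending entries and retrigger a heal; a hedged sketch, with VOLNAME standing in for the volume name (not given in the post):

  # list entries still pending heal, and any reported split-brain
  gluster volume heal VOLNAME info
  gluster volume heal VOLNAME info split-brain
  # retrigger a full self-heal crawl
  gluster volume heal VOLNAME full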
2009 Mar 18
0
glusterfs bdb backend problem
I cannot touch or cp files to a glusterfs filesystem with the bdb backend.
orion31 is the glusterfs server; orion28 is the glusterfs client.
-------------------------------------------------------------------------------------------
[root@orion28 ~]# mount -t glusterfs /etc/glusterfs/glusterfs-bdb.vol /mnt
[root@orion28 ~]# touch /mnt/a
touch: cannot touch `/mnt/a': No such file or directory
[root
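One way to get more detail on why the create fails is to remount the client with debug logging; an illustrative sketch only, assuming a 2.x-era client where -f, -L and -l select the volfile, log level and log file:

  # unmount, then remount with debug-level logging to a dedicated file
  umount /mnt
  glusterfs -f /etc/glusterfs/glusterfs-bdb.vol -L DEBUG \
            -l /var/log/glusterfs/bdb-client.log /mnt
  # retry the failing operation and inspect the log
  touch /mnt/a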
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to be able to still use the volume when one of the
nodes goes down.
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
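If a third machine is available, a lightweight alternative to a full third replica is an arbiter brick, which provides quorum without storing file data; an illustrative sketch with an assumed hostname and brick path:

  # convert the replica-2 volume to replica 3 with an arbiter brick
  gluster volume add-brick gv01 replica 3 arbiter 1 nfs03:/bricks/gv01/arbiter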
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two-node glusterfs setup, with one node down, I can't use the second
node to mount the volume. I understand this is expected behaviour?
Is there any way to allow the secondary node to keep functioning and then
replicate what changed to the first (primary) node when it's back online?
Or should I just go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still, I set the two params below to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
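On recent releases the effective values can be read back per volume; a small hedged check, reusing the gv01 name from the subject line:

  gluster volume get gv01 cluster.quorum-type
  gluster volume get gv01 cluster.server-quorum-type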
2010 May 31
2
DHT translator problem
Hello,
I am trying to configure a volume using DHT, however after I mount it,
the mount point looks rather strange and when I try to do 'ls' on it I get:
ls: /mnt/gtest: Stale NFS file handle
I can create files and dirs in the mount point and I can list them, but I
can't list the mount point itself.
Example:
The volume is mounted on /mnt/gtest
[root@storage2]# ls -l /mnt/
?---------
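For comparison, a minimal hand-written client-side DHT volfile of that era looks roughly like the sketch below; the hostnames and subvolume names are assumptions, not taken from the post:

  volume client1
    type protocol/client
    option transport-type tcp
    option remote-host storage1
    option remote-subvolume brick1
  end-volume

  volume client2
    type protocol/client
    option transport-type tcp
    option remote-host storage2
    option remote-subvolume brick1
  end-volume

  volume dht0
    type cluster/distribute
    subvolumes client1 client2
  end-volume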
2010 Mar 15
1
Glusterfs 3.0.X crashed on Fedora 11
glusterfs 3.0.X crashed on Fedora 12 with a buffer overflow; it seems fine on Fedora 11.
Name : fuse
Arch : x86_64
Version : 2.8.1
Release : 4.fc12
Name : glibc
Arch : x86_64
Version : 2.11.1
Release : 1
complete log:
======================================================================================================
[root@test_machine06 ~]# glusterfsd
2008 Dec 20
1
glusterfs1.4rc6 bdb problem
I downloaded the 1.4rc6 version, and:
#rpmbuild -ta glusterfs-1.4.0rc6.tar.gz --without ibverbs
# cd /usr/src/redhat/RPM/i386/
# rpm -ivh glusterfs-1.4.0rc6-1.i386.rpm
my configuration:
# cat /etc/gluster/glusterfs-server.vol
volume bdb
  type storage/bdb
  option directory /data2/glusterfs
  option checkpoint-timeout 10
  option lru-limit 200
end-volume
volume bdbserver
  type protocol/server
  option
2011 Feb 04
1
GlusterFS and MySQL Innodb Locking Issue
I'm having some problems getting two nodes to mount a shared gluster
volume where I have the MySQL data files stored. The databases are
Innodb. Creating the volume on the master server works fine and it
mounts, and when I mount that on the first mysql node it works fine,
too. However, when I try to mount it with the second node I get this
error:
InnoDB: Unable to lock ./ibdata1, error: 11
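Error 11 is EAGAIN, i.e. another process already holds the lock on ibdata1, which is what happens when a second mysqld opens the same datadir over the shared volume. Whether advisory locks really are visible across both clients can be sanity-checked with flock(1), a rough proxy for what InnoDB does; the paths below are made up:

  # on node A: create a test file on the shared mount and hold a lock on it
  touch /var/lib/mysql-shared/locktest
  flock /var/lib/mysql-shared/locktest -c 'sleep 60' &
  # on node B: a non-blocking attempt should fail while node A holds the lock
  flock -n /var/lib/mysql-shared/locktest -c 'echo lock acquired' \
      || echo 'lock is held on the other node (expected)'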
2009 Jun 11
2
Problem with new version of GlusterFS-2.0.1 while copying.
Hi,
I am having a problem with the new version, GlusterFS 2.0.1, while copying
as the "apache" user:
sudo -u apache cp -pvf zip/* test/
I get the message:
cp: getting attribute
`trusted.glusterfs.afr.data-pending' of
`zip/speccok1ma131231824637.zip': Operation not permitted
`zip/speccok1ma131231824776.zip' -> `test/speccok1ma131231824776.zip'
No problem while
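The trusted.glusterfs.afr.* attributes are glusterfs-internal replication changelog xattrs kept in the trusted.* namespace, which only root may read, and that is most likely why the copy run as the apache user trips over them. To inspect them directly (as root) the usual check is getfattr against the file's path on a brick; the brick path below is an example:

  getfattr -d -m . -e hex /export/brick1/zip/speccok1ma131231824637.zip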
2010 Jan 03
2
Where is log file of GlusterFS 3.0?
I cannot find the log file of GlusterFS 3.0!
In the past I installed GlusterFS 2.0.6 without problems, and the server and
client log files were placed in /var/log/glusterfs/...
But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit, 4 servers + 1
client), I start the glusterfs servers and client, type *df -H* on the client,
and the result is: "Transport endpoint is not connected"
*I want to detect the bug, but I have not found
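The native client normally logs under /var/log/glusterfs/, with a file name derived from the mount point; if nothing shows up there, the log file and log level can be forced when mounting. A hedged sketch, with the volfile path and mount point as assumptions:

  glusterfs -f /etc/glusterfs/glusterfs-client.vol \
            -l /var/log/glusterfs/client-mnt.log -L DEBUG /mnt/gluster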
2009 May 28
2
Glusterfs 2.0 hangs on high load
Hello!
After upgrading to version 2.0 (now using 2.0.1), I'm experiencing problems
with glusterfs stability.
I'm running a 2-node setup with client-side AFR, and glusterfsd is also
running on the same servers. From time to time glusterfs just hangs; I can
reproduce this by running the iozone benchmarking tool. I'm using a patched
FUSE, but the result is the same with unpatched.
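When the client hangs like this, a backtrace of every thread in the hung process is usually the most useful thing to attach to a report; a generic sketch, assuming gdb is installed and a single native-client process is the one that is wedged:

  # capture a backtrace of all threads in the hung glusterfs client
  gdb -batch -ex 'thread apply all bt' -p "$(pidof glusterfs)" \
      > /tmp/glusterfs-hang-backtrace.txt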
2018 Feb 05
0
Fwd: Troubleshooting glusterfs
On 5 February 2018 at 15:40, Nithya Balachandran <nbalacha at redhat.com>
wrote:
> Hi,
>
>
> I see a lot of the following messages in the logs:
> [2018-02-04 03:22:01.544446] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk]
> 0-glusterfs: No change in volfile,continuing
> [2018-02-04 07:41:16.189349] W [MSGID: 109011]
> [dht-layout.c:186:dht_layout_search] 48-gv0-dht: no
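If the truncated dht_layout_search warnings turn out to be the usual "no subvolume for hash" messages, they tend to point at directory layouts that are not spread across all bricks (for example after an add-brick without a rebalance); the standard remedy is a fix-layout, sketched here with the gv0 name taken from the log prefix:

  gluster volume rebalance gv0 fix-layout start
  gluster volume rebalance gv0 status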
2017 Sep 08
0
GlusterFS as virtual machine storage
On Sep 8, 2017 13:36, "Gandalf Corvotempesta" <
gandalf.corvotempesta at gmail.com> wrote:
2017-09-08 13:21 GMT+02:00 Pavel Szalbot <pavel.szalbot at gmail.com>:
> Gandalf, isn't possible server hard-crash too much? I mean if reboot
> reliably kills the VM, there is no doubt network crash or poweroff
> will as well.
IIUP, the only way to keep I/O running is to
2012 Jan 04
0
FUSE init failed
Hi,
I'm having an issue using the GlusterFS native client.
After doing a mount the filesystem appears mounted but any operation
results in a
Transport endpoint is not connected
message
gluster peer status and volume info don't complain.
I've copied the mount log below which mentions an error at fuse_init.
The kernel is based on 2.6.15 and FUSE api version is 7.3.
I'm using
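FUSE kernel API 7.3 (kernel 2.6.15) is very old, and the fuse_init failure suggests the glusterfs client and the kernel cannot agree on a protocol version. What the kernel actually advertises can be checked without gluster involved; a small sketch:

  # the kernel logs its FUSE API version when the module initialises
  dmesg | grep -i 'fuse init'
  modinfo fuse | grep -i version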
2017 Jul 30
1
Lose gnfs connection during test
Hi all
I use a Distributed-Replicate (12 x 2 = 24) hot tier plus a
Distributed-Replicate (36 x (6 + 2) = 288) cold tier with gluster 3.8.4
for performance testing. When I set client/server.event-threads to small
values such as 2, it works OK. But if I set client/server.event-threads to
large values such as 32, the network connections always become unavailable
during the test, with the following error messages in stree
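For reference, those thread counts are ordinary per-volume options, so stepping them back to a moderate value is a one-line change per side; VOLNAME below stands in for the actual volume name:

  gluster volume set VOLNAME client.event-threads 4
  gluster volume set VOLNAME server.event-threads 4
  # read back the effective value
  gluster volume get VOLNAME client.event-threads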
2017 Aug 18
1
Is transport=rdma tested with "stripe"?
On Wed, Aug 16, 2017 at 4:44 PM, Hatazaki, Takao <takao.hatazaki at hpe.com> wrote:
>> Note that "stripe" is not tested much and practically unmaintained.
>
> Ah, this was what I suspected. Understood. I'll be happy with "shard".
>
> Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers
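Since the thread settles on shard as the replacement for stripe, the relevant switches look roughly like this; the volume name and block size are illustrative, and shard is normally enabled before data is written to the volume:

  gluster volume set VOLNAME features.shard on
  gluster volume set VOLNAME features.shard-block-size 64MB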