Displaying 20 results from an estimated 11880 matches for "volumes".
2010 May 04
1
Posix warning : Access to ... is crossing device
...t-type tcp
option remote-host clustr-02
option transport.socket.nodelay on
option transport.remote-port 6996
option remote-subvolume brick24
end-volume
########################################
########################################
volume mirror-0
type cluster/replicate
subvolumes clustr-03-1 clustr-03-2
end-volume
volume mirror-1
type cluster/replicate
subvolumes clustr-03-3 clustr-03-4
end-volume
volume mirror-2
type cluster/replicate
subvolumes clustr-03-5 clustr-03-6
end-volume
volume mirror-3
type cluster/replicate
subvolumes clustr-03-7 clustr-0...
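For orientation: each mirror-N above replicates a pair of protocol/client volumes declared earlier in the same file. A minimal sketch of one such pair, with hypothetical host and brick names:
####
volume clustr-03-1
  type protocol/client
  option transport-type tcp
  option remote-host clustr-03        # hypothetical server
  option remote-subvolume brick1      # hypothetical exported brick
end-volume

volume mirror-0
  type cluster/replicate
  subvolumes clustr-03-1 clustr-03-2  # clustr-03-2 is declared the same way
end-volume
####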
2011 May 07
1
Gluster "Peer Rejected"
...gbe02 is out of sync with the group.
I triggered a manual self-heal by running the recommended find command on a
gluster mount.
I'm stuck... I cannot find ANY docs on this except one saying:
"hi Freddie,
A Peer is Rejected during "peer probe" if the two peers have conflicting
volumes, i.e. volumes with the same name but different contents.
Is this what happened to you?
Thanks
Pranith. "
I don't see any resolution....
Regards,
Nobody's Home
type=2
count=24
status=2
sub_count=2
version=1
transport-type=0
volume-id=9e24a924-cb5b-47c6-924b-9e4ea2dc5023
brick-0=gbe...
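The thread records no fix. The recovery usually suggested for a rejected peer (an assumption here, not taken from this post) is to clear the rejected node's glusterd state, preserving only its UUID file, then re-probe from a healthy node:
####
# on the rejected peer only; keep /var/lib/glusterd/glusterd.info (the UUID)
service glusterd stop
rm -rf /var/lib/glusterd/vols /var/lib/glusterd/peers
service glusterd start
# then, from a healthy peer (hostname taken from the excerpt above):
gluster peer probe gbe02
####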
2009 Mar 04
2
[LLVMdev] Fwd: PPC Nightly Build Result
Something last night broke the build on Darwin PPC. Possibly Gabor's
check-in?
-bw
Begin forwarded message:
> From: admin at spang.apple.com (admin)
> Date: March 4, 2009 3:56:10 AM PST
> To: wendling at apple.com
> Subject: PPC Nightly Build Result
>
> /Volumes/SandBox/NightlyTest/llvmgcc42.roots/llvmgcc42~obj/obj-
> powerpc-powerpc/./prev-gcc/xgcc -B/Volumes/SandBox/NightlyTest/
> llvmgcc42.roots/llvmgcc42~obj/obj-powerpc-powerpc/./prev-gcc/ -B/
> Developer/usr/llvm-gcc-4.2/powerpc-apple-darwin9/bin/ -c -g -O2 -
> mdynamic-no-pic -DIN_G...
2009 Mar 05
0
[LLVMdev] Fwd: PPC Nightly Build Result
...ld on Darwin PPC. Possible Gabor's
> check-in?
>
> -bw
>
> Begin forwarded message:
>
> > From: ad... at spang.apple.com (admin)
> > Date: March 4, 2009 3:56:10 AM PST
> > To: wendl... at apple.com
> > Subject: PPC Nightly Build Result
>
> > /Volumes/SandBox/NightlyTest/llvmgcc42.roots/llvmgcc42~obj/obj-
> > powerpc-powerpc/./prev-gcc/xgcc -B/Volumes/SandBox/NightlyTest/
> > llvmgcc42.roots/llvmgcc42~obj/obj-powerpc-powerpc/./prev-gcc/ -B/
> > Developer/usr/llvm-gcc-4.2/powerpc-apple-darwin9/bin/ -c -g -O2 -
> > mdynam...
2010 Mar 02
2
crash when using the cp command to copy files off a striped gluster dir but not when using rsync
...stripe-5
type protocol/client
option transport-type ib-verbs
option remote-host gluster5
option remote-subvolume iothreads
end-volume
volume readahead-gluster1
type performance/read-ahead
option page-count 4 # 2 is default
option force-atime-update off # default is off
subvolumes client-stripe-1
end-volume
volume readahead-gluster2
type performance/read-ahead
option page-count 4 # 2 is default
option force-atime-update off # default is off
subvolumes client-stripe-2
end-volume
volume readahead-gluster3
type performance/read-ahead
option page-count 4...
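The truncation hides where these read-ahead volumes go; in a striped client volfile of this shape they would normally feed a cluster/stripe translator. A hedged sketch of that missing aggregation (names inferred from the pattern above):
####
volume stripe-0
  type cluster/stripe
  option block-size 128KB   # assumed; 128KB was the legacy default
  subvolumes readahead-gluster1 readahead-gluster2 readahead-gluster3
end-volume
####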
2009 Mar 05
1
[LLVMdev] Fwd: PPC Nightly Build Result
...t;> check-in?
>>
>> -bw
>>
>> Begin forwarded message:
>>
>>> From: ad... at spang.apple.com (admin)
>>> Date: March 4, 2009 3:56:10 AM PST
>>> To: wendl... at apple.com
>>> Subject: PPC Nightly Build Result
>>
>>> /Volumes/SandBox/NightlyTest/llvmgcc42.roots/llvmgcc42~obj/obj-
>>> powerpc-powerpc/./prev-gcc/xgcc -B/Volumes/SandBox/NightlyTest/
>>> llvmgcc42.roots/llvmgcc42~obj/obj-powerpc-powerpc/./prev-gcc/ -B/
>>> Developer/usr/llvm-gcc-4.2/powerpc-apple-darwin9/bin/ -c -g -O2 -
>>...
2012 Jan 04
0
FUSE init failed
...p
20: end-volume
21:
22: volume test-volume-client-3
23: type protocol/client
24: option remote-host node004
25: option remote-subvolume /local
26: option transport-type tcp
27: end-volume
28:
29: volume test-volume-replicate-0
30: type cluster/replicate
31: subvolumes test-volume-client-0 test-volume-client-1
32: end-volume
33:
34: volume test-volume-replicate-1
35: type cluster/replicate
36: subvolumes test-volume-client-2 test-volume-client-3
37: end-volume
38:
39: volume test-volume-dht
40: type cluster/distribute
41: subvolumes te...
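"FUSE init failed" at mount time usually points at the kernel side rather than the volfile. A hedged checklist (not taken from this thread):
####
modprobe fuse          # load the fuse kernel module if it is missing
ls -l /dev/fuse        # the device node must exist and be accessible
dmesg | grep -i fuse   # look for protocol-version or permission errors
####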
2012 Sep 18
4
cannot create a new volume with a brick that used to be part of a deleted volume?
...want to
continue? (y/n) y
Stopping volume gv0 has been successful
[root at farm-ljf0 ~]# gluster volume delete gv0
Deleting volume will erase all information about the volume. Do you
want to continue? (y/n) y
Deleting volume gv0 has been successful
[root at farm-ljf0 ~]# gluster volume info all
No volumes present
########
I then attempted to create a new volume using the same bricks that
used to be part of the (now) deleted volume, but it keeps failing,
claiming that the brick is already part of a volume:
########
[root at farm-ljf1 ~]# gluster volume create gv0 rep 2 transport tcp
10...
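The refusal comes from metadata left on the brick by the deleted volume. Clearing the gluster extended attributes and the internal .glusterfs directory lets the brick be reused (standard procedure, hedged; run on the brick's server with your brick path substituted):
####
setfattr -x trusted.glusterfs.volume-id /export/brick   # hypothetical brick path
setfattr -x trusted.gfid /export/brick
rm -rf /export/brick/.glusterfs
####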
2005 Jul 12
1
HAL and mounting volume
Hi, does anybody understand HAL?
I use CentOS 4 (RHEL 4) and need to set specific mount options for a USB
flash disk.
I found I can do it in
/usr/share/hal/fdi/95userpolicy/storage-policy.fdi
<?xml version="1.0" encoding="ISO-8859-1"?><!-- -*- SGML -*- -->
<deviceinfo version="0.2">
<device>
<match key="volume.fstype"
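The excerpt cuts off inside the <match> element. A hedged completion of such a policy file, setting mount options for vfat volumes (keys follow HAL's volume.policy conventions; the options themselves are illustrative):
####
<?xml version="1.0" encoding="ISO-8859-1"?><!-- -*- SGML -*- -->
<deviceinfo version="0.2">
  <device>
    <match key="volume.fstype" string="vfat">
      <merge key="volume.policy.mount_option.umask=0002" type="bool">true</merge>
      <merge key="volume.policy.mount_option.iocharset=utf8" type="bool">true</merge>
    </match>
  </device>
</deviceinfo>
####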
2018 Apr 09
2
Gluster cluster on two networks
...y] 2-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2018-04-09 11:42:29.632804] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 11:42:29.637247] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick's port is...
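When every subvolume of a replica set is down at once, the client cannot reach any brick port at all. A hedged first check from any peer (standard CLI, not quoted in the thread; volume name taken from the log prefix):
####
gluster volume status urd-gds-volume   # are the bricks online, with ports listed?
gluster peer status                    # are all peers connected over the expected network?
####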
2008 Jun 11
1
software raid performance
Are there known performance issues with using glusterfs on software raid? I've
been playing with a variety of configs (AFR, AFR with Unify) on a two server
setup. Everything seems to work well, but performance (creating files,
reading files, appending to files) is very slow. Using the same configs on
two non-software raid machines shows significant performance increases.
Before I go a
2011 Feb 24
0
No subject
...hese settings may not apply to v3.2):
####
volume posix-stripe
type storage/posix
option directory /export/gluster1/stripe
end-volume
volume posix-distribute
type storage/posix
option directory /export/gluster1/distribute
end-volume
volume locks
type features/locks
subvolumes posix-stripe
end-volume
volume locks-dist
type features/locks
subvolumes posix-distribute
end-volume
volume iothreads
type performance/io-threads
option thread-count 16
subvolumes locks
end-volume
volume iothreads-dist
type performance/io-threads
option thread-count 16
subvolumes...
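A server-side volfile of this shape normally ends by exporting the io-threads volumes through protocol/server. A hedged sketch of that closing section (auth pattern assumed, not shown in the excerpt):
####
volume server
  type protocol/server
  option transport-type tcp
  option auth.addr.iothreads.allow *        # assumed: allow any client
  option auth.addr.iothreads-dist.allow *
  subvolumes iothreads iothreads-dist
end-volume
####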
2009 Jul 13
1
[PATCH] Use volume key instead of path to identify volume.
This patch teaches taskomatic to use the volume 'key' instead of the
path from libvirt to identify the volume in the database. This fixes
the duplicate iscsi volume bug we were seeing. The issue was that
libvirt changed the way it names storage volumes and included a local
ID that changed each time the volume was attached.
Note that the first run with this new patch will create duplicate
volumes because of the key change. Ideally you would delete all storage
pools and re-add them after applying this patch.
Signed-off-by: Ian Main <imain at redhat.com...
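The key/path distinction the patch relies on is visible from virsh: a volume's key stays stable, while its path can change between attachments. A hedged illustration (pool and volume names hypothetical):
####
virsh vol-key --pool iscsi-pool vol1    # stable identifier, suitable as a database key
virsh vol-path --pool iscsi-pool vol1   # path; may differ each time the volume is attached
####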
2018 Apr 10
0
Gluster cluster on two networks
...client-3: disconnected from urd-gds-volume-client-3.
> Client process will keep trying to connect to glusterd until brick's port is available
> [2018-04-09 11:42:29.632804] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event]
> 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until
> atleast one of them comes back up.
> [2018-04-09 11:42:29.637247] I [MSGID: 114018] [client.c:2285:client_rpc_notify]
> 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4.
> Client process will keep trying to connect to glusterd unti\
> ...
2018 Apr 10
1
Gluster cluster on two networks
...ed from urd-gds-volume-client-3.
> > Client process will keep trying to connect to glusterd until brick's port is available
> > [2018-04-09 11:42:29.632804] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event]
> > 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until
> > atleast one of them comes back up.
> > [2018-04-09 11:42:29.637247] I [MSGID: 114018] [client.c:2285:client_rpc_notify]
> > 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4.
> > Client process will keep trying to connect...
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian. I'm able to start the volume
and mount it via mount -t glusterfs, and I can see everything. I am
still seeing the following error in /var/log/glusterfs/nfs.log:
[2011-02-04 13:09:16.404851] E
[client-handshake.c:1079:client_query_portmap_cbk]
bhl-volume-client-98: failed to get the port number for remote
subvolume
[2011-02-04 13:09:16.404909] I
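The message means the client asked glusterd's portmapper for the brick's port and got no answer, typically because the brick process is not running. A hedged check (volume name inferred from the client prefix in the log; note that "gluster volume status" requires a release newer than 3.1.2):
####
gluster volume info bhl-volume   # is the volume actually started?
ps aux | grep glusterfsd         # is a brick process running on each server?
####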
2018 Feb 25
2
Re-adding an existing brick to a volume
Hi!
I am running a replica 3 volume. On server2 I wanted to move the brick
to a new disk.
I removed the brick from the volume:
gluster volume remove-brick VOLUME rep 2
server2:/gluster/VOLUME/brick0/brick force
I unmounted the old brick and mounted the new disk to the same location.
I added the empty new brick to the volume:
gluster volume add-brick VOLUME rep 3
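Once the empty brick is accepted back into the replica set, the existing data still has to be healed onto it. A hedged follow-up (standard CLI, not quoted above; brick path hypothetical):
####
gluster volume add-brick VOLUME replica 3 server2:/gluster/VOLUME/brick1/brick
gluster volume heal VOLUME full   # copy existing files onto the empty brick
####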
2018 Apr 10
0
Gluster cluster on two networks
...fy] 2-urd-gds-volume-client-3: disconnected from urd-gds-volume-client-3. Client process will keep trying to connect to glusterd until brick's port is available
[2018-04-09 11:42:29.632804] E [MSGID: 108006] [afr-common.c:5143:__afr_handle_child_down_event] 2-urd-gds-volume-replicate-0: All subvolumes are down. Going offline until atleast one of them comes back up.
[2018-04-09 11:42:29.637247] I [MSGID: 114018] [client.c:2285:client_rpc_notify] 2-urd-gds-volume-client-4: disconnected from urd-gds-volume-client-4. Client process will keep trying to connect to glusterd until brick's port is a...
2009 Jun 26
0
Error when expand dht model volumes
Hi all:
I ran into a problem expanding DHT volumes: I was writing into a DHT storage directory until it reached 90% full, so I added four new volumes to the configuration file.
But after starting again, some of the data in the directory had disappeared. Why? Is there a special step to take before expanding the volumes?
My client configuration file is:
volume client1...
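Adding subvolumes to a distribute (DHT) setup does not rewrite the directory layout by itself, so lookups can miss files that hash into the new ranges, which is why data appears to vanish. In CLI-managed releases the layout fix is the following (hedged; 2009-era volfile setups used the scale-n-defrag script instead):
####
gluster volume rebalance VOLNAME fix-layout start   # recompute hash ranges over the new subvolumes
gluster volume rebalance VOLNAME status
####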
2012 Jan 05
1
Can't stop or delete volume
Hi,
I can't stop or delete a replica volume:
# gluster volume info
Volume Name: sync1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: thinkpad:/gluster/export
Brick2: quad:/raid/gluster/export
# gluster volume stop sync1
Stopping volume will make its data inaccessible. Do you want to continue? (y/n) y
Volume sync1 does not exist
# gluster volume
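A volume that "gluster volume info" lists but "stop" claims does not exist usually means the peers' glusterd state stores disagree. A hedged recovery (an assumption, not from the thread):
####
ls /var/lib/glusterd/vols/sync1   # compare this directory on both peers
service glusterd restart          # on the peer whose state is stale
gluster volume stop sync1
####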