similar to: Brick and Subvolume Info

Displaying 20 results from an estimated 7000 matches similar to: "Brick and Subvolume Info"

2018 Feb 08 (2 replies): Thousands of EPOLLERR - disconnecting now
Hello, I have a large cluster in which every node is logging:
I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
at a rate of around 4 or 5 per second per node, which adds up to a lot of messages. This seems to happen while my cluster is idle.
2017 Oct 24 (2 replies): gluster tiering errors
Milind - Thank you for the response..
>> What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
Option                   Value
------                   -----
cluster.watermark-hi     90
# gluster volume get <vol> cluster.watermark-low
Option
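For reference, these watermarks are ordinary volume options and can be changed with volume set; the values below are only illustrative, not settings suggested in this thread:
# gluster volume set <vol> cluster.watermark-hi 75
# gluster volume set <vol> cluster.watermark-low 50
Above watermark-hi the tier daemon stops promoting and demotes aggressively; below watermark-low it promotes freely.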
2017 Oct 27 (0 replies): gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test to see system behavior *if quota is turned off*. Although the size of the file that failed migration was 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
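The quota check suggested here can be done with the standard quota commands; a minimal sketch with a placeholder volume name, not commands quoted from the thread:
# gluster volume quota <vol> list
# gluster volume quota <vol> disable
Re-running the failing migration with quota off, then re-enabling it with 'gluster volume quota <vol> enable', isolates whether quota is the culprit.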
2018 Feb 08 (0 replies): Thousands of EPOLLERR - disconnecting now
On Thu, Feb 8, 2018 at 2:04 PM, Gino Lisignoli <glisignoli at gmail.com> wrote:
> Hello
>
> I have a large cluster in which every node is logging:
>
> I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR -
> disconnecting now
>
> At a rate of around 4 or 5 per second per node, which is adding up to a
> lot of messages. This seems to happen while my
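If the EPOLLERR messages turn out to be harmless noise, one generic way to quieten Info-level log spam (a general gluster option, not advice given in this thread) is to raise the log level on clients and bricks:
# gluster volume set <vol> diagnostics.client-log-level WARNING
# gluster volume set <vol> diagnostics.brick-log-level WARNING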
2017 Oct 22 (0 replies): gluster tiering errors
Herb,
What are the high and low watermarks for the tier set at ?
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low
What is the size of the file that failed to migrate as per the following tierd log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
2017 Oct 19 (3 replies): gluster tiering errors
All,
I am new to gluster and have some questions/concerns about some tiering errors that I see in the log files.
OS: CentOS 7.3.1611
Gluster version: 3.10.5
Samba version: 4.6.2
I see the following (scrubbed):
Node 1, /var/log/glusterfs/tier/<vol>/tierd.log:
[2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed
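When debugging tier migration failures like this, the usual first checks (generic commands, not drawn from this thread) are the tier daemon status and the watermark settings:
# gluster volume tier <vol> status
# gluster volume get <vol> cluster.watermark-hi
# gluster volume get <vol> cluster.watermark-low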
2013 Nov 21 (3 replies): Sync data
Hi guys! I have 2 servers in replicate mode; node 1 has all the data and node 2 is empty. I created a volume (gv0) and started it. Now, how can I synchronize all files from node 1 to node 2?
Steps that I followed:
gluster peer probe node1
gluster volume create gv0 replica 2 node1:/data node2:/data
gluster volume gvo start
thanks!
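For reference, the usual way to handle this situation is to start the volume under its real name and trigger a full self-heal so the populated brick is copied to the empty one; a sketch assuming the volume is gv0, as created above:
# gluster volume start gv0
# gluster volume heal gv0 full
# gluster volume heal gv0 info
(The 'gluster volume gvo start' line above would fail: the volume name comes after 'start' and is spelled gv0.)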
2018 Jan 07 (1 reply): Clear heal statistics
Is there any way to clear the historic statistics from the command "gluster volume heal <volume_name> statistics"? It seems the command takes longer and longer to run each time it is used, to the point where it times out and no longer works.
2008 Dec 10 (3 replies): AFR healing problem after returning one node.
I've got a configuration which, put simply, is a combination of afr and unify: the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:
volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume
volume afr1
  type cluster/afr
  subvolumes n1-brick2
2010 Jan 03 (2 replies): Where is log file of GlusterFS 3.0?
I could not find the log file of Gluster 3.0! In the past I installed GlusterFS 2.06 without problems, and the server and client log files were placed in /var/log/glusterfs/... But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit, 4 servers + 1 client), I started the GlusterFS servers and client, typed *df -H* at the client, and the result is: "Transport endpoint is not connected". I want to track down the bug, but I could not find
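For reference (general GlusterFS behavior, not specific to this installation): the logs live under /var/log/glusterfs/ by default, and both the server and client processes accept an explicit log file and log level; a sketch with placeholder volfile paths:
# glusterfsd --volfile=/etc/glusterfs/glusterfsd.vol --log-file=/var/log/glusterfs/server.log --log-level=DEBUG
# glusterfs --volfile=/etc/glusterfs/glusterfs.vol --log-file=/var/log/glusterfs/client.log --log-level=DEBUG /mnt/gluster
A "Transport endpoint is not connected" at the client usually means the client could not reach the server processes, so the client log named above is the place to look.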
2011 Oct 17 (1 reply): brick out of space, unmounted brick
Hello Gluster users,
Before I put Gluster into production, I am wondering how it determines whether a byte can be written, and where I should look in the source code to change this behavior. My experience is with glusterfs 3.2.4 on CentOS 6 64-bit. Suppose I have a Gluster volume made up of four 1 MB bricks, like this:
Volume Name: test
Type: Distributed-Replicate
Status: Started
Number of
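One tunable that bears directly on this question (a general distribute option, not something stated in the thread) is cluster.min-free-disk, which makes the distribute translator avoid placing new files on bricks whose free space has dropped below a threshold; a sketch using the "test" volume from the example:
# gluster volume set test cluster.min-free-disk 10%
The corresponding checks live in the dht (distribute) translator source, which is where one would look to change the behavior.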
2012 Oct 22 (1 reply): How to add new bricks to a volume?
Hi, dear glfs experts: I've been using glusterfs (version 3.2.6) for months, and so far it works very well. Now I am facing the problem of adding two new bricks to an existing replicated (rep=2) volume, which consists of only two bricks and is mounted by multiple clients. Can I just use the following commands to add the new bricks without stopping the services that are using the volume, as mentioned?
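The usual sequence for growing a replica 2 volume by one more replica pair looks like the sketch below (brick paths are placeholders, not taken from the message):
# gluster volume add-brick <vol> node3:/data node4:/data
# gluster volume rebalance <vol> start
# gluster volume rebalance <vol> status
add-brick and rebalance are online operations, so the volume does not need to be stopped.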
2009 Jul 29 (2 replies): Xen - Backend or Frontend or Both?
I have a client config (see below) across 6 boxes. I am using distribute across 3 replicate pairs. Since I am running Xen I need to use disable-direct-io, and that slows things down quite a bit. My thought was to move the replicate/distribute translators to the backend server config so that self-heal can happen on the faster backend rather than on the frontend client with disable-direct-io. Does this
2008 Nov 09 (3 replies): Still problem with trivial self heal
Hi! I have a trivial problem with self-healing. Maybe somebody will be able to tell me what I am doing wrong, and why the files do not heal as I expect.
Configuration:
Servers: two nodes A, B
---------
volume posix
  type storage/posix
  option directory /ext3/glusterfs13/brick
end-volume
volume brick
  type features/posix-locks
  option mandatory on
  subvolumes posix
end-volume
volume server
2018 Mar 13 (1 reply): trashcan on dist. repl. volume with geo-replication
Hi Kotresh, thanks for your response... answers inside... best regards, Dietmar
On 13.03.2018 at 06:38, Kotresh Hiremath Ravishankar wrote:
> Hi Dietmar,
>
> I am trying to understand the problem and have a few questions.
>
> 1. Is trashcan enabled only on the master volume?
No, trashcan is also enabled on the slave. The settings are the same as on the master, but the trashcan on the slave is complete
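For context, the trashcan referred to here is the trash translator, controlled by the features.trash group of volume options (generic options, not settings quoted from this thread):
# gluster volume get <vol> features.trash
# gluster volume set <vol> features.trash on
# gluster volume set <vol> features.trash-max-filesize 200MB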
2017 Sep 21 (1 reply): Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is a ZFS pool running as raidz2 with SSD
2017 Sep 21 (1 reply): Fwd: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Just making sure this gets through.
---------- Forwarded message ----------
From: Martin Toth <snowmailer at gmail.com>
Date: Thu, Sep 21, 2017 at 9:17 AM
Subject: Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
To: gluster-users at gluster.org
Cc: Marek Toth <scorpion909 at gmail.com>, amye at redhat.com
Hello all fellow GlusterFriends, I would like you to comment /
2017 Sep 20 (3 replies): Upgrade Gluster 3.7 to 3.12 and add 3rd replica [howto/help]
Hello all fellow GlusterFriends,
I would like you to comment on / correct my upgrade procedure steps for a replica 2 volume on gluster 3.7.x. Then I would like to change replica 2 to replica 3 in order to correct a quorum issue that the infrastructure currently has.
Infrastructure setup:
- all clients running on same nodes as servers (FUSE mounts)
- under gluster there is a ZFS pool running as raidz2 with SSD
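The replica 2 to replica 3 change asked about here is normally a single add-brick that raises the replica count, followed by a full heal to populate the new bricks; a minimal sketch with placeholder names, not the poster's actual paths:
# gluster volume add-brick <vol> replica 3 node3:/pool/brick
# gluster volume heal <vol> full
# gluster volume heal <vol> info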
2012 Sep 29 (1 reply): quota severe performance issue help
Dear gluster experts, We have encountered a severe performance issue related to the quota feature of gluster. My underlying fs is LVM with an XFS format. The problem is that with quota enabled the IO performance is about 26 MB/s, but with quota disabled it is 216 MB/s. Anyone know what the problem is? BTW, I have reproduced it several times and it is indeed related to quota. Here's the
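A quick way to reproduce this kind of comparison (generic quota commands, not the poster's exact test) is to toggle quota on the volume and rerun the same write test; directory and size here are placeholders:
# gluster volume quota <vol> enable
# gluster volume quota <vol> limit-usage /some/dir 10GB
# gluster volume quota <vol> disable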
2010 May 04 (1 reply): Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with GlusterFS 3.0.2 that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of all the directories /mnt/data01 - data24) on each server. I'm using configs I built from volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server:
[2010-05-04 10:50:30] W