similar to: Brick and Subvolume Info

Displaying 20 results from an estimated 6000 matches similar to: "Brick and Subvolume Info"

2018 Feb 08
2
Thousands of EPOLLERR - disconnecting now
Hello I have a large cluster in which every node is logging: I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - disconnecting now At a rate of around 4 or 5 per second per node, which is adding up to a lot of messages. This seems to happen while my cluster is idle.
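One way to cut the noise while the root cause is investigated - a sketch, assuming the messages land in client or brick logs that honour the per-volume diagnostics options - is to raise the log level above INFO:

# gluster volume set <vol> diagnostics.client-log-level WARNING
# gluster volume set <vol> diagnostics.brick-log-level WARNING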
2017 Oct 24
2
gluster tiering errors
Milind - Thank you for the response.. >> What are the high and low watermarks for the tier set at ? # gluster volume get <vol> cluster.watermark-hi Option Value ------ ----- cluster.watermark-hi 90 # gluster volume get <vol> cluster.watermark-low Option
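For reference, the watermarks are ordinary volume options and can be changed with volume set; a sketch with illustrative values, not a recommendation:

# gluster volume set <vol> cluster.watermark-low 75
# gluster volume set <vol> cluster.watermark-hi 90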
2017 Oct 27
0
gluster tiering errors
Herb, I'm trying to weed out issues here. So, I can see quota turned *on* and would like you to check the quota settings and test how the system behaves *if quota is turned off*. Although the size of the file that failed migration was only 29K, I'm being a bit paranoid while weeding out issues. Are you still facing tiering errors? I can see your response to Alex with the disk space consumption and
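For anyone following along, quota limits can be inspected and quota switched off per volume; a sketch, assuming quota was enabled through the usual CLI:

# gluster volume quota <vol> list
# gluster volume quota <vol> disable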
2018 Feb 08
0
Thousands of EPOLLERR - disconnecting now
On Thu, Feb 8, 2018 at 2:04 PM, Gino Lisignoli <glisignoli at gmail.com> wrote: > Hello > > I have a large cluster in which every node is logging: > > I [socket.c:2474:socket_event_handler] 0-transport: EPOLLERR - > disconnecting now > > At a rate of around 4 or 5 per second per node, which is adding up to a > lot of messages. This seems to happen while my
2010 Jan 03
2
Where is log file of GlusterFS 3.0?
I cannot find the log file of GlusterFS 3.0! In the past, I installed GlusterFS 2.0.6 without problems, and the server and client log files were placed in /var/log/glusterfs/... But after installing GlusterFS 3.0 (on CentOS 5.4 64-bit, 4 servers + 1 client), I start the GlusterFS servers and client, type *df -H* at the client, and the result is: "Transport endpoint is not connected" *I want to detect the BUG, but I cannot find
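In the 3.0 era the log location is whatever is passed on the command line, with /var/log/glusterfs/ only as the default; a sketch, assuming the classic volfile-based invocation (the volfile and log paths here are illustrative):

# glusterfsd -f /etc/glusterfs/glusterfsd.vol -l /var/log/glusterfs/server.log -L DEBUG
# glusterfs -f /etc/glusterfs/glusterfs.vol -l /var/log/glusterfs/client.log -L DEBUG /mnt/glusterfs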
2018 Apr 03
0
Sharding problem - multiple shard copies with mismatching gfids
Raghavendra, Sorry for the late follow up. I have some more data on the issue. The issue tends to happen when the shards are created. The easiest time to reproduce this is during an initial VM disk format. This is a log from a test VM that was launched, and then partitioned and formatted with LVM / XFS: [2018-04-03 02:05:00.838440] W [MSGID: 109048]
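A lighter way to exercise shard creation without a full VM - a sketch, assuming a sharded volume mounted at /mnt/glustervol (the path is illustrative) - is to create and format a file-backed image directly on the mount:

# truncate -s 20G /mnt/glustervol/test.img
# mkfs.xfs /mnt/glustervol/test.img

Whether this hits the same race as the VM disk format is an assumption.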
2018 Mar 26
1
Sharding problem - multiple shard copies with mismatching gfids
Ian, Do you have a reproducer for this bug? If not a specific one, a general outline of what operations were done on the file will help. regards, Raghavendra On Mon, Mar 26, 2018 at 12:55 PM, Raghavendra Gowdappa <rgowdapp at redhat.com> wrote: > > > On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> > wrote: > >> The gfid mismatch
2018 Mar 26
3
Sharding problem - multiple shard copies with mismatching gfids
On Mon, Mar 26, 2018 at 12:40 PM, Krutika Dhananjay <kdhananj at redhat.com> wrote: > The gfid mismatch here is between the shard and its "link-to" file, the > creation of which happens at a layer below that of shard translator on the > stack. > > Adding DHT devs to take a look. > Thanks Krutika. I assume shard doesn't do any dentry operations like rename,
2018 Apr 06
1
Sharding problem - multiple shard copies with mismatching gfids
Sorry for the delay, Ian :). This looks to be a genuine issue which will require some effort to fix. Can you file a bug? I need the following information attached to the bug: * Client and brick logs. If you can reproduce the issue, please set diagnostics.client-log-level and diagnostics.brick-log-level to TRACE. If you cannot reproduce the issue or if you cannot accommodate such big logs, please set
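The log levels mentioned are set, and later reverted, with ordinary volume options; a sketch:

# gluster volume set <vol> diagnostics.client-log-level TRACE
# gluster volume set <vol> diagnostics.brick-log-level TRACE
(reproduce the problem and collect the logs, then)
# gluster volume reset <vol> diagnostics.client-log-level
# gluster volume reset <vol> diagnostics.brick-log-level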
2010 May 04
1
Posix warning : Access to ... is crossing device
I have a distributed/replicated setup with GlusterFS 3.0.2 that I'm testing on 4 servers, each with access to /mnt/gluster (which consists of the directories /mnt/data01 - data24 on each server). I'm using configs I built from volgen, but every time I access a file (via an 'ls -l') for the first time, I get all of these messages in my logs on each server: [2010-05-04 10:50:30] W
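The warning appears to come from the posix translator noticing a path whose device differs from the export directory's, so a quick sanity check is to confirm which of the data directories sit on which filesystem - a sketch using the paths from the post:

# stat -c '%n dev=%d' /mnt/gluster /mnt/data01 /mnt/data24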
2018 Mar 25
2
Sharding problem - multiple shard copies with mismatching gfids
Hello all, We are having a rather interesting problem with one of our VM storage systems. The GlusterFS client is throwing errors relating to GFID mismatches. We traced this down to multiple shards being present on the gluster nodes, with different gfids. Hypervisor gluster mount log: [2018-03-25 18:54:19.261733] E [MSGID: 133010] [shard.c:1724:shard_common_lookup_shards_cbk]
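To confirm the mismatch on the bricks themselves, the gfid of each copy of a shard can be read directly from its extended attributes - a sketch, with an illustrative brick path and shard name:

# getfattr -n trusted.gfid -e hex /bricks/brick1/.shard/<base-gfid>.1

Run on every node holding a copy and compare the hex values.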
2018 Jan 07
1
Clear heal statistics
Is there any way to clear the historic statistics from the command "gluster volume heal <volume_name> statistics" ? It seems the command takes longer and longer to run each time it is used, to the point where it times out and no longer works.
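I'm not aware of a supported way to clear the accumulated statistics, but a lighter query that usually returns quickly is the heal-count variant - a sketch:

# gluster volume heal <volume_name> statistics heal-count
# gluster volume heal <volume_name> info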
2011 Oct 18
2
gluster rebalance taking three months
Hi guys, we have had a rebalance running on eight bricks since July and this is what the status looks like right now: ===Tue Oct 18 13:45:01 CST 2011 ==== rebalance step 1: layout fix in progress: fixed layout 223623 There are roughly 8T of photos in the storage, so how long should this rebalance take? What does the number (in this case 223623) represent? Our gluster information: Repository
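On releases new enough to have the glusterd CLI, per-node rebalance progress can also be queried directly - a sketch:

# gluster volume rebalance <vol> status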
2018 Mar 26
0
Sharding problem - multiple shard copies with mismatching gfids
The gfid mismatch here is between the shard and its "link-to" file, the creation of which happens at a layer below that of shard translator on the stack. Adding DHT devs to take a look. -Krutika On Mon, Mar 26, 2018 at 1:09 AM, Ian Halliday <ihalliday at ndevix.com> wrote: > Hello all, > > We are having a rather interesting problem with one of our VM storage >
2018 Feb 02
1
How to trigger a resync of a newly replaced empty brick in replicate config ?
Hi, I simplified the config in my first email, but I actually have 2x4 servers in distribute-replicate, six of them with 4 bricks each and the remaining two with 2 bricks each. Full healing will just take ages... for just a single brick to resync! > gluster v status home volume status home Status of volume: home Gluster process TCP Port RDMA Port Online Pid
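For a single wiped-and-replaced brick the usual sequence is a full heal plus progress checks - a sketch, assuming the rest of the replica set is healthy (the volume name is taken from the post):

# gluster volume heal home full
# gluster volume heal home statistics heal-count
# gluster volume heal home info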
2008 Dec 10
3
AFR healing problem after returning one node.
I've got a configuration which, put simply, combines AFR and unify - the servers export n[1-3]-brick[12] and n[1-3]-ns, and the client has this cluster configuration:

volume afr-ns
  type cluster/afr
  subvolumes n1-ns n2-ns n3-ns
  option data-self-heal on
  option metadata-self-heal on
  option entry-self-heal on
end-volume

volume afr1
  type cluster/afr
  subvolumes n1-brick2
2018 Jan 24
1
fault tolerancy in glusterfs distributed volume
I have made a distributed replica 3 volume with 6 nodes. I mean this:

Volume Name: testvol
Type: Distributed-Replicate
Volume ID: f271a9bd-6599-43e7-bc69-26695b55d206
Status: Started
Snapshot Count: 0
Number of Bricks: 2 x 3 = 6
Transport-type: tcp
Bricks:
Brick1: 10.0.0.2:/brick
Brick2: 10.0.0.3:/brick
Brick3: 10.0.0.1:/brick
Brick4: 10.0.0.5:/brick
Brick5: 10.0.0.6:/brick
Brick6:
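In a layout like this the first three bricks listed form one replica set and the last three the other, which is what determines fault tolerance: one brick per set can be lost without losing access. A sketch of the create command that yields such a layout (the sixth host is hypothetical, since the preview cuts off at Brick6):

# gluster volume create testvol replica 3 \
    10.0.0.2:/brick 10.0.0.3:/brick 10.0.0.1:/brick \
    10.0.0.5:/brick 10.0.0.6:/brick <server6>:/brick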
2017 Oct 22
0
gluster tiering errors
Herb, What are the high and low watermarks for the tier set at ? # gluster volume get <vol> cluster.watermark-hi # gluster volume get <vol> cluster.watermark-low What is the size of the file that failed to migrate as per the following tierd log: [2017-10-19 17:52:07.519614] I [MSGID: 109038] [tier.c:1169:tier_migrate_using_query_file] 0-<vol>-tier-dht: Promotion failed for
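To check the size of the file that failed to promote, the gfid from the tierd log can be resolved on a brick through the .glusterfs directory (the first two and next two hex characters of the gfid form the subdirectories) - a sketch with a hypothetical gfid and brick path:

# ls -l /bricks/brick1/.glusterfs/aa/8e/aa8e3c2b-xxxx-xxxx-xxxx-xxxxxxxxxxxx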
2009 Jun 11
2
Issue with files on glusterfs becoming unreadable.
elbert at host1:~$ dpkg -l | grep glusterfs
ii  glusterfs-client  1.3.8-0pre2  GlusterFS fuse client
ii  glusterfs-server  1.3.8-0pre2  GlusterFS fuse server
ii  libglusterfs0     1.3.8-0pre2  GlusterFS libraries and translator modules

I have 2 hosts set up to use AFR with
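In the 1.3 AFR era, a commonly documented way to force self-heal was to read every file once through the client mount - a sketch, assuming the mount point is /mnt/glusterfs:

# find /mnt/glusterfs -type f -exec head -c1 {} \; > /dev/null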
2009 Jun 26
0
Error when expand dht model volumes
Hi all: I ran into a problem expanding DHT volumes. I wrote into a DHT storage directory until it grew to 90% full, so I added four new volumes to the configuration file. But after restarting, some of the data in the directory disappeared. Why? Is there a special action needed before expanding the volumes? My client configuration file is this: volume client1 type protocol/client option transport-type
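On later, CLI-based releases the equivalent of editing the volfile is to add the bricks and then fix the layout so directories hash onto them - a sketch with a hypothetical brick path, offered as the modern counterpart rather than a fix for this 2009 volfile setup:

# gluster volume add-brick <vol> <server>:/export/newbrick
# gluster volume rebalance <vol> fix-layout start
# gluster volume rebalance <vol> status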