similar to: "Input/output error" on mkdir for PPC64 based client

Displaying 20 results from an estimated 800 matches similar to: ""Input/output error" on mkdir for PPC64 based client"

2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e., with XDRs and how they are used). Just glance at the logs of the client process where you saw the errors, which could give some hints. If you don't understand the logs, share them and we will try to look into it. -Amar On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote: > I recently
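An editorial aside for readers hitting the same error: raising the client log level is the quickest way to surface RPC/XDR decode failures like the one suspected here. A rough sketch, with a hypothetical volume name and mount point:

# raise FUSE client verbosity for volume "myvol" (name is a placeholder)
gluster volume set myvol diagnostics.client-log-level DEBUG
# retry the mkdir from the PPC64 client, then watch the client log
tail -f /var/log/glusterfs/mnt-myvol.log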
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64 client and an x86 client. Weirdly, the client logs were almost identical. Here's the ppc64 gluster client log of attempting to create a folder... ------------- [2017-09-20 13:34:23.344321] D [rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
2017 Aug 28
2
GFID attr is missing after adding large amounts of data
Hi Cluster Community, we are seeing some problems when adding multiple terabytes of data to a 2-node replicated GlusterFS installation. The version is 3.8.11 on CentOS 7. The machines are connected via 10Gbit LAN and are running 24/7. The OS is virtualized on VMware. After a restart of node-1 we see that the log files are growing by multiple gigabytes a day. Also there seem to be problems
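For anyone comparing symptoms: the GFID lives in a trusted xattr on the brick backend, so its presence can be probed directly. A minimal sketch, assuming a brick path of /bricks/brick1/gv0:

# run on a brick, not on the client mount; path is an example
getfattr -n trusted.gfid -e hex /bricks/brick1/gv0/some/file
# a healthy file prints trusted.gfid=0x...; "No such attribute" matches the problem reported here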
2017 Aug 29
0
GFID attr is missing after adding large amounts of data
This is strange; a couple of questions: 1. What volume type is this? What tuning have you done? gluster v info output would be helpful here. 2. How big are your bricks? 3. Can you write me a quick reproducer so I can try this in the lab? Is it just a single multi-TB file you are untarring, or many? If you give me the steps to repro, and I hit it, we can get a bug open. 4. Other than
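A sketch of how the requested details are usually gathered, with the volume name as a placeholder:

gluster volume info myvol           # volume type and applied tuning
gluster volume status myvol detail  # per-brick sizes and free space
df -h /bricks/brick1                # raw brick filesystem usage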
2017 Sep 01
1
GFID attr is missing after adding large amounts of data
I re-added gluster-users to get some more eyes on this. ----- Original Message ----- > From: "Christoph Schäbel" <christoph.schaebel at dc-square.de> > To: "Ben Turner" <bturner at redhat.com> > Sent: Wednesday, August 30, 2017 8:18:31 AM > Subject: Re: [Gluster-users] GFID attr is missing after adding large amounts of data > > Hello Ben, >
2017 Nov 08
2
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
On 8 November 2017 at 02:47, Sam McLeod <mailinglists at smcleod.net> wrote: > > On 6 Nov 2017, at 3:32 pm, Laura Bailey <lbailey at redhat.com> wrote: > > Do the users have permission to see/interact with the directories, in > addition to the files? > > > Yes, full access to directories and files. > Also testing using the root user. > > > On Mon,
2013 Mar 27
1
Samba4 issue: roaming profile mismatch between W2k/XP machines due to enabled o
Samba 4.0.4 installed, provisioned by classicupgrade, running on Debian Squeeze: -------------------------------------------------------------------------------- The issue is that changes to the roaming profile are not transferred across logins/logouts between the Win2K and XP machines. For example: I log into the W2k machine with my testuser and create a "testdir1" and "testdir2" on
2017 Nov 08
0
Gluster clients can't see directories that exist or are created within a mounted volume, but can enter them.
> On 8 Nov 2017, at 9:03 pm, Nithya Balachandran <nbalacha at redhat.com> wrote: > > > That is not the log for the mount. Please check /var/log/glusterfs/var-lib-mountedgluster.log on the system on which you are running the mount process. > > Please provide the volume config details as well (gluster volume info) from one of the server nodes. > Oh I'm sorry, I
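As the reply implies, the FUSE client names its log after the mount point with slashes replaced by dashes. A short illustration using the paths from this thread:

# a volume mounted at /var/lib/mountedgluster logs to:
less /var/log/glusterfs/var-lib-mountedgluster.log
# and the volume config requested above:
gluster volume info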
2017 Nov 16
2
Missing files on one of the bricks
On 11/16/2017 04:12 PM, Nithya Balachandran wrote: > > > On 15 November 2017 at 19:57, Frederic Harmignies > <frederic.harmignies at elementai.com > <mailto:frederic.harmignies at elementai.com>> wrote: > > Hello, we have 2x files that are missing from one of the bricks. > No idea how to fix this. > > Details: > > # gluster volume
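For files present on only one brick of a replica, the usual first step is a lookup through the mount plus an explicit heal; a sketch with hypothetical paths and volume name:

stat /mnt/gluster/path/to/missing-file   # a lookup via the mount can trigger self-heal on that file
gluster volume heal myvol full           # or ask the self-heal daemon to crawl the whole volume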
2013 Jul 01
1
[PATCH v2] xfstests: btrfs/316: cross-subvolume sparse copy
This test script creates reflinks to files on different subvolumes, overwrites the original files and reflinks, and moves reflinked files between subvolumes. Originally submitted as test case 302; changes were made based on comments from Eric: http://oss.sgi.com/archives/xfs/2013-03/msg00231.html Two new common/rc functions used in this script (_require_cp_reflink and _verify_reflink) have been
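Outside the xfstests harness, the operation being exercised boils down to GNU cp's reflink mode across btrfs subvolumes; a minimal illustration with made-up paths:

# clone a file into another subvolume on the same btrfs filesystem
cp --reflink=always /mnt/btrfs/subvol_a/data /mnt/btrfs/subvol_b/data
# overwrite the original in place; the clone must keep the old contents
dd if=/dev/urandom of=/mnt/btrfs/subvol_a/data bs=1M count=1 conv=notrunc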
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote: > Ji-Hyeon, > > You're saying that "stripe=2 transport=rdma" should work. OK, that > was the first thing I wanted to know. I'll put together logs later this week. Note that "stripe" is not tested much and is practically unmaintained. We do not advise you to use it. If you have large files that you
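For reference, the usual replacement for stripe is sharding, enabled per volume roughly like this (volume name and block size are examples):

gluster volume set myvol features.shard on
gluster volume set myvol features.shard-block-size 64MB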
2018 May 10
2
broken gluster config
Whatever repair happened has now finished, but I still have this, and I can't find anything so far telling me how to fix it. Looking at http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/ I can't determine what file (dir? gv0?) is actually the issue. [root at glusterp1 gv0]# gluster volume heal gv0 info split-brain Brick
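A hedged sketch of the commands that normally pin down and resolve the offending entry, using the volume name from this thread; the policy shown is only one of several:

gluster volume heal gv0 info split-brain                        # lists split-brained paths or gfids
# pick a winning copy by policy, e.g. the most recently modified one:
gluster volume heal gv0 split-brain latest-mtime <path-or-gfid>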
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
Hi Anatoliy, The heal command is basically used to heal any mismatching contents between replica copies of the files. For the command "gluster volume heal <volname>" to succeed, you should have the self-heal-daemon running, which is true only if your volume is of type replicate/disperse. In your case you have a plain distribute volume where you do not store the replica of any
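A quick way to confirm both points from the CLI, sketched with the volume name used in this thread:

gluster volume info gv0 | grep '^Type'         # heal applies only to replicate/disperse types
gluster volume status gv0 | grep -i self-heal  # shows whether a self-heal daemon is running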
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello, We have a very fresh gluster 3.10.10 installation. Our volume is created as a distributed volume, 9 bricks, 96TB in total (87TB after 10% gluster disk space reservation). For some reason I can't "heal" the volume: # gluster volume heal gv0 Launching heal operation to perform index self heal on volume gv0 has been unsuccessful on bricks that are down. Please check if all brick processes
2013 Nov 29
1
Self heal problem
Hi, I have a glusterfs volume replicated on three nodes. I am planning to use the volume as storage for VMware ESXi machines using NFS. The reason for using three nodes is to be able to configure quorum and avoid split-brain. However, during my initial testing, when I intentionally and gracefully restarted the node "ned", a split-brain/self-heal error occurred. The log on "todd"
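For a three-node replica meant to avoid split-brain, the quorum settings usually discussed look roughly like this; the volume name is a placeholder:

gluster volume set myvol cluster.quorum-type auto           # client-side: writes need a majority of replicas
gluster volume set myvol cluster.server-quorum-type server  # server-side quorum
gluster volume set all cluster.server-quorum-ratio 51%      # cluster-wide ratio for server quorum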
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
> Note that "stripe" is not tested much and practically unmaintained. Ah, this was what I suspected. Understood. I'll be happy with "shard". Having said that, "stripe" works fine with transport=tcp. The failure reproduces with just 2 RDMA servers (with InfiniBand), one of which also acts as a client. I looked into the logs. I paste lengthy logs below with
2017 Sep 29
1
Gluster geo replication volume is faulty
I am trying to set up geo-replication between two gluster volumes. I have set up two replica 2 arbiter 1 volumes with 9 bricks. [root at gfs1 ~]# gluster volume info Volume Name: gfsvol Type: Distributed-Replicate Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306 Status: Started Snapshot Count: 0 Number of Bricks: 3 x (2 + 1) = 9 Transport-type: tcp Bricks: Brick1: gfs2:/gfs/brick1/gv0 Brick2:
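The first diagnostic for a faulty session is its status output; a sketch with hypothetical master and slave names:

gluster volume geo-replication gfsvol slavehost::gfsvol-slave status detail
# faulty workers usually log the underlying error under /var/log/glusterfs/geo-replication/ on the master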
2017 Oct 17
3
gfid entries in volume heal info that do not heal
Hi Matt, Run these commands on all the bricks of the replica pair to get the attrs set on the backend. On the bricks of the first replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 On the fourth replica set: getfattr -d -e hex -m . <brick path>/.glusterfs/e0/c5/e0c56bf7-8bfe-46ca-bde1-e46b92d33df3 Also run the "gluster volume
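Because the .glusterfs entry for a regular file is a hard link to the real file, a gfid can be mapped back to its path on the brick; a rough sketch using the first gfid from this thread and a placeholder brick path:

find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
# note: for directories the .glusterfs entry is a symlink, not a hard link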
2018 May 10
0
broken gluster config
Trying to read this, I can't understand what is wrong. [root at glusterp1 gv0]# gluster volume heal gv0 info Brick glusterp1:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1 Brick glusterp2:/bricks/brick1/gv0 <gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain Status: Connected Number of entries: 1
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log. >> Run these commands on all the bricks of the replica pair to get the attrs set on the backend. [root at tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2 getfattr: Removing leading '/' from absolute path names # file: