Lindolfo Meira
2019-Jan-22 20:20 UTC
[Gluster-users] writev: Transport endpoint is not connected
Dear all,

I've been trying to benchmark a gluster file system using the MPIIO API of IOR. Almost every time I run the application with more than 6 tasks performing I/O (mpirun -n N, for N > 6) I get the error: "writev: Transport endpoint is not connected". Each of the N tasks then returns "ERROR: cannot open file to get file size, MPI MPI_ERR_FILE: invalid file, (aiori-MPIIO.c:488)".

Does anyone have any idea what's going on?

I'm writing from a single node, to a system configured for stripe over 6 bricks. The volume is mounted with the options _netdev and transport=rdma. I'm using OpenMPI 2.1.2 (I also tested version 4.0.0 and nothing changed). IOR arguments used: -B -E -F -q -w -k -z -i=1 -t=2m -b=1g -a=MPIIO. Running OpenSUSE Leap 15.0 and GlusterFS 5.3. Output of "gluster volume info" follows below:

Volume Name: gfs
Type: Stripe
Volume ID: ea159033-5f7f-40ac-bad0-6f46613a336b
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 6 = 6
Transport-type: rdma
Bricks:
Brick1: pfs01-ib:/mnt/data/gfs
Brick2: pfs02-ib:/mnt/data/gfs
Brick3: pfs03-ib:/mnt/data/gfs
Brick4: pfs04-ib:/mnt/data/gfs
Brick5: pfs05-ib:/mnt/data/gfs
Brick6: pfs06-ib:/mnt/data/gfs
Options Reconfigured:
nfs.disable: on

Thanks in advance,

Lindolfo Meira, MSc
Diretor Geral, Centro Nacional de Supercomputação
Universidade Federal do Rio Grande do Sul
+55 (51) 3308-3139
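[For reference, the failing run described above would look roughly like the sketch below. The mount point /mnt/gfs, the output file name, and N=8 are assumptions, not taken from the original message; substitute your actual gluster mount point and task count.]

```shell
# Hypothetical reproduction of the failing run (mount point and N are assumed).
# -a MPIIO: use the MPI-IO backend; -F: one file per process; -z: random offsets
# -t 2m / -b 1g: 2 MiB transfers, 1 GiB per task; -k: keep test files after the run
mpirun -n 8 ior -a MPIIO -B -E -F -q -w -k -z -i 1 -t 2m -b 1g \
    -o /mnt/gfs/ior.test
```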
Raghavendra Gowdappa
2019-Jan-23 02:33 UTC
[Gluster-users] writev: Transport endpoint is not connected
On Wed, Jan 23, 2019 at 1:59 AM Lindolfo Meira <meira at cesup.ufrgs.br> wrote:

> Dear all,
>
> I've been trying to benchmark a gluster file system using the MPIIO API of
> IOR. Almost every time I run the application with more than 6 tasks
> performing I/O (mpirun -n N, for N > 6) I get the error: "writev:
> Transport endpoint is not connected". Each of the N tasks then returns
> "ERROR: cannot open file to get file size, MPI MPI_ERR_FILE: invalid file,
> (aiori-MPIIO.c:488)".
>
> Does anyone have any idea what's going on?
>
> I'm writing from a single node, to a system configured for stripe over 6
> bricks. The volume is mounted with the options _netdev and transport=rdma.
> I'm using OpenMPI 2.1.2 (I also tested version 4.0.0 and nothing changed).
> IOR arguments used: -B -E -F -q -w -k -z -i=1 -t=2m -b=1g -a=MPIIO.
> Running OpenSUSE Leap 15.0 and GlusterFS 5.3. Output of "gluster volume
> info" follows below:
>
> Volume Name: gfs
> Type: Stripe

+Dhananjay, Krutika <kdhananj at redhat.com>

Stripe has been deprecated. You can use sharded volumes.

> Volume ID: ea159033-5f7f-40ac-bad0-6f46613a336b
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 6 = 6
> Transport-type: rdma
> Bricks:
> Brick1: pfs01-ib:/mnt/data/gfs
> Brick2: pfs02-ib:/mnt/data/gfs
> Brick3: pfs03-ib:/mnt/data/gfs
> Brick4: pfs04-ib:/mnt/data/gfs
> Brick5: pfs05-ib:/mnt/data/gfs
> Brick6: pfs06-ib:/mnt/data/gfs
> Options Reconfigured:
> nfs.disable: on
>
> Thanks in advance,
>
> Lindolfo Meira, MSc
> Diretor Geral, Centro Nacional de Supercomputação
> Universidade Federal do Rio Grande do Sul
> +55 (51) 3308-3139
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-users
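[For completeness, a sharded replacement volume might be created along the lines sketched below. The volume name gfs2, the brick paths, and the shard block size are assumptions, not from the thread; a stripe volume cannot be converted in place, so data would need to be copied onto the new volume.]

```shell
# Hypothetical sketch: a plain distributed volume over the same 6 servers,
# with sharding enabled, replacing the deprecated stripe type.
# Volume name, brick paths, and shard size are assumptions.
gluster volume create gfs2 transport rdma \
    pfs01-ib:/mnt/data/gfs2 pfs02-ib:/mnt/data/gfs2 pfs03-ib:/mnt/data/gfs2 \
    pfs04-ib:/mnt/data/gfs2 pfs05-ib:/mnt/data/gfs2 pfs06-ib:/mnt/data/gfs2
gluster volume set gfs2 features.shard on
gluster volume set gfs2 features.shard-block-size 64MB
gluster volume start gfs2
```

With sharding, large files are split into fixed-size chunks that are distributed across bricks, which serves the same space-spreading purpose the stripe translator was used for.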