Displaying 6 results from an estimated 6 matches for "mlx4_0".
2017 Jan 06 | 0 | mlx4_0 Initializing and... (infiniband)
...Device ID:     25408
Description:      Node             Port1            Port2            Sys image
GUIDs:            0008f104039a62a0 0008f104039a62a1 0008f104039a62a2 0008f104039a62a3
MACs:                              000000000000     000000000001
VSD:
PSID:             MT_04A0110001
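The block above has the layout of a Mellanox firmware query. A minimal sketch of how such output is usually obtained, assuming mstflint is installed and the HCA sits at PCI address 04:00.0 (the address is an assumption):

$ lspci | grep -i mellanox      # locate the HCA's PCI address
$ mstflint -d 04:00.0 query     # prints Device ID, GUIDs, MACs, VSD and PSID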
$ ibstat
CA 'mlx4_0'
        CA type: MT25408
        Number of ports: 2
        Firmware version: 2.9.1000
        Hardware version: a0
        Node GUID: 0x0008f104039a08dc
        System image GUID: 0x0008f104039a08df
        Port 1:
                State: Initializing
                Physical state: LinkUp
                Rate: 10
                Base lid: 1...
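A port reporting State: Initializing while Physical state: LinkUp usually means the link trained but no subnet manager has moved the port to Active. A minimal sketch of how that is commonly checked and fixed, assuming OpenSM is available somewhere on the fabric (the service name is an assumption):

$ sminfo                        # queries the fabric's subnet manager; fails if none is running
$ sudo systemctl start opensm   # start OpenSM on one node of the fabric
$ ibstat mlx4_0 1               # State should now move from Initializing to Active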
2012 Dec 18 | 1 | Problem with srptools
...ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b5,pkey=ffff,service_id=003048ffff9dd3b4
id_ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b6,pkey=ffff,service_id=003048ffff9dd3b4
root@blade1:/#
root@blade1:/# ibstat
CA 'mlx4_0'
        CA type: MT26418
        Number of ports: 1
        Firmware version: 2.9.1000
        Hardware version: a0
        Node GUID: 0x003048ffff9dd66c
        System image GUID: 0x003048ffff9dd66f
        Port 1:
                State: Active
                Physical state: LinkUp
                Rate: 20
                Base lid: 23...
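The id_ext/ioc_guid/dgid/pkey/service_id strings in the snippet are the target descriptions srp_daemon prints for discovered SRP targets; they can be fed to the kernel SRP initiator through its add_target file. A minimal sketch, assuming the ib_srp module is available and reusing the first target string from the listing verbatim:

$ sudo modprobe ib_srp
$ echo "id_ext=003048ffff9dd3b4,ioc_guid=003048ffff9dd3b4,dgid=fe80000000000000003048ffff9dd3b5,pkey=ffff,service_id=003048ffff9dd3b4" \
    | sudo tee /sys/class/infiniband_srp/srp-mlx4_0-1/add_target
$ lsscsi                        # the SRP LUNs should show up as new SCSI devices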
2018 Aug 02 | 1 | NFS/RDMA connection closed
...ndcq_process_wc: frmr ffff88106638a640 (stale): WR flushed
Jul 30 18:19:26 n001 kernel: nfs: server 10.10.11.100 not responding, still trying
Jul 30 18:19:36 n001 kernel: nfs: server 10.10.10.100 not responding, timed out
Jul 30 18:19:38 n001 kernel: rpcrdma: connection to 10.10.11.100:20049 on mlx4_0, memreg 5 slots 32 ird 16
Jul 30 18:19:38 n001 kernel: nfs: server 10.10.11.100 OK
Jul 31 14:42:08 n001 kernel: RPC: rpcrdma_sendcq_process_wc: frmr ffff8810671f02c0 (stale): WR flushed
Jul 31 14:42:08 n001 kernel: RPC: rpcrdma_sendcq_process_wc: frmr ffff8810677bda40 (stale): WR flus...
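The rpcrdma messages above reference port 20049, the conventional NFS/RDMA port. For comparison, a minimal sketch of the matching client-side mount, assuming an export named /export on 10.10.11.100 (the export path is an assumption):

$ sudo mount -t nfs -o proto=rdma,port=20049 10.10.11.100:/export /mnt
$ mount | grep rdma             # confirm the mount actually negotiated proto=rdma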
2013 Jan 18 | 1 | Configuration...
Hi, I have what might be some elementary questions. Really, what I'd love would be for someone who has had good success to publish his/her configuration files and, maybe, the output from ifconfig. At this point, when I see the not-so-good performance I'm getting, I don't realistically know if I'm in the right ballpark. It seems to me that, with so many Mellanox cards out
2013 Jul 02 | 1 | RDMA Volume Mount Not Functioning for Debian 3.4.beta4
...~# gluster volume info test2
Volume Name: test2
Type: Stripe
Volume ID: fd2a2fc2-c5b3-4241-9e02-e2d936972357
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: rdma
Bricks:
Brick1: cs1-i:/gluster/test2
Brick2: cs2-i:/gluster/test2
Other basic stuff:
root@cs3-p:~# ibv_devinfo
hca_id: mlx4_0
        transport:                      InfiniBand (0)
        fw_ver:                         2.9.1000
        node_guid:                      0002:c903:000c:1e82
        sys_image_guid:                 0002:c903:000c:1e85
        vendor_id:                      0x02c9
        vendor_part_id:...
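For reference against the volume shown above, a minimal sketch of how a striped, RDMA-transport volume of that shape is typically created and mounted; the mount point and the .rdma volume-name suffix used to force the RDMA transport are assumptions based on common GlusterFS practice of that era:

$ sudo gluster volume create test2 stripe 2 transport rdma \
      cs1-i:/gluster/test2 cs2-i:/gluster/test2
$ sudo gluster volume start test2
$ sudo mount -t glusterfs cs1-i:/test2.rdma /mnt/test2   # .rdma suffix selects the RDMA transport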
2018 Feb 26 | 1 | Problems with write-behind with large files on Gluster 3.8.4
...437 330) for 172.17.1.61:1022
[2018-02-22 18:07:45.243396] E [rpcsvc.c:560:rpcsvc_check_and_reply_error] 0-rpcsvc: rpc actor failed to complete successfully
[2018-02-22 18:07:45.243416] W [MSGID: 103070] [rdma.c:4282:gf_rdma_handle_failed_send_completion] 0-rpc-transport/rdma: send work request on `mlx4_0' returned error wc.status = 1, wc.vendor_err = 105, post->buf = 0x7f6462b85000, wc.byte_len = 45056, post->reused = 1
[2018-02-22 18:07:45.243692] W [MSGID: 103070] [rdma.c:4282:gf_rdma_handle_failed_send_completion] 0-rpc-transport/rdma: send work request on `mlx4_0' returned error w...
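When chasing write-behind trouble such as the failed send completions above, a common first step is to disable the translator and re-test. A minimal sketch, assuming the affected volume is called volname (a placeholder):

$ sudo gluster volume set volname performance.write-behind off
$ sudo gluster volume info volname     # reconfigured options are listed at the end of the output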