Displaying 14 results from an estimated 14 matches for "qpn".
2018 Apr 25
1
RDMA Client Hang Problem
Thank you for your mail.
ibv_rc_pingpong seems to be working between the servers and the client;
udaddy, ucmatose, rping, etc. are also working.
root at gluster1:~# ibv_rc_pingpong -d mlx5_0 -g 0
  local address:  LID 0x0000, QPN 0x0001e4, PSN 0x10090e, GID
fe80::ee0d:9aff:fec0:1dc8
  remote address: LID 0x0000, QPN 0x00014c, PSN 0x09402b, GID
fe80::ee0d:9aff:fec0:1b14
8192000 bytes in 0.01 seconds = 7964.03 Mbit/sec
1000 iters in 0.01 seconds = 8.23 usec/iter
root at cinder:~# ibv_rc_pingpong -g 0 -d mlx5_0 gluster1
...
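For completeness, a minimal sketch (not from the thread) of a libibverbs device check: it only enumerates RDMA devices, roughly the first step ibv_rc_pingpong performs before building its QP. The file name check_verbs.c is illustrative; build with gcc check_verbs.c -libverbs.

    #include <stdio.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
            int num;
            struct ibv_device **list = ibv_get_device_list(&num);

            if (!list || num == 0) {
                    fprintf(stderr, "no RDMA devices found\n");
                    return 1;
            }
            /* Print every verbs device the stack can see, e.g. mlx5_0. */
            for (int i = 0; i < num; i++)
                    printf("device %d: %s\n", i, ibv_get_device_name(list[i]));

            ibv_free_device_list(list);
            return 0;
    }

If this lists the expected mlx5 devices, the verbs stack itself is usable and the problem is more likely in the layer above it.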
2018 Apr 25
0
RDMA Client Hang Problem
Is infiniband itself working fine? You can run tools like ibv_rc_pingpong
to find out.
On Wed, Apr 25, 2018 at 12:23 PM, Necati E. SISECI <siseci at gmail.com> wrote:
> Dear Gluster-Users,
>
> I am experiencing RDMA problems.
>
> I have installed Ubuntu 16.04.4 running with 4.15.0-13-generic kernel,
> MLNX_OFED_LINUX-4.3-1.0.1.0-ubuntu16.04-x86_64 on 4 different servers.
2020 Jul 16
0
[PATCH vhost next 10/10] vdpa/mlx5: Add VDPA driver for supported mlx5 devices
...dbr_addr, vqp->db.dma);
> + MLX5_SET(create_qp_in, in, opcode, MLX5_CMD_OP_CREATE_QP);
> + err = mlx5_cmd_exec(mdev, in, inlen, out, sizeof(out));
> + kfree(in);
> + if (err)
> + goto err_kzalloc;
> +
> + vqp->mqp.uid = MLX5_GET(create_qp_in, in, uid);
> + vqp->mqp.qpn = MLX5_GET(create_qp_out, out, qpn);
> +
> + if (!vqp->fw)
> + rx_post(vqp, mvq->num_ent);
> +
> + return 0;
> +
> +err_kzalloc:
> + if (!vqp->fw)
> + mlx5_db_free(ndev->mvdev.mdev, &vqp->db);
> +err_db:
> + if (!vqp->fw)
> + rq_buf_free(...
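For readers unfamiliar with the pattern quoted above, here is a hedged, self-contained sketch of the same firmware-command flow: build a create_qp_in mailbox, execute it with mlx5_cmd_exec(), and read the QPN back from create_qp_out. The function name my_create_qp() and the minimal mailbox setup are illustrative; the real patch also programs the QP context, WQ buffer and doorbell shown in the excerpt.

    #include <linux/slab.h>
    #include <linux/mlx5/driver.h>
    #include <linux/mlx5/mlx5_ifc.h>

    static int my_create_qp(struct mlx5_core_dev *mdev, u32 *qpn)
    {
            u32 out[MLX5_ST_SZ_DW(create_qp_out)] = {};
            int inlen = MLX5_ST_SZ_BYTES(create_qp_in);
            void *in;
            int err;

            in = kzalloc(inlen, GFP_KERNEL);
            if (!in)
                    return -ENOMEM;

            /* Real callers also fill the qpc, pas and doorbell fields here. */
            MLX5_SET(create_qp_in, in, opcode, MLX5_CMD_OP_CREATE_QP);

            err = mlx5_cmd_exec(mdev, in, inlen, out, sizeof(out));
            if (!err)
                    *qpn = MLX5_GET(create_qp_out, out, qpn);

            kfree(in);
            return err;
    }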
2019 Dec 03
0
[vhost:linux-next 4/11] drivers/net/ethernet/mellanox/mlx4/en_netdev.c:1376:12: error: 'tx_ring' undeclared; did you mean 'en_print'?
..._netdev.c:50:0:
drivers/net/ethernet/mellanox/mlx4/en_netdev.c: In function 'mlx4_en_tx_timeout':
>> drivers/net/ethernet/mellanox/mlx4/en_netdev.c:1376:12: error: 'tx_ring' undeclared (first use in this function); did you mean 'en_print'?
txqueue, tx_ring->qpn, tx_ring->sp_cqn,
^
drivers/net/ethernet/mellanox/mlx4/mlx4_en.h:830:41: note: in definition of macro 'en_warn'
en_print(KERN_WARNING, priv, format, ##__VA_ARGS__)
^~~~~~~~~~~
drivers/net/ethernet/mellanox/mlx4/en_netd...
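The undeclared tx_ring is consistent with the per-queue loop (and its local tx_ring variable) being dropped in the mlx4 diff shown further down in these results. A plausible repair, assuming the new txqueue argument indexes the same ring array, is a one-liner such as:

    struct mlx4_en_tx_ring *tx_ring = priv->tx_ring[TX][txqueue];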
2018 Apr 25
2
RDMA Client Hang Problem
Dear Gluster-Users,
I am experiencing RDMA problems.
I have installed Ubuntu 16.04.4 running with 4.15.0-13-generic kernel,
MLNX_OFED_LINUX-4.3-1.0.1.0-ubuntu16.04-x86_64 on 4 different servers.
All of them have Mellanox ConnectX-4 LX dual-port NICs. These four
servers are connected via Mellanox SN2100 Switch.
I have installed GlusterFS Server v3.10 (from Ubuntu PPA) on 3 servers.
These 3
2012 Jun 07
2
Basic question about confidence intervals
...confidence interval for the response
time at that λ with a desired confidence level c
is computed as follows:
• Compute the mean server response time: $\mu = \frac{1}{N}\sum_{i=1}^{N} R_i$,
where $R_i$ is the server response time for the i-th run.
• Compute the standard deviation of the server response time:
$\sigma = \sqrt{\sum_{i=1}^{N} (R_i - \mu)^2 / (N-1)}$.
• The confidence interval for the response time at confidence level 100c%
is $[\mu - z_p \sigma/\sqrt{N},\; \mu + z_p \sigma/\sqrt{N}]$, where $p = (1+c)/2$
and $z_p$ is the quantile of the unit normal distribution at p.
If N <= 30, we replace $z_p$ with $t_{p;\,n-1}$, the p-quantile
of a t-variate with n−...
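A hedged C illustration of the computation above, using made-up response times and $z_p = 1.96$ (the 0.975 quantile of the unit normal, i.e. c = 0.95). With only six samples the text above would actually call for the t quantile; the z value is used here purely to keep the sketch self-contained. Build with gcc ci.c -lm.

    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
            /* Made-up server response times for six runs (seconds). */
            double r[] = { 10.2, 9.8, 11.1, 10.5, 9.9, 10.7 };
            int n = sizeof(r) / sizeof(r[0]);
            double mu = 0.0, var = 0.0;
            double zp = 1.96;  /* 0.975 quantile of the unit normal */

            for (int i = 0; i < n; i++)
                    mu += r[i] / n;                           /* sample mean */
            for (int i = 0; i < n; i++)
                    var += (r[i] - mu) * (r[i] - mu) / (n - 1); /* sample variance */

            double half = zp * sqrt(var) / sqrt(n);
            printf("mean = %.3f, 95%% CI = [%.3f, %.3f]\n",
                   mu, mu - half, mu + half);
            return 0;
    }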
2019 Dec 10
0
[PATCH net-next v10 2/3] mlx4: use new txqueue timeout argument
...; i < priv->tx_ring_num[TX]; i++) {
- struct mlx4_en_tx_ring *tx_ring = priv->tx_ring[TX][i];
-
- if (!netif_tx_queue_stopped(netdev_get_tx_queue(dev, i)))
- continue;
- en_warn(priv, "TX timeout on queue: %d, QP: 0x%x, CQ: 0x%x, Cons: 0x%x, Prod: 0x%x\n",
- i, tx_ring->qpn, tx_ring->sp_cqn,
- tx_ring->cons, tx_ring->prod);
- }
+ en_warn(priv, "TX timeout on queue: %d, QP: 0x%x, CQ: 0x%x, Cons: 0x%x, Prod: 0x%x\n",
+ txqueue, tx_ring->qpn, tx_ring->sp_cqn,
+ tx_ring->cons, tx_ring->prod);
priv->port_stats.tx_timeout++;
en_db...
2019 Dec 10
0
[PATCH net-next v10 2/3] mlx4: use new txqueue timeout argument
...; i < priv->tx_ring_num[TX]; i++) {
- struct mlx4_en_tx_ring *tx_ring = priv->tx_ring[TX][i];
-
- if (!netif_tx_queue_stopped(netdev_get_tx_queue(dev, i)))
- continue;
- en_warn(priv, "TX timeout on queue: %d, QP: 0x%x, CQ: 0x%x, Cons: 0x%x, Prod: 0x%x\n",
- i, tx_ring->qpn, tx_ring->sp_cqn,
- tx_ring->cons, tx_ring->prod);
- }
+ en_warn(priv, "TX timeout on queue: %d, QP: 0x%x, CQ: 0x%x, Cons: 0x%x, Prod: 0x%x\n",
+ txqueue, tx_ring->qpn, tx_ring->sp_cqn,
+ tx_ring->cons, tx_ring->prod);
priv->port_stats.tx_timeout++;
en_db...
2004 Feb 05
0
Majordomo results: STATUS
...9; not recognized.
>>>> NHmh3Bpbj+Ywbc0gds8rivxRuSSS/////wN37mjlZehul4ODdoyVobDC1+8KKEltlL7rG06Evfk4
**** Command 'nhmh3bpbj+ywbc0gds8rivxrusss/////wn37mjlzehul4oddoyvobdc1+8kkeltll7rg06evfk4' not recognized.
>>>> /////3q/B1Kg8UVsllOzGnzlUcAypx+aGJkdpC67S950DalI/////+qPN+KQQfWsZiPjpmw1AdCi
**** Command '/////3q/b1kg8uvsllozgnzlucaypx+agjkdpc67s950dali/////+qpn+kqqfwszipjpmw1adci' not recognized.
>>>> d08qCOnNtJ6Le25kXVlY/////1pfZ3KAkaW81vMTNlyFseASR3+6+Dl9xA5bq/5UrQk9/////5p3
**** Command 'd08qconntj6le25kxvly/////1pfz3kakaw81vmtnlyfseasr3+6...
2020 Nov 01
12
[PATCH mlx5-next v1 00/11] Convert mlx5 to use auxiliary bus
From: Leon Romanovsky <leonro at nvidia.com>
Changelog:
v1:
* Renamed _mlx5_rescan_driver to be mlx5_rescan_driver_locked like in
other parts of the mlx5 driver.
* Renamed MLX5_INTERFACE_PROTOCOL_VDPA to be MLX5_INTERFACE_PROTOCOL_VNET as
a preparation for the coming series from Eli C.
* Some small naming renames in mlx5_vdpa.
* Refactored adev index code to make Parav's SF series
2019 Dec 03
4
[PATCH RFC net-next v8 0/3] netdev: ndo_tx_timeout cleanup
A bunch of drivers want to know which tx queue triggered a timeout,
and virtio wants to do the same.
We actually have the info to hand, let's just pass it on to drivers.
Note: tested with an experimental virtio patch by Julio.
That patch itself isn't ready yet though, so not included.
Other drivers compiled only.
Michael S. Tsirkin (3):
netdev: pass the stuck queue to the timeout
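To illustrate what the series is driving at, here is a hedged sketch of a driver-side handler once the stuck queue is passed in. The hook signature is inferred from the mlx4 diffs elsewhere in these results, and the mydrv_* names are illustrative, not code from the series.

    #include <linux/netdevice.h>

    /* Illustrative handler: the stuck queue index arrives as an argument,
     * so the driver no longer has to scan every TX queue to find it.
     */
    static void mydrv_tx_timeout(struct net_device *dev, unsigned int txqueue)
    {
            netdev_warn(dev, "TX timeout on queue %u\n", txqueue);
            /* reset or re-kick only the reported queue here */
    }

    static const struct net_device_ops mydrv_netdev_ops = {
            .ndo_tx_timeout = mydrv_tx_timeout,
    };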
2019 Dec 10
4
[PATCH net-next v11 0/3] netdev: ndo_tx_timeout cleanup
Sorry about the churn, v10 was based on net - not on net-next
by mistake.
A bunch of drivers want to know which tx queue triggered a timeout,
and virtio wants to do the same.
We actually have the info to hand, let's just pass it on to drivers.
Note: tested with an experimental virtio patch by Julio.
That patch itself isn't ready yet though, so not included.
Other drivers compiled only.
2019 Dec 10
4
[PATCH net-next v11 0/3] netdev: ndo_tx_timeout cleanup
Sorry about the churn, v10 was based on net - not on net-next
by mistake.
A bunch of drivers want to know which tx queue triggered a timeout,
and virtio wants to do the same.
We actually have the info to hand, let's just pass it on to drivers.
Note: tested with an experimental virtio patch by Julio.
That patch itself isn't ready yet though, so not included.
Other drivers compiled only.
2019 Dec 10
4
[PATCH net-next v12 0/3] netdev: ndo_tx_timeout cleanup
Yet another forward declaration I missed. Hopefully the last one ...
A bunch of drivers want to know which tx queue triggered a timeout,
and virtio wants to do the same.
We actually have the info to hand, let's just pass it on to drivers.
Note: tested with an experimental virtio patch by Julio.
That patch itself isn't ready yet though, so not included.
Other drivers compiled only.