Displaying 16 results from an estimated 16 matches for "hcas".
2011 Nov 14
1
RDMA/Ethernet with ROCEE - failed to modify QP to RTR
...remote QP
I see this when I run RDMA over Ethernet using the ROCEE RPMs, but when I run over InfiniBand using RHEL 6.2-, it runs fine. On the same Ethernet configuration, Gluster/TCP runs fine, NFS/RDMA runs fine, as does an AMQP app. But the qperf and rping utilities fail in the same way. The firmware on the HCAs is not the latest; is it worth the risk to upgrade?
I went into the debugger and found the line where qperf fails; it's near line 2056 in rdma.c in the qperf sources (built from qperf-debuginfo via the Makefile).
(gdb)
2088 } else if (dev->trans == IBV_QPT_RC) {
(gdb)
2090 flags = IBV_QP_STATE...
2018 May 29
2
RDMA inline threshold?
...ed mode, config.transport=tcp,rdma). Mounting with transport=rdma shows this error; mounting with transport=tcp is fine.
However, this problem does not arise on all large directories, only on some. I haven't recognized a pattern yet.
I'm using glusterfs v3.12.6 on the servers, with QDR InfiniBand HCAs.
Is this a known issue with RDMA transport?
best wishes,
Stefan
2019 Jul 30
2
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
On Tue, Jul 30, 2019 at 08:51:57AM +0300, Christoph Hellwig wrote:
> All users pass PAGE_SIZE here, and if we wanted to support single
> entries for huge pages we should really just add a HMM_FAULT_HUGEPAGE
> flag instead that uses the huge page size instead of having the
> caller calculate that size once, just for the hmm code to verify it.
I suspect this was added for the ODP
2018 May 30
0
RDMA inline threshold?
...cp,rdma). Mounting
> with transport=rdma shows this error, mounting with transport=tcp is fine.
>
> however, this problem does not arise on all large directories, not on all.
> I didn't recognize a pattern yet.
>
> I'm using glusterfs v3.12.6 on the servers, QDR Infiniband HCAs .
>
> Is this a known issue with RDMA transport?
>
> best wishes,
> Stefan
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://lists.gluster.org/mailman/listinfo/gluster-users
>
2018 May 30
2
RDMA inline threshold?
...> with transport=rdma shows this error, mounting with transport=tcp is fine.
>>
>> however, this problem does not arise on all large directories, not on
>> all. I didn't recognize a pattern yet.
>>
>> I'm using glusterfs v3.12.6 on the servers, QDR Infiniband HCAs .
>>
>> Is this a known issue with RDMA transport?
>>
>> best wishes,
>> Stefan
>>
2019 Jul 30
0
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
...huge pages
are ok into hmm_range_fault, and it then could pass the shift out, and
limits itself to a single vma (which it normally doesn't, that is an
additional complication). But all this seems really awkward in terms
of an API still. AFAIK ODP is only used by mlx5, and mlx5 unlike other
IB HCAs can use scatterlist style MRs with variable length per entry,
so even if we pass multiple pages per entry from hmm it could coalesce
them. The best API for mlx5 would of course be to pass a biovec-style
variable length structure that hmm_fault could fill out, but that would
be a major restructure.
2018 May 30
0
RDMA inline threshold?
...sport=tcp,rdma). Mounting with transport=rdma shows this error, mounting with transport=tcp is fine.
>
> however, this problem does not arise on all large directories, not on all. I didn't recognize a pattern yet.
>
> I'm using glusterfs v3.12.6 on the servers, QDR Infiniband HCAs .
>
> Is this a known issue with RDMA transport?
>
> best wishes,
> Stefan
>
2011 Mar 23
2
OFFTOPIC :: IB hardware choice
Hi! I need advice from those who use IB (as admins :) )
i have a choice between :
1. Mellanox InfiniHost III Lx HCA card, single-port CX4, DDR, PCIe
x8, mem-free, tall bracket, RoHS R5
2. QLogic Single Port 20 Gb InfiniBand to x16 PCI Express Adapter
(Single Pack)
aside from the price, is there anything else that could help me
distinguish between these two?
(these will be used in
2015 Mar 28
2
Why is irqbalance not balancing?
I am running irqbalance with default configuration on an Atom 330 machine. This CPU has 2 physical cores + 2 SMT (aka Hyperthreading) cores.
As shown below the interrupt for the eth0 device is always on CPUs 0 and 1, with CPUs 2 and 3 left idle. But why?
Maybe irqbalance prefers physical cores? My understanding, though, is that the even-numbered CPUs are the physical cores, with the
2019 Jul 30
1
[PATCH 07/13] mm: remove the page_shift member from struct hmm_range
...to hmm_range_fault, and it then could pass the shift out, and
> limits itself to a single vma (which it normally doesn't, that is an
> additional complication). But all this seems really awkward in terms
> of an API still. AFAIK ODP is only used by mlx5, and mlx5 unlike other
> IB HCAs can use scatterlist style MRs with variable length per entry,
> so even if we pass multiple pages per entry from hmm it could coalesce
> them.
When the driver takes faults it has to repair the MR mapping, and
fixing a page in the middle of a variable length SGL would be pretty
complicated....
2008 Mar 04
16
Cannot send after transport endpoint shutdown (-108)
This morning I've had both my infiniband and tcp lustre clients hiccup. They are evicted from the server presumably as a result of their high load and consequent timeouts. My question is - why don't the clients re-connect? The infiniband and tcp clients both give the following message when I type "df" - Cannot send after transport endpoint shutdown (-108). I've
2013 Jun 06
1
Reproducible Infiniband panic
Hello,
I see a reproducible panic when doing ibping and aborting it with ^C. My
setup is two machines with Mellanox InfiniHost III HCAs (one Linux, one
FreeBSD) connected back-to-back.
Details below. I can upload 2 crash dumps, if this is useful. For some
reason the port doesn't become ACTIVE, so no packets arrive, but that is
probably unrelated.
% uname -a
FreeBSD cosel.inf.tu-dresden.de 9.1-STABLE FreeBSD 9.1-STABLE #0
r+b65...
2012 Jul 04
13
[PATCH 0/6] tcm_vhost/virtio-scsi WIP code for-3.6
From: Nicholas Bellinger <nab at linux-iscsi.org>
Hi folks,
This series contains patches required to update tcm_vhost <-> virtio-scsi
connected hosts <-> guests to run on v3.5-rc2 mainline code. This series is
available on top of target-pending/auto-next here:
git://git.kernel.org/pub/scm/linux/kernel/git/nab/target-pending.git tcm_vhost
This includes the necessary vhost
2019 Sep 23
2
[PATCH trivial 1/3] treewide: drivers: Fix Kconfig indentation
...lp---
+ tristate "Broadcom Netxtreme HCA support"
+ depends on 64BIT
+ depends on ETHERNET && NETDEVICES && PCI && INET && DCB
+ select NET_VENDOR_BROADCOM
+ select BNXT
+ ---help---
This driver supports Broadcom NetXtreme-E 10/25/40/50 gigabit
RoCE HCAs. To compile this driver as a module, choose M here:
the module will be called bnxt_re.
diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
index 8911bc2ec42a..f553adae8eb4 100644
--- a/drivers/input/keyboard/Kconfig
+++ b/drivers/input/keyboard/Kconfig
@@ -171,11 +171,...
2019 Oct 04
3
[RESEND TRIVIAL 1/3] treewide: drivers: Fix Kconfig indentation
...lp---
+ tristate "Broadcom Netxtreme HCA support"
+ depends on 64BIT
+ depends on ETHERNET && NETDEVICES && PCI && INET && DCB
+ select NET_VENDOR_BROADCOM
+ select BNXT
+ ---help---
This driver supports Broadcom NetXtreme-E 10/25/40/50 gigabit
RoCE HCAs. To compile this driver as a module, choose M here:
the module will be called bnxt_re.
diff --git a/drivers/input/keyboard/Kconfig b/drivers/input/keyboard/Kconfig
index 8911bc2ec42a..f553adae8eb4 100644
--- a/drivers/input/keyboard/Kconfig
+++ b/drivers/input/keyboard/Kconfig
@@ -171,11 +171,...