Displaying 20 results from an estimated 1000 matches similar to: "SMB Direct support?"
2019 Jul 31
1
SMB Direct support?
On Wed, Jul 31, 2019 at 11:02:18AM +0000, douxevip via samba wrote:
> Hi all, is there anybody on the mailing list who is more knowledgeable about SMB direct? Would appreciate some pointers. See below. Thanks.
>
> -------- Original Message --------
> On Jul 25, 2019, 22:52, douxevip via samba <samba at lists.samba.org> wrote:
> Hello all, I was reading up on SMB Direct
2019 Jul 31
0
SMB Direct support?
Hi all, is there anybody on the mailing list who is more knowledgeable about SMB direct? Would appreciate some pointers. See below. Thanks.
-------- Original Message --------
On Jul 25, 2019, 22:52, douxevip via samba <samba at lists.samba.org> wrote:
Hello all, I was reading up on SMB Direct support and it seems very interesting. I looked through slides of a presentation by Stefan
2015 Apr 14
1
HBA enumeration and multipath configuration
# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
# uname -r
3.10.0-123.20.1.el7.x86_64
Hi,
We use iSCSI over a 10G Ethernet Adapter and SRP over an Infiniband adapter to provide multipathing
to our storage:
# lspci | grep 10-Gigabit
81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
81:00.1 Ethernet controller: Intel Corporation
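A minimal sketch of how such a dual-transport (iSCSI over 10GbE plus SRP over InfiniBand) setup is typically brought under device-mapper multipath on CentOS 7; target and device names are placeholders, not taken from this message:

# Enable dm-multipath with a stock configuration and start the daemon
mpathconf --enable --user_friendly_names y
systemctl enable multipathd
systemctl start multipathd

# Log in to the iSCSI target over the 10GbE interface (targets already defined)
iscsiadm -m node -l

# Load the SRP initiator so the InfiniBand paths to the same LUN appear
modprobe ib_srp

# Both transports should now show up as paths under a single mpathN device
# sharing one WWID
multipath -ll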
2019 Jul 19
3
Samba async performance - bottleneck or bug?
Hi David,
Thanks for your reply.
> Hmm, so this "async" (sync=disabled?) ZFS tunable means that it
> completely ignores O_SYNC and O_DIRECT and runs the entire workload in
> RAM? I know nothing about ZFS, but that sounds like a mighty dangerous
> setting for production deployments.
Yes, you are correct - sync writes will be flushed to RAM, just like async writes, and will stay in RAM for
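For reference, this behaviour is controlled by the dataset's sync property; a minimal sketch, with the pool/dataset name as a placeholder:

# Show the current sync policy (standard | always | disabled)
zfs get sync pool/dataset

# With sync=disabled, O_SYNC/fsync requests return as soon as the data is
# cached in RAM and only become durable at the next transaction group commit,
# so a crash or power loss can drop the most recent writes
zfs set sync=disabled pool/dataset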
2019 Jul 18
2
Samba async performance - bottleneck or bug?
Hi,
I have a ZFS dataset that has sync writes disabled (setting sync=disabled) which means that it will only do async writes, and sync requests get converted to async writes. The ZFS dataset is hosted on a single Samsung 840 Pro 512GB SATA SSD.
I have this same dataset served as a Samba share, using Proxmox VE 6. Samba version 4.9.5-Debian (Buster), protocol SMB3_11. Kernel version 5.0.15.
To
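The post is cut off here; the replies indicate the benchmarks were run with fio against the mounted share, roughly along these lines (job parameters and the mountpoint are illustrative, not the poster's exact command):

# 4k random writes through the SMB mount; --direct=1 requests O_DIRECT,
# which the replies single out as the option that defeats client-side
# caching via oplocks/leases
fio --name=smbtest --filename=/mnt/smbshare/fio.dat --size=1G \
    --rw=randwrite --bs=4k --ioengine=libaio --iodepth=32 --direct=1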
2018 Jul 10
2
Solarflare SFC9000 direct connection
hi guys
I wonder if any of you might be using the SFN6122F-R7 SFP+ (SFC9000); same
firmware everywhere, CentOS 7.5 too.
I'm trying a poor man's setup to get the servers onto a 10GbE network.
The setup is such that three Dell R815 servers are connected to each other; each has
one Solarflare card (SFP+ ports), and each Solarflare is set up as a net-team (both
ports on a card form one net-team device) with the runner in broadcast
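A minimal sketch of such a broadcast-runner team on CentOS 7 with NetworkManager; the interface names are placeholders, not taken from this message:

# Create the team device with the broadcast runner
nmcli con add type team con-name team0 ifname team0 \
    config '{"runner": {"name": "broadcast"}}'

# Enslave both ports of the Solarflare card
nmcli con add type team-slave con-name team0-port1 ifname ens1f0 master team0
nmcli con add type team-slave con-name team0-port2 ifname ens1f1 master team0
nmcli con up team0

# Check runner and per-port state
teamdctl team0 state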
2015 Mar 05
1
Cannot remount drive after lost iSCSI connection
The most recent message is:
[3108269.919256] sd 2:0:1:0: timing out command, waited 1080s
[3108269.919528] sd 2:0:1:0: [sdb] Unhandled error code
[3108269.919535] sd 2:0:1:0: [sdb] Result: hostbyte=DID_OK
driverbyte=DRIVER_OK
[3108269.919540] sd 2:0:1:0: [sdb] CDB: Read(10): 28 00 00 01 21 47 00 00
08 00
[3108269.919586] EXT4-fs error (device sdb1): ext4_find_entry: reading
directory #2 offset 0
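With errors like these, a typical recovery sequence once the iSCSI session is healthy again looks roughly like this (the device name is taken from the log above; the mountpoint is a placeholder):

# Confirm the session is logged in again and rescan it for the LUN
iscsiadm -m session
iscsiadm -m session --rescan

# The EXT4-fs errors mean the filesystem should be checked before remounting;
# run this only once the block device has stopped timing out
umount /dev/sdb1
e2fsck -f /dev/sdb1
mount /dev/sdb1 /mnt/iscsi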
2019 Aug 06
1
Samba async performance - bottleneck or bug?
Hi David,
> You're still using direct I/O with fio, which will likely disallow
> client side caching with oplocks/leases.
Is there any way to bypass this with settings in smb.conf and treat all writes as async?
> I'd recommend checking that your (cifs.ko?) client is using a relatively
> modern SMB2+ dialect and that leases are enabled on both sides.
Yes, I
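A quick sketch of how both ends can be checked; server, share and mountpoint names are placeholders:

# On the Linux client: mount with an explicit modern dialect
mount -t cifs //server/share /mnt/share -o vers=3.1.1,username=user

# Inspect the negotiated dialect and lease/oplock state of open files
cat /proc/fs/cifs/DebugData

# On the Samba server: confirm the effective lease setting
# (testparm -v also prints defaulted parameters)
testparm -sv 2>/dev/null | grep -i 'smb2 leases'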
2019 Sep 01
3
vfs_shadow_copy2 not working
Hi Jeremy,
Here's the log with log level 10: https://pastebin.com/0EAuz2B8
The location of the shared folder: /pool/shadowtest
The location of the snapshots: /pool/shadowtest/.zfs/snapshot
Here are how snapshots are named:
autosnap_2019-09-01_13:29:01_daily
autosnap_2019-09-01_13:29:01_hourly
autosnap_2019-09-01_13:44:09_frequently
And this is currently in my smb.conf file:
[shadowtest]
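The smb.conf excerpt is truncated above; a working share for snapshot names like these usually looks roughly as follows (a sketch that matches only the *_daily snapshots, not the poster's actual config):

[shadowtest]
    path = /pool/shadowtest
    read only = no
    vfs objects = shadow_copy2
    shadow:snapdir = .zfs/snapshot
    shadow:sort = desc
    shadow:localtime = yes
    shadow:format = autosnap_%Y-%m-%d_%H:%M:%S_daily

Running testparm afterwards flags typos in the share definition before smbd is reloaded.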
2019 Apr 11
4
[RFC 0/3] VirtIO RDMA
On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> On Thu, 11 Apr 2019 14:01:54 +0300
> Yuval Shaia <yuval.shaia at oracle.com> wrote:
>
> > Data center backends use more and more RDMA or RoCE devices and more and
> > more software runs in virtualized environment.
> > There is a need for a standard to enable RDMA/RoCE on Virtual Machines.
> >
2013 Sep 03
2
Intel 10Gb network card
hi,
I'm having a hard time figuring this out; the kernel says:
...
ix0: <Intel(R) PRO/10GbE PCI-Express Network Driver, Version - 2.5.15> port
0xecc0-0xecdf mem 0xd9e80000-0xd9efffff,0xd9ff8000-0xd9ffbfff irq 40 at device
0.0 on pci4
ix0: Using MSIX interrupts with 9 vectors
ix0: Ethernet address: 90:e2:ba:29:c0:54
ix0: PCI Express Bus: Speed 5.0GT/s Width x8
ix1: <Intel(R) PRO/10GbE
2019 Apr 22
2
[Qemu-devel] [RFC 0/3] VirtIO RDMA
On Fri, Apr 19, 2019 at 01:16:06PM +0200, Hannes Reinecke wrote:
> On 4/15/19 12:35 PM, Yuval Shaia wrote:
> > On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> > > On Thu, 11 Apr 2019 14:01:54 +0300
> > > Yuval Shaia <yuval.shaia at oracle.com> wrote:
> > >
> > > > Data center backends use more and more RDMA or RoCE devices and
2019 Apr 15
4
[RFC 0/3] VirtIO RDMA
On Thu, Apr 11, 2019 at 07:02:15PM +0200, Cornelia Huck wrote:
> On Thu, 11 Apr 2019 14:01:54 +0300
> Yuval Shaia <yuval.shaia at oracle.com> wrote:
>
> > Data center backends use more and more RDMA or RoCE devices and more and
> > more software runs in virtualized environment.
> > There is a need for a standard to enable RDMA/RoCE on Virtual Machines.
> >
2019 Sep 02
2
vfs_shadow_copy2 not working
On Sun, Sep 1, 2019 at 3:24 PM douxevip via samba <samba at lists.samba.org>
wrote:
> > The location of the shared folder: /pool/shadowtest
> > The location of the snapshots: /pool/shadowtest/.zfs/snapshot
> > Here are how snapshots are named:
> >
> > autosnap_2019-09-01_13:29:01_daily
> > autosnap_2019-09-01_13:29:01_hourly
> >
2023 Nov 14
2
emulate ARM ?
Hi guys.
How do you emulate the ARM arch - I mean, with what's in the distro
and/or SIG repos, as opposed to doing it yourself?
many thanks, L.
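Assuming a qemu-system-aarch64 build is available from the distro or a SIG/EPEL repo (package, firmware path and image names below are placeholders), full-system emulation looks roughly like this:

dnf install qemu-system-aarch64 edk2-aarch64

# Boot an aarch64 guest under pure emulation (TCG; no KVM on an x86 host)
qemu-system-aarch64 -M virt -cpu cortex-a57 -m 2048 -nographic \
    -bios /usr/share/edk2/aarch64/QEMU_EFI.fd \
    -drive file=disk-aarch64.qcow2,format=qcow2,if=virtio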
2018 Jun 27
2
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
On Wed, Jun 27, 2018 at 10:24:43PM +0800, Jason Wang wrote:
>
>
> On 2018年06月26日 13:17, xiangxia.m.yue at gmail.com wrote:
> > From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
> >
> > This patch improves the guest receive performance from
> > host. On the handle_tx side, we poll the sock receive
> > queue at the same time. handle_rx do that in the
2018 Jun 26
3
[PATCH net-next v2] net: vhost: improve performance when enable busyloop
From: Tonghao Zhang <xiangxia.m.yue at gmail.com>
This patch improves guest receive performance from the
host. On the handle_tx side, we poll the sock receive
queue at the same time; handle_rx does the same.
To avoid deadlock, the code is changed to lock the vqs one
by one and to use VHOST_NET_VQ_XX as a subclass for
mutex_lock_nested. With the patch, qemu can set differently
the