Displaying 20 results from an estimated 300 matches similar to: "[Gluster-devel] Poor performance of block-store with RDMA"
2017 Sep 13
0
glusterfs expose iSCSI
On Wed, Sep 13, 2017 at 1:03 PM, GiangCoi Mr <ltrgiang86 at gmail.com> wrote:
> Hi all
>
Hi GiangCoi,
The good news is that we now have gluster-block [1], which makes it
very easy to configure block storage using gluster.
gluster-block takes care of all the targetcli and tcmu-runner
configuration for you; all you need as a prerequisite is a gluster
volume.
And the sad part is
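For reference, a minimal sketch of the gluster-block workflow described above, assuming an existing gluster volume (here called "block-hosting"; all hostnames and sizes are placeholders):

  # Create a 1 GiB block device replicated across three gateway nodes;
  # gluster-block drives the targetcli/tcmu-runner configuration itself.
  gluster-block create block-hosting/sample-block ha 3 host1,host2,host3 1GiB

  # Inspect the result.
  gluster-block info block-hosting/sample-block
  gluster-block list block-hosting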
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
Hi Takao.
Could you attach some logs that we can use for diagnosis?
On 2017-08-15 19:42, Hatazaki, Takao wrote:
>
> Hi,
>
> I ran ib_write_lat from perftest. It worked fine. Between servers and
> between server and client, 2-byte latency was ~0.8us, 8MB bandwidth
> was ~6GB/s. Very normal with IB/FDR.
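For context, numbers like those quoted above typically come from the perftest tools; a minimal sketch (the server address is a placeholder):

  # On the server side, start the listener for each test:
  ib_write_lat
  ib_write_bw

  # On the client side, point the matching tool at the server:
  ib_write_lat -s 2 192.168.0.10          # 2-byte RDMA write latency
  ib_write_bw -s 8388608 192.168.0.10     # 8 MB RDMA write bandwidth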
2017 Sep 21
0
Sharding option for distributed volumes
Hello Ji-Hyeon,
Thanks, is that option available in the 3.12 gluster release? We're
still on 3.8 and just playing around with the latest version in order to get our
solution migrated.
Thank you!
On 9/21/17 2:26 PM, Ji-Hyeon Gim wrote:
> Hello Pavel!
>
> In my opinion, you need to check the features.shard-block-size option first.
> If a file is no bigger than this value, it would not be
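For reference, a minimal sketch of checking and tuning that option (the volume name "myvol" and the 64MB value are placeholders):

  gluster volume get myvol features.shard              # is sharding enabled?
  gluster volume get myvol features.shard-block-size   # current shard size

  # Files smaller than this value stay as a single file; adjust if needed:
  gluster volume set myvol features.shard-block-size 64MB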
2017 Sep 13
3
glusterfs expose iSCSI
Hi all
I want to configure glusterfs to expose an iSCSI target. I followed this
article
https://pkalever.wordpress.com/2016/06/23/gluster-solution-for-non-shared-persistent-storage-in-docker-container/
but when I install tcmu-runner, it doesn't work.
I set up on CentOS 7 and installed tcmu-runner from an RPM. When I run targetcli,
it does not show user:glfs and user:qcow:
/> ls
o- /
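A hedged checklist for this symptom, assuming the usual tcmu-runner packaging (paths may vary by distribution): user:glfs and user:qcow only appear in targetcli when tcmu-runner is running and its handlers are installed.

  systemctl status tcmu-runner
  ls /usr/lib64/tcmu-runner/       # handler_glfs.so etc. should be present
  targetcli ls /backstores         # user:glfs should now be listed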
2017 Aug 06
1
State: Peer Rejected (Connected)
Hi Ji-Hyeon,
Thanks to your help I could find the problematic file. It is the quota file of my volume: it has a different checksum on node1, whereas node2 and arbiternode have the same checksum. This is expected, as I had issues with my quota file and had to fix it manually with a script (more details on this mailing list in a previous post), and I only did that on node1.
So what I now
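A hedged way to compare the quota configuration across nodes (run on each node; "VOLNAME" is a placeholder and the paths assume the default /var/lib/glusterd layout):

  md5sum /var/lib/glusterd/vols/VOLNAME/quota.conf
  cat /var/lib/glusterd/vols/VOLNAME/quota.cksum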
2017 Aug 06
0
State: Peer Rejected (Connected)
On 2017-08-06 15:59, mabi wrote:
> Hi,
>
> I have a 3-node replica volume (including arbiter) with GlusterFS
> 3.8.11, and during the night one of my nodes (node1) ran out of memory for
> some unknown reason, so the Linux OOM killer killed the
> glusterd and glusterfs processes. I restarted the glusterd process, but
> now that node is in "Peer Rejected"
2010 Aug 22
1
ocfs2 crash on intensive disk write
Hi,
I'm getting system (and eventually cluster) crashes on intensive disk
writes in Ubuntu Server 10.04 with my OCFS2 file system.
I have an iSER (InfiniBand) backed shared disk array with OCFS2 on it.
There are 6 nodes in the cluster, and the heartbeat interface is over a
regular 1GigE connection. Originally, the problem presented itself while
I was doing performance testing and
2009 Oct 07
1
tgtadm and exported iscsi volumes
In haste, I decided for the first time not to pull down and compile
IET to export a single volume to a Windows machine, so I just used tgtd.
Well, much more time later than it would have been had I used
IET: when I mount the volume on the Windows initiator, it can't find and install
a driver? WTF?
Does this have anything to do with tgtd being a tech preview? :)
Anybody seen this before?
jlc
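For reference, a minimal tgtadm sketch for exporting a block device the way the post describes (the IQN, device path and open ACL are placeholders; tighten the ACL in real use):

  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-10.com.example:disk1
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/vg0/lv_export
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL
  tgtadm --lld iscsi --op show --mode target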
2018 Feb 09
0
[Gluster-devel] Glusterfs and Structured data
+gluster-users
Another guideline we can provide is to disable all performance xlators for workloads requiring strict metadata consistency (even for non-gluster-block use cases like the native FUSE mount, etc.). Note that we might still be able to keep a few perf xlators turned on, but that will require some experimentation. The safest and easiest approach would be to turn off the following xlators:
* performance.read-ahead
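For the xlator named above, the toggle is a standard volume option; a minimal sketch with a placeholder volume name:

  gluster volume set myvol performance.read-ahead off
  gluster volume get myvol performance.read-ahead    # verify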
2008 Aug 20
0
Iser support for XEN
Hi Xen-Users!
I am asking about the InfiniBand support of the Xen hypervisor, as described in
the following papers:
http://www.openfabrics.org/archives/spring2007sonoma/Monday%20April%2030/Xiong%20OFA-Sonoma-2007-04-30-SoftIB.ppt
and
http://www.xen.org/files/Xen_RDMA_Voltaire_YHaviv.pdf
The papers say that direct InfiniBand support is possible, circumventing
dom0 to access InfiniBand devices
2006 Aug 02
0
[PATCH 0/6] SCSI frontend and backend drivers
This patchset includes an updated version of the SCSI frontend and
backend drivers.
The frontend and backend drivers exchange SCSI RDMA Protocol (SRP)
messages via a ring buffer. The backend driver sends SCSI commands to
the user-space daemon, which performs SCSI commands and I/O
operations. The backend driver uses the VM_FOREIGN feature, like the blktap
driver, for zero-copy of data pages.
Like the
2012 May 31
0
iSCSI offload with Broadcom NetXtreme II BCM5709.
Hello all.
We recently purchased an HP ProLiant DL380 G7 server and also an HP LeftHand
P4500 G2 24TB iSCSI box.
The server itself has quad Ethernet based on the BCM5709 chipset from
Broadcom (NetXtreme II).
My main question is about the iSCSI offload mechanism, as iSCSI itself is
pretty new to me,
but I have gotten a good hold on it.
I have managed to set up and configure our LeftHand box into
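For what it's worth, a hedged sketch of how bnx2i offload interfaces are typically driven with open-iscsi (the interface name, portal and IQN below are placeholders):

  # Offload interfaces show up alongside the default software iface:
  iscsiadm -m iface

  # Discover and log in through a specific bnx2i interface:
  iscsiadm -m discovery -t sendtargets -p 10.0.0.50:3260 -I bnx2i.00:10:18:aa:bb:cc
  iscsiadm -m node -T iqn.2003-10.com.lefthandnetworks:target1 -p 10.0.0.50:3260 \
      -I bnx2i.00:10:18:aa:bb:cc --login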
2011 Jul 29
1
Custom storage pools/volumes
Hello everyone,
We're currently working on using libvirt as an abstraction API to make our dealings with other hypervisors far easier and faster; however, we have our own storage API that connects/disconnects and makes LUNs available over iSCSI with iSER.
We would have loved to use a storage pool; however, our system implements one target per LUN/VDI, therefore we'd have to create a pool
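For comparison, if a pool per target were acceptable, a single-target iSCSI pool can be defined like this (host, IQN and pool name are placeholders; this sketch does not address the iSER transport side):

  virsh pool-define-as lun0pool iscsi \
      --source-host 10.0.0.1 \
      --source-dev iqn.2011-07.com.example:storage.lun0 \
      --target /dev/disk/by-path
  virsh pool-start lun0pool
  virsh vol-list lun0pool    # the LUNs behind that target appear as volumes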
2018 Oct 20
1
how to use LV(container) created by lxc-create?
Hi everyone,
how can I use an LV created by lxc in a libvirt guest domain?
You know, you run lxc-create with an LV as the backing store for a
container (Ubuntu in my case), and now you want to hand it to libvirt.
That, I thought, would be a very common case, but I failed to
find a good howto on going from lxc-create(lvm) to letting libvirt
take control over it. Or is that not how you would do it at all?
Many thanks, L.
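One possible flow, heavily hedged (VG, LV, container and domain names are placeholders): create the container on an LVM-backed root, then point libvirt at the resulting LV.

  # Container rootfs on an LV:
  lxc-create -n c1 -t ubuntu -B lvm --vgname vg0 --lvname c1 --fssize 8G

  # The rootfs now lives on /dev/vg0/c1. One option is to hand that LV to an
  # existing libvirt domain as a disk (the domain name is hypothetical):
  virsh attach-disk mydomain /dev/vg0/c1 vdb --persistent

  # For libvirt-lxc, the LV would instead be mounted and referenced as a
  # <filesystem> source in the domain XML:
  mount /dev/vg0/c1 /srv/c1-rootfs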
2016 Feb 05
2
safest way to mount iscsi loopback..
.. what is?
Fellow CentOSians,
how do you mount your loopback targets?
I'm trying an LVM backstore; I was hoping to mount by
UUID, but the device is exposed more than once, and I don't know how the kernel
would decide which device to use.
Thanks
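A hedged way to make the choice explicit when the same LUN shows up more than once (the IQN and mountpoint are placeholders): mount via the stable by-path (or by-id) symlink of the session you actually want, rather than a bare /dev/sdX.

  lsblk
  ls -l /dev/disk/by-path/ | grep iscsi
  mount /dev/disk/by-path/ip-127.0.0.1:3260-iscsi-iqn.2016-02.com.example:lv0-lun-1 /mnt/lun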
2012 Jan 13
1
LSI/3ware 9750-4i and multipath I/O
Hi,
I was wondering if anyone has successfully configured two LSI/3ware 9750-4i series controllers for multipathing under CentOS 5.7 x86_64?
I've tried some basic setups with both multibus and failover settings, and had repeatable filesystem corruption over an iSCSI (tgtd) or NFSv3 connection.
Any ideas?
Vahan
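Not a fix for the corruption described, but a hedged starting point for checking what multipathd has assembled and what policy it is applying (the multipath.conf fragment is an example, not a recommendation):

  multipath -ll

  # /etc/multipath.conf fragment:
  #   defaults {
  #       path_grouping_policy  failover
  #       user_friendly_names   yes
  #   }
  service multipathd restart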
2012 Nov 17
2
iSCSI Question
Hey everyone,
Is anybody aware of a /true/ active/active multi-head and multi-target
clustered iSCSI daemon?
IE:
Server 1:
Hostname: host1.test.com
IP Address: 10.0.0.1
Server 2:
Hostname: host2.test.com
IP Address: 10.0.0.2
Then they would utilize a CLVM disk between them; let's call that VG "disk".
They would then directly map each LUN (1, 2, 3, 4, etc.) to LVs named 1, 2, 3, 4, ... and
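The per-node LV-to-LUN mapping itself is straightforward, for example with LIO/targetcli (the IQN and device path are placeholders); the hard part, coordinating truly active/active heads, is not shown here:

  targetcli /backstores/block create name=lv1 dev=/dev/mapper/disk-1
  targetcli /iscsi create iqn.2012-11.com.test:host1.disk
  targetcli /iscsi/iqn.2012-11.com.test:host1.disk/tpg1/luns create /backstores/block/lv1
  targetcli saveconfig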
2016 Apr 12
0
Problems with scsi-target-utils when hosted on dom0 centos 7 xen box
On Mon, Apr 11, 2016 at 9:14 PM, Nathan Coulson <nathan at bravenet.com> wrote:
> Hello
>
> We were attempting to use scsi-target-utils, hosted on a Xen dom0 VM using
> localhost, and were running into some problems. I was not able to reproduce this
> on a CentOS 7.2 server using the default kernel.
Have you tried booting the Virt SIG kernel natively and seeing if you
can
2016 Apr 13
0
Problems with scsi-target-utils when hosted on dom0 centos 7 xen box
On 2016-04-12 09:43 AM, Nathan Coulson wrote:
> By natively, I take it using
> kernel /vmlinuz (vs kernel /xen)
>
> Not yet, but working on setting up such an environment.
>
> (At this time, I was using virt-install to reproduce the problem, and
> the original server we are testing on did not support kvm but the 2nd
> server does).
>
> On 2016-04-12 03:26 AM, George
2014 May 17
1
Large file system idea
This idea is intriguing...
Suppose one has a set of file servers called A, B, C, D, and so forth, all
running CentOS 6.5 64-bit, all being interconnected with 10GbE. These file
servers can be divided into identical pairs, so A is the same
configuration (disks, processors, etc.) as B, C the same as D, and so forth
(because this is what I have; there are ten servers in all). Each file
server has