Dan,
I've read your blog post about this, but I've been unable to find a way
to install this "plugin" on CentOS 6 for use with tgtd.
There appears to be a "scsi-target-utils-gluster" RPM out there that
provides a module to accomplish this, but I can only find that package
for EL7-based OSes.
Do I have to build the module myself for tgtd on CentOS 6? If so, do
you have instructions to do so? Thanks.
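
In case it helps anyone else, my best guess at a from-source build
would be something like the following (untested sketch; it assumes
tgt's GLFS_BD build switch works on EL6 and that the
glusterfs-api-devel package is available there):

    # hypothetical: build tgt with the gluster (libgfapi) backing store
    yum install glusterfs-api-devel     # libgfapi headers
    git clone https://github.com/fujita/tgt.git
    cd tgt
    make GLFS_BD=1                      # enable the bs_glfs module
    make install
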
Regards,
Jon Heese
On 4/1/2015 4:21 PM, Dan Lambright wrote:
> Incidentally, for all you iSCSI-on-gluster fans: gluster has a
> "plugin" for LIO and the target daemon (tgt). The plugin lets the
> server send I/O directly between the iSCSI server and the gluster
> process in user space (as opposed to routing it all through FUSE).
> It's a nice speed-up, in case anyone is looking for a performance
> bump. :)
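>
> A minimal targets.conf stanza for that backing store might look like
> this (a sketch; the volume name, host, and file path are placeholders,
> and I may be misremembering the exact backing-store syntax):
>
>     <target iqn.2015-04.com.example:gluster-lun0>
>         bs-type glfs
>         backing-store myvol@gfs1:/iscsi/lun0.img
>     </target>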
>
> ----- Original Message -----
>> From: "Jon Heese" <jonheese at jonheese.com>
>> To: "Gluster-users at gluster.org List" <gluster-users at
gluster.org>
>> Sent: Wednesday, April 1, 2015 3:20:41 PM
>> Subject: Re: [Gluster-users] iscsi and distributed volume
>>
>> Or use multipath I/O (assuming your iSCSI initiator OS supports it) to
>> mount the iSCSI LUN on both nodes in an active/passive manner.
>>
>> I do this with tgtd directly on the Gluster nodes to serve up iSCSI
>> disks from an image file sitting on a replicated volume to a VMware
>> ESXi 5.5 cluster.
>>
>> If you go this route, be sure to configure multipath on the iSCSI
>> initiator(s) as active/passive (or similar), as my testing with
>> round-robin produced very poor performance and data corruption.
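>>
>> On ESXi that roughly corresponds to choosing a non-round-robin path
>> selection policy, along these lines (the device ID is a placeholder):
>>
>>     esxcli storage nmp device set --device naa.<id> --psp VMW_PSP_MRU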
>>
>> Regards,
>> Jon Heese
>> ________________________________________
>> From: gluster-users-bounces at gluster.org <gluster-users-bounces at gluster.org>
>> on behalf of Paul Robert Marino <prmarino1 at gmail.com>
>> Sent: Wednesday, April 01, 2015 2:59 PM
>> To: Dan Lambright
>> Cc: Gluster-users at gluster.org List
>> Subject: Re: [Gluster-users] iscsi and distributed volume
>>
>> You do realize you would have to put the iSCSI target disk image on
>> the mounted Gluster volume, not directly on the brick.
>> That way, as long as you have replication, your volume remains
>> accessible. You cannot point the iSCSI process directly at the brick,
>> or replication and striping won't work properly.
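>>
>> For example (host, volume, and paths are illustrative), mount the
>> volume with the gluster client and create the backing file under the
>> mount point rather than the brick directory:
>>
>>     mount -t glusterfs gfs1:/myvol /mnt/myvol
>>     # sparse 600GB image file on the mounted volume
>>     dd if=/dev/zero of=/mnt/myvol/lun0.img bs=1M count=0 seek=614400
>>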
>> That said, you could consider using something like keepalived with a
>> monitoring script to handle a VIP for failover in case a node or some
>> of the underlying processes go down.
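>>
>> A bare-bones keepalived.conf for that could look something like the
>> following (interface, VIP, and the check command are placeholders):
>>
>>     vrrp_script chk_tgtd {
>>         script "/usr/bin/pgrep tgtd"   # crude liveness check
>>         interval 2
>>     }
>>     vrrp_instance VI_1 {
>>         state BACKUP
>>         interface eth0
>>         virtual_router_id 51
>>         priority 100
>>         virtual_ipaddress {
>>             192.0.2.10/24
>>         }
>>         track_script {
>>             chk_tgtd
>>         }
>>     }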
>>
>>
>> On Wed, Apr 1, 2015 at 10:17 AM, Dan Lambright <dlambrig at redhat.com> wrote:
>>>
>>>
>>> ----- Original Message -----
>>>> From: "Roman" <romeo.r at gmail.com>
>>>> To: gluster-users at gluster.org
>>>> Sent: Wednesday, April 1, 2015 4:38:50 AM
>>>> Subject: [Gluster-users] iscsi and distributed volume
>>>>
>>>> Hi devs, list!
>>>>
>>>> I've got a somewhat simple but at the same time pretty difficult
>>>> question. But I'm running glusterfs in production and don't have
>>>> any option to test it myself :(
>>>>
>>>> Say I've got a distributed gluster volume of 2x350GB.
>>>> I want to export an iSCSI target for an M$ server and I want it to
>>>> be 600GB. I understand that when I create a large file for the
>>>> iSCSI target with dd, it will be distributed between the two
>>>> bricks. And here comes the question:
>>>>
>>>> What will happen when:
>>>>
>>>> 1. One of the bricks goes down? OK, simple - the target won't be
>>>> accessible.
>>>> 2. Will the data be available again when the brick comes back up
>>>> (i.e. after a failure due to network or power)?
>>>>
>>>> Yes, we have a backup server and UPS and generator, as we are
>>>> running a DC, but I'm just curious whether we will have to restore
>>>> the data from backups or whether it will be available after the
>>>> brick comes back up.
>>>
>>> What kind of gluster volume is it? I would hope it is replicated.
>>>
>>> Data within the file is not distributed between two bricks, unless
>>> your volume type is striped.
>>>
>>> Assuming it's replicated, if one brick went down, the other replica
>>> would continue to operate, so you would have availability.
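>>>
>>> You can verify the volume type with "gluster volume info <volname>";
>>> a two-way replicated volume is created along these lines (names are
>>> placeholders):
>>>
>>>     gluster volume create myvol replica 2 gfs1:/bricks/b1 gfs2:/bricks/b1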
>>>
>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Best regards,
>>>> Roman.
>>>>