Displaying 20 results from an estimated 3000 matches similar to: "DTrace provider proposal: fibre channel (fc) COMSTAR port provider probes"
2009 Apr 23
1
DTrace provider for the COMSTAR iSCSI Target Port Provider - Review Request
The iscsit_dtrace_draftv5.txt describes the DTrace probe definitions for
the COMSTAR iSCSI target port provider. For all those interested in
iSCSI, please take a look and send me your
feedback, questions, and suggestions by Thursday, April 30, 2009.
thanks
Priya
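For reviewers with a build that already carries the provider, listing its
probes is the quickest way to follow along with the draft; this assumes
the provider is named "iscsi", as the draft does:

    # dtrace -l -P iscsi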
2009 Apr 30
0
DTrace provider proposal: COMSTAR iSCSI target port provider probes
I would like to request approval of the DTrace proposal for the COMSTAR
iSCSI target port provider. I have updated the specification of the
iscsi provider at http://wikis.sun.com/display/DTrace/iscsi+Provider. Your
feedback is appreciated.
thanks
Priya
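For readers who want to see the provider in action, here is a minimal
sketch of a per-initiator throughput script. It assumes the probe and
argument names from the draft (xfer-done, iscsiinfo_t's ii_initiator,
xferinfo_t's xfer_len); the final integration may differ:

    #!/usr/sbin/dtrace -s
    /* Sum bytes transferred per initiator, printed every 10 seconds. */
    iscsi*:::xfer-done
    {
            @bytes[args[1]->ii_initiator] = sum(args[2]->xfer_len);
    }
    tick-10sec
    {
            printa("%-50s %@d bytes\n", @bytes);
            trunc(@bytes);
    }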
2009 Apr 29
0
COMSTAR iSCSI DTrace Probes
The earlier post with the DTrace probe definitions (sent on April 23) was only meant to solicit feedback from all prospective users of the iSCSI DTrace probes. I will follow up shortly with an updated document requesting approval from the community.
thanks
Priya
2009 Apr 17
4
unable to find any probes from the nfs provider
I want to list/use the nfs probes, but I get the error "dtrace: failed to
match nfs*:::: No probe matches description". Is there a way to enable
nfs provider probes? My system is running snv_112 (bfu'ed from the gate
archive).
# dtrace -lP nfs*
ID PROVIDER MODULE FUNCTION NAME
dtrace: failed to match nfs*:::: No probe matches description
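A likely explanation (an assumption on my part, based on how kernel
providers register their probes): the nfs probes live in the NFS server
module, which is not loaded on an idle system. Sharing a filesystem and
starting the service pulls the module in, after which the wildcard
should match:

    # zfs set sharenfs=on rpool/export/home    # dataset name is a placeholder
    # svcadm enable -r svc:/network/nfs/server:default
    # dtrace -lP 'nfs*'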
2017 Dec 19
0
kernel: blk_cloned_rq_check_limits: over max segments limit., Device Mapper Multipath, iBFT, iSCSI COMSTAR
Hi,
WARNING: Long post ahead
I have an issue when starting multipathd. The kernel complains about "blk_cloned_rq_check_limits:
over max segments limit".
The server in question is configured for KVM hosting. It boots via iBFT to an iSCSI volume. Target
is COMSTAR and underlying that is a ZFS volume (100GB). The server also has two infiniband cards
providing four (4) more paths over SRP
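One thing worth checking (my guess at the cause, not a confirmed
diagnosis): blk_cloned_rq_check_limits() rejects a cloned request when
the destination path's queue limits are smaller than the source's, and
iSCSI and SRP paths often advertise different limits. Comparing them per
path device is cheap:

    # for q in /sys/block/sd*/queue; do
    >     echo "$q: $(cat $q/max_segments) segments, $(cat $q/max_sectors_kb) KB"
    > done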
2009 Aug 28
0
COMSTAR and ESXi
Hello all,
I am running an OpenSolaris 2009.06 server. I installed COMSTAR and enabled it. Two ESXi 4.0 servers connect to COMSTAR via iSCSI on a dedicated switch, and both see the problem regardless of whether the other is on or off. The error I see on ESXi is "Lost connectivity to storage device
naa.600144f030bc450000004a9806980003. Path vmhba33:C0:T0:L0 is
2009 Dec 17
0
Upgrading a volume from iscsitgt to COMSTAR
Hi, I have a zfs volume that's exported via iscsi for my wife's Mac to
use for Time Machine.
I've just built a new machine to house my "big" pool, and installed
build 129 on it. I'd like to start using COMSTAR for exporting the
iscsi targets, rather than the older iscsi infrastructure.
I've seen quite a few tutorials on how to use
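For the archive, the usual COMSTAR sequence for exporting an existing
zvol looks roughly like this (pool and volume names are placeholders,
and the GUID comes from sbdadm's output):

    # svcadm enable stmf
    # svcadm enable -r svc:/network/iscsi/target:default
    # sbdadm create-lu /dev/zvol/rdsk/bigpool/timemachine
    # stmfadm add-view <guid-from-create-lu>
    # itadm create-target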
2010 Oct 11
0
Ubuntu iSCSI install to COMSTAR zfs volume Howto
I apologize if this has been covered before. I have not seen a blow-by-blow installation guide for Ubuntu onto an iSCSI target.
The install guides I have seen assume that you can make a target visible to all, which is a problem if you want multiple iSCSI installations on the same COMSTAR target. During the install, Ubuntu generates three random initiator names, and you have to deal with them to get things
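One way to keep each installer's LUN private (a sketch; the host-group
name and initiator IQN below are made up) is a COMSTAR host group per
machine instead of the visible-to-all default view:

    # stmfadm create-hg ubuntu01
    # stmfadm add-hg-member -g ubuntu01 iqn.1993-08.org.debian:01:deadbeef
    # stmfadm add-view -h ubuntu01 -n 0 <guid-of-the-lu>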
2010 Feb 24
0
OpenSolaris COMSTAR I/O stats
Hi all,
Using "old" way of sharing volumes over iscsi in zfs (zfs set
shareiscsi=on) i can see i/o stats per iscsi volume running a command
iscsitadm show stats -I 1 volume.
However i couldn''t find something similar in new framework,comstar.
Probably i''m missing something, so if anyone has some tips please share
it :)
Thanks in advance,
Bruno
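Until something equivalent to iscsitadm's stats shows up in COMSTAR's own
tools, one option is the iscsi DTrace provider discussed earlier on this
page; a rough per-target one-liner, assuming that provider's xfer-done
probe and argument layout:

    # dtrace -n 'iscsi*:::xfer-done
        { @[args[1]->ii_target] = sum(args[2]->xfer_len); }
        tick-1sec { printa(@); trunc(@); }'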
2009 Mar 03
0
COMSTAR/zfs integration for faster SCSI
Hi,
Nexenta CP and NexentaStor have integrated COMSTAR with ZFS, which
provides a 2-3x performance gain over the userland SCSI target daemon. I've
blogged in more detail at
http://www.gulecha.org/2009/03/03/nexenta-iscsi-with-comstarzfs-integration/
Cheers,
Anil
http://www.gulecha.org
2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi,
I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving.
I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'.
During zpool import I am getting a non-zero exit code,
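If the messages are coming from the old iscsitgt trying to honor a
leftover shareiscsi property (my guess, since b134 is COMSTAR-era),
clearing the property on both zvols should make the import quiet:

    # zfs set shareiscsi=off vol01/zvol01
    # zfs set shareiscsi=off vol01/zvol02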
2007 Aug 20
3
RAID storage - SATA, SCSI, or Fibre Channel?
I have a Dell PowerEdge 2950 and am looking to add more storage. I know a lot
of factors can go into the answer, but for present and future
technology planning, should I look for a rack of SATA, SCSI, or Fibre Channel
drives? Maybe I'm dating myself with Fibre Channel, and possibly SCSI?
I may be looking to add a few TB now, and possibly more later.
What are people
2019 Jan 11
1
CentOS 7 as a Fibre Channel SAN Target
For quite some time I've been using FreeNAS to provide services as a NAS over Ethernet and a SAN over Fibre Channel to CentOS 7 servers, each using its own export, not sharing the same one.
It's time for me to replace my hardware, and I have a new R720XD that I'd like to use in the same capacity, but configure CentOS 7 as a Fibre Channel target rather than use FreeNAS any further.
I'm doing
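CentOS 7's in-kernel LIO target can serve FC with a QLogic HBA once the
qla2xxx driver is switched into target mode. A rough sketch, assuming a
QLogic card (the backstore device and WWNs below are placeholders):

    # echo "options qla2xxx qlini_mode=disabled" > /etc/modprobe.d/qla2xxx.conf
    # dracut -f && reboot      # rebuild the initramfs so the option sticks
    # targetcli /backstores/block create lun0 /dev/sdb
    # targetcli /qla2xxx create naa.21000024ff000001
    # targetcli /qla2xxx/naa.21000024ff000001/luns create /backstores/block/lun0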
2009 Dec 31
6
zvol (slow) vs file (fast) performance snv_130
Hello,
I was doing performance testing, validating zvol performance in particular, and found zvol write performance to be slow: ~35-44MB/s at 1MB blocksize writes. I then ran the same test against the underlying zfs file system and got 121MB/s. Is there any way to fix this? I would really like to have comparable performance between the zfs filesystem and zfs zvols.
# first test is a
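The excerpt cuts off before the test itself; a comparison along these
lines (my reconstruction with placeholder names, not the poster's exact
commands) reproduces the shape of the experiment:

    # dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1M count=4096   # zvol
    # dd if=/dev/zero of=/tank/testfile bs=1M count=4096                # file on the same pool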
2008 May 15
2
[storage-discuss] ZFS and fibre channel issues
The ZFS crew might be better placed to answer this question. (CC'd here)
--jc
William Yang wrote:
> I am having issues creating a zpool using entire disks with a fibre
> channel array. The array is a Dell PowerVault 660F.
> When I run "zpool create bottlecap c6t21800080E512C872d14
> c6t21800080E512C872d15", I get the following error:
> invalid vdev
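The truncated error is most likely the standard "invalid vdev
specification, use '-f' to override" message, which zpool prints when the
disks still carry old labels (common on LUNs recycled from a hardware
RAID array). If the data on them is expendable, forcing the create is the
usual answer:

    # zpool create -f bottlecap c6t21800080E512C872d14 c6t21800080E512C872d15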
2006 Jan 26
1
Virtualization of Fibre Channel?
In principle, will Xen handle virtualization of Fibre Channel HBAs? Say I
have a physical server with a single FC HBA installed. If I create two
domains, will Xen virtualize the HBA so that both of the domains can talk to
the FC fabric?
-M.
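Xen does not multiplex an HBA between guests by itself; the common
patterns are to hand each guest whole LUNs as phy: devices from dom0, or
to PCI-passthrough the HBA to a single domain. A domU config sketch of
the first pattern (the device path is a placeholder):

    disk = [ 'phy:/dev/mapper/fclun0,xvda,w' ]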
2005 Dec 07
6
RE: live migration with xen 2.0.7 with fibre channel on Debian - help needed
I had this exact same problem with 2.0.7. I had done a little investigation and found scheduled_work gets called to schedule the shutdown in the user domain kernel, but the shutdown work that gets scheduled never actually gets called. I'm glad someone else is seeing this same problem now :-) Like you, it worked a number of times in a row, then would fail, and it didn't seem to
2008 Jun 06
3
how Xen recognizes Fibre Channel-based storage resources
Hello
I'm a newbie here, so please don't laugh at me...
I plan to set up several (4) physical servers to run Xen-virtualized
OSes (for example, web servers). What I need is that those virtualized
OSes have simultaneous access to disk storage physically created
on an FC array.
I read that this could be done with OCFS or the Lustre file system, but my
question is: how virtualized
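A sketch of the pattern the poster describes: every domU config points at
the same shared FC LUN (the multipath device name is a placeholder), and
a cluster filesystem such as OCFS2 inside the guests arbitrates access;
without one, concurrent writers will corrupt the disk:

    disk = [ 'phy:/dev/mapper/mpath0,xvdb,w!' ]   # 'w!' marks the disk shareable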
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding dedicated ZIL and/or L2ARC devices to our pool. We are looking at getting 4 x 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to Windows servers using COMSTAR and Fibre Channel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
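For reference, attaching the devices would look something like this (pool
and disk names are placeholders): two SSDs as a mirrored log, the other
two as cache:

    # zpool add tank log mirror c2t0d0 c2t1d0
    # zpool add tank cache c2t2d0 c2t3d0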
2010 Jun 07
20
Homegrown Hybrid Storage
Hi,
I'm looking to build a virtualized web hosting server environment accessing
files on a hybrid storage SAN. I was looking at using the Sun Fire X4540
with the following configuration (rough zpool sketch after the list):
- 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives)
- 2 Intel X25 32GB SSDs as a mirrored ZIL
- 4 Intel X25 64GB SSDs as the L2ARC.
-
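A rough zpool layout matching the list above (disk names invented, and
only two of the six RAID-Z vdevs written out):

    # zpool create tank \
        raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 c1t6d0 \
        raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0 \
        spare c1t7d0 c2t7d0 \
        log mirror c3t0d0 c3t1d0 \
        cache c4t0d0 c4t1d0 c4t2d0 c4t3d0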