Displaying 20 results from an estimated 4000 matches similar to: "How do I know if I am using SAN?"
2007 May 16
5
[RFC] pv-scsi driver (scsiback/scsifront)
Hi all.
We developed a pv-scsi driver, referring to Fujita-san's scsi-driver
and blkback.
(see, http://www.xensource.com/files/xensummit_4/Xen_Summit_8_Matsumoto.pdf)
The pv-scsi driver's features are as follows:
* Guest has dedicated SCSI-HBAs of Dom0.
* Guest can send scsi_cdb to the HBAs.
* Guest recognises the HBAs via the hostno entry in xenstore.
Currently, we are
2010 Feb 16
2
High Performance and Availability
Hello everyone,
I am currently running Dovecot as a high performance solution to a particular
kind of problem. My userbase is small, but it murders email servers. The volume
is moderate, but message retention requirements are stringent, to put it nicely.
Many users receive a high volume of email traffic, but want to keep every
message, and *search* them. This produces mail accounts up to
2010 Feb 17
1
CentOS 5.3 host not seeing storage device
Maybe one of you has experienced something like this before.
I have a host running CentOS5.3, x86_64 version with the standard
qla2xxx driver. Both ports are recognized and show output in dmesg
but they never find my storage device:
qla2xxx 0000:07:00.1: LIP reset occured (f700).
qla2xxx 0000:07:00.1: LIP occured (f700).
qla2xxx 0000:07:00.1: LIP reset occured (f7f7).
qla2xxx 0000:07:00.0: LOOP
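A rough sketch of the usual first checks in this situation (host7 below is a
placeholder; substitute the host numbers dmesg reports for the qla2xxx ports):
  # cat /sys/class/fc_host/host*/port_state
      (Online vs Linkdown per FC port)
  # cat /sys/class/fc_host/host*/port_name
      (local WWPNs, to check against the switch zoning / array LUN masking)
  # echo 1 > /sys/class/fc_host/host7/issue_lip
      (force a LIP on that HBA)
  # echo "- - -" > /sys/class/scsi_host/host7/scan
      (rescan all channels, targets and LUNs on that host)
If port_state is Online but no LUNs ever appear, the problem is usually zoning
or LUN masking on the fabric/array side rather than the driver.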
2009 Oct 01
1
rsync file corruption when destination is a SAN LUN (Solaris 9 & 10)
I have run into a problem using 'rsync' to copy files from local disk
to a SAN mounted LUN / file-system.
The 'rsync' seems to run fine and reports no errors, but some
files are corrupted (checksums don't match the originals,
and the file data is changed).
So far, I have found this problem on both Solaris 9 and Solaris 10
OSes and on several different models of
Sparc systems
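Not a fix, but a quick way to confirm and scope the corruption; the source and
destination paths below are placeholders:
  # rsync -av --checksum /local/data/ /san/data/
      (force full checksum comparison instead of size+mtime)
  # rsync -avnc /local/data/ /san/data/
      (dry run afterwards: any file it still lists differs between the two sides)
If the dry run keeps flagging the same files, comparing md5sums of the source
and destination copies narrows down whether the damage happens on write or on
later reads.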
2007 Aug 18
2
Correlate i/o with a process
Hello:
I have a server with 2 HBAs, and the users keep complaining about
performance problems. My question is, how can I tell which process is
responsible for the high I/O wait? Also, is it possible to see how much data
is being pushed through by my 2 HBAs?
TIA!
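On a reasonably recent kernel with per-task I/O accounting, a sketch along
these lines answers both questions (the host numbers and pid are placeholders):
  # iotop -o
      (only processes currently doing I/O; iotop package)
  # pidstat -d 5
      (per-process kB read/written per second, every 5 seconds; sysstat package)
  # cat /proc/<pid>/io
      (cumulative read_bytes / write_bytes for one process)
  # cat /sys/class/fc_host/host*/statistics/tx_words
  # cat /sys/class/fc_host/host*/statistics/rx_words
      (raw traffic counters per FC HBA; sample twice and diff to get a rate)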
2015 Apr 14
1
HBA enumeration and multipath configuration
# cat /etc/redhat-release
CentOS Linux release 7.1.1503 (Core)
# uname -r
3.10.0-123.20.1.el7.x86_64
Hi,
We use iSCSI over a 10G Ethernet Adapter and SRP over an Infiniband adapter to provide multipathing
to our storage:
# lspci | grep 10-Gigabit
81:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
81:00.1 Ethernet controller: Intel Corporation
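A sketch of how one might line up the transports, SCSI hosts and multipath
maps on a box like this (nothing here is specific to the poster's setup):
  # multipath -ll
      (each mpath device with its component paths, their hosts and states)
  # iscsiadm -m session -P 1
      (iSCSI sessions and the scsi_host each one is bound to)
  # lsscsi -t
      (SCSI devices with their transport addresses, so iSCSI and SRP paths
       are easy to tell apart)
  # ls -d /sys/class/scsi_host/host*
      (the enumerated hosts, to compare against the adapters from lspci)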
2020 Jan 04
0
CentOS 7 as a Fibre Channel SAN Target
While waiting, I tried CentOS 8, which was an even bigger bust. I wiped that clean and tried again with Fedora 31. Same darn error: "Could not create Target in configFS".
Anyone??
Thank you,
Steffan Cline
steffan at hldns.com
602-793-0014
On 1/2/20, 2:00 AM, "CentOS on behalf of Steffan Cline via CentOS" <centos-bounces at centos.org on behalf of centos at centos.org>
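"Could not create Target in configFS" generally points at the LIO core module
or configfs itself rather than at targetcli; a hedged checklist:
  # lsmod | grep target_core_mod
      (is the LIO core loaded at all?)
  # modprobe target_core_mod
  # mount | grep configfs
      (configfs should be mounted on /sys/kernel/config)
  # mount -t configfs none /sys/kernel/config
      (only needed if the previous line shows nothing)
  # ls /sys/kernel/config/target/
      (this is where targetcli creates its objects)
If the modprobe fails, the running kernel simply lacks the target modules,
which would explain the identical error across distributions.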
2020 Sep 20
4
CentOS 8 LSI SAS2004 Driver
Hello,
I've recently been given domain over a number of supermicro storage
servers using Broadcom / LSI SAS2004 PCI-Express Fusion-MPT SAS-2
[Spitfire] (rev 03) to run a bunch of SSDs. I was attempting to do fresh
installs of CentOS 8 and have come to find out that Red Hat deprecated
support for a number of HBAs in 8, including all running the SAS2004 chip.
Does anyone know if there is a
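A quick way to confirm that the in-box driver really no longer claims the chip
(the grep patterns are generic, not from the post); the usual suggestion after
that is a third-party kmod/driver-disk build such as ELRepo's mpt3sas
packages, though the exact package name is an assumption here:
  # lspci -nn | grep -i -e SAS2004 -e Fusion-MPT
      (note the [vendor:device] ID; 1000 is the LSI vendor ID)
  # modinfo mpt3sas | grep -i <device-id>
      (no matching alias line means the shipped mpt3sas no longer binds that ID)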
2019 Jan 11
1
CentOS 7 as a Fibre Channel SAN Target
For quite some time I've been using FreeNAS to provide services as a NAS over ethernet and SAN over Fibre Channel to CentOS 7 servers each using their own export, not sharing the same one.
It's time for me to replace my hardware and I have a new R720XD that I'd like to use in the same capacity but configure CentOS 7 as a Fibre Channel target rather than use FreeNAS any further.
I'm doing
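For what it's worth, the route people usually describe is LIO with the qla2xxx
HBA flipped into target mode; the module option and targetcli paths below are
my understanding rather than something from this thread, so treat them as a
sketch (device names and WWPNs are placeholders):
  # echo "options qla2xxx qlini_mode=disabled" > /etc/modprobe.d/qla2xxx-target.conf
  # dracut -f
      (rebuild the initramfs so the option is applied at boot, then reboot)
  # cat /sys/class/fc_host/host*/port_name
      (WWPNs of the local HBA ports)
  # targetcli
  /> backstores/block create name=disk0 dev=/dev/sdb
  /> qla2xxx/ create <local-port-wwpn>
  /> qla2xxx/<local-port-wwpn>/luns create /backstores/block/disk0
An ACL for each initiator's WWPN is still needed before the LUN becomes
visible from the CentOS 7 clients.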
2020 Nov 12
2
ssacli start rebuild?
> On Nov 11, 2020, at 5:38 PM, Warren Young <warren at etr-usa.com> wrote:
>
> On Nov 11, 2020, at 2:01 PM, hw <hw at gc-24.de> wrote:
>>
>> I have yet to see software RAID that doesn't kill the performance.
>
> When was the last time you tried it?
>
> Why would you expect that a modern 8-core Intel CPU would impede I/O in any measurable way as
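Rather than argue from memory, a quick read-only fio run against the md array
and against a single member disk makes the comparison concrete; /dev/md0 and
/dev/sda are placeholders, and randread keeps the test non-destructive:
  # fio --name=md-randread --filename=/dev/md0 --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based
  # fio --name=disk-randread --filename=/dev/sda --rw=randread --bs=4k \
        --ioengine=libaio --iodepth=32 --direct=1 --runtime=60 --time_based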
2009 Dec 02
7
SAN support
Hi,
I'm having problems attaching disks from an FC SAN to a Solaris 10 guest.
The Xen host is an OpenSolaris box: "SunOS node1 5.11 snv_127 i86pc i386 i86xpv".
my xen guest is named pg4.
This command works fine:
virsh attach-disk pg4 /dev/dsk/c8t600A0B800029D69A000013CA4B00E1ABd0 hdb
and before, I was able to import this volume as a zpool on the Xen host, so
the connection to this
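Before digging into the guest, it is worth confirming what libvirt actually
attached; pg4 is taken from the post above, and domblklist needs a newer
libvirt than 2009-era builds (dumpxml works everywhere):
  # virsh domblklist pg4
      (block devices libvirt believes are attached to the guest)
  # virsh dumpxml pg4 | grep -A5 '<disk'
      (the generated <disk> element: driver, source device and target name such as hdb)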
2012 Jun 29
1
Storage Pools & nodedev-create specific scsi_host#?
Hello everyone,
The current host build is RHEL 6.2, soon to be upgraded.
I'm in the process of mapping out a KVM/RHEV topology. I have questions about the landscape of storage pools and the instantiation of specific scsi_host IDs using virsh nodedev-create and some magic XML definitions. I'm grouping these questions together because the answer to one may impact the other.
High-level
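The documented piece of this is NPIV vHBA creation with nodedev-create;
whether a specific scsi_host number can be pinned is exactly the open
question, since libvirt normally assigns the next free one. A sketch, with
scsi_host5 as a placeholder parent:
  # virsh nodedev-list --cap vports
      (parent HBAs capable of carrying an NPIV vHBA)
  # cat > vhba.xml <<'EOF'
  <device>
    <parent>scsi_host5</parent>
    <capability type='scsi_host'>
      <capability type='fc_host'/>
    </capability>
  </device>
  EOF
  # virsh nodedev-create vhba.xml
      (libvirt reports the new scsi_hostN it created; WWNN/WWPN are
       auto-generated unless specified in the XML)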
2008 Mar 17
1
Boot from FC SAN CentOS 5.1 x86-64
Anything special needed to boot from a SAN in
CentOS 5.1 x86-64 ?
We have a new system running a QLA2342 FC HBA connected to
a SAN, have a volume exported to it and would like to boot
from it.
We PXE-boot a kickstart install and that works: the installer sees
the disk, installs to it, and reboots. When GRUB tries to
load, all it prints is "GRUB" and it sits there.
When we try the Xen edition of
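A bare "GRUB" with no menu usually means stage1 ran but stage2 could not be
found, which on SAN boot is often a BIOS disk-ordering / device.map mismatch.
A sketch of re-checking it from rescue mode with the legacy GRUB shipped in
CentOS 5.1 (device names are placeholders):
  # cat /boot/grub/device.map
      (does hd0 really map to the SAN boot LUN?)
  # grub
  grub> find /grub/stage1
      (or /boot/grub/stage1 if /boot is not a separate partition; prints the (hdX,Y) GRUB sees)
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> quit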
2004 Mar 06
1
OCFS and multipathing
I've got my RAC cluster running pretty smoothly now (thanks again for all
the help so far), but I only have single connections between the servers and
the RAID array. The servers each have two Qlogic HBAs, and I'd like to find
out if there's any reasonable way to implement multipathing.
The platform is RHEL 3, and Red Hat's knowledgebase indicates that they
strongly recommend
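One era-appropriate option on RHEL 3, before dm-multipath, is the md
multipath personality; the device names below are placeholders and whether
your particular RHEL 3 kernel build includes that personality is something to
verify first:
  # mdadm --create /dev/md0 --level=multipath --raid-devices=2 /dev/sdc /dev/sdd
      (both paths to the same LUN are combined into one md device)
  # cat /proc/mdstat
      (shows the multipath set and which path is currently active)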
2007 May 21
1
slow file creation
Hi all,
I'm troubleshooting an ocfs2 performance problem where creating files in
a directory containing ~180k files is taking significant time to
complete. Sometimes creating an empty file will take >100 seconds to
complete. This is a three node cluster. I'm currently running OCFS2
1.2.3-1. Are there any changes in a recent version that may address
this issue? What should I look
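For what it's worth, later OCFS2 releases added an indexed-directories
feature precisely for large-directory create/lookup performance, which 1.2.3
predates; the filesystem's feature flags can be inspected directly. The
device name is a placeholder and the feature name is from memory:
  # debugfs.ocfs2 -R "stats" /dev/sdX1 | grep -i feature
      (lists the compat/incompat feature flags; look for indexed-dirs)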
2010 Apr 01
1
iSCSI supported initiators
What initiators does libvirt support for the iSCSI pool? Will it work with QLogic's iSCSI HBAs, for instance?
---
Alexander Pierce, RHCE
Solutions Architect -- Red Hat, Inc.
apierce at redhat.com -- +1 (530) MR-LINUX (675-4689)
Learn. Network. Experience Open Source.
The 2010 Red Hat Summit and JBoss World
Boston. June 22 - 25, 2010
http://www.theredhatsummit.com
http://www.jbossworld.com
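As far as I know the libvirt 'iscsi' pool type drives the software initiator
via open-iscsi/iscsiadm; offload HBAs such as QLogic's typically just present
their LUNs as ordinary SCSI disks instead. A minimal pool sketch, with the
portal address and IQN as placeholders:
  # cat > iscsipool.xml <<'EOF'
  <pool type='iscsi'>
    <name>iscsipool</name>
    <source>
      <host name='192.168.1.10'/>
      <device path='iqn.2010-04.com.example:storage.disk1'/>
    </source>
    <target>
      <path>/dev/disk/by-path</path>
    </target>
  </pool>
  EOF
  # virsh pool-define iscsipool.xml
  # virsh pool-start iscsipool
  # virsh vol-list iscsipool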
2005 Aug 11
1
How to prevent loading qla2300.o module
I apologize in advance as this is not really a CentOS-specific issue,
but I don't know where else to turn. We are configuring some Dell
1850s for a customer, and they have all been configured with a QLogic
dual channel HBA. But no storage has been (or will be in the near
future) attached to these HBAs. During the boot process (both for the
kickstart and after the OS has been
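For the 2.6/modprobe.conf generation these Dell 1850s are presumably running,
the usual trick is to stub the module out and rebuild the initrd; a sketch
(the kernel-version expansion is generic, not from the post):
  # grep qla /etc/modprobe.conf
      (anaconda/kudzu usually add an "alias scsi_hostadapterN qla2300" line; remove it)
  # echo "install qla2300 /bin/true" >> /etc/modprobe.conf
      (any request to load qla2300 now runs /bin/true instead)
  # mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
      (rebuild the initrd so it no longer pulls the module in at boot)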
2014 Nov 19
1
Infra - CentOS {www,seven}.centos.org downtime
Due to hardware maintenance (moving some gluster volumes to
InfiniBand, so adding/configuring the IB HBAs), we'll have to shut down
some nodes in the CentOS Infra.
Migration is scheduled for Friday November 19th, 9:30 am UTC time.
You can convert to local time with $(date -d '2014-11-19 09:30 UTC')
The expected "downtime" is
2011 Dec 30
1
PCI/VGA passthrough and function level reset (FLR)
I'm wondering if FLR really must be supported by the PCI card for
PCI/VGA passthrough to work or if it will work anyway. I have read in
the VTdHowTo that trying to pass through hardware without the FLR
feature will result in an error. At the same time I read on a pdf
document on the VMWare website
(http://www.vmware.com/files/pdf/techpaper/vsp_4_vmdirectpath_host.pdf)
that:
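Whether a given card advertises FLR is easy to check from the host before
attempting passthrough; 01:00.0 below is a placeholder PCI address:
  # lspci -vv -s 01:00.0 | grep -i flreset
      ("FLReset+" in the Device Capabilities line means the function supports
       Function Level Reset; "FLReset-" means it does not)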
2020 Sep 20
1
CentOS 8 LSI SAS2004 Driver
On 20/09/2020 04:16, Akemi Yagi wrote:
> On Sat, Sep 19, 2020 at 8:04 PM William Markuske <wmarkuske at gmail.com> wrote:
>>
>> Hello,
>>
>> I've recently been given domain over a number of supermicro storage
>> servers using Broadcom / LSI SAS2004 PCI-Express Fusion-MPT SAS-2
>> [Spitfire] (rev 03) to run a bunch of SSDs. I was attempting to do