similar to: Performance issues with iSCSI under Linux

Displaying 20 results from an estimated 10000 matches similar to: "Performance issues with iSCSI under Linux"

2010 May 04
8
iscsitgtd failed request to share on zpool import after upgrade from b104 to b134
Hi, I am posting my question to both storage-discuss and zfs-discuss as I am not quite sure what is causing the messages I am receiving. I have recently migrated my zfs volume from b104 to b134 and upgraded it from zfs version 14 to 22. It consists of two zvols, 'vol01/zvol01' and 'vol01/zvol02'. During zpool import I am getting a non-zero exit code,
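The usual sequence after a build migration like this is an import followed by explicit pool and dataset upgrades. A minimal sketch, using the pool name vol01 from the post (the status check before upgrading is a precaution, since pool upgrades are one-way):

```shell
# Import the pool under the new build, then verify before upgrading.
zpool import vol01
zpool status -v vol01     # confirm the pool is healthy first
zpool upgrade vol01       # one-way: moves the pool from v14 to the current version
zfs upgrade -r vol01      # upgrade the filesystems/zvols as well
```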
2010 Sep 14
9
dedicated ZIL/L2ARC
We are looking into the possibility of adding a dedicated ZIL and/or L2ARC devices to our pool. We are looking into getting 4 × 32GB Intel X25-E SSD drives. Would this be a good solution to slow write speeds? We are currently sharing out different slices of the pool to windows servers using comstar and fibrechannel. We are currently getting around 300MB/sec performance with 70-100% disk busy.
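Attaching the four SSDs might look like the sketch below; the pool name "tank" and device names are placeholders, not taken from the post. Mirroring the slog is common practice since losing an unmirrored log device on older pool versions could be fatal, while L2ARC devices carry no pool data and need no redundancy:

```shell
# Two SSDs as a mirrored ZIL (slog) for synchronous writes.
zpool add tank log mirror c2t0d0 c2t1d0
# The other two as L2ARC read cache (striped; no redundancy required).
zpool add tank cache c2t2d0 c2t3d0
```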
2012 Jul 02
14
HP Proliant DL360 G7
Hello, Has anyone out there been able to qualify the Proliant DL360 G7 for your Solaris/OI/Nexenta environments? Any pros/cons/gotchas (vs. previous generation HP servers) would be greatly appreciated. Thanks in advance! -Anh
2007 Feb 18
7
Zfs best practice for 2U SATA iSCSI NAS
Is there a best practice guide for using zfs as a basic rackable small storage solution? I'm considering zfs with a 2U 12-disk Xeon based server system vs something like a second-hand FAS250. Target environment is a mixture of Xen or VI hosts via iSCSI and nfs/cifs. Being able to take snapshots of running (or maybe paused) xen iscsi vols and re-export them for cloning and remote backup
2008 Aug 25
5
Unable to import zpool since system hang during zfs destroy
Hi all, I have a RAID-Z zpool made up of 4 x SATA drives running on Nexenta 1.0.1 (OpenSolaris b85 kernel). It has on it some ZFS filesystems and a few volumes that are shared to various windows boxes over iSCSI. On one particular iSCSI volume, I discovered that I had mistakenly deleted some files from the FAT32 partition that is on it. The files were still in a ZFS snapshot that was made earlier
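When the deleted files still exist in a snapshot of a zvol, one recovery path is to clone the snapshot and export the clone as a second LUN. An illustrative sketch, with placeholder pool/volume/snapshot names (the post does not give them):

```shell
# Find the snapshot that still holds the files.
zfs list -t snapshot -r tank/iscsivol
# Clone it into a new, writable volume.
zfs clone tank/iscsivol@backup tank/iscsivol_restore
# Share the clone over iSCSI and copy the files back from the Windows side.
```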
2010 Apr 15
6
ZFS for ISCSI ntfs backing store.
I'm looking to move our file storage from Windows to OpenSolaris/ZFS. The windows box will be connected through 10G for iscsi to the storage. The windows box will continue to serve the windows clients and will be hosting approximately 4TB of data. The physical box is a sunfire x4240, single AMD 2435 processor, 16G ram, LSI 3801E HBA, ixgbe 10g card. I'm looking for suggestions
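Backing an NTFS LUN with a zvol via COMSTAR might be sketched as below; the pool and volume names are assumptions, and the block size should be tuned to the NTFS cluster size rather than taken as given here:

```shell
# Create a 4TB zvol to back the iSCSI LUN (names/sizes are placeholders).
zfs create -V 4T -o volblocksize=64k tank/ntfsvol
# Register the zvol as a SCSI logical unit with COMSTAR.
sbdadm create-lu /dev/zvol/rdsk/tank/ntfsvol
# Expose the LU to initiators (use the GUID printed by sbdadm).
stmfadm add-view <GUID-from-sbdadm>
```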
2012 Nov 07
45
Dedicated server running ESXi with no RAID card, ZFS for storage?
Morning all... I have a dedicated server in a data center in Germany, and it has 2 × 3TB drives, but only software RAID. I have got them to install VMware ESXi and so far everything is going ok... I have the 2 drives as standard data stores... But I am paranoid... So, I installed Nexenta as a VM, gave it a small disk to boot off and 2 × 1TB disks on separate physical drives... I have created a
2010 Dec 16
6
AHCI or IDE?
Hello All, I want to build a home file and media server now. After experimenting with an Asus board and running into unsolved problems, I have bought this Supermicro board X8SIA-F with an Intel i3-560 and 8 GB RAM http://www.supermicro.com/products/motherboard/Xeon3000/3400/X8SIA.cfm?IPMI=Y along with the LSI HBA SAS 9211-8i
2010 Apr 05
14
Are there (non-Sun/Oracle) vendors selling OpenSolaris/ZFS based NAS Hardware?
I've seen the Nexenta and EON webpages, but I'm not looking to build my own. Is there anything out there I can just buy? -Kyle
2012 Jan 06
3
ZFS + Dell MD1200's - MD3200 necessary?
We are looking at building a storage platform based on Dell HW + ZFS (likely Nexenta). Going Dell because they can provide solid HW support globally. Are any of you using the MD1200 JBOD with head units *without* an MD3200 in front? We are being told that the MD1200s won't "daisy chain" unless the MD3200 is involved. We would be looking to use some sort of
2012 Apr 06
6
Seagate Constellation vs. Hitachi Ultrastar
Happy Friday, List! I'm spec'ing out a Thumper-esque solution and having trouble finding my favorite Hitachi Ultrastar 2TB drives at a reasonable post-flood price. The Seagate Constellations seem pretty reasonable given the market circumstances, but I don't have any experience with them. Anybody using these in their ZFS systems, and have you had good luck? Also, if
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10GbE iSCSI. I am trying to figure out the best way to provide both performance and resiliency given that the Equallogic provides the redundancy. Since I am hoping to provide a 2TB
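Exporting a ZFS filesystem over NFS for ESXi is a one-line property change once the dataset exists. A minimal sketch, with a placeholder pool/dataset name and an assumed ESXi management subnet:

```shell
# Create a dataset for VM storage and share it over NFS.
zfs create tank/vmstore
# rw/root access for the ESXi hosts' subnet (address is an assumption).
zfs set sharenfs='rw=@10.0.0.0/24,root=@10.0.0.0/24' tank/vmstore
```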
2011 Apr 07
40
X4540 no next-gen product?
While I understand everything at Oracle is "top secret" these days. Does anyone have any insight into a next-gen X4500 / X4540? Does some other Oracle / Sun partner make a comparable system that is fully supported by Oracle / Sun? http://www.oracle.com/us/products/servers-storage/servers/previous-products/index.html What do X4500 / X4540 owners use if they'd like more
2008 Feb 08
4
List of supported multipath drivers
Where can I find a list of supported multipath drivers for ZFS? Keith McAndrew, Senior Systems Engineer, Northern California, SUN Microsystems - Data Management Group
2012 Nov 22
19
ZFS Appliance as a general-purpose server question
A customer is looking to replace or augment their Sun Thumper with a ZFS appliance like 7320. However, the Thumper was used not only as a protocol storage server (home dirs, files, backups over NFS/CIFS/Rsync), but also as a general-purpose server with unpredictably-big-data programs running directly on it (such as corporate databases, Alfresco for intellectual document storage, etc.) in order to
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134? These dedup bugs are my main frustration - if a staff member does a rm * in a directory with dedup you can take down the whole storage server - all with
2008 Jun 02
29
ZFS Hardware Check, OS X Compatibility, NEWBIE!!
This is my first post here, and I hope it is ok that I posted in this thread. I have been doing a bit of reading on the Solaris platforms, and seem to be inclined to try out the OpenSolaris OS or Solaris 10. My only worry is that my lack of knowledge with the command line may make troubleshooting difficult. It seems fairly straightforward creating zpools etc, but maybe Nexenta is
2008 Jul 07
8
zfs-discuss Digest, Vol 33, Issue 19
Hello Ross, We're trying to accomplish the same goal over here, i.e. serving multiple VMware images from an NFS server. Could you tell us what kind of NVRAM device you ended up choosing? We bought a Micromemory PCI card but can't get a Solaris driver for it... Thanks Gilberto
2011 Apr 05
11
ZFS send/receive to Solaris/FBSD/OpenIndiana/Nexenta VM guest?
Hello, I'm debating an OS change and also thinking about my options for data migration to my next server, whether it is on new or the same hardware. Migrating to a new machine, I understand, is a simple matter of ZFS send/receive, but reformatting the existing drives to host my existing data is an area I'd like to learn a little more about. In the past I've asked about
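The send/receive migration mentioned above is typically a recursive snapshot piped over ssh. A hedged sketch, where "tank" and "newhost" are placeholders for the poster's pool and target machine:

```shell
# Snapshot everything in the pool atomically, then replicate the full
# hierarchy (properties, snapshots, descendants) to the new machine.
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | ssh newhost zfs receive -Fdu tank
```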
2010 Sep 29
10
Resilver making the system unresponsive
This must be resilver day :) I just had a drive failure. The hot spare kicked in, and access to the pool over NFS was effectively zero for about 45 minutes. Currently the pool is still resilvering, but for some reason I can access the file system now. Resilver speed has been beaten to death, I know, but is there a way to avoid this? For example, is more enterprisey hardware less susceptible to