similar to: iSCSI Solution

Displaying 20 results from an estimated 10000 matches similar to: "iSCSI Solution"

2016 Jun 22
3
Mailboxes on NFS or iSCSI
Hello, we are running Dovecot (2.2.13-12~deb8u1) on Debian stable. Configured with Mailbox++, IMAP, POP3, LMTPD, Managesieved, ACL. Mailboxes are on a local 1.2TB RAID, about 5310 accounts. We are slowly running out of space and are considering moving the mailboxes onto a NetApp disk array with two independent network connections. Are there some pitfalls? Not sure we should use NFS or
2005 Nov 09
1
iSCSI experiences
I just tried an iSCSI connection to a NetApp filer. I first tried using CentOS-4.2, but the iSCSI initiator would not work: the command iscsi-ls would not show anything. Without spending too much time trying to figure out why, I tried from a CentOS-3 box. Connecting from the CentOS-3 box worked without a problem the first time. I will try further to get the CentOS-4 box to connect, but
2011 Sep 14
1
KVM CO 5.6 VM guest crashes running iSCSI
Hi All, I'm running KVM host on CentOS 5.6 x64, all of my guests are CO 5.6 x64 as well. I create / run VMs via libvirt. Here are the packages I have: # rpm -qa | egrep "kvm|virt" kvm-83-224.el5.centos python-virtinst-0.400.3-11.el5 kvm-qemu-img-83-224.el5.centos kmod-kvm-83-224.el5.centos libvirt-python-0.8.2-15.el5 etherboot-zroms-kvm-5.4.4-13.el5.centos libvirt-0.8.2-15.el5
2005 Nov 03
1
Has anyone successfully used centos 4 as an iscsi CLIENT/w CHAP?
Well, I'm using an Adaptec Snap 4500 and I've had nothing but problems with it. Initially I thought it was an issue with CHAP authentication, so I completely disabled authentication, and the kernel is still reporting in dmesg and /var/log/messages that it cannot authenticate with the target /boggle... On a side note, I know the iSCSI slice works because I can connect to it just fine on Windows XP
2008 Apr 28
8
NetApp vfiler example scripts
Hi, For anyone who is interested; I created some basic scripts based on my current iSCSI block script. http://kinkrsoftware.nl/contrib/xen/block-netapp http://kinkrsoftware.nl/contrib/xen/netapp-lun.py Basically these two scripts allow you to start a vfiler with a qtree of customers. You log in to the NetApp upon boot, and can use: netapp://customer1/disk1 It is work in progress... because I
2008 Nov 14
10
Shared volume: Software-ISCSI or GFS or OCFS2?
Hello list, I want to use shared volumes between several VMs and definitely don't want to use NFS or Samba! So I have three options: 1. simulated (software) iSCSI 2. GFS 3. OCFS2 What do you suggest and why? Kind regards, Florian
2006 Nov 09
7
xen, iscsi and resilience to short network outages
Hi. Here is the short version: If dom0 experiences a short (< 120 second) network outage, the guests whose disks are on iSCSI LUNs get (seemingly) unrecoverable IO errors. Is it possible to make Xen more resilient to such problems? And now the full version: We're testing Xen on iSCSI LUNs. The hardware/software configuration is: * Dom0 and guest OS: SLES10 x86_64 * iSCSI LUN on
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a NetApp filer. Both the T2000 and the NetApp have two Ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format': 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
2007 Feb 18
7
Zfs best practice for 2U SATA iSCSI NAS
Is there a best practice guide for using ZFS as a basic rackable small storage solution? I'm considering ZFS on a 2U 12-disk Xeon-based server system vs. something like a second-hand FAS250. The target environment is a mixture of Xen or VI hosts via iSCSI and NFS/CIFS. Being able to take snapshots of running (or maybe paused) Xen iSCSI vols and re-export them for cloning and remote backup
2011 Apr 01
15
Zpool resize
Hi, a LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm changing the LUN size on the NetApp and Solaris `format' sees the new value, but the zpool still has the old value. I tried zpool export and zpool import but it didn't resolve my problem. bash-3.00# format Searching for disks...done AVAILABLE DISK SELECTIONS: 0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
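The usual answer to this symptom is that a zpool does not grow automatically when its underlying LUN does. A hedged command sketch (the pool name `tank` and device `c0d1` are placeholders for illustration, not confirmed from the thread):

```shell
# Sketch only: "tank" and c0d1 are hypothetical names.
zpool set autoexpand=on tank   # allow the pool to grow when its devices grow
zpool online -e tank c0d1      # ask ZFS to expand onto the resized LUN
zpool list tank                # SIZE should now reflect the new LUN size
```

On releases without the autoexpand property, an export/import after the label is rewritten was the traditional workaround, which may be why the poster tried it.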
2015 Mar 18
4
NFS4 ACLs with samba 3 (or 4)
I know this was discussed a lot a few years ago, but my google searches aren't quite getting me where I'm confident in the answer, so I figure I'd just ask again here if that's ok. Here's what we have, and what we'd like to do: Storage is a Netapp (cluster mode CDOT 8.2 I believe), it's NFS exported to our linux system. Linux system is CentOS 6 and can NFS mount the
2008 Apr 22
3
mount: /dev/sdb1 already mounted or /blah busy
How do I go about troubleshooting this? I'm using RHEL 4 update 6. mount: /dev/sdb1 already mounted or /blah busy It's actually an iSCSI LUN (NetApp filer). I successfully configured (ext3) and mounted it, but when I rebooted, the /dev/sdb1 device/partition is seen by the kernel and it shows up with "fdisk -l". Nevertheless I get that error. I've tried
2012 Feb 01
3
A Billion Files on OCFS2 -- Best Practices?
We have an application that has many processing threads writing more than a billion files ranging from 2KB to 50KB, with 50% under 8KB (currently there are 700 million files). The files are never deleted or modified: they are written once, and read infrequently. The files are hashed so that they are evenly distributed across ~1,000,000 subdirectories up to 3 levels deep, with up to 1000 files
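The layout described above (hash each file name, then spread files evenly over ~1,000,000 subdirectories up to 3 levels deep) can be sketched as follows. The 100-buckets-per-level fan-out and the function name are assumptions for illustration, not the poster's actual scheme:

```python
import hashlib

def shard_path(name: str, levels: int = 3, fanout: int = 100) -> str:
    """Map a file name to a stable 3-level directory path.

    100 buckets per level * 3 levels = 1,000,000 leaf directories,
    matching the ~1M-subdirectory layout described in the post.
    Hypothetical sketch; fan-out and hash choice are assumptions.
    """
    digest = hashlib.md5(name.encode()).hexdigest()
    # Take two hex chars per level and reduce them mod the fan-out.
    parts = [f"{int(digest[2 * i:2 * i + 2], 16) % fanout:02d}"
             for i in range(levels)]
    return "/".join(parts + [name])

# The same name always maps to the same path, so reads need no index.
print(shard_path("message-000001.dat"))
```

Because the path is a pure function of the name, readers can locate any of the billion files without a lookup table, and writes spread uniformly across directories.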
2008 Nov 06
2
Painfully slow NetApp with database
Hello, we have a long-running problem with NetApp filers. When we connect a server to the filer, sequential read performance is ~70MB/s. But once we run a database on the server, sequential read performance drops to ~11MB/s. That's happening with two servers. One is running Oracle, the other MySQL. During speed tests the database load is very light (less than 1MB/s of reads and writes). During the tests the NetApp
2012 Jan 10
3
Clustering solutions - mail, www, storage.
Hi all. I am currently working for a hosting provider in a 100+ Linux hosts environment. We have www and mail HA solutions; as storage we mainly use NFS at the moment. We are also using DRBD, Heartbeat, Corosync. I am now gathering info to make a cluster with: - two virtualization nodes (active master and passive slave); - two storage nodes (for VM files) used by the mentioned virtualization nodes
2006 Jan 05
2
Linux HA may not be the best choice in your situation. High Availability using 2 sites
Just to clarify, I'm looking at this from an application layer Point of View. One of the reasons why I'm looking at it that way, is because Tim said he was looking at LinuxHA..."application level" redundancy that uses IP. Tim, just to let you know, I don't believe that LinuxHA will work in the way you described, only because of the different IP ranges. It looks like Linux
2008 Jan 17
1
Add more space to LVM
I have a database server that is running out of space. All my databases are being stored in a 80G /opt partition. Because I'm using LVM, wouldn't I be able to pop the HDDs (a h/w raid volume) in, add it to the LVM, and resize my ext3 /opt partition? Everything that I've been reading says this is possible, but I'm not sure. Has anyone done this and are there any pitfalls to
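Yes, this is the standard LVM growth path for the scenario above. A hedged sketch (the device `/dev/sdc`, volume group `vg00`, logical volume `opt`, and size are hypothetical placeholders, not from the post):

```shell
# Sketch: /dev/sdc, vg00 and /dev/vg00/opt are placeholder names.
pvcreate /dev/sdc                 # initialize the new hardware-RAID volume as a PV
vgextend vg00 /dev/sdc            # add it to the existing volume group
lvextend -L +80G /dev/vg00/opt    # grow the logical volume
resize2fs /dev/vg00/opt           # grow ext3 to fill the larger LV
```

The main pitfalls are taking a backup first and checking that the kernel supports online ext3 growth; on RHEL 4 the online-resize tool was `ext2online`, otherwise unmount before resizing.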
2020 Jun 25
1
virsh edit does not work when <initiator> and <auth> is used in config
Hello, I am having a problem when using: "virsh edit <vm_name>"; my VM has a network iSCSI disk defined: <disk type='network' device='disk'> <driver name='qemu' type='raw'/> <source protocol='iscsi' name='iqn.1992-08.com.netapp:5481.60080e50001ff2000000000051aee24d/0'> <host
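The snippet above is cut off at the <host> element. For context, a complete disk stanza of this shape, with the <initiator> and <auth> elements the subject line mentions, typically looks like the following hypothetical example (the portal address, initiator IQN, and CHAP credentials are placeholders; only the protocol and target name come from the post):

```xml
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <source protocol='iscsi'
          name='iqn.1992-08.com.netapp:5481.60080e50001ff2000000000051aee24d/0'>
    <host name='192.0.2.10' port='3260'/>        <!-- placeholder portal -->
    <initiator>
      <iqn name='iqn.2020-06.example:client1'/>  <!-- placeholder initiator IQN -->
    </initiator>
    <auth username='chap-user'>                  <!-- placeholder CHAP user -->
      <secret type='iscsi' usage='iscsi-secret'/>
    </auth>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
```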
2009 Apr 26
9
Peculiarities of COW over COW?
We run our IMAP spool on ZFS that's derived from LUNs on a NetApp filer. There's a great deal of churn in e-mail folders, with messages appearing and being deleted frequently. I know that ZFS uses copy-on-write, so that blocks in use are never overwritten, and that deleted blocks are added to a free list. This behavior would spread the free list all over the zpool. As well,