Displaying 20 results from an estimated 300 matches similar to: "pool-refresh on iSCSI pools does not actually rescan the bus"
2011 Sep 13
1
libvirt does not recognize all devices in iscsi and mpath pools in a predictable manner
Hi,
I'm using libvirt 0.8.3 on Fedora 14 (as I wrote earlier, I'm having some
trouble updating to the newest version), and I'm having problems getting iscsi
and mpath storage pools to work in a usable and consistent manner.
I have two storage pools defined on the host machine, one for raw iscsi-
devices and one for those same iscsi devices device-mapped by multipath. They
look
2007 Sep 02
4
Performance Issues
My apology for cross posting
We have a DELL6850 with 8Gbytes of memory, four 3.2Ghz CPU's , perc 4
raid controller, with fourteen 300Gbyte 10Krpm disk on a powervault
220s, And a powervault 124T LTO-3 tape systems on a separate
160Mbyte/sec adaptec SCSI card.
The disks are configured as two 2Tbyte raid 0 partitions using the perc
4 hardware.
The problem is - reading from the disk, and
2006 Aug 16
1
gnbd help on centos
I've googled for this, but everything I find tends to talk about cluster
and doesn't give an example close enough that I can figure this out. I
have read the Red Hat Cluster Suite Configuring and Managing a Cluster
<http://www.redhat.com/docs/manuals/csgfs/browse/rh-cs-en/> links from
http://www.redhat.com/docs/manuals/csgfs/. (I think these are mirred on
centos.org, but I
2003 Apr 17
2
Samba 2.2.8a Large File Support Issues
I know this issue has been beat to death on the newsgroups, but I have
not yet found an answer.
I will make an attempt to include all the necessary info.
Description of Problem:
I am having problems trying to access large files (>2GB) on mounted
smbfs volumes.
To clarify, I am attempting to view large files that exist on an NT
server, from my Linux box, and getting erroneous results.
2009 Nov 12
1
no valid partitiontables anymore
Hi,
recently I had to shut down an iscsi-raid an the connected servers.
After the reinstallation and changing the ip config to match our new lan
design, I can login to the iscsi device, the volumes are there and I can
establish an iscsi link to some volumes.
But, some other volumes on the iscsi device are reported with an invalid
partition table or that they can't be mounted.
e.g.:
fdisk
2014 Dec 15
0
rsync output under CentOS 6
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
On 2014-12-15 14:43, Niamh Holding wrote:
> LM> Folders should only be listed if timestamps or permissions are
> LM> different.
>
> Further experimentation shows this to be the case IF the
> destination is another local drive.
>
> Unfortunately the required destination is a CIFS share, which
> might change things.
2018 Apr 26
2
cluster of 3 nodes and san
Hi list, I need a little help, I currently have a cluster with vmware
and 3 nodes, I have a storage (Dell powervault) connected by FC in
redundancy, and I'm thinking of migrating it to proxmox since the
maintenance costs are very expensive, but the Doubt is if I can use
glusterfs with a san connected by FC? , It is advisable? , I add
another data, that in another site I have another cluster
2012 May 11
1
namespace from snapshots
hi all,
I'm trying to give access to snapshots taken from by dell iscsi MD3200i
to my maildirs users.
snapshot are mounted in read only mode from my FreeBSD box.
In my /usr/local/etc/dovecot/conf.d/10-mail.conf, I have :
namespace inbox {
inbox = yes
}
namespace da1 {
prefix = INBOX.backup.da1.
hidden = no
list = yes
inbox = no
location = maildir:/da1/%u/Maildir
type = private
}
I
2012 Jun 07
1
Accessing maildir snapshots through dovecot / namespace
Hi,
I've the following setup :
- FreeBSD 9.0 / Dovecot 2.1.7
- Maildir storage over iSCSI (Dell MD3200i)
- Virtual users over LDAP
to render the storage snapshots available through
dovecot (to allow my users to browse their mail history).
Here is my conf :
namespace {
type = private
inbox = yes
list = yes
prefix = INBOX.
location =
2018 Apr 27
0
cluster of 3 nodes and san
Hi, any advice?
El mi?., 25 abr. 2018 19:56, Ricky Gutierrez <xserverlinux at gmail.com>
escribi?:
> Hi list, I need a little help, I currently have a cluster with vmware
> and 3 nodes, I have a storage (Dell powervault) connected by FC in
> redundancy, and I'm thinking of migrating it to proxmox since the
> maintenance costs are very expensive, but the Doubt is if I can
2008 Oct 13
1
"EDAC i5000 MC0: FATAL ERRORS Found!!!" error message?
Hi List,
We had the following error thrown on console on a PowerEdge server
running CentOS 5 (64 bit). Googling around didn't yield any particular
insights. The server crashed a few minutes after this message. Running
memtester, just to check, didn't find anything; and the box has been
running for months before this without issue.
I'm wondering if anyone has run across this
2002 Mar 02
4
ext3 on Linux software RAID1
Everyone,
We just had a pretty bad crash on one of production boxes and the ext2
filesystem on the data partition of our box had some major filesystem
corruption. Needless to say, I am now looking into converting the
filesystem to ext3 and I have some questions regarding ext3 and Linux
software RAID.
I have read that previously there were some issues running ext3 on a
software raid device
2007 Apr 03
2
Corrupt inodes on shared disk...
I am having problems when using a Dell PowerVault MD3000 with multipath
from a Dell PowerEdge 1950. I have 2 cables connected and mount the
partition on the DAS Array. I am using RHEL 4.4 with RHCS and a two
node cluster. Only one node is "Active" at a time, it creates a mount
to the partition, and if there is an issue RHCS will fence the device
and then the other node will mount the
2008 May 15
2
[storage-discuss] ZFS and fibre channel issues
The ZFS crew might be better to answer this question. (CC''d here)
--jc
William Yang wrote:
> I am having issues creating a zpool using entire disks with a fibre
> channel array. The array is a Dell PowerVault 660F.
> When I run "zpool create bottlecap c6t21800080E512C872d14
> c6t21800080E512C872d15", I get the following error:
> invalid vdev
2006 Jun 20
1
Windows 2003, Cygwin, and rsync
I just installed Cygwin / OpenSSH / rsync on two Dell PowerVault 745N
NASes running Windows 2003 Appliance Edition. My rsync daemons are
running, ssh works, and in theory all is well. But... I'm getting an
average of maybe 15 Mb/s rsyncing between them.
Now, I know I have an issue in the way they're connected... one is
attached to a Cisco 2970 (1 Gb/s), which is attached to a NetGear
2004 Apr 12
2
FW: cluster1 error
I am trying to use:
ocfs-support-1.0.10-1
ocfs-2.4.21-EL-smp-1.0.11-1
ocfs-tools-1.0.10-1
with RedHat AS 3.0, 2-node cluster with shared SCSI. 2 dell 1650s, dual
CPUs, PERC 3/DC cards chained to a PowerVault 220S.
I am using lvm, and here is my layout:
[root@cluster1 archive]# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda2 32G 5.1G 25G
2003 May 16
1
data=journal or data=writeback for NFS server
I'm trying to increase the performance of an NFS server running Red Hat
Linux 7.3 and using ext3. This server sees hefty read & write activity
and I've been getting a lot of complaints that writing to the disks is
taking a long time. The hardware is ample (Dell PE 2650 with 2x2.4GHz
Xeons, 6GB RAM, PERC3-DC RAID controller & attached PowerVault 220S) and
network connectivity
2018 Apr 27
1
cluster of 3 nodes and san
>but the Doubt is if I can use glusterfs with a san connected by FC?
Yes, just format the volumes with xfs and ready to go
For a replica in different DC, be careful about latency. What is the
connection between DCs?
It can be doable if latency is low.
On Fri, Apr 27, 2018 at 4:02 PM, Ricky Gutierrez <xserverlinux at gmail.com> wrote:
> Hi, any advice?
>
> El mi?., 25 abr. 2018
2008 Nov 25
0
Dell PE2970 with ATI RN50
Hey Guys,
I'm having a few troubles trying to get Cent OS 5.2 displaying correctly on
my Dell PowerEdge 2970 (I also have two PowerVault NF500 III's but they are
just getting a little screen lag at the minute, the 2970 is getting white
artifacts when I drag a window!).
Can anyone help me locate and correctly install some drivers. The 2970 has
an onbaord ATI RN50 as its an AMD board/CPU
2004 Sep 22
1
high nfs load after rsync finishes
Hi,
We have a cluster of Poweredge 1750s and couple storage nodes
(Powervault 220) running RH9 (2.4.20-31.9smp). We use rsync for daily
backups. The daily rsync write size is only around 10MB and read size
is ~300MB. The rsync finishes fast enough, but for around 4-5 hours
after rsync finishes, the load on the