Displaying 16 results from an estimated 16 matches for "cx3".
2010 Sep 29
4
XFS on a 25 TB device
Hello all,
I have just configured a 64-bit CentOS 5.5 machine to support an XFS
filesystem as specified in the subject line. The filesystem will be used to
store an extremely large number of files (in the tens of millions). Given the
size and the file count, would there be any non-standard XFS
build/configuration options I should consider?
Thanks.
Boris.
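A minimal sketch of the kind of non-default choices usually discussed for a filesystem of this size and file count, with /dev/sdb1 and /data as placeholder names (not taken from the thread):
mkfs.xfs -i size=512 /dev/sdb1            # larger inodes leave more room for inline extended attributes
mount -o inode64,noatime /dev/sdb1 /data  # inode64 lets inodes be allocated across the whole 25 TB
xfs_info /data                            # verify block size, inode size and allocation group count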
2008 Mar 26
3
generate random numbers subject to constraints
I am trying to generate a set of random numbers that fulfill the following
constraints:
X1 + X2 + X3 + X4 = 1
aX1 + bX2 + cX3 + dX4 = n
where a, b, c, d, and n are known.
Any function to do this?
Thanks,
-Ala'
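One way to read the problem, sketched as plain algebra (not from the thread): with two linear constraints and four unknowns, two of the variables can be drawn at random and the remaining two solved for.
X3, X4 drawn at random (e.g. uniform)
X1 + X2     = 1 - X3 - X4
a*X1 + b*X2 = n - c*X3 - d*X4
=>  X2 = (n - c*X3 - d*X4 - a*(1 - X3 - X4)) / (b - a)   (requires a != b)
    X1 = 1 - X2 - X3 - X4
Any further conditions (for example that every Xi lie in [0,1]) would then have to be enforced separately, e.g. by rejecting draws that violate them.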
2006 Nov 23
2
Need SAN suggestion
Hello,
It is not totally off-topic, since almost all connected servers will be RHEL
and CentOS. :)
We are considering purchasing a new storage system. Most probably we
will be choosing between HP (EVA6000), EMC (CX3-20C) and a similar
Hitachi model.
iSCSI connection is also required.
What are your experiences with the listed systems? What should we avoid? What
should we expect?
So far I only know that sequential read/write performance on the EVA is poor. :)
Thanks,
Mindaugas
2012 Aug 22
5
Centos machine sometimes unreachable
I have a simple perl script that every few hours pings the handful of
machines on my LAN. Lately I've sometimes been getting
ping of 192.168.0.1 succeeded
ping of 192.168.0.7 succeeded
ping of 192.168.0.5 FAILED
ping of 192.168.0.6 succeeded
ping of 192.168.0.9 succeeded
The machine in question has been running CentOS faithfully for about six
years, and no recent changes to it have been
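A minimal shell sketch of the kind of check described (the addresses are the ones in the output above; the original script was perl and is not shown):
for h in 192.168.0.1 192.168.0.7 192.168.0.5 192.168.0.6 192.168.0.9; do
    if ping -c 1 -W 2 "$h" >/dev/null 2>&1; then   # one probe, 2 s timeout
        echo "ping of $h succeeded"
    else
        echo "ping of $h FAILED"
    fi
done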
2007 Aug 21
3
Hot swap SATA?
Should it be possible to hot-swap SATA drives with CentOS 5? It doesn't
seem to work on my system. Removing an unmounted drive locked the
system up, and leaving one out at bootup made the device names change
and kept grub from finding /boot on a SCSI drive whose name shifted up.
--
Les Mikesell
lesmikesell at gmail.com
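For reference, the sysfs interface usually involved in a manual SATA/SCSI swap, with sdb and host1 as placeholder names (whether it behaves with a given CentOS 5 controller driver is exactly the open question here):
echo 1 > /sys/block/sdb/device/delete           # tell the kernel to drop the drive before pulling it
echo "- - -" > /sys/class/scsi_host/host1/scan  # rescan that host after inserting a drive
Referring to filesystems by LABEL= or UUID= in /etc/fstab rather than /dev/sdX also makes the device renaming at boot less painful.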
2014 Feb 28
6
suggestions for large filesystem server setup (n * 100 TB)
Hi,
Over time, the requirements and possibilities regarding filesystems have
changed for our users.
Currently I'm faced with the question:
What might be a good way to provide one big filesystem for a few users,
one which could also be enlarged later; backing up the data is not the question here.
Big in this context means up to a couple of hundred TB, maybe.
OK, I could install one hardware RAID with e.g. N big drives
2007 Jun 15
3
zfs and EMC
Hi there,
I see strange behavior if I create a ZFS pool on an EMC PowerPath
pseudo device.
I can create a pool on emcpower0a
but not on emcpower2a:
zpool core dumps with "invalid argument".
That's my second machine with PowerPath and ZFS;
the first one works fine, even zfs/powerpath and failover ...
Is there anybody who has seen the same failure and found a solution? :)
Greets
Dominik
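For context, the commands in question would be of this form (the pool name is a placeholder; the device names are the ones from the post):
zpool create testpool emcpower0a   # reported to work
zpool create testpool emcpower2a   # reported to dump core with "invalid argument"
Comparing the labels/partitioning of the two pseudo devices (format, prtvtoc) would be the obvious first difference to look for.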
2015 Jun 25
0
LVM hatred, was Re: /boot on a separate partition?
...0 [0 ]
[root at backup-rdc ~]# lvscan
ACTIVE '/dev/vg_opt/lv_backups' [5.86 TB] inherit
ACTIVE '/dev/VolGroup00/LogVol00' [37.91 GB] inherit
ACTIVE '/dev/VolGroup00/LogVol01' [1.97 GB] inherit
ACTIVE '/dev/bak-rdc/cx3-80' [26.37 TB] inherit
[root at backup-rdc ~]#
It's just beautiful the way I can take another 1.95 TB LUN, add it to
the volume group, expand the logical volume, and then expand the
underlying filesystem (XFS) and just dynamically add storage. Being on
an EMC Clariion foundation, I don...
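The workflow being described, sketched with a placeholder path for the new LUN and an assumed mount point (the VG and LV names are the ones shown by lvscan above):
pvcreate /dev/mapper/new_lun                # initialise the new 1.95 TB LUN as a physical volume
vgextend bak-rdc /dev/mapper/new_lun        # add it to the existing volume group
lvextend -l +100%FREE /dev/bak-rdc/cx3-80   # grow the logical volume into the new space
xfs_growfs /backups                         # grow the mounted XFS filesystem to match
All of this happens online; note that XFS can only be grown, not shrunk.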
2007 Jul 09
0
Kernel panic with OCFS2 1.2.6 for EL5 (answer to multipath question)
...00
> Anyone knows how to prevent the kernel from trying to access this lun?
>
> Regards,
> Luis Freitas
>
> Daniel <daniel.anderzen@gmail.com> wrote:
>
> Hello
>
> System: Two brand new Dell 1950 servers with dual Intel Quadcore Xeon
> connected to an EMC CX3-20 SAN. Running CentOS 5 x86_64 - both with kernel
> 2.6.18-8.1.6-el5 x86_64.
>
> I just noticed a panic on one of the servers:
>
> Jul 2 04:08:52 megasrv2 kernel: (3568,2):dlm_drop_lockres_ref:2289 ERROR:
> while dropping ref on
> 87B24E40651A4C7C858EF03ED6F3595F:M00000000000...
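On the quoted question of keeping a particular LUN out of the way, one commonly used piece (not necessarily the whole answer) is a blacklist entry in /etc/multipath.conf; the WWID below is a placeholder, not the LUN from this thread:
blacklist {
    wwid "360060160xxxxxxxxxxxxxxxxxxxxxxxx"
}
This only stops device-mapper-multipath from claiming the LUN; the SCSI layer will still see the underlying sd devices.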
2007 Mar 23
2
ZFS ontop of SVM - CKSUM errors
...Fri Mar 23 12:52:36 2007
config:
NAME                STATE   READ WRITE CKSUM
test                ONLINE     0     0   134
  /dev/md/dsk/d100  ONLINE     0     0   134
errors: 66 data errors, use '-v' for a list
bash-3.00#
Disks are from Clariion CX3-40 with FC 15K disks using MPxIO (2x 4Gb links).
I was changing the cache watermarks on the array, and now I wonder which is at fault: the array or SVM+ZFS?
I'm a little bit suspicious of SVM, as I can only get ~80MB/s on average, with short bursts up to ~380MB/s (no matter if it's ZFS, UFS or dir...
2008 Oct 14
1
FreeBSD 7-STABLE, isp(4), QLE2462: panic & deadlocks
Hello everybody,
we recently got three Dell PowerEdge servers equipped with Qlogic
QLE2462 cards (dual FC 4Gbps ports) and an EMC CLARiiON CX3-40 SAN.
The servers with the FC cards were successfully and extensively
tested with an older SUN StorEdge T3 SAN.
However, when we connect them to the CX3-40, create and mount a new
partition and then do something as simple as "tar -C /san -xf ports.tgz"
the system panics and deadlocks....
2011 Apr 02
8
ZFS @ centOS
I have trouble finding definitive information about this. I am considering
the use of SME 7.5.1 (CentOS-based) for my server needs, but I do want to
use ZFS, and thus far I have only found information about the ZFS-Fuse
implementation and unclear hints that there is another way. Phoronix
reported that http://kqinfotech.com/ would release some form of ZFS for the
kernel but I have found nothing.
2007 Apr 17
10
storage type for ZFS
The paragraph below is from the ZFS admin guide:
Traditional Volume Management
As described in "ZFS Pooled Storage" on page 18, ZFS eliminates the need for a separate volume
manager. ZFS operates on raw devices, so it is possible to create a storage pool comprised of logical
volumes, either software or hardware. This configuration is not recommended, as ZFS works best
when it uses raw physical
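The distinction the guide is drawing, in command form (pool and device names are illustrative only):
zpool create tank c1t0d0              # whole physical disk: the recommended case, ZFS manages the label itself
zpool create tank /dev/md/dsk/d100    # pool on a software (SVM) volume: works, but is the discouraged case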
2011 Apr 12
17
40TB File System Recommendations
Hello All
I have a brand spanking new 40TB Hardware Raid6 array to play around
with. I am looking for recommendations for which filesystem to use. I am
trying not to break this up into multiple file systems as we are going
to use it for backups. Other factors are performance and reliability.
CentOS 5.6
array is /dev/sdb
So here is what I have tried so far
reiserfs is limited to 16TB
ext4
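For reference, a hedged sketch of the XFS route on such an array (the mount point is a placeholder); the ext4 tooling shipped with CentOS 5.6 also could not create filesystems above 16TB with 4K blocks, which is why XFS is the usual answer at 40TB:
mkfs.xfs /dev/sdb                  # mkfs.xfs defaults are generally sane even at this size
mount -o inode64 /dev/sdb /backup  # allow inode allocation beyond the first 1 TB
xfs_info /backup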
2017 Apr 24
3
OT: systemd Poll - So Long, and Thanks for All the fish.
On 04/20/2017 05:55 PM, Warren Young wrote:
> ... I find that most hardware is ready to fall over by the time the
> CentOS that was installed on it drops out of support anyway.
> ...
James' point isn't the hardware cost, it's the people cost for
retraining. In many ways the Fedora treadmill is easier, being that
there are many more, smaller jumps rather than the huge leap from C6
2007 Jul 29
1
6 node cluster with unexplained reboots
We just installed a new cluster with 6 HP DL380g5, dual single-port Qlogic 24xx HBAs connected via two HP 4/16 Storageworks switches to a 3Par S400. We are using the 3Par recommended config for the Qlogic driver and device-mapper-multipath, giving us 4 paths to the SAN. We do see some SCSI errors where DM-MP is failing a path after getting a 0x2000 error from the SAN controller, but the path gets put
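A quick way to watch what the excerpt describes (paths being failed and reinstated) on such a setup, using the stock multipath tools:
multipath -ll                                # list each LUN, its four paths and their current state
grep -i multipathd /var/log/messages | tail  # recent path failure / reinstatement messages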