Displaying 9 results from an estimated 9 matches for "perc6".
2009 Jun 18
2
PERC6 driver on CENTOS 5.3
We have a DELL R900 server with a PERC6/i and a PERC 6/E in it. The OS is CentOS 5.3. I downloaded the latest PERC 6 driver from the DELL site and tried to install it. I got an error message that said:
Module version 00.00.03.21 for megaraid_sas.ko
is not newer than what is already found in kernel 2.6.18-128.1.10.el5 (00.00.04.01-RH1).
Does CENTOS 5...
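For context, a minimal sketch of how the version check behind that error can be reproduced by hand before installing a vendor driver (generic commands, not taken from the post):

    # version of the megaraid_sas driver bundled with the running kernel
    uname -r
    modinfo -F version megaraid_sas

    # version of the module actually loaded, if the controller is active
    cat /sys/module/megaraid_sas/version

    # the downloaded package only installs if its version is newer than the above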
2013 Feb 15
2
OT: LSI SAS1068e / Perc6/iR RAID Health
...he PE1950s.
I'm finding drivers and Megaraid Storage Manager downloads via LSI's site,
but I'd prefer not to install a bunch of crapware. All I need is the
proper LSI daemon which exports info via SNMP.
I've used the check_sasraid_megaraid Nagios script [0] in the past for PERC5i
and PERC6i controllers in Dell PE2950s. For 2950s I've only had to
install sas_snmp-3.11-0003.i386.rpm from LSI to get rolling.
I've tried pulling parts out of the MSM zip/tarball mess to install only
the lsi_mrdsnmpd daemon (lsi_mrdsnmpagent).
PERC5i = LSI 8048e (megaraid)
PERC6i = LSI SAS 1078...
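As a rough sketch of that minimal route (the rpm and daemon names are from the post; the community string and OID are placeholders):

    # install only the LSI SNMP agent, skipping the full MSM bundle
    rpm -ivh sas_snmp-3.11-0003.i386.rpm

    # make sure the agent daemon runs and starts at boot
    service lsi_mrdsnmpd start
    chkconfig lsi_mrdsnmpd on

    # quick check that the controller answers over SNMP
    snmpwalk -v2c -c public localhost 1.3.6.1.4.1.3582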
2009 Jun 28
5
How to change Disk sequence on DELL R900 CENTOS 5.3?
We have a DELL R900 with CentOS 5.3 on it. This DELL R900 comes with one integrated PERC6/I and two PERC6/E cards. The PERC 6/I controls 5 internal disks. The original disk sequence is:
/dev/sda1 /boot
/dev/sda2 /
/dev/sdb1 swap
...
After I configured the PERC6/E disks and rebooted, /dev/sda changed to a RAID disk and the original /boot and / changed to /dev/sde1 and /dev/sde2.
My modprobe.conf is:...
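The usual way around the renumbering (a generic sketch, not the poster's actual config; the device names follow the post) is to stop depending on /dev/sdX and mount by label instead:

    # label the original filesystems if the installer did not already do so
    e2label /dev/sde1 /boot
    e2label /dev/sde2 /

    # /etc/fstab: mount by label so it no longer matters which /dev/sdX
    # the internal PERC 6/i volume becomes after the 6/E arrays appear
    LABEL=/boot  /boot  ext3  defaults  1 2
    LABEL=/      /      ext3  defaults  1 1

    # grub.conf can likewise boot with root=LABEL=/ on the kernel line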
2008 Mar 26
3
HW experience
Hi,
we would like to establish a small Lustre instance and are planning to use
standard Dell PE1950 servers (2x QuadCore + 16 GB RAM) for the OSTs, with a
JBOD (MD1000) driven by the PE1950's internal RAID controller (RAID-6) for
the disks. Any experience (good or bad) with such a config?
thanks,
Martin
2013 Dec 09
3
Gluster infrastructure question
Heyho guys,
I've been running glusterfs for years in a small environment without big
problems.
Now I'm going to use GlusterFS for a bigger cluster, but I have some
questions :)
Environment:
* 4 Servers
* 20 x 2TB HDD, each
* RAID controller
* RAID 10
* 4x bricks => Replicated, Distributed volume
* Gluster 3.4
1)
I'm wondering if I can
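For reference, a layout like the one listed (4 servers, one RAID 10 brick each, replicated-distributed) would be created roughly like this on Gluster 3.4; hostnames and brick paths are made up:

    # one brick per server; "replica 2" pairs them into a 2 x 2
    # distributed-replicated volume
    gluster volume create gv0 replica 2 \
        srv1:/export/brick1 srv2:/export/brick1 \
        srv3:/export/brick1 srv4:/export/brick1
    gluster volume start gv0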
2009 May 15
1
Dell 2950 with CentOS 5.3
Hi all,
Has anyone experienced issues with a Dell 2950 and CentOS 5.3?
I faced an issue like this: the server installed fine and was kept in production
for a while. After doing an update on the server and rebooting, it said no boot
devices were found. Checking the PERC6/i controller, both hard disks showed online
and optimal. Using a live CD to check, both hard disks still had the data.
My issue was solved by removing the hard disks and clearing the RAID configuration.
I put the hard disks back in and reimported the configuration from the disks. The
server was able to boot up prope...
2009 Feb 12
8
Xen 3.3.1 Windows HVM Disk I/O -> domU and dom0 hangs
Hi,
we are currently working on getting Windows running on our Xen servers,
but we are facing a severe problem where dom0 and all domUs hang for
1-5 seconds from time to time.
We think it is probably caused by disk I/O, because top sometimes
shows 100% wa (waiting on I/O) during the hang.
dom0 has CPU 0 for exclusive use and the Windows VMs use CPUs 1 to 7.
Should we give dom0 more than once
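The 1 + 7 CPU split described above is normally set at boot and in the guest configs; a rough sketch (option availability varies across Xen releases, and this is not the poster's actual config):

    # /boot/grub/menu.lst: give dom0 a single vCPU and pin it to a physical CPU
    kernel /xen.gz dom0_max_vcpus=1 dom0_vcpus_pin

    # or pin at runtime: bind dom0's vCPU 0 to physical CPU 0
    xm vcpu-pin Domain-0 0 0

    # Windows HVM guest config: keep the domUs off CPU 0
    vcpus = 7
    cpus  = "1-7"

    # verify the resulting placement
    xm vcpu-list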
2008 Jan 20
2
Dell Perc 6 disk geometry problem with RAID5 (both 6.3 final and 7.0 RC1)
Hi,
We bought a new Dell PowerEdge 2950III with a Perc 6/i and have the disk
geometry problem using 6.3 final or 7.0 RC1. It seems that we are not alone; at
least one guy reported a similar problem earlier:
http://unix.derkeiler.com/Mailing-Lists/FreeBSD/questions/2008-01/msg00506.html
I was reading the mailing list and found that some people are happily
using this hardware with the latest
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool shows a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub.)
Both zpool iostat and iostat -Xn show lots of idle disk time, no
above-average service times, and no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3 GHz CPUs, 32 GB RAM, 96 spindles arranged into raidz vdevs on OI 147.
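For readers following along, these are the sort of commands behind those observations (interval and pool name are placeholders):

    # scrub progress and the current scrub rate
    zpool status tank

    # per-vdev throughput, sampled every 5 seconds
    zpool iostat -v tank 5

    # per-device service times and busy percentages (illumos iostat)
    iostat -xn 5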