similar to: Need a CONFIRMED working hardware sata raid10 card

Displaying 20 results from an estimated 500 matches similar to: "Need a CONFIRMED working hardware sata raid10 card"

2007 Mar 14
3
New System Build
Here is what I've ordered for the new system: Case: Antec TX1050B, Mid-Tower Server Case MoBo: Intel D945GCL CPU: Intel Pentium-4 631 RAM: Aeneon 512 Mb 240-pin DDR2 SDRAM DIMM - DDR2 667 Disk: Hitachi 80 Gb SATA 3G Carrier: Vantec MRK-200ST-BK DVD: got one on the shelf I'm going to try Floppy: Sony MPF920-Z-121 My intention is to load CentOS-4 complete and use the
2007 Mar 17
2
CentOS-4.3 Install Fails
I'm using the same CD's that I've used to install CentOS on my other systems. After the first failure I had the installation verify the install media just in case. It passed. I've tried the default install (used for all previous installs) and the i586 option (after googling). They both fail in the same way. Here is a transcript of what is output to the screen: Running
2008 Sep 26
2
USB to SATA / eSATA adapter compatibility
Hi all, Does anyone know if this USB to SATA / e-SATA adapter will work on Linux? http://www.vantecusa.com/front/product/view_detail/266 Their website and knowledge base don't say anything about it, so I'm checking the list to try my luck :) -- Kind Regards Rudi Ahlers
2016 Oct 27
6
[OT] How to recover data from an IDE drive
Hello, As some may recall, I suffered a hardware failure of a 10-year-old IBM Netvista back in January. I was backing up my personal data, 'My Documents', to my CentOS server, but I apparently didn't get my emails. It was a main board failure and I believe the data is still good on the hard drive. Only problem: it's an IDE drive, and my server and new PC have SATA drives. Is it possible
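One common approach is a cheap USB-to-IDE adapter (or a spare IDE port in an older box); once attached, the old disk shows up as an ordinary block device. A minimal sketch, with /dev/sdc1 as a hypothetical partition name:
dmesg | tail                          # find the device name the adapter was assigned
mkdir -p /mnt/olddisk
mount -o ro /dev/sdc1 /mnt/olddisk    # read-only, so the recovery attempt cannot touch the data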
2009 Jun 19
3
snmp statistics for vnics
One simple question about network statistics for vnics: We noticed that the statistics for vnics (used by VirtualBox guests) are not there. When the nics are plumbed in the host OS, we do see them showing up, but all records are 0. Can someone tell me how to get the correct network info via snmp on the VirtualBox host? Thanks in advance. IF-MIB::ifDescr.1 = STRING: lo0 IF-MIB::ifDescr.2 = STRING:
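For reference, a quick way to see what the agent actually reports is to walk the interface table and read the octet counters directly. This assumes the net-snmp tools, SNMP v2c with a 'public' community, and 'vboxhost' as a placeholder hostname:
snmpwalk -v2c -c public vboxhost IF-MIB::ifDescr                                 # list the interfaces the agent knows about
snmpget  -v2c -c public vboxhost IF-MIB::ifInOctets.2 IF-MIB::ifOutOctets.2      # traffic counters for interface index 2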
2010 Sep 30
3
Cannot destroy snapshots: dataset does not exist
Hello, I have a ZFS filesystem (zpool version 26 on Nexenta CP 3.01) which I'd like to roll back, but it's having an existential crisis. Here's what I see: root@bambi:/# zfs rollback bambi/faline/userdocs@AutoD-2010-09-28 cannot rollback to 'bambi/faline/userdocs@AutoD-2010-09-28': more recent snapshots exist use '-r' to
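For context, the usual sequence when rollback complains about newer snapshots is to list them first and then let rollback destroy them with -r; a minimal sketch using the dataset name from the post (the -r step is destructive for the newer snapshots, so only after checking the list):
zfs list -t snapshot -r bambi/faline/userdocs            # see which snapshots ZFS itself believes exist
zfs rollback -r bambi/faline/userdocs@AutoD-2010-09-28   # -r destroys the more recent snapshots along the way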
2010 Jun 18
5
Migrating away from Nagios
I have to rebuild a new Nagios box and thought this might be a good time to migrate away. I use snmp mostly for everything but with the fork Nagios endured I wonder about putting any more effort into the project. I probably should look at OpenNMS again, but the other options I think might work are Icinga (Should be trivial to migrate) or Zenoss or maybe even Zabbix? Anyone have experience in
2004 Apr 08
3
Squid + shaping question
Hi folks, So, I have a pretty simple setup - a linux router machine running as a firewall/router for a small neighborhood LAN (approx 20 machines). I also have squid running on the box in non-transparent mode, and I have also set up NAT for TCP/UDP ports above 1024 for all clients, with SSH/POP/SMTP/CVS NAT'd for selected ones based on MAC filtering. No hosts whatsoever can access ports 80
2007 Apr 24
2
setting up CentOS 5 with Raid10
I would like to set up CentOS on 4 SATA hard drives that I would like to configure in RAID10. I read somewhere that Raid10 support is in the latest kernel, but I can't seem to get anaconda to let me create it. I only see raid 0, 1, 5, and 6. Even when I tried to set up raid5 or raid1, it would not let me put the /boot partition on it, and I thought that this was now possible. Is it
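As a rough sketch of the common workaround: keep /boot on a small RAID1 (the bootloader cannot read an md raid10 set) and build the raid10 array for the rest manually with mdadm, either from the installer's shell or after a minimal install. Device names below are placeholders:
mdadm --create /dev/md0 --level=1  --raid-devices=4 /dev/sd[abcd]1   # small 4-way RAID1 for /boot
mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]2   # md raid10 for everything else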
2013 Mar 12
1
what is "single spanned virtual disk" on RAID10????
We have a DELL R910 with an H800 adapter in it and several MD1220 enclosures connected to the H800. The MD1220 holds 24 hard disks. When I configured RAID10, there was a choice called 'single spanned virtual disk' (22 disks). Can anyone tell me how a 'single spanned virtual disk' works? Is there any documentation related to it? Thanks.
2011 May 05
1
Converting 1-drive ext4 to 4-drive raid10 btrfs
Hello! I have a 1 TB ext4 drive that's quite full (~50 GB free space, though I could free up another 100 GB or so if necessary) and two empty 0.5 TB drives. Is it possible to get another 1 TB drive and combine the four drives into a btrfs raid10 setup without (if all goes well) losing my data? Regards, Paul
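In principle the migration can be done in place; a hedged sketch, assuming a reasonably recent btrfs-progs, working backups, and placeholder device names:
btrfs-convert /dev/sda1                                        # convert the existing ext4 filesystem in place
mount /dev/sda1 /mnt
btrfs device add /dev/sdb /dev/sdc /dev/sdd /mnt               # add the three empty drives to the new filesystem
btrfs balance start -dconvert=raid10 -mconvert=raid10 /mnt     # redistribute data and metadata as raid10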
2009 Dec 10
3
raid10, centos 4.x
I just created a 4 drive mdadm --level=raid10 on a centos 4.8-ish system here, and shortly thereafter remembered I hadn't updated it in a while, so I ran yum update... while installing/updating stuff, got these errors: Installing: kernel ####################### [14/69] raid level raid10 (in /proc/mdstat) not recognized ... Installing: kernel-smp
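The message appears to come from the kernel package's initrd-building scripts parsing /proc/mdstat and not knowing the raid10 personality on that vintage of CentOS. A few quick, non-destructive checks (md device name is a placeholder):
cat /proc/mdstat              # confirm the array assembled and shows level raid10
lsmod | grep raid10           # the raid10 personality is a separate kernel module
mdadm --detail /dev/md0       # overall array state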
2017 Oct 17
1
lvconvert(split) - raid10 => raid0
hi guys, gals do you know if conversion from lvm's raid10 to raid0 is possible? I'm fiddling with --splitmirrors but it gets me nowhere. On the "takeover" subject the man page says: "..between striped/raid0 and raid10." but no details; I could not find any documentation or a howto anywhere. many thanks, L.
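For what it's worth, a minimal experiment (volume and VG names are placeholders, and whether the direct takeover is accepted depends on the lvm2 version in use):
lvs -a -o name,segtype,stripes,devices vg0    # check the current layout and the hidden sub-LVs
lvconvert --type raid0 vg0/lv0                # ask for a direct takeover from raid10 to raid0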
2019 Sep 30
1
CentOS 8 broken mdadm Raid10
Hello, On my system with an Intel SCU controller and a RAID 10 array it is not possible to install onto this Raid10. I have tested this with CentOS 7 and openSUSE and both found my RAID, but with CentOS 8 this is broken? When starting the installation I get an error from mdadm, that is all. Now I will download and test the Stream ISO and hope ..... -- mit freundlichen Grüssen / best regards Günther J.
2012 Nov 22
0
raid10 data fs full after degraded mount
Hello, on a fs with 4 disks, raid 10 for data, one drive was failing and has been removed. After a reboot and 'mount -o degraded ...', the fs looks full, even though before removal of the failed device it was almost 80% free. root@fs0:~# df -h /mnt/b Filesystem Size Used Avail Use% Mounted on /dev/sde 11T 2.5T 41M 100% /mnt/b root@fs0:~# btrfs fi df /mnt/b Data,
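A typical follow-up on a degraded raid10 is to give btrfs a replacement device and then drop the missing one, since raid10 cannot allocate new chunks while a device is short; a hedged sketch with /dev/sdf as a placeholder replacement disk:
btrfs filesystem show /mnt/b          # see which device is reported missing
btrfs device add /dev/sdf /mnt/b      # add the replacement disk
btrfs device delete missing /mnt/b    # re-replicate onto the new disk and drop the ghost entry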
2014 May 28
0
Failed Disk RAID10 Problems
Hi, I have a Btrfs RAID 10 (data and metadata) file system that I believe suffered a disk failure. In my attempt to replace the disk, I think that I've made the problem worse and need some help recovering it. I happened to notice a lot of errors in the journal: end_request: I/O error, dev dm-11, sector 1549378344 BTRFS: bdev /dev/mapper/Hitachi_HDS721010KLA330_GTA040PBG71HXF1 errs: wr
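For reference, on reasonably recent kernels the usual path is 'btrfs replace', which copies the failing device onto a new one in a single pass; a sketch using the source device from the log above, with /dev/sdX and /mnt as placeholders:
btrfs replace start -r /dev/mapper/Hitachi_HDS721010KLA330_GTA040PBG71HXF1 /dev/sdX /mnt   # -r reads from the other mirror where possible
btrfs replace status /mnt                                                                  # progress of the rebuild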
2012 Jul 14
2
bug: raid10 filesystem has suddenly ceased to mount
Hi! The problem is that the BTRFS raid10 filesystem refuses to mount, without any understandable cause. Here is the dmesg output: [77847.845540] device label linux-btrfs-raid10 devid 3 transid 45639 /dev/sdc1 [77848.633912] btrfs: allowing degraded mounts [77848.633917] btrfs: enabling auto defrag [77848.633919] btrfs: use lzo compression [77848.633922] btrfs: turning on flush-on-commit [77848.658879]
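The non-destructive first steps usually suggested for a filesystem in this state (device name taken from the dmesg lines above; the exact mount options available depend on the kernel version of that era):
btrfs filesystem show                           # do all the member devices still show up?
mount -o ro,degraded,recovery /dev/sdc1 /mnt    # 'recovery' was later renamed 'usebackuproot'
btrfsck /dev/sdc1                               # read-only check unless --repair is given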
2013 Mar 28
1
question about replacing a drive in raid10
Hi all, I have a question about replacing a drive in raid10 (and linux kernel 3.8.4). A bad disk was physically removed from the server. After this a new disk was added with "btrfs device add /dev/sdg /btrfs" to the raid10 btrfs FS. After this the server was rebooted and I mounted the filesystem in degraded mode. It seems that a previously started balance continued. At this point I want to
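After the new device has been added, the vanished disk still has to be removed explicitly before the array is healthy again; a rough sketch using the mount point from the post:
btrfs device delete missing /btrfs    # drop the entry for the physically removed disk and re-replicate its chunks
btrfs balance start /btrfs            # optional: even out chunk placement across all devices
btrfs filesystem show /btrfs          # confirm no device is reported missing afterwards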
2014 Apr 07
3
Software RAID10 - which two disks can fail?
Hi All. I have a server which uses RAID10 made of 4 partitions for / and boots from it. It looks like so: mdadm -D /dev/md1 /dev/md1: Version : 00.90 Creation Time : Mon Apr 27 09:25:05 2009 Raid Level : raid10 Array Size : 973827968 (928.71 GiB 997.20 GB) Used Dev Size : 486913984 (464.36 GiB 498.60 GB) Raid Devices : 4 Total Devices : 4 Preferred Minor : 1
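Assuming the default near=2 layout (which mdadm -D reports in its Layout line), the four devices form two mirror pairs in RaidDevice order: 0+1 and 2+3. One disk from each pair can fail, but losing both disks of the same pair loses the array. A quick way to see the layout and the device order:
mdadm -D /dev/md1 | egrep 'Layout|/dev/sd'    # Layout line plus the device table, in RaidDevice order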
2007 May 07
5
Anaconda doesn't support raid10
So after troubleshooting this for about a week, I was finally able to create a raid 10 device by installing the system, copying the md modules onto a floppy, and loading the raid10 module during the install. Now the problem is that I can't get it to show up in anaconda. It detects the other arrays (raid0 and raid1) fine, but the raid10 array won't show up. Looking through the logs
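One workaround from that era, sketched here with placeholder device names, is a nested 1+0 that anaconda does understand: two RAID1 pairs striped together with LVM:
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc3 /dev/sdd3
pvcreate /dev/md2 /dev/md3
vgcreate vg_data /dev/md2 /dev/md3
lvcreate -i 2 -I 64 -l 100%FREE -n lv_data vg_data    # stripe the logical volume across the two mirrors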