similar to: zfs, raidz, spare and jbod

Displaying 20 results from an estimated 900 matches similar to: "zfs, raidz, spare and jbod"

2009 Jan 13
12
OpenSolaris better than Solaris 10 u6 with regards to Areca RAID card
Under Solaris 10 u6, no matter how I configured my Areca 1261ML RAID card, I got errors on all drives resulting from SCSI timeout errors. yoda:~ # tail -f /var/adm/messages Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776 Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2007 Oct 10
0
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation.
Just as I create a ZFS pool and copy the root partition to it... the performance seems to be really good, then suddenly the system hangs all my sessions and displays on the console: Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma map got 'no resources' Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0: dma allocate fail Oct 10 00:23:28 sunrise arcmsr: WARNING: arcmsr0:
2007 Oct 10
1
Areca 1100 SATA Raid Controller in JBOD mode Hangs on zfs root creation
Updated to latest firmware 1.43-70417 ... same problem: WARNING: arcmsr0: dma map got 'no resources' WARNING: arcmsr0: dma allocate fail WARNING: arcmsr0: dma allocate fail free scsi hba pkt WARNING: arcmsr0: dma map got 'no resources' WARNING: arcmsr0: dma allocate fail WARNING: a The only positive thing is that every time I try to copy my UFS
2010 Apr 13
1
People Centric seeks several Ruby on Rails developers
A good position for an experienced Ruby on Rails developer, working on digital marketing software, professional team, Paris 2, listed company, Paris or Silicon Valley. Please see the following link for further details: http://www.people-centric.fr/2010/01/27/type/6368-job-developpeur-ruby.html/fr/ Best regards, Roxana Stefan Malene, IT recruitment officer Mail:
2017 Oct 22
2
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
-----Original Message----- From: CentOS [mailto:centos-bounces at centos.org] On Behalf Of Noam Bernstein Sent: Sunday, October 22, 2017 8:54 AM To: CentOS mailing list <centos at centos.org> Subject: [CentOS] Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64 > Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, >
2009 May 13
2
With RAID-Z2 under load, machine stops responding to local or remote login
Hi world, I have a 10-disk RAID-Z2 system with 4 GB of DDR2 RAM and a 3 GHz Core 2 Duo. It's exporting ~280 filesystems over NFS to about half a dozen machines. Under some loads (in particular, any attempts to rsync between another machine and this one over SSH), the machine's load average sometimes goes insane (27+), and it appears to all be in kernel-land (as nothing in
2007 Sep 12
0
Re: [CentOS-devel] Areca RAID drivers
On Tue, 2007-09-04 at 23:32 +0100, Karanbir Singh wrote: > Hi, > > Phil Schaffner wrote: > > Karanbir provided driver disk images and indicated that kmod drivers > > would be in CentOS-5 Extras, but apparently these never materialized. > > The only src.rpm I could turn up was the last link above, and I did get > > CentOS-5 x86_64 kmod-style drivers to build from
2008 Mar 26
1
freebsd 7 and areca controller
Hi. I'm looking at deploying a FreeBSD 7-RELEASE server with some storage attached to an Areca ARC-1680 controller. But this card is not mentioned in 'man 4 arcmsr' (http://www.freebsd.org/cgi/man.cgi?query=arcmsr&sektion=4&manpath=FreeBSD+7.0-RELEASE). Areca's website does mention FreeBSD as a supported OS (http://www.areca.com.tw/products/pcietosas1680series.htm). Has
2012 Apr 03
2
CentOS 6.2 + areca raid + xfs problems
Two weeks ago I (clean-)installed CentOS 6.2 on a server which had been running 5.7. There is a 16-disk, ~11 TB data volume running on an Areca ARC-1280 RAID card with an LVM + XFS filesystem on it. The included arcmsr driver module is loaded. At first it seemed OK, but within a few hours I started getting I/O error messages on directory listings, and then a bit later when I did a vgdisplay
2010 Jul 26
1
areca 1100 kmod / kernel support
Hi, is the Areca 1100 RAID controller supported by CentOS 5? Or is there a kmod RPM available? Thanks Juergen
2010 Jun 18
25
Erratic behavior on 24T zpool
Well, I've searched my brains out and I can't seem to find a reason for this. I'm getting bad to medium performance with my new test storage device. I've got 24 1.5T disks with 2 SSDs configured as a zil log device. I'm using the Areca raid controller, the driver being arcmsr. Quad core AMD with 16 gig of RAM OpenSolaris upgraded to snv_134. The zpool
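A setup like the one described (a large pool with two SSDs as a mirrored ZIL log device) can be sketched as follows. The device names are hypothetical; the post does not list the actual devices or vdev layout.

```shell
# Attach a mirrored SSD log (ZIL) device to an existing pool.
# Device names (c3t0d0, c3t1d0) and the pool name "tank" are
# invented for illustration -- substitute your own.
zpool add tank log mirror c3t0d0 c3t1d0

# Verify the pool layout, including the new "logs" section
zpool status tank
```

Mirroring the log device is the usual precaution, since losing an unmirrored ZIL device on older ZFS versions could render the pool unimportable.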
2017 Sep 28
2
mounting an nfs4 file system as v4.0 in CentOS 7.4?
CentOS 7.4 client mounting a CentOS 7.4 server filesystem over nfs4. nfs seems to be much slower since the upgrade to 7.4, so I thought it might be nice to mount the directory as v4.0 rather than the new default of v4.1 to see if it makes a difference. The release notes state, without an example: "You can retain the original behavior by specifying 0 as the minor version" nfs(5)
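An example of what "specifying 0 as the minor version" looks like in practice, using the options documented in nfs(5); the server name and export path below are placeholders.

```shell
# Force an NFSv4.0 mount instead of the newer default of v4.1,
# per the release-note wording "0 as the minor version".
# "server:/export" and "/mnt/export" are hypothetical paths.
mount -t nfs4 -o minorversion=0 server:/export /mnt/export

# Equivalent modern spelling of the same request:
mount -t nfs -o vers=4.0 server:/export /mnt/export
```

Either form should pin the mount to NFSv4.0, which makes it easy to compare performance against the v4.1 default.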
2008 Jan 30
3
newfs locks entire machine for 20 seconds
----- Original Message ----- From: "Ivan Voras" <ivoras@freebsd.org> >> The machine is running with ULE on 7.0 as mentioned, using an Areca 1220 >> controller over 8 disks in RAID 6 + hotspare. > > I'd suggest you first try to reproduce the stall without ULE, while > keeping all other parameters exactly the same. Ok, tried with an updated 7 world / kernel as
2013 May 31
62
cpuidle and un-eoid interrupts at the local apic
Recently our automated testing system has caught a curious assertion while testing Xen 4.1.5 on a HaswellDT system. (XEN) Assertion '(sp == 0) || (peoi[sp-1].vector < vector)' failed at irq.c:1030 (XEN) ----[ Xen-4.1.5 x86_64 debug=n Not tainted ]---- (XEN) CPU: 0 (XEN) RIP: e008:[<ffff82c48016b2b4>] do_IRQ+0x514/0x750 (XEN) RFLAGS: 0000000000010093 CONTEXT:
2017 Oct 22
0
Areca RAID controller on latest CentOS 7 (1708 i.e. RHEL 7.4) kernel 3.10.0-693.2.2.el7.x86_64
Is anyone running any Areca RAID controllers with the latest CentOS 7 kernel, 3.10.0-693.2.2.el7.x86_64? We recently updated (from 3.10.0-514.26.2.el7.x86_64), and we've started having lots of problems. To add to the confusion, there's also a hardware problem (either with the controller or the backplane most likely) that we're in the process of analyzing. Regardless, we have an ARC1883i, and
2010 Jan 08
0
ZFS partially hangs when removing an rpool mirrored disk while having some IO on another pool on another partition of the same disk
Hello, Sorry for the (very) long subject but I've pinpointed the problem to this exact situation. I know about the other threads related to hangs, but in my case there was no < zfs destroy > involved, nor any compression or deduplication. To make a long story short, when - a disk contains 2 partitions (p1=32GB, p2=1800 GB) and - p1 is used as part of a zfs mirror of rpool
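The layout described above can be sketched as follows. All device and pool names besides rpool are hypothetical; the post only specifies the two partition sizes.

```shell
# Sketch of the described layout: one disk carries two partitions,
# p1 (~32 GB) mirroring the root pool and p2 (~1800 GB) serving a
# separate data pool. Device names (c0t1d0p1/p2) and the pool name
# "data" are invented for illustration.

# p1 joins the rpool mirror
zpool attach rpool c0t0d0p1 c0t1d0p1

# p2 backs a second, independent pool on the same physical disk
zpool create data c0t1d0p2
```

With this layout, pulling the disk removes a member from both pools at once, which is what makes the reported partial hang under I/O on the second pool noteworthy.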
2017 Aug 13
1
[Bug 102192] New: Dell XPS 15 9560: PU: 1 PID: 58 at drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gf100.c:190 gf100_vm_flush+0x1b3/0x1c0
https://bugs.freedesktop.org/show_bug.cgi?id=102192 Bug ID: 102192 Summary: Dell XPS 15 9560: PU: 1 PID: 58 at drivers/gpu/drm/nouveau/nvkm/subdev/mmu/gf100.c:190 gf100_vm_flush+0x1b3/0x1c0 [nouveau] Product: xorg Version: unspecified Hardware: x86-64 (AMD64) OS: Linux (All)
2008 Nov 24
1
no priority on the console?
As per my previous message, I've spent about 3 months trying to debug a problem that was causing all disk I/O to go very slowly. One of the things which made this nearly impossible to diagnose was the absolute lack of priority given to the console. Logging in on the console would take 12-15 minutes. Hitting enter on the console would usually take between 3 and 5 minutes. This
2007 Sep 11
0
irqbalanced on SMP dom0 ?
Hi list members, not a really urgent question, but I'm just curious about it: is it advised to use irqbalanced on dom0 when running domUs pinned to particular cores? As an example, I've got a dual quad-core Xen system running with domU pinned to cores 1-3 (CPU#0), domU pinned to cores 4-5 (CPU#1), domU pinned to cores 6-7 (CPU#1), so dom0 should have 100% time on
2005 May 02
0
[ANNOUNCE] Areca SATA RAID drivers for CentOS 4.0
Hi, To follow up my CentOS 4.0/Areca SATA RAID driver disk I created a month or two ago, I have now created kernel-module-arcmsr RPMs containing just the kernel module to track the kernel updates. This means: a) No need to patch, rebuild and maintain customised kernels for Areca support b) Keep everything maintained with RPM I've taken the latest 1.20.00.07 kernel driver source as found in