similar to: Dirves going offline in Zpool

Displaying 20 results from an estimated 200 matches similar to: "Dirves going offline in Zpool"

2010 Dec 05
4
Zfs ignoring spares?
Hi all I have installed a new server with 77 2TB drives in 11 7-drive RAIDz2 VDEVs, all on WD Black drives. Now, it seems two of these drives were bad: one of them had a bunch of errors, the other was very slow. After offlining these and then replacing them with online spares, the resilver ended and I thought it'd be ok. Apparently not. Although the resilver succeeds, the pool status
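A minimal sketch of the usual hot-spare workflow being described; the pool name (tank) and device names (c5t3d0 for the failing disk, c9t1d0 for the spare) are placeholders, not the poster's actual layout:

    zpool status -x tank               # identify the degraded vdev and disk
    zpool offline tank c5t3d0          # take the failing disk out of service
    zpool replace tank c5t3d0 c9t1d0   # resilver onto the hot spare
    zpool status -v tank               # spare shows as INUSE until the resilver finishes
    # afterwards, either promote the spare permanently:
    zpool detach tank c5t3d0
    # ...or put a fresh disk in the old slot and let the spare return to AVAIL:
    zpool replace tank c5t3d0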
2011 Feb 05
2
kernel messages question
Hi I keep getting these messages on this one box. There are issues with at least one of the drives in it, but since there are some 80 drives in it, that's not really an issue. I just want to know, if anyone knows, what this kernel message means. Anyone? Feb 5 19:35:57 prv-backup scsi: [ID 365881 kern.info] /pci@7a,0/pci8086,340e@7/pci1000,3140@0 (mpt1): Feb 5 19:35:57
2015 Apr 02
4
NFS Stale file handle drives me crazy (Centos 6)
Hi folks, I have a CentOS 6 NFS server which drives me crazy. The directory I try to export can't be accessed by different clients. I tried a CentOS 7, a CentOS 6, and a pool of VMware ESXi 5.5 systems. At the client side I get errors like: mount.nfs: Stale file handle or Sysinfo set operation VSI_MODULE_NODE_mount failed with the status Unable to query remote mount point's attributes. On
2013 Mar 12
1
what is "single spanned virtual disk" on RAID10????
We have a Dell R910 with an H800 adapter in it, and several MD1220s connected to the H800. Since an MD1220 has 24 hard disks in it, when I configured RAID10 there was a choice called "single spanned virtual disk" (22 disks). Can anyone tell me how a "single spanned virtual disk" works? Any documents related to it? Thanks.
2015 Oct 07
0
OT hardware issue: HP controller to 3rd party RAID
On 10/7/2015 2:15 PM, m.roth at 5-cent.us wrote: > It's an old box, a DL580 G5. And we didn't get the card with the RAID. I > suppose I can single path each RAID from the PERC H800, until budgets open > up. wait, you originally said HP SmartArray P800. Now it's a Dell PERC H800? I'm confused. -- john r pierce, recycling bits in santa cruz
2015 Feb 12
2
test, and H/W
Hi, folks, This is a test post; to make it of interest, here's an issue you might want to be aware of, for those without brand new hardware. We've got a few Dell PE R415s. A 2TB backup drive on one was getting full, so I went to replace it with a 3TB drive (a WD Red, not that it matters.) We got the system in '11. I built the drive (GPT, one 3TB partition) on an R320 from '12.
2015 Oct 07
2
OT hardware issue: HP controller to 3rd party RAID
John R Pierce wrote: > On 10/7/2015 1:30 PM, m.roth at 5-cent.us wrote: >> No. I'm the one who got the quote and set up the order for my managers, >> and I know what it's got: the ethernet connection is to the internal >> webserver to manage the RAID. the 712S I have is dual-path SAS for data. > > then yeah, you'll need a plain SAS2 external HBA card. If
2015 Oct 26
3
semi-OT: HW
Running CentOS 6.7, on an older HP DL580 G5. We've got a Dell 12-bay RAID box plugged into a PERC H800 (aka LSI Liberator) that we put in the HP, and that works fine. We've got a new RAID (a JetStor), and are trying to plug it in. The layout is that the Dell RAID is dual-pathed DAS. Each PERC has two DAS ports. We've got the Dell RAID in the top port of each PERC. I *thought* I could
2015 Apr 06
0
NFS Stale file handle drives me crazy (Centos 6)
On 04/02/2015 09:03 AM, Götz Reinicke - IT Koordinator wrote: > Hi folks, > > I have a CentOS 6 NFS server which drives me crazy. > > The directory I try to export can't be accessed by different clients. > > I tried a CentOS 7, a CentOS 6, and a pool of VMware ESXi 5.5 systems. > > At the client side I get errors like: > > mount.nfs: Stale file handle > > or
2015 Apr 08
0
NFS Stale file handle drives me crazy (Centos 6)
On Thursday, 2 April 2015, 3:03:53 PM, Götz Reinicke - IT Koordinator wrote: > Hi folks, > I have a CentOS 6 NFS server which drives me crazy. > The directory I try to export can't be accessed by different clients. > I tried a CentOS 7, a CentOS 6, and a pool of VMware ESXi 5.5 systems. > At the client side I get errors like: > mount.nfs: Stale file handle [...] > I use xfs
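A sketch of the checks usually suggested for a persistent stale handle on an xfs-backed export; the hostname (nfsserver), paths, and the fsid= workaround are assumptions for illustration, not the poster's confirmed fix:

    # server side: pin a stable filesystem id on the export, then re-export
    #   /etc/exports:  /export/data  *(rw,sync,no_subtree_check,fsid=1)
    exportfs -ra                          # re-read /etc/exports
    exportfs -v                           # confirm the options actually applied
    # client side: verify the export is visible, then remount cleanly
    showmount -e nfsserver
    umount -f /mnt/data
    mount -t nfs nfsserver:/export/data /mnt/data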
2003 Dec 01
0
No subject
This means that when a person logs into the samba server it will execute this file. You should probably put this file in the netlogon share on the NT server that is currently the PDC, though; presuming that samba authenticates ok back to the NT box, your users will transparently get a whole new bunch of drives and won't need to worry about password maintenance
2011 Jun 01
1
How to properly read "zpool iostat -v" ? ;)
Hello experts, I've had a lingering question for some time: when I use "zpool iostat -v" the values do not quite sum up. In the example below with a raidz2 array made of 6 drives: * the reported 33K of writes are less than two disks' workload at this time (at 17.9K each); overall disk writes are 107.4K = 325% of 33K. * write ops sum up to 18 = 225% of 8 ops to
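One thing worth noting before comparing the rows: without an interval argument, "zpool iostat -v" prints averages accumulated since the pool was imported, and on raidz2 the per-disk rows include parity writes, so they are not expected to add up to the pool-level figure. A minimal sketch, with the pool name (tank) as a placeholder:

    zpool iostat -v tank 5        # print per-vdev/per-disk figures every 5 seconds
    zpool iostat -v tank 5 3      # same, but stop after three samples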
2010 Nov 11
8
zpool import panics
Hi, I just had my Dell R610 reboot with a kernel panic when I threw a couple of zfs clone commands at it in the terminal. Now, after the system has rebooted, zfs will not import my pool any longer and instead the kernel will panic again. I have had the same symptom on my other host, for which this one is basically the backup, so this one is my last line of defense. I tried to run zdb -e
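A cautious sketch of the import attempts usually tried in this situation, with the pool name (tank) as a placeholder; best run from a rescue environment if the panic takes the host down:

    zpool import                        # list importable pools without importing anything
    zdb -e tank                         # walk the exported pool's metadata read-only
    zpool import -o readonly=on tank    # try a read-only import first
    zpool import -F tank                # last resort: rewind to an earlier txg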
2010 Nov 06
10
Apparent SAS HBA failure-- now what?
My setup: A SuperMicro 24-drive chassis with Intel dual-processor motherboard, three LSI SAS3081E controllers, and 24 SATA 2TB hard drives, divided into three pools with each pool a single eight-disk RAID-Z2. (Boot is an SSD connected to motherboard SATA.) This morning I got a cheerful email from my monitoring script: "Zchecker has discovered a problem on bigdawg." The full output is
2011 Mar 07
6
Dell PERC H800 commandline RAID monitoring tools
We're looking for tools to be used in monitoring the PERC H800 arrays on a set of database servers running CentOS 5.5. We've installed most of the OMSA (Dell monitoring) suite. Our current alerting is happening through SNMP, though it's a bit hit or miss (we apparently missed a couple of earlier predictive failure alerts on one drive). OMSA conflicts with mega-cli, though we may
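For reference, a sketch of the checks these tools are typically scripted around; the controller index (0) is a placeholder and should match what "omreport storage controller" reports on the box:

    omreport storage controller                  # overall controller state
    omreport storage vdisk controller=0          # virtual disk / array status
    omreport storage pdisk controller=0          # per-drive state, incl. predictive failures
    # MegaCli equivalent (mind the OMSA/mega-cli conflict mentioned above):
    MegaCli64 -AdpAllInfo -aALL | grep -iE 'critical|fail'
    MegaCli64 -PDList -aALL | grep -iE 'firmware state|predictive'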
2012 Mar 16
1
NFS Hanging Under Heavy Load
Hello all, I'm currently experiencing an issue with an NFS server I've built (a Dell R710 with a Dell PERC H800/LSI 2108 and four external disk trays). It's a backup target for Solaris 10, CentOS 5.5 and CentOS 6.2 servers that mount its data volume via NFS. It has two 10gig NICs set up in a layer2+3 bond for one network, and two more 10gig NICs set up in the same way in another
2019 Sep 05
1
[PATCH] ocaml: Change calls to caml_named_value() to cope with const value* return.
In OCaml >= 4.09 the return value pointer of caml_named_value is declared const. Based on Pino Toscano's original patch to ocaml-augeas. --- common/mlpcre/pcre-c.c | 3 +-- common/mltools/uri-c.c | 6 ++---- common/mlvisit/visit-c.c | 4 +--- generator/daemon.ml | 2 +- 4 files changed, 5 insertions(+), 10 deletions(-) diff --git a/common/mlpcre/pcre-c.c b/common/mlpcre/pcre-c.c
2017 Mar 29
1
[PATCH] mllib: cast integer pointers to intptr_t as intermediate step
This makes sure there is no mismatch between the size of the integer value that Int64_val returns and the size of the guestfs_h pointer. This should fix the warning on 32-bit environments (and thus the build, when --enable-werror is enabled). --- mllib/visit-c.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/mllib/visit-c.c b/mllib/visit-c.c index b46dd33..b1c1216 100644 ---
2019 Sep 05
2
[PATCH 0/1] Build fix for future OCaml 4.09
This is a simple fix to allow building also with the upcoming OCaml 4.09, which has a slight API change in the C library. This does not cover embedded copies such as ocaml-augeas and ocaml-libvirt, which are being fixed separately and will then be synchronized. Pino Toscano (1): ocaml: make const the return value of caml_named_value() common/mlpcre/pcre-c.c | 2 +- common/mltools/uri-c.c |
2010 Mar 17
0
checksum errors increasing on "spare" vdev?
Hi, One of my colleagues was confused by the output of 'zpool status' on a pool where a hot spare is being resilvered in after a drive failure: $ zpool status data pool: data state: DEGRADED status: One or more devices is currently being resilvered. The pool will continue to function, possibly in a degraded state. action: Wait for the resilver to complete. scrub:
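A sketch of the usual follow-up once such a resilver completes, using the pool name from the quoted output (data) and a placeholder device name for the failed disk (c3t5d0):

    zpool status -v data        # any files listed here are genuinely damaged
    zpool detach data c3t5d0    # drop the failed disk; the spare becomes a permanent member
    zpool clear data            # reset the error counters, then...
    zpool scrub data            # ...scrub to confirm they stay at zero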