Displaying 16 results from an estimated 16 matches for "ereports".
2010 Jul 16
6
Lost zpool after reboot
Hello,
I have a dual boot with Windows 7 64-bit Enterprise Edition and OpenSolaris build 134. This is on a Sun Ultra 40 M1 workstation. Three hard drives: 2 in a ZFS mirror, 1 shared with Windows.
For the last 2 days I was working in Windows. I didn't touch the hard drives in any way, except that I once opened Disk Management to figure out why an external USB hard drive was not being listed.
2011 Jan 04
0
zpool import hangs system
Hello,
I've been using Nexentastore Community Edition with no issues for a
while now. However, last week I was going to rebuild a different system, so I
started to copy all the data off it to a raidz2 volume on my CE
system. This was going fine until I noticed that the copy had stalled and
the entire system was non-responsive. I let it sit for several hours
with no
2008 Jul 06
14
confusion and frustration with zpool
I have a zpool which has grown "organically". I had a 60 GB disk, I added a 120, I added a 500, I got a 750 and sliced it and mirrored the other pieces.
The 60 and the 120 are internal PATA drives, the 500 and 750 are Maxtor OneTouch USB drives.
The original system I created the 60+120+500 pool on was Solaris 10 update 3, patched to use ZFS sometime last fall (November I believe). In
2010 Apr 05
3
no hot spare activation?
...ding that ZFS does not behave this way -- if
only some I/Os are failed, then the hot spare is failed, but if ZFS
decides that the label is gone, it makes no attempt to recruit a hot spare.
I had added FMA notification to my blkdev driver - it will post
device.no_response or device.invalid_state ereports (per the
ddi_fm_ereport_post() man page) in certain failure scenarios.
I *suspect* the problem is in the FMA notification for zfs-retire, where
the event is not being interpreted in a way that ZFS retire can figure
out that the drive is toasted.
Of course, this is just an educated guess on my...
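For ereport plumbing questions like this one, a quick way to check whether the device.no_response or device.invalid_state events are actually arriving, and how the zfs-retire agent responded, is the standard Solaris FMA tooling (a command sketch; exact output shapes vary by release):

```shell
fmdump -e        # one-line summary of each ereport the fault manager received
fmdump -eV       # full detail per ereport: class, ENA, detector, payload
fmstat           # per-module statistics, including zfs-retire activity
fmadm faulty     # any faults the diagnosis engines have actually published
```

If the ereports show up in `fmdump -e` but `fmadm faulty` stays empty, the gap is in diagnosis rather than in the driver's notification, which is consistent with the guess above.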
2010 Jan 10
5
Repeating scrub does random fixes
I've been using a 5-disk raidZ for years on an SXCE machine which I converted to OSOL. The only time I ever had ZFS problems in SXCE was with snv_120, which was fixed.
So, now I'm at OSOL snv_111b and I'm finding that scrub repairs errors on random disks. If I repeat the scrub, it will fix errors on other disks. Occasionally it runs cleanly. That it doesn't
2008 Jan 17
9
ATA UDMA data parity error
Hey all,
I'm not sure if this is a ZFS bug or a hardware issue I'm having - any
pointers would be great!
Following contents include:
- high-level info about my system
- my first thought to debugging this
- stack trace
- format output
- zpool status output
- dmesg output
High-Level Info About My System
---------------------------------------------
- fresh
2010 Feb 24
0
disks in zpool gone at the same time
Hi,
Yesterday all the disks in my two zpools got disconnected.
They are not real disks but LUNs from a StorageTek 2530 array.
What could that be - a failing LSI card or the mpt driver in 2009.06?
After a reboot, four disks were in the FAILED state - zpool clear fixed
things with resilvering.
Here is how it started (/var/adm/messages)
Feb 23 12:39:03 nexus scsi: [ID 365881 kern.info]
/pci@0,0/pci10de,5d@
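The recovery described above follows the usual triage sequence for a pool whose devices dropped out together (a Solaris/illumos command sketch; `tank` is a placeholder pool name):

```shell
zpool status -x tank   # which vdevs are FAULTED and their read/write/cksum counts
fmdump -e              # the FMA ereports behind the diagnosis (e.g. a flaky HBA)
zpool clear tank       # reset the error counts; healthy devices reattach
zpool status -v tank   # watch the resilver progress and any remaining errors
```

If the same LUNs keep dropping after a clear, that points at the transport (card, cabling, or driver) rather than the disks themselves.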
2006 Jul 19
1
Domain Users Get Login Prompt with Guest grayed out
...path = /usr/gold
; create mode = 0777
; read only = no
; public = yes
; writeable = yes
; browseable = yes
[honddata]
comment = ESC Data Directory
path = /u/sghond
create mode = 0777
read only = no
public = yes
writeable = yes
browseable = yes
[ereports]
comment = ESC Data Directory
path = /u/ereports
create mode = 0777
read only = no
public = yes
writeable = yes
browseable = yes
[reldata]
comment = Test for Relativity
; hide files = /u1/gold/*.*
path = /u/dbms
read only = yes
public = yes
wri...
2008 Oct 08
5
Resilver hanging?
How can I diagnose why a resilver appears to be hanging at a certain
percentage, seemingly doing nothing for quite a while, even though the
HDD LED is lit up permanently (no apparent head seeking)?
The drives in the pool are WD RAID Editions, thus have TLER and should
time out on errors in just seconds. Neither ZFS nor the syslog, however, was
reporting any I/O errors, so it wasn't the disks.
2008 Dec 04
11
help diagnosing system hang
Hi all,
First, I'll say my intent is not to spam a bunch of lists, but after
posting to opensolaris-discuss I had someone communicate with me offline
that these lists would possibly be a better place to start. So here we
are. For those on all three lists, sorry for the repetition.
Second, this message is meant to solicit help in diagnosing the issue
described below. Any hints on
2006 Mar 28
2
Error reporting & backup with tar
In the process of tar'ing up files in an older ZFS partition
(23.6.2005), the tar command seized up. Truss showed it hanging
in stat64(), so I went looking for symptoms.
In "zpool status -ve", I found "4" in the SUM column.
Being from the old school, I did "dmesg", expecting to see some
kernel error message about the disk but found nothing.
Is there
2007 Jun 01
10
SMART
On Solaris x86, does zpool (or anything) support PATA (or SATA) IDE
SMART data? With the Predictive Self Healing feature, I assumed that
Solaris would have at least some SMART support, but what I've googled so
far has been discouraging.
http://prefetch.net/blog/index.php/2006/10/29/solaris-needs-smart-support-please-help/
Bug ID: 4665068 SMART support in IDE driver
2006 Oct 31
0
6306370 prtfru output does not match FRU etching or external labeling
Author: venki
Repository: /hg/zfs-crypto/gate
Revision: da7abc07c2c46c9d8bc78c1ca90a9ecb98207c6b
Log message:
6306370 prtfru output does not match FRU etching or external labeling
6353030 Inconsistency in DIMM labels in ereport, prtfru, cpumem DE on Chicago
Files:
update: usr/src/cmd/picl/plugins/sun4u/chicago/frutree/system-board.info
2011 May 10
5
Tuning disk failure detection?
We recently had a disk fail on one of our whitebox (SuperMicro) ZFS
arrays (Solaris 10 U9).
The disk began throwing errors like this:
May 5 04:33:44 dev-zfs4 scsi: [ID 243001 kern.warning] WARNING: /pci@0,0/pci8086,3410@9/pci15d9,400@0 (mpt_sas0):
May 5 04:33:44 dev-zfs4 mptsas_handle_event_sync: IOCStatus=0x8000, IOCLogInfo=0x31110610
And errors for the drive were
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance.
I've had this read problem now for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server with a ZFS RAID pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single-core CPU and 4GB of RAM. I am using it
2008 Jul 10
49
Supermicro AOC-USAS-L8i
On Wed, Jul 9, 2008 at 1:12 PM, Tim <tim@tcsac.net> wrote:
> Perfect. Which means good ol' supermicro would come through :) WOHOO!
>
> AOC-USAS-L8i
>
> http://www.supermicro.com/products/accessories/addon/AOC-USAS-L8i.cfm
Is this card new? I'm not finding it at the usual places like Newegg, etc.
It looks like the LSI SAS3081E-R, but probably at 1/2 the