Hi, this is the current setup that I have been doing tests on:

        NAME        STATE     READ WRITE CKSUM
        zpool       ONLINE       0     0     0
          raidz1-0  ONLINE       0     0     0
            c2t0d0  ONLINE       0     0     0
            c2t1d0  ONLINE       0     0     0
            c2t2d0  ONLINE       0     0     0
        cache
          c2t3d0    FAULTED      0     0     0  too many errors
          c2t4d0    FAULTED      0     0     0  too many errors

I would like to mention that this box runs Nexenta Community Edition, that
the cache disks are SSDs (ADATA AS599S-64GM-C), and that it has been in
service for about one month. The cache disks are mirrored.
I wouldn't mind a faulted disk, but those two are 64GB SSDs. Could you
point me in the right direction to see what happened, or what generated
the error?

P.S. The system wasn't under heavy load.
Check dmesg and the system logs for any output concerning those devices.
Re-seat one and then the other, just in case, too.

---
W. A. Khushil Dep - khushil.dep at gmail.com - 07905374843

Visit my blog at http://www.khushil.com/


On 20 December 2010 13:10, Paul Piscuc <paul.piscuc at sinergetic.ro> wrote:
> Could you point me in the right direction to see what happened, or what
> generated the error?
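For concreteness, a minimal sketch of the kind of checks meant above (a
sketch assuming the usual Solaris log locations; device names are taken
from the pool listing earlier in the thread):

    # kernel/driver messages mentioning the two cache SSDs
    dmesg | grep -i 'c2t[34]d0'
    grep -i 'c2t[34]d0' /var/adm/messages

    # per-device error counters (soft/hard/transport and media errors)
    iostat -En c2t3d0 c2t4d0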
Also check your email. NexentaStor sends an email message describing the
actions taken when this occurs. If you did not set up email for the
appliance, then look in the NMS log.
 -- richard

On Dec 20, 2010, at 5:33 AM, Khushil Dep <khushil.dep at gmail.com> wrote:
> Check dmesg and the system logs for any output concerning those devices.
> Re-seat one and then the other, just in case, too.
Hi,

The problem seems to be solved with a zpool clear. It is not clear what
generated the issue, and I cannot locate what caused it, because a reboot
seems to have deleted all the logs :| . I have run several greps under
/var/log, /var, and now under /, but I couldn't find any record. I also
thought that Nexenta might have rotated the logs, but I couldn't find any
archive.

Anyway, that was rather strange and hopefully a temporary issue. If it
happens again, I'll make sure not to reboot the system.

Thanks a lot for all your help.

Paul
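For reference, the clear-and-verify sequence used here looks roughly like
this (note that the pool in this thread is literally named "zpool"; the
per-device form is optional):

    # clear error counters and fault state for the whole pool
    zpool clear zpool

    # or clear just one of the cache devices
    zpool clear zpool c2t3d0

    # then confirm the devices are back ONLINE
    zpool status -v zpool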
NexentaStor logs are in /var/log. But the real information of interest is
in the FMA ereports. fmdump -eV is your friend.
 -- richard

On Dec 20, 2010, at 6:39 AM, Paul Piscuc <paul.piscuc at sinergetic.ro> wrote:
> The problem seems to be solved with a zpool clear. It is not clear what
> generated the issue, and I cannot locate what caused it, because a reboot
> seems to have deleted all the logs.
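A quick sketch of the FMA commands being suggested (standard Solaris fault
management tools, so they should be available on NexentaStor as well):

    # one line per error-telemetry event (ereport)
    fmdump -e

    # full nvlist detail for every ereport
    fmdump -eV

    # diagnosed faults (FMA's conclusions) rather than raw telemetry
    fmdump -V
    fmadm faulty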
Here is a part of "fmdump -eV":

Dec 19 2010 03:02:47.919024953 ereport.fs.zfs.probe_failure
nvlist version: 0
        class = ereport.fs.zfs.probe_failure
        ena = 0x4bd7543b8cf00001
        detector = (embedded nvlist)
        nvlist version: 0
                version = 0x0
                scheme = zfs
                pool = 0x9e0b8e0f936d08c6
                vdev = 0xf3c1ec665a2f2e9a
        (end detector)

        pool = zpool
        pool_guid = 0x9e0b8e0f936d08c6
        pool_context = 0
        pool_failmode = continue
        vdev_guid = 0xf3c1ec665a2f2e9a
        vdev_type = disk
        vdev_path = /dev/dsk/c2t3d0s0
        vdev_devid = id1,sd@SATA_____ADATA_SSD_S599_600000000000000000045/a
        prev_state = 0x0
        __ttl = 0x1
        __tod = 0x4d0de657 0x36c73539

There are also similar errors, with the same error class, against
/pci@0,0/pci8086,3484@1f,2/disk@3,0.

On Mon, Dec 20, 2010 at 4:52 PM, Richard Elling <richard.elling at gmail.com> wrote:
> NexentaStor logs are in /var/log. But the real information of interest is
> in the FMA ereports. fmdump -eV is your friend.
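If the output gets long, the ereports can be narrowed to just this class
(a sketch; fmdump's -c option selects events by class):

    # only the ZFS probe-failure ereports, one line each
    fmdump -e -c ereport.fs.zfs.probe_failure

    # the same events with full detail, including vdev_path and vdev_devid
    fmdump -eV -c ereport.fs.zfs.probe_failure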
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-
> bounces at opensolaris.org] On Behalf Of Paul Piscuc
>
>         NAME        STATE     READ WRITE CKSUM
>         zpool       ONLINE       0     0     0
>           raidz1-0  ONLINE       0     0     0
>             c2t0d0  ONLINE       0     0     0
>             c2t1d0  ONLINE       0     0     0
>             c2t2d0  ONLINE       0     0     0
>         cache
>           c2t3d0    FAULTED      0     0     0  too many errors
>           c2t4d0    FAULTED      0     0     0  too many errors
>
> The cache disks are mirrored.

This may be irrelevant, but no, they are not mirrored: (1) you can't mirror
cache devices (nor is there any need to), and (2) in the listing above they
are not represented as mirrors in any way.
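For illustration, here is the difference at the zpool command level (a
sketch using the device names from this thread): cache devices are simply
listed, whereas log devices can be grouped into a mirror; zpool will reject
an attempt to mirror cache devices.

    # how the two SSDs were presumably added: two independent cache devices
    zpool add zpool cache c2t3d0 c2t4d0

    # a mirrored log (slog) vdev, by contrast, is valid syntax
    # (same device names reused purely for illustration)
    zpool add zpool log mirror c2t3d0 c2t4d0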