Similar to: Clear corrupted data

Displaying 20 results from an estimated 500 matches similar to: "Clear corrupted data"

2006 Mar 30
39
Proposal: ZFS Hot Spare support
As mentioned last night, we've been reviewing a proposal for hot spare support in ZFS. Below you can find a current draft of the proposed interfaces. This has not yet been submitted for ARC review, but comments are welcome. Note that this does not include any enhanced FMA diagnosis to determine when a device is "faulted". This will come in a follow-on project, of which some
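
For reference, the hot-spare support that eventually shipped exposes spares as a vdev type of their own; a rough sketch, with placeholder pool and device names:

    # create a pool with a dedicated hot spare
    zpool create tank raidz c0t0d0 c0t1d0 c0t2d0 spare c0t3d0
    # add a spare to (or remove an unused one from) an existing pool
    zpool add tank spare c0t4d0
    zpool remove tank c0t4d0
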
2006 Jun 26
2
raidz2 is alive!
Already making use of it, thank you! http://www.justinconover.com/blog/?p=17 I took 6x250GB disks and tried raidz2/raidz/none
# zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
df -h zfs
Filesystem size used avail capacity Mounted on
zfs 915G 49K 915G 1% /zfs
# zpool destroy -f zfs
Plain old raidz (raid-5ish)
# zpool create zfs raidz c0d0
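
The 915G figure is the expected usable space: raidz2 spends two disks on parity, so six 250 GB (~232 GiB) disks leave roughly 4 x 232 GiB ~= 928 GiB for data. A minimal sketch of the same check, with the device names above:

    zpool create zfs raidz2 c0d0 c1d0 c2d0 c3d0 c7d0 c8d0
    zfs list zfs      # usable space, net of parity
    zpool list zfs    # raw pool size, including parity
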
2006 May 09
3
Possible corruption after disk hiccups...
I'm not sure exactly what happened with my box here, but something caused a hiccup on multiple sata disks...
May 9 16:40:33 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci10de,5c@9/pci-ide@a/ide@0 (ata6):
May 9 16:47:43 sol scsi: [ID 107833 kern.warning] WARNING: /pci@0,0/pci-ide@7/ide@1 (ata3):
May 9 16:47:43 sol timeout: abort request, target=0
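
After timeouts like these, the per-device error counters and a scrub are the usual first checks; a sketch, assuming a pool named tank:

    iostat -En            # soft/hard/transport error counts per device
    zpool scrub tank      # re-verify every checksum in the pool
    zpool status -v tank  # lists any files with unrecoverable errors
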
2007 Oct 14
1
odd behavior from zpool replace.
I've got a little zpool with a naughty raidz vdev that won't take a replacement that, as far as I can tell, should be adequate. A history: this could well be some bizarro edge case, as the pool doesn't have the cleanest lineage. Initial creation happened on NexentaCP inside VMware on Linux. I had given the virtual machine raw device access to four 500 GB drives and one ~200 GB
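
For reference, the two forms of the command, with placeholder names; one common reason a seemingly adequate disk is refused is that it is a few sectors smaller than the one it replaces, since a replacement must be at least as large as the original:

    zpool replace tank c1t0d0 c2t0d0   # replace old device with a new one
    zpool replace tank c1t0d0          # same slot, new disk
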
2008 Jan 06
7
ZFS problem after disk failure
One of my disks in the ZFS raidz2 pool developed a mechanical failure and had to be replaced. It is possible that I swapped the SATA cables during the exchange, but this has never been a problem in my previous tests. What concerns me is the output from zpool status for the c2d0 disk. The exchanged disk is now c3d0 but is no longer a part of the pool?! This is build_75 on x86
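
When cabling changes shuffle device names, exporting and re-importing usually sorts it out, because import identifies disks by the labels ZFS wrote on them rather than by path; a sketch, assuming the pool is named tank:

    zpool export tank
    zpool import tank    # re-scans devices and rebuilds the paths
    zpool status tank
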
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was simply trying to test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive. Doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror, or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
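
One thing worth ruling out in a test like this is dd's default 512-byte block size, which measures per-call overhead more than disk throughput; a sketch of a like-for-like comparison (paths as placeholders):

    dd if=(raw disk) of=/dev/null bs=1024k count=1000
    dd if=(file on ZFS) of=/dev/null bs=1024k count=1000
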
2009 Jul 25
1
OpenSolaris 2009.06 - ZFS Install Issue
I've installed OpenSolaris 2009.06 on a machine with 5 identical 1TB WD Green drives to create a ZFS NAS. The intended install is one drive dedicated to the OS and the remaining 4 drives in a raidz1 configuration. The install works fine, but creating the raidz1 pool and rebooting causes the machine to report "Cannot find active partition" upon reboot. Below is the command
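
That error is from the BIOS, not ZFS: it usually means the disk the BIOS settled on has no fdisk partition flagged active (whole-disk raidz members get EFI labels, which some BIOSes mishandle). A hedged sketch, assuming the OS disk is c0d0:

    fdisk /dev/rdsk/c0d0p0   # interactive; use the menu entry that
                             # specifies the active (boot) partition
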
2006 Sep 15
8
resilvering, how long will it take?
Being resilvered 444.00 GB 168.21 GB 158.73 GB Just wondering if anyone has a rough guesstimate of how long this will take? It's 3x1200JB ATA drives and one Seagate SATA drive. The SATA drive is the one that was replaced. Any idea how long this will take? As in 5 hours? 2 days? I don't see any way to get a status update on where it's at in the resilvering
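
zpool status reports resilver progress and an estimated time remaining; a sketch, assuming the pool is named tank:

    zpool status tank
    # look for a line of the form:
    #   scrub: resilver in progress, 37.92% done, 4h21m to go
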
2009 Jun 16
3
Adding zvols to a DomU
I'm trying to add extra zvols to a Solaris 10 DomU (sv_113 Dom0). I can use virsh attach-disk <name> <zvol> hdb --device phy to attach a zvol as c0d1. Replacing hdb with hdd gives me c1d1, but that is it. Being able to attach several more zvols would be nice, but even being able to get at c1d0 would be useful. Am I missing something, or can I only attach to hda/hdb/hdd?
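
One thing that may be worth trying is the xvd target namespace instead of the legacy hd names, which only map a handful of IDE slots; whether more targets appear depends on the DomU's PV drivers, so this is only a guess (the zvol path is a placeholder):

    virsh attach-disk <name> /dev/zvol/dsk/tank/vol2 xvdc --device phy
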
2007 Jul 03
1
zpool status -v: machine readable format?
I was wondering if anyone had a script to parse the "zpool status -v" output into a more machine-readable format? Thanks, David
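
A minimal sketch of such a script, printing "name state" pairs from the config section (field layout assumed from the usual zpool status output):

    zpool status -v | awk '
        /^config:/ { inconfig = 1; next }
        /^errors:/ { inconfig = 0 }
        inconfig && NF >= 2 && $1 != "NAME" { print $1, $2 }
    '
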
2007 Oct 26
1
data error in dataset 0. what's that?
Hi forum, I did something stupid the other day: I managed to connect an external disk that was part of zpool A such that it appeared in zpool B. I realised as soon as I had run zpool status that zpool B should not have been online, but it was. I immediately switched off the machine, booted without that disk connected, and destroyed zpool B. I managed to get zpool A back and all of my data appears
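
"Dataset 0" is the pool-level metadata (the meta-object set) rather than any filesystem. If the pool's redundancy was able to repair the damage, a scrub followed by a clear should leave it clean; a sketch, with a placeholder pool name:

    zpool scrub poolA       # re-reads everything, repairing from redundancy
    zpool status -v poolA   # check the error counts after the scrub
    zpool clear poolA       # reset the counters once no errors remain
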
2009 Feb 03
1
Cannot Mirror RPOOL, Can't Label Disk to SMI
Dear ZFS experts, I have two 500 GB SATA hard drives in my dual-core PC. I installed OpenSolaris 2008.11 using a Live CD I got from Sun Tech Days in Singapore. Now, following all the guidelines I found here in the Indiana discussion, I can't attach my second drive to rpool to make them a mirror. Initially I was playing around with a similar configuration in VirtualBox, and it did not succeed. Finally
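
The usual sequence for mirroring rpool: the second disk needs an SMI (VTOC) label and a slice, because ZFS cannot boot from the EFI label zpool would otherwise write. A sketch, assuming the disks are c0d0 (current) and c1d0 (new) and the same size:

    format -e                # select c1d0, then label it, choosing SMI
    prtvtoc /dev/rdsk/c0d0s2 | fmthard -s - /dev/rdsk/c1d0s2
    zpool attach -f rpool c0d0s0 c1d0s0
    installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/c1d0s0
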
2007 Jul 27
0
cloning disk with zpool
Hello the list, I thought it should be easy to do a clone (not in the ZFS sense of the term) of a disk with zpool. This manipulation is strongly inspired by http://www.opensolaris.org/jive/thread.jspa?messageID=135038 and http://www.opensolaris.org/os/community/zfs/boot/ But unfortunately this doesn't work, and we have no clue what could be wrong. On c1d0 you have a zfs root; create a
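
An alternative that avoids block-level copying is to replicate the datasets with zfs send/receive into a pool created on the target disk; a sketch with hypothetical pool and dataset names (boot blocks would still have to be installed separately):

    zpool create newpool c1d0s0
    zfs snapshot rpool/ROOT@clone
    zfs send rpool/ROOT@clone | zfs receive newpool/ROOT
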
2006 Sep 14
1
Remounting ZFS formatted disk after system reinstall
I was running Solaris 10 6/06 with the latest kernel patch on an Ultra 20 (x86) with two internal disks: the root disk with the OS (c1d0) as UFS, and the userland data on c2d0s7 formatted as ZFS. An update made the system unusable and required reinstallation of the OS on c1d0 (Solaris 6/06). I cannot figure out how to remount the second drive, c2d0s7, which is formatted as ZFS. I have not created any ZFS
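
A pool from a previous install is not remounted but imported; running zpool import with no arguments scans the attached disks:

    zpool import             # lists any pools found, with name and id
    zpool import <pool>      # import by the name (or numeric id) shown
    zpool import -f <pool>   # force, if it objects that the pool was
                             # last in use by the old install
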
2007 Dec 14
0
scrub percentage complete decreasing, but without snaps.
I've seen the problems with bug 6343667, but I haven't seen the problem I have at the moment. I started a scrub of a b72 system that doesn't have any recent snapshots (none since the last scrub) and the % complete is cycling:
scrub: scrub in progress, 69.08% done, 0h13m to go
scrub: scrub in progress, 46.63% done, 0h28m to go
scrub: scrub in progress, 6.36%
2009 Nov 22
9
Resilver/scrub times?
Hi all! I've decided to take the "big jump" and build a ZFS home filer (although it might also do "other work" like caching DNS, mail, usenet, bittorrent and so forth). YAY! I wonder if anyone can shed some light on how long a pool scrub would take on a fairly decent rig. These are the specs as-ordered: Asus P5Q-EM mainboard Core2 Quad 2.83 GHZ 8GB DDR2/80 OS: 2 x
2009 Mar 06
5
Repartition OS disk, give some to zpool
I've gotten knee-deep into learning how to use OpenSolaris and ZFS, and I see now that my goal of a home ZFS server might have been better served if I had partitioned the install disk, leaving some of the 60GB to be added to a zpool. First, how much space does a working OS need? I don't mean the bare minimum, but enough to be comfortable and have some growing room (on the install disk).
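
Once a slice has been freed up on the install disk, a pool can be created on it directly; a sketch, assuming slice 7 was carved out with format:

    format                     # repartition c0d0, leaving a free slice 7
    zpool create data c0d0s7   # build a pool on just that slice
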
2008 Mar 15
1
feeding merge.zoo a vector containing the names of zoo objects?
Hi, the snippet of code below works, but I would like to know how to feed the function merge.zoo the contents of CADstocknames rather than having to hard code it into the merge.zoo command. I think I must be missing something simple, but I cannot for the life of me figure it out. Thanks in advance for any enlightenment offered. library(zoo) CADstocknames <-
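
Since this thread is R: the usual idiom is to look the names up with get and hand the resulting list to do.call, so nothing is hard-coded; a sketch, assuming CADstocknames holds the object names as strings:

    library(zoo)
    merged <- do.call(merge, lapply(CADstocknames, get))
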
2007 Oct 15
3
Trying to recover data off SATA-to-SCSI external 2TB ARRAY
Originally the array was attached to an HP server via a Smart Array controller (which I didn't set up; I just inherited the problem). This controller no longer recognizes the array, even though the front panel of the array indicates it's intact. I then took the array and plugged it into my CentOS server, which recognized it ... cat /proc/scsi/scsi Attached devices: Host: scsi0 Channel: 00 Id: 00 Lun:
2012 Oct 09
1
MDS read-only
Dear all, two of our MDSes have repeatedly gone read-only recently, after an e2fsck on Lustre 1.8.5. After the MDT has been mounted for a while, the kernel reports errors like:
Oct 8 20:16:44 mainmds kernel: LDISKFS-fs error (device cciss!c0d1): ldiskfs_ext_check_inode: bad header/extent in inode #50736178: invalid magic - magic 0, entries 0, max 0(0), depth 0(0)
Oct 8 20:16:44 mainmds
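
The usual response to ldiskfs extent corruption like this is to stop the target and run a full forced fsck with the Lustre-patched e2fsprogs; a sketch, with a hypothetical mount point:

    umount /mnt/mdt
    e2fsck -f /dev/cciss/c0d1   # full check; review what it proposes
                                # before letting it fix automatically
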