Hi Everyone,

I recently decided to use btrfs. It worked perfectly for a week, even under heavy load. Yesterday I destroyed my backups, as I cannot afford to keep ~10TB of backups. I decided to switch to btrfs because it was announced as stable already. I need to recover ~5TB of data; this data is important and I do not have backups...

uname -a
Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

sudo mount -o recovery /dev/sdb /tank
mount: wrong fs type, bad option, bad superblock on /dev/sdb,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

dmesg:
[ 9612.971149] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 2 transid 9096 /dev/sdb
[ 9613.048476] btrfs: enabling auto recovery
[ 9613.048482] btrfs: disk space caching is enabled
[ 9621.172540] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 9621.181369] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 9621.182167] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdd sector 2143292648)
[ 9621.182181] Failed to read block groups: -5
[ 9621.193680] btrfs: open_ctree failed

sudo /usr/local/bin/btrfs-find-root /dev/sdb
...................................
Well block 4455562448896 seems great, but generation doesn't match, have=9092, want=9096
Well block 4455568302080 seems great, but generation doesn't match, have=9091, want=9096
Well block 4848395739136 seems great, but generation doesn't match, have=9093, want=9096
Well block 4923796594688 seems great, but generation doesn't match, have=9094, want=9096
Well block 4923798065152 seems great, but generation doesn't match, have=9095, want=9096
Found tree root at 5532762525696

$ sudo btrfs-restore -v -t 4923798065152 /dev/sdb ./
parent transid verify failed on 4923798065152 wanted 9096 found 9095
parent transid verify failed on 4923798065152 wanted 9096 found 9095
parent transid verify failed on 4923798065152 wanted 9096 found 9095
parent transid verify failed on 4923798065152 wanted 9096 found 9095
Ignoring transid failure
Root objectid is 5
Restoring ./Irina
Restoring ./Irina/.idmapdir2
Skipping existing file ./Irina/.idmapdir2/4.bucket.lock
If you wish to overwrite use the -o option to overwrite
Skipping existing file ./Irina/.idmapdir2/7.bucket
Skipping existing file ./Irina/.idmapdir2/15.bucket
Skipping existing file ./Irina/.idmapdir2/12.bucket.lock
Skipping existing file ./Irina/.idmapdir2/cap.txt
Skipping existing file ./Irina/.idmapdir2/5.bucket
Restoring ./Irina/.idmapdir2/10.bucket.lock
Restoring ./Irina/.idmapdir2/6.bucket.lock
Restoring ./Irina/.idmapdir2/8.bucket
ret is -3

sudo btrfs-zero-log /dev/sdb
...........................
parent transid verify failed on 5468231311360 wanted 9096 found 7621
parent transid verify failed on 5468231311360 wanted 9096 found 7621
parent transid verify failed on 5468060102656 wanted 9096 found 7621
Ignoring transid failure
leaf parent key incorrect 59310080
btrfs-zero-log: extent-tree.c:2578: alloc_reserved_tree_block: Assertion `!(ret)' failed.

Help me please...

Max
After the command

sudo /usr/local/bin/btrfs device scan

I got new lines in dmesg:

[11329.598535] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 2 transid 9096 /dev/sdb
[11329.599885] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 3 transid 9095 /dev/sdd
[11329.600840] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 1 transid 9096 /dev/sda
[11329.602083] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 4 transid 9096 /dev/sde
[11329.603036] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 transid 9096 /dev/sdf

It looks like /dev/sdd is one transid behind. Is it possible to roll back to transid 9095?

Thanks

On 05/29/2012 06:14 PM, Maxim Mikheev wrote:
> [original report quoted in full; snipped]
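For a question like the transid rollback above, the generation recorded in each device's superblock can be compared directly. A minimal sketch, assuming a btrfs-progs build that ships btrfs-show-super (later releases expose the same information via `btrfs inspect-internal dump-super`) and using the device names from the scan output:

# print the superblock generation (transid) and devid stored on each device
for dev in /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf; do
    echo "== $dev =="
    sudo btrfs-show-super "$dev" | grep -E '^(generation|dev_item\.devid)'
done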
I can't help much at the moment, but the following will help sort things out:

Can you provide as much detail as possible about how things were configured at the time of the failure? Raid levels used, kernel versions at the time of the failure, how the disks are connected, a general description of the activity on the disks and the nature of their contents (all large files? rootfs? mail spools?). What were you thinking at the time you decided that you couldn't afford backups? As much detail as possible on what you've tried since the failure to recover things?

It's likely the data is fine (if currently inaccessible), but obviously things are in a fragile state, and the important thing right now is to not make things worse: a recoverable situation may otherwise turn into an irrecoverable one.
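A minimal, read-only set of commands that collects most of what is asked for here; smartctl assumes smartmontools is installed, and the device name is just an example:

uname -a                      # running kernel version
sudo btrfs filesystem show    # devices, devids and usage known to btrfs
dmesg | tail -n 50            # most recent kernel messages
sudo smartctl -a /dev/sdb     # per-disk SMART health (example device)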
On 5/30/12 12:14 AM, Maxim Mikheev wrote:
> I recently decided to use btrfs. It worked perfectly for a week, even
> under heavy load. Yesterday I destroyed my backups, as I cannot afford
> to keep ~10TB of backups. I decided to switch to btrfs because it was
> announced as stable already. I need to recover ~5TB of data; this data
> is important and I do not have backups...

Just out of curiosity: who announced that BTRFS is stable already?! The kernel says something different, and there is still no 100% working fsck for btrfs. Imho it is far from stable :)

And btw: even if it were stable, always keep backups of important data, ffs! I don't understand why there are still technically experienced people who don't do backups :/ Imho, if you don't keep backups of a portion of data, that data is considered not to be important.

> [rest of the original report quoted in full; snipped]
On Tue, May 29, 2012 at 5:14 PM, Felix Blanke <felixblanke@gmail.com> wrote:
> Just out of curiosity: who announced that BTRFS is stable already?! The
> kernel says something different, and there is still no 100% working
> fsck for btrfs. Imho it is far from stable :)
>
> [rest snipped]

Some distros do offer support, but that's usually in the sense of "if you have a support contract (and are on qualified hardware and using it in a supported configuration), we'll help you fix what breaks (and we're confident we can)", rather than a claim that things will never break. I expect (but haven't actually checked recently) that such distros actively backport btrfs fixes to their supported kernels (btrfs in Distro X's 3.2 kernel may have fixes that Distro Y's 3.2 kernel does not, etc), which can lead to unfortunate misunderstandings; we don't have enough information yet to determine whether that's the case here, though.
Thank you for your answer.

The system kernel was, and still is:
Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

The raid was created by:
mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

Disks are connected through a RocketRAID 2670.

For mounting I used this line in fstab:
UUID=c9776e19-37eb-4f9c-bd6b-04e8dde97682 /tank btrfs defaults,compress=lzo 0 1

Several virtual machines were running on the host. Only one was actively using the disks.

The VM had several active threads:
1. 2 threads reading big files (50GB each)
2. reading from 50 files and writing one big file
3. The kernel panic happened when I ran another program with 30 threads reading/writing small files.

The virtual machine accessed the underlying btrfs through the 9p filesystem, which actively used xattrs.

After reboot the system was in this state.

I hope that btrfsck --repair will not make it worse; it is now running.

.................................
Backups: you always need them when you don't have them...
We urgently needed extra space and planned to buy new disks soon...

On 05/29/2012 07:11 PM, cwillu wrote:
> [questions about the configuration quoted in full; snipped]
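One note on the mkfs invocation above: with several devices and no explicit profile options, mkfs.btrfs picks its defaults (historically raid0 for data and raid1 for metadata), so data blocks may have had no redundant copy. A sketch of spelling the profiles out explicitly; this is illustrative, not what was run here:

# -d sets the data profile, -m the metadata profile
sudo mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf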
On Tue, May 29, 2012 at 5:24 PM, Maxim Mikheev <mikhmv@gmail.com> wrote:
> [configuration details snipped]
>
> I hope that btrfsck --repair will not make it worse; it is now running.

**twitch**

Well, I also hope it won't make it worse. Do not cancel it now, let it finish (aborting it will make things worse), but I suggest waiting until a few more people have weighed in before attempting anything beyond that.
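For what it's worth, btrfsck run without --repair only reads the filesystem, so a plain check is the safer way to size up the damage first. A minimal sketch, assuming the filesystem is unmounted:

# read-only consistency check; writes nothing to disk
sudo btrfsck /dev/sdb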
I forgot to add: btrfs-tools were built from
git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git

On 05/29/2012 07:24 PM, Maxim Mikheev wrote:
> [configuration details quoted in full; snipped]
btrfsck --repair has now been running for 26 hours.

Does it make sense to wait longer?

Thanks

On 05/29/2012 07:36 PM, cwillu wrote:
> Well, I also hope it won't make it worse. Do not cancel it now, let
> it finish (aborting it will make things worse), but I suggest waiting
> until a few more people have weighed in before attempting anything
> beyond that.
It seems it did not work. What should be the next step in recovering the data?

On 05/30/2012 10:50 PM, Gareth Pye wrote:
> Stopping an experimental fsck for an experimental file system would
> probably be the worst idea possible. I'd only think about stopping it
> after it had spent many many hours not doing anything*. If it was
> working hard after 26 hours I'd just let it keep working.
>
> *This isn't advice to stop it if that is true, just a minimal
> condition on me stopping any fsck.
>
> [earlier thread quoted in full; snipped]
>
> --
> Gareth Pye
> Level 2 Judge, Melbourne, Australia
> Australian MTG Forum: mtgau.com
> gareth@cerberos.id.au - www.rockpaperdynamite.wordpress.com
> "Dear God, I would like to file a bug report"
Repair was not helpful. Are there any other ways to get access to the data?

Please help...

On 05/30/2012 11:15 PM, Michael K wrote:
> Let it run to completion. There is little you can do other than hope
> and wait.
>
> [earlier thread quoted in full; snipped]
On 06/02/2012 09:43 PM, Maxim Mikheev wrote:
> Repair was not helpful. Are there any other ways to get access to the
> data?
>
> Please help...

Hi Maxim,

Besides btrfsck --repair, we also have a recovery mount option to deal with your situation; maybe you can try mount xxx -o recovery and see if it helps?

thanks,
liubo

> [rest of thread snipped]
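If a plain recovery mount keeps failing, adding ro at least guarantees nothing further is written to the array while experimenting. A minimal sketch, assuming the /tank mount point from the fstab above; any device of the array can be named:

# read-only recovery mount attempt
sudo mount -o ro,recovery /dev/sdb /tank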
Hi Liu,

Thanks for the advice. I tried it before btrfsck; the results are here:

max@s0:~$ sudo mount /tank -o recovery
[sudo] password for max:
mount: wrong fs type, bad option, bad superblock on /dev/sdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

max@s0:~$ sudo mount -o recovery /tank
mount: wrong fs type, bad option, bad superblock on /dev/sdf,
       missing codepage or helper program, or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so

dmesg after boot, before mount -o recovery:
[ 51.829352] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 51.841153] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 51.841603] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdb sector 2143292648)
[ 51.841610] Failed to read block groups: -5
[ 51.848057] btrfs: open_ctree failed
..............................
dmesg after both mounts:

[ 123.687773] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 transid 9096 /dev/sdf
[ 123.733678] btrfs: use lzo compression
[ 123.733683] btrfs: enabling auto recovery
[ 123.733686] btrfs: disk space caching is enabled
[ 131.699910] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 131.714018] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 131.715059] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdb sector 2143292648)
[ 131.715072] Failed to read block groups: -5
[ 131.727176] btrfs: open_ctree failed
[ 161.697873] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 transid 9096 /dev/sdf
[ 161.746345] btrfs: use lzo compression
[ 161.746354] btrfs: enabling auto recovery
[ 161.746358] btrfs: disk space caching is enabled
[ 169.720823] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 169.732048] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 169.732611] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdb sector 2143292648)
[ 169.732623] Failed to read block groups: -5
[ 169.743437] btrfs: open_ctree failed

So it does not work. I have seen in some posts the command:

sudo mount -s 2 -o recovery /tank

Should I try it?

Please help me, I need to get this data ASAP.

Regards,
Max

On 06/03/2012 09:22 PM, Liu Bo wrote:
> Besides btrfsck --repair, we also have a recovery mount option to deal
> with your situation; maybe you can try mount xxx -o recovery and see
> if it helps?
>
> [rest of thread snipped]
On 06/04/2012 09:43 AM, Maxim Mikheev wrote:
> Thanks for the advice. I tried it before btrfsck; the results are here:
> [mount attempts and dmesg output snipped]

Two possible ways:

1)
I noticed that your btrfs had 5 partitions in all:
mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

Can you try to mount the other disk partitions instead by hand, like:
mount /dev/sdb /tank
mount /dev/sdc /tank
mount /dev/sdd /tank
mount /dev/sde /tank
mount /dev/sdf /tank

2)
Use btrfs's scrub to fall back on the metadata backups created by RAID1.

thanks,
liubo

> [rest of thread snipped]
Hi Liu,

1) None of them work (see dmesg at the end).
2)
max@s0:~$ sudo btrfs scrub start /dev/sdb
ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
max@s0:~$ sudo btrfs scrub start /dev/sda
ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
max@s0:~$ sudo btrfs scrub start /dev/sdd
ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
max@s0:~$ sudo btrfs scrub start /dev/sde
ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
max@s0:~$ sudo btrfs scrub start /dev/sdf
ERROR: getting dev info for scrub failed: Inappropriate ioctl for device

dmesg after all operations:
[ 2183.864056] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 2 transid 9096 /dev/sdb
[ 2183.916128] btrfs: disk space caching is enabled
[ 2191.863409] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2191.872937] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2191.873666] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdb sector 2143292648)
[ 2191.873678] Failed to read block groups: -5
[ 2191.884636] btrfs: open_ctree failed
[ 2222.910225] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 3 transid 9095 /dev/sdd
[ 2222.959128] btrfs: disk space caching is enabled
[ 2231.264285] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2231.274306] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2231.275194] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdd sector 2143292648)
[ 2231.275207] Failed to read block groups: -5
[ 2231.288795] btrfs: open_ctree failed
[ 2240.624691] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 4 transid 9096 /dev/sde
[ 2240.671344] btrfs: disk space caching is enabled
[ 2248.916772] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2248.928106] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2248.929091] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdd sector 2143292648)
[ 2248.929105] Failed to read block groups: -5
[ 2248.939081] btrfs: open_ctree failed
[ 2253.829071] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 transid 9096 /dev/sdf
[ 2253.879940] btrfs: disk space caching is enabled
[ 2261.754357] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2261.767118] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2261.767929] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdb sector 2143292648)
[ 2261.767942] Failed to read block groups: -5
[ 2261.778219] btrfs: open_ctree failed
[ 2309.831415] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 1 transid 9096 /dev/sda
[ 2309.904520] btrfs: disk space caching is enabled
[ 2318.286463] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2318.302991] parent transid verify failed on 5468060241920 wanted 9096 found 7621
[ 2318.304000] btrfs read error corrected: ino 1 off 5468060241920 (dev /dev/sdd sector 2143292648)
[ 2318.304013] Failed to read block groups: -5
[ 2318.314587] btrfs: open_ctree failed

On 06/03/2012 10:16 PM, Liu Bo wrote:
> Two possible ways:
>
> 1) Can you try to mount the other disk partitions instead by hand?
> 2) Use btrfs's scrub to fall back on the metadata backups created by
> RAID1.
>
> [rest of thread snipped]
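The "Inappropriate ioctl for device" errors are consistent with scrub being an ioctl against a mounted btrfs: `btrfs scrub start` talks to the kernel through the filesystem, so it cannot operate on the raw devices of an array that will not mount. A sketch of what the invocation would look like if the mount succeeded; here it cannot, since open_ctree fails:

sudo mount /dev/sdb /tank        # scrub requires a mounted filesystem
sudo btrfs scrub start /tank     # verify checksums, repairing from redundant copies where possible
sudo btrfs scrub status /tank    # progress and error counts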
On 06/04/2012 10:18 AM, Maxim Mikheev wrote:> Hi Liu, > > 1) all of them not working (see dmesg at the end) > 2) > max@s0:~$ sudo btrfs scrub start /dev/sdb > ERROR: getting dev info for scrub failed: Inappropriate ioctl for device > max@s0:~$ sudo btrfs scrub start /dev/sda > ERROR: getting dev info for scrub failed: Inappropriate ioctl for device > max@s0:~$ sudo btrfs scrub start /dev/sdd > ERROR: getting dev info for scrub failed: Inappropriate ioctl for device > max@s0:~$ sudo btrfs scrub start /dev/sde > ERROR: getting dev info for scrub failed: Inappropriate ioctl for device > max@s0:~$ sudo btrfs scrub start /dev/sdf > ERROR: getting dev info for scrub failed: Inappropriate ioctl for device >(add Jan and Arne to cc, they are authors of scrub) I''m not an expert on scrub, and I''m not clear how to scrub a device directly :( btw, have you tried restore (for attempting to recover data from an unmountable filesystem): https://btrfs.wiki.kernel.org/index.php/Restore thanks, liubo> dmesg after all operations: > [ 2183.864056] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 2 > transid 9096 /dev/sdb > [ 2183.916128] btrfs: disk space caching is enabled > [ 2191.863409] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2191.872937] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2191.873666] btrfs read error corrected: ino 1 off 5468060241920 (dev > /dev/sdb sector 2143292648) > [ 2191.873678] Failed to read block groups: -5 > [ 2191.884636] btrfs: open_ctree failed > [ 2222.910225] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 3 > transid 9095 /dev/sdd > [ 2222.959128] btrfs: disk space caching is enabled > [ 2231.264285] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2231.274306] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2231.275194] btrfs read error corrected: ino 1 off 5468060241920 (dev > /dev/sdd sector 2143292648) > [ 2231.275207] Failed to read block groups: -5 > [ 2231.288795] btrfs: open_ctree failed > [ 2240.624691] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 4 > transid 9096 /dev/sde > [ 2240.671344] btrfs: disk space caching is enabled > [ 2248.916772] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2248.928106] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2248.929091] btrfs read error corrected: ino 1 off 5468060241920 (dev > /dev/sdd sector 2143292648) > [ 2248.929105] Failed to read block groups: -5 > [ 2248.939081] btrfs: open_ctree failed > [ 2253.829071] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 > transid 9096 /dev/sdf > [ 2253.879940] btrfs: disk space caching is enabled > [ 2261.754357] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2261.767118] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2261.767929] btrfs read error corrected: ino 1 off 5468060241920 (dev > /dev/sdb sector 2143292648) > [ 2261.767942] Failed to read block groups: -5 > [ 2261.778219] btrfs: open_ctree failed > [ 2309.831415] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 1 > transid 9096 /dev/sda > [ 2309.904520] btrfs: disk space caching is enabled > [ 2318.286463] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2318.302991] parent transid verify failed on 5468060241920 wanted 9096 > found 7621 > [ 2318.304000] btrfs read error corrected: ino 1 off 5468060241920 (dev > /dev/sdd sector 2143292648) > [ 
2318.304013] Failed to read block groups: -5 > [ 2318.314587] btrfs: open_ctree failed > > On 06/03/2012 10:16 PM, Liu Bo wrote: >> On 06/04/2012 09:43 AM, Maxim Mikheev wrote: >> >>> Hi Liu, >>> >>> thanks for advice. I tried it before btrfsck. results are here: >>> max@s0:~$ sudo mount /tank -o recovery >>> [sudo] password for max: >>> mount: wrong fs type, bad option, bad superblock on /dev/sdf, >>> missing codepage or helper program, or other error >>> In some cases useful info is found in syslog - try >>> dmesg | tail or so >>> >>> max@s0:~$ sudo mount -o recovery /tank >>> mount: wrong fs type, bad option, bad superblock on /dev/sdf, >>> missing codepage or helper program, or other error >>> In some cases useful info is found in syslog - try >>> dmesg | tail or so >>> >> >> Two possible ways: >> >> 1) >> I noticed that your btrfs had 5 partitions in all: >> mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf >> >> Can you try to mount other disk partitions instead by hand, like: >> mount /dev/sdb /tank >> mount /dev/sdc /tank >> mount /dev/sdd /tank >> mount /dev/sde /tank >> mount /dev/sdf /tank >> >> 2) >> use btrfs''s scrub to resort to metadata backups created by RAID1. >> >> thanks, >> liubo >> >>> dmesg after boot before mount -o recovery: >>> [ 51.829352] parent transid verify failed on 5468060241920 wanted 9096 >>> found 7621 >>> [ 51.841153] parent transid verify failed on 5468060241920 wanted 9096 >>> found 7621 >>> [ 51.841603] btrfs read error corrected: ino 1 off 5468060241920 (dev >>> /dev/sdb sector 2143292648) >>> [ 51.841610] Failed to read block groups: -5 >>> [ 51.848057] btrfs: open_ctree failed >>> .............................. >>> dmesg after both mounts: >>> >>> [ 123.687773] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 >>> transid 9096 /dev/sdf >>> [ 123.733678] btrfs: use lzo compression >>> [ 123.733683] btrfs: enabling auto recovery >>> [ 123.733686] btrfs: disk space caching is enabled >>> [ 131.699910] parent transid verify failed on 5468060241920 wanted 9096 >>> found 7621 >>> [ 131.714018] parent transid verify failed on 5468060241920 wanted 9096 >>> found 7621 >>> [ 131.715059] btrfs read error corrected: ino 1 off 5468060241920 (dev >>> /dev/sdb sector 2143292648) >>> [ 131.715072] Failed to read block groups: -5 >>> [ 131.727176] btrfs: open_ctree failed >>> [ 161.697873] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 >>> transid 9096 /dev/sdf >>> [ 161.746345] btrfs: use lzo compression >>> [ 161.746354] btrfs: enabling auto recovery >>> [ 161.746358] btrfs: disk space caching is enabled >>> [ 169.720823] parent transid verify failed on 5468060241920 wanted 9096 >>> found 7621 >>> [ 169.732048] parent transid verify failed on 5468060241920 wanted 9096 >>> found 7621 >>> [ 169.732611] btrfs read error corrected: ino 1 off 5468060241920 (dev >>> /dev/sdb sector 2143292648) >>> [ 169.732623] Failed to read block groups: -5 >>> [ 169.743437] btrfs: open_ctree failed >>> >>> So It does not work. I have seen in some posts command: >>> >>> sudo mount -s 2 -o recovery /tank >>> Should I try it? >>> >>> Please help me, I need to get this data ASAP. >>> >>> Regards, >>> Max >>> >>> On 06/03/2012 09:22 PM, Liu Bo wrote: >>>> On 06/02/2012 09:43 PM, Maxim Mikheev wrote: >>>> >>>>> Repair was not helpful. >>>>> Is any other ways to get access to data? >>>>> >>>>> Please help.... 
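For reference, the workflow that Restore page describes runs entirely from userspace against the unmounted devices. A minimal sketch, assuming /mnt/rescue is an empty directory on a separate, healthy filesystem with enough free space (the destination path is an example, not taken from this thread):

# recover files from the unmountable filesystem into a scratch directory;
# -v prints each file as it is restored
mkdir -p /mnt/rescue
sudo btrfs-restore -v /dev/sdb /mnt/rescue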
I tried:

max@s0:~$ sudo btrfs-restore /dev/sdb ~/restored
parent transid verify failed on 5468060241920 wanted 9096 found 7621
parent transid verify failed on 5468060241920 wanted 9096 found 7621
parent transid verify failed on 5468060241920 wanted 9096 found 7621
parent transid verify failed on 5468060241920 wanted 9096 found 7621
Ignoring transid failure
leaf parent key incorrect 5468060241920
parent transid verify failed on 5333392302080 wanted 9096 found 4585
parent transid verify failed on 5333392302080 wanted 9096 found 4585
Root objectid is 5
ret is -3

max@s0:~$ ls -lahs restored/Irina/
total 12K
4.0K drwxr-xr-x 3 root root 4.0K Jun  3 23:12 .
4.0K drwxrwxr-x 3 max  max  4.0K Jun  3 23:12 ..
4.0K drwxr-xr-x 2 root root 4.0K Jun  3 23:12 .idmapdir2
max@s0:~$ ls -lahs restored/Irina/.idmapdir2/
total 8.0K
4.0K drwxr-xr-x 2 root root 4.0K Jun  3 23:12 .
4.0K drwxr-xr-x 3 root root 4.0K Jun  3 23:12 ..
   0 -rw-r--r-- 1 root root    0 Jun  3 23:12 4.bucket.lock
   0 -rw-r--r-- 1 root root    0 Jun  3 23:12 7.bucket
max@s0:~$

dmesg:
[ 4764.795798] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 2 transid 9096 /dev/sdb
[ 4764.796901] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 3 transid 9095 /dev/sdd
[ 4764.797888] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 1 transid 9096 /dev/sda
[ 4764.799309] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 4 transid 9096 /dev/sde
[ 4764.801220] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 transid 9096 /dev/sdf

Command:

sudo btrfs-find-root /dev/sdb

gave me many lines like these:

Well block 4046409486336 seems great, but generation doesn't match, have=9087, want=9096
Well block 4046414626816 seems great, but generation doesn't match, have=9088, want=9096
Well block 4148447113216 seems great, but generation doesn't match, have=7618, want=9096
Well block 4148522024960 seems great, but generation doesn't match, have=9089, want=9096
Well block 4148539457536 seems great, but generation doesn't match, have=9090, want=9096
Well block 4455562448896 seems great, but generation doesn't match, have=9092, want=9096
Well block 4455568302080 seems great, but generation doesn't match, have=9091, want=9096
Well block 4848395739136 seems great, but generation doesn't match, have=9093, want=9096
Well block 4923796594688 seems great, but generation doesn't match, have=9094, want=9096
Well block 4923798065152 seems great, but generation doesn't match, have=9095, want=9096

max@s0:~$ sudo btrfs-restore -t 4923798065152 /dev/sdb ~/restored
parent transid verify failed on 4923798065152 wanted 9096 found 9095
parent transid verify failed on 4923798065152 wanted 9096 found 9095
parent transid verify failed on 4923798065152 wanted 9096 found 9095
parent transid verify failed on 4923798065152 wanted 9096 found 9095
Ignoring transid failure
Root objectid is 5
ret is -3

On 06/03/2012 10:59 PM, Liu Bo wrote:
> btw, have you tried restore (for attempting to recover data from an unmountable filesystem):
> https://btrfs.wiki.kernel.org/index.php/Restore
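Only the newest candidate root (generation 9095) has been tried at this point. One hedged follow-up, sketched from the btrfs-find-root output above, is to walk the candidate roots from newest generation to oldest and attempt a restore from each; the bytenr list below is copied from that output, and /mnt/rescue is again an example scratch location:

# try each candidate tree root, newest generation first
for bytenr in 4923798065152 4923796594688 4848395739136 \
              4455562448896 4455568302080 4148539457536 4148522024960; do
    dest=/mnt/rescue/root-$bytenr
    mkdir -p "$dest"
    echo "=== trying tree root $bytenr ==="
    # stop at the first candidate from which btrfs-restore exits cleanly;
    # if every run ends with "ret is -3", inspect what each one did recover
    sudo btrfs-restore -v -t "$bytenr" /dev/sdb "$dest" && break
done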
Hi Everyone,

Can I do anything else?

Max

On 06/03/2012 11:13 PM, Maxim Mikheev wrote:
> max@s0:~$ sudo btrfs-restore -t 4923798065152 /dev/sdb ~/restored
> ...
> Ignoring transid failure
> Root objectid is 5
> ret is -3
On 04.06.2012 04:59, Liu Bo wrote:
> On 06/04/2012 10:18 AM, Maxim Mikheev wrote:
>> max@s0:~$ sudo btrfs scrub start /dev/sdb
>> ERROR: getting dev info for scrub failed: Inappropriate ioctl for device
>> ...

Even to scrub a single device, the filesystem has to be mounted.
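In other words, scrub is an online operation: the tool issues ioctls against a mounted btrfs, which is why pointing it at a bare /dev/sdX of an unmounted filesystem only returns "Inappropriate ioctl for device". A sketch of the normal sequence on a filesystem that does mount, using the /tank mount point from this thread:

# mounting any one member device brings up the whole multi-device filesystem
sudo mount /dev/sdb /tank
# scrub the mounted filesystem (covers all member devices)
sudo btrfs scrub start /tank
# check progress and per-device error counts
sudo btrfs scrub status /tank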
How can I mount it in the first place?

On 06/04/2012 04:18 AM, Arne Jansen wrote:
> Even to scrub a single device, the filesystem has to be mounted.
On 04.06.2012 13:30, Maxim Mikheev wrote:
> How can I mount it in the first place?

Let me state it differently: if you can't mount it, you can't scrub it.
sector 2143292648) >>>> [ 2261.767942] Failed to read block groups: -5 >>>> [ 2261.778219] btrfs: open_ctree failed >>>> [ 2309.831415] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 1 >>>> transid 9096 /dev/sda >>>> [ 2309.904520] btrfs: disk space caching is enabled >>>> [ 2318.286463] parent transid verify failed on 5468060241920 wanted 9096 >>>> found 7621 >>>> [ 2318.302991] parent transid verify failed on 5468060241920 wanted 9096 >>>> found 7621 >>>> [ 2318.304000] btrfs read error corrected: ino 1 off 5468060241920 (dev >>>> /dev/sdd sector 2143292648) >>>> [ 2318.304013] Failed to read block groups: -5 >>>> [ 2318.314587] btrfs: open_ctree failed >>>> >>>> On 06/03/2012 10:16 PM, Liu Bo wrote: >>>>> On 06/04/2012 09:43 AM, Maxim Mikheev wrote: >>>>> >>>>>> Hi Liu, >>>>>> >>>>>> thanks for advice. I tried it before btrfsck. results are here: >>>>>> max@s0:~$ sudo mount /tank -o recovery >>>>>> [sudo] password for max: >>>>>> mount: wrong fs type, bad option, bad superblock on /dev/sdf, >>>>>> missing codepage or helper program, or other error >>>>>> In some cases useful info is found in syslog - try >>>>>> dmesg | tail or so >>>>>> >>>>>> max@s0:~$ sudo mount -o recovery /tank >>>>>> mount: wrong fs type, bad option, bad superblock on /dev/sdf, >>>>>> missing codepage or helper program, or other error >>>>>> In some cases useful info is found in syslog - try >>>>>> dmesg | tail or so >>>>>> >>>>> Two possible ways: >>>>> >>>>> 1) >>>>> I noticed that your btrfs had 5 partitions in all: >>>>> mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf >>>>> >>>>> Can you try to mount other disk partitions instead by hand, like: >>>>> mount /dev/sdb /tank >>>>> mount /dev/sdc /tank >>>>> mount /dev/sdd /tank >>>>> mount /dev/sde /tank >>>>> mount /dev/sdf /tank >>>>> >>>>> 2) >>>>> use btrfs''s scrub to resort to metadata backups created by RAID1. >>>>> >>>>> thanks, >>>>> liubo >>>>> >>>>>> dmesg after boot before mount -o recovery: >>>>>> [ 51.829352] parent transid verify failed on 5468060241920 wanted 9096 >>>>>> found 7621 >>>>>> [ 51.841153] parent transid verify failed on 5468060241920 wanted 9096 >>>>>> found 7621 >>>>>> [ 51.841603] btrfs read error corrected: ino 1 off 5468060241920 (dev >>>>>> /dev/sdb sector 2143292648) >>>>>> [ 51.841610] Failed to read block groups: -5 >>>>>> [ 51.848057] btrfs: open_ctree failed >>>>>> .............................. 
>>>>>> dmesg after both mounts: >>>>>> >>>>>> [ 123.687773] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 >>>>>> transid 9096 /dev/sdf >>>>>> [ 123.733678] btrfs: use lzo compression >>>>>> [ 123.733683] btrfs: enabling auto recovery >>>>>> [ 123.733686] btrfs: disk space caching is enabled >>>>>> [ 131.699910] parent transid verify failed on 5468060241920 wanted 9096 >>>>>> found 7621 >>>>>> [ 131.714018] parent transid verify failed on 5468060241920 wanted 9096 >>>>>> found 7621 >>>>>> [ 131.715059] btrfs read error corrected: ino 1 off 5468060241920 (dev >>>>>> /dev/sdb sector 2143292648) >>>>>> [ 131.715072] Failed to read block groups: -5 >>>>>> [ 131.727176] btrfs: open_ctree failed >>>>>> [ 161.697873] device fsid c9776e19-37eb-4f9c-bd6b-04e8dde97682 devid 5 >>>>>> transid 9096 /dev/sdf >>>>>> [ 161.746345] btrfs: use lzo compression >>>>>> [ 161.746354] btrfs: enabling auto recovery >>>>>> [ 161.746358] btrfs: disk space caching is enabled >>>>>> [ 169.720823] parent transid verify failed on 5468060241920 wanted 9096 >>>>>> found 7621 >>>>>> [ 169.732048] parent transid verify failed on 5468060241920 wanted 9096 >>>>>> found 7621 >>>>>> [ 169.732611] btrfs read error corrected: ino 1 off 5468060241920 (dev >>>>>> /dev/sdb sector 2143292648) >>>>>> [ 169.732623] Failed to read block groups: -5 >>>>>> [ 169.743437] btrfs: open_ctree failed >>>>>> >>>>>> So It does not work. I have seen in some posts command: >>>>>> >>>>>> sudo mount -s 2 -o recovery /tank >>>>>> Should I try it? >>>>>> >>>>>> Please help me, I need to get this data ASAP. >>>>>> >>>>>> Regards, >>>>>> Max >>>>>> >>>>>> On 06/03/2012 09:22 PM, Liu Bo wrote: >>>>>>> On 06/02/2012 09:43 PM, Maxim Mikheev wrote: >>>>>>> >>>>>>>> Repair was not helpful. >>>>>>>> Is any other ways to get access to data? >>>>>>>> >>>>>>>> Please help.... >>>>>>>> >>>>>>> Hi Maxim, >>>>>>> >>>>>>> Besides btrfsck --repair, we also have a recovery mount option to deal >>>>>>> with your situation, >>>>>>> maybe you can try mount xxx -o recovery and see if it helps? >>>>>>> >>>>>>> >>>>>>> thanks, >>>>>>> liubo >>>>>>> >>>>>>>> On 05/30/2012 11:15 PM, Michael K wrote: >>>>>>>>> Let it run to completion. There is little you can do other than hope >>>>>>>>> and wait. >>>>>>>>> >>>>>>>>> On May 30, 2012 9:02 PM, "Maxim Mikheev"<mikhmv@gmail.com >>>>>>>>> <mailto:mikhmv@gmail.com>> wrote: >>>>>>>>> >>>>>>>>> btrfsck --repair running already for 26 hours. >>>>>>>>> >>>>>>>>> Is it have sense to wait more? >>>>>>>>> >>>>>>>>> Thanks >>>>>>>>> >>>>>>>>> On 05/29/2012 07:36 PM, cwillu wrote: >>>>>>>>> >>>>>>>>> On Tue, May 29, 2012 at 5:24 PM, Maxim >>>>>>>>> Mikheev<mikhmv@gmail.com<mailto:mikhmv@gmail.com>> >>>>>>>>> wrote: >>>>>>>>> >>>>>>>>> Thank you for your answer. >>>>>>>>> >>>>>>>>> >>>>>>>>> The system kernel was and now: >>>>>>>>> >>>>>>>>> Linux s0 3.4.0-030400-generic #201205210521 SMP Mon >>>>>>>>> May 21 >>>>>>>>> 09:22:02 UTC 2012 >>>>>>>>> x86_64 x86_64 x86_64 GNU/Linux >>>>>>>>> >>>>>>>>> the raid was created by: >>>>>>>>> mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf >>>>>>>>> >>>>>>>>> Disk are connected through RocketRaid 2670. >>>>>>>>> >>>>>>>>> for mounting I used line in fstab: >>>>>>>>> UUID=c9776e19-37eb-4f9c-bd6b-04e8dde97682 /tank >>>>>>>>> btrfs >>>>>>>>> defaults,compress=lzo 0 1 >>>>>>>>> >>>>>>>>> On machine was running several Virtual machines. >>>>>>>>> Only one >>>>>>>>> was actively using >>>>>>>>> disks. >>>>>>>>> >>>>>>>>> VM has active several threads: >>>>>>>>> 1. 
2 threads reading big files (50GB each) >>>>>>>>> 2. reading from 50 files and writing one big file >>>>>>>>> 3. The kernel panic happens when I run another program >>>>>>>>> with 30 threads of >>>>>>>>> reading/writing of small files. >>>>>>>>> >>>>>>>>> Virtual Machine accessed to underline btrfs through 9-p >>>>>>>>> file system which >>>>>>>>> actively used xattr. >>>>>>>>> >>>>>>>>> After reboot system was in this stage. >>>>>>>>> >>>>>>>>> I hope that btrfsck --repair will not make it worse, >>>>>>>>> It is >>>>>>>>> now running. >>>>>>>>> >>>>>>>>> **twitch** >>>>>>>>> >>>>>>>>> Well, I also hope it won''t make it worse. Do not cancel it >>>>>>>>> now, let >>>>>>>>> it finish (aborting it will make things worse), but I >>>>>>>>> suggest >>>>>>>>> waiting >>>>>>>>> until a few more people have weighed in before attempting >>>>>>>>> anything >>>>>>>>> beyond that. >>>>>>>>> >>>>>>>>> -- >>>>>>>>> To unsubscribe from this list: send the line "unsubscribe >>>>>>>>> linux-btrfs" in >>>>>>>>> the body of a message to majordomo@vger.kernel.org >>>>>>>>> <mailto:majordomo@vger.kernel.org> >>>>>>>>> More majordomo info at >>>>>>>>> http://vger.kernel.org/majordomo-info.html >>>>>>>>>-- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
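Since the thread above keeps retrying the same mount by hand against each device, a short loop can make those attempts systematic and capture the matching dmesg output. This is an editorial sketch, not a command from the thread: the device list and the /tank mount point follow Maxim's description, and ro is added so the attempts cannot write anything to the damaged filesystem:

#!/bin/sh
# Try a read-only recovery mount via each member device in turn;
# mounting any one member brings up the whole multi-device filesystem.
for dev in /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf; do
    echo "=== trying $dev ==="
    if mount -t btrfs -o ro,recovery "$dev" /tank; then
        echo "mounted via $dev"
        break
    fi
    dmesg | tail -n 8    # show why this attempt failed
done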
Hi Arne,

Can you advise how I can recover the data?
I tried almost everything that I found on https://btrfs.wiki.kernel.org

/btrfs-restore restored some files, but it is not what was stored.

I have seen this command:

--------------------------------------------------
In case of a corrupted superblock, start by asking btrfsck to use an
alternate copy of the superblock instead of the superblock #0. This
is achieved via the -s option followed by the number of the
alternate copy you wish to use. In the following example we ask for
using the superblock copy #2 of /dev/sda7:

# ./btrfsck -s 2 /dev/sda7
-----------------------------------------
but it gave me:
$ sudo btrfsck -s 2 /dev/sdb
btrfsck: invalid option -- 's'
usage: btrfsck dev
Btrfs Btrfs v0.19

What more can I do?

On 06/04/2012 07:32 AM, Arne Jansen wrote:
> On 04.06.2012 13:30, Maxim Mikheev wrote:
>> How can I mount it in the first place?
> Let me state it differently: if you can't mount it, you can't scrub it.
[... rest of quoted thread snipped ...]
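For reference on the superblock copies the wiki excerpt above talks about: btrfs keeps mirror superblocks at fixed byte offsets (64KiB, 64MiB, and 256GiB) on every device, each carrying the magic string "_BHRfS_M" 64 bytes in. The v0.19 btrfsck shipped by distributions predates the -s option, which is why it is rejected here. As a rough, hand-rolled check of which mirrors are still readable, the offsets and the magic are btrfs's documented on-disk layout, while the script itself is only an illustrative sketch:

#!/bin/sh
# Print the magic bytes of each btrfs superblock mirror on one device.
DEV=/dev/sdb
for off in 65536 67108864 274877906944; do    # 64KiB, 64MiB, 256GiB
    magic=$(dd if="$DEV" bs=1 skip=$((off + 64)) count=8 2>/dev/null)
    echo "superblock at offset $off: magic='$magic'"
done

Newer btrfs-progs git trees also carry a btrfs-select-super tool that can copy a good mirror over the primary superblock; it may need to be built explicitly, and overwriting the super is a last resort once read-only restore attempts are exhausted.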
On Mon, Jun 04, 2012 at 07:43:40AM -0400, Maxim Mikheev wrote:
> Hi Arne,
>
> Can you advise how I can recover the data?
> I tried almost everything that I found on https://btrfs.wiki.kernel.org
>
> /btrfs-restore restored some files, but it is not what was stored.

Can you post the complete output of find-root, please?

> I have seen this command:
>
> --------------------------------------------------
> In case of a corrupted superblock, start by asking btrfsck to use an
> alternate copy of the superblock instead of the superblock #0. This
> is achieved via the -s option followed by the number of the
> alternate copy you wish to use. In the following example we ask for
> using the superblock copy #2 of /dev/sda7:
>
> # ./btrfsck -s 2 /dev/sda7
> -----------------------------------------
> but it gave me:
> $ sudo btrfsck -s 2 /dev/sdb
> btrfsck: invalid option -- 's'
> usage: btrfsck dev
> Btrfs Btrfs v0.19

What exact version of the package do you have? Did you compile from
a recent git, or do you have a distribution -progs package installed?
If the latter, what date does it have in the version number?

Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
           --- This year, I'm giving up Lent. ---
Thank you for helping.

~$ uname -a
Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

I compiled the progs from recent git (a week or two ago). I can compile them again if there are updates.

The output of btrfs-find-root is pretty long; see below:

max@s0:~$ sudo btrfs-find-root /dev/sdb
Super think's the tree root is at 5532762525696, chunk root 20979712
Well block 619435147264 seems great, but generation doesn't match, have=8746, want=9096
Well block 743223128064 seems great, but generation doesn't match, have=8748, want=9096
Well block 743248633856 seems great, but generation doesn't match, have=8752, want=9096
Well block 743266234368 seems great, but generation doesn't match, have=8753, want=9096
Well block 969724792832 seems great, but generation doesn't match, have=5637, want=9096
Well block 1098761064448 seems great, but generation doesn't match, have=8754, want=9096
Well block 1098864041984 seems great, but generation doesn't match, have=8755, want=9096
Well block 1098934513664 seems great, but generation doesn't match, have=5669, want=9096
Well block 1158068850688 seems great, but generation doesn't match, have=8756, want=9096
Well block 1158075731968 seems great, but generation doesn't match, have=8758, want=9096
Well block 1158076698624 seems great, but generation doesn't match, have=8757, want=9096
Well block 1158088052736 seems great, but generation doesn't match, have=8759, want=9096
Well block 1158100176896 seems great, but generation doesn't match, have=8760, want=9096
Well block 1158148661248 seems great, but generation doesn't match, have=8761, want=9096
Well block 2100757860352 seems great, but generation doesn't match, have=5764, want=9096
Well block 2466610917376 seems great, but generation doesn't match, have=8762, want=9096
Well block 2466672377856 seems great, but generation doesn't match, have=8763, want=9096
Well block 2488210591744 seems great, but generation doesn't match, have=8764, want=9096
Well block 2531410206720 seems great, but generation doesn't match, have=8767, want=9096
Well block 2531417419776 seems great, but generation doesn't match, have=8768, want=9096
Well block 2531424862208 seems great, but generation doesn't match, have=8766, want=9096
Well block 2531434991616 seems great, but generation doesn't match, have=8771, want=9096
Well block 2531487129600 seems great, but generation doesn't match, have=8772, want=9096
Well block 2531488305152 seems great, but generation doesn't match, have=8773, want=9096
Well block 2531535376384 seems great, but generation doesn't match, have=8774, want=9096
Well block 2531544256512 seems great, but generation doesn't match, have=8775, want=9096
Well block 2531545665536 seems great, but generation doesn't match, have=8777, want=9096
Well block 2531545931776 seems great, but generation doesn't match, have=8776, want=9096
Well block 2531556044800 seems great, but generation doesn't match, have=8778, want=9096
Well block 2531566743552 seems great, but generation doesn't match, have=8779, want=9096
Well block 2531568861184 seems great, but generation doesn't match, have=8781, want=9096
Well block 2531580190720 seems great, but generation doesn't match, have=8780, want=9096
Well block 2531622813696 seems great, but generation doesn't match, have=8783, want=9096
Well block 2531640410112 seems great, but generation doesn't match, have=8784, want=9096
Well block 2579974737920 seems great, but generation doesn't match, have=5973, want=9096
Well block 2579981524992 seems great, but generation doesn't match, have=8786, want=9096
Well block 2579986628608 seems great, but generation doesn't match, have=8785, want=9096
Well block 2579986755584 seems great, but generation doesn't match, have=8787, want=9096
Well block 2580003811328 seems great, but generation doesn't match, have=8788, want=9096
Well block 2580047167488 seems great, but generation doesn't match, have=8789, want=9096
Well block 2580099772416 seems great, but generation doesn't match, have=8790, want=9096
Well block 2580101951488 seems great, but generation doesn't match, have=8791, want=9096
Well block 2580141211648 seems great, but generation doesn't match, have=8793, want=9096
Well block 2580156325888 seems great, but generation doesn't match, have=8795, want=9096
Well block 2580163178496 seems great, but generation doesn't match, have=8797, want=9096
Well block 2580167774208 seems great, but generation doesn't match, have=8796, want=9096
Well block 2580172333056 seems great, but generation doesn't match, have=8798, want=9096
Well block 2580174557184 seems great, but generation doesn't match, have=8799, want=9096
Well block 2580185227264 seems great, but generation doesn't match, have=8801, want=9096
Well block 2580185432064 seems great, but generation doesn't match, have=8800, want=9096
Well block 2623192694784 seems great, but generation doesn't match, have=8802, want=9096
Well block 2623196753920 seems great, but generation doesn't match, have=8803, want=9096
Well block 2623215923200 seems great, but generation doesn't match, have=8805, want=9096
Well block 2623224307712 seems great, but generation doesn't match, have=8806, want=9096
Well block 2623225470976 seems great, but generation doesn't match, have=8807, want=9096
Well block 2623260155904 seems great, but generation doesn't match, have=8808, want=9096
Well block 2623262482432 seems great, but generation doesn't match, have=8809, want=9096
Well block 2623318437888 seems great, but generation doesn't match, have=8810, want=9096
Well block 2623324426240 seems great, but generation doesn't match, have=8812, want=9096
Well block 2623325806592 seems great, but generation doesn't match, have=8811, want=9096
Well block 2623406964736 seems great, but generation doesn't match, have=8813, want=9096
Well block 2623412379648 seems great, but generation doesn't match, have=8814, want=9096
Well block 2714790506496 seems great, but generation doesn't match, have=8815, want=9096
Well block 2827742171136 seems great, but generation doesn't match, have=6089, want=9096
Well block 2827822125056 seems great, but generation doesn't match, have=8816, want=9096
Well block 2827859574784 seems great, but generation doesn't match, have=8817, want=9096
Well block 2827885608960 seems great, but generation doesn't match, have=8818, want=9096
Well block 2827965370368 seems great, but generation doesn't match, have=8819, want=9096
Well block 2827996184576 seems great, but generation doesn't match, have=8820, want=9096
Well block 2887060779008 seems great, but generation doesn't match, have=6096, want=9096
Well block 2887071612928 seems great, but generation doesn't match, have=8823, want=9096
Well block 2887072702464 seems great, but generation doesn't match, have=8821, want=9096
Well block 2887078473728 seems great, but generation doesn't match, have=8822, want=9096
Well block 2887120949248 seems great, but generation doesn't match, have=8825, want=9096
Well block 2887138783232 seems great, but generation doesn't match, have=8827, want=9096
Well block 2887139446784 seems great, but generation doesn't match, have=8828, want=9096
Well block 2887146373120 seems great, but generation doesn't match, have=8829, want=9096
Well block 2887158681600 seems great, but generation doesn't match, have=8830, want=9096
Well block 2887163641856 seems great, but generation doesn't match, have=8831, want=9096
Well block 2887189938176 seems great, but generation doesn't match, have=6113, want=9096
Well block 2887196823552 seems great, but generation doesn't match, have=6114, want=9096
Well block 2887205089280 seems great, but generation doesn't match, have=6115, want=9096
Well block 2887206965248 seems great, but generation doesn't match, have=6116, want=9096
Well block 2887213744128 seems great, but generation doesn't match, have=6117, want=9096
Well block 2887220994048 seems great, but generation doesn't match, have=6118, want=9096
Well block 2887242031104 seems great, but generation doesn't match, have=6119, want=9096
Well block 2887250567168 seems great, but generation doesn't match, have=6120, want=9096
Well block 2887255285760 seems great, but generation doesn't match, have=6121, want=9096
Well block 2887261667328 seems great, but generation doesn't match, have=6122, want=9096
Well block 2887268433920 seems great, but generation doesn't match, have=6123, want=9096
Well block 2887275102208 seems great, but generation doesn't match, have=6124, want=9096
Well block 2887281631232 seems great, but generation doesn't match, have=6125, want=9096
Well block 2887284424704 seems great, but generation doesn't match, have=6126, want=9096
Well block 2887291019264 seems great, but generation doesn't match, have=6127, want=9096
Well block 2887296684032 seems great, but generation doesn't match, have=6128, want=9096
Well block 2887303159808 seems great, but generation doesn't match, have=6129, want=9096
Well block 2887309631488 seems great, but generation doesn't match, have=8833, want=9096
Well block 2887310200832 seems great, but generation doesn't match, have=8832, want=9096
Well block 2946415992832 seems great, but generation doesn't match, have=8834, want=9096
Well block 2946421940224 seems great, but generation doesn't match, have=8835, want=9096
Well block 2946563887104 seems great, but generation doesn't match, have=8837, want=9096
Well block 2946566844416 seems great, but generation doesn't match, have=8838, want=9096
Well block 2984269713408 seems great, but generation doesn't match, have=8844, want=9096
Well block 2984277966848 seems great, but generation doesn't match, have=8840, want=9096
Well block 2984284127232 seems great, but generation doesn't match, have=8845, want=9096
Well block 2984285650944 seems great, but generation doesn't match, have=8846, want=9096
Well block 2984365961216 seems great, but generation doesn't match, have=8847, want=9096
Well block 2984382795776 seems great, but generation doesn't match, have=8848, want=9096
Well block 2984383729664 seems great, but generation doesn't match, have=8849, want=9096
Well block 2984385560576 seems great, but generation doesn't match, have=8850, want=9096
Well block 2984389398528 seems great, but generation doesn't match, have=8851, want=9096
Well block 2984396165120 seems great, but generation doesn't match, have=8852, want=9096
Well block 2984406396928 seems great, but generation doesn't match, have=8854, want=9096
Well block 2984458833920 seems great, but generation doesn't match, have=8856, want=9096
Well block 2984486629376 seems great, but generation doesn't match, have=8857, want=9096
Well block 3022114922496 seems great, but generation doesn't match, have=8860, want=9096
Well block 3022121263104 seems great, but generation doesn't match, have=8859, want=9096
Well block 3022132695040 seems great, but generation doesn't match, have=8861, want=9096
Well block 3022162219008 seems great, but generation doesn't match, have=8862, want=9096
Well block 3022205132800 seems great, but generation doesn't match, have=8863, want=9096
Well block 3022215454720 seems great, but generation doesn't match, have=8864, want=9096
Well block 3022216146944 seems great, but generation doesn't match, have=8865, want=9096
Well block 3022300733440 seems great, but generation doesn't match, have=8866, want=9096
Well block 3022307860480 seems great, but generation doesn't match, have=8867, want=9096
Well block 3065306001408 seems great, but generation doesn't match, have=8868, want=9096
Well block 3065333374976 seems great, but generation doesn't match, have=8870, want=9096
Well block 3065333981184 seems great, but generation doesn't match, have=8869, want=9096
Well block 3065341300736 seems great, but generation doesn't match, have=8871, want=9096
Well block 3065341947904 seems great, but generation doesn't match, have=8872, want=9096
Well block 3065384251392 seems great, but generation doesn't match, have=8873, want=9096
Well block 3065395499008 seems great, but generation doesn't match, have=8874, want=9096
Well block 3065432784896 seems great, but generation doesn't match, have=8875, want=9096
Well block 3065435996160 seems great, but generation doesn't match, have=8876, want=9096
Well block 3065474654208 seems great, but generation doesn't match, have=8877, want=9096
Well block 3065476714496 seems great, but generation doesn't match, have=8878, want=9096
Well block 3065496293376 seems great, but generation doesn't match, have=8880, want=9096
Well block 3065496625152 seems great, but generation doesn't match, have=8881, want=9096
Well block 3065499615232 seems great, but generation doesn't match, have=8882, want=9096
Well block 3065522581504 seems great, but generation doesn't match, have=8886, want=9096
Well block 3103147339776 seems great, but generation doesn't match, have=8887, want=9096
Well block 3103149096960 seems great, but generation doesn't match, have=8888, want=9096
Well block 3103154950144 seems great, but generation doesn't match, have=8889, want=9096
Well block 3103165263872 seems great, but generation doesn't match, have=8890, want=9096
Well block 3103165509632 seems great, but generation doesn't match, have=8891, want=9096
Well block 3103168155648 seems great, but generation doesn't match, have=8892, want=9096
Well block 3103171592192 seems great, but generation doesn't match, have=8893, want=9096
Well block 3103175024640 seems great, but generation doesn't match, have=8894, want=9096
Well block 3103185793024 seems great, but generation doesn't match, have=8896, want=9096
Well block 3103188922368 seems great, but generation doesn't match, have=8895, want=9096
Well block 3103210176512 seems great, but generation doesn't match, have=8897, want=9096
Well block 3103210971136 seems great, but generation doesn't match, have=8898, want=9096
Well block 3103228485632 seems great, but generation doesn't match, have=8899, want=9096
Well block 3103237844992 seems great, but generation doesn't match, have=8900, want=9096
Well block 3103238144000 seems great, but generation doesn't match, have=8901, want=9096
Well block 3103240544256 seems great, but generation doesn't match, have=8902, want=9096
Well block 3103243677696 seems great, but generation doesn't match, have=8903, want=9096
Well block 3103244447744 seems great, but generation doesn't match, have=8904, want=9096
Well block 3103258718208 seems great, but generation doesn't match, have=8905, want=9096
Well block 3103261290496 seems great, but generation doesn't match, have=8906, want=9096
Well block 3103281463296 seems great, but generation doesn't match, have=8907, want=9096
Well block 3103282315264 seems great, but generation doesn't match, have=8908, want=9096
Well block 3103298441216 seems great, but generation doesn't match, have=8909, want=9096
Well block 3103302393856 seems great, but generation doesn't match, have=6389, want=9096
Well block 3103323017216 seems great, but generation doesn't match, have=8910, want=9096
Well block 3103323447296 seems great, but generation doesn't match, have=8911, want=9096
Well block 3103325896704 seems great, but generation doesn't match, have=8912, want=9096
Well block 3103329468416 seems great, but generation doesn't match, have=8913, want=9096
Well block 3103330926592 seems great, but generation doesn't match, have=8914, want=9096
Well block 3103345201152 seems great, but generation doesn't match, have=8915, want=9096
Well block 3103345958912 seems great, but generation doesn't match, have=8916, want=9096
Well block 3103368241152 seems great, but generation doesn't match, have=8917, want=9096
Well block 3103369076736 seems great, but generation doesn't match, have=8918, want=9096
Well block 3103382474752 seems great, but generation doesn't match, have=8919, want=9096
Well block 3103395471360 seems great, but generation doesn't match, have=8920, want=9096
Well block 3103395753984 seems great, but generation doesn't match, have=8921, want=9096
Well block 3103398383616 seems great, but generation doesn't match, have=8922, want=9096
Well block 3103401709568 seems great, but generation doesn't match, have=8923, want=9096
Well block 3103403487232 seems great, but generation doesn't match, have=8924, want=9096
Well block 3103411101696 seems great, but generation doesn't match, have=8925, want=9096
Well block 3216163819520 seems great, but generation doesn't match, have=6451, want=9096
Well block 3216170438656 seems great, but generation doesn't match, have=6507, want=9096
Well block 3216180826112 seems great, but generation doesn't match, have=6509, want=9096
Well block 3216188633088 seems great, but generation doesn't match, have=8926, want=9096
Well block 3216201068544 seems great, but generation doesn't match, have=8927, want=9096
Well block 3216202895360 seems great, but generation doesn't match, have=8928, want=9096
Well block 3216226570240 seems great, but generation doesn't match, have=8929, want=9096
Well block 3216227221504 seems great, but generation doesn't match, have=8930, want=9096
Well block 3216242212864 seems great, but generation doesn't match, have=8931, want=9096
Well block 3216250953728 seems great, but generation doesn't match, have=8932, want=9096
Well block 3216251314176 seems great, but generation doesn't match, have=8933, want=9096
Well block 3216253743104 seems great, but generation doesn't match, have=8934, want=9096
Well block 3216257495040 seems great, but generation doesn't match, have=8935, want=9096
Well block 3216260059136 seems great, but generation doesn't match, have=8936, want=9096
Well block 3216274374656 seems great, but generation doesn't match, have=8938, want=9096
Well block 3216274612224 seems great, but generation doesn't match, have=8937, want=9096
Well block 3216289120256 seems great, but generation doesn't match, have=6477, want=9096
Well block 3216298061824 seems great, but generation doesn't match, have=6520, want=9096
Well block 3216307097600 seems great, but generation doesn't match, have=6521, want=9096
Well block 3216364236800 seems great, but generation doesn't match, have=8939, want=9096
Well block 3216365404160 seems great, but generation doesn't match, have=8940, want=9096
Well block 3216387854336 seems great, but generation doesn't match, have=8941, want=9096
Well block 3216403558400 seems great, but generation doesn't match, have=8942, want=9096
Well block 3216404262912 seems great, but generation doesn't match, have=8943, want=9096
Well block 3216407031808 seems great, but generation doesn't match, have=8944, want=9096
Well block 3216410210304 seems great, but generation doesn't match, have=8945, want=9096
Well block 3216411242496 seems great, but generation doesn't match, have=8946, want=9096
Well block 3259377135616 seems great, but generation doesn't match, have=8947, want=9096
Well block 3259392462848 seems great, but generation doesn't match, have=8949, want=9096
Well block 3259393642496 seems great, but generation doesn't match, have=8950, want=9096
Well block 3259398754304 seems great, but generation doesn't match, have=8948, want=9096
Well block 3259411787776 seems great, but generation doesn't match, have=8951, want=9096
Well block 3259419025408 seems great, but generation doesn't match, have=6571, want=9096
Well block 3259422646272 seems great, but generation doesn't match, have=8952, want=9096
Well block 3259423043584 seems great, but generation doesn't match, have=8953, want=9096
Well block 3259430191104 seems great, but generation doesn't match, have=8954, want=9096
Well block 3259436924928 seems great, but generation doesn't match, have=8955, want=9096
Well block 3259439054848 seems great, but generation doesn't match, have=8956, want=9096
Well block 3259453116416 seems great, but generation doesn't match, have=8958, want=9096
Well block 3259454189568 seems great, but generation doesn't match, have=8957, want=9096
Well block 3259479363584 seems great, but generation doesn't match, have=8959, want=9096
Well block 3259480965120 seems great, but generation doesn't match, have=8960, want=9096
Well block 3259514920960 seems great, but generation doesn't match, have=8961, want=9096
Well block 3259539255296 seems great, but generation doesn't match, have=8962, want=9096
Well block 3259540201472 seems great, but generation doesn't match, have=8963, want=9096
Well block 3259547783168 seems great, but generation doesn't match, have=8964, want=9096
Well block 3259555237888 seems great, but generation doesn't match, have=8965, want=9096
Well block 3259556577280 seems great, but generation doesn't match, have=8966, want=9096
Well block 3259586670592 seems great, but generation doesn't match, have=6553, want=9096
Well block 3259622793216 seems great, but generation doesn't match, have=8967, want=9096
Well block 3259631628288 seems great, but generation doesn't match, have=8968, want=9096
Well block 3259633111040 seems great, but generation doesn't match, have=8969, want=9096
Well block 3307982426112 seems great, but generation doesn't match, have=8971, want=9096
Well block 3308003274752 seems great, but generation doesn't match, have=8972, want=9096
Well block 3308008677376 seems great, but generation doesn't match, have=6881, want=9096
Well block 3308015095808 seems great, but generation doesn't match, have=8975, want=9096
Well block 3308031418368 seems great, but generation doesn't match, have=8977, want=9096
Well block 3308041916416 seems great, but generation doesn't match, have=8979, want=9096
Well block 3308047122432 seems great, but generation doesn't match, have=8978, want=9096
Well block 3308071120896 seems great, but generation doesn't match, have=8980, want=9096
Well block 3308072128512 seems great, but generation doesn't match, have=8981, want=9096
Well block 3308085768192 seems great, but generation doesn't match, have=8982, want=9096
Well block 3308094103552 seems great, but generation doesn't match, have=8983, want=9096
Well block 3308094361600 seems great, but generation doesn't match, have=8984, want=9096
Well block 3308096815104 seems great, but generation doesn't match, have=8985, want=9096
Well block 3308100771840 seems great, but generation doesn't match, have=8986, want=9096
Well block 3308102782976 seems great, but generation doesn't match, have=8987, want=9096
Well block 3308120915968 seems great, but generation doesn't match, have=6936, want=9096
Well block 3308123820032 seems great, but generation doesn't match, have=8988, want=9096
Well block 3308124946432 seems great, but generation doesn't match, have=8989, want=9096
Well block 3308149964800 seems great, but generation doesn't match, have=8990, want=9096
Well block 3308151025664 seems great, but generation doesn't match, have=8991, want=9096
Well block 3308163780608 seems great, but generation doesn't match, have=8992, want=9096
Well block 3308167315456 seems great, but generation doesn't match, have=6961, want=9096
Well block 3308169089024 seems great, but generation doesn't match, have=6962, want=9096
Well block 3308169162752 seems great, but generation doesn't match, have=6963, want=9096
Well block 3308170137600 seems great, but generation doesn't match, have=6964, want=9096
Well block 3308173950976 seems great, but generation doesn't match, have=6965, want=9096
Well block 3308174385152 seems great, but generation doesn't match, have=6966, want=9096
Well block 3308176388096 seems great, but generation doesn't match, have=8993, want=9096
Well block 3308176777216 seems great, but generation doesn't match, have=8994, want=9096
Well block 3308179423232 seems great, but generation doesn't match, have=8995, want=9096
Well block 3308183064576 seems great, but generation doesn't match, have=8996, want=9096
Well block 3308184117248 seems great, but generation doesn't match, have=8997, want=9096
Well block 3308192186368 seems great, but generation doesn't match, have=6974, want=9096
Well block 3308195463168 seems great, but generation doesn't match, have=8998, want=9096
Well block 3308196433920 seems great, but generation doesn't match, have=8999, want=9096
Well block 3308217372672 seems great, but generation doesn't match, have=9000, want=9096
Well block 3308218216448 seems great, but generation doesn't match, have=9001, want=9096
Well block 3356598226944 seems great, but generation doesn't match, have=7009, want=9096
Well block 3356689129472 seems great, but generation doesn't match, have=7012, want=9096
Well block 3356693929984 seems great, but generation doesn't match, have=7013, want=9096
Well block 3356699000832 seems great, but generation doesn't match, have=7015, want=9096
Well block 3356700569600 seems great, but generation doesn't match, have=9002, want=9096
Well block 3356708118528 seems great, but generation doesn't match, have=9003, want=9096
Well block 3356712800256 seems great, but generation doesn't match, have=9005, want=9096
Well block 3356714262528 seems great, but generation doesn't match, have=9004, want=9096
Well block 3356742012928 seems great, but generation doesn't match, have=9009, want=9096
Well block 3356807712768 seems great, but generation doesn't match, have=9010, want=9096
Well block 3356810645504 seems great, but generation doesn't match, have=9011, want=9096
Well block 3405159735296 seems great, but generation doesn't match, have=5263, want=9096
Well block 3405196206080 seems great, but generation doesn't match, have=9012, want=9096
Well block 3405245640704 seems great, but generation doesn't match, have=9015, want=9096
Well block 3405265301504 seems great, but generation doesn't match, have=9016, want=9096
Well block 3405270106112 seems great, but generation doesn't match, have=9017, want=9096
Well block 3405289385984 seems great, but generation doesn't match, have=9019, want=9096
Well block 3405383557120 seems great, but generation doesn't match, have=9020, want=9096
Well block 3405384617984 seems great, but generation doesn't match, have=9021, want=9096
Well block 3475205132288 seems great, but generation doesn't match, have=5278, want=9096
Well block 3475206082560 seems great, but generation doesn't match, have=5279, want=9096
Well block 3475230576640 seems great, but generation doesn't match, have=7123, want=9096
Well block 3475232927744 seems great, but generation doesn't match, have=7125, want=9096
Well block 3475235311616 seems great, but generation doesn't match, have=9023, want=9096
Well block 3475237179392 seems great, but generation doesn't match, have=9025, want=9096
Well block 3475240382464 seems great, but generation doesn't match, have=7128, want=9096
Well block 3475266129920 seems great, but generation doesn't match, have=9028, want=9096
Well block 3475279261696 seems great, but generation doesn't match, have=9027, want=9096
Well block 3475337981952 seems great, but generation doesn't match, have=9032, want=9096
Well block 3475352297472 seems great, but generation doesn't match, have=9031, want=9096
Well block 3518444724224 seems great, but generation doesn't match, have=9038, want=9096
Well block 3518445928448 seems great, but generation doesn't match, have=9035, want=9096
Well block 3518446641152 seems great, but generation doesn't match, have=9033, want=9096
Well block 3518460506112 seems great, but generation doesn't match, have=9042, want=9096
Well block 3518469042176 seems great, but generation doesn't match, have=9041, want=9096
Well block 3518602706944 seems great, but generation doesn't match, have=9048, want=9096
Well block 3518610194432 seems great, but generation doesn't match, have=9044, want=9096
Well block 3518614880256 seems great, but generation doesn't match, have=9049, want=9096
Well block 3518619107328 seems great, but generation doesn't match, have=9050, want=9096
Well block 3567122915328 seems great, but generation doesn't match, have=9054, want=9096
Well block 3567130849280 seems great, but generation doesn't match, have=9055, want=9096
Well block 3567159414784 seems great, but generation doesn't match, have=9056, want=9096
Well block 3567164891136 seems great, but generation doesn't match, have=9059, want=9096
Well block 3567209717760 seems great, but generation doesn't match, have=9061, want=9096
Well block 3567210893312 seems great, but generation doesn't match, have=9062, want=9096
Well block 3674660569088 seems great, but generation doesn't match, have=9063, want=9096
Well block 3674713247744 seems great, but generation doesn't match, have=9065, want=9096
Well block 3674727108608 seems great, but generation doesn't match, have=9066, want=9096
Well block 3674785320960 seems great, but generation doesn't match, have=9067, want=9096
Well block 3674788827136 seems great, but generation doesn't match, have=9069, want=9096
Well block 3674792534016 seems great, but generation doesn't match, have=9068, want=9096
Well block 3674808315904 seems great, but generation doesn't match, have=9071, want=9096
Well block 3728604938240 seems great, but generation doesn't match, have=5297, want=9096
Well block 3728635133952 seems great, but generation doesn't match, have=7598, want=9096
Well block 3728682438656 seems great, but generation doesn't match, have=7599, want=9096
Well block 3728770461696 seems great, but generation doesn't match, have=9074, want=9096
Well block 3728819929088 seems great, but generation doesn't match, have=9073, want=9096
Well block 3820340637696 seems great, but generation doesn't match, have=9075, want=9096
Well block 3960145862656 seems great, but generation doesn't match, have=9076, want=9096
Well block 4046161489920 seems great, but generation doesn't match, have=9077, want=9096
Well block 4046213595136 seems great, but generation doesn't match, have=9079, want=9096
Well block 4046217637888 seems great, but generation doesn't match, have=9081, want=9096
Well block 4046217846784 seems great, but generation doesn't match, have=9080, want=9096
Well block 4046252736512 seems great, but generation doesn't match, have=9083, want=9096
Well block 4046301515776 seems great, but generation doesn't match, have=9085, want=9096
Well block 4046302756864 seems great, but generation doesn't match, have=9084, want=9096
Well block 4046358921216 seems great, but generation doesn't match, have=9086, want=9096
Well block 4046409486336 seems great, but generation doesn't match, have=9087, want=9096
Well block 4046414626816 seems great, but generation doesn't match, have=9088, want=9096
Well block 4148447113216 seems great, but generation doesn't match, have=7618, want=9096
Well block 4148522024960 seems great, but generation doesn't match, have=9089, want=9096
Well block 4148539457536 seems great, but generation doesn't match, have=9090, want=9096
Well block 4455562448896 seems great, but generation doesn't match, have=9092, want=9096
Well block 4455568302080 seems great, but generation doesn't match, have=9091, want=9096
Well block 4848395739136 seems great, but generation doesn't match, have=9093, want=9096
Well block 4923796594688 seems great, but generation doesn't match, have=9094, want=9096
Well block 4923798065152 seems great, but generation doesn't match, have=9095, want=9096
Found tree root at 5532762525696

On 06/04/2012 07:49 AM, Hugo Mills wrote:
> On Mon, Jun 04, 2012 at 07:43:40AM -0400, Maxim Mikheev wrote:
>> Hi Arne,
>>
>> Can you advise how I can recover the data?
> Can you post the complete output of find-root, please?
[... rest of quoted message snipped ...]
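Since the tool versions matter here (the distribution's v0.19 btrfsck above versus a git build that carries btrfs-find-root and btrfs-restore), rebuilding the userspace tools from git is straightforward. A sketch, with the repository path as it was commonly cited in 2012; adjust if it has moved, and note that some recovery helpers may be separate make targets depending on the tree's vintage:

git clone git://git.kernel.org/pub/scm/linux/kernel/git/mason/btrfs-progs.git
cd btrfs-progs
make                 # builds btrfs, btrfsck, mkfs.btrfs, ...
sudo make install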
On Mon, Jun 04, 2012 at 08:01:32AM -0400, Maxim Mikheev wrote:
> Thank you for helping.

I'm not sure I can be of much help, but there were a few things
missing from the earlier conversation that I wanted to check the
details of.

> ~$ uname -a
> Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
>
> I compiled the progs from recent git (a week or two ago). I can compile
> them again if there are updates.

No, that should be recent enough. I don't think there have been any
major updates since then.

> The output of btrfs-find-root is pretty long; see below:
> max@s0:~$ sudo btrfs-find-root /dev/sdb
> Super think's the tree root is at 5532762525696, chunk root 20979712
> Well block 619435147264 seems great, but generation doesn't match, have=8746, want=9096

This is not long enough, unfortunately. At least some of these should
have a list of trees before them. At the moment, it's not reporting any
trees at all. (At least, it should be doing this unless Chris took that
line of code out). Do you get anything extra from adding a few -v
options to the command?

I would suggest, in the absence of any better ideas, sorting this list
by the "have=" value and systematically working down from the largest
to the smallest, running btrfs-restore -t $n for each one (where $n is
the corresponding block number).

Hugo.

> [... rest of the quoted find-root output snipped; the full listing is above ...]
match, > have=8882, want=9096 > Well block 3065522581504 seems great, but generation doesn''t match, > have=8886, want=9096 > Well block 3103147339776 seems great, but generation doesn''t match, > have=8887, want=9096 > Well block 3103149096960 seems great, but generation doesn''t match, > have=8888, want=9096 > Well block 3103154950144 seems great, but generation doesn''t match, > have=8889, want=9096 > Well block 3103165263872 seems great, but generation doesn''t match, > have=8890, want=9096 > Well block 3103165509632 seems great, but generation doesn''t match, > have=8891, want=9096 > Well block 3103168155648 seems great, but generation doesn''t match, > have=8892, want=9096 > Well block 3103171592192 seems great, but generation doesn''t match, > have=8893, want=9096 > Well block 3103175024640 seems great, but generation doesn''t match, > have=8894, want=9096 > Well block 3103185793024 seems great, but generation doesn''t match, > have=8896, want=9096 > Well block 3103188922368 seems great, but generation doesn''t match, > have=8895, want=9096 > Well block 3103210176512 seems great, but generation doesn''t match, > have=8897, want=9096 > Well block 3103210971136 seems great, but generation doesn''t match, > have=8898, want=9096 > Well block 3103228485632 seems great, but generation doesn''t match, > have=8899, want=9096 > Well block 3103237844992 seems great, but generation doesn''t match, > have=8900, want=9096 > Well block 3103238144000 seems great, but generation doesn''t match, > have=8901, want=9096 > Well block 3103240544256 seems great, but generation doesn''t match, > have=8902, want=9096 > Well block 3103243677696 seems great, but generation doesn''t match, > have=8903, want=9096 > Well block 3103244447744 seems great, but generation doesn''t match, > have=8904, want=9096 > Well block 3103258718208 seems great, but generation doesn''t match, > have=8905, want=9096 > Well block 3103261290496 seems great, but generation doesn''t match, > have=8906, want=9096 > Well block 3103281463296 seems great, but generation doesn''t match, > have=8907, want=9096 > Well block 3103282315264 seems great, but generation doesn''t match, > have=8908, want=9096 > Well block 3103298441216 seems great, but generation doesn''t match, > have=8909, want=9096 > Well block 3103302393856 seems great, but generation doesn''t match, > have=6389, want=9096 > Well block 3103323017216 seems great, but generation doesn''t match, > have=8910, want=9096 > Well block 3103323447296 seems great, but generation doesn''t match, > have=8911, want=9096 > Well block 3103325896704 seems great, but generation doesn''t match, > have=8912, want=9096 > Well block 3103329468416 seems great, but generation doesn''t match, > have=8913, want=9096 > Well block 3103330926592 seems great, but generation doesn''t match, > have=8914, want=9096 > Well block 3103345201152 seems great, but generation doesn''t match, > have=8915, want=9096 > Well block 3103345958912 seems great, but generation doesn''t match, > have=8916, want=9096 > Well block 3103368241152 seems great, but generation doesn''t match, > have=8917, want=9096 > Well block 3103369076736 seems great, but generation doesn''t match, > have=8918, want=9096 > Well block 3103382474752 seems great, but generation doesn''t match, > have=8919, want=9096 > Well block 3103395471360 seems great, but generation doesn''t match, > have=8920, want=9096 > Well block 3103395753984 seems great, but generation doesn''t match, > have=8921, want=9096 > Well block 3103398383616 seems great, but 
generation doesn''t match, > have=8922, want=9096 > Well block 3103401709568 seems great, but generation doesn''t match, > have=8923, want=9096 > Well block 3103403487232 seems great, but generation doesn''t match, > have=8924, want=9096 > Well block 3103411101696 seems great, but generation doesn''t match, > have=8925, want=9096 > Well block 3216163819520 seems great, but generation doesn''t match, > have=6451, want=9096 > Well block 3216170438656 seems great, but generation doesn''t match, > have=6507, want=9096 > Well block 3216180826112 seems great, but generation doesn''t match, > have=6509, want=9096 > Well block 3216188633088 seems great, but generation doesn''t match, > have=8926, want=9096 > Well block 3216201068544 seems great, but generation doesn''t match, > have=8927, want=9096 > Well block 3216202895360 seems great, but generation doesn''t match, > have=8928, want=9096 > Well block 3216226570240 seems great, but generation doesn''t match, > have=8929, want=9096 > Well block 3216227221504 seems great, but generation doesn''t match, > have=8930, want=9096 > Well block 3216242212864 seems great, but generation doesn''t match, > have=8931, want=9096 > Well block 3216250953728 seems great, but generation doesn''t match, > have=8932, want=9096 > Well block 3216251314176 seems great, but generation doesn''t match, > have=8933, want=9096 > Well block 3216253743104 seems great, but generation doesn''t match, > have=8934, want=9096 > Well block 3216257495040 seems great, but generation doesn''t match, > have=8935, want=9096 > Well block 3216260059136 seems great, but generation doesn''t match, > have=8936, want=9096 > Well block 3216274374656 seems great, but generation doesn''t match, > have=8938, want=9096 > Well block 3216274612224 seems great, but generation doesn''t match, > have=8937, want=9096 > Well block 3216289120256 seems great, but generation doesn''t match, > have=6477, want=9096 > Well block 3216298061824 seems great, but generation doesn''t match, > have=6520, want=9096 > Well block 3216307097600 seems great, but generation doesn''t match, > have=6521, want=9096 > Well block 3216364236800 seems great, but generation doesn''t match, > have=8939, want=9096 > Well block 3216365404160 seems great, but generation doesn''t match, > have=8940, want=9096 > Well block 3216387854336 seems great, but generation doesn''t match, > have=8941, want=9096 > Well block 3216403558400 seems great, but generation doesn''t match, > have=8942, want=9096 > Well block 3216404262912 seems great, but generation doesn''t match, > have=8943, want=9096 > Well block 3216407031808 seems great, but generation doesn''t match, > have=8944, want=9096 > Well block 3216410210304 seems great, but generation doesn''t match, > have=8945, want=9096 > Well block 3216411242496 seems great, but generation doesn''t match, > have=8946, want=9096 > Well block 3259377135616 seems great, but generation doesn''t match, > have=8947, want=9096 > Well block 3259392462848 seems great, but generation doesn''t match, > have=8949, want=9096 > Well block 3259393642496 seems great, but generation doesn''t match, > have=8950, want=9096 > Well block 3259398754304 seems great, but generation doesn''t match, > have=8948, want=9096 > Well block 3259411787776 seems great, but generation doesn''t match, > have=8951, want=9096 > Well block 3259419025408 seems great, but generation doesn''t match, > have=6571, want=9096 > Well block 3259422646272 seems great, but generation doesn''t match, > have=8952, want=9096 > Well block 3259423043584 
seems great, but generation doesn''t match, > have=8953, want=9096 > Well block 3259430191104 seems great, but generation doesn''t match, > have=8954, want=9096 > Well block 3259436924928 seems great, but generation doesn''t match, > have=8955, want=9096 > Well block 3259439054848 seems great, but generation doesn''t match, > have=8956, want=9096 > Well block 3259453116416 seems great, but generation doesn''t match, > have=8958, want=9096 > Well block 3259454189568 seems great, but generation doesn''t match, > have=8957, want=9096 > Well block 3259479363584 seems great, but generation doesn''t match, > have=8959, want=9096 > Well block 3259480965120 seems great, but generation doesn''t match, > have=8960, want=9096 > Well block 3259514920960 seems great, but generation doesn''t match, > have=8961, want=9096 > Well block 3259539255296 seems great, but generation doesn''t match, > have=8962, want=9096 > Well block 3259540201472 seems great, but generation doesn''t match, > have=8963, want=9096 > Well block 3259547783168 seems great, but generation doesn''t match, > have=8964, want=9096 > Well block 3259555237888 seems great, but generation doesn''t match, > have=8965, want=9096 > Well block 3259556577280 seems great, but generation doesn''t match, > have=8966, want=9096 > Well block 3259586670592 seems great, but generation doesn''t match, > have=6553, want=9096 > Well block 3259622793216 seems great, but generation doesn''t match, > have=8967, want=9096 > Well block 3259631628288 seems great, but generation doesn''t match, > have=8968, want=9096 > Well block 3259633111040 seems great, but generation doesn''t match, > have=8969, want=9096 > Well block 3307982426112 seems great, but generation doesn''t match, > have=8971, want=9096 > Well block 3308003274752 seems great, but generation doesn''t match, > have=8972, want=9096 > Well block 3308008677376 seems great, but generation doesn''t match, > have=6881, want=9096 > Well block 3308015095808 seems great, but generation doesn''t match, > have=8975, want=9096 > Well block 3308031418368 seems great, but generation doesn''t match, > have=8977, want=9096 > Well block 3308041916416 seems great, but generation doesn''t match, > have=8979, want=9096 > Well block 3308047122432 seems great, but generation doesn''t match, > have=8978, want=9096 > Well block 3308071120896 seems great, but generation doesn''t match, > have=8980, want=9096 > Well block 3308072128512 seems great, but generation doesn''t match, > have=8981, want=9096 > Well block 3308085768192 seems great, but generation doesn''t match, > have=8982, want=9096 > Well block 3308094103552 seems great, but generation doesn''t match, > have=8983, want=9096 > Well block 3308094361600 seems great, but generation doesn''t match, > have=8984, want=9096 > Well block 3308096815104 seems great, but generation doesn''t match, > have=8985, want=9096 > Well block 3308100771840 seems great, but generation doesn''t match, > have=8986, want=9096 > Well block 3308102782976 seems great, but generation doesn''t match, > have=8987, want=9096 > Well block 3308120915968 seems great, but generation doesn''t match, > have=6936, want=9096 > Well block 3308123820032 seems great, but generation doesn''t match, > have=8988, want=9096 > Well block 3308124946432 seems great, but generation doesn''t match, > have=8989, want=9096 > Well block 3308149964800 seems great, but generation doesn''t match, > have=8990, want=9096 > Well block 3308151025664 seems great, but generation doesn''t match, > have=8991, want=9096 > Well 
block 3308163780608 seems great, but generation doesn''t match, > have=8992, want=9096 > Well block 3308167315456 seems great, but generation doesn''t match, > have=6961, want=9096 > Well block 3308169089024 seems great, but generation doesn''t match, > have=6962, want=9096 > Well block 3308169162752 seems great, but generation doesn''t match, > have=6963, want=9096 > Well block 3308170137600 seems great, but generation doesn''t match, > have=6964, want=9096 > Well block 3308173950976 seems great, but generation doesn''t match, > have=6965, want=9096 > Well block 3308174385152 seems great, but generation doesn''t match, > have=6966, want=9096 > Well block 3308176388096 seems great, but generation doesn''t match, > have=8993, want=9096 > Well block 3308176777216 seems great, but generation doesn''t match, > have=8994, want=9096 > Well block 3308179423232 seems great, but generation doesn''t match, > have=8995, want=9096 > Well block 3308183064576 seems great, but generation doesn''t match, > have=8996, want=9096 > Well block 3308184117248 seems great, but generation doesn''t match, > have=8997, want=9096 > Well block 3308192186368 seems great, but generation doesn''t match, > have=6974, want=9096 > Well block 3308195463168 seems great, but generation doesn''t match, > have=8998, want=9096 > Well block 3308196433920 seems great, but generation doesn''t match, > have=8999, want=9096 > Well block 3308217372672 seems great, but generation doesn''t match, > have=9000, want=9096 > Well block 3308218216448 seems great, but generation doesn''t match, > have=9001, want=9096 > Well block 3356598226944 seems great, but generation doesn''t match, > have=7009, want=9096 > Well block 3356689129472 seems great, but generation doesn''t match, > have=7012, want=9096 > Well block 3356693929984 seems great, but generation doesn''t match, > have=7013, want=9096 > Well block 3356699000832 seems great, but generation doesn''t match, > have=7015, want=9096 > Well block 3356700569600 seems great, but generation doesn''t match, > have=9002, want=9096 > Well block 3356708118528 seems great, but generation doesn''t match, > have=9003, want=9096 > Well block 3356712800256 seems great, but generation doesn''t match, > have=9005, want=9096 > Well block 3356714262528 seems great, but generation doesn''t match, > have=9004, want=9096 > Well block 3356742012928 seems great, but generation doesn''t match, > have=9009, want=9096 > Well block 3356807712768 seems great, but generation doesn''t match, > have=9010, want=9096 > Well block 3356810645504 seems great, but generation doesn''t match, > have=9011, want=9096 > Well block 3405159735296 seems great, but generation doesn''t match, > have=5263, want=9096 > Well block 3405196206080 seems great, but generation doesn''t match, > have=9012, want=9096 > Well block 3405245640704 seems great, but generation doesn''t match, > have=9015, want=9096 > Well block 3405265301504 seems great, but generation doesn''t match, > have=9016, want=9096 > Well block 3405270106112 seems great, but generation doesn''t match, > have=9017, want=9096 > Well block 3405289385984 seems great, but generation doesn''t match, > have=9019, want=9096 > Well block 3405383557120 seems great, but generation doesn''t match, > have=9020, want=9096 > Well block 3405384617984 seems great, but generation doesn''t match, > have=9021, want=9096 > Well block 3475205132288 seems great, but generation doesn''t match, > have=5278, want=9096 > Well block 3475206082560 seems great, but generation doesn''t match, > have=5279, 
want=9096 > Well block 3475230576640 seems great, but generation doesn''t match, > have=7123, want=9096 > Well block 3475232927744 seems great, but generation doesn''t match, > have=7125, want=9096 > Well block 3475235311616 seems great, but generation doesn''t match, > have=9023, want=9096 > Well block 3475237179392 seems great, but generation doesn''t match, > have=9025, want=9096 > Well block 3475240382464 seems great, but generation doesn''t match, > have=7128, want=9096 > Well block 3475266129920 seems great, but generation doesn''t match, > have=9028, want=9096 > Well block 3475279261696 seems great, but generation doesn''t match, > have=9027, want=9096 > Well block 3475337981952 seems great, but generation doesn''t match, > have=9032, want=9096 > Well block 3475352297472 seems great, but generation doesn''t match, > have=9031, want=9096 > Well block 3518444724224 seems great, but generation doesn''t match, > have=9038, want=9096 > Well block 3518445928448 seems great, but generation doesn''t match, > have=9035, want=9096 > Well block 3518446641152 seems great, but generation doesn''t match, > have=9033, want=9096 > Well block 3518460506112 seems great, but generation doesn''t match, > have=9042, want=9096 > Well block 3518469042176 seems great, but generation doesn''t match, > have=9041, want=9096 > Well block 3518602706944 seems great, but generation doesn''t match, > have=9048, want=9096 > Well block 3518610194432 seems great, but generation doesn''t match, > have=9044, want=9096 > Well block 3518614880256 seems great, but generation doesn''t match, > have=9049, want=9096 > Well block 3518619107328 seems great, but generation doesn''t match, > have=9050, want=9096 > Well block 3567122915328 seems great, but generation doesn''t match, > have=9054, want=9096 > Well block 3567130849280 seems great, but generation doesn''t match, > have=9055, want=9096 > Well block 3567159414784 seems great, but generation doesn''t match, > have=9056, want=9096 > Well block 3567164891136 seems great, but generation doesn''t match, > have=9059, want=9096 > Well block 3567209717760 seems great, but generation doesn''t match, > have=9061, want=9096 > Well block 3567210893312 seems great, but generation doesn''t match, > have=9062, want=9096 > Well block 3674660569088 seems great, but generation doesn''t match, > have=9063, want=9096 > Well block 3674713247744 seems great, but generation doesn''t match, > have=9065, want=9096 > Well block 3674727108608 seems great, but generation doesn''t match, > have=9066, want=9096 > Well block 3674785320960 seems great, but generation doesn''t match, > have=9067, want=9096 > Well block 3674788827136 seems great, but generation doesn''t match, > have=9069, want=9096 > Well block 3674792534016 seems great, but generation doesn''t match, > have=9068, want=9096 > Well block 3674808315904 seems great, but generation doesn''t match, > have=9071, want=9096 > Well block 3728604938240 seems great, but generation doesn''t match, > have=5297, want=9096 > Well block 3728635133952 seems great, but generation doesn''t match, > have=7598, want=9096 > Well block 3728682438656 seems great, but generation doesn''t match, > have=7599, want=9096 > Well block 3728770461696 seems great, but generation doesn''t match, > have=9074, want=9096 > Well block 3728819929088 seems great, but generation doesn''t match, > have=9073, want=9096 > Well block 3820340637696 seems great, but generation doesn''t match, > have=9075, want=9096 > Well block 3960145862656 seems great, but generation doesn''t 
match, > have=9076, want=9096 > Well block 4046161489920 seems great, but generation doesn''t match, > have=9077, want=9096 > Well block 4046213595136 seems great, but generation doesn''t match, > have=9079, want=9096 > Well block 4046217637888 seems great, but generation doesn''t match, > have=9081, want=9096 > Well block 4046217846784 seems great, but generation doesn''t match, > have=9080, want=9096 > Well block 4046252736512 seems great, but generation doesn''t match, > have=9083, want=9096 > Well block 4046301515776 seems great, but generation doesn''t match, > have=9085, want=9096 > Well block 4046302756864 seems great, but generation doesn''t match, > have=9084, want=9096 > Well block 4046358921216 seems great, but generation doesn''t match, > have=9086, want=9096 > Well block 4046409486336 seems great, but generation doesn''t match, > have=9087, want=9096 > Well block 4046414626816 seems great, but generation doesn''t match, > have=9088, want=9096 > Well block 4148447113216 seems great, but generation doesn''t match, > have=7618, want=9096 > Well block 4148522024960 seems great, but generation doesn''t match, > have=9089, want=9096 > Well block 4148539457536 seems great, but generation doesn''t match, > have=9090, want=9096 > Well block 4455562448896 seems great, but generation doesn''t match, > have=9092, want=9096 > Well block 4455568302080 seems great, but generation doesn''t match, > have=9091, want=9096 > Well block 4848395739136 seems great, but generation doesn''t match, > have=9093, want=9096 > Well block 4923796594688 seems great, but generation doesn''t match, > have=9094, want=9096 > Well block 4923798065152 seems great, but generation doesn''t match, > have=9095, want=9096 > Found tree root at 5532762525696 > > > On 06/04/2012 07:49 AM, Hugo Mills wrote: > >On Mon, Jun 04, 2012 at 07:43:40AM -0400, Maxim Mikheev wrote: > >>Hi Arne, > >> > >>Can you advice how can I recover data? > >>I tried almost everything what I found on https://btrfs.wiki.kernel.org > >> > >>/btrfs-restore restored some files but it is not what was stored. > > Can you post the complete output of find-root please? > > > >>I have seen this command > >> > >>-------------------------------------------------- > >>In case of a corrupted superblock, start by asking btrfsck to use an > >>alternate copy of the superblock instead of the superblock #0. This > >>is achieved via the -s option followed by the number of the > >>alternate copy you wish to use. In the following example we ask for > >>using the superblock copy #2 of /dev/sda7: > >> > >># ./btrfsck -s 2 /dev/sd7 > >> > >>----------------------------------------- > >>but it gave me: > >>$ sudo btrfsck -s 2 /dev/sdb > >>btrfsck: invalid option -- ''s'' > >>usage: btrfsck dev > >>Btrfs Btrfs v0.19 > > What exact version of the package do you have? Did you compile from > >a recent git, or do you have a distribution -progs package installed? > >If the latter, what date does it have in the version number? > > > > Hugo. > >-- === Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk == PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk --- __(_''> Squeak! ---
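[As a rough illustration of the sort-and-retry loop Hugo suggests above -- a
sketch only, not a command from the thread. It assumes the find-root output
has been saved to a file in the format shown above; the file names and the
/mnt/rescue destination are placeholders.]

    # Collect the candidate list, then sort it by generation, newest first.
    sudo btrfs-find-root /dev/sdb > found-roots.txt
    grep '^Well block' found-roots.txt \
        | sed 's/^Well block \([0-9]*\).*have=\([0-9]*\),.*/\2 \1/' \
        | sort -rn > candidates.txt

    # Try btrfs-restore against each candidate root in turn, restoring into
    # a separate directory per generation so the runs can be compared.
    # The </dev/null keeps btrfs-restore from eating the loop's input.
    while read gen block; do
        echo "=== trying root block $block (generation $gen) ==="
        mkdir -p "/mnt/rescue/gen-$gen"
        sudo btrfs-restore -v -t "$block" /dev/sdb "/mnt/rescue/gen-$gen" </dev/null
    done < candidates.txt

[With ~5TB on the filesystem, each run would in practice be stopped and
inspected by hand rather than left to restore everything; the point is only
to try the newest generations first.]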
What have you done? Why do you need to recover data? What happened? A
power failure? A kernel crash?

On Tue, 29 May 2012 18:14:53 -0400, Maxim Mikheev wrote:
> I recently decided to use btrfs. It works perfectly for a week even
> under heavy load. Yesterday I destroyed backups as cannot afford to have
> ~10TB in backups. I decided to switch on Btrfs because it was announced
> that it stable already
> I need to recover ~5TB data, this data is important and I do not have
> backups....
It was a kernel panic from btrfs. I had around 40 parallel processes
reading and writing.

On 06/04/2012 08:24 AM, Stefan Behrens wrote:
> What have you done? Why do you need to recover data? What happened? A
> power failure? A kernel crash?
>
> On Tue, 29 May 2012 18:14:53 -0400, Maxim Mikheev wrote:
>> I recently decided to use btrfs. It works perfectly for a week even
>> under heavy load. Yesterday I destroyed backups as cannot afford to have
>> ~10TB in backups. I decided to switch on Btrfs because it was announced
>> that it stable already
>> I need to recover ~5TB data, this data is important and I do not have
>> backups....
Adding -v (for example: sudo btrfs-find-root -v -v -v -v -v /dev/sdb)
didn't change the output at all.

On 06/04/2012 08:11 AM, Hugo Mills wrote:
> On Mon, Jun 04, 2012 at 08:01:32AM -0400, Maxim Mikheev wrote:
>> Thank you for helping.
> I'm not sure I can be of much help, but there were a few things
> missing from the earlier conversation that I wanted to check the
> details of.
>
>> ~$ uname -a
>> Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02
>> UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
>>
>> I compiled progs from recent git (a week or two ago). I can compile it
>> again if there are updates.
> No, that should be recent enough. I don't think there have been any
> major updates since then.
>
>> The output of btrfs-find-root is pretty long and is below:
>> max@s0:~$ sudo btrfs-find-root /dev/sdb
>> Super think's the tree root is at 5532762525696, chunk root 20979712
>> Well block 619435147264 seems great, but generation doesn't match, have=8746, want=9096
> This is not long enough, unfortunately. At least some of these
> should have a list of trees before them. At the moment, it's not
> reporting any trees at all. (At least, it should be doing this unless
> Chris took that line of code out.) Do you get anything extra from
> adding a few -v options to the command?
>
> I would suggest, in the absence of any better ideas, sorting this
> list by the "have=" value and systematically working down from the
> largest to the smallest, running btrfs-restore -t $n for each one
> (where $n is the corresponding block number).
>
> Hugo.
>
>> [remainder of quoted message snipped; it appears in full above]
In the following example we ask for >>>> using the superblock copy #2 of /dev/sda7: >>>> >>>> # ./btrfsck -s 2 /dev/sd7 >>>> >>>> ----------------------------------------- >>>> but it gave me: >>>> $ sudo btrfsck -s 2 /dev/sdb >>>> btrfsck: invalid option -- ''s'' >>>> usage: btrfsck dev >>>> Btrfs Btrfs v0.19 >>> What exact version of the package do you have? Did you compile from >>> a recent git, or do you have a distribution -progs package installed? >>> If the latter, what date does it have in the version number? >>> >>> Hugo. >>>-- To unsubscribe from this list: send the line "unsubscribe linux-btrfs" in the body of a message to majordomo@vger.kernel.org More majordomo info at http://vger.kernel.org/majordomo-info.html
By the way, if the data can be recovered, I can easily reproduce the crash situation. So it could serve as a real-life heavy-load test....

On 06/04/2012 08:24 AM, Stefan Behrens wrote:
> What have you done? Why do you need to recover data? What happened? A
> power failure? A kernel crash?
>
> On Tue, 29 May 2012 18:14:53 -0400, Maxim Mikheev wrote:
[snip]
[trimmed Arne & Jan from cc by request]

On Mon, Jun 04, 2012 at 08:28:22AM -0400, Maxim Mikheev wrote:
> adding -v, as an example:
> sudo btrfs-find-root -v -v -v -v -v /dev/sdb
>
> didn't change output at all.

   OK, then all I can suggest is what I said below -- work through the
potential tree roots in order from largest generation id to smallest.
Given that it's not reporting any trees, though, I'm not certain that
you'll get any success with it.

   Did you have your data in a subvolume?

   Hugo.

> On 06/04/2012 08:11 AM, Hugo Mills wrote:
> > On Mon, Jun 04, 2012 at 08:01:32AM -0400, Maxim Mikheev wrote:
> >> Thank you for helping.
> >    I'm not sure I can be of much help, but there were a few things
> > missing from the earlier conversation that I wanted to check the
> > details of.
> >
> >> ~$ uname -a
> >> Linux s0 3.4.0-030400-generic #201205210521 SMP Mon May 21 09:22:02
> >> UTC 2012 x86_64 x86_64 x86_64 GNU/Linux
> >>
> >> I compiled progs from recent git (week or two ago). I can compile it
> >> again if there updates.
> >    No, that should be recent enough. I don't think there have been any
> > major updates since then.
> >
> >> The output of btrfs-find-root is pretty long and below:
> >> max@s0:~$ sudo btrfs-find-root /dev/sdb
> >> Super think's the tree root is at 5532762525696, chunk root 20979712
> >> Well block 619435147264 seems great, but generation doesn't match,
> >> have=8746, want=9096
> >    This is not long enough, unfortunately. At least some of these
> > should have a list of trees before them. At the moment, it's not
> > reporting any trees at all. (At least, it should be doing this unless
> > Chris took that line of code out). Do you get anything extra from
> > adding a few -v options to the command?
> >
> >    I would suggest, in the absence of any better ideas, sorting this
> > list by the "have=" value, and systematically working down from the
> > largest to the smallest, running btrfs-restore -t $n for each one
> > (where $n is corresponding block number).
> >
> >    Hugo.
[snip]

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
                       --- __(_'> Squeak! ---
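[For reference, Hugo's suggested loop could look something like the
following untested sketch. It assumes the full btrfs-find-root output has
been saved unquoted to a file named find-root.log (an illustrative name,
not from the thread), and gives each candidate root its own scratch
directory so the runs don't overwrite each other:

    # Extract (generation, block) pairs, sort newest generation first,
    # and try btrfs-restore on each candidate tree root in turn.
    grep '^Well block' find-root.log \
      | sed 's/.*block \([0-9]*\).*have=\([0-9]*\).*/\2 \1/' \
      | sort -rn \
      | while read gen block; do
          mkdir -p "recovered/$block"
          sudo btrfs-restore -t "$block" /dev/sdb "recovered/$block"
        done

A candidate whose tree is largely intact should fill its directory with
far more than a handful of empty files; candidates that fail immediately
can simply be skipped.]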
I used only one volume. I will work through your suggestion. Are there
any other options?

On 06/04/2012 08:34 AM, Hugo Mills wrote:
> [trimmed Arne & Jan from cc by request]
>
> On Mon, Jun 04, 2012 at 08:28:22AM -0400, Maxim Mikheev wrote:
>> adding -v, as an example:
>> sudo btrfs-find-root -v -v -v -v -v /dev/sdb
>>
>> didn't change output at all.
> OK, then all I can suggest is what I said below -- work through the
> potential tree roots in order from largest generation id to smallest.
> Given that it's not reporting any trees, though, I'm not certain that
> you'll get any success with it.
>
> Did you have your data in a subvolume?
>
> Hugo.
[snip]
On Mon, 04 Jun 2012 08:26:43 -0400, Maxim Mikheev wrote:
> It was a kernel panic from btrfs.
> I had around 40 parallel processes of reading/writing.

Do you have a stack trace for this kernel panic, something with the term
"BUG", "WARNING" and/or "Call Trace" in /var/log/kern.log or
/var/log/syslog (or in the old /var/log/syslog.?.gz
/var/log/kern.log.?.gz)?

And are the disks connected via USB or how?

Is there an MD, LVM or encryption layer below btrfs in your setup?

Was the filesystem almost full?
On Mon, Jun 04, 2012 at 07:43:40AM -0400, Maxim Mikheev wrote:
> alternate copy you wish to use. In the following example we ask for
> using the superblock copy #2 of /dev/sda7:
>
> # ./btrfsck -s 2 /dev/sd7
>
> -----------------------------------------
> but it gave me:
> $ sudo btrfsck -s 2 /dev/sdb
> btrfsck: invalid option -- 's'
> usage: btrfsck dev
> Btrfs Btrfs v0.19
>
> What can I do more?

I have found that due to an error in getopt_long() usage, btrfsck does
not accept the short-form option. Use --super instead.

-- 
Ryan C. Underwood, <nemesis@icequake.net>
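[In other words, the wiki's example would become something like the
sketch below; whether the copy number is attached with a space or with
'=' should not matter for a getopt_long long option:

    sudo btrfsck --super 2 /dev/sdb
]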
After looking at kern.log, it appears I had a RAID-card failure, and data
was not stored properly on one of the disks (/dev/sde). Btrfs did not
recognize the disk failure and kept trying to write data until the
reboot.

Some other tests after the reboot show that /dev/sde has generation 9095
and the other 4 disks have 9096.

This case shows that btrfs does not recognize and does not handle such
errors. That handling probably needs to be added, as RAID cards can
fail.... But more importantly, btrfs cannot recover automatically when
one disk has lost some data.

The next question is: how do I roll back to generation 9095?

On 06/04/2012 10:08 AM, Maxim Mikheev wrote:
> Disks were connected to RocketRaid 2760 directly as JBOD.
>
> There is no LVM, MD or encryption. I used plain disks directly.
>
> The file system was 55% full (1.7TB from 3TB for each disk).
>
> Logs are attached.
> The error happens at May 29, 13:55.
>
> Log contain errors on May 27 for ZFS, It is why I decided to switch to
> btrfs. On the moment of failure, no ZFS was installed in the system.
>
> On 06/04/2012 09:03 AM, Stefan Behrens wrote:
[snip]
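[One low-level way to confirm which generation each disk's superblock
carries, without any btrfs tooling, is to read the field directly. This
is a read-only sketch and assumes the standard btrfs on-disk layout: the
primary superblock at 64 KiB, with an 8-byte little-endian generation
counter 72 bytes into it (read in host byte order here, which matches on
x86_64):

    # Print the primary superblock generation of each member device.
    for d in /dev/sda /dev/sdb /dev/sdd /dev/sde /dev/sdf; do
        gen=$(dd if="$d" bs=1 skip=$((65536 + 72)) count=8 2>/dev/null \
              | od -An -tu8 | tr -d ' ')
        echo "$d: superblock generation $gen"
    done

On this array it should report 9096 everywhere except the lagging disk,
which should show 9095.]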
On Mon, 04 Jun 2012 10:08:54 -0400, Maxim Mikheev wrote:
> Disks were connected to RocketRaid 2760 directly as JBOD.
>
> There is no LVM, MD or encryption. I used plain disks directly.
>
> The file system was 55% full (1.7TB from 3TB for each disk).
>
> Logs are attached.
> The error happens at May 29, 13:55.
>
> Log contain errors on May 27 for ZFS, It is why I decided to switch to
> btrfs. On the moment of failure, no ZFS was installed in the system.

According to the kern.1.log file that you have sent (which is not
visible on the mailing list because it exceeded the 100,000 chars limit
of vger.kernel.org), a rebalance operation was active when the disks or
the RAID controller started to cause IO errors.

There seems to be a bug! Like that a write failure is ignored in btrfs.
For instance, the result of barrier_all_devices() is ignored. Afterwards
the superblocks are written referencing trees which have not been
completely written to disk.

...
May 29 13:08:07 s0 kernel: [46017.194519] btrfs: relocating block group 7236780818432 flags 9
May 29 13:08:36 s0 kernel: [46046.149492] btrfs: found 18543 extents
May 29 13:09:03 s0 kernel: [46072.944773] btrfs: found 18543 extents
May 29 13:09:04 s0 kernel: [46074.317760] btrfs: relocating block group 7235707076608 flags 20
...
May 29 13:55:56 s0 kernel: [48882.551881] /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1858:port 6 slot 1 rx_desc 30001 has error info8000000080000000.
May 29 13:55:56 s0 kernel: [48882.551918] /home/apw/COD/linux/drivers/scsi/mvsas/mv_94xx.c 626:command active FFFFFCFD, slot [1].
May 29 13:55:56 s0 kernel: [48882.552084] btrfs csum failed ino 62276 off 1019039744 csum 1546305812 private 3211821089
May 29 13:55:56 s0 kernel: [48882.552241] btrfs csum failed ino 62276 off 1018056704 csum 3750159096 private 3390793248
...
May 29 13:55:56 s0 kernel: [48882.553791] btrfs csum failed ino 62276 off 1018712064 csum 872056089 private 2640477920
May 29 13:55:56 s0 kernel: [48882.554528] /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1858:port 6 slot 1 rx_desc 30001 has error info0000000000010000.
May 29 13:55:56 s0 kernel: [48882.554541] /home/apw/COD/linux/drivers/scsi/mvsas/mv_94xx.c 626:command active FF3FFEFD, slot [1].
May 29 13:55:56 s0 kernel: [48882.555626] /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1858:port 6 slot 22 rx_desc 30016 has error info0000000001000000.
May 29 13:55:56 s0 kernel: [48882.555635] /home/apw/COD/linux/drivers/scsi/mvsas/mv_94xx.c 626:command active FF3FFEFB, slot [16].
May 29 13:55:56 s0 kernel: [48882.555659] sd 8:0:3:0: [sde] command ffff880006c57800 timed out
May 29 13:56:00 s0 kernel: [48886.313989] sd 8:0:3:0: [sde] command ffff88117af65700 timed out
...
May 29 13:56:00 s0 kernel: [48886.314186] sas: Enter sas_scsi_recover_host busy: 31 failed: 31
May 29 13:56:00 s0 kernel: [48886.314204] sas: trying to find task 0xffff881083807640
May 29 13:56:00 s0 kernel: [48886.314210] sas: sas_scsi_find_task: aborting task 0xffff881083807640
May 29 13:56:00 s0 kernel: [48886.314220] /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1632:mvs_abort_task() mvi=ffff8837faa80000 task=ffff881083807640 slot=ffff8837faaa5140 slot_idx=x3
May 29 13:56:00 s0 kernel: [48886.314231] sas: sas_scsi_find_task: task 0xffff881083807640 is aborted
May 29 13:56:00 s0 kernel: [48886.314236] sas: sas_eh_handle_sas_errors: task 0xffff881083807640 is aborted
...
May 29 13:56:00 s0 kernel: [48886.315030] sas: ata10: end_device-8:3: cmd error handler
May 29 13:56:00 s0 kernel: [48886.315108] sas: ata7: end_device-8:0: dev error handler
May 29 13:56:00 s0 kernel: [48886.315138] sas: ata8: end_device-8:1: dev error handler
May 29 13:56:00 s0 kernel: [48886.315168] sas: ata9: end_device-8:2: dev error handler
May 29 13:56:00 s0 kernel: [48886.315193] sas: ata10: end_device-8:3: dev error handler
May 29 13:56:00 s0 kernel: [48886.315219] ata10.00: exception Emask 0x1 SAct 0x7fffffff SErr 0x0 action 0x6 frozen
May 29 13:56:00 s0 kernel: [48886.315239] ata10.00: failed command: WRITE FPDMA QUEUED
May 29 13:56:00 s0 kernel: [48886.315255] ata10.00: cmd 61/08:00:88:a0:98/00:00:7c:00:00/40 tag 0 ncq 4096 out
May 29 13:56:00 s0 kernel: [48886.315258]          res 41/54:08:68:d6:98/00:00:7c:00:00/40 Emask 0x8d (timeout)
May 29 13:56:00 s0 kernel: [48886.315278] ata10.00: status: { DRDY ERR }
May 29 13:56:00 s0 kernel: [48886.315286] ata10.00: error: { UNC IDNF ABRT }
...
May 29 13:56:54 s0 kernel: [48940.752647] btrfs: run_one_delayed_ref returned -5
May 29 13:56:54 s0 kernel: [48940.752652] btrfs: run_one_delayed_ref returned -5
May 29 13:56:54 s0 kernel: [48940.752656] 99 28
May 29 13:56:54 s0 kernel: [48940.752665] ------------[ cut here ]------------
May 29 13:56:54 s0 kernel: [48940.752669] ------------[ cut here ]------------
May 29 13:56:54 s0 kernel: [48940.752674] c2 00
May 29 13:56:54 s0 kernel: [48940.752683] ------------[ cut here ]------------
May 29 13:56:54 s0 kernel: [48940.752747] WARNING: at /home/apw/COD/linux/fs/btrfs/super.c:219 __btrfs_abort_transaction+0xae/0xc0 [btrfs]()
May 29 13:56:54 s0 kernel: [48940.752760] 30
May 29 13:56:54 s0 kernel: [48940.752825] WARNING: at /home/apw/COD/linux/fs/btrfs/super.c:219 __btrfs_abort_transaction+0xae/0xc0 [btrfs]()
May 29 13:56:54 s0 kernel: [48940.752832] 45
May 29 13:56:54 s0 kernel: [48940.752862] WARNING: at /home/apw/COD/linux/fs/btrfs/super.c:219 __btrfs_abort_transaction+0xae/0xc0 [btrfs]()
May 29 13:56:54 s0 kernel: [48940.752871] 00
May 29 13:56:54 s0 kernel: [48940.752876] Hardware name: H8QG6
May 29 13:56:54 s0 kernel: [48940.752880] bf
May 29 13:56:54 s0 kernel: [48940.752884] Hardware name: H8QG6
May 29 13:56:54 s0 kernel: [48940.752892] 00
May 29 13:56:54 s0 kernel: [48940.752896] btrfs: Transaction aborted 44
May 29 13:56:54 s0 kernel: [48940.752902] btrfs: Transaction aborted
...
May 29 13:56:54 s0 kernel: [48940.754032] [<ffffffffa00db45e>] __btrfs_abort_transaction+0xae/0xc0 [btrfs]
...
May 29 13:56:54 s0 kernel: [48940.756438] BTRFS error (device sdg) in __btrfs_free_extent:5134: IO failure
May 29 13:56:54 s0 kernel: [48940.756455] btrfs: run_one_delayed_ref returned -5
May 29 13:56:54 s0 kernel: [48940.756462] BTRFS error (device sdg) in btrfs_run_delayed_refs:2454: IO failure
May 29 13:56:55 s0 kernel: [48940.997869] BUG: unable to handle kernel paging request at ffffffffffffff99
May 29 13:56:55 s0 kernel: [48940.997904] IP: [<ffffffffa012305c>] btrfs_dec_test_ordered_pending+0xdc/0x220 [btrfs]
May 29 13:56:55 s0 kernel: [48940.998631] Call Trace:
May 29 13:56:55 s0 kernel: [48940.998682] [<ffffffffa010e838>] btrfs_finish_ordered_io+0x58/0x3c0 [btrfs]
May 29 13:56:55 s0 kernel: [48940.998714] [<ffffffff8103ff59>] ? default_spin_lock_flags+0x9/0x10
May 29 13:56:55 s0 kernel: [48940.998739] [<ffffffff8166c7bf>] ? _raw_spin_lock_irqsave+0x2f/0x40
May 29 13:56:55 s0 kernel: [48940.998796] [<ffffffffa010ebf1>] btrfs_writepage_end_io_hook+0x51/0xa0 [btrfs]
May 29 13:56:55 s0 kernel: [48940.998860] [<ffffffffa0127b39>] end_extent_writepage+0x69/0x100 [btrfs]
May 29 13:56:55 s0 kernel: [48940.998919] [<ffffffffa0127c36>] end_bio_extent_writepage+0x66/0xa0 [btrfs]
May 29 13:56:55 s0 kernel: [48940.998949] [<ffffffff811b80fd>] bio_endio+0x1d/0x40
May 29 13:56:55 s0 kernel: [48940.999009] [<ffffffffa00fbe45>] end_workqueue_fn+0x45/0x50 [btrfs]
May 29 13:56:55 s0 kernel: [48940.999058] [<ffffffffa013433c>] worker_loop+0x16c/0x510 [btrfs]
Can I roll back to 9095, since all of the disks have 9095?

How can I send this file to the mailing list?

On 06/04/2012 11:02 AM, Stefan Behrens wrote:
> On Mon, 04 Jun 2012 10:08:54 -0400, Maxim Mikheev wrote:
>> Disks were connected to RocketRaid 2760 directly as JBOD.
[snip]
> According to the kern.1.log file that you have sent (which is not
> visible on the mailing list because it exceeded the 100,000 chars limit
> of vger.kernel.org), a rebalance operation was active when the disks or
> the RAID controller started to cause IO errors.
>
> There seems to be a bug! Like that a write failure is ignored in btrfs.
> For instance, the result of barrier_all_devices() is ignored. Afterwards
> the superblocks are written referencing trees which have not been
> completely written to disk.
[snip]
On Mon, 04 Jun 2012 11:08:36 -0400, Maxim Mikheev wrote:
> How can I send this file to the mailing list?

Using web space, e.g. http://pastebin.com/
pastebin.com has a 500K limit, so I put the file here:
http://www.4shared.com/archive/I8cU3K43/kernlog1.html?

On 06/04/2012 11:11 AM, Stefan Behrens wrote:
> On Mon, 04 Jun 2012 11:08:36 -0400, Maxim Mikheev wrote:
>> How can I send this file to the mailing list?
> Using web space, e.g. http://pastebin.com/
I ran through all of the potential tree roots. Every time it gave
messages like these:

parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
parent transid verify failed on 3405159735296 wanted 9096 found 5263
Ignoring transid failure
parent transid verify failed on 3356745109504 wanted 5263 found 9008
parent transid verify failed on 3356745109504 wanted 5263 found 9008
parent transid verify failed on 3356745109504 wanted 5263 found 9008
parent transid verify failed on 3356745109504 wanted 5263 found 9008
Ignoring transid failure
parent transid verify failed on 3356744548352 wanted 5262 found 9008
parent transid verify failed on 3356744548352 wanted 5262 found 9008
parent transid verify failed on 3356744548352 wanted 5262 found 9008
parent transid verify failed on 3356744548352 wanted 5262 found 9008
Ignoring transid failure
parent transid verify failed on 3356745035776 wanted 5263 found 9008
parent transid verify failed on 3356745035776 wanted 5263 found 9008
parent transid verify failed on 3356745035776 wanted 5263 found 9008
parent transid verify failed on 3356745035776 wanted 5263 found 9008
Ignoring transid failure
parent transid verify failed on 3356745015296 wanted 5263 found 9008
parent transid verify failed on 3356745015296 wanted 5263 found 9008
parent transid verify failed on 3356745015296 wanted 5263 found 9008
parent transid verify failed on 3356745015296 wanted 5263 found 9008
Ignoring transid failure
Root objectid is 5

The largest recovered data is 12KB:

max@s0:~/btrfs-recovering./recovered$ ls -lahs 3728819929088
total 28K
4.0K drwxr-xr-x   3 root root 4.0K Jun  4 12:06 .
 20K drwxrwxr-x 347 max  max   20K Jun  4 12:18 ..
4.0K drwxr-xr-x   3 root root 4.0K Jun  4 12:06 Irina
max@s0:~/btrfs-recovering./recovered$ ls -lahs 3728819929088/Irina/
total 12K
4.0K drwxr-xr-x 3 root root 4.0K Jun  4 12:06 .
4.0K drwxr-xr-x 3 root root 4.0K Jun  4 12:06 ..
4.0K drwxr-xr-x 2 root root 4.0K Jun  4 12:06 .idmapdir2
max@s0:~/btrfs-recovering./recovered$ ls -lahs 3728819929088/Irina/.idmapdir2/
total 8.0K
4.0K drwxr-xr-x 2 root root 4.0K Jun  4 12:06 .
4.0K drwxr-xr-x 3 root root 4.0K Jun  4 12:06 ..
   0 -rw-r--r-- 1 root root    0 Jun  4 12:06 4.bucket.lock
   0 -rw-r--r-- 1 root root    0 Jun  4 12:06 7.bucket
max@s0:~/btrfs-recovering./recovered$

What can I do next?

On 06/04/2012 08:34 AM, Hugo Mills wrote:
> [trimmed Arne & Jan from cc by request]
>
> On Mon, Jun 04, 2012 at 08:28:22AM -0400, Maxim Mikheev wrote:
[snip]
--super works, but my root tree 2 has many errors too. What can I do
next?

Thanks

On 06/04/2012 10:54 AM, Ryan C. Underwood wrote:
> On Mon, Jun 04, 2012 at 07:43:40AM -0400, Maxim Mikheev wrote:
[snip]
> I have found that due to an error in getopt_long() usage, btrfsck does
> not accept the short-form option. Use --super instead.
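[If one of the backup superblocks had actually been healthier than the
primary, the btrfs-progs source tree also carries a btrfs-select-super
tool that copies a chosen backup over the primary. This is speculative
and destructive, so it belongs on a dd image of the disk, never the
original device, and the tool may need to be built explicitly:

    # Build the extra tool from the btrfs-progs source, then overwrite
    # the primary superblock with backup copy #1 -- on an image file,
    # not the real disk:
    make btrfs-select-super
    sudo ./btrfs-select-super -s 1 /path/to/sdb.img

Given that the damage here is in the trees rather than in the superblock
itself, this probably would not have helped in Maxim's case.]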
On Mon, Jun 04, 2012 at 12:24:05PM -0400, Maxim Mikheev wrote:
> I ran through all of the potential tree roots. Every time it gave
> messages like these:
>
> parent transid verify failed on 3405159735296 wanted 9096 found 5263
[snip]
> The largest recovered data is 12KB:
[snip]
> What can I do next?

   I'm out of ideas.

   At this point, though, you're probably looking at somebody writing
custom code to scan the FS and attempt to find and retrieve anything
that's recoverable.

   You might try writing a tool to scan all the disks for useful
fragments of old trees, and see if you can find some of the tree roots
independently of the tree of tree roots (which clearly isn't
particularly functional right now). You might try simply scanning the
disks looking for your lost data, and try to reconstruct as much of it
as you can from that. You could try to find a company specialising in
data recovery and pay them to try to get your data back. Or you might
just have to accept that the data's gone and work on reconstructing it.

   Hugo.

[snip]

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
           --- Quantum est ille canis in fenestra? ---
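[A first cut at the scanning tool Hugo describes can even be sketched in
shell, although it is far too slow for a 5TB array in practice; a real
version would be C or Python reading large chunks. It relies on the
btrfs tree-block header layout -- the fsid 32 bytes into each metadata
block, the generation at byte offset 80, and the owning tree id at byte
offset 88 -- and the 4 KiB step assumes the default metadata block size
of that era:

    #!/bin/bash
    # Toy scan: walk a device in 4 KiB steps and report every block
    # whose header carries this filesystem's fsid.
    DEV=/dev/sdb
    FSID_HEX="c9776e1937eb4f9cbd6b04e8dde97682"   # the fsid, dashes removed
    SIZE=$(blockdev --getsize64 "$DEV")
    for ((off = 0; off < SIZE; off += 4096)); do
        hex=$(dd if="$DEV" bs=1 skip=$((off + 32)) count=16 2>/dev/null | xxd -p)
        if [ "$hex" = "$FSID_HEX" ]; then
            gen=$(dd if="$DEV" bs=1 skip=$((off + 80)) count=8 2>/dev/null \
                  | od -An -tu8 | tr -d ' ')
            owner=$(dd if="$DEV" bs=1 skip=$((off + 88)) count=8 2>/dev/null \
                  | od -An -tu8 | tr -d ' ')
            echo "metadata block at byte $off: generation $gen, tree $owner"
        fi
    done

Blocks owned by tree 1 belong to the root tree; any of those with a
generation near 9095 would be candidates for btrfs-restore -t, bearing
in mind that -t wants the block's logical address, which on a
multi-device filesystem differs from the physical offset reported here.]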
On Mon, Jun 04, 2012 at 06:04:22PM +0100, Hugo Mills wrote:
> I'm out of ideas.

   ... but that's not to say that someone else may have some ideas. I
wouldn't get your hopes up too much, though.

[snip]

   Hugo.

-- 
=== Hugo Mills: hugo@... carfax.org.uk | darksatanic.net | lug.org.uk ===
  PGP key: 515C238D from wwwkeys.eu.pgp.net or http://www.carfax.org.uk
   --- A linked list is still a binary tree. Just a very unbalanced ---
                           one. -- dragon
Is there any chance to fix it and recover the data after such a failure?

On 06/04/2012 11:02 AM, Stefan Behrens wrote:
> On Mon, 04 Jun 2012 10:08:54 -0400, Maxim Mikheev wrote:
>> Disks were connected to RocketRaid 2760 directly as JBOD.
[snip]
> There seems to be a bug! Like that a write failure is ignored in btrfs.
> For instance, the result of barrier_all_devices() is ignored. Afterwards
> the superblocks are written referencing trees which have not been
> completely written to disk.
[snip]
If he has it in a RAID 1, could he manually fail the bad disk and try it
from there? Obviously this could be harmful, so a dd copy would be a
VERY good idea (truthfully, that should have been the first thing that
was done).

   Michael

On Mon, Jun 4, 2012 at 12:09 PM, Hugo Mills <hugo@carfax.org.uk> wrote:
> On Mon, Jun 04, 2012 at 06:04:22PM +0100, Hugo Mills wrote:
>> I'm out of ideas.
>
> ... but that's not to say that someone else may have some ideas. I
> wouldn't get your hopes up too much, though.
[snip]
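[Michael's dd suggestion would look something like the line below; the
destination path is only an example, and it needs at least as much free
space as the source disk:

    # Image the suspect disk, continuing past read errors and padding
    # unreadable sectors with zeros so offsets stay aligned.
    sudo dd if=/dev/sde of=/mnt/backup/sde.img bs=1M conv=noerror,sync

All further recovery experiments can then be pointed at the image
instead of the original disk.]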
It was a RAID0, unfortunately.

On 06/04/2012 02:02 PM, Michael wrote:
> If he has it in a RAID 1, could he manually fail the bad disk and try
> it from there? Obviously this could be harmful, so a dd copy would be
> a VERY good idea (truthfully, that should have been the first thing
> that was done).
[snip]
On 06/04/2012 19:35, Maxim Mikheev wrote:
> Is any chance to fix it and recover data after such failure?
>
> On 06/04/2012 11:02 AM, Stefan Behrens wrote:
>> On Mon, 04 Jun 2012 10:08:54 -0400, Maxim Mikheev wrote:
>>> Disks were connected to RocketRaid 2760 directly as JBOD.
>>>
>>> There is no LVM, MD or encryption. I used plain disks directly.
>>>
>>> The file system was 55% full (1.7TB from 3TB for each disk).
>>>
>>> Logs are attached.
>>> The error happens at May 29, 13:55.
>>>
>>> Log contain errors on May 27 for ZFS, It is why I decided to switch to
>>> btrfs. On the moment of failure, no ZFS was installed in the system.
>>
>> According to the kern.1.log file that you have sent (which is not
>> visible on the mailing list because it exceeded the 100,000 chars limit
>> of vger.kernel.org), a rebalance operation was active when the disks or
>> the RAID controller started to cause IO errors.
>>
>> There seems to be a bug! Like that a write failure is ignored in btrfs.
>> For instance, the result of barrier_all_devices() is ignored. Afterwards
>> the superblocks are written referencing trees which have not been
>> completely written to disk.
>>
>> ...
>> May 29 13:08:07 s0 kernel: [46017.194519] btrfs: relocating block group
>> 7236780818432 flags 9
>> May 29 13:08:36 s0 kernel: [46046.149492] btrfs: found 18543 extents
>> May 29 13:09:03 s0 kernel: [46072.944773] btrfs: found 18543 extents
>> May 29 13:09:04 s0 kernel: [46074.317760] btrfs: relocating block group
>> 7235707076608 flags 20
>> ...
>> May 29 13:55:56 s0 kernel: [48882.551881]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1858:port 6 slot 1
>> rx_desc 30001 has error info8000000080000000.
>> May 29 13:55:56 s0 kernel: [48882.551918]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_94xx.c 626:command active
>> FFFFFCFD, slot [1].
>> May 29 13:55:56 s0 kernel: [48882.552084] btrfs csum failed ino 62276
>> off 1019039744 csum 1546305812 private 3211821089
>> May 29 13:55:56 s0 kernel: [48882.552241] btrfs csum failed ino 62276
>> off 1018056704 csum 3750159096 private 3390793248
>> ...
>> May 29 13:55:56 s0 kernel: [48882.553791] btrfs csum failed ino 62276
>> off 1018712064 csum 872056089 private 2640477920
>> May 29 13:55:56 s0 kernel: [48882.554528]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1858:port 6 slot 1
>> rx_desc 30001 has error info0000000000010000.
>> May 29 13:55:56 s0 kernel: [48882.554541]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_94xx.c 626:command active
>> FF3FFEFD, slot [1].
>> May 29 13:55:56 s0 kernel: [48882.555626]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1858:port 6 slot 22
>> rx_desc 30016 has error info0000000001000000.
>> May 29 13:55:56 s0 kernel: [48882.555635]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_94xx.c 626:command active
>> FF3FFEFB, slot [16].
>> May 29 13:55:56 s0 kernel: [48882.555659] sd 8:0:3:0: [sde] command
>> ffff880006c57800 timed out
>> May 29 13:56:00 s0 kernel: [48886.313989] sd 8:0:3:0: [sde] command
>> ffff88117af65700 timed out
>> ...
>> May 29 13:56:00 s0 kernel: [48886.314186] sas: Enter
>> sas_scsi_recover_host busy: 31 failed: 31
>> May 29 13:56:00 s0 kernel: [48886.314204] sas: trying to find task
>> 0xffff881083807640
>> May 29 13:56:00 s0 kernel: [48886.314210] sas: sas_scsi_find_task:
>> aborting task 0xffff881083807640
>> May 29 13:56:00 s0 kernel: [48886.314220]
>> /home/apw/COD/linux/drivers/scsi/mvsas/mv_sas.c 1632:mvs_abort_task()
>> mvi=ffff8837faa80000 task=ffff881083807640 slot=ffff8837faaa5140
>> slot_idx=x3
>> May 29 13:56:00 s0 kernel: [48886.314231] sas: sas_scsi_find_task: task
>> 0xffff881083807640 is aborted
>> May 29 13:56:00 s0 kernel: [48886.314236] sas: sas_eh_handle_sas_errors:
>> task 0xffff881083807640 is aborted
>> ...
>> May 29 13:56:00 s0 kernel: [48886.315030] sas: ata10: end_device-8:3:
>> cmd error handler
>> May 29 13:56:00 s0 kernel: [48886.315108] sas: ata7: end_device-8:0: dev
>> error handler
>> May 29 13:56:00 s0 kernel: [48886.315138] sas: ata8: end_device-8:1: dev
>> error handler
>> May 29 13:56:00 s0 kernel: [48886.315168] sas: ata9: end_device-8:2: dev
>> error handler
>> May 29 13:56:00 s0 kernel: [48886.315193] sas: ata10: end_device-8:3:
>> dev error handler
>> May 29 13:56:00 s0 kernel: [48886.315219] ata10.00: exception Emask 0x1
>> SAct 0x7fffffff SErr 0x0 action 0x6 frozen
>> May 29 13:56:00 s0 kernel: [48886.315239] ata10.00: failed command:
>> WRITE FPDMA QUEUED
>> May 29 13:56:00 s0 kernel: [48886.315255] ata10.00: cmd
>> 61/08:00:88:a0:98/00:00:7c:00:00/40 tag 0 ncq 4096 out
>> May 29 13:56:00 s0 kernel: [48886.315258] res
>> 41/54:08:68:d6:98/00:00:7c:00:00/40 Emask 0x8d (timeout)
>> May 29 13:56:00 s0 kernel: [48886.315278] ata10.00: status: { DRDY ERR }
>> May 29 13:56:00 s0 kernel: [48886.315286] ata10.00: error: { UNC IDNF
>> ABRT }
>> ...
>> May 29 13:56:54 s0 kernel: [48940.752647] btrfs: run_one_delayed_ref
>> returned -5
>> May 29 13:56:54 s0 kernel: [48940.752652] btrfs: run_one_delayed_ref
>> returned -5
>> May 29 13:56:54 s0 kernel: [48940.752656] 99 28
>> May 29 13:56:54 s0 kernel: [48940.752665] ------------[ cut here ]------------
>> May 29 13:56:54 s0 kernel: [48940.752669] ------------[ cut here ]------------
>> May 29 13:56:54 s0 kernel: [48940.752674] c2 00
>> May 29 13:56:54 s0 kernel: [48940.752683] ------------[ cut here ]------------
>> May 29 13:56:54 s0 kernel: [48940.752747] WARNING: at
>> /home/apw/COD/linux/fs/btrfs/super.c:219
>> __btrfs_abort_transaction+0xae/0xc0 [btrfs]()
>> May 29 13:56:54 s0 kernel: [48940.752760] 30
>> May 29 13:56:54 s0 kernel: [48940.752825] WARNING: at
>> /home/apw/COD/linux/fs/btrfs/super.c:219
>> __btrfs_abort_transaction+0xae/0xc0 [btrfs]()
>> May 29 13:56:54 s0 kernel: [48940.752832] 45
>> May 29 13:56:54 s0 kernel: [48940.752862] WARNING: at
>> /home/apw/COD/linux/fs/btrfs/super.c:219
>> __btrfs_abort_transaction+0xae/0xc0 [btrfs]()
>> May 29 13:56:54 s0 kernel: [48940.752871] 00
>> May 29 13:56:54 s0 kernel: [48940.752876] Hardware name: H8QG6
>> May 29 13:56:54 s0 kernel: [48940.752880] bf
>> May 29 13:56:54 s0 kernel: [48940.752884] Hardware name: H8QG6
>> May 29 13:56:54 s0 kernel: [48940.752892] 00
>> May 29 13:56:54 s0 kernel: [48940.752896] btrfs: Transaction aborted 44
>> May 29 13:56:54 s0 kernel: [48940.752902] btrfs: Transaction aborted
>> ...
>> May 29 13:56:54 s0 kernel: [48940.754032] [<ffffffffa00db45e>]
>> __btrfs_abort_transaction+0xae/0xc0 [btrfs]
>> ...
>> May 29 13:56:54 s0 kernel: [48940.756438] BTRFS error (device sdg) in
>> __btrfs_free_extent:5134: IO failure
>> May 29 13:56:54 s0 kernel: [48940.756455] btrfs: run_one_delayed_ref
>> returned -5
>> May 29 13:56:54 s0 kernel: [48940.756462] BTRFS error (device sdg) in
>> btrfs_run_delayed_refs:2454: IO failure
>> May 29 13:56:55 s0 kernel: [48940.997869] BUG: unable to handle kernel
>> paging request at ffffffffffffff99
>> May 29 13:56:55 s0 kernel: [48940.997904] IP: [<ffffffffa012305c>]
>> btrfs_dec_test_ordered_pending+0xdc/0x220 [btrfs]
>> May 29 13:56:55 s0 kernel: [48940.998631] Call Trace:
>> May 29 13:56:55 s0 kernel: [48940.998682] [<ffffffffa010e838>]
>> btrfs_finish_ordered_io+0x58/0x3c0 [btrfs]
>> May 29 13:56:55 s0 kernel: [48940.998714] [<ffffffff8103ff59>] ?
>> default_spin_lock_flags+0x9/0x10
>> May 29 13:56:55 s0 kernel: [48940.998739] [<ffffffff8166c7bf>] ?
>> _raw_spin_lock_irqsave+0x2f/0x40
>> May 29 13:56:55 s0 kernel: [48940.998796] [<ffffffffa010ebf1>]
>> btrfs_writepage_end_io_hook+0x51/0xa0 [btrfs]
>> May 29 13:56:55 s0 kernel: [48940.998860] [<ffffffffa0127b39>]
>> end_extent_writepage+0x69/0x100 [btrfs]
>> May 29 13:56:55 s0 kernel: [48940.998919] [<ffffffffa0127c36>]
>> end_bio_extent_writepage+0x66/0xa0 [btrfs]
>> May 29 13:56:55 s0 kernel: [48940.998949] [<ffffffff811b80fd>]
>> bio_endio+0x1d/0x40
>> May 29 13:56:55 s0 kernel: [48940.999009] [<ffffffffa00fbe45>]
>> end_workqueue_fn+0x45/0x50 [btrfs]
>> May 29 13:56:55 s0 kernel: [48940.999058] [<ffffffffa013433c>]
>> worker_loop+0x16c/0x510 [btrfs]

Btrfs should not corrupt the filesystem after a crash or after a hardware failure. It is designed to always have a correct filesystem on disk. You can lose new and updated data from the last 30 seconds if you, for instance, disconnect the box from power without a proper shutdown, but everything else is still a valid and correct filesystem. You do not even need to run an fsck tool. In such situations you do not need any recovery tools or special mount options; the filesystem recovers itself, or, to be exact, it is not even corrupted at all.

Since this does not work for you, since none of the recovery attempts have looked successful, and since even Hugo is out of ideas, you have found a bug in the btrfs implementation in conjunction with disk write I/O errors. In this case you seem to have a corrupted filesystem which needs a lot of manual work to partially recover the data; otherwise the existing tools would already have helped you recover it.

If I were you, I would wait a few days to see whether people have new ideas, then ask the mailing list once more.
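The log excerpt points at ata10 / [sde] timeouts and UNC errors, so it is worth ruling out (or confirming) a genuinely dying drive as opposed to a misbehaving controller. A quick sketch with smartmontools, assuming the RocketRaid in JBOD mode passes SMART commands through unmodified:

# overall health verdict plus the raw attribute table
sudo smartctl -H -A /dev/sde

# the drive's own logged errors (UNC sectors and friends)
sudo smartctl -l error /dev/sde

# start a full surface self-test in the background;
# read the result later with: sudo smartctl -l selftest /dev/sde
sudo smartctl -t long /dev/sde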
On Mon, Jun 04, 2012 at 05:02:26PM +0200, Stefan Behrens wrote:
> According to the kern.1.log file that you have sent (which is not
> visible on the mailing list because it exceeded the 100,000 chars limit
> of vger.kernel.org), a rebalance operation was active when the disks or
> the RAID controller started to cause IO errors.
>
> There seems to be a bug! Like that a write failure is ignored in btrfs.
> For instance, the result of barrier_all_devices() is ignored. Afterwards
> the superblocks are written referencing trees which have not been
> completely written to disk.

This may also be what happened when my hardware RAID blew up. I was left with two completely inconsistent/unusable btrfs filesystems which I am still attempting to recover.

Assuming that the general mount options to remount read-only on errors are correctly handled by btrfs, that would seem to be the wise thing to do. IMO a volume which experiences a metadata write error on the underlying medium should be made read-only immediately anyway.

--
Ryan C. Underwood, <nemesis@icequake.net>
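For comparison, ext3/ext4 expose exactly that policy as a mount option, whereas btrfs of this era has no errors= option and relies on aborting the transaction internally, so remounting by hand is the closest equivalent. Illustrative commands only, with made-up device and mount point names:

# ext3/ext4 can be told to flip read-only on the first metadata error
mount -o errors=remount-ro /dev/sdX1 /mnt/data

# closest manual equivalent on a btrfs volume that has started
# throwing write errors
mount -o remount,ro /tank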
Below is what you used? So you have RAID 0 for data, RAID 1 for metadata. This doesn't help any, but a point of info.

# Create a filesystem across four drives (metadata mirrored, data striped)
mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde

Just to make sure I understand correctly: this FS with critical info used a non-production filesystem, in RAID 0 (no redundancy), with no backups.

Another option I found (and I am no authority on the subject) is to use btrfs-restore with -i:

-i: Ignore errors. Normally the restore tool exits immediately for any
    error. This option forces it to keep going if it can; usually this
    results in some missing data.

Again, this can be destructive, and it would be very smart to make block-level copies of everything.

On Mon, Jun 4, 2012 at 1:03 PM, Maxim Mikheev <mikhmv@gmail.com> wrote:
> It was a RAID0 unfortunately.
>
> On 06/04/2012 02:02 PM, Michael wrote:
>> If he has it in a RAID 1, could he manually fail the bad disk and try
>> it from there? […]
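For concreteness, a sketch of what that invocation could look like, pointed at one of the older tree roots btrfs-find-root reported earlier in the thread; the output directory is illustrative and should sit on a different disk than the damaged array:

# plain run, skipping over errors instead of bailing out
sudo btrfs-restore -v -i /dev/sdb /mnt/recovery/

# same, but starting from an older tree root found by btrfs-find-root
sudo btrfs-restore -v -i -t 4923798065152 /dev/sdb /mnt/recovery/

# add -o to overwrite files left behind by a previous partial run
sudo btrfs-restore -v -i -o -t 4923798065152 /dev/sdb /mnt/recovery/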
Am Montag, 4. Juni 2012 schrieb Hugo Mills:
> On Mon, Jun 04, 2012 at 12:24:05PM -0400, Maxim Mikheev wrote:
>> I run through all potential tree roots. It gave me everytime
>> messages like these:
>>
>> parent transid verify failed on 3405159735296 wanted 9096 found 5263
>> parent transid verify failed on 3405159735296 wanted 9096 found 5263
[…]
>> The largest recovered data is 12Kb.
>> max@s0:~/btrfs-recovering./recovered$ ls -lahs 3728819929088
>> total 28K
>> 4.0K drwxr-xr-x 3 root root 4.0K Jun 4 12:06 .
[…]
>> What can I do next?
>
> I'm out of ideas.
>
> At this point, though, you're probably looking at somebody writing
> custom code to scan the FS and attempt to find and retrieve anything
> that's recoverable.
[…]

The only thing that comes to my mind that is still worth trying, without involving a data recovery firm or engaging a developer for an improved recovery tool, is:

PhotoRec from the testdisk package, or some other data recovery tool that looks for the headers of known file formats (foremost, I think, is another).

It has some drawbacks:

- AFAIK it has no means to glue back together fragmented files, so these are likely gone or truncated
- filenames are lost
- directory structure is lost

I think it has been said already, but it is important to repeat: BTRFS, or any other filesystem, in RAID 0 without backups is not for important production data. Not ever. Maxim, if you learn anything out of this, let it be at least that. When I think about your setup, the sentence "I want to have my data destroyed" comes to my mind.

I would try photorec from testdisk first. It is quite easy to use.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
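A sketch of such a PhotoRec run; the output directory is illustrative and must live on a disk outside the damaged array:

# /log writes a photorec.log, /d sets where carved files land;
# photorec is menu-driven from there (pick the whole disk, and since
# btrfs is not among the listed filesystems, choose "Other")
sudo photorec /log /d /mnt/recovery/ /dev/sdb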
Am Dienstag, 5. Juni 2012 schrieb Martin Steigerwald:
> Am Montag, 4. Juni 2012 schrieb Hugo Mills:
[…]
> PhotoRec from the testdisk package, or some other data recovery tool
> that looks for the headers of known file formats […]
>
> It has some drawbacks:
>
> - AFAIK it has no means to glue back together fragmented files, so
>   these are likely gone or truncated
> - filenames are lost
> - directory structure is lost

It won't work for striped files either, so it may only help for rather small files, depending on the BTRFS RAID 0 stripe size.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
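To put a number on that: btrfs's RAID 0 stripe element is 64KiB (an assumption here, based on the in-kernel BTRFS_STRIPE_LEN constant), so only files no larger than one stripe element have a real chance of carving out of a single disk intact; anything bigger was split across drives and will come out truncated. Something along these lines, with an illustrative output path, separates the plausible survivors:

# files at or under the 64KiB stripe element may have survived whole
find /mnt/recovery -type f -size -64k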
Am Montag, 4. Juni 2012 schrieb Maxim Mikheev:
> --super works but my root tree 2 has many errors too.
>
> What can I do next?

Have a data recovery company try to physically recover the bad hard disk onto a good one, then try to mount BTRFS again and hope that your previous attempts to repair the filesystem didn't make things worse?

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
Am Dienstag, 5. Juni 2012 schrieb Martin Steigerwald:
> Am Montag, 4. Juni 2012 schrieb Maxim Mikheev:
>> --super works but my root tree 2 has many errors too.
>>
>> What can I do next?
>
> Have a data recovery company try to physically recover the bad hard disk
> onto a good one, then try to mount BTRFS again and hope that your
> previous attempts to repair the filesystem didn't make things worse?

Disregard this. I read further into the thread, and it seems that a fix to BTRFS or the BTRFS tools could get your filesystem restored, and that it was a RAID-card problem, not necessarily a faulty disk. Still, your setup was a really bad idea.

--
Martin 'Helios' Steigerwald - http://www.Lichtvoll.de
GPG: 03B0 0D6C 0040 0710 4AFA B82F 991B EAAC A599 84C7
Hello Martin, you wrote on 05.06.12:

>> --super works but my root tree 2 has many errors too.
>>
>> What can I do next?
>
> Have a data recovery company try to physically recover the bad
> hard disk onto a good one

About a year ago I asked Kroll-Ontrack. They told me they couldn't (yet) recover btrfs. Maybe it is not only time that has changed since then ...

Best regards!
Helmut
Option -i was helpful. Some data was restored.

While restoring some files I got the message "ret is -3". These files have 0 size. Can anyone tell me what code "-3" means? Is it recoverable?

So basically the data is on the hard drives but not completely available. The question is: is it possible to push btrfs to roll back several generations?

Thanks

On 06/04/2012 02:37 PM, Michael wrote:
> Below is what you used? So you have RAID 0 for data, RAID 1 for
> metadata. This doesn't help any, but a point of info.
>
> # Create a filesystem across four drives (metadata mirrored, data striped)
> mkfs.btrfs /dev/sdb /dev/sdc /dev/sdd /dev/sde
>
> Another option I found (and I am no authority on the subject) is to use
> btrfs-restore with -i […]
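On the "-3": one plausible reading (an assumption, not verified against the restore sources) is that, as elsewhere in the kernel and the btrfs tooling, the number is a negative errno value. Errno 3 is ESRCH, which in restore's context would mean a failed b-tree search for the file's extent items rather than a literal "no such process", and that would fit the zero-length results. Errno values can be looked up from the shell:

# print the message for errno 3 (works with python 2 or 3)
python -c 'import os; print(os.strerror(3))'    # -> No such process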
Hi Everyone,

Is it possible to extract a specific file instead of restoring everything?

Thanks

On 06/06/2012 12:25 PM, Maxim Mikheev wrote:
> Option -i was helpful. Some data was restored.
>
> While restoring some files I got the message "ret is -3". These files
> have 0 size. Can anyone tell me what code "-3" means? Is it
> recoverable?
>
> So basically the data is on the hard drives but not completely
> available. The question is: is it possible to push btrfs to roll back
> several generations? […]
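Later btrfs-progs releases added a --path-regex option to restore for exactly this, so treat the following as a sketch that assumes a build recent enough to have it (the output directory is illustrative). The regex must match every directory level on the way down to the file, for example to pull out only /Irina/.idmapdir2/cap.txt:

sudo btrfs restore -v -i \
    --path-regex '^/(|Irina(|/\.idmapdir2(|/cap\.txt)))$' \
    /dev/sdb /mnt/recovery/

The nested empty alternatives let the pattern match each parent directory as restore walks the tree, which is why the expression looks more convoluted than a plain path.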