Displaying 20 results from an estimated 400 matches similar to: "fix to lib/talloc.c"
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment and now my raidz pool is corrupted. I have a
raidz pool running on Opensolaris b85. I wanted to try out freenas 0.7
and tried to add my pool to freenas.
After adding the zfs disk,
vdev and pool, I decided to back out and went back to opensolaris. Now
my raidz pool will not mount and I get the following errors. I hope an
expert can help me recover from this error.
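A minimal sketch of the first diagnostic steps for a pool in this state, assuming a pool named tank and a member disk at /dev/dsk/c1t0d0s0 (both hypothetical names):
# Scan attached devices for importable pools without actually importing anything:
zpool import
# Dump the on-disk labels of one member disk to see which pool name, GUID
# and txg they still reference after the freenas round trip:
zdb -l /dev/dsk/c1t0d0s0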
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname -type f and here is the stack;
hopefully it gives someone a hint at what the issue is. I have
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers paniced today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
2010 May 07
0
confused about zpool import -f and export
Hi, all,
I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer under hvm mode, then I'm trying to get it back up under pv mode. In that process the controller names change, and that's where I'm getting tripped up.
I do a successful install, then I boot OK,
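A minimal sketch of the export/import sequence, assuming a data pool named tank (hypothetical): export records that the pool was cleanly released, and a later import rescans the devices under whatever controller names they have in the new environment.
# Before switching boot environments, release the pool:
zpool export tank
# After booting under pv, rescan the devices and import under the new names:
zpool import tank
# -f is only needed when the pool was not exported first; it overrides the
# check that the pool was last in use by another system:
zpool import -f tank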
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi,
more than a year ago I created a mirrored ZFS pool consisting of 2x1TB
HDDs using the OSX 10.5 ZFS Kernel Extension (Zpool Version 8, ZFS
Version 2). Everything went fine and I used the pool to store personal
stuff on it, like lots of photos and music. (So getting the data back is
not time critical, but still important to me.)
Later, since the development of the ZFS extension was
2012 Jan 08
0
Pool faulted in a bad way
Hello,
I have been asked to take a look at a pool on an old OSOL 2009.06 host. It had been left unattended for a long time and was found in a FAULTED state. Two of the disks in the raidz2 pool seem to have failed; one has been replaced by a spare, the other is UNAVAIL. The machine was restarted and the damaged disks were removed to make it possible to access the pool without it hanging
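A minimal sketch of the usual follow-up for a pool in this state, assuming the pool is named tank and the UNAVAIL disk is c2t4d0, replaced by c2t5d0 (all hypothetical names):
# Show only unhealthy pools, with per-vdev detail and suggested actions:
zpool status -x
# Replace the UNAVAIL disk once a new device is available, then watch the resilver:
zpool replace tank c2t4d0 c2t5d0
zpool status tank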
2007 Dec 13
0
zpool version 3 & Uberblock version 9, zpool upgrade only half succeeded?
We are currently experiencing a huge performance drop on our zfs storage server.
We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (on the zfs level), we upgraded our NAS head from opensolaris b57
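A minimal sketch of checking which version each layer actually reports, using the pool names stor and home from the post:
# Pool (SPA) version as the running system sees it:
zpool get version stor home
# With no arguments, zpool upgrade lists pools still below the version this software supports:
zpool upgrade
# Filesystem (ZPL) versions are tracked separately from the pool version:
zfs get version stor home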
2009 Apr 08
2
ZFS data loss
Hi,
I have lost a ZFS volume and I am hoping to get some help to recover the
information ( a couple of months worth of work :( ).
I have been using ZFS for more than 6 months on this project. Yesterday
I ran a "zvol status" command, the system froze and rebooted. When it
came back the discs where not available.
See bellow the output of " zpool status", "format"
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi,
I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. It looked like the update went right, but after that I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help.
Now it looks like that:
# zpool status
pool: tank
state: UNAVAIL
status:
2016 Jan 14
0
[PATCH] nv50/ir: only use FILE_LOCAL_MEMORY for temp arrays that use indirection
Previously we were treating any indirect temp array usage to mean that
everything should end up in lmem. The MemoryOpt pass would clean a lot
of that up later, but in the meanwhile we would lose a lot of
opportunity for optimization.
This helps a lot of Metro 2033 Redux shaders and a handful of KSP shaders:
total instructions in shared programs : 6288373 -> 6261517 (-0.43%)
total gprs used in shared
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
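A minimal sketch of what can be tried before editing uberblocks by hand, assuming the backing file is /tank/poolfile (hypothetical); note that later OpenSolaris builds added a recovery-mode import that effectively falls back to an earlier uberblock:
# Dump the vdev labels of the file-backed vdev to confirm pool name, GUID and txg:
zdb -l /tank/poolfile
# File vdevs are only found when import is pointed at their directory:
zpool import -d /tank
# On builds that support it, -F discards the newest transactions and imports
# from an older, consistent uberblock:
zpool import -d /tank -F <poolname>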
2010 Sep 10
3
zpool upgrade and zfs upgrade behavior on b145
Not sure what the best list to send this to is right now, so I have selected
a few, apologies in advance.
A couple questions. First I have a physical host (call him bob) that was
just installed with b134 a few days ago. I upgraded to b145 using the
instructions on the Illumos wiki yesterday. The pool has been upgraded (27)
and the zfs file systems have been upgraded (5).
chris@bob:~# zpool
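A minimal sketch of checking and applying the two separate upgrades on b145, assuming the pool is the root pool rpool (hypothetical name):
# List the pool versions this build supports and what each one adds:
zpool upgrade -v
# Upgrade the pool itself to the highest version the build supports:
zpool upgrade rpool
# Filesystem versions are upgraded separately; -r covers every dataset in the pool:
zfs upgrade -r rpool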
2010 May 01
5
Single-disk pool corrupted after controller failure
I had a single spare 500GB HDD and I decided to install a FreeBSD file
server on it for learning purposes, and I moved almost all of my data
to it. Yesterday, and naturally after no longer having backups of the
data in the server, I had a controller failure (SiS 180 (oh, the
quality)) and the HDD was considered unplugged. When I noticed a few
checksum failures on `zfs status` (including two on
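A minimal sketch of the usual follow-up once the disk is visible again, assuming the pool is named tank (hypothetical); this only clears the error counters and re-verifies the data, it cannot repair what a single-disk pool has actually lost:
# Clear the error counters accumulated while the controller was failing:
zpool clear tank
# Re-read and verify every block; on a single-disk pool checksum errors are
# reported but cannot be repaired from redundancy:
zpool scrub tank
zpool status -v tank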
2009 Aug 05
0
zfs export and import between different controllers
The problem itself happened on FreeBSD, but as I understand it, it's ZFS related, not FreeBSD-specific.
So:
I got an error when I tried to migrate a zfs disk between 2 different servers. After exporting on the first one, the import on the second one fails with the following:
Output from the pool import:
#zpool import storage750
cannot import 'storage750': one or more devices is currently unavailable
Output
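A minimal sketch of pointing the import at the right device namespace on the second server, assuming FreeBSD device nodes under /dev (the pool name storage750 is from the post):
# Search a specific directory for pool members instead of the default device list:
zpool import -d /dev
# Once the pool shows up in the listing, import it by name; -f overrides the
# check that it was last in use on the other host:
zpool import -d /dev -f storage750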
2018 Mar 01
0
Released Pigeonhole v0.4.22 for Dovecot v2.2.34.
Hello Dovecot users,
Here is a new Pigeonhole release that goes with the new Dovecot v2.2.34
release. This release is not strictly necessary, as previous versions
should be usable as well. This release only contains bugfixes.
Changelog v0.4.22:
- Fixed filesystem path handling problem: sieve plugin could have
assert-crashed with specific path lengths with: "Panic: file
realpath.c: line
2007 Mar 12
1
roundup in vdev_raidz.c
Hi guys,
There seems to have been some discussion about this before
(http://mail.opensolaris.org/pipermail/zfs-discuss/2006-September/013050.html)
but I don't *quite* understand why the roundup is necessary.
Using Bill's notation, if there isn't a roundup (writing 4k fs blocks
to a 4-device RAID-Z) wouldn't you get something like this:
Disk 0 1 2
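A hedged reading of the roundup for the single-parity, 4-device case above: an 8-sector (4k) block needs 8 data sectors plus 1 parity sector, i.e. 9 sectors, and vdev_raidz_asize() rounds that up to a multiple of nparity+1 (here 2), charging 10 sectors. As I understand the linked thread, the point is that without the roundup a later free could leave a 1-sector hole, and no RAID-Z allocation can ever be smaller than 1 data sector + 1 parity sector = 2 sectors, so such a hole would be permanently unusable.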
2012 Jan 11
0
Clarifications wanted for ZFS spec
I''m reading the "ZFS On-disk Format" PDF (dated 2006 -
are there newer releases?), and have some questions
regarding whether it is outdated:
1) On page 16 it has the following phrase (which I think
is in general invalid):
The value stored in offset is the offset in terms of
sectors (512 byte blocks). To find the physical block
byte offset from the beginning of a slice,
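A hedged worked example of the calculation that sentence goes on to describe, as I read the 2006 spec: a DVA offset of 0x2000 sectors corresponds to 0x2000 * 512 = 0x400000 bytes into the allocatable space, and the spec then adds 0x400000 (4 MB) for the two front vdev labels plus the boot block, giving a physical byte offset of 0x800000 into the slice.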
2004 Feb 06
4
memory reduction
As those of you who watch CVS will be aware, Wayne has been
making progress in reducing the memory requirements of rsync.
Much of what he has done has been the product of discussions
between him and me that started a month ago with John Van
Essen.
Most recently Wayne has changed how the file_struct and its
associated data are allocated, eliminating the string areas.
Most of these changes have been