Displaying 20 results from an estimated 1000 matches similar to: "ZFS panic while mounting lofi device?"

2007 Feb 13
2
zpool export consumes whole CPU and takes more than 30 minutes to complete
Hi. T2000 1.2GHz 8-core, 32GB RAM, S10U3, zil_disable=1. The command 'zpool export f3-2' has been hung for 30 minutes now and is still going. Nothing else is running on the server. I can see one CPU sitting at 100% in SYS, like:
bash-3.00# mpstat 1 [...]
CPU minf mjf xcal intr ithr csw icsw migr smtx srw syscl usr sys wt idl
0 0 0 67 220 110 20 0 0 0 0
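For context, zil_disable was an S10-era tunable set in /etc/system; a minimal sketch of that setting and of watching the hung export from another shell (the pool name comes from the report above):
    # /etc/system (legacy tunable, removed in later releases)
    set zfs:zil_disable = 1

    mpstat 1            # per-CPU stats; one CPU pegged at 100% sys during the export
    zpool export f3-2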
2006 Oct 05
0
Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened. I'm trying to build a backup-solution server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when the sync completes it takes a snapshot. It had worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf it crashed. I didn't save what
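The workflow the poster describes boils down to an rsync into a dataset followed by a snapshot; a minimal sketch, with host, path, and dataset names that are hypothetical rather than taken from the post:
    rsync -aR user@client:/data/ /backup/alice/    # -R keeps relative paths, as in the post
    zfs snapshot backup/alice@$(date +%Y-%m-%d)    # snapshot the dataset once the sync completes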
2006 Sep 11
7
installing a pseudo driver in a Solaris DOM U and DOM U reboot
Hello, on a v20z we have as DOM 0 a Solaris XEN on snv44, 64-bit, and as DOM U a Solaris XEN on snv44, 64-bit. We then install a pseudo driver in the Solaris DOM U XEN snv44: the installation is OK and the driver works as expected. But on reboot of the DOM U, the driver is no longer there (in modinfo, driver not found). Is there something special to do after a pseudo driver installation in a Solaris
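For a driver to come back after a reboot it normally has to be registered with add_drv, not just copied into place and loaded; a minimal sketch, assuming a hypothetical driver named mydrv:
    cp mydrv /usr/kernel/drv/amd64/   # 64-bit module location
    cp mydrv.conf /usr/kernel/drv/
    add_drv mydrv                     # records the driver in /etc/name_to_major
    modinfo | grep mydrv              # verify it shows up once attached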
2007 Apr 03
2
ZFS panics with dmu_buf_hold_array
Hi, I have been wrestling with ZFS issues since yesterday, when one of my disks sort of died. After much wrestling with "zpool replace" I managed to get the new disk in and got the pool to resilver, but since then I have one error left that I can't clear:
  pool: data
 state: ONLINE
status: One or more devices has experienced an error resulting in data corruption.
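The usual sequence for dealing with residual errors after a resilver is a scrub, a look at the affected files, then a clear; a sketch using the pool name from the status output above:
    zpool scrub data
    zpool status -v data   # -v lists the files hit by the corruption
    zpool clear data       # reset the error counters once the files are restored or removed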
2005 Nov 19
11
ZFS related panic!
> My current zfs setup looks like this:
>
> homepool              3.63G  34.1G     8K  /homepool
> homepool/db           61.6M  34.1G  8.50K  /var/db
> homepool/db/pgsql     61.5M  34.1G  61.5M  /var/db/pgsql
> homepool/home         3.57G  34.1G  10.0K  /users
> homepool/home/carrie     8K  34.1G     8K  /users/carrie
2009 Apr 21
0
opensolaris crash in vn_rele()
My newly upgraded OpenSolaris 2008.11 laptop crashed last weekend. (The OS was installed from the OS 2008.11 live CD and then upgraded using the package manager to snv_111.) I was trying to copy a large Virtual PC image from my wife's iMac to the laptop. On a whim I had decided to create a separate zvol in the root pool to contain the image, figuring I could create a vbox with linux or
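A zvol in the root pool like the one described would be created along these lines; the volume name and size are hypothetical:
    zfs create -V 20g rpool/vpc-image      # block-level volume inside the root pool
    ls -l /dev/zvol/dsk/rpool/vpc-image    # the device node backing the volume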
2009 Aug 12
4
zpool import -f rpool hangs
I had the rpool with two SATA disks in a mirror. Solaris 10 5.10 Generic_141415-08 i86pc i386 i86pc. Unfortunately the first disk, with the grub loader, has failed with unrecoverable block write/read errors. Now I have the problem of importing rpool after the first disk has failed. So I decided to do "zpool import -f rpool" with only the second disk, but it hangs and the system is
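A mirror with one dead half should still import; a sketch of the kind of attempts this usually involves (the alternate-root variant is an assumption, not from the report):
    zpool import                  # scan attached devices and list importable pools
    zpool import -f rpool         # the command that hangs in the report
    zpool import -f -R /a rpool   # from rescue media, import under an alternate root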
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS-related crash/bug described below. How do I go about reporting the crash, and what additional information is needed? I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64-bit. OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
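A shell stand-in for the kind of test app described, filling many directories with random data (the pool path is hypothetical):
    i=0
    while [ "$i" -lt 1000 ]; do
        mkdir -p /tank/stress/dir$i
        dd if=/dev/urandom of=/tank/stress/dir$i/file bs=1024k count=8 2>/dev/null
        i=$((i + 1))
    done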
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot:
Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
2007 Oct 10
6
server-reboot
Hi. Just migrated to ZFS on OpenSolaris. I copied data to the server using rsync and got this message:
Oct 10 17:24:04 zetta ^Mpanic[cpu1]/thread=ffffff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP: type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000
Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice]
Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
2006 Dec 07
6
cannot load kqemu modul on solaris 5.10 3/05
Hi all, I want to load the precompiled kqemu module on my Solaris 5.10 3/05, but it doesn't work.
MAKE ---
-bash-3.00# make install
cp kqemu-solaris-i386 kqemu
/usr/sbin/install -f /usr/kernel/drv -m 755 -u root -g sys kqemu
new owner is root
kqemu installed as /usr/kernel/drv/kqemu
cp kqemu-solaris-x86_64 kqemu
/usr/sbin/install -f /usr/kernel/drv/amd64 -m 755 -u root -g sys kqemu
new owner is
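Installing the module file alone is not enough to make it loadable on demand; it normally has to be registered or explicitly loaded. A minimal sketch, assuming the stock kqemu driver name (the exact steps for this port are an assumption):
    add_drv kqemu                         # register the driver with the system
    modload /usr/kernel/drv/amd64/kqemu   # or force-load the 64-bit module directly
    modinfo | grep kqemu                  # verify the module is loaded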
2007 Apr 23
3
ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2008 Dec 28
2
zfs mount hangs
Hi. System: Netra 1405, 4x450MHz, 4GB RAM, 2x146GB (root pool) and 2x146GB (space pool), snv_98. After a panic the system hangs on boot, and manual attempts to mount (at least) one dataset in single-user mode hang as well. The panic:
Dec 27 04:42:11 base ^Mpanic[cpu0]/thread=300021c1a20:
Dec 27 04:42:11 base unix: [ID 521688 kern.notice] [AFT1] errID 0x00167f73.1c737868 UE Error(s)
Dec 27
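One way to find the dataset that blocks mounting is to skip zfs mount -a and mount them one at a time; the dataset name here is hypothetical:
    zfs list -o name,mounted    # see which datasets have and haven't mounted
    zfs mount space/data1       # mount individually to isolate the one that hangs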
2008 Nov 13
5
BAD TRAP with Crossbow Beta October 31 2008
Hi. I tried to send this to the mailing list, but it never showed up in the archives, so I'm trying the forum instead... I recently installed the Crossbow Beta October 31 2008 on my SunFire T1000, and let me first say that I'm very pleased with the functionality it provides. What's not so pleasing is the fact that after installing this, the computer now gets very
2007 Apr 30
4
B62 AHCI and ZFS
Hardware: Supermicro X7DAE (AHCI BIOS), dual Intel Woodcrest processors, 6 x Western Digital Raptor SATA drives. I have installed b62 running 64-bit successfully on a PATA drive. The BIOS is configured to access the SATA drives in native mode using the AHCI BIOS. I have 6 SATA II drives accessed via the Solaris AHCI driver. I have created a ZFS file system across all 6 drives. This works fine until
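A pool spanning six drives like this could be a plain stripe or carry redundancy; a raidz2 sketch with hypothetical device names (the post does not say which layout was used):
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0
    zpool status tank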
2008 May 26
2
indiana as nfs server: crash due to zfs
Hello all, I have Indiana freshly installed on a Sun Ultra 20 machine. It only serves as an NFS server. During one night the kernel crashed, and I got these messages:
May 22 02:18:57 ultra20 unix: [ID 836849 kern.notice]
May 22 02:18:57 ultra20 ^Mpanic[cpu0]/thread=ffffff0003d06c80:
May 22 02:18:57 ultra20 genunix: [ID 603766 kern.notice] assertion failed: sm->sm_space == 0 (0x40000000 ==
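Serving NFS directly from ZFS, as this machine does, is typically enabled per dataset via the sharenfs property; a sketch with a hypothetical dataset name:
    zfs set sharenfs=on tank/export   # share the dataset over NFS
    zfs get sharenfs tank/export      # confirm the property is set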
2010 Feb 08
5
zfs send/receive : panic and reboot
<copied from opensolaris-discuss as this probably belongs here.> I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive side. The system reboots immediately. Here is the log from /var/adm/messages:
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber ^Mpanic[cpu1]/thread=ffffff014ba86e40:
Feb 8
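The combination under discussion is a recursive send piped into a receive with -d; a sketch with hypothetical pool names (per the post, this is what triggers the immediate reboot):
    zfs snapshot -r tank@move
    zfs send -R tank@move | zfs receive -d newpool   # -d recreates the dataset paths under newpool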
2007 Sep 19
2
import zpool error if use loop device as vdev
Hey guys, I just ran a test using loop devices as vdevs for a zpool. Procedure as follows:
1) mkfile -v 100m disk1
   mkfile -v 100m disk2
2) lofiadm -a disk1 /dev/lofi
   lofiadm -a disk2 /dev/lofi
3) zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
4) zpool export pool_1and2
5) zpool import pool_1and2
Error info here:
bash-3.00# zpool import pool1_1and2
cannot import
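For reference, lofiadm wants an absolute file path and prints the device it assigns, and zpool import only scans /dev/dsk unless pointed elsewhere with -d; a corrected sketch of the same procedure:
    mkfile -v 100m /var/tmp/disk1
    mkfile -v 100m /var/tmp/disk2
    lofiadm -a /var/tmp/disk1              # prints /dev/lofi/1
    lofiadm -a /var/tmp/disk2              # prints /dev/lofi/2
    zpool create pool_1and2 /dev/lofi/1 /dev/lofi/2
    zpool export pool_1and2
    zpool import -d /dev/lofi pool_1and2   # point import at the lofi device directory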
2007 Jun 16
5
zpool mirror faulted
I have a strange problem with a faulted zpool (two-way mirror):
[root@einstein;0]~# zpool status poolm
  pool: poolm
 state: FAULTED
 scrub: none requested
config:
        NAME          STATE     READ WRITE CKSUM
        poolm         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c2t0d0s0  ONLINE       0
2007 Sep 19
3
ZFS panic when trying to import pool
I have a raid-z ZFS filesystem with 3 disks. One of the disks was starting to have read and write errors, and it got so bad that I started to see trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics. I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14). But still, when trying to do zpool import