Displaying 20 results from an estimated 3000 matches similar to: "panic after zfs mount"
2010 Jan 17
3
opensolaris fresh install lock up
I just installed OpenSolaris build 130, which I downloaded from genunix. The
install went fine, and the first reboot after install seemed to work, but
when I powered down and rebooted fully, it locks up as soon as I log in.
GNOME is still showing the icon it shows when things haven't finished
loading. Is there any way I can find out why it's locking up and how to
fix it?
2006 Jul 30
6
zfs mount stuck in zil_replay
Hello ZFS,
System was rebooted and after reboot server again
System is snv_39, SPARC, T2000
bash-3.00# ptree
7     /lib/svc/bin/svc.startd -s
  163   /sbin/sh /lib/svc/method/fs-local
    254   /usr/sbin/zfs mount -a
[...]
bash-3.00# zfs list|wc -l
46
Using df I can see most file systems are already mounted.
> ::ps!grep zfs
R 254 163 7 7 0 0x4a004000
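(Not part of the original mail: from the same mdb -k session that produced the ::ps output above, this is a minimal way to confirm what the stuck zfs mount process is blocked in; PID 254 comes from the ptree output, the dcmds themselves are generic.)
> 0t254::pid2proc | ::walk thread | ::findstack -v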
2007 Nov 25
2
Corrupted pool
Howdy,
We are using ZFS on one of our Solaris 10 servers, and the box panicked
this evening with the following stack trace:
Nov 24 04:03:35 foo unix: [ID 100000 kern.notice]
Nov 24 04:03:35 foo genunix: [ID 802836 kern.notice] fffffe80004a14d0 fffffffffb9b49f3 ()
Nov 24 04:03:35 foo genunix: [ID 655072 kern.notice] fffffe80004a1550 zfs:space_map_remove+239 ()
Nov 24 04:03:35 foo genunix: [ID
2001 Feb 05
2
Could not find working SSLeay?
I'm installing OpenSSL 0.9.5a and OpenSSH 2.3.0p1 on an Ultra 5 running
Solaris 8 with the latest cluster patch. OpenSSL installed without any
problems. When I do a configure for OpenSSH I get:
Checking for OpenSSL directory... configure: error: Could not find
working SSLeay / OpenSSL libraries, please install
I've reinstalled OpenSSL and everything is there. As a note I've
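(A sketch of the usual fix, not part of the original mail: point configure at the OpenSSL installation explicitly; /usr/local/ssl is an assumed path - use wherever OpenSSL actually landed.)
# tell the openssh configure script where the OpenSSL headers and libraries are
./configure --with-ssl-dir=/usr/local/ssl
# if it still fails, config.log records the exact test program and link error
tail -50 config.log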
2010 Jun 07
1
samba printing from 64-bit windows server 2008
I have a Red Hat EL5 Samba server hosting a collection of printers and
joined to a domain. I can connect to this server and print happily from
a 32-bit XP box on the domain, but a 64-bit Windows Server 2008 box
cannot connect, and returns the error 0x000006d1.
I get the same results with Samba 3.0.33 (came with Red Hat), 3.5.3 (the
latest from Sernet), and 3.3.12 (this message from the
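(Not from the original post: a hedged first step is to check, from the Samba side, which drivers and architectures the server is actually offering to clients; PRINTSRV, admin and laserjet are placeholder names.)
# list the uploaded printer drivers per architecture
rpcclient //PRINTSRV -U admin -c 'enumdrivers 3'
# show which driver a particular queue is bound to
rpcclient //PRINTSRV -U admin -c 'getdriver laserjet 3'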
2006 Mar 14
1
Problems compiling on Solaris 8
I have two machines on which we are having problems compiling version 4.3p2.
Both machines run Solaris 8 with gcc 3.3.2; OpenSSL 0.9.8a is installed on
both machines as well.
The first exhibits an error in log.h:
In file included from bsd-arc4random.c:18:
../log.h: In function `fatal':
../log.h:56: warning: empty declaration
../log.h:65: error: parse error before "volatile"
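(An addition, not from the original mail: a generic way to see what ../log.h line 65 actually expands to on the failing machine, since a parse error like this usually means a macro or system-header clash; run it from openbsd-compat/ and add whatever -D/-I flags the failing make line used.)
# preprocess the failing file and inspect the region the compiler complained about
gcc -E -I. -I.. bsd-arc4random.c > bsd-arc4random.i
grep -n volatile bsd-arc4random.i | head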
2007 Aug 26
3
Kernel panic receiving incremental snapshots
Before I open a new case with Sun, I am wondering whether anyone has seen this
kernel panic before. It happened on an X4500 running Sol10U3 while it was
receiving incremental snapshot updates.
Thanks.
Aug 25 17:01:50 ldasdata6 panic[cpu0]/thread=fffffe857d53f7a0:
Aug 25 17:01:50 ldasdata6 genunix: [ID 895785 kern.notice] dangling dbufs (dn=fffffe82a3532d10, dbuf=fffffe8b4e338b90)
Aug 25 17:01:50
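(For context only, not from the report: the general shape of the incremental send/receive that was running when the box panicked; the dataset names are made up, ldasdata6 is the receiving host from the log above.)
# on the sending side: stream the delta between two snapshots to the X4500
zfs send -i tank/data@2007-08-24 tank/data@2007-08-25 | \
    ssh ldasdata6 zfs receive -F backup/data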
2008 Apr 24
0
panic on zfs scrub on builds 79 & 86
This just started happening to me. It's a striped, non-mirrored pool (I know, I know). A zfs scrub causes a panic within a minute. I can also trigger a panic by doing tars etc. x86 64-bit kernel ... any ideas? Just to help rule out some things, I changed the motherboard, memory and CPU and it still happens ... I also think it happens on a 32-bit kernel.
genunix: [ID 335743 kern.notice] BAD
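(Not in the original post: the minimal triage loop for a scrub-triggered panic, assuming the striped pool is called "tank".)
# start the scrub that reproduces the panic, then check for checksum errors
zpool scrub tank
zpool status -v tank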
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release.
Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134?
These dedup bugs are my main frustration - if a staff member does a rm * in a directory with dedup enabled you can take down the whole storage server - all with
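(An aside, not from the announcement: how one would check and switch off dedup on a b134-era pool; "tank" is a placeholder name, and turning the property off only affects new writes - existing deduped blocks keep their DDT entries.)
# where is dedup enabled, and how big has the dedup table grown?
zfs get -r dedup tank
zdb -DD tank
# stop deduping new writes
zfs set dedup=off tank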
2007 Sep 14
9
Possible ZFS Bug - Causes OpenSolaris Crash
I'd like to report the ZFS-related crash/bug described below. How do I go about reporting the crash and what additional information is needed?
I'm using my own very simple test app that creates numerous directories and files of randomly generated data. I have run the test app on two machines, both 64 bit.
OpenSolaris crashes a few minutes after starting my test app. The crash has occurred on
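(Not the poster's actual test app: a rough shell stand-in that creates many directories full of random-data files; the /tank/test path, counts and sizes are arbitrary.)
#!/bin/sh
# 100 directories x 100 files of 1 MB of random data each
d=0
while [ $d -lt 100 ]; do
    d=`expr $d + 1`
    mkdir -p /tank/test/dir$d
    f=0
    while [ $f -lt 100 ]; do
        f=`expr $f + 1`
        dd if=/dev/urandom of=/tank/test/dir$d/file$f bs=128k count=8 2>/dev/null
    done
done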
2010 Jan 12
0
dmu_zfetch_find - lock contention?
Hi,
I have a MySQL instance which, if I point more load towards it, suddenly goes to 100% in SYS as shown below. It can work fine for an hour, but eventually it jumps from 5-15% CPU utilization to 100% in SYS, as shown in the mpstat output below:
# prtdiag | head
System Configuration: SUN MICROSYSTEMS SUN FIRE X4170 SERVER
BIOS Configuration: American Megatrends Inc. 07060215
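(Not part of the original mail: a hedged sketch of how one would confirm the lock contention and, as an experiment, disable ZFS file-level prefetch, which is the code path dmu_zfetch_find belongs to; treat the tunable as a diagnostic test, not a fix.)
# sample kernel lock activity for 30 seconds while the box is pegged in SYS
lockstat sleep 30
# as an experiment, turn off ZFS file-level prefetch on the live kernel
echo 'zfs_prefetch_disable/W 1' | mdb -kw
# persistent form, if the experiment helps: add to /etc/system and reboot
#   set zfs:zfs_prefetch_disable = 1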
2007 Mar 21
4
HELP!! I can't mount my zpool!!
Hi all.
One of our servers had a panic and now can't mount the zpool anymore!
Here is what I get at boot:
Mar 21 11:09:17 SERVER142 panic[cpu1]/thread=ffffffff90878200:
Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x670000b800 <= 0x6700009000), file: ../../common/fs/zfs/space_map.c, line: 126
Mar 21 11:09:17 SERVER142
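(Not from this thread, and risky: assertion-failure panics in space_map.c are sometimes worked around by relaxing the failed assertion just long enough to import the pool and copy the data off. The /etc/system settings below are real tunables, but applying them here is an assumption; they should only be tried with a backup plan and, ideally, guidance from Sun support.)
* candidate /etc/system entries (then reboot and retry the mount/import):
set zfs:zfs_recover = 1
set aok = 1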
2006 Oct 05
0
Crash when doing rm -rf
Not a really good subject, I know, but that's kind of what happened.
I'm trying to build a backup-solution server: Windows users use OSCAR (which uses rsync) to sync their files to a folder, and when the sync is complete a snapshot is taken. It has worked before, but then I turned on the -R switch to rsync, and when I then removed the folder with rm -rf it crashed. I didn't save what
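(A sketch of the described cycle, not the poster's actual scripts: the server-side snapshot taken after each completed sync, and the kind of rm -rf that preceded the crash; tank/backup is a placeholder dataset name.)
# after a client's rsync finishes, snapshot the backup dataset
zfs snapshot tank/backup@`date +%Y%m%d-%H%M`
# removing an old tree like this is what triggered the crash
rm -rf /tank/backup/old-copy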
2009 Apr 21
0
opensolaris crash in vn_rele()
My newly upgraded OpenSolaris 2008.11 laptop crashed last weekend.
(The OS was installed from the 2008.11 live CD and then upgraded
using the package manager to snv_111.)
I was trying to copy a large Virtual PC image from my wife's iMac to the
laptop. On a whim I had decided to create a separate zvol in the
root pool to contain the image, figuring I could create a vbox
with Linux or
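(Not in the original mail: the kind of zvol creation being described; the 40 GB size and the dataset name are assumptions.)
# carve a zvol out of the root pool to hold the Virtual PC image
zfs create -V 40g rpool/vpcimage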
2007 Oct 10
6
server-reboot
Hi.
Just migrated to ZFS on OpenSolaris. I copied data to the server using
rsync and got this message:
Oct 10 17:24:04 zetta panic[cpu1]/thread=ffffff0007f1bc80:
Oct 10 17:24:04 zetta genunix: [ID 683410 kern.notice] BAD TRAP:
type=e (#pf Page fault) rp=ffffff0007f1b640 addr=fffffffecd873000
Oct 10 17:24:04 zetta unix: [ID 100000 kern.notice]
Oct 10 17:24:04 zetta unix: [ID 839527 kern.notice]
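(An addition, not from the thread: assuming savecore kept a dump of this panic under /var/crash/zetta, this is the usual way to pull the details back out of it; the dump number 0 is an assumption. ::status prints the panic string, ::stack the panicking thread's stack, and ::msgbuf the console messages leading up to the BAD TRAP.)
# cd /var/crash/zetta
# mdb unix.0 vmcore.0
> ::status
> ::stack
> ::msgbuf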
2008 Nov 13
5
BAD TRAP with Crossbow Beta October 31 2008
Hi.
I tried to send this to the mailing list, but it never showed up in the
archives, so I'm trying the forum instead...
I recently installed the Crossbow Beta October 31 2008 on my
SunFire T1000, and let me first say that I'm very pleased
with the functionality it provides.
What's not so pleasing is the fact that after installing this,
the computer now gets very
2007 Apr 23
3
ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2008 Dec 28
2
zfs mount hangs
Hi,
System: Netra 1405, 4x450 MHz, 4 GB RAM, 2x146 GB (root pool) and
2x146 GB (space pool). snv_98.
After a panic, the system hangs on boot, and a manual attempt to mount
(at least) one dataset in single-user mode hangs as well.
The Panic:
Dec 27 04:42:11 base panic[cpu0]/thread=300021c1a20:
Dec 27 04:42:11 base unix: [ID 521688 kern.notice] [AFT1] errID
0x00167f73.1c737868 UE Error(s)
Dec 27
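(Not from the original mail: one way to narrow down which dataset is the culprit is to skip the automatic zfs mount -a in single-user mode and mount the datasets one at a time; space/data is a placeholder name, the pool name "space" comes from the post.)
# list the datasets in the pool, then mount them individually
zfs list -H -o name -r space
zfs mount space/data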
2007 Apr 30
4
B62 AHCI and ZFS
Hardware: Supermicro X7DAE (AHCI BIOS), dual Intel Woodcrest processors, 6 x Western Digital Raptor SATA drives.
I have installed b62 running 64-bit successfully on a PATA drive. The BIOS is configured to access the SATA drives in native mode using the AHCI BIOS.
I have 6 SATA II drives accessed via the Solaris AHCI driver. I have created a ZFS file system across all 6 drives. This works fine until
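(Illustrative only; the post does not say how the pool was laid out, so the raidz choice, the pool name and the c1t0d0..c1t5d0 device names below are assumptions.)
# one possible pool spanning the six SATA drives behind the ahci driver
zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0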
2007 Apr 03
2
ZFS panics with dmu_buf_hold_array
Hi,
I have been wrestling with ZFS issues since yesterday, when one of my disks sort of died. After a lot of fighting with "zpool replace" I managed to get the new disk in and got the pool to resilver, but since then I have one error left that I can't clear:
pool: data
state: ONLINE
status: One or more devices has experienced an error resulting in data
corruption.
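(Not part of the original mail: the usual sequence for trying to shake a lingering error after a completed resilver; "data" is the pool name from the status output above.)
# clear the error counters, re-verify the pool, then see what is still flagged
zpool clear data
zpool scrub data
zpool status -v data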