similar to: FreeBSD 9.1-BETA1 amd64 fails to mount ZFS rootfs with error 2 when system has more than 3584MB of RAM

Displaying 20 results from an estimated 2000 matches similar to: "FreeBSD 9.1-BETA1 amd64 fails to mount ZFS rootfs with error 2 when system has more than 3584MB of RAM"

2013 Dec 09
1
10.0-BETA4 (upgraded from 9.2-RELEASE) zpool upgrade -> boot failure
Hi, Is anything known about ZFS under 10.0-BETA4 when FreeBSD was upgraded from 9.2-RELEASE? I have two servers with very different hardware (one uses software RAID and the other does not), and after a zpool upgrade there is no way to get them booting. Did I miss something when upgrading? I cannot get the error message at the moment. I reinstalled the RAID server under Linux and the other
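A common cause of exactly this symptom is that zpool upgrade moves the pool past what the installed boot blocks understand, so the loader can no longer read it; the usual remedy is to reinstall the boot code on every boot disk after the upgrade. A minimal sketch for a GPT-partitioned system (the disk name ada0 and partition index 1 are assumptions for illustration):

  gpart show ada0                                            # confirm which partition is freebsd-boot
  gpart bootcode -b /boot/pmbr -p /boot/gptzfsboot -i 1 ada0 # refresh the ZFS-aware boot blocks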
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9, and an old revision (2009_06 because it's still running xen). I sent it twice, because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
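For anyone wanting to reproduce the comparison, a sketch of the send/receive and accounting steps the poster appears to describe (pool, dataset and host names are invented for illustration):

  zfs snapshot tank/vol@xfer
  zfs send tank/vol@xfer | ssh hostb zfs recv pool9/vol    # receive into the ashift=9 pool
  zfs send tank/vol@xfer | ssh hostb zfs recv pool12/vol   # receive into the ashift=12 pool
  ssh hostb zfs list -o space pool9/vol pool12/vol         # compare consumption, incl. metadata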
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list, and I'd like someone to confirm-or-reject the discussed statement. Paraphrasing in my words and understanding: "Labels, including Uberblock rings, are fixed 256KB in size each, of which 128KB is the UB ring. Normally there is 1KB of data in one UB, which gives 128 TXGs to rollback to. When ashift=12 is
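The arithmetic behind the question, using the figures quoted above (a 128 KiB uberblock ring per label, and a minimum uberblock slot of 1 KiB that grows to 1 << ashift on large-sector pools; these numbers come from the quoted discussion, not from independent verification):

  ring=$((128 * 1024))                      # uberblock ring per label, in bytes
  for ashift in 9 12; do
    slot=$((1 << ashift))
    [ "$slot" -lt 1024 ] && slot=1024       # uberblocks are at least 1 KiB each
    echo "ashift=$ashift: $((ring / slot)) uberblocks to roll back to"
  done
  # prints 128 for ashift=9 and 32 for ashift=12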
2011 Oct 05
1
Fwd: Re: zvol space consumption vs ashift, metadata packing
Hello, Daniel, Apparently your data is represented by rather small files (thus many small data blocks), so proportion of metadata is relatively high, and your <4k blocks are now using at least 4k disk space. For data with small blocks (a 4k volume on an ashift=12 pool) I saw metadata use up most of my drive - becoming equal to data size. Just for the sake of completeness, I brought up a
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
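For anyone digging into a similar dump, a rough sketch of the usual mdb session (the dump file names follow the ones quoted above; the dcmds shown are standard mdb/ZFS ones, and this is an illustration rather than the original poster's transcript):

  $ mdb unix.0 vmcore.0
  > ::status        # panic string and dump summary
  > $c              # stack backtrace, as quoted above
  > ::spa -v        # pool and vdev state at the time of the panic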
2012 Feb 16
3
4k sector support in Solaris 11?
If I want to use a batch of new Seagate 3TB Barracudas with Solaris 11, will zpool let me create a new pool with ashift=12 out of the box or will I need to play around with a patched zpool binary (or the iSCSI loopback)? -- Dave Pooser Manager of Information Services Alford Media http://www.alfordmedia.com
2012 Jul 19
4
Thinkpad X61s cannot boot 9.1-BETA1
Hi, Did anyone else experience this? With 9.1-BETA1 the boot process freezes; among the last lines with a verbose boot are acpi_acad0: On Line acpi_acad0: acline initialization done, tried 1 times after this, dead. What is supposed to happen in the next stage? This laptop worked fine with 9-STABLE until at least February. //per
2010 Nov 23
14
ashift and vdevs
zdb -C shows an ashift value on each vdev in my pool, I was just wondering if it is vdev specific, or pool wide. Google didn't seem to know. I'm considering a mixed pool with some "advanced format" (4KB sector) drives, and some normal 512B sector drives, and was wondering if the ashift can be set per vdev, or only per pool. Theoretically, this would save me some size on
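For what it's worth, ashift is recorded per top-level vdev in the pool configuration rather than once per pool, which is why zdb shows it under each vdev entry; a quick way to see what a pool actually has (the pool name "tank" is just an example):

  zdb -C tank | grep ashift   # one line per top-level vdev in the cached config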
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi, my oi151 based home NAS is approaching a frightening "drive space" level. Right now the data volume is a 4*1TB Raid-Z1, 3 1/2" local disks individually connected to an 8 port LSI 6Gbit controller. So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks and be happy. This was my original approach. However I am totally unclear about the 512b vs 4Kb issue.
2005 Jan 14
1
debugging encrypted part of isakmp
Are there any tools to decode encrypted part of isakmp provided that identities of both peers are known to me and that I am able to observe the whole exchange ? -- Andriy Gapon
2013 Dec 17
2
Setting up a lustre zfs dual mgs/mdt over tcp - help requested
Hi all, Here is the situation: I have 2 nodes, MDS1 and MDS2 (10.0.0.22, 10.0.0.23), which I wish to use as a failover MGS and active/active MDT with ZFS. I have a JBOD shelf with 12 disks, seen by both nodes as DAS (the shelf has 2 SAS ports, connected to a SAS HBA on each node), and I am using Lustre 2.4 on CentOS 6.4 x64. I have created 3 zfs pools: 1. mgs: # zpool
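The zpool command above is cut off in the archive; purely as an illustrative sketch under the setup described (the pool name and device paths are invented, not the poster's elided command), a mirrored pool for the MGS target on two shared-shelf disks might look like:

  # cachefile=none keeps the pool out of the local cache so the failover
  # partner can import it cleanly
  zpool create -o cachefile=none mgs mirror /dev/disk/by-id/diskA /dev/disk/by-id/diskB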
2008 Jan 30
2
mouse problems [A4 Tech OP-3D]
After some poking into psm.c code I've got some results. First, for the archives, debug.psm.loglevel tunable is much more useful than a verbose boot for debugging PS/2 mouse issues. A good value is 2. Second, I fiddled with various probe methods to force them to "recognize" my mouse (by loosening their checks) and found out that the mouse works perfectly if it is treated as
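For the archives, the tunable mentioned above is typically set from the loader, e.g. in /boot/loader.conf (the value 2 is the one suggested above; remove it once done debugging):

  # /boot/loader.conf
  debug.psm.loglevel="2"   # verbose psm(4) probe/attach logging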
2011 Jul 29
12
booting from ashift=12 pool..
.. evidently doesn't work. GRUB reboots the machine moments after loading stage2, and doesn't recognise the fstype when examining the disk loaded from an alternate source. This is with SX-151. Here's hoping a future version (with grub2?) resolves this, as well as lets us boot from raidz. Just a note for the archives in case it helps someone else get back the afternoon
2008 Oct 19
9
My 500-gig ZFS is gone: insufficient replicas, corrupted data
Hi, I'm running FreeBSD 7.1-PRERELEASE with a 500-gig ZFS drive. Recently I've encountered a FreeBSD problem (PR kern/128083) and decided to update the motherboard BIOS. It looked like the update went right but after that I was shocked to see my ZFS destroyed! Rolling the BIOS back did not help. Now it looks like this: # zpool status pool: tank state: UNAVAIL status:
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment and now my raidz pool is corrupted. I have a raidz pool running on OpenSolaris b85. I wanted to try out FreeNAS 0.7 and tried to add my pool to FreeNAS. After adding the ZFS disk, vdev and pool, I decided to back out and went back to OpenSolaris. Now my raidz pool will not mount and I get the following errors. I hope an expert can help me recover from this error.
2013 Aug 21
1
Properties list for zfs in FreeBSD
Hi: Where can I find a list of properties (-o/-O property=value) for creating a zpool? I meant something like: # zpool create \ -o ashift=12 \ -O dedup=off -o autoexpand=off -O atime=off \ -O canmount=off \ -O compression=lz4 \ -O normalization=formD \ -O mountpoint=/jail \ tank \ mirror \ /dev/gptid/diskname0 \ /dev/gptid/diskname1 \
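As for where the property names come from: pool-level properties (-o) are listed in the zpool(8) man page and dataset-level ones (-O) in zfs(8), and both can also be dumped from a live system; a short sketch (the pool name "tank" is assumed):

  man zpool            # pool properties usable with -o
  man zfs              # dataset properties usable with -O
  zpool get all tank   # current pool properties and their values
  zfs get all tank     # current dataset properties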
2009 Jan 24
4
panic in callout_reset: bad link in callwheel
System: FreeBSD 7.1-STABLE i386 (revision 187025) Panic message: kernel trap 12 with interrupts disabled Fatal trap 12: page fault while in kernel mode fault virtual address = 0xd2006ad0 fault code = supervisor write, page not present instruction pointer = 0x20:0xc05623aa stack pointer = 0x28:0xdd4f6c34 frame pointer = 0x28:0xdd4f6c40 code segment
2013 Jul 17
3
Help with filing a [maybe] ZFS/mmap bug.
Hi All, I have what I think is a ZFS-related bug. Unfortunately my simplest test case is a bit cumbersome and I haven't definitively proven that the problem is ZFS-related. I'm hoping for some feedback on how to move forward. Quick background: I rip my CDs using grip and produce FLAC files. I tag the music using MusicBrainz Picard and transcode it to MP3s within Picard
2012 Jan 11
0
Clarifications wanted for ZFS spec
I'm reading the "ZFS On-disk Format" PDF (dated 2006 - are there newer releases?), and have some questions regarding whether it is outdated: 1) On page 16 it has the following phrase (which I think is in general invalid): The value stored in offset is the offset in terms of sectors (512 byte blocks). To find the physical block byte offset from the beginning of a slice,
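For reference, the conversion that document describes on that page: the DVA offset field is counted in 512-byte sectors regardless of ashift (exactly the claim the poster is questioning), and the physical byte address adds the 4 MB reserved at the front of the device for the L0/L1 labels and boot block. A small sketch with an invented offset value:

  dva_offset=123456                         # a DVA offset value, for illustration only
  echo $(( (dva_offset << 9) + 0x400000 ))  # byte address on the vdev, per the spec's formula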
2008 Dec 04
1
rc.firewall: default loopback rules are set up even for custom file
I've just realized that I see in releng/7 something that I did not see in releng/6 - even if I use a file with custom rules in firewall_type I still get default loopback rules installed. I think that this is not correct, I am using custom rules exactly because I want to control *everything* (e.g. all deny rules come with log logamount xxx). -- Andriy Gapon
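For context, the configuration being discussed is roughly this (the rules path is an example); with firewall_type pointing at a file, releng/7's rc.firewall reportedly still installs its default loopback rules before reading it, which is the behaviour being objected to:

  # /etc/rc.conf
  firewall_enable="YES"
  firewall_type="/etc/ipfw.rules"   # absolute path = file of custom ipfw rules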