similar to: very slow boot: stuck at mounting zfs filesystems

Displaying 20 results from an estimated 2000 matches similar to: "very slow boot: stuck at mounting zfs filesystems"

2011 Jul 12
1
invalid SID in passdb on stand-alone file server with ldapsam
Hello! I get a log message I can't explain. When I log in to the server it says: [2011/07/12 14:20:41.784580, 0] passdb/passdb.c:627(lookup_global_sam_name) User frvdamme with invalid SID S-1-5-21-2863620551-4077714424-203869783-5020 in passdb It's a standalone file server, no domain, and the password backend is (open)ldap. Samba is version 3.5.6 on Debian 6.0. Using the server
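A first diagnostic step for a mismatch like this, a hedged sketch rather than advice from the thread, is to compare the SID stored in the passdb entry against the SID the server actually uses; both are standard Samba tools, and the username comes from the log line above:

    # SID this standalone server considers its own
    net getlocalsid
    # Verbose passdb entry for the user, including the stored SID
    pdbedit -Lv frvdamme

If the SID prefix shown by pdbedit does not match net getlocalsid, the ldapsam entry was likely created under a different local SID.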
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi, I created a zpool with a 64k recordsize and enabled dedup on it: zpool create -O recordsize=64k TestPool device1 zfs set dedup=on TestPool I copied files onto this pool over NFS from a Windows client. Here is the output of zpool list: Prompt:~# zpool list NAME SIZE ALLOC FREE CAP DEDUP HEALTH ALTROOT TestPool 696G 19.1G 677G 2% 1.13x ONLINE - When I ran a
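To see where the reported savings come from, a hedged sketch (standard commands, pool name taken from the post): zpool reports the pool-wide ratio, and zdb can print the dedup table (DDT) histogram of unique versus duplicated blocks:

    # Pool-wide ratio as zpool computes it
    zpool list -o name,size,alloc,free,dedupratio TestPool
    # DDT histogram: how many blocks are referenced once, twice, etc.
    zdb -DD TestPool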
2011 Jan 28
8
ZFS Dedup question
I created a zfs pool with dedup with the following settings: zpool create data c8t1d0 zfs create data/shared zfs set dedup=on data/shared What I was wondering about is that ZFS seems to dedup only at the file level and not the block level. When I make multiple copies of a file to the store I see an increase in the dedup ratio, but when I copy similar files the ratio stays at 1.00x.
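For what it's worth, ZFS dedup is block-level, not file-level: two "similar" files only dedup where whole records are byte-identical and identically aligned. A small illustration of why the ratio can stay at 1.00x (hypothetical file names; pool and dataset from the post):

    # Byte-identical copy: every record matches, so the ratio climbs
    dd if=/dev/urandom of=/data/shared/a bs=1024k count=1
    cp /data/shared/a /data/shared/b
    # Same data shifted by one byte: no record aligns, so nothing dedups
    (printf 'x'; cat /data/shared/a) > /data/shared/c
    zpool list -o name,dedupratio data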
2007 Sep 26
2
smbldap-useradd problem
Dear list, Arghl! (I'm sure you know the feeling.) I'm still working through Samba by Example and trying to add users to my LDAP tree. $ smbldap-useradd -m -a ldaptest2 Can't call method "get_value" on an undefined value at /usr/sbin/smbldap-useradd line 197 The documentation of the smbldap scripts mentions this sort of error (albeit with a different line number). Two possible
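That Perl error generally means an LDAP search inside the script returned no entry. One common culprit, an assumption here rather than something confirmed in the thread, is a missing sambaDomainName object or a wrong suffix in smbldap.conf; both are easy to rule out with standard OpenLDAP tools (substitute your own base DN):

    # Does the domain entry the script looks up actually exist?
    ldapsearch -x -b "dc=example,dc=com" "(objectClass=sambaDomain)"
    # Is the users container where smbldap.conf claims it is?
    ldapsearch -x -b "ou=Users,dc=example,dc=com" -s base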
2012 Jul 30
10
encfs on top of zfs
Dear ZFS users, I want to switch to ZFS but still want to encrypt my data. Native encryption for ZFS was added in ZFS pool version 30 (http://en.wikipedia.org/wiki/ZFS#Release_history), but I'm using ZFS on FreeBSD with version 28. My question is: how would encfs (FUSE encryption) affect ZFS-specific features like data integrity and deduplication? Regards
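For context, the layering being asked about looks roughly like this (hypothetical paths and pool name). Because encfs encrypts above ZFS, ZFS checksums still protect the ciphertext's on-disk integrity, but identical plaintext no longer produces identical blocks, so deduplication is largely defeated:

    # The dataset stores only ciphertext
    zfs create tank/encfs-backing
    # The FUSE mount presents the decrypted view
    # (encfs offers to create its config on first run)
    encfs /tank/encfs-backing /home/user/clear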
2009 Dec 27
7
[osol-help] zfs destroy stalls, need to hard reboot
On Sun, Dec 27, 2009 at 12:55 AM, Stephan Budach <stephan.budach at jvm.de> wrote: > Brent, > > I had known about that bug for a couple of weeks, but it was filed against v111 and we're at v130. I have also searched the ZFS part of this forum and really couldn't find much about this issue. > > The other issue I noticed is that, as opposed to the
2011 Mar 08
3
[LLVMdev] backend question
Hi All, I am writing a backend for an architecture that has only 16-bit word addressing (No byte addresses ever. All data are always 16-bit). How can I specify this in the backend? As an example, consider the following instruction: %arrayidx = getelementptr [129 x i16]* @flags, i16 0, i16 %i.043 When I generate assembler code, this now results in %i.043 being multiplied by 2 in the address
2010 Jan 27
13
zfs destroy hangs machine if snapshot exists - workaround found
Hi, I was suffering for weeks from the following problem: a zfs dataset contained an automatic snapshot (monthly) that used 2.8 TB of data. The dataset was deprecated, so I chose to destroy it after I had deleted some files; eventually it was completely blank besides the snapshot that still locked 2.8 TB on the pool. 'zfs destroy -r pool/dataset' hung the machine within seconds
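A sketch of the era's usual advice, which is not necessarily the workaround the poster found: destroy the space-holding snapshot explicitly first, so the expensive free happens in a step you can watch, then remove the empty dataset (snapshot name hypothetical):

    # Free the 2.8 TB held by the snapshot first
    zfs destroy pool/dataset@zfs-auto-snap_monthly-2009-12-01
    # Then the now-empty dataset goes quickly
    zfs destroy pool/dataset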
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for the ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, a 3GHz Core2 Quad and 8x 500GB WD RE2 SATA HDDs attached to an Areca 8-port ARC-1220 controller
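For reference, attaching SSDs in those two roles uses the standard syntax below (device names hypothetical). Mirroring the log device is the usual recommendation, since losing an unmirrored ZIL on older pool versions could cost the pool; cache devices hold disposable data and are fine unmirrored:

    # Mirrored SLOG for the ZIL
    zpool add tank log mirror c3t0d0 c3t1d0
    # Single L2ARC device; its contents can be lost safely
    zpool add tank cache c3t2d0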
2007 Oct 12
2
default kerberos realm??
Hello list, I am trying to join a win2k domain with Samba, with security = ads. My member server is a Debian Etch box. I get the following error when trying to join the domain: #net ads join -U administrator administrator's password: [2007/10/12 12:04:19, 0] libsmb/cliconnect.c:cli_session_setup_spnego(785)
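The usual fix for a default-realm complaint from net ads join is to set the realm explicitly in /etc/krb5.conf, a minimal sketch assuming the AD domain is EXAMPLE.COM (substitute the real realm, in upper case):

    [libdefaults]
        default_realm = EXAMPLE.COM
        dns_lookup_kdc = true

The same realm should also appear as realm = EXAMPLE.COM in smb.conf alongside security = ads.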
2011 Mar 08
0
[LLVMdev] backend question
On Tue, Mar 8, 2011 at 5:14 AM, Jacques Van Damme <Jacques.VanDamme at synopsys.com> wrote: > I am writing a backend for an architecture that has only 16-bit word > addressing (No byte addresses ever.  All data are always 16-bit). > > How can I specify this in the backend? In short, you can't. Word-addressable memory is not currently supported in LLVM (or Clang, for that
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all, I am not sure my original mail got through to the list (I haven't received it back), so I attach it below. Anyhow, now I have a saved kernel crash dump of the system panicking when it tries to - I believe - deferred-release the corrupted deduped blocks which are no longer referenced by the userdata/blockpointer tree. As I previously wrote in my thread on unfixable
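A common salvage step for a pool that panics the machine during import, offered as an assumption rather than anything from the thread: import it read-only (on builds recent enough to support the option), so the deferred frees that reach zio_ddt_free() are never replayed, then copy the data off:

    # Read-only import skips processing of the deferred-free list
    zpool import -o readonly=on -f poolname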
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All, I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
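Two things that commonly pin a snapshot and are cheap to rule out (standard commands, hypothetical names), though a hard hang rather than a "dataset is busy" error suggests something deeper:

    # Any user holds placed on the snapshot?
    zfs holds pool/fs@stuck-snap
    # Any clones originating from it?
    zfs list -t filesystem -o name,origin -r pool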
2011 Feb 12
1
existing performance data for on-disk dedup?
Hello. I am looking to see if performance data exists for on-disk dedup. I am currently in the process of setting up some tests based on input from Roch, but before I get started, thought I'd ask here. Thanks for the help, Janice
2010 Mar 02
2
dedup source code
Hello ZFS experts: I would like to study the ZFS de-duplication feature. Can someone please let me know which directories/files I should be looking at? Thanks in advance.
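As a pointer, going from memory of the onnv source layout rather than from the thread itself, the dedup table logic lives in the ZFS kernel directory, so a name search there is a reasonable starting point:

    # ddt.c and ddt_zap.c hold the dedup table (DDT) implementation
    find usr/src/uts/common/fs/zfs -name 'ddt*'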
2010 Jul 02
14
NexentaStor 3.0.3 vs OpenSolaris - Patches more up to date?
I see in NexentaStor's announcement of Community Edition 3.0.3 they mention some backported patches in this release. Aside from their management features / UI, what is the core OS difference if we move to Nexenta from OpenSolaris b134? These dedup bugs are my main frustration - if a staff member does an rm * in a directory with dedup you can take down the whole storage server - all with
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all, I have a 5-drive RAIDZ volume with data that I'd like to recover. The long story runs roughly: 1) The volume was running fine under FreeBSD on motherboard SATA controllers. 2) Two drives were moved to an HP P411 SAS/SATA controller. 3) I *think* the HP controller wrote some volume information to the end of each disk (hence no more ZFS labels 2,3). 4) In its "auto
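ZFS writes four copies of its label per device, two at the front and two at the end, so the theory in step 3 can be tested directly (device path hypothetical):

    # Prints all four labels; missing labels 2 and 3 would support
    # the end-of-disk overwrite theory
    zdb -l /dev/da2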
2011 Aug 10
9
zfs destroy snapshot takes hours
Hi, I am facing an issue with zfs destroy: it takes almost 3 hours to delete a snapshot of size 150G. Could you please help me resolve this issue - why does zfs destroy take this much time? Taking a snapshot is done within a few seconds. I have tried removing an older snapshot, but the problem is the same. =========================== I am using: Release : OpenSolaris
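While a long destroy runs, it is at least possible to confirm the pool is making progress rather than hanging; a simple observation sketch with a hypothetical pool name:

    # Watch the pool issue I/O while the destroy proceeds
    zpool iostat -v tank 5
    # In another shell, check whether used space is actually shrinking
    zfs list -o name,used -r tank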
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10GbE iSCSI. I am trying to figure out the best way to provide both performance and resiliency given that the Equallogic provides the redundancy. Since I am hoping to provide a 2TB
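One layout that comes up for setups where the array supplies the redundancy, sketched here as an option rather than a recommendation from the thread: a plain stripe of iSCSI LUNs, optionally with the copies property so ZFS can self-heal bit-level damage it detects via checksums (device names hypothetical):

    # Stripe across Equallogic LUNs; the array handles device redundancy
    zpool create tank c5t0d0 c5t1d0
    # Optional: store two copies of each newly written block
    zfs set copies=2 tank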
2012 Feb 26
3
zfs diff performance
I had high hopes of significant performance gains using zfs diff in Solaris 11 compared to my home-brew stat-based version in Solaris 10. However, the results I have seen so far have been disappointing. Testing on a reasonably sized filesystem (4TB), a diff that listed 41k changes took 77 minutes. I haven't tried my old tool, but I would expect the same diff to take a couple of
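For anyone reproducing the measurement, the command under test is the standard snapshot-to-snapshot form, timed the usual way (snapshot names hypothetical):

    # Enumerate changes between two snapshots of the same filesystem
    time zfs diff tank/fs@snap1 tank/fs@snap2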