similar to: RFE for two-level ZFS

Displaying 20 results from an estimated 40000 matches similar to: "RFE for two-level ZFS"

2011 Feb 01
1
zpool-poolname has 99 threads
After an upgrade of a busy server to Oracle Solaris 10 9/10, I noticed a process called zpool-poolname that has 99 threads. This seems to be a limit, as it never goes above that. It is lower on workstations. The `zpool' man page says only: Processes Each imported pool has an associated process, named zpool-poolname. The threads in this process are the pool's
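(A quick way to check this yourself, assuming a pool named tank and a POSIX-style shell; the names here are illustrative, not from the original post:)

    # ps -o nlwp= -p $(pgrep -f zpool-tank)

nlwp reports the number of LWPs, i.e. threads, in the process.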
2011 Jul 10
3
How to create a FAT filesystem on a zvol?
The `lofiadm' man page describes how to export a file as a block device and then use `mkfs -F pcfs' to create a FAT filesystem on it. Can't I do the same thing by first creating a zvol and then creating a FAT filesystem on it? Nothing I've tried seems to work. Isn't the zvol just another block device? -- -Gary Mills- -Unix Group-
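(For reference, the lofiadm route the man page describes looks roughly like this; sizes and paths are illustrative:)

    # mkfile 100m /export/fatfile
    # lofiadm -a /export/fatfile
    /dev/lofi/1
    # mkfs -F pcfs -o nofdisk,size=204800 /dev/rlofi/1

and the direct-zvol attempt under discussion would be something like:

    # zfs create -V 100m tank/fatvol
    # mkfs -F pcfs -o nofdisk,size=204800 /dev/zvol/rdsk/tank/fatvol

where size is given in 512-byte sectors (100 MB = 204800 sectors).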
2009 Apr 26
9
Peculiarities of COW over COW?
We run our IMAP spool on ZFS that's derived from LUNs on a Netapp filer. There's a great deal of churn in e-mail folders, with messages appearing and being deleted frequently. I know that ZFS uses copy-on-write, so that blocks in use are never overwritten, and that deleted blocks are added to a free list. This behavior would spread the free list all over the zpool. As well,
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?" That''s a kind of radical, possibly offensive, question formula that I have lately. Reading up on theory of RAID5, I grasped the idea of the write hole (where one of the sectors of the stripe, such as the parity data, doesn''t get written - leading to invalid data upon read). In general, I think the same applies to bitrot of
2007 Nov 06
4
Checksum Algorithm
Hi, We have seen a huge performance drop in 1.6.3, due to the checksum being enabled by default. I looked at the algorithm being used, and it is actually a CRC32, which is a very strong algorithm for detecting all sorts of problems, such as single-bit errors, swapped bytes, and missing bytes. I've been experimenting with using a simple XOR algorithm. I've been able to recover
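(A concrete illustration of the trade-off being made here: a plain XOR checksum is order-insensitive, e.g. the two-byte sequences 01 02 and 02 01 both XOR to 03, so swapped bytes go undetected, whereas CRC32 catches that case. That detection strength is what is being traded away for speed.)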
2009 Aug 25
41
snv_110 -> snv_121 produces checksum errors on Raid-Z pool
I have a 5-500GB disk Raid-Z pool that has been producing checksum errors right after upgrading SXCE to build 121. They seem to be randomly occurring on all 5 disks, so it doesn't look like a disk failure situation. Repeatedly running a scrub on the pool repairs anywhere between 20 and a few hundred checksum errors each time. Since I hadn't physically touched the machine, it seems a
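(The scrub-and-check cycle referred to is the standard one; the pool name here is illustrative:)

    # zpool scrub tank
    # zpool status -v tank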
2010 Aug 21
8
ZFS with Equallogic storage
I'm planning on setting up an NFS server for our ESXi hosts and plan on using a virtualized Solaris or Nexenta host to serve ZFS over NFS. The storage I have available is provided by Equallogic boxes over 10GbE iSCSI. I am trying to figure out the best way to provide both performance and resiliency given that the Equallogic provides the redundancy. Since I am hoping to provide a 2TB
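(One commonly suggested compromise when the back end already provides redundancy, sketched here with illustrative names, is to keep the pool non-redundant but have ZFS keep extra copies of data so it can self-heal checksum errors:)

    # zpool create tank c2t0d0
    # zfs set copies=2 tank

Note that copies=2 doubles space consumption and only applies to data written after the property is set.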
2008 Aug 10
15
corrupt zfs stream? checksum mismatch
Hi Folks, I'm in the very unsettling position of fearing that I've lost all of my data via a zfs send/receive operation, despite ZFS's legendary integrity. The error that I'm getting on restore is: receiving full stream of faith/home@09-08-08 into Z/faith/home@09-08-08 cannot receive: invalid stream (checksum mismatch) Background: I was running snv_91,
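(A defensive habit worth noting: on builds where zfs receive supports -n, a dry run parses the stream without committing anything, which may catch a corrupt stream before the source copy is destroyed. A sketch using the dataset names from the error above, with an illustrative stream path:)

    # zfs receive -vn Z/faith/home@09-08-08 < /backup/home.zstream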
2013 Dec 09
5
Btrfs questions
I am looking at using btrfs for a new project and I have a few questions:     * I have heard that, as it currently stands, Btrfs has some issues when used as a Lustre file system; is he aware of the issues, and are there any plans to address these and integrate Btrfs into Lustre?     * Any plans to support native clustering on Btrfs?     * On ZFS the ZIL is a separate device; any plans to implement a
2010 Jan 12
11
How do separate ZFS filesystems affect performance?
I'm working with a Cyrus IMAP server running on a T2000 box under Solaris 10 10/09 with current patches. Mailboxes reside on six ZFS filesystems, each containing about 200 gigabytes of data. These are part of a single zpool built on four iSCSI devices from our Netapp filer. One of these ZFS filesystems contains a number of global and per-user databases in addition to one sixth of the
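(For measuring this, per-mountpoint fsstat(1M) is one way to compare activity across the filesystems; the paths are illustrative:)

    # fsstat /var/spool/imap/p1 /var/spool/imap/p2 5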
2009 Apr 12
7
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with about 1 TB of mailboxes on ZFS filesystems. Recently, when under load, we've had incidents where IMAP operations became very slow. The general symptoms are that the number of imapd, pop3d, and lmtpd processes increases, the CPU load average increases, but the ZFS I/O bandwidth decreases. At the same time, ZFS
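(When reproducing symptoms like these, a usual first step is to correlate per-thread CPU state with pool I/O; the pool name is illustrative:)

    # prstat -mL 5
    # zpool iostat tank 5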
2008 Jan 15
1
ZFS-DMU benchmark on Linux
Hi all, Attached to this mail you will find some benchmarks we made on Linux with pios over the ZFS DMU (ZFS_benchs_Linux.pdf). There are some interesting things about ZFS tuning and some ideas for breaking a bottleneck we identified in the DMU. Thomas LEIBOVICI CEA/DAM - Ile de France
2009 Apr 25
4
What is the 32 GB 2.5-Inch SATA Solid State Drive?
Does anyone know about this device? SESX3Y11Z 32 GB 2.5-Inch SATA Solid State Drive with Marlin Bracket for Sun SPARC Enterprise T5120, T5220, T5140 and T5240 Servers, RoHS-6 Compliant This is from Sun's catalog for the T5120 server. Would this work well as a separate ZIL device for ZFS? Is there any way I could use this in a T2000 server? The brackets appear to be
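(For context, attaching such an SSD as a separate intent log is a single command once the OS sees the device; the names are illustrative, and the pool must be at version 7 or later:)

    # zpool add tank log c1t1d0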
2010 Jan 11
3
Does ZFS use large memory pages?
Last April we put this in /etc/system on a T2000 server with large ZFS filesystems: set pg_contig_disable=1 This was while we were attempting to solve a couple of ZFS problems that were eventually fixed with an IDR. Since then, we've removed the IDR and brought the system up to Solaris 10 10/09 with current patches. It's stable now, but seems slower. This line was a
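(To confirm whether the tunable is actually in effect on the running kernel, it can be read with mdb; this assumes root privileges:)

    # echo 'pg_contig_disable/D' | mdb -k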
2009 Feb 04
26
ZFS snapshot splitting & joining
Hello everyone, I am trying to take ZFS snapshots (i.e. zfs send) and burn them to DVDs for offsite storage. In many cases, the snapshots greatly exceed the 8GB I can stuff onto a single DVD-DL. In order to make this work, I have used the "split" utility to break the images into smaller, fixed-size chunks that will fit onto a DVD. For example: #split -b8100m
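(The full round trip would look roughly like this; dataset and paths are illustrative:)

    # zfs send tank/fs@snap | split -b 8100m - /backup/fs-snap.
    # cat /backup/fs-snap.* | zfs receive tank/fs_restored

The shell expands the chunks in lexical order, which matches the aa, ab, ... suffixes that split generates, so the pieces reassemble correctly.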
2008 Jul 17
4
RFE: -t flag for 'zfs destroy'
I would like to request an additional flag for the command line zfs tools. Specifically, I'd like to have a -t flag for "zfs destroy", as shown below. Suppose I have a pool "home" with a child filesystem "will" and a snapshot "home/will@yesterday". Then I run the following commands: # zfs destroy -t volume home/will@yesterday zfs: not
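(Until such a flag exists, the same guard can be scripted around zfs get; this is a sketch, not part of the proposed interface:)

    #!/bin/sh
    # usage: guarded-destroy <expected-type> <dataset>
    # refuses to destroy unless the dataset type matches
    type=`zfs get -H -o value type "$2"`
    if [ "$type" = "$1" ]; then
        zfs destroy "$2"
    else
        echo "not destroying $2: type is $type, not $1" >&2
        exit 1
    fi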
2011 May 21
7
[cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc.
----- Forwarded message from Zooko O'Whielacronx <zooko at zooko.com> ----- From: Zooko O'Whielacronx <zooko at zooko.com> Date: Sat, 21 May 2011 12:50:19 -0600 To: Crypto discussion list <cryptography at randombit.net> Subject: Re: [cryptography] rolling hashes, EDC/ECC vs MAC/MIC, etc. Reply-To: Crypto discussion list <cryptography at randombit.net>
2007 Dec 15
4
Is round-robin I/O correct for ZFS?
I'm testing an iSCSI multipath configuration on a T2000 with two disk devices provided by a Netapp filer. Both the T2000 and the Netapp have two ethernet interfaces for iSCSI, going to separate switches on separate private networks. The scsi_vhci devices look like this in `format': 1. c4t60A98000433469764E4A413571444B63d0 <NETAPP-LUN-0.2-50.00GB>
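(For reference, the policy is set via the load-balance property in scsi_vhci.conf, /kernel/drv/scsi_vhci.conf on Solaris 10, and round-robin is the default:)

    load-balance="round-robin";

A reboot or reconfigure is needed for a change to take effect.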
2012 Oct 27
7
How does btrfs behave on checksum mismatch?
I came across the tidbit that ZFS has a contract guarantee that the data read back will either be correct (the checksum computed over the data read from the disk matches the checksum stored on disk), or you get an I/O error. Obviously, this greatly reduces the probability that the data is invalid. (Particularly when taken in combination with the disk firmware's own ECC and checksumming.)
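(On the btrfs side, the analogous end-to-end check is a scrub, which walks the data and reports the checksum mismatches it finds; this assumes btrfs-progs with scrub support and an illustrative mountpoint:)

    # btrfs scrub start -B /mnt
    # btrfs scrub status /mnt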
2006 Oct 13
24
Self-tuning recordsize
Would it be worthwhile to implement heuristics to auto-tune 'recordsize', or would that not be worth the effort? -- Regards, Jeremy
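(For reference, recordsize is already tunable per dataset, so the manual version of such tuning is a one-liner; the value should match the application's I/O size, and the names here are illustrative:)

    # zfs set recordsize=8k tank/db

The property only affects files written after it is set.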