similar to: zfs defragmentation via resilvering?

Displaying 20 results from an estimated 30000 matches similar to: "zfs defragmentation via resilvering?"

2010 Oct 17
10
RaidzN blocksize ... or blocksize in general ... and resilver
The default blocksize is 128K. If you are using mirrors, then each block on disk will be 128K whenever possible. But if you're using raidzN with a capacity of M disks (M disks useful capacity + N disks redundancy) then the block size on each individual disk will be 128K / M. Right? This is one of the reasons the raidzN resilver code is inefficient. Since you end up waiting for the
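
A worked example of that arithmetic, assuming a 128K recordsize and a hypothetical raidz2 vdev of six disks (M = 4 data + N = 2 parity):

    recordsize            = 128K
    data disks (M)        = 4
    chunk per data disk   = 128K / 4 = 32K
    parity                = 2 further disks x 32K each

So every 128K block has to be reassembled from small per-disk chunks, which is part of why resilver generates so much small, scattered I/O.
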
2010 Apr 10
41
Secure delete?
Hi all Is it possible to securely delete a file from a zfs dataset/zpool once it's been snapshotted, meaning "delete (and perhaps overwrite) all copies of this file"? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at karlsbakk.net http://blogg.karlsbakk.net/ -- In all pedagogy it is essential that the curriculum be presented intelligibly. That is an elementary imperative
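
The answer that usually comes back: because every snapshot keeps referencing the file's blocks, they cannot even be freed (never mind overwritten) until all referencing snapshots are destroyed. A sketch, with hypothetical dataset and snapshot names:

    rm /tank/data/secret.doc
    zfs list -t snapshot -r tank/data    # find every snapshot that may still reference the blocks
    zfs destroy tank/data@2010-03
    zfs destroy tank/data@2010-04

Even then, copy-on-write gives no guarantee the freed blocks are ever overwritten in place.
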
2010 Sep 09
37
resilver = defrag?
A) Resilver = Defrag. True/false? B) If I buy larger drives and resilver, does defrag happen? C) Does using zfs send / zfs receive mean it will defrag?
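
The answers this question usually gets: A) false - resilver reproduces the existing block layout rather than rewriting it, so B) is no as well; C) is the closest to yes, since sending into a fresh pool rewrites the data sequentially. A minimal sketch of that last option, with hypothetical pool and snapshot names:

    zfs snapshot -r tank@migrate
    zfs send -R tank@migrate | zfs receive -F newtank
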
2008 Jun 30
20
Some basic questions about getting the best performance for database usage
I'm new to OpenSolaris and very new to ZFS. In the past we have always used Linux for our database backends. So now we are looking for a new database server to give us a big performance boost, and also the possibility for scalability. Our current database consists mainly of a huge table containing about 230 million records and a few (relatively) smaller tables (something like 13 million
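
The standard first step for a database workload, before any hardware changes, is to match the dataset's recordsize to the database page size so reads don't pull in 128K for every small page. A sketch, assuming an 8K page size and a hypothetical dataset name:

    zfs create tank/db
    zfs set recordsize=8k tank/db
    zfs set atime=off tank/db    # avoid a metadata write per read

The recordsize must be set before the data files are created; existing files keep their old block size.
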
2008 Feb 15
38
Performance with Sun StorageTek 2540
Under Solaris 10 on a 4 core Sun Ultra 40 with 20GB RAM, I am setting up a Sun StorageTek 2540 with 12 300GB 15K RPM SAS drives and connected via load-shared 4Gbit FC links. This week I have tried many different configurations, using firmware managed RAID, ZFS managed RAID, and with the controller cache enabled or disabled. My objective is to obtain the best single-file write performance.
2008 Sep 10
7
Intel M-series SSD
Interesting flash technology overview and SSD review here: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403 and another review here: http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html Regards, -- Al Hopper Logical Approach Inc, Plano, TX al at logical-approach.com Voice: 972.379.2133 Timezone: US CDT OpenSolaris Governing Board (OGB) Member - Apr 2005
2011 Oct 04
6
zvol space consumption vs ashift, metadata packing
I sent a zvol from host a, to host b, twice. Host b has two pools, one ashift=9, one ashift=12. I sent the zvol to each of the pools on b. The original source pool is ashift=9, and an old revision (2009_06 because it's still running Xen). I sent it twice, because something strange happened on the first send, to the ashift=12 pool. "zfs list -o space" showed figures at
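
For context, ashift is fixed per top-level vdev at pool creation; on platforms that let you force it (an assumption - e.g., ZFS on Linux and later illumos builds accept -o ashift), the comparison described above looks roughly like:

    zpool create -o ashift=12 bigpool c1t0d0
    zfs send tank/vol@snap | zfs receive bigpool/vol
    zfs list -o space bigpool/vol    # compare USED/USEDDS against the ashift=9 copy

Small zvol blocks and metadata round up to 4K sectors on ashift=12, which is where the extra consumption typically comes from.
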
2008 Jun 22
6
ZFS-Performance: Raid-Z vs. Raid5/6 vs. mirrored
Hi list, as this matter pops up every now and then in posts on this list I just want to clarify that the real performance of RaidZ (in its current implementation) is NOT anything that follows from raidz-style data-efficient redundancy or the copy-on-write design used in ZFS. In an M-way mirrored setup of N disks you get the write performance of the worst disk and a read performance that is
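
Rough figures often used to make this concrete, assuming disks that each deliver about 100 random-read IOPS:

    8 disks as 4 x 2-way mirrors: ~400 random write IOPS, ~800 random read IOPS
    8 disks as 1 x raidz2 vdev:   ~100 random IOPS either way, since every disk
                                  participates in every block

These are back-of-the-envelope numbers, not measurements; sequential bandwidth behaves very differently.
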
2009 Mar 28
53
Can this be done?
I currently have a 7x1.5tb raidz1. I want to add "phase 2", which is another 7x1.5tb raidz1. Can I add the second phase to the first phase and basically have two RAID5s striped (in RAID terms)? Yes, I probably should upgrade the zpool format too. Currently running snv_104. Also should upgrade to 110. If that is possible, would anyone happen to have the simple command lines to
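
The command lines in question are short (device names here are hypothetical; note that zpool add is effectively irreversible, so check the layout with -n first):

    zpool upgrade tank
    zpool add -n tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0   # dry run
    zpool add tank raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0

ZFS then stripes new writes across both raidz1 vdevs, which is the rough equivalent of two striped RAID5s.
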
2008 Nov 29
75
Slow death-spiral with zfs gzip-9 compression
I am [trying to] perform a test prior to moving my data to Solaris and ZFS. Things are going very poorly. Please suggest what I might do to understand what is going on, report a meaningful bug report, fix it, whatever! Both to learn what the compression ratio could be, and to induce a heavy load to expose issues, I am running with compression=gzip-9. I have two machines, both identical 800MHz P3 with
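
For anyone reproducing the test, the setting in question is per dataset, and compressratio shows what gzip-9 is actually buying (dataset name hypothetical):

    zfs set compression=gzip-9 tank/test
    zfs get compression,compressratio tank/test

gzip-9 compression runs inside the kernel's write path, so on a slow CPU it can stall the whole transaction-group pipeline - a plausible match for the death-spiral described here.
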
2012 Jun 17
26
Recommendation for home NAS external JBOD
Hi, my oi151-based home NAS is approaching a frighteningly low level of free drive space. Right now the data volume is a 4*1TB RAID-Z1, 3 1/2" local disks individually connected to an 8-port LSI 6Gbit controller. So I can either exchange the disks one by one with autoexpand, use 2-4 TB disks and be happy. This was my original approach. However I am totally unclear about the 512B vs 4KB sector issue.
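
The disk-by-disk autoexpand route mentioned above looks like this in practice (hypothetical device names; let each resilver finish before pulling the next disk):

    zpool set autoexpand=on tank
    zpool replace tank c3t0d0 c3t4d0    # old 1TB disk, new larger disk
    zpool status tank                   # wait until resilver completes, then repeat

The pool only grows once the last (smallest) disk has been replaced.
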
2012 Jan 15
22
Does raidzN actually protect against bitrot? If yes - how?
"Does raidzN actually protect against bitrot?" That''s a kind of radical, possibly offensive, question formula that I have lately. Reading up on theory of RAID5, I grasped the idea of the write hole (where one of the sectors of the stripe, such as the parity data, doesn''t get written - leading to invalid data upon read). In general, I think the same applies to bitrot of
2010 May 31
2
mirror writes 10x slower than individual writes
I have an odd setup at present, because I'm testing while still building my machine. It's an Intel Atom D510 mobo running snv_134 2GB RAM with 2 SATA drives (AHCI): 1: Samsung 250GB old laptop drive 2: WD Green 1.5TB drive (idle3 turned off) Ultimately, it will be a Time Machine backup for my Mac laptop. So I have installed Netatalk 2.1.1 which is working great. Read
2009 Apr 27
23
Raidz vdev size... again.
Hi, I'm new to the list so please bear with me. This isn't an OpenSolaris-related problem but I hope it's still the right list to post to. I'm on the way to moving a backup server to ZFS-based storage, but I don't want to spend too many drives on parity (the 16 drives are attached to a 3ware RAID controller, so I could also just use RAID6 there). I
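
A worked comparison of the parity cost for those 16 drives, which is usually the deciding factor:

    1 x raidz2 of 16 drives: 2/16 = 12.5% parity, but one very wide vdev
    2 x raidz2 of 8 drives:  4/16 = 25% parity, twice the vdev count (and IOPS)
    hardware RAID6 on 3ware: 2/16 = 12.5% parity, but no ZFS self-healing on top
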
2010 Apr 27
42
Performance drop during scrub?
Hi all I have a test system with snv134 and 8x2TB drives in RAIDz2 and currently no ZIL or L2ARC device. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC device help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
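
On builds that expose the scrub throttle tunables (an assumption for snv134 - verify the symbol exists before writing), scrub can be deprioritized by raising its per-I/O delay with mdb:

    echo "zfs_scrub_delay/D" | mdb -k        # read the current value first
    echo "zfs_scrub_delay/W0t10" | mdb -kw   # raise the delay (decimal 10)

A separate log device probably won't help here, since scrub contends for read bandwidth, though an L2ARC may absorb some of the NFS read load.
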
2010 Apr 07
53
ZFS RaidZ recommendation
I have been searching this forum and just about every ZFS document I can find trying to find the answer to my questions. But I believe the answer I am looking for is not going to be documented and is probably best learned from experience. This is my first time playing around with OpenSolaris and ZFS. I am in the midst of replacing my home-based file server. This server hosts all of my media
2009 Jun 10
13
Apple Removes Nearly All Reference To ZFS
http://hardware.slashdot.org/story/09/06/09/2336223/Apple-Removes-Nearly-All-Reference-To-ZFS
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all, I am thinking about a new laptop. I see that there are a number of higher-performance models (incidentally, they are also marketed as "gamer" ones) which offer two SATA 2.5" bays and an SD flash card slot. Vendors usually position the two-HDD-bay part as either "get lots of capacity with RAID0 over two HDDs, or get some capacity and some performance by mixing one
2009 Oct 15
8
sub-optimal ZFS performance
Hello, ZFS is behaving strangely on an OSOL laptop; your thoughts are welcome. I am running OSOL on my laptop, currently b124, and I found that the performance of ZFS is not optimal in all situations. If I check how much space the package cache for pkg(1) uses, it takes a bit longer on this host than on a comparable machine to which I transferred all the data. user at host:/var/pkg$ time
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology ... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks. I want to