similar to: Secure delete?

Displaying 20 results from an estimated 20000 matches similar to: "Secure delete?"

2010 Oct 19
7
SSD partitioned into multiple L2ARC read cache
What would the performance impact be of splitting a 64 GB SSD into four 16 GB partitions versus dedicating the entire SSD to a single pool?
Scenario A:
  2 TB mirror w/ 16 GB read cache partition
  2 TB mirror w/ 16 GB read cache partition
  2 TB mirror w/ 16 GB read cache partition
  2 TB mirror w/ 16 GB read cache partition
versus Scenario B:
  2 TB mirror w/ 64 GB read cache SSD
  2 TB
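A minimal sketch of the two layouts, assuming four existing pools named pool1..pool4 and a hypothetical SSD c4t0d0 carved into four 16 GB slices (all names are made up for illustration):

  # Scenario A: one 16 GB slice as L2ARC per pool
  zpool add pool1 cache c4t0d0s0
  zpool add pool2 cache c4t0d0s1
  zpool add pool3 cache c4t0d0s2
  zpool add pool4 cache c4t0d0s3

  # Scenario B: the whole 64 GB SSD as L2ARC for a single pool
  zpool add pool1 cache c4t0d0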
2008 Sep 10
7
Intel M-series SSD
Interesting flash technology overview and SSD review here: http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403 and another review here: http://www.tomshardware.com/reviews/Intel-x25-m-SSD,2012.html Regards, -- Al Hopper Logical Approach Inc,Plano,TX al at logical-approach.com Voice: 972.379.2133 Timezone: US CDT OpenSolaris Governing Board (OGB) Member - Apr 2005
2010 Apr 27
42
Performance drop during scrub?
Hi all, I have a test system with snv134 and 8x2TB drives in RAIDZ2, currently with no ZIL or L2ARC. I noticed the I/O speed to NFS shares on the testpool drops to something hardly usable while scrubbing the pool. How can I address this? Will adding a ZIL or L2ARC help? Is it possible to tune down scrub's priority somehow? Best regards roy -- Roy Sigurd Karlsbakk (+47) 97542685 roy at
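For reference, on builds of that era the usual knob for throttling scrub I/O against application I/O is the zfs_scrub_delay tunable; whether it is present in snv134 and what value suits the workload are assumptions here, so treat this only as a sketch:

  # temporary change on the live kernel (requires root; tunable name assumed present)
  echo "zfs_scrub_delay/W0t20" | mdb -kw

  # or persistently via /etc/system, followed by a reboot
  set zfs:zfs_scrub_delay = 20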
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had been discussed in a while... What is the status of ZFS support for TRIM? For the pool in general... and specifically for the slog and/or cache?
2009 Nov 11
20
zfs eradication
Hi, I was discussing the common practice of disk eradication used by many firms for security. I was thinking it may be a useful feature for ZFS to have an option to eradicate data as it's removed, meaning that after the last reference/snapshot is gone and a block is freed, the eradication patterns are written back to the freed blocks. By any chance, has this been discussed or considered before?
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server: Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11-disk RAIDZ2 + 2 spares. I am using 2 x DDRdrive X1s as the ZIL. When we write anything to it, the writes are always very bursty, like this:
  xpool  488K  20.0T  0  0  0  0
  xpool  488K  20.0T  0  0  0  0
  xpool
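One way to watch that burst pattern per device rather than pool-wide (a sketch; the pool name xpool is taken from the excerpt above):

  # print pool and per-vdev operations/bandwidth once per second
  zpool iostat -v xpool 1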
2010 Jun 19
6
does sharing an SSD as slog and l2arc reduces its life span?
Hi, I don't know if it's already been discussed here, but while thinking about using the OCZ Vertex 2 Pro SSD (which according to its spec page has supercaps built in) as a shared slog and L2ARC device, it struck me that this might not be such a good idea. Because this SSD is MLC based, write cycles are an issue here, though I can't find any number in their spec. Why do I
2010 Feb 08
17
ZFS ZIL + L2ARC SSD Setup
I have some questions about the choice of SSDs to use for ZIL and L2ARC. I'm trying to build an OpenSolaris iSCSI SAN out of a whitebox system, which is intended to be used as a backup SAN during storage migration, so it's built on a tight budget. The system currently has 4GB RAM, a 3GHz Core2-Quad and 8x 500GB WD REII SATA HDDs attached to an Areca 8-port ARC-1220 controller
2010 Dec 23
31
SAS/short stroking vs. SSDs for ZIL
Hi, while following the discussion about which SSD to use for ZIL drives, I stumbled across this article, which discusses short-stroking to increase IOPS on SAS and SATA drives: http://www.tomshardware.com/reviews/short-stroking-hdd,2157.html Now I am wondering whether a mirror of such 15k SAS drives would be a good-enough fit for a ZIL on a zpool that is mainly used for file
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB). It seems to work really well as a ZIL, performance-wise. My question is, how safe is it? I know it doesn't have a supercap, so let's say data loss occurs... is it just data loss, or is it pool loss? Also, does the fact that I have a UPS matter? The numbers I'm seeing are really nice... these are some NFS tar times before
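For context, attaching an SSD as a separate log device and then inspecting the pool looks roughly like this (pool and device names hypothetical):

  # add the SSD as a dedicated log (slog) device
  zpool add tank log c3t0d0

  # confirm the log vdev shows up and the pool is healthy
  zpool status tank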
2011 Nov 08
6
Couple of questions about ZFS on laptops
Hello all, I am thinking about a new laptop. I see that there are a number of higher-performance models (incidentally, they are also marketed as "gamer" ones) which offer two SATA 2.5" bays and an SD flash card slot. Vendors usually position the two-HDD-bay part as either "get lots of capacity with RAID0 over two HDDs, or get some capacity and some performance by mixing one
2010 Jun 25
13
OCZ Vertex 2 Pro performance numbers
Now the test for the Vertex 2 Pro. This was fun. For more explanation please see the thread "Crucial RealSSD C300 and cache flush?" This time I made sure the device is attached via 3 Gbit SATA. This is also only a short test; I'll retest after some weeks of usage. Cache enabled, 32 buffers, 64k blocks: linear write, random data: 96 MB/s; linear read, random data: 206 MB/s; linear
2009 Dec 02
10
Separate Zil on HDD ?
Hi all, I have a home server based on snv_127 with 8 disks:
  2 x 500GB mirrored root pool
  6 x 1TB raidz2 data pool
This server performs a few functions:
  NFS: for several 'lab' ESX virtual machines
  NFS: MythTV storage (videos, music, recordings etc)
  Samba: home directories for all networked PCs
I back up the important data to an external USB HDD each day. I previously had
2009 Jun 15
33
compression at zfs filesystem creation
Hi, I just installed 2009.06 and found that compression isn't enabled by default when filesystems are created. Does it make sense to have an RFE open for this? (I'll open one tonight if need be.) We keep telling people to turn on compression. Are there any situations where turning on compression doesn't make sense, like rpool/swap? What about rpool/dump? Thanks, ~~sa
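For reference, enabling compression on a top-level dataset so that later child filesystems inherit it, and then checking which datasets (such as swap or dump volumes) override it, might look like this (pool name hypothetical):

  # descendants created afterwards inherit the setting
  zfs set compression=on rpool

  # show the effective value and its source for every dataset in the pool
  zfs get -r compression rpool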
2010 Apr 26
23
SAS vs SATA: Same size, same speed, why SAS?
I'm building another 24-bay rackmount storage server, and I'm considering what drives to put in the bays. My chassis is a Supermicro SC846A, so the backplane supports SAS or SATA; my controllers are LSI 3081E, again supporting SAS or SATA. Looking at drives, Seagate offers an enterprise (Constellation) 2TB 7200RPM drive in both SAS and SATA configurations; the SAS model offers
2010 Jun 07
20
Homegrown Hybrid Storage
Hi, I'm looking to build a virtualized web hosting server environment accessing files on a hybrid storage SAN. I was looking at using the Sun Fire X4540 with the following configuration:
  - 6 RAID-Z vdevs with one hot spare each (all 500GB 7200RPM SATA drives)
  - 2 Intel X-25 32GB SSDs as a mirrored ZIL
  - 4 Intel X-25 64GB SSDs as the L2ARC. -
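A sketch of how such a layout could be expressed in a single zpool create; device names are hypothetical, and only two of the six raidz vdevs and two of the spares are shown to keep it short:

  zpool create tank \
      raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 c1t5d0 \
      raidz c1t6d0 c1t7d0 c2t0d0 c2t1d0 c2t2d0 c2t3d0 \
      spare c2t4d0 c2t5d0 \
      log mirror c3t0d0 c3t1d0 \
      cache c4t0d0 c4t1d0 c4t2d0 c4t3d0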
2010 Jan 28
16
Large scale ZFS deployments out there (>200 disks)
While thinking about ZFS as the next-generation filesystem without limits, I am wondering if the real world is ready for this kind of incredible technology... I'm actually speaking of hardware :) ZFS can handle a lot of devices. Once the import bug (http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6761786) is fixed it should be able to handle a lot of disks. I want to
2009 Jul 24
6
When writing to SLOG at full speed all disk IO is blocked
Hello all... I'm seeing this behaviour in an old build (89), and I just want to hear from you whether there is some known bug about it. I'm aware of the "picket fencing" problem, and that ZFS is not choosing correctly whether a write to the slog is better or not (considering whether we get better throughput from the disks). But I did not find anything about 100% slog activity (~115MB/s) blocks
2009 Oct 10
11
SSD over 10gbe not any faster than 10K SAS over GigE
GigE wasn't giving me the performance I had hoped for, so I sprang for some 10GbE cards. So what am I doing wrong? My setup is a Dell 2950 without a RAID controller, just a SAS6 card. The setup is as such:
  mirror rpool (boot) on SAS 10K
  raidz SSD, 467 GB, on 3 Samsung 256 MLC SSDs (220MB/s each)
To create the raidz I did a simple zpool create SSD raidz c1xxxxx c1xxxxxx c1xxxxx. I have
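One way to separate disk throughput from network throughput is to compare a local write with the same write done over NFS (a sketch; mount points and sizes are made up, and the data from /dev/zero is only meaningful with compression off on the target dataset):

  # local write straight to the raidz pool, bypassing the network
  dd if=/dev/zero of=/SSD/ddtest bs=1024k count=4096

  # the same write from an NFS client against the exported filesystem
  dd if=/dev/zero of=/mnt/SSD/ddtest bs=1024k count=4096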
2010 Jan 19
5
ZFS/NFS/LDOM performance issues
[Cross-posting to ldoms-discuss] We are occasionally seeing massive times-to-completion for I/O requests on ZFS file systems on a Sun T5220 attached to a Sun StorageTek 2540 and a Sun J4200, using an SSD as a ZIL device. Primary access to this system is via NFS, and with NFS COMMITs blocking until the request has been sent to disk, performance has been deplorable. The NFS server is a