Displaying 20 results from an estimated 1200 matches similar to: "btrfs vs data deduplication"
2012 May 17
6
SSD format/mount parameters questions
For using SSDs:
Are there any format/mount parameters that should be set for using btrfs
on SSDs (other than the "ssd" mount option)?
General questions:
How long is the "delay" for delayed allocation?
Are file allocations aligned to 4kiB boundaries, or larger?
What byte value is used to pad unused space?
(Aside: For some, the erased state reads all 0x00, and for
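A minimal sketch of the commonly cited setup, with /dev/sdd and /mnt/ssd as
placeholders ("discard" enables online TRIM, whose cost varies by drive;
recent kernels also detect non-rotational devices and turn "ssd" on by
themselves):

# mkfs.btrfs /dev/sdd
# mount -o ssd,noatime,discard /dev/sdd /mnt/ssd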
2010 May 24
16
questions about zil
I recently got a new SSD (OCZ Vertex LE 50GB).
It seems to work really well as a ZIL, performance-wise. My question is: how
safe is it? I know it doesn't have a supercap, so let's say data loss
occurs... is it just data loss, or is it pool loss?
Also, does the fact that I have a UPS matter?
The numbers I'm seeing are really nice... these are some NFS tar times
before
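For reference, a separate log device is attached and detached roughly like
this (pool and device names are placeholders; log-device removal needs pool
version 19 or later):

# zpool add tank log c2t0d0      # dedicate the SSD as a separate log (slog)
# zpool status tank              # verify it shows up under "logs"
# zpool remove tank c2t0d0       # a slog can be removed again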
2011 Jun 29
0
SandForce SSD internal dedup
This article raises the concern that SSD controllers (in particular
SandForce) do internal dedup, and in particular that this could defeat
ditto-block style replication of critical metadata as done by
filesystems including ZFS.
http://storagemojo.com/2011/06/27/de-dup-too-much-of-good-thing/
Along with discussion of risk evaluation, it also suggests that
filesystems could vary each copy in some
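The ditto-block replication at issue is the kind requested like this
(dataset name is a placeholder); an internally deduplicating controller
could collapse the logical copies onto one physical block:

# zfs set copies=2 tank/critical   # ask ZFS to keep two copies of every block
# zfs get copies tank/critical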
2010 Oct 06
14
Bursty writes - why?
I have a 24 x 1TB system being used as an NFS file server. Seagate SAS disks connected via an LSI 9211-8i SAS controller, disk layout 2 x 11 disk RAIDZ2 + 2 spares. I am using 2 x DDR Drive X1s as the ZIL. When we write anything to it, the writes are always very bursty like this:
xpool        488K  20.0T      0      0      0      0
xpool        488K  20.0T      0      0      0      0
xpool
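ZFS batches writes into transaction groups that are flushed every few
seconds, which by itself produces a bursty pattern; a per-vdev view at a
1-second interval (pool name taken from the output above) helps show where
the bursts land:

# zpool iostat -v xpool 1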
2010 Aug 03
2
When is the L2ARC refreshed if on a separate drive?
I'm running a mirrored pair of 2 TB SATA drives as my data storage drives on my home workstation, a Core i7-based machine with 10 GB of RAM. I recently added a SandForce-based 60 GB SSD (OCZ Vertex 2, NOT the Pro version) as an L2ARC to the single mirrored pair. I'm running B134, with ZFS pool version 22, with dedup enabled. If I understand correctly, the dedup table should be in
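For context, the cache-device setup and a way to watch it fill (names are
placeholders). The L2ARC is not refreshed on a schedule: a feed thread
copies buffers into it as they are about to be evicted from the ARC, and on
builds of this vintage it starts cold after every reboot:

# zpool add tank cache c1t5d0   # attach the SSD as an L2ARC device
# zpool iostat -v tank 5        # the cache line's ALLOC grows as it warms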
2016 Oct 27
4
Re: Disk near failure
On Thu, 27 Oct 2016 11:25, Alessandro Baggi wrote:
> Il 24/10/2016 14:05, Leonard den Ottolander ha scritto:
>> On Mon, 2016-10-24 at 12:07 +0200, Alessandro Baggi wrote:
>> > === START OF READ SMART DATA SECTION ===
>> > SMART Error Log not supported
>>
>> I reckon there's a <snip> between those lines. The line right after the
>> first
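For context, the quoted section comes from a full smartctl dump along these
lines (device path is a placeholder):

# smartctl -H /dev/sda   # overall health self-assessment
# smartctl -a /dev/sda   # full dump, including the error-log section quoted above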
2010 Mar 02
3
BackupPC, per-dir hard link limit, Debian packaging
I realise that the hard link limit is in the queue to fix, and I read
the recent thread as well as the older (October, I think) thread.
I just wanted to note that BackupPC *does* in fact run into the hard
link limit, and it's due to the dpkg configuration scripts.
BackupPC hard-links files with the same content by scanning new
files and linking them together, whether or not they started
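The btrfs limit being hit is on hard links to one inode from within a
single directory (on unpatched kernels it varies with name length). A quick
probe, in a scratch directory on the filesystem in question:

$ touch f
$ i=0; while ln f "link$i" 2>/dev/null; do i=$((i+1)); done
$ echo "gave up after $i extra links"; stat -c %h f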
2011 Jan 25
3
How to speed up btrfs?
Hi,
I am using the 2.6.36.3 kernel with btrfs, 512MB of memory, and a very
slow disk, with no special btrfs mount options except noatime. I have
found it very slow: when I rm a 5GB movie, it takes 20 seconds.
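When timing operations like this, it helps to separate the unlink itself
from the deferred work that later hits the slow disk, e.g.:

# sync                  # start from a clean slate
# time rm movie.mkv     # the unlink and its tree updates
# time sync             # whatever was left dirty in memory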
--
Dense bamboo does not stop the flowing stream;
high mountains do not block the drifting clouds.
--
2012 May 22
3
SSD erase state and reducing SSD wear
I've got two recent examples of SSDs. Their pristine state from the
manufacturer shows:
Device Model: OCZ-VERTEX3
# hexdump -C /dev/sdd
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
*
1bf2976000
Device Model: OCZ VERTEX PLUS
(OCZ VERTEX 2E)
# hexdump -C /dev/sdd
00000000  ff ff ff ff ff ff ff ff  ff ff ff ff ff ff ff ff  |................|
*
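A way to check a drive's erased state yourself, assuming the whole device
can be sacrificed (blkdiscard DESTROYS ALL DATA on it; /dev/sdX is a
placeholder):

# blkdiscard /dev/sdX               # TRIM the entire device
# hexdump -C /dev/sdX | head -n 3   # 00s, ffs, or stale data, per controller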
2012 Feb 20
11
btrfs-convert processing time
Hi,
I'm trying to convert two ext4 filesystems to btrfs, but I'm surprised by
the time the conversion needs.
The first FS is on a 500GiB block device, and btrfs-convert has been
running for more than 48 hours:
root 1978 25.6 47.7 748308 732556 ? D Feb18 944:44
btrfs-convert /dev/vg-backup/backup
The second is on a 340GiB block device, and the processing time is similar
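For reference, the conversion keeps an image of the original filesystem in
a snapshot, so it can be rolled back until that snapshot is deleted:

# btrfs-convert /dev/vg-backup/backup      # ext4 -> btrfs in place
# mount /dev/vg-backup/backup /mnt
# ls /mnt/ext2_saved                       # image of the original filesystem
# umount /mnt
# btrfs-convert -r /dev/vg-backup/backup   # roll back to the original ext4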
2008 Aug 30
2
S.M.A.R.T
At my physics lab we have 30 servers with 1TB disk packs. I need to
monitor them for disk failures. I have been reading about SMART, and
it seems it can help. However, I am not sure what to look for when a
drive is about to fail. Any thoughts on this? Is anyone using this
method to predict disk failures?
TIA
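A minimal smartd setup for this (schedule and address are placeholders; -a
monitors all attributes, -s runs a short self-test nightly and a long one
weekly). Reallocated_Sector_Ct, Current_Pending_Sector, and
Offline_Uncorrectable are the attributes most often cited as failure
predictors:

# cat >> /etc/smartd.conf <<'EOF'
/dev/sda -a -o on -S on -s (S/../.././02|L/../../6/03) -m admin@example.com
EOF
# service smartd restart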
2015 Mar 27
3
FYI: SSH1 now disabled at compile-time by default
Hi,
On Fri, Mar 27, 2015 at 12:53:05PM +0100, Hubert Kario wrote:
> On Thursday 26 March 2015 11:19:28 Michael Felt wrote:
> > Experience: I have some hardware, on an internal network - that only
> > supports 40-bit ssl. I am forced to continue to use FF v17 because that was
> > the last browser to provide SSL40-bit support. My security is weakened
> > because I cannot
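For anyone who still needs it, protocol 1 support at this point has to be
compiled back in rather than toggled at runtime, roughly:

$ ./configure --with-ssh1
$ make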
2015 May 29
2
Weak DH primes and openssh
On Fri, 29 May 2015, Hubert Kario wrote:
> Not really, no.
>
> We can use this time an initial seed of "OpenSSH 1024 bit prime, attempt #1".
> Next time we generate the primes we can use the initial seed of "2017 OpenSSH
> 1024 bit prime, attempt #1", but we can use just as well a "2nd generation
> OpenSSH 1024 bit DH parameters, try number 1".
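For reference, site-local DH groups can be generated and screened with the
stock tools (size and filenames are placeholders); the screened output is
what goes into /etc/ssh/moduli:

$ ssh-keygen -G moduli-2048.candidates -b 2048          # generate candidates
$ ssh-keygen -T moduli-2048 -f moduli-2048.candidates   # sieve for safe primes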
2015 May 26
2
Weak DH primes and openssh
On Tue 2015-05-26 12:57:05 -0400, Hubert Kario wrote:
> creating composites that will pass even 100000 rounds of Miller-Rabin is
> relatively simple....
> (assuming the values for M-R tests are picked randomly)
Can you point me to the algorithms for doing that? This would suggest
that we really do want primality proofs (and a good way to verify them).
Do those algorithms hold for
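For a concrete toy case: 2047 = 23 * 89 is the smallest composite that
passes a base-2 strong-pseudoprime test (the first entry of A014233), and
randomized multi-round testing of the kind assumed above looks like:

$ openssl prime -checks 64 2047   # 64 Miller-Rabin rounds with random bases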
2006 Dec 18
3
ZFS on Mac - new sighting
There's been another sighting of ZFS on Mac. The latest developer
release of Leopard (Mac OS 10.5) has a dialogue box calling out the
"Zettabyte File System (ZFS)" as an option. The first place I
saw this published is a French website called Mac4Ever -
http://mac4ever.com/news/27485/zettabyte_sur_leopard/
I put up a Babelfish translation at my site, http://storagemojo.com/?
2015 May 28
2
Weak DH primes and openssh
On Thu, 28 May 2015, Hubert Kario wrote:
> > If this is the only attack you're trying to address, and you've
> > already limited yourself to safe primes, then NUMS properties don't
> > really add anything. The NUMS approach is there to try to avoid
> > the possibility of other, unknown cryptanalytic attacks against some
> > infrequent type of group,
2015 May 27
3
Weak DH primes and openssh
On Wed 2015-05-27 05:23:41 -0400, Hubert Kario wrote:
> On Tuesday 26 May 2015 15:10:01 Daniel Kahn Gillmor wrote:
>> On Tue 2015-05-26 14:02:07 -0400, Hubert Kario wrote:
>> > OEIS A014233
>>
>> Hm, this is a sequence, but not an algorithm. It looks to me like it is
>> not exhaustive, just a list of those integers which are known to have
>> the stated
2015 Apr 01
2
FYI: SSH1 now disabled at compile-time by default
I mentioned extensions because I had a few and saw them die.
The 40-bit SSL is the web interface for POWER5 (the so-called ASMI HTTPS
interface). These ports have no access to the "outside"; they are on a
separate LAN segment. My desktop, not acting as a router, can connect to
both the non-NATted and NATted segments.
Re: use of an stunnel - how does this turn 40-bit HTTPS into >40-bit HTTPS?
Sounds like a
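What the stunnel suggestion amounts to is a client-mode wrapper, sketched
below with placeholder addresses and assuming an stunnel/OpenSSL build that
still permits the export ciphers. The browser speaks plain HTTP to
localhost and stunnel speaks the legacy SSL to the ASMI port; it does not
strengthen the 40-bit hop itself, it only spares the browser from having to
support it:

# cat > /etc/stunnel/asmi.conf <<'EOF'
client = yes
[asmi]
accept  = 127.0.0.1:8443
connect = 10.0.0.50:443
EOF
# stunnel /etc/stunnel/asmi.conf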
2015 Mar 27
2
FYI: SSH1 now disabled at compile-time by default
Hi,
On Fri, Mar 27, 2015 at 02:36:50PM +0100, Hubert Kario wrote:
> > Same thing with needing sshv1 to access old network gear where even sshv1
> > was an achievement. "Throw away gear that does its job perfectly well,
> > but has no sshv2 for *management*" or "keep around an ssh v1 capable
> > client"?
>
> If you depend on hardware like this,
2016 Oct 28
0
Disk near failure
Il 27/10/2016 19:38, Yamaban ha scritto:
> On Thu, 27 Oct 2016 11:25, Alessandro Baggi wrote:
>> Il 24/10/2016 14:05, Leonard den Ottolander ha scritto:
>>> On Mon, 2016-10-24 at 12:07 +0200, Alessandro Baggi wrote:
>>> > === START OF READ SMART DATA SECTION ===
>>> > SMART Error Log not supported
>>>
>>> I reckon there's a