/Nikos
On Jun 15, 2010, at 4:04 PM, zfs-discuss-request at opensolaris.org wrote:
> Send zfs-discuss mailing list submissions to
> zfs-discuss at opensolaris.org
>
> To subscribe or unsubscribe via the World Wide Web, visit
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
> or, via email, send a message with subject or body 'help' to
> zfs-discuss-request at opensolaris.org
>
> You can reach the person managing the list at
> zfs-discuss-owner at opensolaris.org
>
> When replying, please edit your Subject line so it is more specific
> than "Re: Contents of zfs-discuss digest..."
>
>
> Today's Topics:
>
> 1. Re: Dedup... still in beta status (Fco Javier Garcia)
> 2. Re: Dedup... still in beta status (Erik Trimble)
> 3. Re: Dedup... still in beta status (Erik Trimble)
> 4. Complete Linux Noob (CarlPalmer)
> 5. Re: Complete Linux Noob (Freddie Cash)
> 6. Re: Complete Linux Noob (Roy Sigurd Karlsbakk)
> 7. Re: Native ZFS for Linux (Bob Friesenhahn)
>
>
> ----------------------------------------------------------------------
>
> Message: 1
> Date: Tue, 15 Jun 2010 11:53:28 PDT
> From: Fco Javier Garcia <correo at javido.com>
> To: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Dedup... still in beta status
> Message-ID: <1426457463.371276628042143.JavaMail.Twebapp at sf-app1>
> Content-Type: text/plain; charset=UTF-8
>
> or as a member of the ZFS team
>> (which I'm not).
>>
>
> Then you have to be brutally good with Java
>
>> --
>> Erik Trimble
>> Java System Support
>> Mailstop: usca22-123
>> Phone: x17195
>> Santa Clara, CA
>>
>> _______________________________________________
>> zfs-discuss mailing list
>> zfs-discuss at opensolaris.org
>> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>>
> --
> This message posted from opensolaris.org
>
>
> ------------------------------
>
> Message: 2
> Date: Tue, 15 Jun 2010 12:00:42 -0700
> From: Erik Trimble <erik.trimble at oracle.com>
> To: Geoff Nordli <geoffn at gnaa.net>
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Dedup... still in beta status
> Message-ID: <4C17CDDA.6050609 at oracle.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 6/15/2010 11:49 AM, Geoff Nordli wrote:
>>> From: Fco Javier Garcia
>>> Sent: Tuesday, June 15, 2010 11:21 AM
>>>
>>>
>>>> Realistically, I think people are overly enamored with dedup as a
>>>> feature - I would generally only consider it worthwhile in cases
>>>> where you get significant savings. And by significant, I'm talking
>>>> an order of magnitude space savings. A 2x savings isn't really
>>>> enough to counteract the down sides. Especially when even
>>>> enterprise disk space is (relatively) cheap.
>>>>
>>>>
>>>
>>> I think dedup may have its greatest appeal in VDI environments
>>> (think about an environment with 85% of the data that the virtual
>>> machine needs in the ARC or L2ARC... it is like a dream... almost
>>> instantaneous response... and you can boot a new machine in a few
>>> seconds)...
>>>
>>>
>> Does dedup benefit in the ARC/L2ARC space?
>>
>> For some reason, I have it in my head that each time it requests the
>> block from storage it will copy it into cache; therefore if I had
>> 10 VMs requesting the same dedup'd block, there would be 10 copies
>> of the same block in ARC/L2ARC.
>>
>> Geoff
>>
>
> No, that's not correct. It's the *same* block, regardless of where it
> was referenced from. The cached block has no idea where it was
> referenced from (that's in the metadata). So, even if I have 10 VMs
> requesting access to 10 different files, if those files have been
> dedup-ed, then any "common" (i.e. deduped) blocks will be stored only
> once in the ARC/L2ARC.
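Erik's point can be illustrated with a toy model. The sketch below (a hypothetical `ToyBlockCache`, not how the real ARC is implemented) keys the cache on a content checksum, the same idea ZFS dedup uses to identify identical blocks, so ten readers of the same deduped block end up sharing a single cached copy:

```python
import hashlib

class ToyBlockCache:
    """Toy cache keyed by block checksum: a rough model of how a
    dedup-aware cache can hold one copy of a block shared by many files."""

    def __init__(self):
        self._blocks = {}  # checksum -> block data

    def read(self, data_on_disk: bytes) -> bytes:
        # Key by content checksum, as ZFS dedup identifies blocks.
        key = hashlib.sha256(data_on_disk).hexdigest()
        if key not in self._blocks:
            self._blocks[key] = data_on_disk  # cache miss: store once
        return self._blocks[key]

# Ten "VMs" reading the same deduped block...
cache = ToyBlockCache()
block = b"common OS image block"
for _ in range(10):
    cache.read(block)

print(len(cache._blocks))  # one cached copy, not ten
```

The key design point is that the cache index is the block's identity (its checksum), not the file it was reached through, so the number of referencing files is irrelevant to the cache footprint.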
>
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
>
>
>
> ------------------------------
>
> Message: 3
> Date: Tue, 15 Jun 2010 12:06:40 -0700
> From: Erik Trimble <erik.trimble at oracle.com>
> To: Fco Javier Garcia <correo at javido.com>
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Dedup... still in beta status
> Message-ID: <4C17CF40.9020204 at oracle.com>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
>
> On 6/15/2010 11:53 AM, Fco Javier Garcia wrote:
>> or as a member of the ZFS team
>>
>>> (which I'm not).
>>>
>>>
>> Then you have to be brutally good with Java
>>
>
> Thanks, but I do get it wrong every so often (hopefully, rarely). More
> importantly, I don't know anything about the internal goings-on of the
> ZFS team, so I have nothing extra to say about schedules, plans,
> timing, etc. that everyone else doesn't know. I can only speculate
> based on what's been publicly said on those topics. E.g. I wish I knew
> when certain bugs would be fixed, but I don't have any more visibility
> to that than the public.
>
>
> --
> Erik Trimble
> Java System Support
> Mailstop: usca22-123
> Phone: x17195
> Santa Clara, CA
>
>
>
> ------------------------------
>
> Message: 4
> Date: Tue, 15 Jun 2010 12:13:25 PDT
> From: CarlPalmer <dwarvenlord1 at yahoo.com>
> To: zfs-discuss at opensolaris.org
> Subject: [zfs-discuss] Complete Linux Noob
> Message-ID: <1918282415.381276629236528.JavaMail.Twebapp at sf-app1>
> Content-Type: text/plain; charset=UTF-8
>
> I have been researching different types of RAID, and I happened
> across raidz, and I am blown away. I have been trying to find
> resources to answer some of my questions, but many of them are
> either over my head in terms of details, or foreign to me as I am a
> Linux noob, and I have to admit I have never even looked at Solaris.
>
> Are the parity drives just that, a drive assigned to parity, or is
> the parity shared over several drives?
>
> I understand that you can build a raidz2 that will have 2 parity
> disks. So in theory I could lose 2 disks and still rebuild my array,
> so long as they are not both the parity disks, correct?
>
> I understand that you can have spares assigned to the raid, so that
> if a drive fails, it will immediately grab the spare and rebuild the
> damaged drive. Is this correct?
>
> Now I cannot find anything on how much space is taken up in a
> raidz1 or raidz2. If all the drives are the same size, does a
> raidz2 take up the space of 2 of the drives for parity, or is the
> space calculation different?
>
> I get that you cannot expand a raidz as you would a normal raid, by
> simply slapping on a drive. Instead it seems that the preferred
> method is to create a new raidz. Now let's say that I want to add
> another raidz1 to my system, can I get the OS to present this as one
> big drive with the space from both raid pools?
>
> How do I share these types of raid pools across the network? Or,
> more specifically, how do I access them from Windows-based systems?
> Is there any special trick?
> --
> This message posted from opensolaris.org
>
>
> ------------------------------
>
> Message: 5
> Date: Tue, 15 Jun 2010 12:23:06 -0700
> From: Freddie Cash <fjwcash at gmail.com>
> To: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Complete Linux Noob
> Message-ID:
> <AANLkTinnFGQXYNfbX2fNh4EWjDlhWPBoldJ1NGCUYdys at mail.gmail.com>
> Content-Type: text/plain; charset="utf-8"
>
> Some of my terminology may not be 100% accurate, so apologies in
> advance to
> the pedants on this list. ;)
>
> On Tue, Jun 15, 2010 at 12:13 PM, CarlPalmer
> <dwarvenlord1 at yahoo.com> wrote:
>
>> I have been researching different types of raids, and I happened
>> across
>> raidz, and I am blown away. I have been trying to find resources
>> to answer
>> some of my questions, but many of them are either over my head in
>> terms of
>> details, or foreign to me as I am a linux noob, and I have to admit
>> I have
>> never even looked at Solaris.
>>
>> Are the Parity drives just that, a drive assigned to parity, or is
>> the
>> parity shared over several drives?
>>
>
> Separate parity drives are RAID3 setups. raidz1 is similar to RAID5 in
> that it uses distributed parity (parity blocks are written out to all
> the disks as needed). raidz2 is similar to RAID6. raidz3
> (triple-parity raid) is similar to... RAID7? Don't think there are
> actually any formal RAID levels above RAID6, is there?
>
>
>> I understand that you can build a raidz2 that will have 2 parity
>> disks. So
>> in theory I could lose 2 disks and still rebuild my array so long
>> as they
>> are not both the parity disks correct?
>>
>
> There are no "parity disks" in raidz. With raidz2, you can lose any 2
> drives in the vdev without losing any data. Lose a third drive,
> though, and everything is gone.
>
> With raidz3, you can lose any 3 drives in the vdev without losing any
> data. Lose a fourth drive, though, and everything is gone.
>
>
>> I understand that you can have Spares assigned to the raid, so that
>> if a
>> drive fails, it will immediately grab the spare and rebuild the
>> damaged
>> drive. Is this correct?
>>
>
> Depending on the version of ZFS being used, and whether or not you set
> the property that controls this feature, yes. Hot spares will start
> rebuilding a degraded vdev right away.
>
>
>> Now I can not find anything on how much space is taken up in the
>> raidz1 or
>> raidz2. If all the drives are the same size, does a raidz2 take up
>> the
>> space of 2 of the drives for parity, or is the space calculation
>> different?
>>
>
> Correct. raidz1 loses 1 drive's worth of space to parity. raidz2 loses
> 2 drives' worth of space. raidz3 loses 3 drives' worth of space.
>
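Freddie's rule of thumb can be written out as simple arithmetic. This sketch (a hypothetical helper, `raidz_usable`) ignores metadata, padding, and allocation overhead, which shave off some real-world usable space:

```python
def raidz_usable(drives: int, drive_size_tb: float, parity: int) -> float:
    """Approximate usable capacity of one raidz vdev: total capacity
    minus `parity` drives' worth of parity (parity = 1, 2, or 3 for
    raidz1/raidz2/raidz3)."""
    if not 1 <= parity <= 3:
        raise ValueError("raidz supports 1, 2, or 3 parity drives")
    if drives <= parity:
        raise ValueError("need more drives than parity")
    return (drives - parity) * drive_size_tb

# Example: a vdev of 6 x 2 TB drives.
print(raidz_usable(6, 2.0, 1))  # raidz1 -> 10.0 TB usable
print(raidz_usable(6, 2.0, 2))  # raidz2 ->  8.0 TB usable
print(raidz_usable(6, 2.0, 3))  # raidz3 ->  6.0 TB usable
```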
>
>> I get that you can not expand a raidz as you would a normal raid,
>> by simply
>> slapping on a drive. Instead it seems that the preferred method is
>> to
>> create a new raidz. Now Lets say that I want to add another raidz1
>> to my
>> system, can I get the OS to present this as one big drive with the
>> space
>> from both raid pools?
>>
>
> Yes. That is the whole point of pooled storage. :) As you add vdevs to
> the pool, the available space increases. There's no partitioning
> required; you just create ZFS filesystems and volumes as needed.
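The pooled-storage behavior reduces to a sum. A minimal sketch with hypothetical numbers (real pools also reserve some space for metadata): the pool's capacity is roughly the sum of each vdev's usable capacity, and adding a vdev simply grows that sum, with no repartitioning:

```python
def vdev_usable(drives: int, drive_size_tb: float, parity: int) -> float:
    # Usable space of one raidz vdev: total minus the parity drives.
    return (drives - parity) * drive_size_tb

# Start with one raidz1 vdev of 5 x 1 TB drives...
pool = [vdev_usable(5, 1.0, 1)]        # 4.0 TB
# ...later add a second raidz1 vdev of 5 x 2 TB drives.
pool.append(vdev_usable(5, 2.0, 1))    # 8.0 TB

print(sum(pool))  # 12.0 -- one pool, one big space, no repartitioning
```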
>
>
>> How do I share these types of raid pools across the network. Or more
>> specifically, how do I access them from Windows based systems? Is
>> there any
>> special trick?
>>
>
> The same way you access any hard drive over the network:
> - NFS
> - SMB/CIFS
> - iSCSI
> - etc.
>
> It just depends on what level you want to access the storage (files,
> shares, block devices, etc.).
>
> --
> Freddie Cash
> fjwcash at gmail.com
> -------------- next part --------------
> An HTML attachment was scrubbed...
> URL: <http://mail.opensolaris.org/pipermail/zfs-discuss/attachments/20100615/767d6c7d/attachment-0001.html>
>
> ------------------------------
>
> Message: 6
> Date: Tue, 15 Jun 2010 21:32:02 +0200 (CEST)
> From: Roy Sigurd Karlsbakk <roy at karlsbakk.net>
> To: CarlPalmer <dwarvenlord1 at yahoo.com>
> Cc: OpenSolaris ZFS discuss <zfs-discuss at opensolaris.org>
> Subject: Re: [zfs-discuss] Complete Linux Noob
> Message-ID: <2566843.64.1276630322625.JavaMail.root at zimbra>
> Content-Type: text/plain; charset=utf-8
>
> <snip/>
>> How do I share these types of raid pools across the network. Or more
>> specifically, how do I access them from Windows based systems? Is
>> there any special trick?
>
> Most of your questions are answered here:
>
>
> http://hub.opensolaris.org/bin/download/Community+Group+zfs/docs/zfslast.pdf
>
> Vennlige hilsener / Best regards
>
> roy
> --
> Roy Sigurd Karlsbakk
> (+47) 97542685
> roy at karlsbakk.net
> http://blogg.karlsbakk.net/
> --
> In all pedagogy, it is essential that the curriculum be presented
> intelligibly. It is an elementary imperative for all pedagogues to
> avoid excessive use of idioms of foreign origin. In most cases,
> adequate and relevant synonyms exist in Norwegian.
>
>
> ------------------------------
>
> Message: 7
> Date: Tue, 15 Jun 2010 15:03:35 -0500 (CDT)
> From: Bob Friesenhahn <bfriesen at simple.dallas.tx.us>
> To: Joerg Schilling <Joerg.Schilling at fokus.fraunhofer.de>
> Cc: zfs-discuss at opensolaris.org
> Subject: Re: [zfs-discuss] Native ZFS for Linux
> Message-ID:
> <alpine.GSO.2.01.1006151456200.12887 at freddy.simplesystems.org>
> Content-Type: TEXT/PLAIN; charset=US-ASCII; format=flowed
>
> On Tue, 15 Jun 2010, Joerg Schilling wrote:
>>
>> Sorry, but your reply is completely misleading, as the people who
>> claim that there is a legal problem with having ZFS in the Linux
>> kernel would of course also claim that Reiserfs cannot be in the
>> FreeBSD kernel.
>
> It seems that it is a license violation to link a computer containing
> GPLed code to the Internet. I think I heard on usenet or a blog that
> it was illegal to link GPLed code with non-GPLed code. The Internet
> itself is obviously a derived work and is therefore subject to the
> GPL.
>
> Bob
> --
> Bob Friesenhahn
> bfriesen at simple.dallas.tx.us, http://www.simplesystems.org/users/bfriesen/
> GraphicsMagick Maintainer, http://www.GraphicsMagick.org/
>
>
> ------------------------------
>
> _______________________________________________
> zfs-discuss mailing list
> http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
>
>
> End of zfs-discuss Digest, Vol 56, Issue 78
> *******************************************