Jason Pfingstmann
2009-Aug-21 19:22 UTC
[zfs-discuss] Need 1.5 TB drive size to use for array for testing
This is an odd question, to be certain, but I need to find out what size a 1.5 TB drive is to help me create a sparse/fake array. Basically, if someone could do a dd if=<1.5 TB disk> of=<somefile> and then post the ls -l size of that file, it would greatly assist me.

Here's what I'm doing: I have a 1 TB drive with my data on it (NTFS) and a second 1 TB drive that I want to move my data onto. However, I eventually want to have 6 x 1.5 TB drives for this array (with raidz2, for 6 TB of usable storage - I have a ton of additional drives with data). I can't afford the drives now, but I want to get it ready for when I can, so here's my plan:

1) Create a ZFS volume with the 1 TB drive that's empty
2) Move data onto it
3) Wipe out the original NTFS drive
4) Create 1.5 TB sparse files (5 of them) on the old NTFS drive
5) Create raidz2 on the 1.5 TB files (mount as loopback if necessary)
   Note: I realize this defeats the benefits of the raidz; if the drive dies, I lose everything on it.
6) Copy data from the 1 TB ZFS volume to the pseudo/fake raidz array (I'd set up an rsync or something)

This way I still have -some- redundancy should 1 of the 2 drives fail. As I get new drives, I'll replace the loopback devices with the physical devices. This way I'll slowly gain the redundancy I desire (raidz2), while still being redundant while the amount of data I have is low.

Any thoughts on this? I don't see why it shouldn't work, but I've only been tinkering with ZFS for 2 days now and this is all unexplored territory.

P-Chan
-- 
This message posted from opensolaris.org
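Steps 4 and 5 above could be sketched roughly as follows. This is a sketch, not from the thread: the backing directory, file names, and pool name are assumptions, and the zpool line is left as a comment since it needs a live system. GNU `truncate` is used here; on Solaris, `mkfile -n 1500g` does the same thing.

```shell
# Hypothetical backing directory; substitute the old drive's mount point.
backing=${BACKING_DIR:-/tmp/fake-vdevs}
mkdir -p "$backing"

# Create 5 sparse files with an apparent size of 1.5 TB each.
# No blocks are allocated until data is actually written.
for i in 1 2 3 4 5; do
    truncate -s 1500G "$backing/vdev$i.img"
done

ls -l "$backing"            # shows the full 1.5 TB apparent size
du -k "$backing/vdev1.img"  # shows near-zero actual disk usage

# ZFS accepts plain files as vdevs, so loopback mounting may not be needed
# (pool name "tank" is hypothetical):
# zpool create tank raidz2 "$backing"/vdev1.img "$backing"/vdev2.img \
#     "$backing"/vdev3.img "$backing"/vdev4.img "$backing"/vdev5.img
```

The gap between `ls -l` (apparent size) and `du -k` (allocated blocks) is exactly the property the plan relies on.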
Eric D. Mudama
2009-Aug-22 05:58 UTC
[zfs-discuss] Need 1.5 TB drive size to use for array for testing
On Fri, Aug 21 at 12:22, Jason Pfingstmann wrote:
> This is an odd question, to be certain, but I need to find out what
> size a 1.5 TB drive is to help me create a sparse/fake array.

(Personally, I think you're making your job a lot harder than it should be. Just wait until you have the real disks and do your array creation then, with no gimmicks.)

That being said, while all drives can have slightly different numbers of LBAs, vendors seem to be standardizing on the IDEMA formula in their "Document LBA1-02: LBA Count for IDE Hard Disk Drives Standard":

    LBA count = 97696368 + (1953504 * (Desired Capacity in Gbytes - 50.0))

Just plug in 1500 for "desired capacity in Gbytes" and it should tell you what most vendors are configuring their 1.5 TB drives to. I just checked, and the 1.5 TB Caviar Green spec sheet on wdc.com shows exactly the number of LBAs you get when you plug 1500 into the above formula.

--eric

-- 
Eric D. Mudama
edmudama at mail.bounceswoosh.org
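Plugging 1500 into the IDEMA formula is quick to check with shell arithmetic (assuming the usual 512-byte sectors):

```shell
gb=1500   # desired capacity in Gbytes
lba=$(( 97696368 + 1953504 * (gb - 50) ))
bytes=$(( lba * 512 ))   # 512-byte sectors
echo "LBAs:  $lba"       # 2930277168
echo "bytes: $bytes"     # 1500301910016
```

So a 1.5 TB drive built to this formula should present 2,930,277,168 LBAs, or just over 1.5 decimal terabytes.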
Jason Pfingstmann
2009-Aug-22 07:00 UTC
[zfs-discuss] Need 1.5 TB drive size to use for array for testing
Thanks for the reply!

The reason I'm not waiting until I have the disks is mostly because it will take me several months to get the funds together, and in the meantime I need the extra space 1 or 2 drives gets me. Since the sparse files will only take up the space in use, once I've migrated 2 of the sparse files to actual disks, I should have enough storage for about 2 TB of data without risking running out of space on the sparse-file drive. I know it'll be quirky, and I'd need to monitor the sparse-file drive closely to ensure it doesn't run out of room (or risk unexpected results, possibly complete data loss, depending on how ZFS deals with that kind of problem).
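That monitoring could be as simple as a cron-driven free-space check on the backing filesystem. A minimal sketch, assuming a mount point and threshold that are placeholders, not from the thread:

```shell
# Hypothetical watchdog for the sparse-file drive.
check_free() {
    # $1 = free KB, $2 = threshold KB; prints WARN or OK
    if [ "$1" -lt "$2" ]; then echo WARN; else echo OK; fi
}

# Substitute the backing drive's mount point for /tmp.
free_kb=$(df -Pk /tmp | awk 'NR==2 {print $4}')
check_free "$free_kb" $(( 100 * 1024 * 1024 ))   # warn below 100 GB free
```

Wired to mail or a pager, this would give some warning before the sparse files' allocations exhaust the real disk underneath them.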
Pawel Jakub Dawidek
2009-Aug-22 08:44 UTC
[zfs-discuss] Need 1.5 TB drive size to use for array for testing
On Sat, Aug 22, 2009 at 12:00:42AM -0700, Jason Pfingstmann wrote:
> Thanks for the reply!
>
> The reason I'm not waiting until I have the disks is mostly because it
> will take me several months to get the funds together and in the
> meantime, I need the extra space 1 or 2 drives gets me. Since the
> sparse files will only take up the space in use, if I've migrated 2 of
> the sparse files to actual disk, I should have enough storage for
> about 2 TB of data without risking running out of space on the sparse
> file drive.

It doesn't work exactly as you describe. ZFS cannot report back to the file that a given block is free. Because of the COW model, if you modify your pool a lot, blocks will be allocated in the sparse files but never released, so your sparse files will only grow. You can end up with a mostly empty pool and fully populated sparse files.

As for the idea itself, I did something similar in the past when I was changing a pool's layout: I created a raidz2 vdev with two sparse files, which I removed immediately, and used the two disks I saved as temporary storage. Once I had copied the data to the raidz2 destination pool, I added those two disks into the holes and let the ZFS resilver do its job.

-- 
Pawel Jakub Dawidek                       http://www.wheel.pl
pjd at FreeBSD.org                           http://www.FreeBSD.org
FreeBSD committer                         Am I Evil? Yes, I Am!
Brandon High
2009-Aug-23 01:30 UTC
[zfs-discuss] Need 1.5 TB drive size to use for array for testing
On Fri, Aug 21, 2009 at 12:22 PM, Jason Pfingstmann <no-reply at opensolaris.org> wrote:
> Any thoughts on this? I don't see why it shouldn't work, but I've only
> been tinkering with ZFS for 2 days now and this is all unexplored
> territory.

You shouldn't need to fake the size of your file-backed vdevs. If you plan on having 6 drives, create 6 files, each of them 1/6 of the total drive size. When you replace all the files with the actual drives (or just with larger files) and export/import the pool, you'll have your 6 x 1.5T pool.

For instance:

1) Start with 6 x 250G files on one drive. [6x250, 0 free]
2) Add the second drive, create 3 x 500G files. [6x250, 0 free][0x0, 1.5T free]
3) Replace 1 vdev file in your pool with one of the new files. [6x250, 0 free][3x500, 0 free]
4) Resilver.
5) Repeat 3 & 4 for the other two files on the 2nd drive.
6) Delete the vdev files on the first drive that are no longer in use. [3x250, 750 free][3x500, 0 free]
7) Create a new 500G file on the first drive and replace a vdev with it. [3x250, 1x500, 250 free][3x500, 0 free]
8) Delete the vdev file you've replaced. [2x250, 1x500, 500 free][3x500, 0 free]
9) Repeat steps 7 & 8. [1x250, 2x500, 250 free][3x500, 0 free]
10) Fail the last 250G file out of the pool & delete the file. Your pool will be degraded. [2x500, 500 free][3x500, 0 free]
11) Create a new 500G file on the first drive and replace the missing vdev with it. [3x500, 0 free][3x500, 0 free]

You might be better off using slices/partitions so that the files don't become interleaved. You're going to have horrible performance due to excessive seeking regardless of what you do, too. If one of the drives fails, you'll still lose all your data. If you went to 3 drives, you'd have some data protection using raidz2.

You could continue to add drives one by one in this manner. For 3 drives, you'd use 750G slices, but 4 or 5 wouldn't be possible without major tomfoolery, since the slice or file size would be > 0.5 * 1.5T.
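One cycle of that grow-by-replacing procedure might look like the sketch below. The directory, file, and pool names are hypothetical, and the zpool lines are left as comments because they require a live pool; only the file handling is shown directly.

```shell
# Hypothetical mount point of the second drive.
backing2=${BACKING2:-/tmp/drive2}
mkdir -p "$backing2"

# Step 2/7: create a new, larger sparse file to serve as the replacement vdev.
truncate -s 500G "$backing2/vdev-new1.img"

# Step 3: swap it in for one of the old, smaller file vdevs
# ("tank" and the old path are placeholders):
# zpool replace tank /drive1/vdev-old1.img "$backing2/vdev-new1.img"

# Step 4: wait for the resilver to finish before starting the next swap:
# zpool status tank

# Step 6/8: once it is no longer in the pool, reclaim the space:
# rm /drive1/vdev-old1.img
```

The key constraint is sequencing: each resilver must complete before the next old file is removed, or the pool loses more redundancy than intended.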
I'm too embarrassed to post my suggestion on how to work around that; it makes me feel dirty. In the long run, you're better off starting out with 6 drives. If you can't buy them all now, use 3 x 1.5T in a raidz and add another 3 x 1.5T raidz to your pool later.

-B

-- 
Brandon High : bhigh at freaks.com