Hey, I'm running some tests right now before setting up my server. I'm running Nexenta Core 3.02 (RC2, based on OpenSolaris build 134, I believe) in VirtualBox.

To do the test, I'm creating three empty files and then making a raidz pool:

mkfile -n 1g /foo
mkfile -n 1g /foo1
mkfile -n 1g /foo2

Then I make a zpool:

zpool create testpool raidz /foo /foo1 /foo2

Now I destroy the pool and attempt to restore it:

zpool destroy testpool

But when I try to list available imports, the list is empty:

zpool import -D

returns nothing.

zpool import testpool

also returns nothing.

Even if I export the pool instead (so before destroying it):

zpool export testpool

I see it disappear from the zpool list, but I can't import it afterwards (the commands return nothing).

Is this due to the fact that I'm using test files instead of real drives?

Thanks.
-- 
This message posted from opensolaris.org
On 06/11/10 22:07, zfsnoob4 wrote:
> Is this due to the fact that I'm using test files instead of real drives?

Yes. "zpool import" will by default look in /dev/dsk. You need to specify the directory (using -d <dir>) if your pool devices are located elsewhere. See "man zpool".

Neil.
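[A minimal sketch of the -d workflow Neil describes. The /var/tmp path and pool name are illustrative choices, not from the thread; the commands assume a ZFS-capable Solaris-family host and root privileges.]

```shell
# Create three 1 GiB sparse backing files to stand in for disks.
mkfile -n 1g /var/tmp/d0 /var/tmp/d1 /var/tmp/d2

# Build a raidz pool on the files, then export it cleanly.
zpool create testpool raidz /var/tmp/d0 /var/tmp/d1 /var/tmp/d2
zpool export testpool

# A bare "zpool import" scans only /dev/dsk, so it finds nothing here;
# -d points the scan at the directory holding the backing files.
zpool import -d /var/tmp testpool
```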
Thanks, that works. But only when I do a proper export first.

If I export the pool, then I can import it with:

zpool import -d /

(the test files are located in /)

But if I destroy the pool, then I can no longer import it back, even though the files are still there. Is this normal?

Thanks for your help.
I'm guessing that the VirtualBox VM is ignoring write cache flushes. See this for more info:
http://forums.virtualbox.org/viewtopic.php?f=8&t=13661

On 12 Jun, 2010, at 5.30, zfsnoob4 wrote:
> but if I destroy the pool, then I can no longer import it back, even though the files are still there. Is this normal?
Thanks. As I discovered from that post, VirtualBox does not honor cache flushes by default; IgnoreFlush must be explicitly turned off:

VBoxManage setextradata VMNAME "VBoxInternal/Devices/piix3ide/0/LUN#[x]/Config/IgnoreFlush" 0

where VMNAME is the name of your virtual machine.

I tried that and it returned with no output (indicating it worked), but it still won't detect a pool that has been destroyed. Is there any way to detect whether flushes are working from inside the OS? Maybe a command that tells you if cache flushing is enabled?

Thanks.
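[A sketch of what the command above looks like filled in. The VM name "nexenta" is hypothetical; the VM must be powered off before setting extradata. On the piix3ide controller the LUN index corresponds to the IDE slot the virtual disk is attached to.]

```shell
# LUN index mapping for the PIIX3 IDE controller:
#   0 = primary master, 1 = primary slave,
#   2 = secondary master, 3 = secondary slave.
# IgnoreFlush=0 makes VirtualBox pass the guest's flush requests
# through to the host instead of silently dropping them.
VBoxManage setextradata nexenta \
  "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush" 0

# Read the key back to confirm the setting took effect.
VBoxManage getextradata nexenta \
  "VBoxInternal/Devices/piix3ide/0/LUN#0/Config/IgnoreFlush"
```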
On 06/12/10 17:13, zfsnoob4 wrote:
> it still won't detect a pool that has been destroyed.

You also need the "-D" flag. I could successfully import. This was running the latest bits:

: trasimene ; mkdir /pf
: trasimene ; mkfile 100m /pf/a /pf/b /pf/c
: trasimene ; zpool create whirl /pf/a /pf/b log /pf/c
: trasimene ; zpool destroy whirl
: trasimene ; zpool import -D -d /pf
  pool: whirl
    id: 1406684148029707587
 state: ONLINE (DESTROYED)
action: The pool can be imported using its name or numeric identifier.
config:

        whirl       ONLINE
          /pf/a     ONLINE
          /pf/b     ONLINE
        logs
          /pf/c     ONLINE
: trasimene ; zpool import -D -d /pf whirl
: trasimene ; zpool status whirl
  pool: whirl
 state: ONLINE
  scan: none requested
config:

        NAME        STATE     READ WRITE CKSUM
        whirl       ONLINE       0     0     0
          /pf/a     ONLINE       0     0     0
          /pf/b     ONLINE       0     0     0
        logs
          /pf/c     ONLINE       0     0     0

errors: No known data errors
: trasimene ;

It would, of course, have been easier if you'd been using real devices, but I understand you want to experiment first...
Thank you. The -D option works. And yes, now I feel a lot more confident about playing around with the FS. I'm planning on moving an existing raid1 NTFS setup to ZFS, but since I'm on a budget I only have three drives in total to work with. I want to make sure I know what I'm doing before I mess around with anything.

Also, I can confirm that the cache flush option is not ALWAYS needed for the import. I have OpenSolaris build 134 in VirtualBox without cache flushing enabled, and after destroying the pool the import still worked correctly with the -D option. I emphasize "always" because if you are writing to the disk while you destroy the pool, it may not work very well; I haven't tested this.

Thanks for your help.
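[For reference, the full destroy-and-recover cycle resolved in this thread, condensed into one sequence. Paths under /pf follow Neil's example; the pool name is illustrative, and the commands assume root on a ZFS-capable host.]

```shell
# File-backed vdevs stand in for real disks while experimenting.
mkdir -p /pf
mkfile 100m /pf/a /pf/b /pf/c

# Create a raidz pool on the files, then destroy it.
zpool create testpool raidz /pf/a /pf/b /pf/c
zpool destroy testpool

# Recovering a destroyed file-backed pool needs BOTH flags:
#   -d /pf  scan /pf instead of the default /dev/dsk
#   -D      include destroyed pools in the search
zpool import -D -d /pf testpool

# The pool should now show as ONLINE again.
zpool status testpool
```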