Displaying 20 results from an estimated 400 matches similar to: "ZFS Boot Won't work with a straight or mirror zfsroot"
2007 May 31
3
zfs boot error recovery
hi all,
i would like to ask some questions regarding best practices for zfs
recovery if disk errors occur.
currently i have zfs boot (nv62) and the following setup:
2 si3224 controllers (each 4 sata disks)
8 sata disks, same size, same type
i have two pools:
a) rootpool
b) datapool
the rootpool is a mirrored pool, where every disk has a slice (the s0,
which is 5 % of the whole disk) and this
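A layout like the one described could be sketched as follows; this is a dry-run shell sketch with hypothetical device names (c0t0d0 etc.) that prints the destructive commands instead of executing them:

```shell
#!/bin/sh
# Dry-run sketch, hypothetical device names: print each command instead
# of executing it, since pool creation is destructive and needs real disks.
run() { echo "+ $*"; }

# Mirrored root pool on the small s0 slices of two disks (one per controller):
run zpool create rootpool mirror c0t0d0s0 c1t0d0s0

# Data pool built from mirrors of the remaining whole disks:
run zpool create datapool \
  mirror c0t1d0 c1t1d0 \
  mirror c0t2d0 c1t2d0 \
  mirror c0t3d0 c1t3d0
```

Swap `echo` for the real command in `run` once the device names are verified.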
2007 Oct 25
1
How to have ZFS root with /usr on a separate datapool
Ref: http://blogs.sun.com/timf/entry/zfs_bootable_datasets_happily_rumbling
Ref: http://mediacast.sun.com/share/timf/zfs-actual-root-install.sh
This is my errata for Tim Foster's zfs root install script:
1/ Correct mode for /tmp should be 1777.
2/ The zfs boot install should allow you to have /usr on a separate zpool:
	a/ We need to create /zfsroot/usr/lib in the root partition and
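The separate-/usr idea could be sketched like this; a hedged dry-run with assumed names (datapool/usr, /zfsroot), printing the commands rather than running them on a live system:

```shell
#!/bin/sh
# Dry-run sketch with assumed names (datapool/usr, /zfsroot): echo the
# commands instead of running them on a live system.
run() { echo "+ $*"; }

# Create a /usr dataset on the data pool and give it a legacy mountpoint,
# so it can be listed in /etc/vfstab and mounted early during boot:
run zfs create -o mountpoint=legacy datapool/usr
run mkdir -p /zfsroot/usr/lib
run mount -F zfs datapool/usr /zfsroot/usr

# vfstab line (printed here for illustration):
echo 'datapool/usr - /usr zfs - yes -'
```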
2010 Aug 28
4
ufs root to zfs root liveupgrade?
hi all
Trying to learn how a UFS root to ZFS root live upgrade (liveUG) works.
I downloaded the vbox image of s10u8; it comes up as UFS root.
add a new disk (16GB)
create zpool rpool
run lucreate -n zfsroot -p rpool
run luactivate zfsroot
run lustatus; it does show zfsroot will be active on next boot
init 6
but it comes up with UFS root,
lustatus shows ufsroot active
zpool rpool is mounted but not used by boot
Is this a
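For reference, the Live Upgrade sequence being attempted can be summarized as below (dry-run sketch, assumed BE name "zfsroot" and pool "rpool"); note that luactivate's own output lists boot-loader steps that may be required before the reboot, and that init/shutdown, not reboot, should be used afterwards:

```shell
#!/bin/sh
# Dry-run sketch (assumed BE name "zfsroot", pool "rpool"): print the
# Live Upgrade steps instead of executing them.
run() { echo "+ $*"; }

run lucreate -n zfsroot -p rpool
run luactivate zfsroot   # read its output: it lists any required boot steps
run lustatus             # "zfsroot" should show yes under "Active On Reboot"
run init 6               # use init/shutdown, not reboot, after luactivate
```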
2011 Dec 08
1
Can't create striped replicated volume
Hi,
I'm trying to create a striped replicated volume but am getting this error:
gluster volume create cloud stripe 4 replica 2 transport tcp
nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
wrong brick type: replica, use <HOSTNAME>:<export-dir-abs-path>
Usage: volume create <NEW-VOLNAME> [stripe <COUNT>] [replica <COUNT>]
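One likely cause, inferred from the usage line: the brick count must equal stripe × replica, so stripe 4 replica 2 would need eight bricks, and some older glusterfs releases rejected the stripe+replica combination outright. With the four bricks shown, a stripe 2 replica 2 layout fits (dry-run sketch using the hostnames from the post):

```shell
#!/bin/sh
# Dry-run sketch: print the command rather than contact a gluster cluster.
run() { echo "+ $*"; }

# 4 bricks = stripe 2 x replica 2 (stripe 4 replica 2 would need 8 bricks):
run gluster volume create cloud stripe 2 replica 2 transport tcp \
  nebula1:/dataPool nebula2:/dataPool nebula3:/dataPool nebula4:/dataPool
```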
2010 Jul 09
2
snapshot out of space
I am getting the following error message when trying to do a zfs snapshot:
root@pluto# zfs snapshot datapool/mars@backup1
cannot create snapshot 'datapool/mars@backup1': out of space
root@pluto# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
datapool 556G 110G 446G 19% ONLINE -
rpool 278G 12.5G 265G 4% ONLINE -
Any ideas???
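Since zpool list shows 446G free, the shortfall is usually at the dataset level rather than the pool: a quota, refquota, or refreservation on the dataset or a parent. A dry-run sketch of the checks, with the dataset name taken from the post:

```shell
#!/bin/sh
# Dry-run sketch: print the diagnostic commands.
run() { echo "+ $*"; }

# Per-dataset space accounting, including what snapshots consume:
run zfs list -o space datapool/mars
# Limits that can make a snapshot fail while the pool still has room:
run zfs get quota,refquota,refreservation datapool/mars
run zfs get quota datapool
```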
2007 Dec 19
0
zfs boot suddenly not working
On Dec 18, 2007, at 6:15 PM, Michael Hale wrote:
> We have a machine that is configured with zfs boot, Nevada v67 - we
> have two pools, rootpool and datapool.  It has been working OK since
> June.  Today it kernel panicked and now when we try to boot it up,
> it gets to the grub screen, we select ZFS, and then there is a
> kernel panic that flashes by too quickly for us to
2014 Oct 20
1
2.2.14 Panic in sync_expunge_range()
I am getting some panics after upgrading from 2.2.13 to 2.2.14.
The panic happens for one user only; he is subscribed to 86 folders,
and on two of them this panic happens quite often - several times a day.
The mbox folders seem OK, less than 30M with 30 and 200 messages.
Panic: file mail-index-sync-update.c: line 250 (sync_expunge_range): assertion failed: (count > 0)
hmk
GNU gdb 6.8
2011 Dec 13
1
question regarding samba permissions
I want to make a subfolder read only for certain users.
for example: /data/pool is public rwx for all users. 
and now i would like to make a /data/pool/subfolder only rwx for user1 and grant read only permissions to user2 and user3
how do i do this? any links or direct tips on that?
my suggestion would be something like this, but as you can imagine it didn't work:
# The general datapool
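One way this is often handled (a sketch, not tested against this setup): export the subfolder as its own share and use Samba's per-share access lists; within a single share, per-directory restrictions have to come from filesystem permissions or ACLs instead. A hypothetical smb.conf fragment with the user names from the post:

```
# The general datapool, writable by everyone
[pool]
   path = /data/pool
   read only = no

# The subfolder as its own share: user1 may write,
# user2 and user3 get read-only access
[subfolder]
   path = /data/pool/subfolder
   valid users = user1 user2 user3
   read only = yes
   write list = user1
```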
2011 Dec 14
1
Fwd: Re: question regarding samba permissions
wouldn't work because all the users are in one group anyway.
and I am not allowed to give read rights to "any" (i.e. 755)
but is there really no option in smb.conf like "read only users =" or
something like that?
Am 13.12.2011 17:56, schrieb Raffael Sahli:
>  On Tue, 13 Dec 2011 16:38:41 +0100, "skull"<skull17 at gmx.ch>   wrote:
>>  I want to
2017 Jul 25
1
memory snapshot save error
libvirt
version: 3.4.0
architecture: x86_64 ubuntu16.04-server
hypervisor: kvm,qemu
when I try to make a memory snapshot of a vm and call virsh save, it gives me this error:
error: Failed to save domain 177 to /datapool/mm.img
error: operation failed: domain save job: unexpectedly failed
xml configure:
<domain type='kvm' id='177'>
  <name>virt23</name>
 
2010 Mar 13
3
When to Scrub..... ZFS That Is
When would it be necessary to scrub a ZFS filesystem?
We have many "rpool" and "datapool" pools, plus a NAS 7130; would you
recommend scheduling monthly scrubs at off-peak hours, or is it really necessary?
Thanks
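Periodic scrubs are cheap insurance against latent errors on redundant pools, and monthly at an off-peak time is a common choice. A hypothetical root crontab fragment:

```
# Run at 02:00 early in each month; one entry per pool.
0 2 1 * * /usr/sbin/zpool scrub datapool
0 2 2 * * /usr/sbin/zpool scrub rpool
```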
2007 Jul 27
0
cloning disk with zpool
Hello the list,
I thought that it should be easy to do a clone (not in the ZFS sense of the term) of a disk with zpool. This manipulation is strongly inspired by
http://www.opensolaris.org/jive/thread.jspa?messageID=135038
and 
http://www.opensolaris.org/os/community/zfs/boot/
But unfortunately this doesn't work, and we have no clue what could be wrong.
on c1d0 you have a zfs root
create a
2007 Aug 13
2
ZFS Boot for Solaris SPARC
Hi,
Searching this alias I can find a number of guides and scripts that
describe the configuration of Solaris to boot from a ZFS rootpool.
However, these guides appear to be Solaris 10 x86 specific.
Is the creation of a ZFS boot disk available for Solaris SPARC?
If so, could you point me in the direction of where I can obtain details
of this new feature.
Thanks and Regards,
Paul.
PS:
2008 Jun 12
2
Getting Batch mode to continue running a script after running into errors
I'm invoking R in batch mode from a bash script as follows:
R --no-restore --no-save --vanilla < $TARGET/$directory/o2sat-$VERSION.R > $TARGET/$directory/o2sat-$VERSION.Routput
When R comes across some error in the script however it seems to halt
instead of running subsequent lines in the script:
Error in file(file, "r") : cannot open the connection
Calls: read.table ->
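One common workaround, sketched here assuming an R interpreter on PATH and a hypothetical file name: wrap the failure-prone call in try() inside the script, so execution continues past the error instead of halting the batch run:

```shell
#!/bin/sh
# Sketch: feed R a script in which the risky read is wrapped in try(),
# so the lines after it still execute.  File name is hypothetical.
R --no-restore --no-save --vanilla <<'EOF'
dat <- try(read.table("maybe-missing.txt"), silent = TRUE)
if (inherits(dat, "try-error")) dat <- NULL   # fall back and carry on
cat("reached the end of the script\n")
EOF
```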
2007 Jul 25
3
Any fix for zpool import kernel panic (reboot loop)?
My system (a laptop with ZFS root and boot, SNV 64A) on which I was trying Opensolaris now has the zpool-related kernel panic reboot loop. 
Booting into failsafe mode or another solaris installation and attempting:
'zpool import -F rootpool' results in a kernel panic and reboot.
A search shows this type of kernel panic has been discussed on this forum over the last year.
2018 Nov 15
1
libvirt call qemu to create vm need more than 10 seconds
Hi all:
    It takes more than 10 seconds to create a vm on a Dell R830 machine, but it takes less than 2 seconds on other machines. This is not normal, so I turned on the debug log for libvirtd. I analyzed the log and found that the time was spent on libvirtd calling qemu. Thread 95225 calls the qemuProcessLaunch interface at 14:22:30.129 and then builds the emulator command line, but the
2007 Nov 19
1
Recommended settings for dom0_mem when using zfs
I have a xVm b75 server and use zfs for storage (zfs root mirror and a  
raid-z2 datapool.)
I see everywhere that it is recommended to have a lot of memory on a  
zfs file server... but I also need to relinquish a lot of my memory to  
be used by the domUs.
What would be a good value for dom0_mem on a box with 4 GB of RAM?
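A common rule of thumb (hedged; exact numbers vary with workload): pin dom0 to a fixed 1-1.5 GB and also cap the ZFS ARC so it doesn't compete with the domUs for what remains. Hypothetical fragments for a 4 GB box:

```
# /boot/grub/menu.lst - pin dom0 memory (hypothetical value):
kernel$ /boot/$ISADIR/xen.gz dom0_mem=1536M

# /etc/system - cap the ZFS ARC (hypothetical 512 MB):
set zfs:zfs_arc_max = 0x20000000
```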
2009 Jan 09
24
zfs root, jumpstart and flash archives
I understand that currently, at least under Solaris 10u6, it is not 
possible to jumpstart a new system with a zfs root using a flash archive 
as a source.
Can anyone comment on whether this restriction will be lifted in the near 
term, or if it is a while out (6+ months) before this will be possible?
Thanks,
Jerry