Displaying 20 results from an estimated 1000 matches similar to: "create mirror copy of existing zfs stack"
2010 Nov 12
11
how to quiesce and unquiesce zfs and zpool for array/hardware snapshots?
Hi,
How can I quiesce / freeze all writes to zfs and zpool if I want to take hardware-level snapshots or an array snapshot of all devices under a pool?
are there any commands or ioctls or apis available ?
Thanks & Regards,
sridhar.
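(No freeze/thaw ioctl is documented for ZFS. A common workaround, sketched here on the assumption that the pool can be taken offline briefly, is to export the pool around the array snapshot; "tank" is a hypothetical pool name.)
# zpool export tank     # flush all writes and close the pool
  ... take the array/hardware snapshot of every LUN in the pool ...
# zpool import tank     # bring the pool back online
Because ZFS is copy-on-write and always consistent on disk, an array snapshot of a live pool is also crash-consistent; running sync first merely narrows the window of unwritten data.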
2010 Oct 19
4
rename zpool
Hi,
I have two questions:
1) Is there any way of renaming a zpool without export/import?
2) If I took a hardware snapshot of the devices under a zpool (where the snapshot device is an exact copy including metadata, i.e. the zpool and associated file systems), is there any way to rename the zpool name of the snapshotted devices without losing the data?
Thanks & Regards,
sridhar.
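(For question 1, the supported rename path does go through export/import; a sketch with hypothetical pool names:)
# zpool export tank
# zpool import tank newtank    # the pool comes back under the new name
For question 2, the same rename-on-import should apply to the snapshotted devices, but since they carry the original pool's GUID they would presumably have to be imported on a host where the original pool is not active; that is an assumption worth testing, not a verified recipe.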
2008 Jul 02
14
Changing GUID
Hi,
How difficult would it be to write some code to change the GUID of a pool?
----
Thanks
Peter
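(Later ZFS releases added a built-in command for exactly this; on builds that have it, no custom code is needed. "tank" is a hypothetical pool name.)
# zpool reguid tank    # assign a new random GUID to the pool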
2010 Apr 16
1
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
I am getting the following error, however as you can see below this is an SMI
label...
cannot set property for 'rpool': property 'bootfs' not supported on EFI labeled devices
# zpool get bootfs rpool
NAME   PROPERTY  VALUE  SOURCE
rpool  bootfs    -      default
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool
cannot set property for 'rpool': property
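(The usual workaround, assuming the disk really does carry an EFI label despite appearances, is to relabel it with an SMI label via format's expert mode. Relabeling rewrites the partition table, so this is destructive; a sketch only:)
# format -e
  (select the disk, run the "label" command, and choose the SMI label type)
# zpool set bootfs=rpool/ROOT/s10s_u8wos_08a rpool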
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of four 512 GB iSCSI LUNs located on a network appliance.
We are seeing poor read performance from the zfs pool.
The release of solaris we are using is:
Solaris 10 10/09 s10s_u8wos_08a SPARC
The server itself is a T2000
I was wondering how we can tell if the zfs_vdev_max_pending setting is impeding read performance of the zfs pool? (The pool consists of lots of small files).
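(One way to inspect and tune the setting on a live Solaris 10 kernel; the values shown are illustrative, not measured:)
# echo zfs_vdev_max_pending/D | mdb -k       # read the current value
# echo zfs_vdev_max_pending/W0t10 | mdb -kw  # set it to 10 on the running kernel
and persistently, in /etc/system:
set zfs:zfs_vdev_max_pending=10
Watching actv and %b in iostat -xn while lowering the value is one rough way to see whether queue depth is the bottleneck.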
2006 Oct 18
5
ZFS and IBM sdd (vpath)
Hello, I am trying to configure ZFS with IBM sdd. IBM sdd is like powerpath, MPXIO or VxDMP.
Here is the error message when I try to create my pool:
bash-3.00# zpool create tank /dev/dsk/vpath1a
warning: device in use checking failed: No such device
internal error: unexpected error 22 at line 446 of ../common/libzfs_pool.c
bash-3.00# zpool create tank /dev/dsk/vpath1c
cannot open
2005 Nov 20
11
NFS question (and Best Practices)
I saw in another post that a best-practices doc will be coming, but I figured I would try to get this working.
I'm trying to understand why ZFS uses so many "zfs create" operations so I can use it better. What makes sense is that each ZFS filesystem can have its own options (compression, NFS, atime, quota, etc.). I really love this because it is so tunable -- compression on these
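(A sketch of why per-filesystem creates pay off; the dataset names are hypothetical:)
# zfs create -o compression=on -o quota=10g tank/home/alice
# zfs create -o atime=off tank/builds
# zfs set sharenfs=on tank/home/alice
Each dataset carries its own properties and can be snapshotted, quota'd, and shared independently, which is why many small filesystems are idiomatic in ZFS.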
2011 Apr 19
2
Problem with softlinks under samba on RHEL
Hello all Samba enthusiasts.
I am a noob w.r.t. samba configuration and just happened to use it on
Solaris systems by creating conf files with copy-paste :-). This is
just to convey that I'm no advanced user and might need help with
whatever solutions or alternatives come up.
Some background => Here is my smb.conf:
===
[global]
workgroup = <company domain>
server string = Samba
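(The conf above is cut off, but for softlink problems the usual share-level knobs are the following; whether they are acceptable depends on the security tradeoff of letting clients follow links outside the share:)
follow symlinks = yes
wide links = yes
unix extensions = no
Newer Samba releases refuse to honor wide links unless unix extensions is disabled, hence the third line.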
2010 Sep 12
2
dovecot 2.0.2 compile issues on Solaris 10u8 Sparc
I am having compile problems with Dovecot v2.0.2 on a well patched
Solaris 10u8 Sparc system using the included gcc compiler.
Version 2.0.0 compiled without any issues using the same configure
syntax. My ./configure syntax looks like this:
./configure --with-ssl=openssl --with-shadow
Yahoo and Google searches turned up nothing for me.
Reviewing the file mountpoint.c between version 2.0.0
2010 Jul 14
6
Temporary files
In v1.0 .. v1.1, deliver wrote incoming >128k mail to a /tmp file (to
avoid reading it all into memory). In v1.2 I moved it to the user's home
directory. This slowed deliveries for NFS users. Also, people with
filesystem quota had trouble, since the user now required twice as much
available quota to save a message. The FS quota problem was "solved" by
having the quota-fs plugin change the
2018 Mar 19
0
[job] LLVM compiler engineer position at SARC (Samsung Austin R&D Center)
Hi,
SARC System Performance Architecture is seeking a full-time compiler engineer
to join a world-class compiler team supporting Samsung ARM
microprocessor designs.
Responsibilities include research, design, development, analysis, and
optimization of the open-source LLVM compiler and Android open-source runtime libraries.
The compiler engineer will assist in establishing Samsung's direction and
2010 Jan 07
2
ZFS upgrade.
Hello,
Is there a way to upgrade my current ZFS version? I see the version could
be as high as 22.
I tried the command below. It seems that you can only upgrade by upgrading
the OS release.
[ilmcoso0vs056:root] / # zpool upgrade -V 16 tank
invalid version '16'
usage:
upgrade
upgrade -v
upgrade [-V version] <-a | pool ...>
[ilmcoso0vs056:root] /
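(A pool can only be upgraded to a version the running OS supports, which is why -V 16 is rejected here; the workable sequence is:)
# zpool upgrade -v      # list the versions this OS release supports
# zpool upgrade tank    # upgrade tank to the newest supported version
Reaching version 22 would first require an OS release that supports it.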
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all,
we have just bought a sun X2200M2 (4GB / 2 opteron 2214 / 2 disks 250GB
SATA2, solaris 10 update 4)
and a sun STK 2540 FC array (8 disks SAS 146 GB, 1 raid controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this array.
I have created 2 volumes on the array
in RAID0 (stripe of 128 KB) presented to the host
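(A sketch of the mirrored pool over the two array volumes; the device names are hypothetical:)
# zpool create tank mirror c2t0d0 c3t0d0
# zpool status tank
With the mirror in ZFS rather than on the controller, checksum errors on one side can be repaired from the other.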
2007 Jan 30
3
Export ZFS over NFS ?
I've got my first server deployment with ZFS.
Consolidating a pair of other file servers that used to have
a dozen or so NFS exports in /etc/dfs/dfstab, similar to:
/export/solaris/images
/export/tools
/export/ws
..... and so on....
For the new server, I have one large zfs pool;
-bash-3.00# df -hl
bigpool 16T 1.5T 15T 10% /export
that I am starting to
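(The old dfstab entries map onto per-dataset sharenfs properties; the dataset names below are hypothetical:)
# zfs create bigpool/export/tools
# zfs set sharenfs=on bigpool/export/tools
# zfs set sharenfs=rw=buildhost bigpool/export/ws
The sharenfs value is handed to share_nfs, so dfstab option strings carry over, and children inherit sharenfs from the parent, so one property set high in the tree can cover many exports.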
2011 Apr 07
0
Update LDOM bootdisk (ZFS) from control domain
Hello,
I am trying to update files on an LDOM boot disk from the Control Domain, but I
can't get it working fully.
I have a setup that allows me to quickly deploy LDOM guest domains. The golden
OS image of the guest domain boot disk is on a ZFS volume. This ZFS volume has
been snapshotted and cloned for quick deployment of guest domains.
The filesystem in both the control domain and guest domain is
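(A sketch of the snapshot/clone flow described; the names are hypothetical:)
# zfs snapshot pool/golden-vol@v1
# zfs clone pool/golden-vol@v1 pool/ldom1-boot
To change files inside the clone from the control domain, the pool held in the zvol would have to be imported there (e.g. zpool import -d /dev/zvol/dsk/pool) while the guest is down; that is an assumption about the setup, not a tested recipe.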
2008 Sep 03
8
SAS or SATA HBA with write cache
Anyone know of a SATA and/or SAS HBA with battery-backed write cache?
Seems like using a full-blown RAID controller and exporting each individual drive back to ZFS as a single LUN is a waste of power and $$$. Looking for any thoughts or ideas.
Thanks.
-Matt
2010 Mar 04
8
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10?
What besides zfs send/receive can be done to free the fragmented space?
One ZFS filesystem was used for some months to store large disk images (each 50 GByte) which were copied there with rsync. This ZFS filesystem reports 6.39 TByte usage with zfs list and only 2 TByte usage with du.
The other ZFS was used for similar
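(If snapshots exist on the dataset, they would explain the gap: du counts only live file data, while zfs list also counts blocks pinned by snapshots, and rsync rewriting 50 GByte images keeps old copies referenced. One way to check, on releases that support these options; the dataset name is hypothetical:)
# zfs list -t snapshot -r tank/images   # any snapshots holding space?
# zfs list -o space tank/images         # break USED into snapshot vs. dataset parts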
2006 Feb 25
9
trace("") prints hex dump table
I'm running
SunOS unknown 5.11 snv_33 i86pc i386 i86pc
and to print a blank line in the output of my script I tried adding:
trace("");
which printed
0 1 2 3 4 5 6 7 8 9 a b c d e f 0123456789abcdef
0: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
10: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
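(The empty string falls into trace()'s hex-dump path for byte arrays; the usual way to emit a blank line in D is:)
printf("\n");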
2006 Jun 13
4
ZFS panic while mounting lofi device?
I believe ZFS is causing a panic whenever I attempt to mount an iso image (SXCR build 39) that happens to reside on a ZFS file system. The problem is 100% reproducible. I'm quite new to OpenSolaris, so I may be incorrect in saying it's ZFS's fault. Also, let me know if you need any additional information or debug output to help diagnose things.
Config:
bash-3.00#
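(For anyone reproducing this, the usual lofi mount sequence is below; the paths are hypothetical:)
# lofiadm -a /tank/isos/sxcr_b39.iso
/dev/lofi/1
# mount -F hsfs -o ro /dev/lofi/1 /mnt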
2006 Jun 08
7
Wrong reported free space over NFS
NFS server (b39):
bash-3.00# zfs get quota nfs-s5-s8/d5201 nfs-s5-p0/d5110
NAME             PROPERTY  VALUE  SOURCE
nfs-s5-p0/d5110  quota     600G   local
nfs-s5-s8/d5201  quota     600G   local
bash-3.00#
bash-3.00# df -h | egrep "d5201|d5110"
nfs-s5-p0/d5110 600G 527G 73G 88% /nfs-s5-p0/d5110