Displaying 20 results from an estimated 7000 matches similar to: "zfs chattiness at boot time"
2008 Nov 03
4
cleaning user properties
I have a little question about user properties; I have two filesystems:
rpool/export/home/luca
and
rpool/export/home/luca/src
in these two I have one user property, set with:
zfs set net.morettoni:test=xyz rpool/export/home/luca
zfs set net.morettoni:test=123 rpool/export/home/luca/src
now I need to *clear* (remove) the property from the
rpool/export/home/luca/src filesystem, but if I use the
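A user property is removed with zfs inherit; note that if an ancestor also sets the property, the child then inherits that value instead of losing it, so clearing it everywhere means running inherit on the parent as well. A sketch using the thread's datasets:

# zfs inherit net.morettoni:test rpool/export/home/luca/src
# zfs get net.morettoni:test rpool/export/home/luca/src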
2007 Apr 10
15
Poor man's backup by attaching/detaching mirror drives on a _striped_ pool?
Hi,
one quick&dirty way of backing up a pool that is a mirror of two devices is to
zpool attach a third one, wait for the resilvering to finish, then zpool detach
it again.
The third device then can be used as a poor man's simple backup.
Has anybody tried it yet with a striped mirror? What if the pool is
composed of two mirrors? Can I attach devices to both mirrors, let them
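With a pool made of two mirrors, the attach/detach trick has to be applied once per mirror vdev; a sketch with hypothetical device names:

# zpool attach tank c1t0d0 c3t0d0
# zpool attach tank c2t0d0 c3t1d0
# zpool status tank          (wait for resilver to complete)
# zpool detach tank c3t0d0
# zpool detach tank c3t1d0

Caveat: a detached disk's label is invalidated, so the detached halves cannot be imported as a backup pool on their own; zpool split was later added to address exactly this.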
2009 Apr 09
8
ZIL SSD performance testing... -IOzone works great, others not so great
Hi folks,
I would appreciate it if someone can help me understand some weird
results I'm seeing while trying to do performance testing with an
SSD-offloaded ZIL.
I'm attempting to improve my infrastructure's burstable write capacity
(ZFS-based WebDAV servers), and naturally I'm looking at implementing
SSD-based ZIL devices.
I have a test machine with the
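For reference, a dedicated log device is added with zpool add; a sketch with a hypothetical SSD device:

# zpool add tank log c4t0d0
# zpool status tank

Note that the slog only absorbs synchronous writes (fsync/O_DSYNC, NFS and WebDAV commits); a benchmark issuing purely asynchronous writes bypasses it entirely, which is a common source of confusing ZIL test results.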
2007 Oct 29
9
zpool question
hello folks, I am running Solaris 10 U3 and I have a small problem that I don't
know how to fix...
I had a pool of two drives:
bash-3.00# zpool status
pool: mypool
state: ONLINE
scrub: none requested
config:
        NAME          STATE   READ WRITE CKSUM
        mypool        ONLINE     0     0     0
          emcpower0a  ONLINE     0     0     0
          emcpower1a  ONLINE
2007 Apr 12
10
How to bind the oracle 9i data file to zfs volumes
Experts,
I'm installing Oracle 9i on Solaris 10 11/06 (Update 3). I created some
ZFS volumes which will be used for Oracle data files, as:
# zfs create -V 200m ora_pool/controlfile01_200m
# zfs create -V 800m ora_pool/system_800m
...
# ls -l /dev/zvol/rdsk/ora_pool
lrwxrwxrwx 1 root root 39 Apr 11 12:23
controlfile01_200m -> ../../../../devices/pseudo/zfs@0:1c,raw
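Assuming the zvols above, one common pattern for raw database devices is to give the oracle user ownership of the device nodes and reference those paths as datafiles (the ownership commands are an assumption here, not from the thread):

# chown oracle:dba /dev/zvol/rdsk/ora_pool/controlfile01_200m
# chown oracle:dba /dev/zvol/rdsk/ora_pool/system_800m

Oracle would then be pointed at /dev/zvol/rdsk/ora_pool/... (raw) or /dev/zvol/dsk/ora_pool/... (block) when creating the database.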
2009 Apr 09
3
vdev_disk_io_start() sending NULL pointer in ldi_ioctl()
Hi All,
I have a corefile where we see a NULL pointer dereference panic, as we have
(deliberately) sent a NULL pointer for the return value.
vdev_disk_io_start()
...
...
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
    (uintptr_t)&zio->io_dk_callback,
    FKIOCTL, kcred, NULL);
ldi_ioctl() expects last parameter as an
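Per ldi_ioctl(9F), the final parameter is an int * that receives the ioctl's return value. For post-mortem inspection of the resulting panic, a typical mdb session on the crash dump might look like this (dump file names assumed):

# mdb -k unix.0 vmcore.0
> ::status
> ::stack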
2009 Jan 22
3
Failure to boot from zfs on Sun v880
Hi.
I am trying to move the root volume from an existing svm mirror to a zfs
root. The machine is a Sun V880 (SPARC) running nv_96, with OBP version
4.22.34 which is AFAICT the latest.
The svm mirror was constructed as follows:
/
  d4  m 18GB d14
  d14 s 35GB c1t0d0s0
  d24 s 35GB c1t1d0s0
swap
  d3
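On a build with ZFS boot support such as nv_96, the usual SVM-to-ZFS-root migration is Live Upgrade into a new pool; a sketch with a hypothetical spare slice (note that a SPARC root pool must live on an SMI-labeled slice):

# zpool create rpool c1t2d0s0
# lucreate -n zfsBE -p rpool
# luactivate zfsBE
# init 6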
2007 Apr 10
3
Renaming a pool?
Hi all,
I have a pool called tank/home/foo and I want to rename it to
tank/home/bar. What's the best way to do this (the zfs and
zpool man pages don't have a "rename" option)?
One way I can think of is to create a clone of tank/home/foo
called tank/home/bar, and then destroy the former. Is that
the best (or even only) way?
TIA,
--
Rich Teer, SCSA, SCNA, SCSECA,
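For a filesystem (as opposed to a pool), zfs rename does exist and avoids the clone-and-destroy dance:

# zfs rename tank/home/foo tank/home/bar

Only whole pools lack a rename subcommand; a pool can be renamed as a side effect of zpool export followed by zpool import oldname newname.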
2007 Apr 19
9
ZFS disables nfs/server on a host
I have an Ultra 10 client running Sol10 U3 that has a zfs pool set up on the extra space of the internal IDE disk. There's just the one fs and it is shared with the sharenfs property. When this system reboots nfs/server ends up getting disabled and this is the error from the SMF logs:
[ Apr 16 08:41:22 Executing start method ("/lib/svc/method/nfs-server start") ]
[ Apr 16
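The usual first steps with a service that ends up disabled like this are to ask SMF why, then clear the maintenance state once the underlying cause is fixed:

# svcs -x nfs/server
# svcadm clear nfs/server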
2007 Apr 19
14
Permanently removing vdevs from a pool
Is it possible to gracefully and permanently remove a vdev from a pool without data loss? The type of pool in question here is a simple pool without redundancies (i.e. JBOD). The documentation mentions, for instance, offlining, but without going into the end results of doing that. The thing I'm looking for is an option to evacuate, for lack of a better word, the data from a specific
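At the time of this thread ZFS had no way to evacuate and remove a top-level data vdev; the only route was copying the data to a new pool, e.g. via snapshots and send/receive (pool names hypothetical):

# zfs snapshot -r tank@evac
# zfs send tank@evac | zfs receive newpool/tank

followed by destroying and re-creating the original pool without the unwanted device.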
2009 Sep 24
5
Checksum property change does not change pre-existing data - right?
My understanding is that if I "zfs set checksum=<different>" to change the algorithm, this will change the checksum algorithm for all FUTURE data blocks written, but will not in any way change the checksum of previously written data blocks.
I need to corroborate this understanding. Could someone please point me to a document that states this? I have searched and searched
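That understanding matches the on-disk design: the checksum algorithm is recorded per block in the block pointer, so existing blocks keep their original checksums and only newly written blocks use the new algorithm. For example:

# zfs set checksum=sha256 tank/fs

changes nothing on disk by itself; blocks adopt sha256 only as they are rewritten.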
2010 Mar 27
16
zpool split problem?
Zpool split is a wonderful feature and it seems to work well,
and the choice of which disk got which name was perfect!
But there seems to be an odd anomaly (at least with b132).
Started with c0t1d0s0 running b132 (root pool is called rpool)
Attached c0t0d0s0 and waited for it to resilver
Rebooted from c0t0d0s0
zpool split rpool spool
Rebooted from c0t0d0s0, both rpool and spool were mounted
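For reference, a split pool is normally left exported and must be imported explicitly, e.g.:

# zpool split rpool spool
# zpool import spool

so having spool come up mounted after a reboot, without an explicit import, is indeed surprising.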
2009 Feb 02
8
ZFS core contributor nominations
The time has come to review the current Contributor and Core contributor
grants for ZFS. Since all of the ZFS core contributor grants are set
to expire on 02-24-2009 we need to renew the members that are still
contributing at core contributor levels. We should also add some new
members to both Contributor and Core contributor levels.
First the current list of Core contributors:
Bill
2008 Jul 01
17
Memory leak scripts
Hola, I am trying to isolate the memory leak I suspect in a mailman
installation. I found:
http://blogs.sun.com/sanjeevb/date/200506
It gives an error:
god@irt-smtp-02:~ 9:21am 65 # ./memleak.d 10312
dtrace: failed to compile script ./memleak.d: line 3: probe description
pid10312:libc.so.1:malloc:entry does not match any probes
I am on SunOS 5.10 Generic_127112-07 i86pc i386 i86pc
Are
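One workaround when a pid probe description fails to match is to drop the module field so the probe matches malloc wherever it is defined; a sketch against the same process:

# dtrace -n 'pid$target::malloc:entry { @[ustack()] = sum(arg0); }' -p 10312

This aggregates requested allocation bytes by user stack, a reasonable starting point for leak hunting.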
2015 Mar 10
3
[LLVMdev] Chatty C++API code generation
Hi all,
when I have c code like
--- c code -------------
void callAFunction(float w) { } /* stub so the example links (assumed) */
struct stest {                  /* declares the structure type person */
    int age;
    float weight;
} foo = {44, 67.2};             /* declares a variable of type person */
int main() {
    callAFunction(foo.weight);
    return 0;
}
------------------------
The generated C++ API code seems too chatty to me, in the sense that
the foo.weight's data
2009 Nov 28
2
ZFS CIFS, smb.conf (smb/server) and LDAP
All;
I am deeply sorry if this topic has been rehashed, checksummed,
de-duplicated and archived before.
But I just need a small clarification.
/etc/sfw/smb.conf is necessary only for smb/server to function properly,
but is the smb/server SMF service necessary for ZFS sharesmb to work?
I am trying to set up an OpenSolaris file server acting as a Windows PDC
with SAMBA/LDAP integration on the open
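To the question itself: ZFS sharesmb is served by the native in-kernel CIFS service (svc:/network/smb/server), which is distinct from SAMBA and does not read /etc/sfw/smb.conf; a minimal sketch:

# zfs set sharesmb=on rpool/export/share
# svcadm enable -r smb/server

A SAMBA/LDAP PDC is a separate stack and would not use sharesmb at all.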
2006 Sep 18
7
drbd using zfs send/receive?
hi everyone,
I am planning on creating a local SAN via NFS(v4) and several redundant
nodes.
I have been using DRBD on linux before and now am asking whether some of
you have experience on on-demand network filesystem mirrors.
I have little Solaris sysadmin know-how yet, but I am interested in
whether there is on-demand support for sending snapshots, i.e. not
via a cron job, but via a
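zfs send has no built-in daemon-driven replication; the usual pattern is incremental snapshot shipping, scripted or looped (names hypothetical):

# zfs snapshot tank/fs@t2
# zfs send -i tank/fs@t1 tank/fs@t2 | ssh mirrorhost zfs receive -F tank/fs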
2011 Mar 04
13
cannot replace c10t0d0 with c10t0d0: device is too small
In 2007 I bought 6 WD1600JS 160GB SATA disks and used 4 to create a raidz storage pool, then shelved the other two for spares. One of the disks failed last night, so I shut down the server and replaced it with a spare. When I tried to zpool replace the disk I get:
zpool replace tank c10t0d0
cannot replace c10t0d0 with c10t0d0: device is too small
The 4 original disk partition tables look like
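A useful first step is comparing the labels of an original disk and the replacement, since nominally identical disks can differ by a few sectors or carry different label types:

# prtvtoc /dev/rdsk/c10t0d0s0

Running this against a surviving original and the new disk shows whether the slice being handed to ZFS really is smaller.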
2006 Dec 12
23
ZFS Storage Pool advice
This question concerns ZFS. We have a Sun Fire V890 attached to an EMC disk array. Here's our plan to incorporate ZFS:
On our EMC storage array we will create 3 LUNs. Now how would ZFS best be used for performance?
What I'm trying to ask is: if you have 3 LUNs and you want to create a ZFS storage pool, would it be better to have a storage pool per LUN or combine the 3
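The two layouts being weighed might look like this, with hypothetical emcpower device names:

# zpool create tank emcpower0c emcpower1c emcpower2c

i.e. one pool dynamically striped across all three LUNs, versus three separate single-LUN pools. Since the array already provides redundancy, a single striped pool gives ZFS the most devices to schedule I/O across, at the cost of a single shared failure domain.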
2007 Mar 27
4
ZFS filesystem online backup question
I have to back up many filesystems which are changing while the machines are heavily loaded.
The idea is to back up online - this should avoid read I/O from the disks, since
the data should come from cache.
Now I'm using a script that does a snapshot and zfs send.
I want to automate this operation and add a new option to zfs send:
zfs send [-w sec] [-i <snapshot>] <snapshot>
for example
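Note that -w here is the poster's proposed option, not an existing zfs send flag. The same effect with existing tools is a small snapshot-then-send script, e.g.:

# SNAP=tank/fs@`date +%Y%m%d`
# zfs snapshot $SNAP
# zfs send $SNAP > /backup/tank-fs.zsend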