similar to: ZFS-DMU benchmark on Linux

Displaying 20 results from an estimated 500 matches similar to: "ZFS-DMU benchmark on Linux"

2007 Feb 14
5
Need help making lsof work with ZFS
I contacted the author of 'lsof' regarding the missing ZFS support. The command works, but fails to display any files that a process has open on a ZFS filesystem. He indicated that the required ZFS kernel structure definitions (header files) are not shipped with the OS. He further indicated that he rummaged through the OpenSolaris source tree and the files don't
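A possible workaround worth noting (my suggestion, not something the thread confirms): Solaris's bundled pfiles(1) reads open-file information through /proc rather than through filesystem headers, so it can list files a process holds open on ZFS even where lsof cannot. The PID and pool path below are hypothetical:

    # pfiles walks /proc and needs no ZFS kernel headers
    pfiles 1234
    # check every process whose arguments mention the pool path
    pfiles $(pgrep -f /tank)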
2006 Jan 04
8
Using same ZFS under different kernel versions
I built two zfs filesystems using b29 (from brandz). I then re-installed Solaris Express b28, preserving the zfs filesystems. When I tried to "zpool import" my zfs filesystems I got a kernel panic:

> debugging crash dump vmcore.0 (32-bit) from blackbird
> operating system: 5.11 snv_28 (i86pc)
> panic message:
> ZFS: bad checksum (read on /dev/dsk/c1d0p0 off 24d5e000: zio
2008 Mar 20
7
ZFS panics solaris while switching a volume to read-only
Hi, I just found out that ZFS triggers a kernel panic while switching a mounted volume into read-only mode. The system is attached to a Symmetrix, and all ZFS I/O goes through PowerPath. I ran some I/O-intensive stuff on /tank/foo and switched the device into read-only mode at the same time (symrdf -g bar failover -establish). ZFS went 'bam' and triggered a panic: WARNING: /pci at
2006 Nov 09
16
Some performance questions with ZFS/NFS/DNLC at snv_48
Hello. We're currently using a Sun Blade 1000 (2x750MHz, 1GB RAM, 2x160MB/s mpt SCSI buses, skge GigE network) as an NFS backend with ZFS for distribution of free software like Debian (cdimage.debian.org, ftp.se.debian.org) and have run into some performance issues. We are running SX snv_48 and have run with a raidz2 of 7x300GB for a while now, and just added another 7x300GB raidz2 today, but
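For readers unfamiliar with the layout described, a minimal sketch of building a 7-disk raidz2 pool and later growing it with a second raidz2 vdev; the pool and device names are hypothetical, not the poster's:

    zpool create tank raidz2 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 c2t6d0
    zpool add tank raidz2 c3t0d0 c3t1d0 c3t2d0 c3t3d0 c3t4d0 c3t5d0 c3t6d0
    zpool status tank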
2009 Apr 09
3
vdev_disk_io_start() sending NULL pointer in ldi_ioctl()
Hi All, I have a corefile where we see a NULL pointer dereference PANIC, as we have (deliberately) passed a NULL pointer for the return value:

vdev_disk_io_start()
...
error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
    (uintptr_t)&zio->io_dk_callback, FKIOCTL, kcred, NULL);

ldi_ioctl() expects the last parameter as an
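ldi_ioctl()'s last argument is an int * through which the ioctl's rval is returned. A minimal sketch of the defensive fix, assuming the vdev_disk.c context quoted above (an illustration, not the actual committed patch):

    /* Pass a real rval buffer instead of NULL, since code downstream
     * of ldi_ioctl() may dereference rvalp unconditionally. */
    int rval = 0;
    error = ldi_ioctl(dvd->vd_lh, zio->io_cmd,
        (uintptr_t)&zio->io_dk_callback, FKIOCTL, kcred, &rval);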
2008 Sep 08
1
6745678 zio->io_checksum == ZIO_CHECKSUM_SHA256_CCM_MAC (0x5 == 0x9), file: zio.c, line: 1498
Author: Darren Moffat <Darren.Moffat at Sun.COM>
Repository: /hg/zfs-crypto/gate
Latest revision: 32a041998ab168dc335d487020fc0cb59c85d81f
Total changesets: 1
Log message: 6745678 zio->io_checksum == ZIO_CHECKSUM_SHA256_CCM_MAC (0x5 == 0x9), file: zio.c, line: 1498
Files: update: usr/src/uts/common/fs/zfs/zio.c
2013 Jul 19
2
9.2PRERELEASE ZFS panic in lzjb_compress
Hi, Running 9.2-PRERELEASE #19 r253313 I got the following panic:

Fatal trap 12: page fault while in kernel mode
cpuid = 22; apic id = 46
fault virtual address = 0xffffff827ebca30c
fault code = supervisor read data, page not present
instruction pointer = 0x20:0xffffffff81983055
stack pointer = 0x28:0xffffffcf75bd60a0
frame pointer = 0x28:0xffffffcf75bd68f0
2008 Jun 10
2
How to join data.frames and vectors of different length in an intelligent way?
I have a data set something like this:

"YYYY", "Value"
1972 , 117
1984 , 73
1969 , 92
1976 , 113
1999 , 80
1996 , 78
1976 , 98
1984 , 106
1976 , 99

It could be created with:

> dafSamp <- data.frame(cbind(c(1972,1984,1969,1976,1999,1996,1976,1984,1976),c(117,73,92,113,80,78,98,106,99)))

The real dataset is of course much larger, approx. 100,000 samples
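One common answer to this kind of question is merge(), which joins two data.frames on a shared key and repeats the lookup rows across duplicated keys. A minimal sketch, assuming a hypothetical per-year lookup table yearInfo that is not part of the original post:

    # Named columns instead of cbind(), which would drop the names
    dafSamp  <- data.frame(YYYY  = c(1972, 1984, 1969, 1976, 1999, 1996, 1976, 1984, 1976),
                           Value = c(117, 73, 92, 113, 80, 78, 98, 106, 99))
    yearInfo <- data.frame(YYYY = 1969:1999, Norm = rnorm(31, 100, 10))
    # Left join: every row of dafSamp kept, matched by year
    merged <- merge(dafSamp, yearInfo, by = "YYYY", all.x = TRUE)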
2012 Feb 04
2
zpool fails with panic in zio_ddt_free()
Hello all, I am not sure my original mail got through to the list (I haven't received it back), so I attach it below. Anyhow, now I have a saved kernel crash dump of the system panicking when it tries to (I believe) deferred-release the corrupted deduped blocks which are no longer referenced by the userdata/blockpointer tree. As I previously wrote in my thread on unfixable
2006 Aug 21
1
[PATCH 3 of 6] dm-userspace internal libdmu support for userspace tool
# HG changeset patch
# User Ryan Grimm <grimm@us.ibm.com>
# Date 1156190589 18000
# Node ID a19a066dea764a70f06b4e4341229db92c2eb5c3
# Parent 53c5bcecfcfdb70cb3a2aed0adb564312988fbdd
dm-userspace internal libdmu support for userspace tool

Signed-off-by: Ryan Grimm <grimm@us.ibm.com>
Signed-off-by: Dan Smith <danms@us.ibm.com>

diff -r 53c5bcecfcfd -r a19a066dea76 tools/Makefile
2006 Nov 02
4
reproducible zfs panic on Solaris 10 06/06
Hi, I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:

#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the
2002 Nov 25
2
Q: Segmentation fault at all commands ?
Dear List, Sorry if this is trivial, but I can't find it in the manual, FAQ or mailing list (2002). I just installed R 1.6.0 (2002-10-01) on Linux R.H. 7.2 (2.4.9-32.5) on a DIGITAL AlphaServer 800 5/500. The program starts up nicely but exits immediately with a "Segmentation fault" at the first command. It seems that it doesn't matter what command; I tried q(), help(), version,
2007 Aug 02
3
ZFS, ZIL, vq_max_pending and OSCON
The slides from my ZFS presentation at OSCON (as well as some additional information) are available at http://www.meangrape.com/2007/08/oscon-zfs/ Jay Edwards jay at meangrape.com http://www.meangrape.com
2008 Feb 19
32
storing SOM epoch in EA
Good day, some time ago we discussed that it would be very helpful to store the epoch in the inode on the MDS. The perfect solution would be to store the epoch in the old inode body, but there is not much space for this in the body, and with DMU we'll have this problem again. Given that the minimal inode size we use on the MDS is 512 bytes, we can store up to 13 stripes in the body; larger EAs go to a dedicated block.
2002 Nov 21
5
PDC Problems...
Hi all, I've looked through the archives and I can't seem to find a solution, so here's my problem. I have three Win2k clients and one Samba server which I set up as a PDC (or at least I thought so). The domain is "THEMOLE", yet when I try to join the domain from the clients it says: "The following error occurred validating the name "THEMOLE". The specified
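For comparison, a minimal smb.conf sketch of the PDC role as it looked in Samba 2.x; this is my assumption of a typical setup, not the poster's actual configuration. The workgroup must match the domain name the clients try to join:

    [global]
       workgroup = THEMOLE
       security = user
       domain logons = yes
       domain master = yes
       local master = yes
       preferred master = yes
       os level = 65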
2009 Mar 03
8
zfs list extentions related to pNFS
Hi, I am soliciting input from the ZFS engineers and/or ZFS users on an extension to "zfs list". Thanks in advance for your feedback. Quick Background: The pNFS project (http://opensolaris.org/os/project/nfsv41/) is adding a new DMU object set type which is used on the pNFS data server to store pNFS stripe DMU objects. A pNFS dataset gets created with the "zfs
2006 May 19
11
tracking error to file
In my testing, I've found the following error:

zpool status -v
  pool: local
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption. Applications may be affected.
action: Restore the file in question if possible. Otherwise restore the
        entire pool from backup.
   see: http://www.sun.com/msg/ZFS-8000-8A
 scrub: none requested
2005 Apr 13
2
easy question: obtaining rw1080.exe
Dear All, Can anyone please tell me where I can obtain the compiled binary installation file for R version 1.8.0 (i.e. rw1080.exe)? I can only find the uncompiled source code on CRAN today. Thank you, Mary Wisz msw@dmu.dk
2011 Feb 01
1
zpool-poolname has 99 threads
After an upgrade of a busy server to Oracle Solaris 10 9/10, I notice a process called zpool-poolname that has 99 threads. This seems to be a limit, as it never goes above that. It is lower on workstations. The `zpool' man page says only:

Processes
    Each imported pool has an associated process, named zpool-poolname.
    The threads in this process are the pool's
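A quick way to confirm the per-pool thread count from a shell; the pool name below is hypothetical, and nlwp is the LWP-count column supported by Solaris ps:

    ps -o pid,nlwp,comm -p $(pgrep -f zpool-mypool)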
2010 Jul 25
4
zpool destroy causes panic
I'm trying to destroy a zfs array which I recently created. It contains nothing of value.

# zpool status
  pool: storage
 state: ONLINE
status: One or more devices could not be used because the label is
        missing or invalid. Sufficient replicas exist for the pool to
        continue functioning in a degraded state.
action: Replace the device using 'zpool replace'.
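When a pool's label state blocks a plain destroy, the documented force flag is the usual next step; a sketch under the assumption that the pool really holds nothing of value:

    # -f forcibly unmounts any active datasets before destroying the pool
    zpool destroy -f storage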