similar to: L2ARC in clusters

Displaying 20 results from an estimated 2000 matches similar to: "L2ARC in clusters"

2012 May 29
1
need help to find type I error rate for modified F statistic
Hello everyone, I want to calculate the Type I error rate for a modified F statistic for one-way robust ANOVA. I need to find the j-group trimmed mean and the winsorized sum of squared deviations. I have attached my code for j=2 to keep it simple; originally I have j=4. I need to run it 1000 times. Hope you can help. My problem is: (i) the F-test values obtained from my simulation below are negative
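For reference, the two building blocks named here are the trimmed mean and the winsorized sum of squared deviations. A minimal sketch in Python (the thread's actual code is R; the trim proportion gamma=0.2 is an assumed, conventional choice):

import numpy as np

def trimmed_mean(x, gamma=0.2):
    # Drop the g smallest and g largest values, g = floor(gamma * n),
    # and average what remains.
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(gamma * len(x)))
    return x[g:len(x) - g].mean()

def winsorized_ss(x, gamma=0.2):
    # Replace the g smallest values with the (g+1)-th smallest and the
    # g largest with the (g+1)-th largest, then take the sum of squared
    # deviations around the winsorized mean.
    x = np.sort(np.asarray(x, dtype=float))
    g = int(np.floor(gamma * len(x)))
    w = x.copy()
    w[:g] = x[g]
    w[len(x) - g:] = x[len(x) - g - 1]
    return ((w - w.mean()) ** 2).sum()

Both quantities are non-negative by construction, so a negative "F" value is a strong hint that one of these pieces (usually the winsorized variance term) is being computed with the wrong sign or the wrong degrees of freedom.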
2012 Jun 14
0
fixed trimmed mean for j-group
Hello... I want to find the empirical Type I error rate using the fixed trimmed mean. To make it easy, I am referring to the journal article at http://www.academicjournals.org/ajmcsr/PDF/pdf2011/Yusof%20et%20al.pdf. I have already run the program and there are no errors in it, but I get zero for the empirical Type I error rate. The empirical Type I error rate given in the journal
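The empirical Type I error rate is simply the rejection frequency over simulations run with the null hypothesis true. A minimal sketch of that loop in Python (pvalue_of_test is a placeholder standing in for the paper's fixed-trimmed-mean test; a plain one-way ANOVA F test is used here purely for illustration):

import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_sim, alpha, j, n = 1000, 0.05, 2, 20

def pvalue_of_test(groups):
    # Placeholder: substitute the fixed-trimmed-mean test here.
    return f_oneway(*groups).pvalue

# All groups are drawn from the same N(0,1), so H0 holds by construction.
rejections = sum(
    pvalue_of_test([rng.normal(size=n) for _ in range(j)]) < alpha
    for _ in range(n_sim))

print("empirical Type I error rate:", rejections / n_sim)  # expect ~alpha

A rate of exactly zero over 1000 replications is far below any plausible sampling noise around 0.05, which points at the test (or the p-value extraction) rather than the simulation loop.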
2012 Jul 07
0
fixed trimmed mean for group
Hello, I haven't found errors in your code. I implemented the test in the paper (the first one, the fixed symmetric trimmed mean) and it also gives me zero Type I errors at alpha = 0.05. Try looking at the value of min(pv), or plot the histogram of 'pv' with hist(pv), and you'll see that there are no significant p-values at that level. Anyway, I'll continue to look at it, but my first
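Under a true null hypothesis a correctly calibrated test yields approximately Uniform(0,1) p-values, so a histogram piled up away from zero (and min(pv) well above alpha) indicates a conservative or mis-scaled statistic rather than a slip in the simulation loop. A small diagnostic sketch in Python (pv is assumed to hold the simulated p-values):

import numpy as np

def diagnose(pv, alpha=0.05):
    # Sanity checks on p-values simulated under H0.
    pv = np.asarray(pv, dtype=float)
    print("min p-value:", pv.min())
    print("rejection rate:", (pv < alpha).mean())  # ~alpha if calibrated
    # Bin counts stand in for R's hist(pv); under H0 they should be
    # roughly flat across the ten bins.
    counts, _ = np.histogram(pv, bins=10, range=(0.0, 1.0))
    print("bin counts:", counts)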
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the block size of a particular file. I know the block size of a file is decided at creation time, as a function of the write sizes seen and the recordsize property of the dataset. How can I access that information? Some zdb magic? -- Jesus Cea Avion, jcea at
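Short of zdb, one userland cross-check is stat(2): ZFS has commonly reported the block size chosen for a file in st_blksize. A quick sketch in Python (whether st_blksize tracks the per-file block size on a given platform and release is an assumption worth verifying against zdb output):

import os

def file_blocksize(path):
    # st_blksize is the preferred I/O size; on ZFS it has commonly
    # mirrored the block size picked for the file at creation time.
    return os.stat(path).st_blksize

print(file_blocksize("/etc/passwd"))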
2009 Aug 14
16
What's eating my disk space? Missing snapshots?
Please can someone take a look at the attached file, which shows the output on my machine of zfs list -r -t filesystem,snapshot -o space rpool/export/home/matt. The USEDDS figure of ~2GB is what I would expect, and is the same figure reported by the Disk Usage Analyzer. Where is the remaining 13.8GB USEDSNAP figure coming from? If I total up the list of zfs-auto snapshots, it adds up to about 4.8GB,
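One reason the per-snapshot USED column rarely adds up to USEDSNAP: blocks referenced by two or more snapshots are charged to none of them individually, only to the USEDSNAP total. A sketch that makes the gap visible by summing the machine-readable per-snapshot USED values (Python shelling out to zfs list; the dataset name is the one from the message, and -p assumes a build recent enough to support parseable output):

import subprocess

DATASET = "rpool/export/home/matt"

# -H: no header, -p: exact byte counts, -o used: just the USED column
out = subprocess.check_output(
    ["zfs", "list", "-H", "-p", "-r", "-t", "snapshot",
     "-o", "used", DATASET], text=True)

total = sum(int(line) for line in out.split())
print(f"sum of per-snapshot USED: {total / 2**30:.1f} GiB")
# Compare with the USEDSNAP column of: zfs list -o space rpool/export/home/matt
# Any shortfall is space shared between two or more snapshots.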
2012 Dec 12
20
Solaris 11 System Reboots Continuously Because of a ZFS-Related Panic (7191375)
I've hit this bug on four of my Solaris 11 servers. Looking for anyone else who has seen it, as well as comments/speculation on cause. This bug is pretty bad. If you are lucky you can import the pool read-only and migrate it elsewhere. I've also tried setting zfs:zfs_recover=1,aok=1 with varying results. http://docs.oracle.com/cd/E26502_01/html/E28978/gmkgj.html#scrolltoc
2010 Mar 29
19
sharing a ssd between rpool and l2arc
Hi, as Richard Elling wrote earlier: "For more background, low-cost SSDs intended for the boot market are perfect candidates. Take a X-25V @ 40GB and use 15-20 GB for root and the rest for an L2ARC. For small form factor machines or machines with max capacity of 8GB of RAM (a typical home system) this can make a pleasant improvement over a HDD-only implementation." For the upcoming
2010 Mar 05
17
why is the L2ARC device used to store files?
Greetings all, I have created a pool that consists of a hard disk and an SSD as a cache: zpool create hdd c11t0d0p3; zpool add hdd cache c8t0d0p0 (the cache device). I ran an OLTP benchmark to emulate a DBMS. Once I ran the benchmark, the pool started creating the database files on the SSD cache device. Can anyone explain why this is happening? Isn't the L2ARC supposed to absorb the evicted data
2009 Aug 04
2
flowadm -i 1 - shows only first flow
Hi, OSOL, b118
> milek at r600:~# flowadm show-flow
> FLOW        LINK   IPADDR   PROTO   PORT   DSFLD
> local_25    iwh0   --       tcp     25     --
> local_22    iwh0   --       tcp     22     --
> milek at r600:~# flowadm show-flow -s -i 1
> FLOW        IPACKETS   RBYTES   IERRORS
2010 Apr 23
12
Re-attaching zpools after machine termination [amazon ebs & ec2]
I'm trying to provide some "disaster-proofing" on Amazon EC2 by using a ZFS-based EBS volume for primary data storage with Amazon S3-backed snapshots. My aim is to ensure that, should the instance terminate, a new instance can spin up, attach the EBS volume and auto-/re-configure the zpool. I've created an OpenSolaris 2009.06 x86_64 image with the zpool structure
2007 Sep 18
3
ZFS and encryption
Hello zfs-discuss, I wonder if ZFS will be able to take any advantage of Niagara's built-in crypto? -- Best regards, Robert Milkowski mailto:rmilkowski at task.gda.pl http://milek.blogspot.com
2008 Jan 18
7
how to relocate a disk
Hi, I'd like to move a disk from one controller to another. This disk is part of a mirror in a ZFS pool. How can one do this without having to export/import the pool or reboot the system? I tried taking it offline and online again, but then zpool says the disk is unavailable. Trying a zpool replace didn't work because it complains that the "new" disk is part of a
2018 Jun 07
2
[lld] ObjFile::createRegular is oblivious of PendingComdat
I encountered a segfault when using lld to cross-compile for Windows (+MinGW) from Linux. The problem happens with objects built by gcc: ObjFile::createRegular considers a PendingComdat to be valid and later causes an illegal pointer dereference. The following patch fixes the crash: diff --git a/COFF/InputFiles.cpp b/COFF/InputFiles.cpp index 9e2345b0a..f47d612df 100644
2008 May 14
2
vdev cache - comments in the source
Hello zfs-code, http://cvs.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/vdev_cache.c

72 * All i/os smaller than zfs_vdev_cache_max will be turned into
73 * 1<<zfs_vdev_cache_bshift byte reads by the vdev_cache (aka software
74 * track buffer). At most zfs_vdev_cache_size bytes will be kept in each
75 * vdev's vdev_cache.

While it
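Read the comment as a rounding rule: any read smaller than zfs_vdev_cache_max is inflated to a 1<<zfs_vdev_cache_bshift byte read and cached. Worked out under the assumed historical defaults (bshift=16, max=16 KB, size=10 MB; the tunables on a given build may differ):

# Assumed historical defaults -- check your build's actual tunables.
zfs_vdev_cache_bshift = 16             # log2 of the inflated read size
zfs_vdev_cache_max    = 16 * 1024      # reads below this get inflated
zfs_vdev_cache_size   = 10 * 1024**2   # per-vdev cache budget

inflated = 1 << zfs_vdev_cache_bshift
print(inflated)                         # 65536: a small read becomes 64 KiB
print(zfs_vdev_cache_size // inflated)  # 160 such 64 KiB lines per vdev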
2018 Jun 07
2
[lld] ObjFile::createRegular is oblivious of PendingComdat
Can you upload a reproducer? You can create it using the /linkrepro flag. Peter On Thu, Jun 7, 2018 at 2:50 PM, Reid Kleckner via llvm-dev < llvm-dev at lists.llvm.org> wrote: > GCC does comdats completely differently from the spec. Since you contacted > me about this off of the mailing list, I started investigating what they > do, and it is completely different. It's
2018 Jan 20
3
Stale locks on shards
Hi all! One hypervisor in our virtualization environment crashed and now some of the VM images cannot be accessed. After investigation we found out that there were lots of images that still had an active lock on the crashed hypervisor. We were able to remove locks from "regular files", but it doesn't seem possible to remove locks from shards. We are running GlusterFS 3.8.15 on all
2018 Jan 23
2
Stale locks on shards
On Tue, Jan 23, 2018 at 1:38 PM, Samuli Heinonen <samppah at neutraali.net> wrote: > Pranith Kumar Karampuri kirjoitti 23.01.2018 09:34: > >> On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen >> <samppah at neutraali.net> wrote: >> >> Hi again, >>> >>> here is more information regarding issue described earlier >>> >>>
2018 Jan 23
3
Stale locks on shards
On Mon, Jan 22, 2018 at 12:33 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi again, > > here is more information regarding issue described earlier > > It looks like self healing is stuck. According to "heal statistics" crawl > began at Sat Jan 20 12:56:19 2018 and it's still going on (It's around Sun > Jan 21 20:30 when writing this).
2018 Jan 21
0
Stale locks on shards
Hi again, here is more information regarding issue described earlier It looks like self healing is stuck. According to "heal statistics" crawl began at Sat Jan 20 12:56:19 2018 and it's still going on (It's around Sun Jan 21 20:30 when writing this). However glustershd.log says that last heal was completed at "2018-01-20 11:00:13.090697" (which is 13:00 UTC+2).
2018 Jan 25
2
Stale locks on shards
On Thu, Jan 25, 2018 at 2:27 AM, Samuli Heinonen <samppah at neutraali.net> wrote: > Hi! > > Thank you very much for your help so far. Could you please give an example > command showing how to use aux-gid-mount to remove locks? "gluster vol clear-locks" > seems to mount the volume by itself. > You are correct; sorry, this was implemented around 7 years back and I forgot