similar to: zfs? page faults

Displaying 20 results from an estimated 2000 matches similar to: "zfs? page faults"

2006 Jan 13
26
A couple of issues
I've been testing ZFS since it came out on b27 and this week I BFUed to b30. I've seen two problems, one I'll call minor and the other major. The hardware is a Dell PowerEdge 2600 with 2 3.2GHz Xeons, 2GB memory and a perc3 controller. I have created a filesystem for over 1000 users on it and take hourly snapshots, which destroy the one from 24 hours ago, except the ...
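For context, the rotation the poster describes can be written as a small cron-driven script; a minimal sketch, assuming a hypothetical dataset tank/home and 24 hourly slots:

    #!/bin/sh
    # Keep 24 rolling hourly snapshots: tag each snapshot with the hour
    # of day, so taking this hour's snapshot first destroys the one
    # taken 24 hours ago under the same tag.
    HOUR=`date +%H`
    zfs destroy tank/home@hourly-$HOUR 2>/dev/null
    zfs snapshot tank/home@hourly-$HOUR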
2008 Jun 16
1
"stuck" in kmdb due to dtrace breakpoint()
So I realize this is somewhat stupid, and I've actually gotten myself out of kmdb to kill my dtrace script, but this has happened in the past and I'm wondering if there's any better way around it than hitting :c a bunch of times. Say you set a breakpoint() to fire in a common function. This will drop you into kmdb where you can do some debugging; you take a look around ...
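For reference, the situation described comes from a destructive DTrace action like the following one-liner; the probed function here (fop_read) is just an illustrative example of a "common function":

    # breakpoint() drops the console into kmdb on every firing; requires
    # destructive actions (-w). On a hot function it fires again almost
    # immediately after each :c, which is the loop the poster is stuck in.
    dtrace -wn 'fbt::fop_read:entry { breakpoint(); }'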
2009 Jul 19
5
How to disable checksum offloading for OSOL DomU via kmdb at initial boot ?
Adding -kd to the extra line drops me into kmdb:

    root@ServerJaunty:/home/boris/nevada# xm create -c osol.install
    Using config file "./osol.install".
    Started domain osol.install (id=4)
    Loading kmdb...
    Welcome to kmdb
    Loaded modules: [ unix krtld genunix ]
    [0]>

I want to patch the kernel as happens when adding to /etc/system:

    set xnf:xnf_cksum_offload = 0
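Once at the kmdb prompt, a kernel variable can be patched with mdb write syntax; a sketch, assuming the xnf module's symbols are resolvable at that point (if the module is not loaded yet, a deferred ::bp on an xnf function would be needed first):

    [0]> xnf`xnf_cksum_offload/W 0
    [0]> :c

The /W format writes a 32-bit value at the symbol's address, after which :c resumes the boot.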
2006 Jun 12
3
ZFS + Raid-Z pool size incorrect?
I'm seeing odd behaviour when I create a ZFS raidz pool using three disks. The output of "zpool status" shows the pool size as the size of the three disks combined (as if it were a RAID 0 volume). This isn't expected behaviour, is it? When I create a mirrored volume in ZFS everything is as one would expect: the pool is the size of a single drive. My setup: Compaq ...
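The usual explanation is the tool, not the pool: zpool list reports raw capacity including parity, while zfs list reports usable space. A sketch with three hypothetical disks:

    zpool create tank raidz c1d0 c2d0 c3d0
    zpool list tank   # ~3x a single disk: raw size, parity included
    zfs list tank     # ~2x a single disk: usable space after parity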
2008 Mar 26
25
Failure to install SNV85 DomU at Xen 3.2 CentOS 5.1 Dom0 (64-bit)
************************ Installation profile ************************

    [root@ServerRHL51 vm]# cat snv85.install
    name = "Solaris85pvm"
    vcpus = 1
    memory = "1024"
    kernel = "/usr/lib/xen-solaris/unix-85"
    ramdisk = "/usr/lib/xen-solaris/x86.miniroot-85"
    extra = "/platform/i86xpv/kernel/amd64/unix - nowin -B install_media=cdrom"
    disk = ...
2009 May 29
4
can Dtrace be used for the error injection?
Hi, is it somehow possible to use DTrace for error injection in a kernel module? Something like changing a function's return value, or the value of a register. If not, can it be implemented? I can do that via kmdb, but I need DTrace for the time synchronization: the chill() action. I cannot combine DTrace & kmdb: dtrace: failed to initialize dtrace: DTrace cannot be used when kernel debugger ...
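For the timing half, chill() is indeed available as a destructive action; a sketch (the probed function name is illustrative):

    # Spin for 500 ms inside the probed function; requires -w.
    # DTrace has no action for rewriting return values or registers,
    # which is why the poster falls back to kmdb for the injection itself.
    dtrace -wn 'fbt::xyz_ioctl:entry { chill(500000000); }'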
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool ...
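One plausible reading of those numbers, offered here as an assumption since the excerpt is cut off: 5.3T matches the raw size that zpool list reports, while 4.0T and 2.67T match the usable sizes that zfs list reports:

    # 4 x 1.5TB (decimal) drives, reported in binary units:
    #   raw:           4 x 1.5TB = 6.0TB ~= 5.4TiB   (zpool list)
    #   raidz1 usable: 3 x 1.5TB = 4.5TB ~= 4.1TiB   (zfs list)
    #   raidz2 usable: 2 x 1.5TB = 3.0TB ~= 2.7TiB   (zfs list)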
2011 Sep 02
5
Linux kernel crash due to ocfs2
Hello, we have a pair of IBM P570 servers running RHEL5.2, kernel 2.6.18-92.el5.ppc64. We have Oracle RAC on ocfs2 storage; ocfs2 is 1.4.7-1 for the above kernel (downloaded from the Oracle OSS site). Recently both servers have been crashing with the following error: Assertion failure in journal_dirty_metadata() at fs/jbd/transaction.c:1130: "handle->h_buffer_credits > 0" kernel BUG in ...
2007 Sep 04
2
Solaris HDD crash, Restore?
First time user of Solaris and ZFS ... I have Solaris 10 installed on the primary IDE drive of my motherboard. I also have a 4-disc RAIDZ setup on my sata connections. I set up a successful 1.5TB ZFS server with all discs operational. Well ... I was trying out something new and I borked my Solaris install HDD; the main problem is that I also had my RAIDZ zpool operational. I don't ...
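Pool metadata lives on the pool disks themselves, so a dead boot drive does not take the raidz down with it. The usual recovery after reinstalling the OS, sketched with a hypothetical pool name:

    zpool import          # scan attached disks for importable pools
    zpool import -f tank  # -f: the pool was last in use by the dead install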
2009 Oct 22
1
raidz "ZFS Best Practices" wiki inconsistency
<http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#RAID-Z_Configuration_Requirements_and_Recommendations> says that the number of disks in a RAIDZ should be (N+P) with N = {2,4,8} and P = {1,2}. But if you go down the page just a little further to the Thumper configuration examples, none of the 3 examples follows this recommendation! I will have 10 disks to put into a ...
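For what it's worth, 10 disks can be made to fit the recommendation; two layouts that do, with hypothetical device names:

    # Two 4+1 raidz1 vdevs (N=4, P=1):
    zpool create tank raidz  c0d0 c0d1 c0d2 c0d3 c0d4 \
                      raidz  c1d0 c1d1 c1d2 c1d3 c1d4
    # Or a single 8+2 raidz2 vdev (N=8, P=2):
    zpool create tank raidz2 c0d0 c0d1 c0d2 c0d3 c0d4 \
                             c1d0 c1d1 c1d2 c1d3 c1d4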
2006 Oct 12
3
Best way to carve up 8 disks
Ok, previous threads have led me to believe that I want to make raidz vdevs [0] either 3, 5 or 9 disks in size [1]. Let's say I have 8 disks. Do I want to create a zfs pool with a 5-disk vdev and a 3-disk vdev? Are there performance issues with mixing differently sized raidz vdevs in a pool? If there *is* a performance hit to mix like that, would it be greater or lesser than building ...
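The 5+3 split being considered looks like this as a single command (device names hypothetical); mixing unequal raidz vdevs is legal, and ZFS simply stripes writes across both:

    zpool create tank raidz c0d0 c0d1 c0d2 c0d3 c0d4 \
                      raidz c1d0 c1d1 c1d2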
2007 Apr 02
4
Convert raidz
Hi, is it possible to convert a live 3-disk zpool from raidz to raidz2? And is it possible to add 1 new disk to a raidz configuration without backups and recreating the zpool from scratch? Thanks.
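At the time of this thread the answer to both questions was no: a raidz vdev could neither be converted to raidz2 in place nor grown by one disk. The workaround is a send/receive migration to a freshly created pool, sketched here with hypothetical names:

    zfs snapshot oldpool/data@migrate
    zpool create newpool raidz2 c1d0 c2d0 c3d0 c4d0
    zfs send oldpool/data@migrate | zfs receive newpool/data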
2010 Jul 16
1
ZFS mirror to RAIDz?
Hi all, I currently have four drives in my OpenSolaris box. The drives are split into two mirrors, one mirror containing my rpool (disks 1 & 2) and one containing other data (disks 3 & 4). I'm running out of space on my data mirror and am thinking of upgrading it to two 2TB disks. I then considered replacing disk 2 with a 2TB disk and making a RAIDz from the three new drives.
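A mirror, unlike raidz, can at least be grown in place by swapping in larger disks one side at a time; a sketch with hypothetical device names (converting the mirror to raidz, by contrast, still requires recreating the vdev):

    zpool set autoexpand=on data  # let the pool grow to the new size
    zpool replace data c2d0 c4d0  # swap in one 2TB disk; wait for resilver
    zpool replace data c3d0 c5d0  # then the other

On builds predating the autoexpand property, the pool picks up the new size after an export/import instead.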
2007 Sep 14
3
Convert Raid-Z to Mirror
Is there a way to convert a 2 disk raid-z file system to a mirror without backing up the data and restoring? We have this:

    bash-3.00# zpool status
      pool: archives
     state: ONLINE
     scrub: none requested
    config:
            NAME        STATE     READ WRITE CKSUM
            archives    ONLINE       0     0     0
              raidz1    ONLINE       0     0     0
                c1t2d0  ONLINE       0     0     0
    ...
2009 Jun 26
4
Backing up OS drive?
I have one drive that I'm running OpenSolaris on and a 6-drive RAIDZ. Unfortunately I don't have another drive to mirror the OS drive, so I was wondering what the best way to back up that drive is. Can I mirror it onto a file on the RAIDZ, or will this cause problems before the array is loaded when booting? What about zfs send and recv to the RAIDZ?
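zfs send into the data pool is the more robust of the two options, since a file-backed mirror half on the raidz would not be available at early boot, as the poster suspects. A sketch, assuming the data pool is named tank:

    zfs create tank/rpool-backup
    zfs snapshot -r rpool@backup
    zfs send -R rpool@backup | zfs receive -d tank/rpool-backup

Note that restoring a boot pool from such a backup takes extra steps (boot blocks, the bootfs property) beyond the receive itself.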
2008 Feb 05
2
ZFS+ config for 8 drives, mostly reads
Hi, I posted in the Solaris install forum as well about the fileserver I'm building for media files, but wanted to ask more specific questions about zfs here. The setup is 8x500GB SATAII drives to start and down the road another 4x750 SATAII drives; the machine will mostly be doing reads and streaming data over GigaE. - I'm under the impression that ZFS+(ZFS2) is similar to ...
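By "ZFS+(ZFS2)" the poster presumably means raidz (raidz2). For a read-mostly media pool, one commonly suggested shape for the first 8 drives is a single raidz2 vdev, sketched with hypothetical device names; the later 4x750 drives could then join the pool as a second vdev via zpool add:

    zpool create media raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 \
                              c1t4d0 c1t5d0 c1t6d0 c1t7d0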
2006 Apr 06
15
A few Newbie questions about RAIDZ
1. I have a 4x18GB drive setup as RAIDZ. Now when thinking about it in terms of RAID5 I would expect to get (4-1)x18 worth of drive space, but df -h shows 4x18. Is this a bug or do I not understand? 2. Once again thinking in RAID5 terms, if I have 4x18GB and 12x9GB drives and I want to make a RAIDZ of all of them, I would expect the 18GB to be treated as 9GB, so the RAIDZ would be 16x9GB. Is ...
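The second expectation is correct: raidz sizes every member at the smallest disk in the vdev. The arithmetic, spelled out:

    # 4 x 18GB raidz1: usable = (4-1) x 18GB = 54GB
    #   (df should show ~54GB; zpool list shows the raw 72GB)
    # 4 x 18GB + 12 x 9GB in one raidz1: every disk counts as 9GB,
    #   i.e. 16 x 9GB, usable = (16-1) x 9GB = 135GB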
2009 Jan 20
2
How do you "re-attach" a 3 disk RAIDZ array to a new OS installation?
Hi, I'm completely new to Solaris, but have managed to bumble through installing it to a single disk, creating an additional 3-disk RAIDZ array and then copying over data from a separate NTFS-formatted disk onto the array using NTFS-3G. However, the single disk that was used for the OS installation has since died (it was very old) and I have had to reinstall 2008.11 from scratch onto a ...
2008 Feb 08
16
Dom0 issues: snv_79b and Tecra M9
Hi all, I have a Toshiba Tecra M9 and have not been able to boot it as dom0. This is running SXDE 01/08, snv79b. After booting under kmdb and setting moddebug=80000000 before booting the Solaris kernel (with help from Dan Mick), I was able to see mac_ether as the last thing loading, right after loading the e1000g driver. I cannot drop into kmdb via F1-A after it hangs. I've also ...
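For reference, the same module-debugging output can be requested without kmdb via /etc/system, when the machine gets far enough to read it:

    * /etc/system: verbosely log module loading on the next boot
    set moddebug = 0x80000000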
2007 Dec 03
2
Help replacing dual identity disk in ZFS raidz and SVM mirror
Hi, we have a number of 4200s set up using a combination of an SVM 4-way mirror and a ZFS raidz stripe. Each disk (of 4) is divided up like this:

    /       6GB   UFS  s0
    Swap    8GB        s1
    /var    6GB   UFS  s3
    Metadb  50MB  UFS  s4
    /data   48GB  ZFS  s5

For SVM we do a 4-way mirror on /, swap, and /var. So we have 3 SVM mirrors: d0=root (submirrors d10, d20, d30, d40), d1=swap (submirrors d11, d21, d31, d41) ...
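Replacing a disk that carries both SVM submirrors and a raidz member is a two-track job; a rough sketch with hypothetical metadevice names, assuming the second disk (c1t1d0) failed:

    # SVM side: detach the failed disk's submirrors (-f may be needed if
    # they are in maintenance state), then reattach once the replacement
    # disk is partitioned to match.
    metadetach d0 d20
    metadetach d1 d21
    # ... replace the disk, copy the VTOC, restore the metadb replica ...
    metattach d0 d20
    metattach d1 d21
    # ZFS side: resilver the raidz member on slice 5 in place.
    zpool replace datapool c1t1d0s5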