similar to: Recover ZFS destroyed dataset?

Displaying 20 results from an estimated 1000 matches similar to: "Recover ZFS destroyed dataset?"

2011 Jun 24
13
Fixing txg commit frequency
Hi All, I'd like to ask whether there is a method to enforce a certain txg commit frequency on ZFS. I'm doing a large amount of video streaming from a storage pool while also slowly, continuously writing a constant volume of data to it (using a normal file descriptor, *not* in O_SYNC). When reading volume goes over a certain threshold (and average pool load over ~50%), ZFS
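One commonly cited knob for this (a hedged sketch; whether it suits this particular streaming workload is untested, and the value 5 is illustrative) is the zfs_txg_timeout tunable on Solaris-derived systems, which caps how long ZFS waits before committing a txg:

    # /etc/system -- commit a txg at least every 5 seconds (illustrative value)
    set zfs:zfs_txg_timeout = 5

    # or change it on the live system, no reboot (write decimal 5 via mdb):
    echo 'zfs_txg_timeout/W0t5' | mdb -kw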
2007 Feb 27
16
understanding zfs/thumper "bottlenecks"?
Currently I'm trying to figure out the best zfs layout for a thumper with respect to read AND write performance. I did some simple mkfile 512G tests and found out that, on average, ~500 MB/s seems to be the maximum one can reach (tried the initial default setup, all 46 HDDs as R0, etc.). According to http://www.amd.com/us-en/assets/content_type/DownloadableAssets/ArchitectureWP_062806.pdf I would
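A minimal way to reproduce this kind of measurement (a sketch; the pool name "tank" is an assumption, not from the thread):

    # write a large test file in the background while watching per-vdev throughput
    mkfile 512g /tank/testfile &
    zpool iostat -v tank 5    # print read/write bandwidth every 5 seconds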
2010 Feb 24
3
How to know the recordsize of a file
I would like to know the blocksize of a particular file. I know the blocksize for a particular file is decided at creation time, as a function of the write sizes done and the recordsize property of the dataset. How can I access that information? Some zdb magic? -- Jesus Cea Avion, jcea at
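The usual "zdb magic" answer (a hedged sketch; the dataset name tank/fs and file path are assumptions): ZFS object numbers match the inode numbers that ls -i reports, and zdb prints the data block size of an object in its "dblk" column:

    ls -i /tank/fs/somefile          # inode number == ZFS object number
    zdb -ddddd tank/fs <object#>     # look for the "dblk" value in the output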
2008 Aug 08
1
[install-discuss] lucreate into New ZFS pool
Hello, Since I've got my disk partitioning sorted out now, I want to move my BE from the old disk to the new disk. I created a new zpool, named RPOOL for distinction from the existing "rpool". I then did: lucreate -p RPOOL -n new95 This completed without error; the log is at the bottom of this mail. I have not yet dared to run luactivate. I also have not yet dared set the
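For reference, the usual next steps after a clean lucreate (standard Live Upgrade usage, not from this thread) are:

    lustatus            # list boot environments and their completion/active state
    luactivate new95    # mark the new BE as active for the next boot
    init 6              # Live Upgrade requires init/shutdown, not a plain reboot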
2011 Jan 29
19
multiple disk failure
Hi, I am using FreeBSD 8.2 and went to add 4 new disks today to expand my offsite storage. All was working fine for about 20 min, and then the new drive cage started to fail. Silly me for assuming new hardware would be fine :( When the drive cage failed it hung the server and the box rebooted. After it rebooted, the entire pool was gone and is in the state below. I had only written a few
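A first diagnostic pass in this situation (a sketch; assumes the pool is named "tank" and that this ZFS version supports the -F recovery option):

    zpool import             # scan for importable pools and show their reported state
    zpool import -f tank     # force the import if the pool looks active to another host
    zpool import -F tank     # if supported: try rewinding to the last good txg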
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process: hydra# zpool import pool: tank id:
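The usual things to try when an import stalls after moving disks between machines (a sketch; the device path is the Solaris default and an assumption here):

    zpool import -d /dev/dsk    # rescan devices in case paths changed with the new host
    zpool import -f tank        # force if the pool still looks active to the old host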
2012 Jun 18
1
Restore destroyed snapshot ???
OK, I am a butt-head and accidentally destroyed my last snapshot of a replicated ZFS dataset. The dataset is NOT mounted, and other than a resilver going on, there is no I/O on this dataset. Is there any way to roll back and get my latest snapshot back? From zpool history -i: 2012-06-18.10:34:00 zfs destroy xxx@1339668001 2012-06-18.10:34:00 [internal destroy txg:2213852] dataset =
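Two hedged options, neither guaranteed (the source dataset and snapshot names below are hypothetical, and -T is an undocumented rewind option that only some builds accept):

    # 1) since the dataset is replicated, re-send from the source, either in
    #    full or incrementally from an older snapshot both sides still have:
    zfs send -i @older source/ds@latest | zfs recv -F xxx

    # 2) read-only rewind import to a txg before the destroy (the history
    #    above records txg 2213852), entirely at your own risk:
    zpool import -o readonly=on -T 2213851 <pool>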
2009 Jun 26
14
Unable to install Solaris 10 Update 7 Dom-U
uname -a SunOS i7 5.11 snv_114 i86pc i386 i86xpv The box is a core i7 with 6GB of RAM. The command virt-install --name 10u7 --ram 1024 --hvm --file /guests/10u7 --os-type=solaris --os-variant=solaris10 --location /export/iso/sol-10-u7-ga-x86-dvd.iso gets as far as opening the VNC client; when I select Solaris from grub, the OS starts to boot, then panics, closing the session. So I
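For readability, the same invocation with one option per line (assuming the missing "--" before os-variant in the excerpt was a transcription artifact):

    virt-install --name 10u7 --ram 1024 --hvm \
        --file /guests/10u7 \
        --os-type=solaris --os-variant=solaris10 \
        --location /export/iso/sol-10-u7-ga-x86-dvd.iso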
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error: Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after panic: assertion failed: ss != NULL, file: ../../common/fs/zfs/space_map.c, line: 125 The server saved a core file, and the resulting backtrace is listed below: $ mdb unix.0 vmcore.0 > $c vpanic() 0xfffffffffb9b49f3() space_map_remove+0x239()
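The workaround most often suggested for this particular assertion (hedged, and risky: it tells the kernel to tolerate the bad space map entry long enough to import and evacuate the pool) is to set the following in /etc/system and reboot:

    set zfs:zfs_recover = 1   # let ZFS attempt to skip/repair bad space map entries
    set aok = 1               # make failed assertions non-fatal (dangerous; temporary use only)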
2007 Sep 14
5
ZFS Space Map optimization
I have a huge problem with space maps on a thumper. The space maps take over 3 GB, and write operations generate massive read operations. Before every spa sync phase, ZFS reads space maps from disk. I decided to turn on compression for the pool (only for the pool, not the filesystems) and it helps. Now space maps, the intent log, and spa history are compressed. Now I'm thinking about disabling checksums. All
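To see how large the space maps actually are per metaslab (a sketch; the pool name is an assumption):

    zdb -mm tank    # per-metaslab statistics, including space map sizes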
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all, I have an oi_148a PC with a single root disk; recently it has failed to boot, hanging after the copyright message whenever I use any of my GRUB menu options. Booting from an oi_148a LiveUSB I had around since installation, I ran some zdb traversals over the rpool and some zpool import attempts. The imports fail by running the kernel out of RAM (as recently discussed on the list with
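Two hedged mitigations for an import that exhausts RAM (names and the 1 GB value are illustrative, and read-only import is only available in some builds): cap the ARC on the rescue environment before trying, and import read-only so nothing is replayed or written:

    # /etc/system on the LiveUSB environment: limit ARC to 1 GB (illustrative)
    set zfs:zfs_arc_max = 0x40000000

    # read-only import, rooted under /a, without writing to the pool:
    zpool import -o readonly=on -f -R /a rpool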
2012 Dec 21
4
zfs receive options (was S11 vs illumos zfs compatibility)
> From: zfs-discuss-bounces at opensolaris.org [mailto:zfs-discuss-bounces at opensolaris.org] On Behalf Of bob netherton > > You can, with recv, override any property in the sending stream that can be set from the command line (i.e., a writable). > > # zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test > cannot receive: cannot override received
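The distinction in practice (hedged; compression below stands in for any writable property and is an assumption, while the failing version=4 form is the one quoted in the thread):

    # works: override an ordinary writable property at receive time
    zfs send repo/support@cpu-0412 | zfs recv -o compression=on repo/test

    # fails on that build: version is special-cased and cannot be overridden
    zfs send repo/support@cpu-0412 | zfs recv -o version=4 repo/test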
2009 Apr 22
6
PID provider can not create memcpy:return probe for 64bit process
I have found that the pid provider cannot create a memcpy:return probe for a 64-bit process on snv_110. For example, with pid 10603, I get the following output from the dtrace command: #dtrace -n pid10603:libc.so.1:memcpy:return dtrace: invalid probe specifier pid10603:libc.so.1:memcpy:return: probe description pid10603:libc.so.1:memcpy:return does not match any probes This just
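A common workaround when the return probe cannot be created (often because the 64-bit memcpy ends in a tail-jump rather than a ret, so dtrace finds no return site) is to instrument the entry probe instead (a sketch):

    # the entry probe usually works even when :return does not
    dtrace -n 'pid10603:libc.so.1:memcpy:entry { @calls = count(); }'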
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device, which was actually a file on UFS. The machine was accidentally halted, and now the pool is corrupt. There are (of course) no backups, and I've been asked to recover the pool. The system panics when trying to do anything with the pool. root@:/$ zpool status panic[cpu1]/thread=fffffe8000758c80: assertion failed:
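Since the vdev here is a plain file on UFS, its labels and uberblocks can be examined directly (a sketch; the path is an assumption, and accepting -u alongside -l is not available in every zdb build):

    zdb -l /path/to/pool-file     # dump the four vdev labels
    zdb -lu /path/to/pool-file    # some builds: include the uberblocks in the dump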
2009 May 29
1
data manipulation involving aggregate
Hi all, I often have a data frame like this example: data.frame(sq=c(1,1,1,2,2,3,3,3,3),area=c(1,2,3,1,2,3,1,2,3),habitat=c("garden","garden","pond","field","garden","river","garden","field","field")) For each "sq" I have multiple "habitat"s, each with an associated "area". I
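One way to get, say, the total area per sq and habitat is aggregate with a formula (a sketch; summing is an assumed goal, and df names the example frame above):

    df <- data.frame(sq=c(1,1,1,2,2,3,3,3,3),
                     area=c(1,2,3,1,2,3,1,2,3),
                     habitat=c("garden","garden","pond","field","garden",
                               "river","garden","field","field"))
    # sum area within each sq/habitat combination
    aggregate(area ~ sq + habitat, data = df, FUN = sum)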
2012 Nov 12
2
order in stacked barplot
Hello, I did a stacked barplot using ggplot, and R arranged the bars of the items in different orders; I don't know why, but I want the same order in every stacked bar. I used the code: data1 <- read.table("N_O_W_MAI.txt", header=TRUE, dec = ",") attach(data1) Teich1 <- factor(Teich, levels=c(5,7,9,11,"G"), ordered=is.ordered(Teich))
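Stacking order in ggplot follows the factor levels of the variable mapped to fill, so fixing those levels usually fixes the order (a sketch; "Art" and its level names are hypothetical stand-ins for the real fill column):

    # force an explicit level order on the variable mapped to fill before plotting
    data1$Art <- factor(data1$Art, levels = c("first", "second", "third"))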
2012 Jun 12
15
Recovery of RAIDZ with broken label(s)
Hi all, I have a 5 drive RAIDZ volume with data that I'd like to recover. The long story runs roughly: 1) The volume was running fine under FreeBSD on motherboard SATA controllers. 2) Two drives were moved to a HP P411 SAS/SATA controller. 3) I *think* the HP controller wrote some volume information to the end of each disk (hence no more ZFS labels 2 and 3). 4) In its "auto
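A first diagnostic is to dump what is left of the four labels on each drive (a sketch; the FreeBSD device name is an assumption). ZFS keeps two labels at the front and two at the end of each vdev, so a controller that overwrote the end of the disk destroys labels 2 and 3 but may leave 0 and 1 intact:

    zdb -l /dev/ada0    # repeat for each member disk; compare which labels survive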
2006 Jul 28
20
3510 JBOD ZFS vs 3510 HW RAID
Hi there, Is it fair to compare the two solutions using Solaris 10 U2 and a commercial database (SAP SD scenario)? The cache on the HW RAID helps, and the CPU load is lower... but the solution costs more, and you _might_ not need the performance of the HW RAID. Has anybody with access to these units done a benchmark comparing the performance and (with the price list in hand) come to a conclusion?
2020 Jul 15
2
read.csv fails in R console in Ubuntu terminal but works in RStudio after R 3.6.3 upgrade to R 4.0.2
Hi, I am trying to download some data using read.csv; it works perfectly in RStudio but fails in the R console in the terminal on Ubuntu 18.04 after upgrading from R 3.6.3 to 4.0.2. Before the upgrade this also worked in the R console in the terminal without any issues. Why would that be? How can I fix it? Below please find the R code output and sessionInfo(). *Works in RStudio* >
2009 May 18
11
Zfs and b114 version
http://dlc.sun.com/osol/on/downloads/b114/ This URL makes me think that if I just sit down and figure out how to compile OpenSolaris, I can try b114 now^h^h^h eventually? I am really eager to try out the new quota support... has someone already tried compiling it, perhaps? How complicated is compiling osol compared to, say, NetBSD/FreeBSD, Linux, etc.? (IRIX and its quickstarting??) --