Displaying 20 results from an estimated 600 matches similar to: "Restore destroyed snapshot ???"
2011 Jun 24
13
Fixing txg commit frequency
Hi All,
I'd like to ask whether there is a method to enforce a certain txg
commit frequency on ZFS. I'm doing a large amount of video streaming
from a storage pool while also slowly but continuously writing a constant
volume of data to it (using a normal file descriptor, *not* in O_SYNC).
When the read volume goes over a certain threshold (and average pool load
over ~50%), ZFS
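One knob that usually comes up for this on OpenSolaris-era bits is the zfs_txg_timeout tunable, which caps the number of seconds between txg commits. A minimal sketch, assuming a kernel where the variable is live-tunable with mdb (the value 5 is purely illustrative):
  echo zfs_txg_timeout/D | mdb -k        # read the current commit interval (decimal seconds)
  echo zfs_txg_timeout/W0t5 | mdb -kw    # force a txg commit at least every 5 seconds
For a persistent setting, the equivalent line in /etc/system is "set zfs:zfs_txg_timeout = 5". Note this only bounds the interval between commits; the write throttle or memory pressure can still trigger a sync earlier.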
2009 Jul 21
1
zpool import is trying to tell me something...
I recently had an X86 system (running Nexenta Elatte, if that matters -- b101 kernel, I think) suffer hardware failure and refuse to boot. I've migrated the disks into a SPARC system (b115) in an attempt to bring the data back online while I see about repairing the former system. However, I'm having some trouble with the import process:
hydra# zpool import
pool: tank
id:
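Since the pool was last used on the failed x86 box, zpool import will typically list it but refuse to import it without force. A hedged sketch of the usual next step, using the pool name from the excerpt (the numeric id printed in the listing can be used in place of the name):
  hydra# zpool import -f tank      # force past the "pool was last accessed by another system" check
Moving the disks from x86 to SPARC is fine in principle, since the ZFS on-disk format is adaptive-endian.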
2012 Jan 11
1
How many "rollback" TXGs in a ring for 4k drives?
Hello all, I found this dialog on the zfs-devel at zfsonlinux.org list,
and I'd like someone to confirm or reject the discussed statement.
Paraphrasing in my words and understanding:
"Labels, including Uberblock rings, are fixed 256KB in size each,
of which 128KB is the UB ring. Normally there is 1KB of data in
one UB, which gives 128 TXGs to rollback to. When ashift=12 is
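Completing the arithmetic the thread is discussing (this is the gist of the on-disk layout, not an authoritative spec): each of the four labels on a vdev is 256 KB, of which 128 KB is the uberblock ring; an uberblock slot is 1 KB with 512-byte sectors but is padded to a whole sector once ashift exceeds 10.
  ashift=9  (512B sectors): 128 KB / 1 KB = 128 uberblock slots, i.e. up to 128 txgs of rollback
  ashift=12 (4K sectors):   128 KB / 4 KB =  32 uberblock slots, i.e. up to  32 txgs of rollback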
2010 Jan 12
6
x4500/x4540 does the internal controllers have a bbu?
Has anyone worked with an x4500/x4540 and know whether the internal RAID controllers have a BBU? I'm concerned that we won't be able to turn off the write cache on the internal HDDs and SSDs to prevent data corruption in case of a power failure.
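Independent of the BBU question, the per-disk volatile write cache on Solaris can be inspected and toggled from the expert mode of format(1M). A sketch, assuming SATA/SAS disks that honour the cache sub-commands (run format -e, select the disk, then):
  format> cache
  cache> write_cache
  write_cache> display    # show whether the on-disk write cache is enabled
  write_cache> disable    # turn it off
Worth noting: when ZFS owns whole disks it normally leaves the write cache enabled and issues cache-flush commands itself, so disabling it mainly matters if a controller or disk ignores flushes.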
2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed that in ZFS, whenever there is a heavy
writer to a pool via NFS, the reads can be held back (basically paused).
An example is a RAID10 pool of 6 disks, where writing a directory of files,
including some large ones 100+ MB in size, can cause other
NFS clients to pause for seconds (5-30 or so). This is on B70 bits.
I've gotten used to this behavior over NFS, but
2008 Dec 15
15
Need Help Invalidating Uberblock
I have a ZFS pool that has been corrupted. The pool contains a single device which was actually a file on UFS. The machine was accidentally halted and now the pool is corrupt. There are (of course) no backups and I've been asked to recover the pool. The system panics when trying to do anything with the pool.
root@:/$ zpool status
panic[cpu1]/thread=fffffe8000758c80: assertion failed:
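Before rewriting anything, the labels of the file-backed pool can be examined offline with zdb, which at least shows what txg and state each of the four labels records. A sketch with an illustrative path to the backing file on UFS:
  zdb -l /ufs/path/to/poolfile      # dump the contents of all four vdev labels
The manual procedure discussed in this thread (invalidating the newest uberblocks so the pool opens at an older txg) was later packaged as the recovery import mode, zpool import -F, in OpenSolaris builds from late 2009 onward.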
2011 Jan 29
27
ZFS and TRIM
My google-fu is coming up short on this one... I didn't see that it had
been discussed in a while ...
What is the status of ZFS support for TRIM?
For the pool in general...
and...
Specifically for the slog and/or cache???
2009 Oct 30
1
internal scrub keeps restarting resilvering?
After several days of trying to get a 1.5 TB drive to resilver, with the
resilver continually restarting, I eliminated all of the snapshot-taking
facilities that were enabled, and
2009-10-29.14:58:41 [internal pool scrub done txg:567780] complete=0
2009-10-29.14:58:41 [internal pool scrub txg:567780] func=1 mintxg=3
maxtxg=567354
2009-10-29.16:52:53 [internal pool scrub done txg:567999] complete=0
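For reference, lines of this shape ("[internal pool scrub ...] func=... mintxg=... maxtxg=...") are internal event-log entries; they can be pulled directly from the pool with the -i flag to zpool history, which is a reasonable way to confirm whether snapshot creation or some other administrative action keeps interrupting the resilver (pool name illustrative):
  zpool history -i tank | tail -60    # include internal events, not just admin commands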
2008 Sep 05
0
raidz pool metadata corrupted nexanta-core->freenas 0.7->nexanta-core
I made a bad judgment and now my raidz pool is corrupted. I have a
raidz pool running on Opensolaris b85. I wanted to try out freenas 0.7
and tried to add my pool to freenas.
After adding the ZFS disk,
vdev and pool, I decided to back out and went back to OpenSolaris. Now
my raidz pool will not mount, and I got the following errors. I hope some
expert can help me recover from this error.
2011 Nov 08
1
Single-disk rpool with inconsistent checksums, import fails
Hello all,
I have an oi_148a PC with a single root disk, and recently it
stopped booting: it hangs after the copyright
message whenever I use any of my GRUB menu options.
Booting with an oi_148a LiveUSB I had around since
installation, I ran some zdb traversals over the rpool
and zpool import attempts. The imports fail by running
the kernel out of RAM (as recently discussed in the
list with
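One point in favour of poking at the pool with zdb from the LiveUSB is that zdb runs in userland, so a traversal that drives the kernel out of RAM on import only bloats the zdb process instead. A hedged sketch of the kind of offline walk typically used here, assuming the pool is not imported (flag meanings: -e operate on an unimported pool, -b traverse and count blocks, -L skip the leak-tracking pass that consumes most of the memory):
  zdb -e -bL rpool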
2020 Jul 16
2
read.csv fails in R console in Ubuntu terminal but works in RStudio after R 3.6.3 upgrade to R 4.0.2
On 7/15/20 1:35 PM, Dirk Eddelbuettel wrote:
> On 15 July 2020 at 16:16, Sam H wrote:
> | I am trying to download some data using read.csv and it works perfectly in
> | RStudio and fails in the R console in the terminal in Ubuntu 18.04 after
> | upgrading from R 3.6.3 to 4.0.2. Before upgrading this worked in the R
> | console in the terminal also without any issues.
> |
> |
2014 Apr 18
0
[PATCH] nouveau/codegen: add missing values for OP_TXLQ into the target arrays
Also rework things so that if someone were to add an opcode without
adjusting the values in these arrays, there will be a compilation error.
This fixes a few quadop-related piglit regressions since commit
d5faf8e78603.
Signed-off-by: Ilia Mirkin <imirkin at alum.mit.edu>
---
src/gallium/drivers/nouveau/codegen/nv50_ir_target.cpp | 12 +++++++-----
2010 Jun 29
0
Processes hang in /dev/zvol/dsk/poolname
After multiple power outages caused by storms coming through, I can no
longer access /dev/zvol/dsk/poolname, which holds the l2arc and slog devices
for another pool. I don't think this is related, since the pools are offline
pending access to the volumes.
I tried running find /dev/zvol/dsk/poolname -type f and here is the stack;
hopefully this gives someone a hint at what the issue is. I have
2007 Sep 18
5
ZFS panic in space_map.c line 125
One of our Solaris 10 update 3 servers panicked today with the following error:
Sep 18 00:34:53 m2000ef savecore: [ID 570001 auth.error] reboot after
panic: assertion failed: ss != NULL, file:
../../common/fs/zfs/space_map.c, line: 125
The server saved a core file, and the resulting backtrace is listed below:
$ mdb unix.0 vmcore.0
> $c
vpanic()
0xfffffffffb9b49f3()
space_map_remove+0x239()
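The workaround that usually comes up for this specific assertion (ss != NULL in space_map_remove) is to relax the assertion handling long enough to import the pool and copy the data off. A sketch of the two lines to add to /etc/system before rebooting, assuming Solaris 10 / OpenSolaris bits where these kernel tunables exist; this is a data-rescue measure, not a repair:
  set zfs:zfs_recover = 1
  set aok = 1
With those set, the failed assertion is logged as a warning instead of panicking the box, which often allows the pool to import so the data can be evacuated.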
2020 Jul 15
0
read.csv fails in R console in Ubuntu terminal but works in RStudio after R 3.6.3 upgrade to R 4.0.2
On 15 July 2020 at 16:16, Sam H wrote:
| I am trying to download some data using read.csv and it works perfectly in
| RStudio and fails in the R console in the terminal in Ubuntu 18.04 after
| upgrading from R 3.6.3 to 4.0.2. Before upgrading this worked in the R
| console in the terminal also without any issues.
|
| Why would that be? How to fix this?
|
| Below please find R code output and
2010 May 07
0
confused about zpool import -f and export
Hi, all,
I think I'm missing a concept with import and export. I'm working on installing a Nexenta b134 system under Xen, and I have to run the installer under HVM mode, then I'm trying to get it back up under PV mode. In that process the controller names change, and that's where I'm getting tripped up.
I do a successful install, then I boot OK,
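The piece of the concept that usually trips people up: export marks the pool as not in use, and import rescans the devices and rebuilds the paths, so a controller rename between HVM and PV boots is handled as long as the pool was exported before the switch (or force-imported afterwards). A hedged sketch with an illustrative pool name:
  zpool export tank                 # on the old instance, before shutting it down
  zpool import -d /dev/dsk tank     # on the new instance; -d points the scan at a device directory
If the pool was never exported, zpool import reports it as potentially in use elsewhere and needs -f to override that check.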
2007 Dec 13
0
zpool version 3 & Uberblock version 9 , zpool upgrade only half succeeded?
We are currently experiencing a very large performance drop on our ZFS storage server.
We have 2 pools: pool 1, stor, is a raidz out of 7 iSCSI nodes; home is a local mirror pool. Recently we had some issues with one of the storage nodes, and because of that the pool was degraded. Since we did not succeed in bringing this storage node back online (at the ZFS level), we upgraded our NAS head from OpenSolaris b57
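A quick way to check whether the upgrade actually took on both pools is to compare what the running bits report against what the on-disk labels say. A hedged sketch (pool and device names are illustrative):
  zpool upgrade -v                              # versions the installed bits support
  zpool upgrade                                 # pools still at an older on-disk version
  zdb -l /dev/dsk/c0t0d0s0 | grep -i version    # version recorded in that vdev's label
A mismatch between the two pools, or between a label and the pool it belongs to, would at least narrow down where the half-finished upgrade stands.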
2006 Oct 31
0
6344108 snapshot create/delete interlock with scrub/resilver must sync txg
Author: bonwick
Repository: /hg/zfs-crypto/gate
Revision: 41acc27e604047650771dceb4535b8586bd34848
Log message:
6344108 snapshot create/delete interlock with scrub/resilver must sync txg
Files:
update: usr/src/cmd/ztest/ztest.c
update: usr/src/uts/common/fs/zfs/spa.c
2014 Feb 20
0
[PATCH] nv50: enable txg where supported
Signed-off-by: Ilia Mirkin <imirkin at alum.mit.edu>
---
This applies on top of Dave Airlie's r600g-texture-gather branch. Ran piglit
with -t gather, passed all 1057 tests. Can't say I fully understand what all
the arguments to handleTEX in the Converter are, but... it seems to work. Will
probably require some care for nvc0 support which should have SM5 caps.
2010 Aug 28
1
mirrored pool unimportable (FAULTED)
Hi,
more than a year ago I created a mirrored ZFS pool consisting of 2x1TB
HDDs using the OSX 10.5 ZFS Kernel Extension (Zpool Version 8, ZFS
Version 2). Everything went fine and I used the pool to store personal
stuff on it, like lots of photos and music. (So getting the data back is
not time critical, but still important to me.)
Later, since the development of the ZFS extension was