Displaying 20 results from an estimated 10000 matches similar to: "Direct I/O ability with zfs?"
2007 Sep 19 (3 replies): ZFS panic when trying to import pool
I have a raid-z ZFS filesystem with 3 disks. The disks were starting to have read and write errors.
The disks got so bad that I started to get trans_err. The server locked up and was reset. Now, when trying to import the pool, the system panics.
I installed the latest Recommended patch cluster on my Solaris U3 and also installed the latest kernel patch (120011-14).
But still, when trying to do zpool import
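(Illustrative sketch, not part of the original thread: one conservative way to approach an import that panics is to list what the system thinks is importable and pull the FMA error telemetry before forcing anything. The pool name tank below is a placeholder.)

# List importable pools without actually importing anything
zpool import

# Review recent fault/error events recorded by FMA
fmdump -eV | more

# Only then attempt the import, forcing it if the pool looks intact
zpool import -f tank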
2008 Dec 28 (2 replies): zfs mount hangs
Hi,
System: Netra 1405, 4x450MHz, 4GB RAM, 2x146GB (root pool) and
2x146GB (space pool). snv_98.
After a panic the system hangs on boot, and manual attempts to mount
(at least) one dataset in single-user mode also hang.
The Panic:
Dec 27 04:42:11 base panic[cpu0]/thread=300021c1a20:
Dec 27 04:42:11 base unix: [ID 521688 kern.notice] [AFT1] errID
0x00167f73.1c737868 UE Error(s)
Dec 27
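(Illustrative sketch, not part of the original thread: when a single dataset hangs the boot-time mount, a common approach is to keep it from mounting automatically and mount the rest by hand so the hang can be isolated. The dataset name space/data is a placeholder.)

# Stop the suspect dataset from being mounted automatically
zfs set canmount=noauto space/data

# Mount everything else
zfs mount -a

# Then try the suspect dataset on its own
zfs mount space/data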
2007 Jun 16 (5 replies): zpool mirror faulted
I have a strange problem with a faulted zpool (two-way mirror):
[root@einstein;0]~# zpool status poolm
  pool: poolm
 state: FAULTED
 scrub: none requested
config:

        NAME          STATE     READ WRITE CKSUM
        poolm         UNAVAIL      0     0     0  insufficient replicas
          mirror      UNAVAIL      0     0     0  corrupted data
            c2t0d0s0  ONLINE       0
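(Illustrative sketch, not part of the original thread: with a two-way mirror reporting corrupted data, the usual first steps are to clear transient error state and re-check before considering a forced import or a restore from backup.)

# Clear error counters and any transient faulted state
zpool clear poolm

# Re-check the pool and list any files with unrecoverable errors
zpool status -v poolm

# If a device was merely offlined, try bringing it back
zpool online poolm c2t0d0s0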
2008 May 20 (7 replies): [Bug 1986] New: 'zfs destroy' hangs on encrypted dataset
http://defect.opensolaris.org/bz/show_bug.cgi?id=1986
Summary: 'zfs destroy' hangs on encrypted dataset
Classification: Development
Product: zfs-crypto
Version: unspecified
Platform: Other
OS/Version: Solaris
Status: NEW
Severity: major
Priority: P2
Component: other
2008 Mar 20 (7 replies): ZFS panics Solaris while switching a volume to read-only
Hi,
I just found out that ZFS triggers a kernel panic while switching a mounted volume
into read-only mode.
The system is attached to a Symmetrix; all ZFS I/O goes through PowerPath.
I ran some I/O-intensive stuff on /tank/foo and switched the device into
read-only mode at the same time (symrdf -g bar failover -establish).
ZFS went 'bam' and triggered a panic:
WARNING: /pci@
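(Illustrative sketch, not part of the original thread: the reproduction described above amounts to generating writes against the ZFS filesystem while the underlying Symmetrix device is failed over; the dd arguments are placeholders, and the symrdf command is the one quoted by the poster.)

# Keep the filesystem busy with writes in the background
dd if=/dev/zero of=/tank/foo/testfile bs=1024k count=4096 &

# While the writes are in flight, fail the device group over,
# which the poster reports switches the local device to read-only
symrdf -g bar failover -establish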
2008 Nov 13 (5 replies): BAD TRAP with Crossbow Beta October 31 2008
Hi.
I tried to send this to the mailing list, but it never showed up in the
archives, so I'm trying the forum instead...
I recently installed the Crossbow Beta October 31 2008 on my
SunFire T1000, and let me first say that I'm very pleased
with the functionality it provides.
What's not so pleasing is the fact that after installing this,
the computer now gets very
2012 Apr 17 (10 replies): kernel panic during zfs import [UPDATE]
Hello everybody,
just to let you know what happened in the meantime:
I was able to open a Service Request at Oracle.
The issue is a known bug (Bug 6742788 : assertion panic at: zfs:zap_deref_leaf)
The bug has been fixed (according to Oracle support) since build 164, but there is no fix for Solaris 11 available so far (will be fixed in S11U7?).
There is a workaround available that works
2007 Apr 23 (3 replies): ZFS panic caused by an exported zpool??
Apr 23 02:02:21 SERVER144 offline or reservation conflict
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144 i/o to invalid geometry
Apr 23 02:02:21 SERVER144 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk@g60001fe100118db00009119074440055 (sd82):
Apr 23 02:02:21 SERVER144
2007 Dec 09 (8 replies): zpool kernel panics
Hi Folks,
I've got a 3.9 TB zpool, and it is causing kernel panics on my Solaris
10 280R (SPARC) server.
The message I get on panic is this:
panic[cpu1]/thread=2a100a95cc0: zfs: freeing free segment
(offset=423713792 size=1024)
This seems to come about when the zpool is being used or being
scrubbed - about twice a day at the moment. After the reboot, the
scrub seems to have
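(Illustrative sketch, not part of the original thread: a "freeing free segment" panic points at inconsistent space accounting, so a read-only check of the pool's block accounting with zdb is a reasonable data-gathering step. The pool name tank is a placeholder, and zdb may take a long time, or fail, on a damaged multi-terabyte pool.)

# Traverse the pool's blocks and report space-accounting problems (read-only)
zdb -b tank

# More verbose block statistics
zdb -bv tank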
2010 Sep 17 (3 replies): ZFS Dataset lost structure
After a crash, some datasets in my zpool tree report this when I do an ls -la:
brwxrwxrwx 2 777 root 0, 0 Oct 18 2009 mail-cts
Also, if I set
zfs set mountpoint=legacy dataset
and then mount the dataset at another location.
Before, the directory tree was only:
dataset
- vdisk.raw
The file was the backing device of a Xen VM, but I cannot access the directory structure of this dataset.
However I
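(Illustrative sketch, not part of the original thread: the legacy-mount attempt described above would normally look like this on Solaris; pool/mail-cts and /mnt are placeholders.)

# Take the dataset out of ZFS's automatic mount management
zfs set mountpoint=legacy pool/mail-cts

# Mount it by hand at an alternate location and inspect it
mount -F zfs pool/mail-cts /mnt
ls -la /mnt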
2013 Nov 06 (2 replies): [LLVMdev] loop vectorizer: Unexpected extract/insertelement
The following IR implements the following nested loop:
for (int i = start ; i < end ; ++i )
    for (int p = 0 ; p < 4 ; ++p )
        a[i*4+p] = b[i*4+p] + c[i*4+p];
define void @main(i64 %arg0, i64 %arg1, i1 %arg2, i64 %arg3, float*
noalias %arg4, float* noalias %arg5, float* noalias %arg6) {
entrypoint:
br i1 %arg2, label %L0, label %L1
L0:
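(Illustrative sketch, not part of the original thread: with the IR above saved to a file, the loop vectorizer of that era can be exercised directly through opt's legacy pass flags. kernel.ll is a placeholder name, and -debug-only output requires an assertions-enabled build.)

# Run just the loop vectorizer and print its decisions
opt -loop-vectorize -debug-only=loop-vectorize -S kernel.ll -o kernel.vec.ll

# Force a vectorization factor of 4 to match the inner trip count
opt -loop-vectorize -force-vector-width=4 -S kernel.ll -o kernel.vec.ll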
2013 Nov 06 (0 replies): [LLVMdev] loop vectorizer: Unexpected extract/insertelement
The loop vectorizer relies on cleanup passes to be run after it:
from Transforms/IPO/PassManagerBuilder.cpp:
// Add the various vectorization passes and relevant cleanup passes for
// them since we are no longer in the middle of the main scalar pipeline.
MPM.add(createLoopVectorizePass(DisableUnrollLoops));
MPM.add(createInstructionCombiningPass());
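(Illustrative sketch, not part of the original thread: when the vectorizer is run standalone through opt rather than via the full -O2/-O3 pipeline quoted above, the cleanup has to be requested explicitly; simplifycfg is added here as a generic cleanup, not a claim about the exact pipeline. kernel.ll is a placeholder.)

# Vectorize, then run cleanup passes comparable to what the standard
# pipeline schedules after the vectorizer
opt -loop-vectorize -instcombine -simplifycfg -S kernel.ll -o kernel.clean.ll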
2013 Nov 06 (2 replies): [LLVMdev] loop vectorizer: Unexpected extract/insertelement
The instcombine pass cleans up a lot.
Any idea why there are still shufflevector, insertelement, *and* bitcast
(!!) etc. instructions left? The original loop is so clean, a textbook
example I'd say. There is no need to shuffle anything. At least I don't
see it.
Frank
vector.ph: ; preds = %L5
%broadcast.splatinsert1 = insertelement <4 x
2013 Nov 01 (2 replies): [LLVMdev] loop vectorizer: this loop is not worth vectorizing
I am trying a setup where the original loop is rewritten as two loops. This
avoids the 'rem' and 'div' instructions in the index calculation (which
give the loop vectorizer a hard time).
However, with this setup the loop vectorizer complains that the loop is
too small.
LV: Checking a loop in "main"
LV: Found a loop: L3
LV: Found a loop with a very small trip count. This loop
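(Illustrative sketch, not part of the original thread: the message comes from the vectorizer's tiny-trip-count heuristic; one experiment is to force a vector width and watch the debug output, though whether forcing overrides that heuristic depends on the LLVM version. main.ll is a placeholder, and -debug-only requires an assertions-enabled build.)

# Force a fixed vector width and let the vectorizer report what it decides
opt -loop-vectorize -force-vector-width=4 -debug-only=loop-vectorize -S main.ll -o main.vec.ll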
2013 Nov 11 (2 replies): [LLVMdev] loop vectorizer: JIT + AVX segfaults
For what it's worth, I'm also experiencing this same issue. If there is
interest I can provide some very simple reproducible test cases, but I was
planning on moving to MCJIT this week anyway.
2006 Nov 02 (4 replies): reproducible zfs panic on Solaris 10 06/06
Hi,
I am able to reproduce the following panic on a number of Solaris 10 06/06 boxes (Sun Blade 150, V210 and T2000). The script to do this is:
#!/bin/sh -x
uname -a
mkfile 100m /data
zpool create tank /data
zpool status
cd /tank
ls -al
cp /etc/services .
ls -al
cd /
rm /data
zpool status
# uncomment the following lines if you want to see the system think
# it can still read and write to the
2013 Nov 01 (0 replies): [LLVMdev] loop vectorizer: this loop is not worth vectorizing
When coming from C, it was probably the loop unroller and the SLP
vectorizer that vectorized the code. Potentially I could do the same in
the IR. However, the loop body that is generated in the IR can get very
large. Thus, the loop unroller will refuse to unroll the loop in a large
number of (important) cases.
Isn't there a way to convince the loop vectorizer that it should
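(Illustrative sketch, not part of the original thread: the unroll-then-SLP combination mentioned above can be reproduced standalone with the legacy pass flags; kernel.ll and the unroll threshold are placeholders.)

# Unroll the small inner loop, then let the SLP vectorizer pack the
# resulting straight-line code into vectors
opt -loop-unroll -unroll-threshold=400 -slp-vectorizer -instcombine -S kernel.ll -o kernel.slp.ll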
2013 Nov 10 (3 replies): [LLVMdev] loop vectorizer erroneously finds 256 bit vectors
The loop vectorizer is doing an amazing job so far. Most of the time.
I just came across one function which led to unexpected behavior:
On this function the loop vectorizer finds a 256-bit vector as the
widest vector type for the x86-64 architecture. (!)
This is strange, as it was always finding the correct size of 128 bits
as the widest type. I isolated the IR of the function to check if this
is
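(Illustrative sketch, not part of the original thread: the widest legal vector width comes from the target triple and CPU features the vectorizer sees, so pinning them when running opt makes the comparison explicit; func.ll is a placeholder.)

# With AVX enabled the cost model may legally pick 256-bit vectors
opt -mtriple=x86_64-unknown-linux-gnu -mattr=+avx -loop-vectorize -S func.ll -o func.avx.ll

# Restrict the features to compare against the 128-bit decision
opt -mtriple=x86_64-unknown-linux-gnu -mattr=-avx -loop-vectorize -S func.ll -o func.sse.ll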
2010 Feb 08 (2 replies): [LLVMdev] How to check for "SPARC code generation" in MachineBasicBlock.cpp?
On 11/12/2009, at 10:43 AM, Anton Korobeynikov wrote:
> Hi, Chris
>
>> That is target independent code, so you should not put sparc specific changes there. It sounds like one of the sparc-specific target hooks is wrong.
> Since sparc does not provide any hooks for operation of branches (e.g.
> AnalyzeBranch and friends) it might be possible that generic codegen
> code is
2018 Feb 06 (2 replies): Nested KVM: L0 guest produces kernel BUG on wakeup from managed save (while a nested VM is running)
Hi everyone,
I hope this is the correct list to discuss this issue; please feel
free to redirect me otherwise.
I have a nested virtualization setup that looks as follows:
- Host: Ubuntu 16.04, kernel 4.4.0 (an OpenStack Nova compute node)
- L0 guest: openSUSE Leap 42.3, kernel 4.4.104-39-default
- Nested guest: SLES 12, kernel 3.12.28-4-default
The nested guest is configured with
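(Illustrative sketch, not part of the original thread: the managed-save-and-resume cycle referred to in the subject looks like this from the host; the domain name l0-guest is a placeholder.)

# Save the L0 guest's state to disk and stop it (managed save)
virsh managedsave l0-guest

# Starting it later resumes from the saved state, which is where the
# reporter sees the kernel BUG when a nested VM was running
virsh start l0-guest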