Displaying 20 results from an estimated 300 matches similar to: "zpool I/O error"
2007 Apr 30
4
need some explanation
Hi,
OS : Solaris 10 11/06
zpool list doesn't reflect pool usage stats instantly. Why?
# ls -l
total 209769330
-rw------T 1 root root 107374182400 Apr 30 14:28 deleteme
# zpool list
NAME  SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
wo    136G  100G  36.0G  73%  ONLINE  -
# rm deleteme
# zpool list
NAME SIZE
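The lag is expected: freed blocks are reclaimed asynchronously when the deleting transaction group commits, usually within a few seconds. A minimal way to watch the numbers catch up, reusing the pool name from the excerpt (the sync is just to nudge the txg along):
# rm deleteme
# sync
# zpool list wo
(USED should drop a few seconds later)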
2009 Mar 11
9
ZFS on a SAN
Hi All,
I'm new on ZFS, so I hope this isn't too basic a question. I have a host where I set up ZFS. The Oracle DBAs did their thing and I now have a number of ZFS datasets with their respective clones and snapshots on serverA. I want to export some of the clones to serverB. Do I need to zone serverB to see the same LUNs as serverA? Or does it have to have preexisting,
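A pool can only be imported on one host at a time, so sharing the LUNs means moving the whole pool with zpool export/import; copying just a clone over the wire is a job for send/receive. A sketch with hypothetical dataset names:
serverA# zfs snapshot pool/clone@xfer
serverA# zfs send pool/clone@xfer | ssh serverB zfs receive tank/clone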
2007 Nov 13
3
this command can cause zpool coredump!
in Solaris 10 U4,
type:
-bash-3.00# zpool create -R filepool mirror /export/home/f1.dat /export/home/f2.dat
invalid alternate root '
Segmentation Fault (core dumped)
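The word after -R is parsed as the alternate root, so "filepool" was consumed as a (bad) altroot and the pool name went missing; the crash is the error path misbehaving. The intended form, with the altroot path itself an assumption:
# zpool create -R /mnt filepool mirror /export/home/f1.dat /export/home/f2.dat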
2009 Feb 22
11
Confused about zfs recv -d, apparently
First, it fails because the destination directory doesn't exist. Then it
fails because it DOES exist. I really expected one of those to work. So,
what am I confused about now? (Running 2008.11)
# zpool import -R /backups/bup-ruin bup-ruin
# zfs send -R "zp1@bup-20090222-054457UTC" | zfs receive -dv "bup-ruin/fsfs/zp1"
cannot receive: specified fs (bup-ruin/fsfs/zp1)
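With -d the target must be an existing filesystem, and the sent path minus its pool name is appended beneath it. One arrangement that typically works here, reusing the names above (the -F is an assumption, needed once the target exists):
# zfs create -p bup-ruin/fsfs/zp1
# zfs send -R zp1@bup-20090222-054457UTC | zfs receive -dvF bup-ruin/fsfs/zp1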
2009 Jun 03
7
"no pool_props" for OpenSolaris 2009.06 with old SPARC hardware
Hi,
yesterday evening I tried to upgrade my Ultra 60 to 2009.06 from SXCE snv_98.
I can't use the AI Installer because OpenPROM is version 3.27.
So I built IPS from source, then created a zpool on a spare drive and installed OS 2009.06 on it.
To make the disk bootable I used:
installboot -F zfs /usr/platform/`uname -i`/lib/fs/zfs/bootblk /dev/rdsk/c0t1d0s0
using the executable from my new
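For ZFS boot the pool's bootfs property must also point at the boot environment; a hedged sketch with hypothetical pool and BE names:
# zpool set bootfs=rpool/ROOT/opensolaris rpool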
2009 Jan 15
21
4 disk raidz1 with 3 disks...
Hello,
I was hoping that this would work:
http://blogs.sun.com/zhangfan/entry/how_to_turn_a_mirror
I have 4x(1TB) disks, one of which is filled with 800GB of data (that I
can't delete or back up somewhere else)
> root at FSK-Backup:~# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0
> /dev/lofi/1
> root at FSK-Backup:~# zpool list
> NAME SIZE USED AVAIL CAP HEALTH ALTROOT
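The linked trick stands a sparse lofi device in for the missing fourth disk, offlines it before any data lands on it, and swaps the real disk in later. A sketch under that assumption (file size and the final device name are hypothetical):
# mkfile -n 1024g /var/tmp/fake
# lofiadm -a /var/tmp/fake
(say it comes back as /dev/lofi/1)
# zpool create -f ambry raidz1 c4t0d0 c5t0d0 c5t1d0 /dev/lofi/1
# zpool offline ambry /dev/lofi/1
(copy the 800GB in, then hand the freed disk to the pool)
# zpool replace ambry /dev/lofi/1 c5t2d0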
2010 May 15
7
Unable to Destroy One Particular Snapshot
Howdy All,
I've a bit of a strange problem here. I have a filesystem with one snapshot that simply refuses to be destroyed. The snapshots just prior to it and just after it were destroyed without problem. While running the zfs destroy command on this particular snapshot, the server becomes more-or-less hung. It's pingable but will not open a new shell (local or via ssh) however
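If the release carries user holds, a held snapshot is one cheap thing to rule out before suspecting the destroy itself (snapshot name hypothetical):
# zfs holds tank/fs@stuck
# zfs release keep tank/fs@stuck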
2011 Aug 14
4
Space usage
I'm just uploading all my data to my server and the space used is much more than what I'm uploading;
Documents = 147MB
Videos = 11G
Software= 1.4G
By my calculations, that equals 12.547G, yet zpool list is showing 21G as being allocated:
NAME   SIZE   ALLOC  FREE   CAP  DEDUP  HEALTH  ALTROOT
dpool  27.2T  21.2G  27.2T  0%   1.00x  ONLINE  -
It doesn't look like
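zpool list counts raw allocation including redundancy overhead, so the dataset-level view is the fairer comparison with what was uploaded; a one-liner with the pool name from the excerpt:
# zfs list -o space dpool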
2011 Jun 06
3
Available space confusion
I recently created a raidz of four 2TB-disks and moved a bunch of movies onto them.
And then I noticed that I've somehow lost a full TB of space. Why?
nebol@filez:/$ zfs list tank2
NAME   USED   AVAIL  REFER  MOUNTPOINT
tank2  3.12T  902G   32.9K  /tank2
nebol@filez:/$ zpool list tank2
NAME   SIZE   USED   AVAIL  CAP  HEALTH  ALTROOT
tank2  5.44T  4.18T  1.26T  76%  ONLINE  -
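Nothing was lost: zpool list reports raw capacity including parity, while zfs list reports usable space after parity. With one parity disk in four, 5.44T x 3/4 is about 4.08T, which roughly matches the 3.12T used plus 902G available that zfs list shows.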
2009 Dec 15
7
ZFS Dedupe reporting incorrect savings
Hi,
Created a zpool with 64k recordsize and enabled dedupe on it.
zpool create -O recordsize=64k TestPool device1
zfs set dedup=on TestPool
I copied files onto this pool over nfs from a windows client.
Here is the output of zpool list
Prompt:~# zpool list
NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
TestPool  696G  19.1G  677G  2%   1.13x  ONLINE  -
When I ran a
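The DEDUP ratio only covers blocks the table actually deduplicated; the dedup table itself can be inspected for a fuller breakdown (-DD is one reasonable level of detail):
# zdb -DD TestPool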
2011 Apr 01
15
Zpool resize
Hi,
A LUN is connected to Solaris 10u9 from a NetApp FAS2020a over iSCSI. I'm
changing the LUN size on the NetApp, and Solaris format sees the new value, but zpool
still has the old value.
I tried zpool export and zpool import, but it didn't resolve my problem.
bash-3.00# format
Searching for disks...done
AVAILABLE DISK SELECTIONS:
0. c0d1 <DEFAULT cyl 6523 alt 2 hd 255 sec 63>
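After the LUN grows, two knobs usually matter, assuming a release new enough to carry them (pool name hypothetical, device from the format listing above):
# zpool set autoexpand=on mypool
# zpool online -e mypool c0d1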
2008 Jan 15
4
Moving ZFS to an iSCSI EqualLogic LUN
We have a mirror setup in ZFS that's 73GB (two internal disks on a Sun Fire V440). We are going to attach this system to an EqualLogic box, and will attach an iSCSI LUN of about 200GB from the EqualLogic box to the V440. The EqualLogic box is configured as hardware RAID 50 (two hot spares for redundancy).
My question is what's the best approach to moving the ZFS
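One low-risk path is to treat the iSCSI LUN as an extra mirror side, let it resilver, then peel off the internal disks; a sketch with hypothetical device names, the LUN showing up as c2t0d0:
# zpool attach tank c1t0d0s0 c2t0d0
# zpool status tank
(wait for the resilver to finish, then)
# zpool detach tank c1t0d0s0
# zpool detach tank c1t1d0s0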
2012 Jun 06
24
Occasional storm of xcalls on segkmem_zio_free
So I have this dual 16-core Opteron Dell R715 with 128G of RAM attached
to a SuperMicro disk enclosure with 45 2TB Toshiba SAS drives (via two
LSI 9200 controllers and MPxIO) running OpenIndiana 151a4 and I'm
occasionally seeing a storm of xcalls on one of the 32 VCPUs (>100000
xcalls a second). The machine is pretty much idle, only receiving a
bunch of multicast video streams and
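DTrace can attribute the storm: the sysinfo provider fires once per cross-call, so aggregating on kernel stack usually names the caller:
# dtrace -n 'sysinfo:::xcalls { @[stack()] = count(); }'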
2009 Oct 01
4
RAIDZ v. RAIDZ1
So, I took four 1.5TB drives and made RAIDZ, RAIDZ1 and RAIDZ2 pools. The sizes for the pools were 5.3TB, 4.0TB, and 2.67TB respectively. The man page for RAIDZ states that "The raidz vdev type is an alias for raidz1." So why was there a difference between the sizes for RAIDZ and RAIDZ1? Shouldn't the size be the same for "zpool create raidz ..." and "zpool
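Since the two spellings are aliases, building the same vdev both ways and comparing is the quickest sanity check (device names hypothetical); identical SIZE would point the finger at the test setup rather than the alias:
# zpool create t1 raidz c0t0d0 c0t1d0 c0t2d0 c0t3d0
# zpool list t1 ; zpool destroy t1
# zpool create t1 raidz1 c0t0d0 c0t1d0 c0t2d0 c0t3d0
# zpool list t1 ; zpool destroy t1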
2009 Jan 25
2
Unable to destroy a pool
# zpool list
NAME            SIZE  USED  AVAIL  CAP  HEALTH  ALTROOT
jira-app-zpool  272G  330K  272G   0%   ONLINE  -
The following command hangs forever. If I reboot the box, zpool list shows it online, as in the output above.
# zpool destroy -f jira-app-zpool
How can I get rid of this pool and any reference to it?
bash-3.00# zpool status
pool: jira-app-zpool
state: UNAVAIL
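When destroy wedges on an UNAVAIL pool, export is worth trying first, and the pool can be kept from reattaching at boot by moving the cache file aside; a hedged sketch:
# zpool export -f jira-app-zpool
(if that hangs too: mv /etc/zfs/zpool.cache /etc/zfs/zpool.cache.bad, then reboot)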
2010 Feb 10
5
zfs receive : is this expected ?
amber ~ # zpool list data
NAME  SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
data  930G  295G   635G  31%  1.00x  ONLINE  -
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata
cannot receive new filesystem stream: destination 'ezdata' exists
must specify -F to overwrite it
amber ~ # zfs send -RD data@prededup | zfs recv -d ezdata/data
cannot receive:
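As the first message says, the existing destination just needs -F; reusing the names above:
# zfs send -RD data@prededup | zfs recv -dF ezdata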
2008 Jun 01
1
capacity query
Hi,
My swap is on raidz1. df -k and swap -l are showing almost no usage of
swap, while zfs list and zpool list are showing me 96% capacity. Which
should I believe?
Justin
# df -hk
Filesystem         size  used  avail  capacity  Mounted on
/dev/dsk/c3t0d0s1  14G   4.0G  10G    28%       /
/devices           0K    0K    0K     0%        /devices
ctfs
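Both tools are right about different things: a zvol used for swap carries a refreservation for its full size, so zfs list and zpool list count the whole device while swap -l counts actual paging. The reservation is visible directly (dataset name hypothetical):
# zfs get volsize,refreservation rpool/swap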
2017 Aug 16
1
[ovirt-users] Recovering from a multi-node failure
On Sun, Aug 6, 2017 at 4:42 AM, Jim Kusznir <jim at palousetech.com> wrote:
> Well, after a very stressful weekend, I think I have things largely
> working. Turns out that most of the above issues were caused by the Linux
> permissions of the exports for all three volumes (they had been reset to
> 600; setting them to 774 or 770 fixed many of the issues). Of course, I
>
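For reference, oVirt expects its storage exports to be owned by vdsm:kvm (uid/gid 36); a sketch with a hypothetical export path:
# chown -R vdsm:kvm /exports/data
# chmod 770 /exports/data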
2007 Nov 16
0
ZFS mirror and sun STK 2540 FC array
Hi all,
we have just bought a Sun X2200 M2 (4GB / 2 Opteron 2214 / 2 disks 250GB
SATA2, Solaris 10 update 4)
and a Sun STK 2540 FC array (8 SAS disks of 146GB, 1 RAID controller).
The server is attached to the array with a single 4 Gb Fibre Channel link.
I want to make a mirror using ZFS with this array.
I have created 2 volumes on the array
in RAID0 (stripe of 128 KB) presented to the host
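Mirroring the two array volumes from the host is then a one-liner; device names are hypothetical:
# zpool create tank mirror c2t0d0 c2t1d0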
2009 Oct 01
1
cachefile for snail zpool import mystery?
Hi,
We are seeing more long delays in zpool import, say 4~5 or even
25~30 minutes, especially when backup jobs are running in the FC SAN
where the LUNs reside (no iSCSI LUNs yet). On the same node, for LUNs of the same array,
some pools take a few seconds but others take minutes; the pattern
seems random to me so far. It was first noticed soon after the upgrade to
Solaris 10 U6
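A per-pool cache file lets a slow SAN pool be taken out of the boot-time import path and timed in isolation (pool name hypothetical):
# zpool set cachefile=none sanpool
# zpool export sanpool
# zpool import -d /dev/dsk sanpool
(the import now scans devices, exercising the slow path by itself)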