Displaying 7 results from an estimated 7 matches similar to: "FreeBSD 10-BETA3 - zfs clone of zvol snapshot is not created"
2010 Mar 04 · 8 messages
Huge difference in reporting disk usage via du and zfs list. Fragmentation?
Do we have enormous fragmentation here on our X4500 with Solaris 10, ZFS version 10?
What, other than zfs send/receive, can be done to free the fragmented space?
One ZFS was used for some months to store large disk images (each 50 GB), which were copied there with rsync. This ZFS reports 6.39 TB used with zfs list but only 2 TB with du.
The other ZFS was used for similar
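A gap this large between zfs list and du usually points at snapshots or clones still referencing freed blocks rather than at fragmentation. A minimal way to check, assuming a hypothetical dataset tank/images (the property breakdown is available where the ZFS version supports it):
# zfs list -t snapshot -r tank/images                  (snapshots pinning otherwise-freed blocks)
# zfs get usedbysnapshots,usedbydataset tank/images    (space breakdown, where supported)
# du -sh /tank/images                                  (counts only live file data)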
2006 May 12 · 1 message
zfs panic when unpacking OpenSolaris source
When unpacking the Solaris source onto a local disk on a system running build 39, I got the following panic:
panic[cpu0]/thread=d2c8ade0:
really out of space
d2c8a7b4 zfs:zio_write_allocate_gang_members+3e6 (e4385ac0)
d2c8a7d0 zfs:zio_dva_allocate+81 (e4385ac0)
d2c8a7e8 zfs:zio_next_stage+66 (e4385ac0)
d2c8a800 zfs:zio_checksum_generate+5e (e4385ac0)
d2c8a81c zfs:zio_next_stage+66 (e4385ac0)
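The "really out of space" panic in zio_write_allocate_gang_members fires when even a gang-block allocation cannot be satisfied, i.e. the pool is effectively full. A quick pre-flight check before large writes, with a hypothetical pool name tank:
# zpool list tank               (the CAP column shows how full the pool is)
# zfs list -o name,avail tank   (free space as the filesystem layer sees it)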
2010 May 05 · 3 messages
[indiana-discuss] image-update doesn't work anymore (bootfs not supported on EFI)
On 5/5/10 1:44 AM, Christian Thalinger wrote:
> On Tue, 2010-05-04 at 16:19 -0600, Evan Layton wrote:
>> Can you try the following and see if it really thinks it's an EFI label?
>> # dd if=/dev/dsk/c12t0d0s2 of=x skip=512 bs=1 count=10
>> # cat x
>>
>> This may help us determine if this is another instance of bug 6860320
>
> # dd
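The label type can also be checked without decoding raw dd output; a sketch using stock Solaris tools against the device named in the thread:
# prtvtoc /dev/dsk/c12t0d0s2    (prints the label; an EFI disk shows a different layout than SMI/VTOC)
# format -e                     (interactive; selecting the disk and its label menu reports SMI vs EFI)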
2010 Feb 08 · 5 messages
zfs send/receive: panic and reboot
<copied from opensolaris-discuss as this probably belongs here.>
I kept on trying to migrate my pool with children (see previous threads) and had the (bad) idea to try the -d option on the receive part.
The system reboots immediately.
Here is the log in /var/adm/messages
Feb 8 16:07:09 amber unix: [ID 836849 kern.notice]
Feb 8 16:07:09 amber panic[cpu1]/thread=ffffff014ba86e40:
Feb 8
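For reference, migrating a pool together with its child datasets is normally done with a recursive replication stream; a minimal sketch with hypothetical pool names oldpool and newpool (the -F on receive rolls the target back first, so use it with care):
# zfs snapshot -r oldpool@migrate
# zfs send -R oldpool@migrate | zfs receive -d -F newpool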
2009 Apr 12 · 7 messages
Any news on ZFS bug 6535172?
We're running a Cyrus IMAP server on a T2000 under Solaris 10 with
about 1 TB of mailboxes on ZFS filesystems. Recently, when under
load, we've had incidents where IMAP operations became very slow. The
general symptoms are that the number of imapd, pop3d, and lmtpd
processes increases, the CPU load average increases, but the ZFS I/O
bandwidth decreases. At the same time, ZFS
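Symptoms like these can be watched live with standard Solaris observability tools; a sketch, assuming a hypothetical pool name mailpool:
# zpool iostat mailpool 5   (pool-level bandwidth and IOPS every 5 seconds)
# iostat -xn 5              (per-device service times; a saturated disk shows high asvc_t)
# prstat -mL 5              (per-thread microstates for the imapd, pop3d, and lmtpd processes)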
2010 Apr 02 · 6 messages
L2ARC & Workingset Size
Hi all
I ran a workload that reads & writes within 10 files, each file 256 MB, i.e.
(10 * 256 MB = 2.5 GB total dataset size).
I have set the ARC max size to 1 GB in the /etc/system file.
In the worst case, let us assume that the whole dataset is hot, meaning my
working-set size = 2.5 GB.
My SSD flash size = 8 GB and it is being used for L2ARC.
No slog is used in the pool.
My file system record size = 8K,
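For reference, the ARC cap described above is set with a tunable in /etc/system; the value is in bytes and takes effect at the next reboot. The 1 GB setting would look like:
set zfs:zfs_arc_max = 0x40000000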
2010 Nov 11 · 8 messages
zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I threw a couple
of zfs clone commands at it in the terminal.
Now, after the system has rebooted, zfs will no longer import my pool
and instead the kernel panics again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e
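When an import panics the kernel, the usual sequence is to examine the pool without importing it, then try a read-only or rewind import; a sketch with a hypothetical pool name tank (-F discards the most recent transactions and is a last resort; support for these options depends on the ZFS version):
# zdb -e tank                        (examine an exported pool's metadata without importing it)
# zpool import -o readonly=on tank   (read-only import, where supported)
# zpool import -F tank               (recovery-mode import, rewinding to an earlier txg)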