Displaying 18 results from an estimated 18 matches similar to: "Performance problem of ZFS ( Sol 10U2 )"
2006 Nov 28
7
Convert Zpool RAID Types
Hello,
Is it possible to non-destructively change RAID types in zpool while
the data remains on-line?
-J
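The usual answer, sketched below with hypothetical pool and device names: a top-level disk can be turned into a mirror in place, but RAID-Z geometry cannot be changed without replicating the data to a new pool.

  # Non-destructive: attach a disk to turn an existing single-disk vdev into a mirror
  zpool attach tank c0t0d0 c0t1d0

  # Changing RAID-Z layout means building a second pool and replicating
  zfs snapshot -r tank@migrate
  zfs send -R tank@migrate | zfs receive -F tank2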
2007 Aug 02
3
ZFS, ZIL, vq_max_pending and OSCON
The slides from my ZFS presentation at OSCON (as well as some
additional information) are available at http://www.meangrape.com/
2007/08/oscon-zfs/
Jay Edwards
jay at meangrape.com
http://www.meangrape.com
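For context, vq_max_pending is the per-vdev I/O queue depth covered in the slides; on Solaris 10-era kernels the related tunable could be inspected and lowered with mdb. A sketch, with the value purely illustrative:

  # Read the current per-vdev queue depth
  echo "zfs_vdev_max_pending/D" | mdb -k
  # Lower it, e.g. for arrays that handle deep queues poorly
  echo "zfs_vdev_max_pending/W0t10" | mdb -kw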
2007 Sep 04
23
I/O freeze after a disk failure
Hi all,
yesterday we had a drive failure on an fc-al jbod with 14 drives.
Suddenly the zpool using that jbod stopped responding to I/O requests, and we got tons of the following messages in /var/adm/messages:
Sep 3 15:20:10 fb2 scsi: [ID 107833 kern.warning] WARNING: /scsi_vhci/disk at g20000004cfd81b9f (sd52):
Sep 3 15:20:10 fb2 SCSI transport failed: reason 'timeout':
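A minimal triage sketch for a hang like this, pool name hypothetical:

  zpool status -x            # which pool/vdev is unhealthy
  iostat -En                 # per-device error counters
  fmdump -eV | tail          # recent FMA error telemetry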
2006 Oct 16
11
Configuring a 3510 for ZFS
Hi folks,
A colleague and I are currently involved in a prototyping exercise
to evaluate ZFS against our current filesystem. We are looking at the
best way to arrange the disks in a 3510 storage array.
We have been testing with the 12 disks on the 3510 exported as "nraid"
logical devices. We then configured a single ZFS pool on top of this,
using two raid-z arrays. We are getting
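A sketch of the layout being described, splitting the 12 exported LUNs into two 6-disk raidz vdevs (device names hypothetical):

  zpool create tank \
      raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 c2t5d0 \
      raidz c2t6d0 c2t7d0 c2t8d0 c2t9d0 c2t10d0 c2t11d0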
2010 Jul 28
2
[LLVMdev] Why are LLVM libraries enormous?
On Wed, Jul 28, 2010 at 9:01 AM, David Piepgrass
<dpiepgrass at mentoreng.com> wrote:
>> A LLVM JIT compiler for x86 under 1 MB? I doubt it is possible without
>> a major rewriting of LLVM.
>
> Even with no optimizations? Drat. That means I can't use it.
Why? I'd never checked, but I always assumed the LLVM JIT was much
larger than 3.4 MB.
For comparison:
[rnk at
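For anyone measuring this themselves, a sketch of linking against only the JIT-related components instead of all of LLVM (component names from the 2.x-era llvm-config; adjust for your version):

  g++ mytool.cpp $(llvm-config --cxxflags --ldflags --libs core jit native) -o mytool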
2010 Oct 19
8
Balancing LVOL fill?
Hi all
I have this server with some 50TB of disk space. It originally had 30TB on WD Greens, was filled quite full, and then another storage chassis was added. Now, space problem gone, fine, but what about speed? Three of the VDEVs are quite full, as indicated below. VDEV #3 (the one with the spare active) just spent some 72 hours resilvering a 2TB drive. Now, those green drives suck quite hard, but not
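A sketch of checking per-vdev fill and activity, pool name hypothetical:

  zpool iostat -v tank 5     # per-vdev capacity plus ops/bandwidth every 5s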
2013 Jun 19
1
Weird I/O hangs (9.1R, arcsas, interrupt spikes on uhci0)
Hi,
Very periodically, we see I/O hangs for about 10 seconds, roughly once per minute.
Each time this happens, the I/O rate simply drops to zero and all disk access hangs; this is also very noticeable on the shell, for NFS clients, etc. Everything else (networking, kernel, etc.) seems to continue normally.
Environment: FreeBSD 9.1R GENERIC on amd64, using ZFS, on a ARC1320 PCIe with 24x Seagate
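A sketch of catching the stall in the act on FreeBSD, run while the hang is happening:

  gstat -a             # live per-disk busy%, queue depth, latency
  systat -vmstat 1     # live interrupt rates; watch the uhci0 line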
2006 Jun 02
0
zfs going out to lunch
I've been writing some stuff from backup via tar to a pool, around
500GB. It's taken quite a while, as the tar is being read from NFS. My
ZFS partition in this case is a RAIDZ 3-disk job using 3 400GB SATA
drives (sil3124 card).
Every once in a while, a "df" stalls, and during that time my I/Os go
flat, as in:
capacity operations bandwidth
pool
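A sketch of confirming whether writes really stop during the stall, pool name hypothetical:

  zpool iostat tank 1    # one-second samples; look for the flat spots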
2018 Mar 05
0
[Bug 13317] rsync returns success when target filesystem is full
https://bugzilla.samba.org/show_bug.cgi?id=13317
--- Comment #6 from Rui DeSousa <rui.desousa at icloud.com> ---
(In reply to Rui DeSousa from comment #5)
It looks like no error is returned and the result is a sparse file. I think a
sync() would be required; otherwise the file is truncated on close to meet the
quota.
[postgres at hades ~]$ df -h arch
Filesystem Size Used
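A sketch of verifying a transfer independently of rsync's exit status, which is one way to catch this kind of silent truncation (paths hypothetical):

  rsync -a src/ dst/ ; echo "rsync exit: $?"
  # checksum-based dry run: any files listed mean dst/ differs from src/
  rsync -anc --out-format='%n' src/ dst/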
1995 Sep 26
0
GPM Modula-2 and Oberon-2 Compilers
GPM Modula-2 and Oberon-2 Compilers
File locations:
ftp.fit.qut.edu.au:/pub/gpm
ftp.psg.com:/pub/modula-2/gpm
WEB Site:
http://www.fit.qut.edu.au/CompSci/PLAS/GPM/
The Gardens Point Modula (GPM) compilers are an ongoing development
project for the Programming Languages and Systems Group in the Faculty
of Information Technology at the Queensland University of Technology.
2010 Nov 11
8
zpool import panics
Hi,
I just had my Dell R610 reboot with a kernel panic when I threw a couple
of zfs clone commands in the terminal at it.
Now, after the system had rebooted, zfs will not import my pool any longer
and instead the kernel will panic again.
I have had the same symptom on my other host, for which this one is
basically the backup, so this one is my last line of defense.
I tried to run zdb -e
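For reference, a sketch of the usual non-destructive steps before anything drastic, pool name hypothetical:

  zdb -e -u tank          # dump uberblocks of the exported pool without importing it
  zpool import -F tank    # recovery-mode import; rolls back the last few txgs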
2009 Sep 09
4
waiting IOs...
Hi,
We have a storage server (HP DL360G5 + MSA20 (12 disks in RAID 6) on a SmartArray6400).
10 directories are exported through nfs to 10 clients (rsize=32768,wsize=32768,soft,intr,nosuid,proto=udp,vers=3).
The server is apparently not doing much but... we have very high waiting IOs.
dstat shows very little activity, but high 'wai'...
# dstat
----total-cpu-usage---- -dsk/total-
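A sketch of narrowing down where the wait comes from on a Linux NFS server like this (tool packages vary by distro):

  iostat -x 1    # await and %util per device (sysstat)
  nfsstat -s     # server-side NFS operation counts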
2018 Mar 01
29
[Bug 13317] New: rsync returns success when target filesystem is full
https://bugzilla.samba.org/show_bug.cgi?id=13317
Bug ID: 13317
Summary: rsync returns success when target filesystem is full
Product: rsync
Version: 3.1.2
Hardware: x64
OS: FreeBSD
Status: NEW
Severity: major
Priority: P5
Component: core
Assignee: wayned at samba.org
2011 May 13
27
Extremely slow zpool scrub performance
Running a zpool scrub on our production pool is showing a scrub rate
of about 400K/s. (When this pool was first set up we saw rates in the
MB/s range during a scrub).
Both zpool iostat and an iostat -Xn show lots of idle disk times, no
above average service times, no abnormally high busy percentages.
Load on the box is 0.59.
8 x 3GHz, 32GB ram, 96 spindles arranged into raidz zdevs on OI 147.
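A sketch of tracking the scrub rate over time, pool name hypothetical (the progress line is labelled 'scrub:' or 'scan:' depending on ZFS version):

  while sleep 300; do zpool status tank | egrep 'scrub|scan'; done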
2007 Aug 24
3
traffic shaping stranges
Hello list,
I've discovered strange behaviour in the traffic shaping that I set up with
Shorewall 4.0.2.
I know that this is not a Shorewall problem, but maybe somebody on the list
can help me or explain this situation.
I have the following interfaces in my 'tcdevices' file:
#INTERFACE IN-BANDWIDTH OUT-BANDWIDTH
#
$EXT_IF 500kbit 248kbit
$INT1_IF 500mbit
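A sketch of inspecting what actually got programmed into the kernel, interface name hypothetical:

  tc -s qdisc show dev eth0    # qdiscs with byte/packet counters
  tc -s class show dev eth0    # per-class rates and drops
  shorewall show tc            # Shorewall's own view of the shaping setup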
2016 May 07
0
RV: Daily mail report for 2016-05-06lzq
Sent from my BlackBerry Z10 4G LTE smartphone.
Original message
From: admin at pr.copextel.com.cu
Sent: Saturday, May 7, 2016, 12:30 a.m.
To: admin at pr.copextel.com.cu
Subject: Daily mail report for 2016-05-06
Grand Totals
------------
messages
409 received
4135 delivered
0 forwarded
10 deferred (114 deferrals)
14 bounced
90 rejected (2%)
0 reject warnings
0 held
0 discarded (0%)
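A report in this 'Grand Totals' format is typically produced by pflogsumm run against the Postfix log; a sketch, assuming that tool and a typical log path:

  pflogsumm -d yesterday /var/log/maillog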
2012 Aug 10
1
virtio-scsi <-> vhost multi lun/adapter performance results with 3.6-rc0
Hi folks,
The following are initial virtio-scsi + target vhost benchmark results
using multiple target LUNs per vhost and multiple virtio PCI adapters to
scale the total number of virtio-scsi LUNs into a single KVM guest.
The test setup is currently using 4x SCSI LUNs per vhost WWPN, with 8x
virtio PCI adapters for a total of 32x 500MB ramdisk LUNs into a single
guest, along with each backend
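A sketch of the guest-side QEMU invocation for a setup like this, as used in the vhost-scsi patches of that era (WWPN hypothetical; exact syntax varied by patch revision):

  qemu-system-x86_64 ... \
      -device vhost-scsi-pci,wwpn=naa.600140554cf3a18e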