
Displaying 20 results from an estimated 600 matches similar to: "The ZFS Read / Write roundabout"

2007 Nov 17
11
slog tests on read throughput exhaustion (NFS)
I have historically noticed in ZFS that whenever there is a heavy writer to a pool via NFS, the reads can be held back (basically paused). An example is a RAID10 pool of 6 disks, where writing a directory of files, including some over 100 MB in size, can cause other NFS clients to pause for seconds (5-30 or so). This is on B70 bits. I've gotten used to this behavior over NFS, but
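For anyone reproducing the slog tests the subject refers to, a minimal sketch is to attach a separate intent log device and see whether NFS-driven synchronous writes still starve reads; the pool name "tank" and the SSD device c3t0d0 below are hypothetical:

  # add a dedicated slog so synchronous NFS writes land on the SSD
  zpool add tank log c3t0d0
  # confirm the log vdev shows up
  zpool status tank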
2008 Jan 10
2
NCQ
fun example that shows NCQ lowers wait and %w, but doesn't have much impact on final speed. [scrubbing, devs reordered for clarity]

                      extended device statistics
  device    r/s    w/s     kr/s    kw/s  wait  actv  svc_t  %w  %b
  sd2     454.7    0.0  47168.0     0.0   0.0   5.7   12.6   0  74
  sd4     440.7    0.0  45825.9     0.0   0.0   5.5   12.4   0  78
  sd6     445.7    0.0
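The statistics above are Solaris iostat in extended mode; a minimal sketch of collecting the same columns (the 5-second interval is an arbitrary choice):

  # report extended per-device statistics every 5 seconds
  iostat -x 5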
2010 May 18
25
Very serious performance degradation
Hi, I'm running OpenSolaris 2009.06, and I'm facing a serious performance loss with ZFS! It's a raidz1 pool, made of 4 x 1TB SATA disks:

        zfs_raid    ONLINE  0  0  0
          raidz1    ONLINE  0  0  0
            c7t2d0  ONLINE  0  0  0
            c7t3d0  ONLINE  0  0  0
            c7t4d0  ONLINE  0  0
2010 Feb 12
13
SSD and ZFS
Hi all, just after sending a message to sunmanagers I realized that my question should rather have gone here, so sunmanagers please excuse the double post: I have inherited an X4140 (8 SAS slots) and have just set up the system with Solaris 10 09. I first set up the system on a mirrored pool over the first two disks:

  pool: rpool
 state: ONLINE
 scrub: none requested
config:
        NAME
2010 Jan 12
3
set zfs:zfs_vdev_max_pending
We have a zpool made of 4 x 512 GB iSCSI LUNs located on a network appliance, and we are seeing poor read performance from the ZFS pool. The release of Solaris we are using is Solaris 10 10/09 s10s_u8wos_08a SPARC; the server itself is a T2000. I was wondering how we can tell whether the zfs_vdev_max_pending setting is impeding read performance of the pool? (The pool consists of lots of small files.)
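A minimal sketch of inspecting and tuning this tunable, where the value 10 is only an illustrative starting point: it can be read and changed live with mdb, or set persistently in /etc/system.

  # read the current value from the running kernel
  echo zfs_vdev_max_pending/D | mdb -k

  # change it on the fly (0t10 = decimal 10)
  echo zfs_vdev_max_pending/W0t10 | mdb -kw

  # or make it persistent across reboots, in /etc/system
  set zfs:zfs_vdev_max_pending = 10

Comparing the actv and svc_t columns of iostat -x before and after the change is one way to judge whether a shorter per-vdev queue helps the small-file reads.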
2009 Apr 15
5
StorageTek 2540 performance radically changed
Today I updated the firmware on my StorageTek 2540 to the latest recommended version and am seeing radically different performance when testing with iozone than I did in February of 2008. I am using Solaris 10 U5 with all the latest patches. This is the performance achieved (on a 32GB file) in February last year:

              KB  reclen  write  rewrite  read  reread
        33554432
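For reference, a minimal iozone invocation that produces a report in this shape; the 128 KB record size and the file path /pool/iozone.tmp are assumptions, not taken from the thread:

  # sequential write/rewrite (-i 0) and read/reread (-i 1) on a 32 GB file
  iozone -i 0 -i 1 -s 32g -r 128k -f /pool/iozone.tmp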
2017 Dec 20
2
glusterfs, ganesh, and pcs rules
Hi, I've just recreated the Gluster setup with NFS-Ganesha. GlusterFS version 3.8. When I run the command gluster nfs-ganesha enable it returns success. However, looking at the pcs status, I see this:

[root at tlxdmz-nfs1 ~]# pcs status
Cluster name: ganesha-nfs
Stack: corosync
Current DC: tlxdmz-nfs2 (version 1.1.16-12.el7_4.5-94ff4df) - partition with quorum
Last updated: Wed Dec 20
2017 Dec 21
0
glusterfs, ganesh, and pcs rules
Hi, In your ganesha-ha.conf do you have your virtual IP addresses set to something like this?

VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"

Renaud

From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On behalf of Hetz Ben Hamo
Sent: 20 December 2017 04:35
To: gluster-users at gluster.org
Subject: [Gluster-users]
2011 Oct 10
2
can't snapshot
Good morning Btrfs list, I am trying to create a subvolume of a directory tree (approximately 1.1 million subvolumes under nfs1). The following error is thrown, and without the wiki I don't know what argument is needed. I am running kernel 3.1.0-rc4.

[root@btrfs ~]# btrfs sub snapshot /btrfs/nfs1/ /btrfs/snaps/
Invalid arguments for subvolume snapshot
[root@btrfs ~]# btrfs sub list
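One common cause of that message with btrfs-progs of this era is a missing destination name: the command wants both the source subvolume and an explicit name for the new snapshot. A minimal sketch, where nfs1-snap is just an example name:

  # snapshot the nfs1 subvolume into the snaps directory under an explicit name
  btrfs subvolume snapshot /btrfs/nfs1 /btrfs/snaps/nfs1-snap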
2017 Dec 24
1
glusterfs, ganesh, and pcs rules
I checked, and I have it like this:

# Name of the HA cluster created.
# must be unique within the subnet
HA_NAME="ganesha-nfs"
#
# The gluster server from which to mount the shared data volume.
HA_VOL_SERVER="tlxdmz-nfs1"
#
# N.B. you may use short names or long names; you may not use IP addrs.
# Once you select one, stay with it as it will be mildly unpleasant to
# clean up
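For comparison, a ganesha-ha.conf normally also lists the cluster members and one VIP per node; a minimal sketch along the lines of the earlier reply, with the addresses taken from that reply and only illustrative here:

HA_CLUSTER_NODES="tlxdmz-nfs1,tlxdmz-nfs2"
VIP_tlxdmz-nfs1="192.168.22.33"
VIP_tlxdmz-nfs2="192.168.22.34"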
2000 Jun 20
2
Multiple Services on one Server
Newbie question! We currently are running a product called TAS from Syntax Corporation and would like to move to Samba. I have reviewed the documentation and cannot find how to set up multiple services on one server. I tried using the 'netbios name =' parameter and the include statement to bring in another smb.conf file, but I don't think I'm on the right track.
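If the goal is one Samba server answering under several server names, a minimal sketch uses 'netbios aliases' plus a per-name include; the alias and file names below are hypothetical:

  [global]
     netbios name = MAINSRV
     netbios aliases = TAS1 TAS2
     # %L expands to the name the client connected to, so each alias
     # can pull in its own share definitions
     include = /etc/samba/smb.conf.%L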
2014 Aug 01
2
Live blockcopy onto storage pool that is an NFS mount?
Hello, I am running qemu-kvm 1.4.0 and libvirt 1.0.2 on Ubuntu 12.04. I have two NFS mountpoints configured as two separate pools in virsh:

<pool type='dir'>
  <name>nfs1</name>
  <uuid>419d799c-2493-6ebc-6848-53b0919e7bad</uuid>
  <capacity unit='bytes'>6836057014272</capacity>
  <allocation unit='bytes'>0</allocation>
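The live copy itself would be driven by virsh blockcopy; a minimal sketch, where the domain name, disk target and destination path are all hypothetical, and noting that libvirt of this vintage generally requires a transient domain for blockcopy:

  # mirror disk vda onto the NFS-backed pool directory, then pivot to the copy
  virsh blockcopy guest1 vda /mnt/nfs1/guest1-copy.qcow2 --wait --verbose --pivot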
2009 Jan 13
12
OpenSolaris better than Solaris10u6 with regards to ARECA Raid Card
Under Solaris 10 U6, no matter how I configured my ARECA 1261ML RAID card I got errors on all drives resulting from SCSI timeouts.

yoda:~ # tail -f /var/adm/messages
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Requested Block: 239683776 Error Block: 239683776
Jan 9 11:03:47 yoda.asc.edu scsi: [ID 107833 kern.notice] Vendor: Seagate
2009 Jul 07
0
[perf-discuss] help diagnosing system hang
Interesting... I wonder what differs between your system and mine. With my dirt-simple stress test:

server1# zpool create X25E c1t15d0
server1# zfs set sharenfs=rw X25E
server1# chmod a+w /X25E
server2# cd /net/server1/X25E
server2# gtar zxf /var/tmp/emacs-22.3.tar.gz

and a fully patched X42420 running Solaris 10 U7, I still see these errors:

Jul 7 22:35:04 merope Error for Command:
2001 Oct 31
1
Xilinx ise4.1i par trouble
Hi, When I execute the following command line

wine --winver nt40 --dll shlwapi=b --managed -- /nfs2/bin/Xilinx/ise4.1i/bin/nt/par.exe -pl 5 -rl 5 -e 1 -t 1 -w /tmp/design.ncd design.ncd design.pcf

the following error message is returned:

err:seh:EXC_DefaultHandling Unhandled exception code c0000005 flags 0 addr 0x400a5648

but if I run the following command line wine --winver nt40 --dll
2010 Apr 19
2
warnquota email domain ?
Dear All, Sorry if this is the wrong place for this question, or if I'm just being daft, but I can't get warnquota to send emails to the right address. When I put a test user (gollum) over quota, and run warnquota on a server (nfs2.lmb.internal), the email generated by warnquota appears in the maillog as "to=<gollum at nfs2.lmb.internal>," What I need is the email to
2007 Jan 11
4
Help understanding some benchmark results
G'day, all, So, I've decided to migrate my home server from Linux+swRAID+LVM to Solaris+ZFS, because it seems to hold much better promise for data integrity, which is my primary concern. However, naturally, I decided to do some benchmarks in the process, and I don't understand why the results are what they are. I thought I had a reasonable understanding of ZFS, but now
2010 Jul 13
5
Re-exporting an NFS mount.. Possible?
I have an issue that is not all that unique, so I'm hoping someone has done it before. On the client end I have some very old RedHat based systems. On the server end is a Windows 2008 system running NFS server software. The clients mount the server resource as an NFS2 mount but some compliance issues were discovered with the setup. For various reasons, updating the client is not an option at
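If the intermediate box is a Linux machine, the re-export side is an ordinary exports entry; a minimal sketch, where the mount point and client subnet are hypothetical, the explicit fsid is needed because an NFS mount has no stable device number to derive one from, and the kernel NFS server's support for re-exporting NFS mounts is limited and version-dependent:

  # /etc/exports on the intermediate server
  /mnt/win2008-nfs    192.168.0.0/24(rw,sync,fsid=1,no_subtree_check)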
2009 Dec 16
27
zfs hanging during reads
Hi, I hope there's someone here who can possibly provide some assistance. I've had this read problem now for the past 2 months and just can't get to the bottom of it. I have a home snv_111b server, with a zfs raid pool (4 x Samsung 750GB SATA drives). The motherboard is an ASUS M2N68-CM (4 SATA ports) with an Athlon LE1620 single-core CPU and 4GB of RAM. I am using it
2007 May 14
37
Lots of overhead with ZFS - what am I doing wrong?
I was trying to simply test the bandwidth that Solaris/ZFS (Nevada b63) can deliver from a drive, and doing this: dd if=(raw disk) of=/dev/null gives me around 80MB/s, while dd if=(file on ZFS) of=/dev/null gives me only 35MB/s!? I am getting basically the same result whether it is a single ZFS drive, a mirror or a stripe (I am testing with two Seagate 7200.10 320G drives hanging off the same interface
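When comparing the two cases it helps to pin down the block size, since dd defaults to 512-byte reads, which penalizes the raw-device path and the filesystem path differently; a minimal sketch, with a hypothetical device path and file name:

  # raw device read, 1 MB blocks, ~1 GB total
  dd if=/dev/rdsk/c0t1d0s0 of=/dev/null bs=1024k count=1000

  # file on ZFS, same block size; use a file larger than RAM to avoid ARC cache hits
  dd if=/tank/bigfile of=/dev/null bs=1024k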