Displaying 20 results from an estimated 7000 matches similar to: "ZFS/NFS - SC 3.2 and AVS - HOWTO [SOLVED]"
2006 Nov 30
2
Rsync and DTrace
Hello all..
Using DTrace on Solaris 10, I investigated a performance issue with the synchronization of some files on a ZFS filesystem. I started the following rsync command (inside a gnome-terminal):
/opt/sfw/bin/rsync -av -e ssh user@IP:/DirA/DirB .
The current directory (.) was a ZFS pool with two SATA discs (mirror)...
The performance was terrible. After some tests with raid0,
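A minimal DTrace sketch of the kind of probe such an investigation could start from (the probe choice and process names are assumptions, not the poster's actual script):

  # Hypothetical starting point: quantize block-I/O sizes issued while
  # rsync/ssh run, to see whether the writes are small and synchronous.
  dtrace -n 'io:::start
      /execname == "rsync" || execname == "ssh"/
      { @sizes[execname] = quantize(args[0]->b_bcount); }'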
2007 Dec 17
1
HA-NFS AND HA-ZFS
We are currently running sun cluster 3.2 on solaris 10u3. We are using ufs/vxvm 4.1 as our shared file systems. However, I would like to migrate to HA-NFS on ZFS. Since there is no conversion process from UFS to ZFS other than copy, I would like to migrate on my own time. To do this I am planning to add a new zpool HAStoragePlus resource to my existing HA-NFS resource group. This way I can migrate
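A hedged sketch of the resource addition being planned; the group, pool, and resource names are assumptions, but SUNW.HAStoragePlus with the Zpools property is the SC 3.2 mechanism for failing a pool over:

  # Assumed names: nfs-rg (existing HA-NFS resource group), newpool (new ZFS pool).
  clresource create -g nfs-rg -t SUNW.HAStoragePlus \
      -p Zpools=newpool hasp-newpool-rs
  # The pool is then imported/exported by the cluster as nfs-rg fails over.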
2007 Sep 17
1
Strange behavior with ZFS and Solaris Cluster
Hi All,
Two and three-node clusters with SC3.2 and S10u3 (120011-14).
If a node is rebooted when using SCSI3-PGR, the node is not
able to take the zpool via HAStoragePlus due to a reservation conflict.
SCSI2-PGRE is okay.
Using the same SAN LUNs in a metaset (SVM) and HAStoragePlus
works okay with PGR and PGRE. (both SMI and EFI-labeled disks)
If using scshutdown and restart all nodes then it will
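For reference, SC 3.2 lets the fencing protocol be set per DID device, which is one hedged way to pin the affected LUNs back to SCSI-2 style fencing while debugging (the DID instance below is an assumption):

  cldevice set -p default_fencing=pathcount d10   # SCSI-2 style fencing for d10
  cldevice show d10 | grep default_fencing        # verify the setting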
2007 Dec 21
1
Odd behavior of NFS of ZFS versus UFS
I have a test cluster running HA-NFS that shares both UFS- and ZFS-based file systems. However, the behavior that I am seeing is a little perplexing.
The Setup: I have Sun Cluster 3.2 on a pair of SunBlade 1000s connecting to two T3B partner groups through a QLogic switch. All four bricks of the T3B are configured as RAID-5 with a hot spare. One brick from each pair is mirrored with VxVM
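One relevant difference between the two: in such a setup the ZFS and UFS file systems are typically shared through different paths (a hedged sketch; dataset and mount names are assumptions):

  zfs set sharenfs=rw tank/export      # ZFS: the share follows the dataset
  share -F nfs -o rw /ufs/export       # UFS: classic share(1M)/dfstab entry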
2008 Sep 11
4
ZFS Panicking System Cluster Crash effect
Issues with ZFS and Sun Cluster
If a cluster node crashes and the HAStoragePlus resource group containing a ZFS structure (i.e. a zpool) is transitioned to a surviving node, the zpool import can cause the surviving node to panic. The zpool was obviously not exported in a controlled fashion because of the hard crash. The storage structure is: an HW RAID-protected LUN from the array, with the zpool built on a single HW LUN. Zpool created
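The import in question is the forced one the failover path has to use, roughly (the pool name is an assumption):

  # '-f' overrides the "pool in use by another system" state left behind
  # by the crashed node; with damaged on-disk state, this import is
  # where the surviving node can panic.
  zpool import -f datapool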
2007 May 30
0
ZFS snapshots and NFS
Hello all,
Sorry if you think this question is stupid, but I need to ask..
Imagine a normal situation on an NFS server with "N" client nodes. The object of the shares is software (/usr, for instance), and the admin wants to make new versions of a few packages available.
So, wouldn't it be nice if the admin could associate an NFS share with a ZFS snapshot?
I mean, the admin has the option
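A hedged sketch of how such an association could be built today with snapshots and clones (the dataset names are assumptions):

  zfs snapshot tank/usr@pkg-v2            # freeze the current package set
  zfs clone tank/usr@pkg-v2 tank/usr-v2   # a browsable view of the snapshot
  zfs set sharenfs=ro tank/usr-v2         # export that view read-only
  # Clients can also reach snapshots directly under <share>/.zfs/snapshot/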
2006 Nov 08
1
timestamp vs vtimestamp
Hello all,
I'm looking for answers to a question that I posted on ZFS/UFS discuss... without success, so now I'm here to hear your views. Any tips?
As you know, I'm making some "screencasts" about a few Solaris features. Those screencasts are one part of many tests that I'm running with Solaris 10. Now, in some tests with DTrace, I have seen a
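For reference, the difference between the two DTrace variables: timestamp is wall-clock nanoseconds, while vtimestamp only advances while the thread is on-CPU. A minimal sketch contrasting them:

  dtrace -n '
  syscall::read:entry  { self->t = timestamp; self->vt = vtimestamp; }
  syscall::read:return /self->t/ {
      @wall["elapsed ns"] = avg(timestamp  - self->t);
      @cpu["on-cpu ns"]   = avg(vtimestamp - self->vt);
      self->t = 0; self->vt = 0;
  }'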
2006 Oct 17
10
ZFS, home and Linux
Hello,
I'm trying to implement a NAS server with Solaris/NFS and, of course, ZFS. But for that, we have a little problem... what about the /home filesystem? I mean, I have a lot of Linux clients, and the "/home" directory is on an NFS server (today, Linux). I want to use ZFS, and
change home "directories" like /home/leal into "filesystems" like
/home/leal
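A hedged sketch of the dataset-per-user layout being described (the pool and user names are assumptions):

  zfs create -o mountpoint=/home tank/home
  zfs create tank/home/leal            # automatically mounts at /home/leal
  zfs set sharenfs=rw tank/home/leal   # share, quota, snapshot per user
  zfs set quota=10G tank/home/leal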
2007 Nov 15
3
read/write NFS block size and ZFS
Hello all...
I'm migrating an NFS server from Linux to Solaris, and all clients (Linux) are using read/write block sizes of 8192. That was the best performance I got, and it's working pretty well (NFSv3). I want to use all of ZFS's advantages, and I know I may take a performance hit, so I want to know if there is a "recommendation" for block size on NFS/ZFS, or
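A hedged example of the client- and server-side knobs in question (the server name and sizes are assumptions, not a recommendation):

  # Linux client, NFSv3, trying larger transfer sizes than 8192:
  mount -t nfs -o vers=3,rsize=32768,wsize=32768 server:/tank/export /mnt
  # Solaris/ZFS server side: recordsize can be matched to the workload.
  zfs set recordsize=32k tank/export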
2008 Sep 19
1
rsync efficiency
Hello all,
I have a question that I think you rsync hackers have the answer to. ;-)
I have made this post on my blog:
http://www.posix.brte.com.br/blog/?p=312
to start a series about the copy-on-write semantics of ZFS. In my test,
"vi" rewrote the whole file just to change 3 bytes, so the whole
file was reallocated.
What I want to know from you is about the techniques used by rsync
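Relevant here: by default rsync builds a whole temporary copy of a changed file and renames it into place; --inplace instead patches only the changed blocks of the existing file, which interacts differently with copy-on-write allocation:

  rsync -av --inplace /DirA/file /DirB/file   # update changed blocks in place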
2007 Jul 12
2
[AVS] Question concerning reverse synchronization of a zpool
Hi,
I'm struggling to get stable ZFS replication using Solaris 10 11/06
(with current patches) and AVS 4.0 for several weeks now. We tried it on
VMware first and ended up in kernel panics en masse (yes, we read Jim
Dunham's blog articles :-). Now we try on the real thing, two X4500
servers. Well, I have no trouble replicating our kernel panics there,
too ... but I think I
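For context, the reverse synchronization being discussed is roughly this SNDR sequence (a hedged sketch; the pool name is an assumption, the set selection is omitted, and Jim Dunham's articles cover the full procedure):

  zpool export datapool   # primary: quiesce the pool before reversing
  sndradm -n -u -r        # pull the changed blocks back from the secondary
  zpool import datapool   # primary: re-import once the sync completes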
2008 Mar 13
3
Round-robin NFS protocol with ZFS
Hello all,
I was wondering if such a scenario could be possible:
1 - Export/import a ZFS filesystem on two Solaris servers.
2 - Export that filesystem (NFS).
3 - Mount that filesystem on clients at two different mount points (just to authenticate to both servers/UDP).
4a - Use some kind of "man-in-the-middle" to auto-balance the connections (the same IP on both servers)
or
4b - Use different
2011 May 13
0
sun (oracle) 7110 zfs low performance with high latency and high disc util.
Hello!
Our company has 2 Sun 7110s with the following configuration:
Primary:
7110 with 2 QC 1.9GHz HE Opterons and 32GB RAM
16 2.5" 10Krpm SAS discs (2 system, 1 spare)
a pool is configured from the rest, so we have 13 active working discs in raidz-2 (called main)
there is a Sun J4200 JBOD connected to this device with 12x750GB discs;
with 1 spare and 11 active discs there is another pool
2012 Sep 06
1
Is rsync -avS same as rsync -av --sparse
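For reference: -S is the documented short form of --sparse, so the two command lines are equivalent:

  rsync -avS src/ dst/            # same as the line below
  rsync -av --sparse src/ dst/    # -S == --sparse; -a == -rlptgoD, -v == --verbose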
2008 May 28
0
ZFS locking up! Bug in ZFS, device drivers or time for mobo RMA?
Greetings all.
I am facing serious problems running ZFS on a storage server assembled out of commodity hardware that is supposed to be Solaris compatible.
Although I am quite familiar with Linux distros and other unices, I am new to Solaris so any suggestions are highly appreciated.
First I tried SXDE 1/08 creating the following pool:
-bash-3.2# zpool status -v tank
pool: tank
state:
2017 Apr 03
0
battery not installed, but battery still 100%, and NUT 2.7.2-4 does not catch this and report an error
On Monday 03 April 2017 11:41:56 Jon Bendtsen wrote:
> On 03/04/17 17.24, Roger Price wrote:
> > On Mon, 3 Apr 2017, Jon Bendtsen wrote:
> >> On 03/04/17 17.10, Roger Price wrote:
> >>> On Mon, 3 Apr 2017, Jon Bendtsen wrote:
> >>>> Power seems to be lost immediately.
> >>>> But my APC Smart-UPS 1500 always reported everything OK.
>
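A hedged way to check what NUT itself believes about the battery (the UPS name 'smart1500' is an assumption):

  upsc smart1500@localhost ups.status       # e.g. OL, OB, LB
  upsc smart1500@localhost battery.charge   # reported charge percentage
  upsc smart1500@localhost battery.date     # battery date, if the UPS exposes it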
2009 Oct 27
0
Test environment question
I want to test out my first V1.2 Dovecot (upgraded from V1.1) instance.
What I have in mind is to run it on another machine that has the
inbox dir and home dirs NFS-mounted from the production
mailserver. I then have 5 people test it in this test environment....
A) Then I can deal with the index filesystem in one of two ways:
1) Make it local OR
2) NFS-mount it from the
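A hedged sketch of option A-1 in dovecot.conf, keeping indexes on local disk while mail stays on NFS (the paths and maildir layout are assumptions):

  mail_location = maildir:~/Maildir:INDEX=/var/dovecot/indexes/%u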
2009 Apr 08
2
ZFS data loss
Hi,
I have lost a ZFS volume and I am hoping to get some help to recover the
information (a couple of months' worth of work :( ).
I have been using ZFS for more than 6 months on this project. Yesterday
I ran a "zvol status" command, the system froze and rebooted. When it
came back the discs were not available.
See below the output of "zpool status", "format"
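Hedged first steps for this kind of recovery (the pool name 'tank' is an assumption):

  zpool import          # scan attached devices for importable pools
  zpool import -f tank  # force the import if the pool is listed
  zpool import -D       # also look for pools that were marked destroyed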
2010 May 12
6
ZFS High Availability
I'm looking for input on building an HA configuration for ZFS. I've read the FAQ and understand that the standard approach is to have a standby system with access to a shared pool that is imported during a failover.
The problem is that we use ZFS for a specialized purpose that results in tens of thousands of filesystems (mostly snapshots and clones). All versions of
2009 Mar 28
3
zfs scheduled replication script?
I have a backup system using zfs send/receive (I know there are pros
and cons to that, but it's suitable for what I need).
What I have now is a script which runs daily, does a zfs send, compresses
it and writes it to a file, then transfers it with ftp to a remote host. It
does a full backup every 1st, and incrementals (with the 1st as reference)
after that. It works, but it's not quite resource-efficient
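A minimal sketch of a more resource-efficient variant that streams the incremental directly instead of staging a compressed dump for ftp (the pool, host, and snapshot names are assumptions):

  #!/bin/sh
  # Assumed: tank/data@base exists on both sides from the initial full send.
  TODAY=`date +%Y%m%d`
  zfs snapshot tank/data@$TODAY
  zfs send -i tank/data@base tank/data@$TODAY | \
      ssh backuphost zfs receive -F backup/data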